
Calliope: Generating Symbolic Multi-Track Music on the Web

2022, Zenodo (CERN European Organization for Nuclear Research)


Proceedings of the 19th Sound and Music Computing Conference, June 5-12th, 2022, Saint-Étienne (France)

Renaud Bougueng, Simon Fraser University, [email protected]
Jeff Ens, Simon Fraser University, [email protected]
Philippe Pasquier, Simon Fraser University, [email protected]

1. EXTENDED ABSTRACT

The development of computer-assisted composition (CAC) systems is a research activity that dates back to at least the work by IRCAM on OpenMusic [1]. CAC is the field concerned with developing systems capable of partially or completely automating the process of music composition. There exist several compositional tasks a system can address (e.g. rhythm generation, harmonization, melody generation). These tasks can be realized with machine learning (ML) algorithms, with or without conditioning on prior musical sequences. Many ML-based CAC systems have emerged from both academia and industry over the years [2, 3]. In the majority of them, the user continuously generates music by tweaking a set of parameters that influence the model's generation.

Building on top of Apollo, an interactive web environment that makes corpus-based music algorithms available for training and generation via a convenient graphical interface [4], Calliope specializes in advanced MIDI manipulation in the browser and in generative controllability of the Multi-Track Music Machine (MMM) model [5] for batch generation of partial or complete multi-track compositions. The aim is to enable composers to co-create effectively with a generative system. Calliope is built with Node.js, the Web stack (HTML, CSS, Javascript) and MongoDB, and is made interoperable with the pretrained MMM model via the Python runtime. MMM offers both global-level deep learning parameters (e.g. temperature) and track-level, music-based constraint parameters: note density, polyphony range and note duration range.
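To make the split between global and track-level controls concrete, the sketch below models a generation request carrying both kinds of parameters. This is an illustration only: the class and field names (`TrackConstraints`, `GenerationRequest`, `note_density`, etc.) are hypothetical stand-ins mirroring the controls described above, not MMM's or Calliope's actual API.

```python
# Hypothetical sketch of Calliope-style generation controls.
# Names and value ranges are illustrative assumptions, not MMM's real interface.
from dataclasses import dataclass, field

@dataclass
class TrackConstraints:
    """Track-level, music-based constraints for one MIDI track."""
    note_density: int = 5                      # discrete density level for the track
    polyphony_range: tuple = (1, 4)            # min/max simultaneous notes
    note_duration_range: tuple = (0.25, 2.0)   # min/max note duration, in beats

    def validate(self):
        lo, hi = self.polyphony_range
        if lo < 1 or lo > hi:
            raise ValueError("polyphony_range must satisfy 1 <= min <= max")
        lo, hi = self.note_duration_range
        if lo <= 0 or lo > hi:
            raise ValueError("note_duration_range must satisfy 0 < min <= max")
        return self

@dataclass
class GenerationRequest:
    """Global deep-learning parameters plus per-track constraints."""
    temperature: float = 1.0                            # sampling temperature (global)
    tracks: dict = field(default_factory=dict)          # track index -> TrackConstraints
    selected_bars: list = field(default_factory=list)   # bars to (re)generate

    def validate(self):
        if self.temperature <= 0:
            raise ValueError("temperature must be positive")
        for constraints in self.tracks.values():
            constraints.validate()
        return self

# A request to regenerate bars 4-7 of track 1 under tighter constraints:
request = GenerationRequest(
    temperature=0.9,
    tracks={1: TrackConstraints(note_density=7, polyphony_range=(1, 2))},
    selected_bars=[4, 5, 6, 7],
).validate()
```

Keeping global sampling parameters separate from per-track musical constraints, as above, matches the paper's distinction between deep-learning controls and music-based controls.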
Bar selection can be used to refine the request for generation. It is also possible to delete or add MIDI tracks in an existing MIDI file, in order to generate on a subset of the tracks or to generate a new track for a given composition. The composer uses all these controls to steer the generative behavior of the model and guide the composition process.

Batch generation of musical outputs is implemented via MMM's Python interface, which offers batch support natively. This means the composer can rapidly explore alternatives for a given set of control parameters, including generating from a previously generated output. We have tested batch requests of 5 and up to 1000 generated music excerpts at a time, which take from 3 seconds to 10 minutes on an average computer, depending on the note density of the music input. This process drives an interactive loop of continuous music generation and playback listening through which the composer navigates the creative process.

[Figure 1. Calliope's Interface]

Copyright: © 2022 Renaud Bougueng et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

2. REFERENCES

[1] G. Assayag, C. Rueda, M. Laurson, C. Agon, and O. Delerue, "Computer-assisted composition at IRCAM: From PatchWork to OpenMusic," Computer Music Journal, vol. 23, no. 3, pp. 59–72, 1999.

[2] C. Anderson, A. Eigenfeldt, and P. Pasquier, "The Generative Electronic Dance Music Algorithmic System (GEDMAS)," in Proceedings of the Second International Workshop on Musical Metacreation (MUME 2013), 2013.

[3] A. Roberts, J. Engel, Y. Mann, J. Gillick, C. Kayacik, S. Nørly, M. Dinculescu, C. Radebaugh, C. Hawthorne, and D. Eck, "Magenta Studio: Augmenting creativity with deep learning in Ableton Live," 2019.

[4] R. B. Tchemeube, J. Ens, and P. Pasquier, "Apollo: An interactive environment for generating symbolic musical phrases using corpus-based style imitation," 2019.
[5] J. Ens and P. Pasquier, "MMM: Exploring conditional multi-track music generation with the Transformer," arXiv preprint arXiv:2008.06048, 2020.
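The interactive batch-generation loop described in the extended abstract can be sketched as follows. Everything here is an illustrative assumption: the function names (`generate_batch`, `explore`) and signatures are not MMM's actual Python interface, and the "outputs" are labelled placeholders rather than generated music.

```python
# Hypothetical sketch of a batch-generate-then-refine loop, assuming a
# stand-in `generate_batch` in place of a real call into the MMM runtime.
import random

def generate_batch(request, batch_size, seed=None):
    """Stand-in for a batched model call: returns `batch_size` candidate
    outputs, each tagged with the control parameters that produced it."""
    rng = random.Random(seed)
    return [{"id": i, "score": rng.random(), "params": request}
            for i in range(batch_size)]

def explore(request, batch_size=5, rounds=3, seed=0):
    """Interactive-loop sketch: generate a batch, keep a preferred output,
    then generate again from it (here 'preferred' is just the highest score;
    in practice the composer listens and chooses)."""
    best = None
    for r in range(rounds):
        batch = generate_batch(request, batch_size, seed=seed + r)
        best = max(batch, key=lambda out: out["score"])
    return best

chosen = explore({"temperature": 0.9}, batch_size=5, rounds=3)
```

The point of the structure is that selection feeds back into generation: each round starts from the previously chosen output, which is the chaining behavior ("generating from a previously generated output") that the paper describes.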