2022, Zenodo (CERN European Organization for Nuclear Research)
The development of computer-assisted composition (CAC) systems is a research activity that dates back at least to IRCAM's work on OpenMusic [1]. CAC is concerned with developing systems capable of partially or completely automating the process of music composition. There are several compositional tasks a system can address (e.g. rhythm generation, harmonization, melody generation). These tasks can be realized with machine learning (ML) algorithms, with or without conditioning on prior musical sequences. Many ML-based CAC systems have emerged from both academia and industry over the years [2, 3]. In most of them, the user continuously generates music by tweaking a set of parameters that influence the model's generation.

Building on top of Apollo, an interactive web environment that makes corpus-based music algorithms available for training and generation via a convenient graphical interface [4], Calliope specializes in advanced MIDI manipulation in the browser and generative controllability of the Multi-Track Music Machine (MMM) model [5] for batch generation of partial or complete multi-track compositions. The aim is to enable composers to effectively co-create with a generative system. Calliope is built with Node.js, the Web stack (HTML, CSS, JavaScript) and MongoDB, and is made interoperable with the pretrained MMM model via the Python runtime. MMM offers both global-level deep learning parameters (e.g. temperature) and track-level music-based constraint parameters: note density, polyphony range and note duration range. Bar selection can be used to refine the generation request. It is also possible to delete or add MIDI tracks in an existing MIDI file in order to generate on a subset of the tracks or to generate a new track for a given composition. The composer uses all of these controls to steer the generative behavior of the model and guide the composition process.

Batch generation of musical outputs is implemented via MMM's Python interface, which supports batching natively. This means the composer can rapidly explore alternatives for a given set of control parameters, including generating from a previously generated output. We have tested batch requests from 5 up to 1,000 generated music excerpts at a time.
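As a concrete illustration, the sketch below shows what such a batch generation request could look like from the Python side. This is a hypothetical mock-up: the generate_batch function and the request fields are illustrative stand-ins, not the actual MMM API.

```python
# Hypothetical sketch of a Calliope-style batch generation request to the
# MMM Python interface. All names (generate_batch, the request fields) are
# illustrative stand-ins, not the actual MMM API.

request = {
    "midi_path": "composition.mid",          # existing multi-track MIDI file
    "tracks": [
        {
            "track_id": 0,
            "selected_bars": [4, 5, 6, 7],       # restrict generation to these bars
            "note_density": 7,                   # track-level constraint
            "polyphony_range": (1, 4),           # min/max simultaneous notes
            "note_duration_range": (0.25, 2.0),  # note lengths in beats
        },
    ],
    "temperature": 0.9,   # global-level sampling parameter
    "batch_size": 5,      # MMM's Python interface supports batching natively
}

def generate_batch(request):
    """Placeholder for the model call; returns a list of generated
    MIDI excerpts, one per batch slot."""
    raise NotImplementedError

# candidates = generate_batch(request)
# A chosen candidate can itself become the next source file, letting the
# composer iterate rapidly under a fixed set of control parameters.
```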
International Journal of Innovative Technology and Exploring Engineering, 2020
Humans have been entertained by music for millennia. For ages it has been treated as an art form that requires a great deal of imagination, creativity, and accumulated feelings and emotions. Recent trends in the field of Artificial Intelligence have been gaining traction, and researchers have been generating rudimentary forms of music through the use of AI. Our goal is to generate novel music that is non-repetitive and enjoyable, and we utilize a pair of Machine Learning models to this end. Given a seed bar of music, our first network, a discriminatory network consisting of Support Vector Machines and neural nets, chooses a note/chord to direct the next bar. Based on this chord or note, another network, a generative net consisting of Generative Pre-trained Transformers (GPT-2) and LSTMs, generates the entire bar of music. Our twofold method is novel, and our aim is to make the generation process resemble real music composition as closely as possible. This in turn resul…
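A minimal sketch of this two-stage pipeline, assuming placeholder model objects (discriminator, generator) and a hypothetical candidate_chords helper, might look as follows; it is not the authors' implementation.

```python
# Illustrative sketch of the twofold pipeline described above; the model
# objects and the helper are placeholders, not the authors' code.

def candidate_chords(seed_bar):
    """Hypothetical helper enumerating chords/notes that could follow."""
    return ["C", "F", "G", "Am"]

def choose_direction(seed_bar, discriminator):
    # Stage 1: the discriminatory network (SVMs + neural nets) scores each
    # candidate and picks the chord/note that directs the next bar.
    candidates = candidate_chords(seed_bar)
    scores = [discriminator.score(seed_bar, c) for c in candidates]
    return candidates[scores.index(max(scores))]

def generate_next_bar(seed_bar, chord, generator):
    # Stage 2: the generative net (GPT-2 + LSTMs) fills in the full bar,
    # conditioned on the chosen chord or note.
    return generator.sample(context=seed_bar, condition=chord)

# Alternating the two stages bar by bar grows a piece from the seed,
# mirroring how a composer sketches harmony first, then the detailed line.
```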
International Journal on Artificial Intelligence Tools, 2005
Nobody would deny that music can evoke deep and profound emotions. In this paper, we present a perceptual music composition system that aims at the controlled manipulation of a user's emotional state. In contrast to traditional composing techniques, the individual components of a composition, such as melody, harmony, rhythm and instrumentation, are selected and combined in a user-specific manner, without requiring the user to continuously provide feedback on the music through input devices such as a keyboard or mouse.
IEEE Access
Music generation using deep learning has received considerable attention in recent years. Researchers have developed various generative models capable of imitating musical conventions, comprehending musical corpora, and generating new samples based on the learning outcome. Although the samples generated by these models are persuasive, they often lack musical structure and creativity. For instance, a vanilla end-to-end approach, which deals with all levels of music representation at once, does not offer human-level control and interaction during the learning process, leading to constrained results. Indeed, music creation is a recurrent process in which a musician follows certain principles and various musical features are reused or adapted. Moreover, a musical piece adheres to a musical style that breaks down into precise concepts of timbre style, performance style, composition style, and the coherency between these aspects. Here, we study and analyze current advances in music generation using deep learning models through different criteria. We discuss the shortcomings and limitations of these models regarding interactivity and adaptability. Finally, we outline potential future research directions involving multi-agent systems and reinforcement learning algorithms to alleviate these shortcomings and limitations.
Creativity and Cognition
Calliope is a web application for co-creative multi-track music composition in the symbolic domain. It is built to facilitate the use of the Multi-Track Music Machine (MMM) model. The user can upload Musical Instrument Digital Interface (MIDI) files, visualize …
Artificial Intelligence in Music, Sound, Art and Design
We present a novel music generation framework for music infilling with a user-friendly interface. Infilling refers to the task of generating musical sections given the surrounding multi-track music. The proposed transformer-based framework is extensible to new control tokens, such as the tonal tension per bar and track polyphony level tokens added in this work. We explore the effects of including several musically meaningful control tokens, and evaluate the results using objective metrics related to pitch and rhythm. Our results demonstrate that adding control tokens helps to generate music with stronger stylistic similarity to the original music. It also gives the user more control to change properties such as the music texture and tonal tension in each bar, compared to previous research, which only provided control over track density. We present the model in a Google Colab notebook to enable interactive generation.
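As a rough illustration of the encoding idea, the sketch below prepends bar-level control tokens to a bar's event tokens; the token names are simplified placeholders rather than the paper's exact vocabulary.

```python
# Sketch of bar-level control tokens prepended to each bar's event tokens
# in an infilling sequence. Token names are simplified placeholders.

def encode_bar(events, tension, polyphony):
    """Prefix a bar's events with its control tokens so the model can
    condition generation on tonal tension and track polyphony level."""
    return [f"TENSION_{tension}", f"POLYPHONY_{polyphony}",
            "BAR_START", *events, "BAR_END"]

bar = encode_bar(["NOTE_ON_60", "DUR_4", "NOTE_ON_64", "DUR_4"],
                 tension=2, polyphony=1)
# At generation time the user edits these tokens (e.g. raising TENSION_2
# to TENSION_4) to change the texture or tension of the infilled bar.
```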
arXiv (Cornell University), 2020
We propose the Multi-Track Music Machine (MMM), a generative system based on the Transformer architecture that is capable of generating multi-track music. In contrast to previous work, which represents musical material as a single time-ordered sequence where the musical events corresponding to different tracks are interleaved, we create a time-ordered sequence of musical events for each track and concatenate several tracks into a single sequence. This takes advantage of the Transformer's attention mechanism, which can adeptly handle long-term dependencies. We explore how various representations can offer the user a high degree of control at generation time, providing an interactive demo that accommodates track-level and bar-level inpainting, and offers control over track instrumentation and note density.
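The contrast between the two representations can be sketched as follows, with simplified placeholder tokens standing in for the paper's actual vocabulary.

```python
# Minimal sketch contrasting the two sequence layouts described above;
# token names are simplified placeholders.

piano = ["NOTE_ON_60", "DUR_4", "NOTE_ON_64", "DUR_4"]
bass = ["NOTE_ON_36", "DUR_8"]

# Prior work: a single time-ordered stream with events from all tracks
# interleaved, so each track's material is scattered through the sequence.
interleaved = ["T0_NOTE_ON_60", "T1_NOTE_ON_36", "T0_DUR_4", "T1_DUR_8",
               "T0_NOTE_ON_64", "T0_DUR_4"]

# MMM: each track stays a contiguous time-ordered sequence, and tracks are
# concatenated; attention can still relate events across tracks, and whole
# tracks or bars can be masked for inpainting.
mmm_sequence = (["TRACK_START", "INST_PIANO", *piano, "TRACK_END"]
                + ["TRACK_START", "INST_BASS", *bass, "TRACK_END"])
```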
2007
ACKNOWLEDGEMENTS I would like to thank the members of my committee for their support, patience and the many ways in which they have inspired me. I would also like to thank: Lance Putnam, for both the sharing of and support for his Synz audio library, and for synthesis programming in general; Wesley Smith, for inspiration and support, both creative and technical, regarding 3-D graphics, and for the OpenGL bindings to Lua; … for resources in the windowing/GUI implementation; Stephen Pope and Xavier Amatriain, for instruction in software synthesis and event scheduling; and my friends, family and loved ones for endless support, encouragement and patience. The rich new terrains offered by computer music invite the exploration of new techniques to compose within them. The computational nature of the medium has suggested algorithmic approaches to composition, in the form of generative musical structure at the note level and above, and audio signal processing at the l…
2003
This paper presents the design of a support system for musical composition based on simulated breeding. In our system, named SBEAT3, each individual in the population is a short musical section of sixteen beats comprising 23 parts: thirteen solos, two chords and eight percussion parts. The melody and rhythm are generated by a type of recursive algorithm from genetic information. The user listens to the sounds and, by selecting favorite pieces from among the scores displayed on the screen, decides which should be the parents that reproduce the offspring in the next generation. The genetic codes of the children are generated through mutation and crossover. By iterating this process, the user gradually obtains better pieces. By embedding some domain-specific functions, such as changing tempo and selecting tones, we can build a useful tool that makes it easier for a beginner to compose his or her favorite musical pieces.
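Read as an algorithm, this breeding loop is an interactive genetic algorithm. A toy sketch, with a simplified genome standing in for SBEAT3's recursive encoding and a random pick standing in for the user's selection, might look like this.

```python
import random

# Toy sketch of the simulated-breeding loop; the genome and its decoding
# into melody/rhythm stand in for SBEAT3's recursive algorithm.

POP_SIZE = 9          # individuals shown on screen per generation
GENOME_LEN = 16       # one gene per beat, greatly simplified

def random_genome():
    return [random.randint(0, 11) for _ in range(GENOME_LEN)]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.05):
    return [random.randint(0, 11) if random.random() < rate else x for x in g]

def next_generation(parents):
    """Children are produced by crossover and mutation of the parents the
    user chose after auditioning each individual."""
    return [mutate(crossover(*random.sample(parents, 2)))
            for _ in range(POP_SIZE)]

population = [random_genome() for _ in range(POP_SIZE)]
chosen = random.sample(population, 2)   # stands in for the user's choice
population = next_generation(chosen)
```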
Journal of Creative Music Systems
The International Conference on AI Music Creativity (AIMC, https://aimusiccreativity.org/) is the merger of the International Workshop on Musical Metacreation (MUME, https://musicalmetacreation.org/) and the conference series on Computer Simulation of Music Creativity (CSMC, https://csmc2018.wordpress.com/). This special issue gathers selected papers from the first edition of the conference, along with paper versions of two of its keynotes. It contains six papers that apply novel approaches to the generation and classification of music. Covering several generative musical tasks, such as composition, rhythm generation and orchestration, as well as the machine listening tasks of tempo and genre recognition, these selected papers present state-of-the-art techniques in Music AI. The issue opens with keynote speaker Alice Eldridge's ode to computer musicking and Johan Sundberg's use of analysis-by-synthesis for musical applications.
2000
This paper describes Roboser (http://www.roboser.com), an autonomous interactive music composition system. The core of the system comprises two components: a program for simulating large-scale neural networks, and an algorithmic composition system. Both components operate in real time. Data from sources such as cameras, microphones and pressure sensors enter the simulated neural system, which is also used to actively control motor devices such as pan-tilt cameras and robots. The neural system relays data representing its current operational state to the algorithmic composition system, which in turn generates musical expressions of these neural states within an a priori stylistic framework. The result is a real-time system controlled by a brain-like structure that behaves and interacts within a given environment and expresses its internal states through music.
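The data flow can be pictured as a small pipeline from sensors to neural state to musical parameters. The sketch below is purely schematic; the component names and the arousal-to-tempo mapping are invented for illustration.

```python
# Schematic sketch of the Roboser data flow described above; all names
# and mappings are illustrative, not the actual system's components.

def read_sensors():
    """Camera, microphone and pressure data entering the neural simulation."""
    return {"brightness": 0.7, "sound_level": 0.3}

def neural_state(sensors):
    """Stand-in for the large-scale neural network simulation; summarizes
    its current operational state."""
    return {"arousal": 0.5 * sensors["brightness"] + 0.5 * sensors["sound_level"]}

def compose(state, style="ambient"):
    """Algorithmic composition step: express the neural state musically
    within an a priori stylistic framework."""
    tempo = int(60 + 80 * state["arousal"])   # higher arousal, faster music
    return {"style": style, "tempo_bpm": tempo}

print(compose(neural_state(read_sensors())))   # one tick of the real-time loop
```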