2009, New Interfaces for Musical Expression
Phonetic symbols describe movements of the vocal tract, tongue and lips, and are combined into complex movements forming the words of language. In music, vocables are words that describe musical sounds by relating vocal movements to articulations of a musical instrument. We posit that vocable words allow composers and listeners to engage closely with dimensions of timbre, and that vocables could see greater use in electronic music interfaces. A preliminary system for controlling percussive physical modelling synthesis with textual words is introduced, with particular application in the expressive specification of timbre during computer music performances.
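As a rough sketch of the idea, a vocable word might be parsed into consonant–vowel syllables whose letters index synthesis parameters. The mapping tables, parameter names and values below are hypothetical illustrations, not those of the described system:

```python
# Hypothetical mapping from vocable letters to percussive synthesis
# parameters: the consonant sets strike hardness, the vowel sets
# resonance brightness. Values are illustrative only.
CONSONANT_HARDNESS = {"b": 0.3, "d": 0.6, "t": 0.8, "k": 0.9}
VOWEL_BRIGHTNESS = {"a": 0.9, "o": 0.5, "u": 0.2, "i": 1.0}

def vocable_to_params(word):
    """Split a vocable word into (hardness, brightness) pairs,
    one pair per recognized consonant-vowel syllable."""
    params = []
    i = 0
    while i < len(word) - 1:
        c, v = word[i], word[i + 1]
        if c in CONSONANT_HARDNESS and v in VOWEL_BRIGHTNESS:
            params.append((CONSONANT_HARDNESS[c], VOWEL_BRIGHTNESS[v]))
            i += 2
        else:
            i += 1
    return params
```

A word like "baka" would then yield one parameter pair per syllable, letting a performer type timbral gestures as text.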
Workshop on current research directions in …, 2001
The Timbre Engine is a software synthesizer built on a set of controls and algorithms derived from a model of the additive synthesis parameters. It provides a real-time additive engine, a novel graphical user interface, the interfacing between them, and control by a MIDI keyboard. The system is built on an object-oriented architecture designed to provide the necessary functionality and interfacing across all application layers, from the user interface to the real-time sound synthesis engine. It is also intended to provide an API that enables real-time control of the synthesis by higher-level entities (e.g. gestures). The system is designed to be cross-platform. The synthesizer is based on the timbre model, which has proven useful for the analysis/synthesis of musical sounds, notably because it retains most of the perceptual cues closely related to timbre (spectral envelope, amplitude envelope and irregularity parameters). Furthermore, the parameters of the timbre model are believed to be intuitive and useful when creating new sounds. Finally, the real-time synthesis application is useful for exploring the range of musical sounds the timbre model can produce and the discrimination of the timbre model parameters.
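The core of any real-time additive engine is a sum of sinusoidal partials with per-partial amplitudes. A minimal sketch of that core follows; the function name, defaults and parameter layout are illustrative, not the Timbre Engine's actual API:

```python
import math

def additive_frame(freqs, amps, sr=44100, n=64):
    """Render n samples by summing sinusoidal partials with the given
    frequencies (Hz) and amplitudes -- the basic additive synthesis
    operation that engines like the one described perform in real time."""
    out = []
    for k in range(n):
        t = k / sr
        out.append(sum(a * math.sin(2 * math.pi * f * t)
                       for f, a in zip(freqs, amps)))
    return out
```

A production engine would add per-partial amplitude envelopes and phase continuity between frames; this sketch shows only the summation step.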
1999
This work involves the analysis of musical instrument sounds, the creation of timbre models, the estimation of the parameters of the timbre models and the analysis of the timbre model parameters.
This paper summarizes the authors' experiences in developing tools for generating and performing sounds with computers using concepts that strongly extend the traditional notion of "notes in a score". The software environment KLANGPILOT allows musical ideas to be described and formalized in a language that is both close enough to traditional concepts to support an efficient creative process and rich enough to allow the creation of (mainly synthetic) sounds with unusual, sophisticated spectral properties. Some links between high-level control of sound synthesis and computer-aided composition are also highlighted. Finally, a way of playing back sounds is described that provides a compromise between the safety of recorded tape and the temporal flexibility of real-time technology.
8th International Conference on Digital Audio …, 2005
The guitar is an instrument that gives the player great control over timbre. Different plucking techniques involve varying the finger position along the string, the inclination between the finger and the string, the inclination between the hand and the string, and the degree of relaxation of the plucking finger. Guitarists perceive subtle variations of these parameters and have developed a very rich vocabulary to describe the brightness, colour, shape and texture of the sounds they produce on their instrument. Dark, bright, chocolatey, transparent, muddy, wooly, glassy, buttery and metallic are just a few of those adjectives. The aim of this research is to conceive a computer tool that synthesizes vocal imitations and graphically represents the phonetic gestures underlying descriptions of classical guitar timbre, as a function of the instrumental gesture parameters (mainly the plucking angle and distance from the bridge) and based on perceptual analogies between guitar and speech sounds. Similarly to the traditional teaching of tabla, which uses onomatopoeia to designate the different strokes, vocal imitation of guitar timbres could provide a common language for guitar performers, complementary to the mental imagery they commonly use to communicate about timbre, in a pedagogical context for example.
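The dependence of brightness on plucking position can be illustrated with the textbook ideal-string result: a pluck at relative position p along the string excites the n-th harmonic with amplitude proportional to |sin(nπp)|/n. This is a standard simplification, not the paper's own model:

```python
import math

def pluck_spectrum(p, n_harmonics=8):
    """Relative harmonic amplitudes of an ideal string plucked at
    relative position p (0 < p < 1): |sin(n*pi*p)| / n.
    Plucking near the bridge (small p) weakens the rolloff and
    yields a brighter, more 'metallic' spectrum; plucking at the
    midpoint cancels all even harmonics."""
    return [abs(math.sin(n * math.pi * p)) / n
            for n in range(1, n_harmonics + 1)]
```

Comparing `pluck_spectrum(0.5)` with `pluck_spectrum(0.1)` shows how the same string supports the dark-to-bright continuum guitarists describe.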
2007
Laptop performance of computer music has become widespread in the electronic music community. It brings with it many issues pertaining to the communication of musical intent. Critics argue that performances of this nature fail to engage audiences, as many performers use the mouse and keyboard to control their musical works, leaving no visual cues to guide the audience as to the correlation between performance gestures and musical outcomes. Interfaces need to communicate something of their task. The author will argue that cognitive affordances associated with the performance interface become paramount if the musical outcomes are to be perceived as clearly tied to real-time performance gestures, i.e. that the audience is witnessing the creation of the music in that moment, as distinct from the manipulation of pre-recorded or pre-sequenced events.
Forum Medientechnik
The original version of the KLANGPILOT environment for control of sound synthesis was developed in LISP in 1992, with the purpose of generating Csound scores while allowing more sophisticated control of sound synthesis. It was a text-based, human-readable score language that described musical ideas in a way better suited to the creative process of composition. This language was then translated by the computer into detailed parameters for various sound synthesis methods. As faster computers became available, a new real-time version of KLANGPILOT was developed that aimed to provide a highly intuitive and interactive user interface for entering, editing and displaying sound, including timbral parameters. This article focuses on the new KLANGPILOT and its graphical paradigm of music representation within the Max/MSP environment, which sits somewhere between traditional music notation and a spectral representation of sound.
2017
Timbre is a musical attribute that has been widely discussed in the research community. However, there is still much to investigate, especially with regard to timbre and orchestration, which involves polyphonic timbre: a phenomenon that emerges from the mixture of instruments playing simultaneously. In this paper, we report on the development of a system capable of automatically analysing and classifying perceptual qualities of timbre within orchestral audio samples. This approach has been integrated into a computer-aided orchestration system for string ensemble. Our rationale for developing such a system is to create a means of incorporating musical timbre into the composition of music, which is often focused mainly on traditional Western music theory. Such developments could enrich creative music systems and aid composers in their métier.
2000
A real-time synthesis engine that models and predicts the timbre of acoustic instruments based on perceptual features is presented. The timbre characteristics and the mapping between control and timbre parameters are inferred from recorded musical data. In the synthesis step, timbre data is predicted based on new control data enabling applications such as synthesis and cross-synthesis of acoustic instruments and
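The abstract leaves the statistical model unspecified; a minimal stand-in for a mapping inferred from recorded data is nearest-neighbour lookup over (control, timbre) pairs. This is only an illustrative sketch, not the engine's actual prediction method:

```python
def predict_timbre(control, examples):
    """Predict a timbre description for a new control value by
    returning the timbre of the nearest recorded example.
    'examples' is a list of (control_value, timbre) pairs, a
    hypothetical stand-in for analysed musical recordings."""
    return min(examples, key=lambda e: abs(e[0] - control))[1]
```

Real systems would regress continuous timbre features (spectral envelope, brightness) from multidimensional control data, but the inference-from-recordings principle is the same.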
Corpus-based concatenative synthesis is based on descriptor analysis of any number of existing or live-recorded sounds, and synthesis by selection of sound segments from the database matching given sound characteristics. It is well described in the literature, but has rarely been examined for its capacity as a new interface for musical expression. The outcome of such an examination is that the actual instrument is the space of sound characteristics, through which the performer navigates with gestures captured by various input devices. We will take a look at different types of interaction modes and controllers (positional, inertial, audio) and the gestures they afford, and provide a critical assessment of their musical and expressive capabilities, based on several years of musical experience performing with the CataRT system for real-time CBCS.
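The selection step described above can be sketched as a nearest-neighbour search in descriptor space: given target descriptors (the point the performer's gesture specifies), return the corpus segment closest to it. The data layout below is an assumption for illustration, not CataRT's internal representation:

```python
def select_unit(target, corpus):
    """Return the corpus segment whose descriptor vector has the
    smallest squared Euclidean distance to the target descriptors --
    the selection step of corpus-based concatenative synthesis,
    in miniature. Each segment is a dict with a 'descriptors' list."""
    def dist(descriptors):
        return sum((a - b) ** 2 for a, b in zip(descriptors, target))
    return min(corpus, key=lambda seg: dist(seg["descriptors"]))
```

In performance, the target point moves continuously under gestural control, so this search runs per grain; spatial index structures (e.g. kd-trees) keep it real-time for large corpora.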
Proceedings of the 7th international conference on New interfaces for musical expression - NIME '07, 2007
This paper presents two main ideas: (1) Various newly invented liquid-based or underwater musical instruments are proposed that function like woodwind instruments but use water instead of air. These "woodwater" instruments expand the space of known instruments to include all three states of matter: solid (strings, percussion); liquid (the proposed instruments); and gas (brass and woodwinds). Instruments that use the fourth state of matter (plasma) are also proposed. (2) Although the current trend in musical interfaces has been to expand versatility and generality by separating the interface from the sound-producing medium, this paper identifies an opposite trend in musical interface design, inspired by instruments such as the harp, the acoustic or electric guitar, the tin whistle, and the Neanderthal flute, that have a directness of user interface, where the fingers of the musician are in direct physical contact with the sound-producing medium. The newly invented instruments are thus designed so that this sensually tempting intimacy is not lost behind layers of abstraction, while also allowing for a high degree of virtuosity. Examples presented include the poseidophone, an instrument made from an array of ripple tanks, each tuned to a particular note, and the hydraulophone, an instrument in which sound is produced by pressurized hydraulic fluid in direct physical contact with the fingers of the player. Instruments based on these primordial media tend to fall outside existing classifications and taxonomies of known musical instruments, which only consider instruments that make sound with solid or gaseous states of matter. To better understand and contextualize some of the new primordial user interfaces, a broader concept of musical instrument classification is proposed that considers the states of matter of both the user interface and the sound production medium.