
TWO NETWORK INSTALLATIONS: ‘1133’ & ‘COMPUTER VOICES’

Vincent Akkermans
Utrecht School of the Arts
Utrecht School of Music and Technology

Than van Nispen tot Pannerden
Utrecht School of the Arts
Utrecht School of Music and Technology

ABSTRACT

In this paper two network installations are presented, ‘1133’ and ‘COMPUTER VOICES’, both of which were developed at the Utrecht School of Music and Technology. 1133 is an installation of 15 networked Apple iMacs forming an ensemble that, by using genetic algorithms, generates and plays minimal music. In COMPUTER VOICES an attempt is made to bring these networked computers to life by giving each computer its own voice.

1. INTRODUCTION

Computers are present in our everyday lives, and it is hard to find one that is not connected to the internet or a local network. What new perspectives and opportunities does this provide for computer music and art installations? The two installations presented in this paper are studies into these possibilities.

Computers are connected to share data, and this data has meaning because it represents objects and processes. When computers are connected in a network, a virtual space comes into existence: data and processes can be regarded as existing between computers in this virtual space rather than on a single machine. For this to happen, a high-level abstraction of the network layer is necessary; with such an abstraction it no longer matters where data is stored or where processes run. This virtual space is a world of its own and can be accessed and represented at different physical locations through computers. Within it, new meaning can arise, and from this perspective it becomes possible to develop new concepts and ideas.

As part of the third-year curriculum of Music and Technology at the Utrecht School of Music and Technology, Than van Nispen tot Pannerden developed the installation COMPUTER VOICES, and Vincent Akkermans, Bertus van Dalen and Siebe Domeijer developed the installation 1133.

2. 1133

2.1. Virtual Ensemble

1133 is an art installation of 15 networked Apple iMacs.
Together the computers make up a virtual ensemble of musicians that play minimal music. The installation was inspired by Terry Riley’s piece ‘In C’ [7]. The score of In C consists of 53 short motives and fits on a single page of sheet music. This one page and a handful of rules result in a performance that can last anywhere from 2 to 24 hours. The use of a small amount of musical material and the gradual development of this material is characteristic of minimal music. 1133 uses just one single motif and develops it gradually through the use of genetic algorithms.

1133’s virtual ensemble is divided into a group of 14 or more musician computers and a single conductor computer. Each musician has its own sound and pitch range. The conductor’s tasks are to indicate the beat and to give the musicians directions.

Figure 1. 1133 in action

Before the ensemble starts playing, the motif, the scale and the tempo can be set on the conductor computer. The conductor distributes this information to all the musicians. As the conductor signals the ensemble to begin playing, all musicians start with the motif that was set. Each musician repeats this motif a random number of times. During these repetitions every musician creates a new motif to play once the repetitions are done. The musicians each get separate directions from the conductor on the type of motif, so every musician will create a different motif, which will in turn be repeated a number of times. This cycle repeats until the end of the performance. As the musicians play increasingly different motives, interlocking patterns emerge.

All sound is generated using noise and different kinds of filtering techniques and has an idiophonic quality. Each note that is played is accompanied by a short flash of greenish, bluish light.
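The conductor–musician cycle described above — play a motif, repeat it a random number of times, then switch to a freshly generated one — can be sketched as follows. This is a hypothetical Python sketch for illustration only; the installation itself is implemented as Max/MSP patches, and all names and the repetition range here are invented.

```python
import random

# Hypothetical sketch of a single musician's behaviour in 1133:
# repeat the current motif a random number of times, then switch
# to a newly generated motif and start over.
def musician_loop(initial_motif, generate_new_motif, play, cycles=3):
    motif = initial_motif
    for _ in range(cycles):
        repetitions = random.randint(2, 6)   # invented range
        for _ in range(repetitions):
            play(motif)
        motif = generate_new_motif(motif)    # e.g. by genetic mutation
    return motif
```

In the real installation the new motif comes from the genetic algorithm described in section 2.2, steered by the directions each musician receives from the conductor.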
These light flashes help the observer to identify which musician is playing which notes and greatly enhance the observer’s experience.

2.2. Genetic Algorithms

The cognitive process of creating a new motif can be viewed as a process of genetic mutation [6]. Pitches, durations, volumes and the number of notes increase and decrease randomly within a limited range. Notes can be combined, split up, removed, added or transformed into rests. To guarantee the gradual development of the music, only small mutations are possible.

The directions the conductor gives are combined, weighted fitness tests. A small number of tests have been programmed, each testing for a specific musical characteristic, for example ambitus or the percentage of notes on the beat. By combining these specific fitness tests it is possible to dynamically create more complete and complex tests. The mutations with the highest scores according to these fitness tests are chosen as the successors of the current motives.

3. COMPUTER VOICES

3.1. Concept

COMPUTER VOICES is a composition installation for 15 networked Apple iMacs. The installation is an attempt to make computers in a networked environment come to life by giving each computer its own voice and letting them sing and talk. Because the human voice is one of the sounds with which we identify ourselves most and which we associate with life, the choice was made to use voice synthesis techniques. To make the installation more interactive, the possibility of giving directions to the composition was added.

The concept consists of two elements. The first is the singing of the computers; the second is the user’s interaction with these singing computers, which triggers the speech synthesis part.

3.1.1. Speech

The interaction between the user and the computer voices occurs through a chat window. A chat window was chosen as the means of interaction because it is a familiar form of direct communication between people over a network.
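As a rough illustration of this kind of networked chat, the sketch below broadcasts a typed line to every machine on the local subnet over UDP. This is a hypothetical Python sketch only: the installation itself is built in Max/MSP on top of the [net.udp.send] and [net.udp.recv] objects, and the port number and plain-text message format here are invented.

```python
import socket

# Hypothetical sketch: broadcast one chat line as a UDP datagram to
# every machine on the local subnet. Port and format are invented.
CHAT_PORT = 9000

def broadcast_chat(text, addr="<broadcast>", port=CHAT_PORT):
    """Send a typed chat line to all listening machines."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        sock.sendto(text.encode("utf-8"), (addr, port))
    finally:
        sock.close()
```

Each machine would run a matching receiver that displays the incoming line in its chat window and hands it to the speech synthesiser.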
The chat window displays the message ‘Hello?’ and tempts the user to reply. When the user replies, the speech synthesis part of the composition starts. All computers show the user’s response in their chat window and pronounce the text. When a certain amount of text has been typed, a second layer of speech is started: on each computer an internet browser is opened which loads a podcast of people speaking. The browser window is closed after a random amount of time.

3.1.2. Singing

In COMPUTER VOICES every computer gets its own male or female voice. For the singing voices, FOF synthesis techniques were used [3]. Male voices have a lower range and different resonance frequencies than female voices. The synthesised voices can have deviations in their ranges and resonances, thus creating basses, tenors, altos and sopranos. As in 1133, there is one conductor computer and multiple performer computers. The conductor decides the overall tonality or mode, the harmony or fundamental pitch, and the tempo, whereas the vocalists, the performers, decide which notes they sing.

3.2. Notes

3.2.1. Mode Progressions

Every mode has a certain quality as a result of the notes it consists of. For example, “0 2 3 5 7 8 10” makes the Aeolian mode. A total of ten modes exist and during the performance the installation can switch between them. Every mode has a certain probability of changing to the next mode. The first mode, for example, has the following line in the probability table: “1 1 1 1 1 1 1 2”. This means there is a 7 to 1 chance of staying in the first mode and a 1 to 7 chance of changing to the next mode, in this case mode 2. The conductor ultimately decides what mode to change to. The last mode is a so-called ‘Ligeti mode’ [7] in which all notes are possible and all vocalists sing slow glissandi, creating cluster-like sounds. The soloists in this mode can sing chromatic scales.
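Read this way, each row of the probability table is simply a list of candidate next modes that is drawn from uniformly. A minimal sketch (hypothetical Python; the installation itself implements this logic in Max/MSP, and only the first table row from the text is shown):

```python
import random

# Row for mode 1 of the progression table, taken from the text:
# seven entries keep mode 1, one entry moves on to mode 2,
# giving the 7-to-1 odds of staying described above.
PROGRESSION_TABLE = {
    1: [1, 1, 1, 1, 1, 1, 1, 2],
}

def next_mode(current_mode, rng=random):
    """Draw the next mode uniformly from the current mode's table row."""
    return rng.choice(PROGRESSION_TABLE[current_mode])
```

In the installation the conductor ultimately makes this decision and distributes the chosen mode to all vocalists.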
3.2.2. Harmonic Progressions

Just like the mode progressions, harmonic progressions within the modes are controlled by probability tables. The first mode, for example, has the following line in the harmonic progression probability table: “0 5 5 5 5”. This means there is a 4 to 1 chance of choosing a new fundamental pitch that is 5 semitones higher, or equivalently 7 semitones lower, than the current fundamental pitch. Over the course of the piece the harmonic progressions move from harmonically fit intervals to harmonically less fit ones, up to “6”, a tritone. The harmonic progressions end in the chromatic scale of the Ligeti mode. After this last mode the progression returns to the more harmonically fit modes.

3.2.3. Vocalists

For harmonic reasons the vocalists are not allowed to sing every note of the current mode. In the first, Aeolian mode, for example, vocalists are only allowed to sing the notes “0 3 7 8”, as shown in figure 2. Soloists are allowed to sing more notes than normal vocalists. The possible vocalist and soloist notes are stored in tables as intervals relative to the fundamental pitch. Each vocalist decides which note to sing by randomly choosing the next or previous note value from these tables. Upon a mode or harmonic change, vocalists keep singing their current note in the new context if possible, creating more fluent melodies.

Figure 2. Notes the vocalists can sing in the Aeolian mode

4. IMPLEMENTATION

4.1. Communication System

Both COMPUTER VOICES and 1133 are implemented in the Max/MSP programming language [4] and share the same communication system. The communication system is a layer on top of the [net.udp.send] and [net.udp.recv] objects and uses UDP broadcasting. Computers can be addressed separately or in groups. The syntax is modelled after object-oriented function calls:

nameComputer . command arg1 arg2 arg3 etc.
comp1 . setRepetitions 7
comp7 comp9 group1 . setVolume 0.9

4.2. Distribution

While developing both installations we found that running tests took too much time. Downloading, unpacking, installing and starting the newest version of the software required too much effort, and the computer lab was not always available. This made mistakes costly. Two solutions were developed to minimise the time spent on testing.

First, all settings are stored in a start-up script that can be loaded on the conductor computer. It initialises all musician computers, which saves the trouble of going from computer to computer to set them up by hand.

Second, a small patch was created that is installed by default on every computer. This patch receives unix shell commands over the network and executes them through the [aka.shell] external [2]. By running the osascript program it is possible to execute AppleScript commands. AppleScript makes it possible to easily download and start the newest version of the software, turn up the volume of the iMacs and shut them down at the end of the day.

4.3. 1133

4.3.1. General Design

1133’s design is based on a master–client approach. The conductor computer runs the master application and the musician computers run the client application. The master application provides timing and a graphical user interface for setting the motif, tempo and fitness tests. The client application consists of a communication module, a sequencer module, a genetic algorithm module and a video and audio synthesis module. The sequencer module plays back and repeats the current motif. When the repetitions are done, the sequencer asks the genetic algorithm external for a new motif.

4.3.2. Genetic Algorithm External

The genetic algorithm is implemented in a Java external for Max/MSP. This external has methods to mutate and test the currently playing motif. The highest scoring mutation is saved and sent to the sequencer when the sequencer asks for it. At that moment the external starts mutating a copy of this new motif. The mutating and testing of motives is done in a separate thread and uses all processing power of a single processing core.

To change the direction of the development of the piece it is possible to dynamically alter the fitness test. A diverse collection of small fitness tests has been programmed; each fitness test evaluates a motif for a particular musical characteristic, see figure 3. By making weighted combinations of these small fitness tests it is possible to create more complex tests. We found that specifying the weights is a difficult process that requires a lot of tweaking. Also, it is possible to combine conflicting tests, which results in unpredictable behaviour. The algorithm regularly gets stuck in local maxima, but this did not turn out to be a problem artistically. An example specification of a weighted combined fitness test would be:

group1 . motifLength 2.0 4 totalLength 1.0 10 accents 1.0 5 onBeat 1.0 5 0.5 ambitus 0.6 4 rests 0.7 0 averageNotenumber 1.3 13 averageInterval 0.8 3 averageVolume 1.0 100

Figure 3. Example of a small fitness test

4.4. COMPUTER VOICES

4.4.1. Voice Synthesis

We used the speakable items of Mac OS X for speech synthesis, accessed through the [aka.speech] external [2]. For Dutch speech synthesis, software by the Acapela Group was used [1].

Two slightly different approaches to FOF synthesis were developed, with a slight difference in overall sound, resulting in a more diverse sounding ‘choir’. Both approaches were built according to the model of FOF synthesis shown in figure 4.

Figure 4. FOF synthesis

Figure 5. Voice synthesis, sound source of approach 1: a pulse-train or saw-like sound source (rich in harmonics) with vibrato frequency modulation and subtle noise modulation of frequency, resonance and vibrato, followed by a [lores~] filter and a vowel filterbank

There are some important differences between the two FOF techniques that were used. The first approach uses only standard Max/MSP objects, whereas the second approach uses externals developed by CNMAT [5]. The first approach uses filtered noise ([lores~]) and [phasor~] objects modulated by different, independent vibratos, as shown in figure 5. The vowel filterbank consists of a [fffb~] object with subtle amplitude modulation on every output of the [fffb~] object. The second approach uses the [harmonics~] object set to a pulse-train waveform combined with a small amount of noise, and uses [resonators~] objects as a filterbank. Both approaches use the IRCAM vowel filter characteristics [5] as the filterbank settings. The first approach uses these characteristics as ratios to a fundamental frequency and also modulates the frequency values.

5. CONCLUSIONS

Current technology provides us with the means to easily create network installations like the two presented in this paper. Both virtual ensembles generated interesting and aesthetically pleasing music and provided an entertaining user experience. Developing the installations and viewing the computer lab from a different perspective gave us new ideas and insights. We are curious to see what other artistic uses of networked computers may be possible in the future.

6. REFERENCES

[1] Acapela Group, Infovox iVox demo.
[2] Akamatsu, M., Max objects, www.iamas.ac.jp/~aka/max/
[3] Bélanger, O., Traube, C. and Piché, J. “Designing and Controlling a Source-filter Model for Naturalistic and Expressive Singing Voice Synthesis”, Proceedings of the International Computer Music Conference, 2007.
[4] Cycling ’74, The Max/MSP 4.6 programming language.
[5] Freed, A. and Zbyszynski, M., CNMAT resonators~ and harmonics~ externals, and “Singing-voice Subtractive ‘Source-Filter’ Synthesis”, 2006.
[6] Gartland-Jones, A. and Copley, P. “The Suitability of Genetic Algorithms for Musical Composition”, Contemporary Music Review, 2003.
[7] Ruiter, W. de, Compositietechnieken in de twintigste eeuw, De Toorts, Haarlem, 1993.