
EAnalysis: developing a sound-based music analytical tool

Pierre Couprie

To cite this version:
Pierre Couprie. EAnalysis: developing a sound-based music analytical tool. In Leigh Landy and Simon Emmerson (eds.), Expanding the Horizon of Electroacoustic Music Analysis. Cambridge: Cambridge University Press, pp. 170-194, 2016. ISBN 9781107118324. hal-01290982.

HAL Id: hal-01290982
https://hal.archives-ouvertes.fr/hal-01290982
Submitted on 2 Jan 2017


Distributed under a Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 International License.
EAnalysis: Developing a Sound-Based Music Analytical Tool

Pierre Couprie
[email protected]
IReMus, Paris-Sorbonne University (Paris), MTIRC, De Montfort University (Leicester)

1. Introduction
Analysing electroacoustic music is always difficult. Most works have no visual support or score, and when the music does have a score, as in mixed music, the electronic part is usually written as a form of code, so understanding the relations between the signs and the sound is complex. This is why most musicians use graphic representation to analyse electroacoustic music, to create spatialisation scores, or to transmit knowledge to their students. Composers also use sketches to elaborate forms and structures or to memorise their works during the creative process.
Acousmatic music is not representative of current electroacoustic music. Many musicians use live electronics, improvisation, or other arts (video, sound sculpture, poetry, etc.) where technical means are an important part of the work, and recording these performances is very difficult. A stereophonic sound file alone cannot define the work. Many current electroacoustic works are allographic (Genette, 1997): they are defined by different recordings of different performances, multitrack recordings of different instruments and devices, video recordings, scores, data from different devices, and so on. Electroacoustic means and electronic instruments are hybrid and modular. Analysing an electroacoustic performance is a real challenge because you may need to use a range of software to segment sound material, compare various data in different formats, analyse interactions between musicians through video recordings, and create representations of structures and relations between parts or elements of the performance. Moreover, most software packages are not compatible with one another: there is no standard exchange format.
Enhancing analytical software is very important, but enhancing representation is also essential. To analyse various types of data, we need to create suitable representations: sound representations, line and form/structure charts, graphic representations of units or moments. These representations also need to integrate images or other representations of the performance, and even of the creative process itself. Representation in electroacoustic music analysis is not only a matter of beautiful shapes in various colours, each of them representing a sound. Representation can also include sonograms, curve charts of audio descriptors, representations of the interaction messages exchanged between musician and computer, tables with time cues, structure representations, space motions, or relations between image and sound in video music.
EAnalysis (available from http://eanalysis.pierrecouprie.fr) was created to fill the gaps that exist between various analysis software applications. EAnalysis cannot do everything musicologists, teachers, or musicians want; it is a workspace where the user can create representations, import data from other software or data recorded during performance, and analyse them. I did not reinvent the wheel: this piece of software offers the possibility to import data and to export analyses in different formats. It is based on another programme, iAnalyse, which was created for written music analysis. But EAnalysis is very different, because the main support of iAnalyse is the score whereas the main support of EAnalysis is the sound.
This chapter presents the development of EAnalysis from three angles. The first is representation and its role in music analysis. The second is the new concepts introduced by the software. The final angle presents the most important features of EAnalysis through a series of examples.

2. Analysis with graphic representation


2.1. Role of graphic representation in musical analysis
Musical analysis generates representations (Chouvel, 2011): representations of form, of structures, of relations between various elements, representations with or without a time dimension, and so on. Musicologists need representations to analyse or to present their analyses. Several theories of analysis are also based on representations, such as Schenkerian reduction, paradigmatic segmentation, or various representations of harmony. Representation is important for musical analysis because analysis is the study of a time-based art. Humans need to write time down to capture the ephemeral moment and study it. For example, a representation of structural segmentation in a formal diagram can reveal new points of view on musical structures. Analytical representations are always reductions of the musical process. They focus on one or several musical parameters to reveal internal or external relationships between them. In a pedagogical context, analytical representation can also reveal implicit relations or structural processes.
One of the particularities of electroacoustic music is that it has no, or only incomplete, visual support. Mixed music uses a score with various symbols or graphic shapes to represent the electronic part. These symbols can represent a preset number, simple text indications of sound transformation, or graphic shapes giving a reduction of the electroacoustic part. All of these have great importance for the musician and/or the technical assistant. Musicologists and musicians can also use them to analyse the work, to understand the musical ideas of the composer, or to reconstruct the creative process. Teachers can use them to understand the work and to prepare presentations for their students. But these texts, symbols, or graphic reductions are limited to what the composer wants to give you, to what he thinks is important for performing his work. These indications are important for analysing the work, but they are not in themselves an analysis.
Analysing a work means understanding complex relations between parts, moments, and units, and revealing something difficult to perceive through simple listening. The most important goal of analysis is to give you keys to understand the music. Students need these keys to understand how the composer works or to create their own music. Teachers need these keys to present a work to their students and to draw their ears to what they need to hear. Musicologists need these keys to develop their own theory of music, to create links between different works, or simply to understand aspects of a work. These keys cannot exist only in thought; you have to record them through text, simple graphics, or more elaborate graphic representations. Moreover, these records are also very important for memory. With them, you can memorise, anticipate, link moments of a work even if they are not close, and navigate inside one or several works.
Graphic or text representations are important for studying and understanding electroacoustic music (Couprie, 2009). But there is also another aspect, the transmission of knowledge: how a teacher can transmit to his students an analysis of electroacoustic music; how a researcher can transmit his analysis or his music theory; how a student can share experiences of listening to a work. For many different reasons, graphic/text representations are a good solution for sharing and publishing analyses. Interactive examples of electroacoustic music created with sonograms/waveforms and graphic/text representations communicate more effectively than a simple reference to an extract of an audio track.
2.2. How to create a graphical representation?
How to create a graphic/text representation depends on what you want to do with it (Couprie, 2006). If you need to guide your students through a work, it may be better to use iconic graphics. Links between musical properties and iconic graphics are easier to understand: listeners will not need any explanation or key to associate particular aspects of the music with graphic shapes. Figure 1 presents two types of representation: an iconic shape that represents the dynamics of the sound, and a symbolic shape that represents the sound type. I used the second in a representation of a work by Alain Savouret (Couprie, 2001). The colour of the shape represented the level of sound transformation and the form of the shape represented the sound type. I decided to use symbolic rather than iconic representations because the structure of the work is very formal, a theme and variations. Demonstrating how the composer used sound transformations to structure his work seemed easier with a symbolic representation.

Figure 1. Iconic versus symbolic graphic representation.


If you need to communicate a complex analysis with a number of different parameters, you need to associate iconic and symbolic representations. The iconic part allows the representation of significant moments or saliences of the musical flow. With the symbolic part (text or graphics), you can represent numerous sonic properties, structural layers, musical functions, or very detailed analyses of moments. This takes more time, but a key explaining the symbols is a good complement to the iconic part.
The symbolic part also allows the analyst to represent several points of view. Placing different interpretations of the structure, or different segmentations of the musical flow, side by side is a good way to transmit complex relationships or indeterminate aspects of analysis (Roy, 2003).
One last point I want to make concerns the aesthetic aspect of graphic representation. Do we need to be neutral, or can we allow the graphics an aesthetic quality that has no link with the music? Once again, this depends on what you want to do with your graphic representation. If the representation is only for analysis, or is only a part of your research process, you do not need to consider this question. But if you have to communicate your analysis to a range of different people, you may have to consider further the communicational aspect of your work. For a paper on Luc Ferrari, I produced graphic representations (Teruggi and Couprie, 2001) that are very close to artistic or pedagogical realisations. These representations were an experiment to extend the borders of analysis by representation.
3. EAnalysis: New concepts for analysis and graphic representation
3.1. Applications and limitations of current software
As I have discussed in several papers, creating a graphic representation of electroacoustic music is complex. Complex because analysis is complex: you have to determine a point of view, you have to learn the work in depth, you have to extract significant aspects and link them to others in the work or in other works. The process of analysing electroacoustic music is like discovering a new landscape without knowing the right way forward… and there is no right way. Very often, you have to change direction or to start again in a different direction. Knowing the final direction when you start the analysis is very rare.
Using software to analyse electroacoustic music is important because you need to learn about the properties of sounds, to validate your listening, or to support your listening when the musical flow is too complex. It may be useful to mask some sounds or to change the gain of others to understand the different layers of the music. Several software programmes are very useful for this. There are four categories:
1. Software to manipulate audio by filtering, changing gain, or changing pitch: Audiosculpt (developed by Ircam and available through the Ircam Forum: http://forumnet.ircam.fr) and SPEAR (free software developed by Michael Klingbeil: http://www.klingbeil.com/spear/) are perfect examples. Both are analysis/synthesis software. Audiosculpt was developed for composers to sculpt the sound. With SPEAR, you can extract partials and manipulate them individually. These programmes are complex to use but very important for musicologists who want to work on sound: they can extract parts of a complex spectrum and thus focus their analysis on specific sound properties.
2. Software to extract data from sound: Audiosculpt and Sonic Visualiser (developed by the Centre for Digital Music at Queen Mary University of London: http://www.sonicvisualiser.org) are good examples. Sonic Visualiser uses Vamp plug-ins (http://www.vamp-plugins.org) to extract audio descriptors such as spectral centroid, inharmonicity, energy, etc. These descriptors help researchers to isolate individual sound characteristics as clues for musical analysis.
3. Software to annotate or to create graphic representations: Sonic Visualiser, ASAnnotation (free software based on Audiosculpt and developed by Ircam: http://recherche.ircam.fr/anasyn/ASAnnotation/), MetaScore (developed by Olivier Koechlin; see Koechlin, 2011), Acousmographe (developed by INA-GRM: http://www.inagrm.com/accueil/outils/acousmographe), or Flash/multimedia sketches. Creating Flash or HTML5 animations is a good option for multimedia publications, but it requires coding and complex development; software such as Acousmographe or MetaScore is then a good compromise. Unfortunately, MetaScore is not publicly available: it was developed for the library of the Cité de la Musique (Paris) and is only used for internal publications. If you only need to annotate, e.g. to add small texts (markers) to a sound, you can also use Sonic Visualiser or ASAnnotation.

4. Software oriented towards musical analysis: Acousmographe with the Aural Sonology Plug-in (developed by INA-GRM from Lasse Thoresen's research: http://www.inagrm.com/aural-sonology-plugin-0), Acousmoscribe (developed by SCRIME from ideas by Jean-Louis Di Santo: http://scrime.labri.fr), and TIAALS (developed by the University of Huddersfield and Durham University: http://www.hud.ac.uk/research/researchcentres/tacem/). The first two packages contain tools to describe and represent sounds with an augmented version of Pierre Schaeffer's sound object theory (Thoresen, 2007; Di Santo, 2009). TIAALS focuses on sound material analysis and the realisation of typological, paradigmatic, or other analytical charts.
These categories are of course not limited to these specific software packages; I have only presented the most advanced or useful software for analysing electroacoustic music. Unfortunately, these software packages have limitations:
• They cannot analyse audio-visual files; they only use sound files, and most of them only stereophonic ones. Video music and multitrack works are very common in electroacoustic music. Moreover, video is a good support for analysing performance.
• Several of them cannot export their data to readable files or import data from other software. There is no format for exchanging analysis data between them, yet analysing electroacoustic music requires the use of several software applications, from the extraction of data to the creation of representations.
• The interface is often limited and not adapted to musical studies. For example, there is no way to navigate inside a file and compare different moments of a work, or of different works.
• While they have interesting features (such as the Timbre Scope of Acousmographe or the drawing of audio descriptor values on the sonogram in Sonic Visualiser), most of them are difficult to use in some contexts (e.g. with a long work, without the possibility of filtering data or of synchronising with a graphic representation).
To this list of software, I have to add programmes for interactive analysis. Several musicologists have published realisations as closed software, proposing interactive experiences or musical material for the reader. Michael Clarke has published several analyses as standalone software applications (Clarke, 2012). Even if these realisations are not exactly software, because the user cannot use them to analyse other pieces, the interactive parts are very complex and seem to consist of small applications for exploring the composer's musical research. In the field of creative process analysis, Ircam has published several CD-ROMs, such as those on Philippe Manoury (Battier, Cheret, Lemouton and Manoury, 2003) and Roger Reynolds (McAdams and Battier, 2005). These CD-ROMs contain analyses and musical material from the specific work. Readers can use them to create their own analyses.
This short presentation of the most common software used in analytical research demonstrates that current packages offer a huge array of possibilities to the researcher. Each software application is focused on very specific and powerful features. Unfortunately, most of them were not developed by or with musicologists; they are not the result of a study of the musical analysis workflow. Analysing music requires some useful features that these software packages do not integrate.

3.2. EAnalysis: Towards a new tool for electroacoustic music analysis
While these limitations were not so important fifteen or twenty years ago, they are more problematic for studying recent electroacoustic music. This is why I decided to reverse the method and to develop EAnalysis in a different way:
• To develop software suitable for musicologists and musicians: while they are not the only intended users, they are the primary targets.
• Not to reinvent the wheel: good software already exists, for example, for extracting data from sound, so use its results rather than redeveloping it.
• To develop a useful player for electroacoustic music: to navigate and compare different moments of a work or of different works, to play the different tracks of a multitrack work, and to use audio-visual or image files.
• To create analytic/text/graphic tools for the study of music. Software with beautiful graphic tools to draw anything you want is not necessarily useful for producing a graphic representation. Musicologists, students, teachers, and even children need very specific tools to create a music representation while listening or very soon after.
• To develop specific analytic tools using analytic tags or an interface to compare analyses. Moreover, analytic tools have to be linked to graphic tools.
• To analyse, we need to present and manipulate various values. This is not always possible with a simple two-dimensional view; we need to use these values in different kinds of view to create augmented representations.
• Finally, I wanted to create a laboratory to experiment with new types of representation and new tools, without any limits (which is why several of them are not yet finalised and need further research).
Various limitations of other software had to be resolved with EAnalysis:
• Projects in EAnalysis would be able to use one or several audio-visual files.
• EAnalysis would interact with other software through import/export features.
• The interface would be developed to study sound and music, not only to play a sound file like a very simple player.
• Each feature would be configurable so as not to be limited to a specific context.
This list of goals is the result of several years of research. I have used various software packages in my papers and experimented with them for musical analysis. Unfortunately, musicology rarely integrates digital developments; nevertheless, studying recent electroacoustic composition and going beyond common representation/analysis are very important goals for research.

4. Inside the development of EAnalysis


4.1. From iAnalyse Studio to EAnalysis
The development of EAnalysis was a long process. The project 'New Multimedia Tools for Electroacoustic Music Analysis' started in October 2010, but EAnalysis is in part the result of my previous research. Over several years from around 2006, I developed a first piece of software, iAnalyse (iAnalyse Studio is available as free software: http://ianalyse.pierrecouprie.fr), which was a presentation application for musicians. It contained slides and graphic shapes much like PowerPoint, but each of them could be synchronised to an audio-visual file. iAnalyse was perfectly adapted to the presentation of written music. The user could annotate a score, create a playhead to help the reader follow the score, and create simple animations for musicologists or teachers. Around 2008, I imagined a development of this idea that would extend it with analytical tools. At the 2008 EMS Conference, I presented new features that included possibilities for analysing electroacoustic music. Annotations were based on Lasse Thoresen's system (Thoresen, 2007) and were used with a sonogram. This first presentation was very incomplete and worked only as a simulated part of iAnalyse. I then started research to create a system of annotation that was more open and that included other analytical theories. Indeed, soundscape analysis (Schafer, 1994), spectromorphology (Smalley, 1997), Temporal Semiotic Units (Hautbois, 2013), functions (Roy, 2003), and the language grid (Emmerson, 1986) are good examples of what an analytical software package must include. Finally, the 'New Multimedia Tools for Electroacoustic Music Analysis' project started, and we decided to create a separate piece of software instead of including the electroacoustic analytic tools inside iAnalyse.
During these years of research, I realised that creating tools for electroacoustic music analysis requires very specific thinking and solutions. I therefore needed to re-think the current tools. I followed three main ideas:
• Analysis of electroacoustic music involves starting with analysis, not with drawing. Drawing is the final step, and it should be possible to automate the mapping between analytic and graphical parameters.
• Analysis is a great tool for understanding music and does not concern only musicologists. One of the aims of the 'New Multimedia Tools for Electroacoustic Music Analysis' project was to create a toolbox for different types of users. The software must offer a range of strategies adapted to very different types of music, users, and habits.
• Analysis means using and linking a variety of research and results, so the software must be able to import and export data from and to other software. Moreover, users must be able to exchange parts of a work, libraries, or a whole analysis.
Some of these ideas have been realised in EAnalysis as it exists at the time of writing; others have yet to be developed further to become more efficient. But the research has been started, and if EAnalysis is only a laboratory for these ideas, it is a substantial laboratory for future developments.
4.2. Associating various points of view
One of the most important goals of EAnalysis is to represent several parameters or values at the same time. In previous research, I demonstrated the difficulty of representing more than four analytic parameters in the same representation (Couprie, 2009). Common graphic representation uses X/Y axes and shapes to represent sound parameters:
• The X-axis usually represents time position and duration.
• The Y-axis usually represents pitch or a frequency range.
• The morphology of a shape is used to represent the amplitude of the sound.
• The analyst can also use colour and texture to represent frequency range, grain, or structural level.
Figure 2 represents the beginning of a piece by Alain Savouret. I worked on a graphic representation of this piece for the CD-ROM La musique électroacoustique by INA-GRM (Couprie, 2000), and this new representation is based on it. The space of figure 2 allows the representation of several parameters of sound:
• X-axis: time position and duration.
• Y-axis: panoramic position, indicated by the letters R, C, and L for right, centre, and left.
• The morphology of a shape represents the type of sound and/or its amplitude morphology.
• Colour represents sound transformation: black is the original sound, grey is the original sound with filter processing, and the light grey ellipse is reverberation.
This graphic representation is very simple, but we can observe an important point: graphic representation is a good tool for representing listening characteristics of sound (type, spatial position, transformation) and implicit musical aspects (rhythm and duration, structural construction). Moreover, associating graphics, waveform, and sonogram allows us to represent more parameters (pitches, spectral range, intensity variations).

Figure 2. Graphic representation of the beginning of 'Dulcinea', an extract of Don Quichotte Corporation by Alain Savouret.
Is it possible to create a complex graphic representation that would associate the information of these three representations? Adding more parameters demands more dimensions to extend the graphic representation. The use of 3D causes two important problems:
• The reader loses precision: distinguishing exact positions between different shapes becomes complex.
• 3D adds only one further dimension for one parameter: how can we add more parameters?
Another issue is creating different kinds of representation within a single analysis. Musicologists need to change their point of view without recreating their graphic representation. Current software is limited because the analysis is created through drawing: you segment sound material and analyse structure by drawing shapes. Changing the point of view, or creating another representation from the time and frequency positions of shapes you have already created, demands a complete redraw, a new representation. This limitation can be removed by disconnecting analysis from drawing. EAnalysis offers the possibility to create analytic events with time and frequency positions. The analytic part of an event consists of several analytic properties that you have created for your analysis. After you have analysed, you decide how the shapes are drawn: a system of rules, as in style sheets, associates analytic properties with graphic properties. Events contain three types of property: bound properties, which define the global frame; graphic properties, which contain everything needed for drawing; and analytic properties, which optionally list any kind of analytic description of the sound. Events are drawn in a time view from their bound and graphic properties, but the user can also derive any bound or graphic property from other graphic or analytic properties (see the sketch after the list below).
This system is powerful and allows the user to work with several strategies:
• Creating a common graphic representation without analytic properties and without rules.
• Creating a common graphic representation with analytic properties and drawing different types of representation, different types of analysis.
• Focusing on analysis by working with analytic properties: drawing simple shapes (e.g. a rectangle), adding analytic properties, and deciding afterwards how they will be drawn.
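To make this event-and-rule mechanism more concrete, the following sketch illustrates the principle in Python. It is not the EAnalysis implementation: the names (Event, Rule, apply_rules) and the property keys are hypothetical, and real events would carry many more graphic parameters.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    # Bound properties: the global frame of the event on the time view.
    start: float            # seconds
    duration: float         # seconds
    low_freq: float = 0.0   # Hz
    high_freq: float = 22050.0
    # Graphic properties: how the event is drawn.
    graphic: dict = field(default_factory=lambda: {"shape": "rectangle",
                                                   "colour": "#808080",
                                                   "opacity": 1.0})
    # Analytic properties: free key-value descriptions of the sound.
    analytic: dict = field(default_factory=dict)

@dataclass
class Rule:
    """If an analytic property equals a given value, set a graphic property."""
    analytic_key: str
    analytic_value: str
    graphic_key: str
    graphic_value: object

def apply_rules(events, rules):
    """Derive graphic properties from analytic ones, like a style sheet."""
    for ev in events:
        for r in rules:
            if ev.analytic.get(r.analytic_key) == r.analytic_value:
                ev.graphic[r.graphic_key] = r.graphic_value

# Example: colour mapped to sound transformation, shape mapped to sound type.
events = [
    Event(0.0, 2.5, analytic={"transformation": "none", "type": "iterative"}),
    Event(2.5, 4.0, analytic={"transformation": "filter", "type": "sustained"}),
]
rules = [
    Rule("transformation", "none", "colour", "#000000"),
    Rule("transformation", "filter", "colour", "#888888"),
    Rule("type", "sustained", "shape", "ellipse"),
]
apply_rules(events, rules)
```

With such a separation, changing the point of view simply means editing the rules: the analytic layer stays untouched while the drawing is regenerated.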
4.3. Tools for different types of users
Working with different types of users at different levels is one of the aims of the 'New Multimedia Tools for Electroacoustic Music Analysis' project. EAnalysis integrates this possibility in three parts: modes, types of view, and types of event.
4.3.1. Modes
EAnalysis integrates three modes: normal, add-text, and drawing. These modes allow the user to create events with different tools. Normal mode is the default: the user adds an event by dragging and dropping it onto the view from a preformatted list or from his own library. With add-text mode, the user enters text during playback and can annotate audio-visual files with words or sentences. Each piece of text is an event, and the user can switch to normal mode to change its graphic properties. This mode is designed for analysts who prefer to work with text, or for quickly noting ideas during a first listening. Drawing mode is for users who prefer to draw with a mouse, a graphic tablet, or an interactive whiteboard. This mode is very useful for creating very simple annotations on a white page, highlighting a sonogram, or working on a whiteboard while listening with children. Moreover, if the user works with a graphic tablet, pressure is detected and can be used to create artistic drawings like calligraphy.
These three modes were the first features developed to respond to various users; they were not created as individual elements but as part of a global architecture.
4.3.2. Views
The user can create several types of view. These are used to edit and/or show events, images, or other data:
• Time view is the most important view. The background contains a waveform, a linear or logarithmic sonogram, layers of sonograms, a differential sonogram (Chouvel, Bresson and Agon, 2007), an image, or a colour. The middleground shows data imported from other software, such as audio descriptors displayed as curve charts. The foreground shows markers and graphic events. The user creates graphic representations with this view.
• Image view displays slideshows of images. For example, pictures taken during a performance or a soundwalk can be synchronised with the sound recording.
• Map view is used to create a chart from extracts of audio files. These extracts are represented by a sonogram, a waveform, events, or a colour, and can be linked with lines like a mind map.
• Structure view shows linear structures with different representations: linear, formal diagram, arc diagram to display patterns, and similarity matrix.
• Video view displays the image of movie files.
Views are stacked on a vertical axis. Their time positions can be synchronised or not: unsynchronised time allows the comparison of different time positions or different track positions in the same piece, or in different pieces.
With EAnalysis, the user can associate different types of view. Figure 3 displays two types of view (from bottom to top):
1. Five time views: waveform of the whole piece, sonogram, graphic representation, chart with data (audio descriptors), and similarity matrix computed from audio descriptors.
2. A video view with the animated film by Robert Lapoujade (Bayle, 2013).
Associating different views creates a complex representation with which to study or present the results of an analysis.

Figure 3. L'oiseau moqueur by François Bayle (animated film by Robert Lapoujade). EAnalysis displays different types of view: five time views and a video view.
Figure 4 displays another example of a complex representation. The piece NoaNoa by Kaija Saariaho, for flute and electronics, is structured around a root cell of two notes. All the other segmented micro-structures can be analysed with a paradigmatic chart. This figure presents three views (from bottom to top):
1. The sonogram.
2. The paradigmatic chart of the opening, with three units (y-axis) displayed in time (x-axis).
3. The score of this opening extract.
The map view allows the creation of any kind of chart from extracts of audio files. Blocks of colour, waveform, sonogram, or graphic events represent these extracts. Blocks can be linked and positioned on a white view. In this example, the positions represent units and time, but blocks are movable in any direction. The user can select a block and play the corresponding extract, or visualise which block is under the playhead when playing the whole piece.

Figure 4. Sonogram, paradigmatic chart, and score of the beginning of NoaNoa by Kaija Saariaho.
Figure 5 shows different types of structure representation (from bottom to top):
1. The linear structure shows the segmentation in a classical manner, but colours can be mapped to the duration or the title of units.
2. The formal diagram highlights novelty and repetition of units.
3. The arc diagram represents patterns by linking similar sets of units.
4. The similarity matrix is computed from the titles of units and reveals similarity between different parts of the structure (see the sketch below).
Figure 5. Different types of structure representations (from bottom to top): linear, formal
diagram, arc diagram, similarity matrix.
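The similarity matrix of figure 5 can be illustrated with a few lines of code. The sketch below is an assumption about the principle rather than the actual EAnalysis computation: the titles of the segmented units are compared pairwise, identical titles giving black cells and different titles white cells.

```python
import numpy as np
import matplotlib.pyplot as plt

def title_similarity_matrix(titles):
    """Binary matrix: cell (i, j) is 1 when units i and j share the same title."""
    n = len(titles)
    matrix = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            matrix[i, j] = 1.0 if titles[i] == titles[j] else 0.0
    return matrix

# Example segmentation: a theme-and-variations structure.
units = ["A", "B", "A'", "B", "A", "C", "B"]
m = title_similarity_matrix(units)
plt.imshow(m, cmap="gray_r", interpolation="nearest")  # black = similar
plt.xticks(range(len(units)), units)
plt.yticks(range(len(units)), units)
plt.title("Similarity matrix from unit titles (sketch)")
plt.show()
```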

Figure 3 uses a chart and a similarity matrix to represent data imported from Sonic Visualiser. Because the visualisation of data is important for extracting similarities and singularities in musical analysis, EAnalysis also offers other possibilities for creating representations from data. Figure 6 presents five types of graph (from bottom to top):
1. A similarity matrix does not show values but similarities between values (black represents similarity and white non-similarity).
2. A simple chart represents the data in a straightforward way.
3. A BStD chart (Malt and Jourdan, 2015) represents the evolution of timbre from three audio descriptors in a single band: spectral centroid (Y position), spectral variance (height), and intensity (colour gradient). A plotting sketch follows the figure caption below.
4. A point cloud can represent five dimensions of data (X, Y, size, colour, opacity). EAnalysis can use one or more point-cloud charts to represent data from different tracks, which helps comparative analysis.
5. A hierarchical correlation plot (Collective, 2009) represents the correlation between two sets of data from different levels of the structure.
Figure 6. Different types of data representation (from bottom to top): similarity matrix, simple chart (mirrored line), BStD chart, point-cloud chart, hierarchical correlation plot.
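The BStD chart lends itself to a short sketch. The code below is my reading of the description given above, not Malt and Jourdan's implementation: the spectral centroid fixes the vertical centre of a band, the spectral spread its thickness, and the intensity the colour of each time slice. The synthetic descriptor curves stand in for data that would normally be imported from another application, and the choice of colormap is arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize

def bstd_plot(times, centroid, spread, intensity):
    """Draw a BStD-style band: centre = centroid, height = spread, colour = intensity."""
    cmap = plt.cm.magma
    norm = Normalize(vmin=float(np.min(intensity)), vmax=float(np.max(intensity)))
    fig, ax = plt.subplots(figsize=(10, 3))
    # One coloured slice per analysis frame.
    for i in range(len(times) - 1):
        ax.fill_between(times[i:i + 2],
                        centroid[i:i + 2] - spread[i:i + 2] / 2,
                        centroid[i:i + 2] + spread[i:i + 2] / 2,
                        color=cmap(norm(intensity[i])), linewidth=0)
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Frequency (Hz)")
    return fig

# Synthetic descriptor curves standing in for imported data.
t = np.linspace(0, 60, 300)
centroid = 1500 + 700 * np.sin(t / 9)
spread = 300 + 150 * np.abs(np.cos(t / 5))
rms = 0.2 + 0.8 * np.sin(t / 7) ** 2
bstd_plot(t, centroid, spread, rms)
plt.show()
```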

These four examples demonstrate the possibilities of EAnalysis in terms of analysing, teaching, and communicating. Different configurations of views can also be saved in the same project.
4.3.3. Events
Because events contain three types of property, they can be used for different strategies and with different levels of complexity:
1. Graphic events are very simple shapes, such as are available in any drawing application: rectangle, ellipse, text, polygon, image, etc. This level is adapted to first annotations of the piece before analysis, to listening work with children, or to creating beautiful graphic representations.
2. Analytic events are preformatted shapes for analysis. Each event contains a graphic shape and one or more analytic parameters. Working with preformatted analytic events is a good starting point for students learning musical analysis or for specialists applying existing theories.
3. Users can also create their own analytic events with personalised analytic parameters. This level is highly flexible, allowing the user to adapt the representation and the analytic segmentation to the analysed work or to a personalised analytical theory.
Figure 7 shows three extracts of the EAnalysis interface: an example of a selected event and its graphical and analytical properties. Graphical properties contain three groups of parameters (graphic, text, and advanced) and are very close to those of graphics software. Analytical properties are key-value pairs.
Figure 7. A selected event (left) and its graphical (centre) and analytical (right) properties.
EAnalysis contains fifteen preformatted analytic parameters (sound objects, spectromorphologies, language grid, space, etc.), and users can add their own parameters and group them into lists and libraries to share with other users. The interface for editing events and managing their properties is simple and flexible.
Events are also complemented by markers. Markers are simply time positions with basic graphic properties. They can be used to note ideas on a first listening, or to mark breaks or structural parts. Events and markers are editable in the time view; this is why the time view is the default view for visualising, listening to, and editing analyses. The other views are used to display other data.
4.4. Import, export, share works with communities, and communicate
4.4.1. Import and export data
As explained above, modern software must be able to communicate with other software. Musicologists do not work with only one application; they use different software to prepare audio files, to create representations, or to analyse data with several different procedures. EAnalysis can import and export data from other software through four categories of file:
1. The audio-visual file is the root file from which a project is created, and a common export format. EAnalysis creates a project from a monophonic or stereophonic audio file or a video file. The user can also import other audio-visual files to work with multitrack pieces or to compare different pieces.
2. Image files are used to create an image event or a slideshow inside the image view. As an export format, images are useful for creating a key (with the 'export selected event as image' feature) or for exporting an analysis as images.
3. Text files are a common format for exchanging various types of data. EAnalysis uses them to import lists of time cues (to create markers), time-value pairs (to create curves in the data view), or graphic representations (from Pro Tools session-info text exports or the Acousmographe XML export). It can also export lists of events and markers to analyse or use in other software such as OpenMusic, Max, Excel, etc. (a minimal parsing sketch follows this list).
4. EAnalysis also has two formats of its own: the eanalysis project and the ealibrary. Both allow the user to share analyses, with or without the media files (when copyright does not allow their inclusion), as well as event libraries including personalised analytic parameters.
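The text files mentioned in point 3 are straightforward to produce and parse outside EAnalysis. The sketch below assumes a simple tab-separated layout (a time in seconds, optionally followed by a value or a label), which is how tools such as Sonic Visualiser typically export annotation layers; the exact columns EAnalysis expects may differ.

```python
import csv

def read_time_value_pairs(path, delimiter="\t"):
    """Return a list of (time_in_seconds, value) tuples from a text export."""
    points = []
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter=delimiter):
            if len(row) < 2:
                continue                      # skip empty or single-column lines
            try:
                points.append((float(row[0]), float(row[1])))
            except ValueError:
                continue                      # skip headers or text rows
    return points

def read_time_cues(path, delimiter="\t"):
    """Return marker dictionaries from lines of '<time>[<tab><label>]'."""
    markers = []
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter=delimiter):
            if not row:
                continue
            try:
                time = float(row[0])
            except ValueError:
                continue
            markers.append({"time": time, "label": row[1] if len(row) > 1 else ""})
    return markers
```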
The fourth point is very important for the 'New Multimedia Tools for Electroacoustic Music Analysis' project. Sharing work and research with other communities is a core activity for musicologists and musicians. With the OREMA web site (Online Repository for Electroacoustic Music Analysis: http://www.orema.dmu.ac.uk), Michael Gatt aims to enhance the sharing of works and tools and to develop theoretical discussion around the musical analysis of electroacoustic music. EAnalysis offers two formats for sharing projects (with or without media) and theoretical research (analytical event libraries).
In parallel with file-based data exchange, the current version of EAnalysis can also use the LibXtract plug-in and SuperVP (a technology developed at Ircam for spectral and temporal transformation, on which Audiosculpt is based) to compute audio descriptors and to modify gain. The workflow (exporting from one application in order to import into EAnalysis) is reduced to a few actions inside EAnalysis, which uses command-line tools to communicate with both technologies. The LibXtract plug-in offers the computation of about forty audio descriptors, and SuperVP allows the gain to be transformed within spectrum areas drawn with graphic events.
4.5. Perspectives
With its import and export features, EAnalysis can be defined as a workspace. Because it is difficult to create a real synergy between different software applications, allowing the user to exchange data is essential: it strengthens research in musicology and the power of each piece of software. The first step of development was to offer a large range of possibilities; the second step will be to demonstrate them through the realisation of different examples and to extend them by adding new features.
One part of the perspective for EAnalysis development is to show how to use it with other software, as in figure 6, which uses data from Sonic Visualiser. Data visualisation is a powerful feature of EAnalysis: any kind of list containing time-value pairs may be visualised.
The second part of the perspective will be adding compatibility with new software. The list presented in section 3.1 contains common software used in musical analysis, but musicologists also use other software, such as statistical applications or software used in music production. EAnalysis needs to integrate these other applications, and perhaps new types of view to represent their data. These perspectives are very exciting but also very complex, indeed not possible in several cases, because some software uses a specific format with particular representations. As I mentioned, there is no common format for exchanging data: only software that uses text-based formats (plain text, XML, JSON) can currently be used with EAnalysis. EAnalysis was developed to facilitate the addition of new types of view, but, as discussed in section 4.2, new types of representation have to emerge from needs.
EAnalysis answers the need for a multipurpose tool for electroacoustic music analysis. Of course, this workspace opens new possibilities by working with many types of data and creating representations from them, but EAnalysis is also a classical piece of software because it works with historical theories of analysis. Musicology also needs to go beyond these simple perspectives. During the development of EAnalysis, some decisions were difficult because I realised that several steps, although important, also reflected an outdated method, and there was a need to restart and go beyond the original aim. The best example is events. In EAnalysis, events are objects with borders (e.g. in time and frequency), a model adapted only to certain analytical strategies. Many recent electroacoustic works are very complex in terms of media or musical realisation and cannot be analysed with bordered or static objects. Another example of an EAnalysis limitation is the representation of sound. The software proposes different representations derived from the waveform or the sonogram. One of them, the similarity matrix, allows us to search for singularities inside spectromorphologies, but the construction of the matrix from the data of different tracks or different pieces needs to be improved with the dynamic time warping (DTW) algorithm (Zattra and Orio, 2009). Finally, some researchers are exploring new forms of analytical representation: the MaMux seminar at Ircam (the Mathematics and Music research seminar; several sessions have presented mathematical representations of music, and one session explored analytical representation: http://repmus.ircam.fr/mamux/saisons/saison12-2012-2013/2013-02-01) presented some of them. The emergence of research in this field is evidence that musicologists need new kinds of representation for complex musical relationships.
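Dynamic time warping itself is a standard technique; the textbook sketch below (not Zattra and Orio's implementation) shows how two descriptor sequences of different lengths or tempi can be compared through their cheapest alignment rather than a rigid frame-by-frame distance.

```python
import numpy as np

def dtw_cost(a, b):
    """Accumulated dynamic-time-warping cost between two descriptor sequences.

    a and b are arrays of shape (frames, features) or flat lists of values;
    the sequences may have different numbers of frames.
    """
    a = np.atleast_2d(np.asarray(a, dtype=float))
    b = np.atleast_2d(np.asarray(b, dtype=float))
    if a.shape[0] == 1 and a.size > 1:
        a = a.T  # treat a flat list as a single-feature sequence
    if b.shape[0] == 1 and b.size > 1:
        b = b.T
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # local distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

# Two versions of the 'same' gesture at different speeds align cheaply.
fast = np.sin(np.linspace(0, 3, 50))
slow = np.sin(np.linspace(0, 3, 80))
print(dtw_cost(fast, slow))
```

The accumulated cost, or the alignment path recovered from the cost matrix, could then feed a similarity matrix comparing different tracks or different performances of the same work.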

5. Conclusion
This chapter presents an account of the development of the EAnalysis software. As analytical software for sound-based music (Landy, 2007), EAnalysis is designed for the study of music based on sound: not only electroacoustic music but also other non-written music. The choices I made to offer two or more ways of achieving the same result, or different interface elements for the same feature, all go in the same direction: to respond to different types of user and to allow the analysis of different genres and categories of music. This chapter has presented the theoretical origins and the technical choices behind a software package that is better adapted to musical analysis than other software. As I mentioned, above all other goals, EAnalysis is an experimental laboratory (it is also an important step in my own research: in this software I have realised and experimented with some of the research ideas I have developed in several papers since 2005). Realisations by Michael Clarke in the field of aural analysis, research on archive preservation (Barkati, Bonardi, Vincent and Rousseaux, 2012), and new representations of sound (the differential sonogram or the similarity matrix of sonograms) demonstrate the importance of software development in the analysis of electroacoustic music.
Most of the current graphic representations used for the analysis of electroacoustic music are based on the same paradigm: a 2D representation of time and frequency with some annotations. EAnalysis offers other possibilities, but this is probably only a first step in a different direction. In the field of electroacoustic music, analytical research is in its teenage years. Computer science and multimedia possibilities have developed significantly in recent years; musicologists now have more keys with which to explore new paradigms of representation.

References
Barkati, K., Bonardi, A., Vincent, A. and Rousseaux, F. 2012. GAMELAN: A Knowledge Approach for Digital Audio Production Workflows. Artificial Intelligence for Knowledge Management. Montpellier: ECAI/IFIP.
Battier, M., Cheret, B., Lemouton, S. and Manoury, P. 2003. PMA LIB: The electronic music of Philippe Manoury. Paris: Ircam.
Bayle, F. 2013. L’expérience acoustique. Paris: Magison.

Chouvel, J.-M., Bresson, J. and Agon, C. 2007. L’analyse musicale différentielle : principes, représentation et application à l’analyse de l’interprétation. EMS Conference. Leicester: De Montfort University. Online article: http://jeanmarc.chouvel.3.free.fr/Flash/ArticleTFDHTML/index.html.
Chouvel, J.-M. 2011. Musical analysis and the representation. 7th European Music Analysis Conference. Rome. Online article: http://jeanmarc.chouvel.3.free.fr/textes/English/AnalysisAndRepresentationMTO.pdf.
Clarke, M. 2012. Analysing Electroacoustic Music: an Interactive Aural Approach. Music
Analysis 31.
Collective, 2009. Hierarchical Correlation Plots. Online article:
http://www.mazurka.org.uk/ana/timescape/.
Couprie, P. 2000. Transformation/transmutation. Analyse d’un extrait de Don Quichotte
Corporation d’Alain Savouret. La musique électroacoustique, Paris: INA-GRM/Hyptique.
Couprie, P. 2006. (Re)Presenting Electroacoustic Music. Organised Sound 11(2). Cambridge:
Cambridge University Press.
Couprie, P. 2009. La representación gráfica : una herramienta de análisis y de publicación de
la música. Doce Notas, El análisis de la música 19-20. Madrid: Gloria Collado Guevara.
Di Santo, J.-L. 2009. L’acousmoscribe, un éditeur de partitions acousmatiques. EMS Conference. Buenos Aires: UNTREF. Online article: http://www.ems-network.org/ems09/papers/disanto.pdf.
Emmerson, S. 1986. The relation of language to materials. The language of electroacoustic
music. Houndmills: Palgrave Macmillan.
Genette, G. 1997. L’œuvre de l’art. Paris: Seuil.
Hautbois, X. 2013. Temporal Semiotic Units (TSUs), a very short introduction. Online article: http://www.labo-mim.org/site/index.php?2013/03/29/225-temporal-semiotic-units-tsus-a-very-short-introduction.
Koechlin, O. 2011. De l’influence des outils numériques interactifs sur le temps musical.
Musimédiane 6. Paris: SFAM. Online journal: http://www.musimediane.com.
Landy, L. 2007. Understanding the Art of Sound Organization. Cambridge: MIT Press.
Malt, M. and Jourdan, E. 2015. Le BStD - Une représentation graphique de la brillance et de l'écart type spectral, comme possible représentation de l'évolution du timbre sonore. Forthcoming.
McAdams, S. and Battier, M. 2005. Creation and perception of a contemporary musical work: The Angel of Death by Roger Reynolds. Paris: Ircam.
Roy, S. 2003. L’analyse des musiques électroacoustiques : modèles et propositions. Paris:
L’Harmattan.
Schafer, R. M. 1994. The Soundscape. Rochester: Destiny Books.
Smalley, D. 1997. Spectromorphology: explaining sound-shapes. Organised Sound 2(2).
Cambridge: Cambridge University Press.
Teruggi, D. and Couprie, P. 2001. Hétérozygote et les Presque rien. Portraits Polychromes: Luc Ferrari. Paris: INA-GRM.
Thoresen, L. 2007. Spectromorphological analysis of sound objects: an adaptation of Pierre Schaeffer's typomorphology. Organised Sound 12(2). Cambridge: Cambridge University Press.
Zattra, L. and Orio, N. 2009. ACAME – Analyse comparative automatique de la musique électroacoustique. Musimédiane 4. Paris: SFAM. Online article: http://www.musimediane.com/spip.php?article87.
