
Panoramix: 3D mixing and post-production workstation

Thibaut Carpentier

To cite this version:

Thibaut Carpentier. Panoramix: 3D mixing and post-production workstation. 42nd International Computer Music Conference (ICMC), Sep 2016, Utrecht, Netherlands. ⟨hal-01366547⟩

HAL Id: hal-01366547
https://hal.archives-ouvertes.fr/hal-01366547
Submitted on 14 Sep 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Panoramix: 3D mixing and post-production workstation

Thibaut Carpentier
UMR 9912 STMS IRCAM – CNRS – UPMC
1, place Igor Stravinsky, 75004 Paris
[email protected]

ABSTRACT

This paper presents panoramix, a post-production workstation for 3D-audio contents. This tool offers a comprehensive environment for mixing, reverberating, and spatializing sound materials from different microphone systems: surround microphone trees, spot microphones, ambient miking, and Higher Order Ambisonics capture. Several 3D spatialization techniques (VBAP, HOA, binaural) can be combined and mixed simultaneously in different formats. Panoramix also provides conventional features of mixing engines (equalizer, compressor/expander, grouping parameters, routing of input/output signals, etc.), and it can be controlled entirely via the Open Sound Control protocol.

1. INTRODUCTION

Sound mixing is the art of combining multiple sonic elements in order to eventually produce a master tape that can be broadcast and archived. It is thus a crucial step in the workflow of audio content production. With the increasing use of spatialization technologies in multimedia creation and the emergence of 3D diffusion platforms (3D theaters, binaural radio broadcast, etc.), new mixing and post-production tools become necessary.

In this regard, the post-production of an electroacoustic music concert represents an interesting case study, as it involves various mixing techniques and raises many challenges. The mixing engineer usually has to deal with numerous and heterogeneous audio materials: main microphone recordings, spot microphones, ambient miking, electronic tracks (spatialized or not), sound samples, impulse responses of the concert hall, etc. With all these elements at hand, the sound engineer has to reproduce (if not re-create) the spatial dimension of the piece. His/her objective is to faithfully render the original sound scene and to preserve the acoustical characteristics of the concert hall, while offering a clear perspective on the musical form. Most often the mix is produced from the standpoint of the conductor, as this position makes it possible to apprehend the musical structure and provides an analytic point of view that conforms to the composer's idea.

Obviously, the sound recording made during the concert is of tremendous importance and it greatly influences the post-production work. Several miking approaches can be used (spaced pair, surround miking, close microphones, etc.), and the advantages and drawbacks of each technique are well known (see for instance [1–4]). For instance, when mixing Pierre Boulez's Répons, Lyzwa emphasized that multiple miking techniques had to be combined in order to benefit from their complementarity [5]: a main microphone tree (e.g. a surround 5.0 array) captures the overall spatial scene and provides a realistic impression of envelopment, as the different microphone signals are uncorrelated; such a system is well suited for distant sounds and depth perception. However, the localization of sound sources lacks precision, and thus additional spot microphones have to be used, close to the instruments. During post-production, these spot microphones have to be re-spatialized using panning techniques. Electronic tracks, if independently available, have to be processed similarly. Finally, the sound engineer can add artificial reverberation to the mix in order to fuse the different materials and to enhance the impression of depth.

In summary, the mixing engineer's task is to create a comprehensive sound scene through manipulation of the spatial attributes (localization, immersion, envelopment, depth, etc.) of the available audio materials. Tools used in the post-production workflow typically consist of a mixing console (analog or digital), digital audio workstations (DAWs), and sound spatialization software environments.

The work presented in this article aims at enhancing existing tools, especially with regard to 3D mixing, for which existing technologies are ill-suited. Mixing desks are usually limited to conventional panning techniques (time or intensity differences) and do not support 3D processing such as binaural or Ambisonic rendering. They are most often dedicated to 2D surround setups (5.1 or 7.1) and do not provide knobs for elevation control. Similarly, digital audio workstations lack flexibility for multichannel streams: most DAWs only support "limited" multichannel tracks/busses (stereo, 5.1 or 7.1), and inserting spatialization plugins is difficult and/or tedious. On the other hand, many powerful sound spatialization engines are available. As shown in [6] and other surveys, a majority of these tools are integrated into realtime media-programming environments such as Max or PureData. Such frameworks appear inadequate for post-production and mixing, as many crucial operations (e.g. group management or the dynamic creation of new tracks) can hardly be implemented. Furthermore, spatialization libraries are generally dedicated to one given rendering technique (for instance VBAP [7] or Higher-Order Ambisonics [8]) and are ill-suited to hybrid mixes.

Finally, high-spatial-resolution microphones such as the EigenMike¹ are essentially used in research labs but they

Copyright: © 2016 Thibaut Carpentier et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

¹ http://www.mhacoustics.com

122 Proceedings of the International Computer Music Conference 2016


remain under-exploited in actual production contexts, in spite of their great potential.

As a consequence, we have developed a new tool which provides a unified framework for the mixing, spatialization and reverberation of heterogeneous sound sources in a 3D context.

This paper is organized as follows: Section 2 presents the process of recording an electroacoustic piece for use in 3D post-production. This paradigmatic example is used to elaborate the specifications of the new mixing engine. Section 3 details the technical features of panoramix, the proposed workstation. Finally, Section 4 outlines possible future improvements.

Group | Instruments | Miking
I | saxophone, trumpet 1, bassoon, electric guitar | 4 microphones: AT4050, AKG214, C535, AKG214
II | synthesizer 1, clarinet 1, trumpet 2, cello 1 | 5 microphones: KMS105, DPA4021, AKG214, KM140, AKG411
III | flute 1, oboe, french horn 1, trombone 1, percussion 1 | 11 microphones: DPA4066, KM150, C353, KM150, Beta57, SM58 (x2), SM57 (x2), C535, AKG411
IV | synthesizer 2, violin 3, violin 4, viola 1, cello 2 | 5 microphones: DPA4061 (x3), DPA2011, KM140
V | percussion 2, trombone 2, french horn 2, clarinet 2, flute 2 | 10 microphones: SM57 (x2), SM58 (x2), MD421, C535, Beta57, KMS105, AKG414 (x2), DPA4066
VI | synthesizer 3, violin 1, violin 2, viola 2, double bass | 10 microphones: DPA4061 (x3), AKG414 (x4), KM140, C535 (x2), SM58 (x2)

Table 1. Spot microphones used for the recording.

2. PARADIGMATIC EXAMPLE
2.1 Presentation
Composer Olga Neuwirth's 2015 piece Le Encantadas o le avventure nel mare delle meraviglie, for ensemble and electronics², serves as a useful case study in 3D audio production techniques. The piece had its French premiere on October 21st in the Salle des Concerts de la Philharmonie 2 (Paris), performed by the Ensemble intercontemporain with Matthias Pintscher conducting. As is often the case in Neuwirth's work, the piece proposed a quite elaborate spatial design, with the ensemble divided into six groups of four or five musicians. Group I was positioned on-stage, while groups II to VI were dispatched in the balcony, surrounding and overlooking the audience (cf. Figure 1). The electronic part combined pre-recorded sound samples and real-time effects, to be rendered over a 40-speaker 3D dome above the audience. Different spatialization approaches were employed, notably Higher-Order Ambisonics (HOA), VBAP, and spatial matrixing. Throughout the piece, several virtual sound spaces were generated by means of reverberators. In particular, high-resolution directional room impulse responses, measured with an EigenMike microphone in the San Lorenzo Church (Venice), were used in a 4th-order HOA convolution engine in order to simulate the acoustics of the church – as a reference to Luigi Nono's Prometeo.

Figure 1. Location of the six instrumental groups in the Salle des Concerts – Philharmonie 2, Paris.

2.2 Sound recording

Given the spatial configuration of the piece, the recording session³ involved a rather large set of elements:
• 45 close microphones for the six instrumental groups (see Table 1),
• distant microphones for capturing the overall image of the groups: spaced microphone pairs for groups I and II; omni-directional mics for the side groups,
• one EigenMike microphone (32 channels) in the middle of the hall, i.e. in the center of the HOA dome,
• one custom 6-channel surround tree (see [5]), also located in the center of the hall,
• 32 tracks for the electronics (30 speaker feeds plus 2 subwoofers),
• direct capture of the 3 (stereo) synthesizers as well as 3 click tracks.
In total, 132 tracks were recorded with two laptop computers (64 and 68 channels respectively), which were later re-synchronized by means of the click tracks.

2.3 Specifications for the post-production workstation

In spite of its rather large scale, this recording session is representative of common practice in the electroacoustic field, where each recorded element requires post-production treatment. As mentioned in the introduction, various tools can be used to handle these treatments; however, there is as yet no unified framework covering all the required operations.

Based on the example of Encantadas (and others not covered in this article), we can begin to define the specifications for a comprehensive mixing environment. The workstation should (at least) allow for:
• spatializing monophonic sound sources (spot microphones or electronic tracks) in 3D,
• adding artificial reverberation,
• encoding and decoding of Ambisonic sound-fields (B-format or higher orders),
• mixing already spatialized electronic parts recorded as speaker feeds,

² Computer music design: Gilbert Nouno / Ircam
³ Sound recording: Ircam / Clément Cornuau, Mélina Avenati, Sylvain Cadars



• adjusting the levels and delays of each element so as to
align them,
• combining different spatialization approaches,
• rendering and exporting the final mix in several formats.
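To give a rough sense of the level/delay alignment listed above, the following minimal sketch derives the delay a spot microphone needs in order to line up with a main tree, from the source-to-tree distance. The speed-of-sound constant and the optional "lead" (which lets the spot arrive slightly early, exploiting the precedence effect discussed in Section 3.1) are illustrative values, not parameters of the actual workstation.

```python
# Illustrative alignment helper; distances and the lead time are
# example values, not panoramix parameters.

SPEED_OF_SOUND = 343.0  # m/s, at approx. 20 degrees C

def alignment_delay_ms(source_to_main_m, precedence_lead_ms=0.0):
    """Delay (ms) to add to a spot microphone so that it lines up
    with the main tree, minus an optional lead time so the spot
    arrives slightly early (precedence effect)."""
    acoustic_delay_ms = source_to_main_m / SPEED_OF_SOUND * 1000.0
    return max(0.0, acoustic_delay_ms - precedence_lead_ms)

# A source 17.15 m from the main tree reaches it ~50 ms after the
# close spot microphone picks it up:
print(round(alignment_delay_ms(17.15), 1))        # prints 50.0
print(round(alignment_delay_ms(17.15, 5.0), 1))   # prints 45.0
```

In practice such delays are entered per track; the sketch only shows the underlying arithmetic (distance divided by the speed of sound).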
With these specifications in mind, we developed panoramix,
a virtual mixing console which consists of an audio en-
gine associated with a graphical user interface for control-
ling/editing the session.

3. PANORAMIX
Like a traditional mixing desk, the panoramix interface is
organized as vertical strips, as depicted in Figure 3. Strips can
be of different types, serving different purposes with the
following common set of features:
• multichannel vu-meter for monitoring the input level(s),
• input trim,
• multichannel equalization module (where the EQ is applied uniformly on each channel). The equalizer comes as an 8-stage parametric filter (see ➅ in Figure 3) with one high-pass, one low-pass (Butterworth design with adjustable slope), two shelving filters, and four second-order sections (with adjustable gain, Q and cutoff frequency),
• multichannel dynamic compressor/expander (Figure 2) with standard parameters (ratio, activation threshold, and attack/release settings),
• mute/solo buttons,
• multichannel vu-meter for output monitoring, with a gain fader.

Figure 2. Compressor/expander module. ➀ Dynamic compression curve. ➁ Ratios and thresholds. ➂ Temporal characteristics.

In addition, a toolbar below the strip header (Figure 3) allows for the configuration of various options such as locking/unlocking the strip, adding textual annotations, configuring the vu-meters (pre/post fader, peakhold), etc.

Strips are organized in two main categories: input tracks and busses. The following sections describe the properties of each kind of strip.

3.1 Input tracks

Input tracks correspond to the audio streams used in the mixing session (which may be real-time or prerecorded). Each input track contains a delay parameter in order to re-synchronize audio recorded with different microphone systems. For example, spot microphones are recorded close to the instruments, so their signals arrive earlier than those of microphones placed at greater distances. Miking a sound source with multiple microphones is also prone to tone coloration; adjusting the delay parameter helps reduce this coloration and can also be used to vary the sense of spatial envelopment. In practice, it can be effective to set the spot microphones to arrive slightly early, to take advantage of the precedence effect, which stabilizes the perceived location of the combined sound.

3.1.1 Mono Track

A Mono Track is used to process and spatialize a monophonic signal, typically from a spot microphone or an electronic track. The strip provides controls over the localization attributes (azimuth, elevation, distance), spatial effects (Doppler, air absorption filtering) and reverberation. The artificial reverberation module is derived from the Spat architecture [9], wherein the generated room effect is composed of four temporal sections: direct sound, early reflections, late/diffuse reflections, and reverberation tail. By default the Spat perceptual model is applied, using the source distance to calculate the gain, delay, and filter coefficients for each of the four temporal sections. Alternatively, the perceptual model can be disabled (see slave buttons ➂ in Figure 4) and the levels adjusted manually. Each temporal section may also be muted independently. In the signal processing chain, the extended direct sound (i.e. direct sound plus early reflections) is generated inside the mono track (Figure 7), while the late/diffuse sections are synthesized in a reverb bus (described in 3.2.2) which is shared among several tracks in order to minimize the CPU cost. Finally, a drop-down menu ("bus send") allows one to select the destination bus (see 3.2.1) of the track.

Moreover, all mono tracks are visualized (and can be manipulated) in a 2D geometrical interface (➆ in Figure 3).

3.1.2 Multi Track

A Multi Track is essentially a coordinated collection of mono tracks, where all processing settings (filters, reverberation, etc.) are applied identically to each monophonic channel. The positions of the mono elements are fixed (i.e. they are set once, via the "Channels..." menu, for the lifetime of the session). Such a Multi Track is typically used to process a multichannel stream of speaker feed signals (see paragraph 2.3).

Similar results could be obtained by grouping (see 3.5) multiple "Mono" tracks; however, "Multi" tracks make the configuration and management of the session much simpler, faster and more intuitive.

3.1.3 EigenMike Track

As its name suggests, an "EigenMike" Track is employed to process recordings made with spherical microphone arrays such as the EigenMike. Correspondingly, the track has 32 input channels and it encodes the spherical microphone signals in the HOA format. Encoding can be performed up to 4th order, and several normalization flavors (N3D, SN3D, FuMa, etc.) are available.

Modal-domain operators can later be applied to spatially transform the encoded sound-field, for example rotating the


Figure 3. Main interface of the panoramix workstation. ➀ Input strips. ➁ Panning and reverb busses. ➂ LFE bus. ➃ Master track. ➄ Session options. ➅ Insert modules (equalizer, compressor, etc.). ➆ Geometrical interface for positioning.

whole sound scene, or weighting the spherical harmonic components (see ➃ in Figure 4).

Signals emanating from an EigenMike recording are already spatialized and convey the reverberation of the concert hall; however, a reverb send parameter is provided in the track, which can be useful for adding subtle artificial reverberation, coherent with the other tracks, in order to homogenize the mix. The reverb send is derived from the omni component (W channel) of the HOA stream.

3.1.4 Tree Track

A "Tree" track is used to accommodate the signals of a microphone tree, such as the 6-channel tree installed for the recording of Encantadas (section 2.2). The "Mics..." button (cf. Track "Tree 1" in Figure 3) pops up a window for setting the positions of the microphones in the tree. It is further possible to align the delay and level of each cell of the microphone array.

As microphone trees capture the sound scene in its entirety, the "Tree" track does not apply any specific treatment to the signals.

3.2 Busses

Three types of bus are provided: panning busses, reverb busses, and one LFE ("low frequency enhancement") bus.

3.2.1 Panning/Decoding bus

The role of panning busses is threefold: 1) they act as summing busses for the track output streams; 2) they control the spatialization technique in use (three algorithms are currently supported: VBAP, HOA and binaural); 3) they control various parameters related to the encoding/decoding of the signals. For speaker-based rendering (VBAP or HOA), the "Speakers..." button allows for the configuration of the speaker layout (Figure 6); in the case of binaural reproduction, the "hrtf..." button provides the means to select the desired HRTF set. Finally, HOA panning busses decode the Ambisonic streams, and several decoding parameters can be adjusted (see "HOA Bus 1" in Figure 3).

The selection of rendering techniques (VBAP, HOA, binaural) was motivated by their ability to spatialize sounds in full 3D, and by their perceptual complementarity. Other panning algorithms may be added in future versions of panoramix.

Output signals from the panning busses are sent to the Master strip. Each panning bus provides a routing matrix so as to assign the signals to the desired destination channels (➁ in Figure 5).

3.2.2 Reverberation bus

Reverberation busses synthesize the late/diffuse sections of the artificial reverberation processing chain. A reverb bus is uniquely and permanently attached to one or more panning busses, and the reverberation effect is applied to each track routed to the bus.

Panoramix builds on the reverberation engine of Spat, which consists of a feedback delay network with a variable decay profile, adjustable in three frequency bands. The main parameters of the algorithm are exposed in the reverb strip (see ➄ in Figure 4).

3.2.3 LFE Bus

Each track has an LFE knob to tune the amount of signal sent to the LFE bus, which handles the low-frequency signals sent to the subwoofer(s) of the reproduction setup. The bus applies a low-pass filter with adjustable cutoff frequency.

3.3 Master

The "Master" strip collects the output signals of all the busses and forwards them to the panoramix physical outputs.
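As an illustration of the pairwise amplitude panning used by the panning busses, here is a minimal 2D VBAP sketch in the spirit of [7]. It is a generic textbook formulation, not panoramix's actual implementation; the speaker angles are arbitrary example values.

```python
# Minimal 2D VBAP sketch (after Pulkki [7]); speaker angles are
# illustrative only.
import math

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    """Find gains g1, g2 such that g1*l1 + g2*l2 points toward the
    source direction, then power-normalize so g1^2 + g2^2 = 1."""
    def unit(deg):
        rad = math.radians(deg)
        return (math.cos(rad), math.sin(rad))
    p = unit(source_deg)
    l1, l2 = unit(spk1_deg), unit(spk2_deg)
    # Solve the 2x2 system whose columns are l1 and l2 (Cramer's rule)
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (p[1] * l1[0] - p[0] * l1[1]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# A source half-way between speakers at +45 and -45 degrees
# receives equal gains (about -3 dB each):
g1, g2 = vbap_2d(0.0, 45.0, -45.0)
```

For 3D setups VBAP works analogously with speaker triplets and a 3x3 system, which is the case relevant to a dome of loudspeakers.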



Although the workstation has only one Master strip, it is possible to simultaneously render mixes in various formats. For instance, if the session has 26 physical output channels, one can assign channels 1–24 to an Ambisonic mix and channels 25–26 to a binaural rendering.

Figure 7. Audio architecture (simplified representation). ➀ Mono track. ➁ Panning/decoding bus. ➂ Reverb bus.

3.4 Session options

The "Options" strip is used for the management of the mixing session. This includes the routing of the physical inputs (see ➆ in Figure 4 and ➀ in Figure 5), the creation and editing of tracks and busses (➇ in Figure 4), as well as the import/export of preset files (➉ in Figure 4).

Figure 4. View of multiple strips; from left to right: mono track, EigenMike track, HOA reverberation bus, master track, session options. ➀ Strip header: name of the strip, color, lock/unlock, options, annotations, input vu-meter, input trim, equalizer and compressor. ➁ Localization parameters (position, Doppler effect, air absorption). ➂ Room effect settings (direct sound, early reflections, send to late reverb). ➃ HOA encoding and sound-field transformation parameters. ➄ Late reverb settings (reverberation time, crossover frequencies, etc.). ➅ Master track. ➆ Input matrix. ➇ Track management (create, delete, etc.). ➈ Groups management. ➉ Import/export of presets and OSC configuration.

Figure 5. ➀ Input routing. Physical inputs (rows of the matrix) can be assigned to the available tracks (columns). ➁ Panning bus routing "HOA 1". The output of the bus (columns) can be routed to the Master channels (rows), i.e. towards the physical outputs. Each channel can have multiple connections (e.g. one physical input can be routed to several tracks).

Figure 6. Configuration of the speaker layout for a panning bus. Speaker positions can be edited in Cartesian ➀ or spherical ➁ coordinates. The reproduction setup can be aligned in time ➂ and level ➃; delays and gains are automatically computed or manually entered.

3.5 Group management

In a mixing context, it is frequently useful to group (or link) several parameters so as to maintain a coherent relationship while manipulating them. To achieve this, panoramix offers a grouping mechanism whereby any modification of a track parameter also offsets that parameter in every linked track. The "Options" strip provides the means to create, edit, duplicate or delete groups (see ➈ in Figure 4 and Figure 8), and to select the active group(s). Grouping affects all track parameters by default; however, it is also possible to exclude some parameters from the group (e.g. mute, solo, send; see ➂ in Figure 8).

Figure 8. Creation/edition of a group. ➀ Available tracks. ➁ Tracks currently in group. ➂ Group options.
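The relative-offset behavior of groups can be sketched as follows. The Track and Group classes, parameter names, and values are hypothetical, introduced only to illustrate the mechanism of propagating a delta (rather than an absolute value) to linked tracks.

```python
# Illustrative sketch of group-linked parameters; class and
# parameter names are hypothetical, not panoramix's actual API.

class Track:
    def __init__(self, name, azimuth=0.0, gain_db=0.0):
        self.name = name
        self.params = {"azimuth": azimuth, "gain_db": gain_db}

class Group:
    def __init__(self, tracks, excluded=()):
        self.tracks = list(tracks)
        self.excluded = set(excluded)  # parameters kept out of the link

    def set_param(self, track, name, value):
        """Apply the change to `track`, then propagate the *offset*
        (not the absolute value) to every other linked track."""
        delta = value - track.params[name]
        track.params[name] = value
        if name in self.excluded:
            return
        for other in self.tracks:
            if other is not track:
                other.params[name] += delta

t1 = Track("violin 1", azimuth=-30.0)
t2 = Track("violin 2", azimuth=30.0)
grp = Group([t1, t2])
grp.set_param(t1, "azimuth", -20.0)   # a +10 degree offset...
print(t2.params["azimuth"])           # prints 40.0 (...applied to t2 too)
```

Excluding parameters such as mute or solo from the link, as the interface allows, corresponds to the `excluded` set in this sketch.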



3.6 OSC communication

All parameters of the panoramix application can be remotely accessed via the Open Sound Control protocol (OSC [10]). Typically, a digital audio workstation is used for editing and playback of the audio tracks, while panoramix handles the spatial rendering and mixing (see Figure 9). Automation data is stored in the DAW and sent to panoramix through OSC via a plugin such as ToscA [11].

Figure 9. Workflow with panoramix and a digital audio workstation communicating through the OSC protocol and the ToscA plugin.

4. CONCLUSION AND PERSPECTIVES

This paper considered the design and implementation of a 3D mixing and post-production workstation. The developed application is versatile and offers a unified framework for mixing, spatializing and reverberating sound materials from different microphone systems. It overcomes the limitations of other existing tools and has proven useful in practical mixing situations.

Nonetheless, the application can be further improved, and many new features are considered for future versions. These include (but are not limited to):
• support for other encoding/decoding strategies, notably for M-S and B-format microphones,
• extension of the reverberation engine to convolution or hybrid processors [12],
• import and/or export of the tracks' settings in an object-oriented format such as ADM [13],
• implementation of monitoring or automatic down-mixing tools, based for instance on crosstalk cancellation techniques as proposed in [14],
• insertion of audio plugins (VST, AU, etc.) in the strips,
• integration of automation data directly into the panoramix workstation,
• synchronization of the session to an LTC time-code.

Acknowledgments

The author is very grateful to Clément Cornuau, Olivier Warusfel, Markus Noisternig and the whole sound engineering team at Ircam for their invaluable help in the conception of this tool. The author also wishes to thank Angelo Farina for providing the EigenMike used for the recording of Encantadas, and Olga Neuwirth for authorizing this recording and its exploitation during the mixing sessions.

5. REFERENCES

[1] D. M. Huber and R. E. Runstein, Modern Recording Techniques (8th edition). Focal Press, 2014.

[2] F. Rumsey and T. McCormick, Sound and Recording (6th edition). Elsevier, 2009.

[3] B. Bartlett, "Choosing the Right Microphone by Understanding Design Tradeoffs," Journal of the Audio Engineering Society, vol. 35, no. 11, pp. 924–943, Nov. 1987.

[4] R. Knoppow, "A Bibliography of the Relevant Literature on the Subject of Microphones," Journal of the Audio Engineering Society, vol. 33, no. 7/8, pp. 557–561, July/August 1985.

[5] J.-M. Lyzwa, "Prise de son et restitution multicanal en 5.1. Problématique d'une œuvre spatialisée : Répons, Pierre Boulez," Conservatoire National Supérieur de Musique et de Danse de Paris, Tech. Rep., May 2005.

[6] N. Peters, G. Marentakis, and S. McAdams, "Current Technologies and Compositional Practices for Spatialization: A Qualitative and Quantitative Analysis," Computer Music Journal, vol. 35, no. 1, pp. 10–27, 2011.

[7] V. Pulkki, "Virtual Sound Source Positioning Using Vector Base Amplitude Panning," Journal of the Audio Engineering Society, vol. 45, no. 6, pp. 456–466, June 1997.

[8] J. Daniel, "Représentation de champs acoustiques, application à la transmission et à la reproduction de scènes sonores complexes dans un contexte multimédia," Ph.D. dissertation, Université de Paris VI, 2001.

[9] T. Carpentier, M. Noisternig, and O. Warusfel, "Twenty Years of Ircam Spat: Looking Back, Looking Forward," in Proc. of the 41st International Computer Music Conference, Denton, TX, USA, Sept. 2015, pp. 270–277.

[10] M. Wright, "Open Sound Control: an enabling technology for musical networking," Organised Sound, vol. 10, no. 3, pp. 193–200, Dec. 2005.

[11] T. Carpentier, "ToscA: An OSC Communication Plugin for Object-Oriented Spatialization Authoring," in Proc. of the 41st International Computer Music Conference, Denton, TX, USA, Sept. 2015, pp. 368–371.

[12] T. Carpentier, M. Noisternig, and O. Warusfel, "Hybrid Reverberation Processor with Perceptual Control," in Proc. of the 17th Int. Conference on Digital Audio Effects (DAFx-14), Erlangen, Germany, Sept. 2014.

[13] M. Parmentier, "Audio Definition (Metadata) Model – EBU Tech 3364," European Broadcasting Union, Tech. Rep., 2015. [Online]. Available: https://tech.ebu.ch/docs/tech/tech3364.pdf

[14] A. Baskind, T. Carpentier, J.-M. Lyzwa, and O. Warusfel, "Surround and 3D-Audio Production on Two-Channel and 2D-Multichannel Loudspeaker Setups," in 3rd International Conference on Spatial Audio (ICSA), Graz, Austria, Sept. 2015.
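To make the OSC-based remote control of Section 3.6 concrete, the sketch below encodes and sends OSC 1.0 messages using only the Python standard library. The address patterns, port number, and parameter units are invented for illustration; panoramix's actual OSC namespace is not documented here and may differ.

```python
# Minimal OSC 1.0 message encoder (standard library only).
# Address patterns and the port are hypothetical examples.
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and zero-padded to a
    multiple of 4 bytes."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying a single float32 argument:
    padded address, padded ',f' type-tag string, big-endian float."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")
            + struct.pack(">f", value))

# Send one automation frame to a (hypothetical) panoramix session
# listening on UDP port 4000:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for addr, val in [("/track/1/azim", 45.0),     # degrees
                  ("/track/1/elev", 10.0),     # degrees
                  ("/track/1/reverb/send", -6.0)]:  # dB
    sock.sendto(osc_message(addr, val), ("127.0.0.1", 4000))
```

In a real setup, a DAW plugin such as ToscA [11] generates such messages from automation lanes, so the mixing engineer never writes them by hand.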
