Audio Unit Programming Guide
2007-10-31
Apple Inc. © 2007 Apple Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without prior written permission of Apple Inc., with the following exceptions: Any person is hereby authorized to store documentation on a single computer for personal use only and to print copies of documentation for personal use provided that the documentation contains Apple's copyright notice. The Apple logo is a trademark of Apple Inc. No licenses, express or implied, are granted with respect to any of the technology described in this document. Apple retains all intellectual property rights associated with the technology described in this document. This document is intended to assist application developers to develop applications only for Apple-labeled computers. Apple Inc. 1 Infinite Loop Cupertino, CA 95014 408-996-1010 .Mac is a registered service mark of Apple Inc. Apple, the Apple logo, Carbon, Cocoa, eMac, Final Cut, Final Cut Pro, Finder, GarageBand, Logic, Mac, Mac OS, Macintosh, Objective-C, QuickTime, Tiger, and Xcode are trademarks of Apple Inc., registered in the United States and other countries.
Even though Apple has reviewed this document, APPLE MAKES NO WARRANTY OR REPRESENTATION, EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS DOCUMENT, ITS QUALITY, ACCURACY, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. AS A RESULT, THIS DOCUMENT IS PROVIDED AS IS, AND YOU, THE READER, ARE ASSUMING THE ENTIRE RISK AS TO ITS QUALITY AND ACCURACY. IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES RESULTING FROM ANY DEFECT OR INACCURACY IN THIS DOCUMENT, even if advised of the possibility of such damages. THE WARRANTY AND REMEDIES SET FORTH ABOVE ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer, agent, or employee is authorized to make any modification, extension, or addition to this warranty. Some states do not allow the exclusion or limitation of implied warranties or liability for incidental or consequential damages, so the above limitation or
exclusion may not apply to you. This warranty gives you specific legal rights, and you may also have other rights which vary from state to state.
Contents

Introduction
    Who Should Read This Document?
    Organization of This Document
    Making Further Progress in Audio Unit Development
    Required Tools for Audio Unit Development
    See Also

Chapter 1  Audio Unit Development Fundamentals

Chapter 2  The Audio Unit
    Defining and Using Properties
    Synthesis, Processing, and Data Format Conversion Code
        Signal Processing
        Music Synthesis
        Music Effects
        Data Format Conversion
    Audio Unit Life Cycle
        Overview
        Categories of Programmatic Events
        Bringing an Audio Unit to Life
        Property Configuration
        Audio Unit Initialization and Uninitialization
        Kernel Instantiation in n-to-n Effect Units
        Audio Processing Graph Interactions
        Audio Unit Processing
        Closing

Chapter 3  The Audio Unit View

Chapter 4  A Quick Tour of the Core Audio SDK

Chapter 5  Tutorial: Building a Simple Effect Unit with a Generic View
    Collect Configuration Information for the Audio Unit Bundle
    Set Your Company Name in Xcode
    Create and Configure the Project
    Test the Unmodified Audio Unit
    Implement the Parameter Interface
        Name the Parameters and Set Values
        Edit the Constructor Method
        Define the Parameters
        Provide Strings for the Waveform Pop-up Menu
    Implement the Factory Presets Interface
        Name the Factory Presets and Give Them Values
        Add Method Declarations for Factory Presets
        Set the Default Factory Preset
        Implement the GetPresets Method
        Define the Factory Presets
    Implement Signal Processing
        DSP Design for the Tremolo Effect
        Define Member Variables in the Kernel Class Declaration
        Write the TremoloUnitKernel Constructor Method
        Override the Process Method
        Override the Reset Method
        Implement the Tail Time Property
    Validate your Completed Audio Unit
    Test your Completed Audio Unit

Chapter 6  Appendix: Audio Unit Class Hierarchy

Tables and Listings

Chapter 5  Tutorial: Building a Simple Effect Unit with a Generic View
    Table 5-3     Specification of tremolo waveform parameter
    Table 5-4     Specification of Slow & Gentle factory preset
    Table 5-5     Specification of Fast & Hard factory preset
    Listing 5-1   Parameter names and values (TremoloUnit.h)
    Listing 5-2   Setting parameters in the constructor (TremoloUnit.cpp)
    Listing 5-3   The customized GetParameterInfo method (TremoloUnit.cpp)
    Listing 5-4   The customized GetParameterValueStrings method (TremoloUnit.cpp)
    Listing 5-5   Factory preset names and values (TremoloUnit.h)
    Listing 5-6   Factory preset method declarations (TremoloUnit.h)
    Listing 5-7   Setting the default factory preset in the constructor (TremoloUnit.cpp)
    Listing 5-8   Implementing the GetPresets method (TremoloUnit.cpp)
    Listing 5-9   Defining factory presets in the NewFactoryPresetSet method (TremoloUnit.cpp)
    Listing 5-10  TremoloUnitKernel member variables (TremoloUnit.h)
    Listing 5-11  Modifications to the TremoloUnitKernel Constructor (TremoloUnit.cpp)
    Listing 5-12  The Process method (TremoloUnit.cpp)
    Listing 5-13  The Reset method (TremoloUnit.cpp)
    Listing 5-14  Implementing the tail time property (TremoloUnit.h)
INTRODUCTION

Introduction
This document describes audio units and how to create them. Audio units are digital audio plug-ins based on Apple's world-class Core Audio technology for Mac OS X. As a hobbyist or a computer science student, you can design and build your own audio units to make applications like GarageBand do new things with sound. As a commercial developer, you can create professional-quality software components that provide features like filtering, reverb, dynamics processing, and sample-based looping. You can also create simple or elaborate MIDI-based music synthesizers, as well as more technical audio units such as time and pitch shifters and data format converters.

Because audio units are part of Core Audio and integral to Mac OS X, they offer a development approach for audio plug-ins that excels in terms of performance, robustness, and ease of deployment. With audio units, you also gain by providing a consistent, simple experience for end users. Your target market is wide, including performers, DJs, recording and mastering engineers, and anyone who likes to play with sound on their Macintosh.

Note: This first version of Audio Unit Programming Guide does not go into depth on some topics important to commercial audio unit developers, such as copy protection, parameter automation, and custom views (graphical user interfaces for audio units). Nor does this version provide instruction on developing types of audio units other than the most common type, effect units.
If you want to get your hands on building an audio unit right away, go straight to Tutorial: Building a Simple Effect Unit with a Generic View (page 87). As you build the audio unit, you can refer to other sections in this document for conceptual information related to what you're doing. If you prefer to build your knowledge incrementally, starting with a solid conceptual foundation before seeing the technology in action, read the chapters in order. If you already have some familiarity with building your own audio units, you may want to go straight to The Audio Unit (page 43) and Appendix: Audio Unit Class Hierarchy (page 137). You might also want to review A Quick Tour of the Core Audio SDK (page 83) to see if the SDK contains some treasures you haven't been using until now.
This document is organized into the following chapters:

- Audio Unit Development Fundamentals, a bird's-eye view of audio unit development, covering Xcode, the Core Audio SDK, design, development, testing, and deployment
- The Audio Unit, design and programming considerations for the part of an audio unit that performs the audio work
- The Audio Unit View, a description of the two audio unit view types (generic and custom) as well as an explanation of parameter automation
- A Quick Tour of the Core Audio SDK, showing why taking advantage of the code in the Core Audio SDK is the fastest route to audio unit development
- Tutorial: Building a Simple Effect Unit with a Generic View, a tutorial that takes you from zero to a fully functioning effect unit
- Appendix: Audio Unit Class Hierarchy, a tour of the audio unit class hierarchy provided by the Core Audio SDK
To develop audio units, you need the following:

- The ability to develop plug-ins using the C++ programming language, because the audio unit class hierarchy in the Core Audio SDK uses C++.
- A grounding in audio DSP, including the requisite mathematics. Alternatively, you can work with someone who can provide DSP code for your audio units, along with someone who can straddle the audio unit and math worlds. For optimum quality and performance of an audio unit, DSP code needs to be correctly incorporated into the audio unit scaffolding.
- A grounding in MIDI, if you are developing instrument units.
You also need the following tools:

- The latest version of Xcode. The examples in this document use Xcode version 2.4.
- The latest Mac OS X header files. The examples in this document use the header files in the 10.4.0 Mac OS SDK, installed with Apple's Xcode Tools.
- The latest Core Audio development kit. The examples in this document use Core Audio SDK v1.4.3, installed with Apple's Xcode Tools at this location on your system: /Developer/Examples/CoreAudio
- At least one audio unit hosting application. Apple recommends the AU Lab application, installed with Apple's Xcode Tools at this location on your system: /Developer/Applications/Audio/AU Lab
- The audio unit validation command-line tool, auval, Apple's validation tool for audio units, provided with Mac OS X.
See Also
As you learn about developing audio units, you may find the following information and tools helpful:
- The coreaudio-api mailing list, a very active discussion forum hosted by Apple that covers all aspects of audio unit design and development.
- Audio Unit framework API documentation (preliminary), available from the Audio Topic page on Apple's developer website.
- Additional audio unit API reference, available in the Core Audio Framework Reference in Apple's Reference Library.
- The TremoloUnit sample project, which corresponds to the audio unit you build in Tutorial: Building a Simple Effect Unit with a Generic View (page 87).
- Core Audio Overview, which surveys all of the features available in Core Audio and describes where audio units fit in.
- Bundle Programming Guide, which describes the file-system packaging mechanism in Mac OS X used for audio units.
- Component Manager Reference, which describes the API of the Component Manager, the Mac OS X technology that manages audio units at the system level. To learn more about the Component Manager, you can refer to the Apple legacy document Component Manager from More Macintosh Toolbox, and to Component Manager for QuickTime.
CHAPTER 1

Audio Unit Development Fundamentals
When you set out to create an audio unit, the power and flexibility of Core Audio's Audio Unit framework give you the ability to go just about anywhere with sound. However, this power and flexibility also mean that there is a lot to learn to get started on the right foot. In this chapter, you get a bird's-eye view of this leading-edge technology, to serve you as you take the first steps toward becoming an audio unit developer.

You begin here with a quick look at the audio unit development cycle. Then, you focus on what audio units are, and discover the important role of the Core Audio SDK in audio unit development. You learn how audio units function as plug-ins in Mac OS X and in concert with the applications that use them. Finally, you get introduced to the Audio Unit Specification, and how it defines the plug-in API that audio unit developers and application developers both write to.

After reading this chapter you'll be ready to dig in to the architectural and development details presented in The Audio Unit (page 43). If you want to get your hands on building an audio unit right away, you can skip this chapter for now and go straight to Tutorial: Building a Simple Effect Unit with a Generic View (page 87). As you build the audio unit, you can refer back to this chapter, and to other sections in this document, for conceptual information related to what you're doing.
As with any software development, the steps in the audio unit development cycle typically entail iteration. The tutorial later in this document, Tutorial: Building a Simple Effect Unit with a Generic View (page 87), leads you through most of the development cycle.
Figure 1-1  An audio unit and its view within a host application (the original figure shows the host's notification center linking the three pieces, and the audio unit's plug-in interface surrounding your custom code and the scaffolding from the Core Audio SDK, which is optional but recommended)
The figure shows two distinct internal parts of an audio unit bundle: the audio unit itself, on the left, and the audio unit view, on the right. The audio unit performs the audio work. The view provides a graphical user interface for the audio unit, and, if you provide it, support for parameter automation. (See Supporting Parameter Automation (page 37).) When you create an audio unit, you normally package both pieces in the same bundle (as you learn to do later), but they are logically separate pieces of code. The audio unit, its view, and the host application communicate with each other by way of a notification center set up by the host application. This allows all three entities to remain synchronized. The functions for the notification center are part of the Core Audio Audio Unit Event API. When a user first launches a host application, neither the audio unit nor its view is instantiated. In this state, none of the pieces shown in Figure 1-1 are present except for the host application. The audio unit and its view come into existence, and into play, in one of two ways:
- Typically, a user indicates to a host application that they'd like to use an audio unit. For example, a user could ask the host to apply a reverb effect to a channel of audio.
- For an audio unit that you supply to add a feature to your own application, the application opens the audio unit directly, probably upon application launch.
When the host opens the audio unit, it hooks the audio unit up to the host's audio data chain, represented in the figure by the light yellow (audio data) arrows. This hookup has two parts: providing fresh audio data to the audio unit, and retrieving processed audio data from the audio unit.
To provide fresh audio data to an audio unit, a host defines a callback function (to be called by the audio unit) that supplies audio data one slice at a time. A slice is a number of frames of audio data. A frame is one sample of audio data across all channels. To retrieve processed audio data from an audio unit, a host invokes an audio unit's render method.
Here is how the audio data flow proceeds between a host application and an audio unit:

1. The host invokes the audio unit's render method, effectively asking the audio unit for a slice of processed audio data.
2. The audio unit responds by calling the host's callback function to get a slice of audio data samples to process.
3. The audio unit processes the audio data samples and places the result in an output buffer for the host to retrieve.
4. The host retrieves the processed data and then again invokes the audio unit's render method.
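To make step 2 concrete, here is a minimal sketch of what a host-side render callback might look like, using the AURenderCallback signature shown later in Render Callback Connections (page 47). The SineWaveSource type and its fields are hypothetical, and the code assumes the default noninterleaved Float32 stream format:

#include <AudioUnit/AudioUnit.h>
#include <cmath>

// Hypothetical host-side state describing the sine wave to supply.
struct SineWaveSource {
    double phase;          // current phase, in radians
    double phaseIncrement; // 2 * pi * frequency / sample rate
};

// The audio unit calls this function whenever it needs a fresh slice.
static OSStatus InputRenderCallback (
    void                       *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp       *inTimeStamp,
    UInt32                     inBusNumber,
    UInt32                     inNumberFrames,
    AudioBufferList            *ioData
) {
    SineWaveSource *source = static_cast<SineWaveSource *> (inRefCon);
    double startPhase = source->phase;

    // Fill each noninterleaved buffer with inNumberFrames samples.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
        Float32 *samples = static_cast<Float32 *> (ioData->mBuffers[i].mData);
        double phase = startPhase;
        for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
            samples[frame] = static_cast<Float32> (std::sin (phase));
            phase += source->phaseIncrement;
        }
    }
    source->phase = startPhase + inNumberFrames * source->phaseIncrement;
    return noErr;
}

A host registers such a callback on the audio unit's input scope, using the kAudioUnitProperty_SetRenderCallback property, before rendering begins.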
In the depiction of the audio unit in Figure 1-1 (page 15), the outer cube represents the plug-in API. Apple provides the Audio Unit Specification that defines the plug-in API for a variety of audio unit types. When you develop your audio unit to this specification, it will work with any host application that also follows the specification. Inside, an audio unit contains programmatic scaffolding to connect the plug-in API to your custom code. When you use the Core Audio SDK to build your audio unit, this scaffolding is supplied in the form of glue code for the Component Manager along with a C++ class hierarchy. Figure 1-1 (page 15) (rather figuratively) represents your custom code as an inner cube within the audio unit, and represents the SDK's classes and glue code as struts connecting the inner cube to the outer cube. You can build an audio unit without using the Core Audio SDK, but doing so entails a great deal more work. Apple recommends that you use the Core Audio SDK for all but the most specialized audio unit development. To learn about the internal architecture of an audio unit, read Audio Unit Architecture (page 43) in The Audio Unit (page 43).
Figure 1-2  The file structure of an audio unit bundle (the original figure shows the audio unit itself inside the bundle's MacOS folder, and a custom view bundle inside its Resources folder)
When you build an audio unit using Xcode and a supplied audio unit template, your Xcode project takes care of packaging all these pieces appropriately. As a component, an audio unit has the following file system characteristics:
- It is a bundle with a .component file name extension.
- It is a package; users see the bundle as opaque when they view it in the Finder.
The information property list (Info.plist) file within the bundle's top-level Contents folder provides critical information to the system and to host applications that want to use the audio unit. For example, this file provides:
- The unique bundle identifier string, in the form of a reverse domain name (or uniform type identifier). For example, for the FilterDemo audio unit provided in the Core Audio SDK, this identifier is com.apple.demo.audiounit.FilterDemo.
- The name of the file, within the bundle, that is the audio unit proper. This file is within the MacOS folder in the bundle.
An audio unit bundle can contain a custom user interface, called a view. The standard location for the view is in the audio unit bundle's Resources folder. The audio unit shown in Figure 1-2 includes such a view, packaged as an opaque bundle itself. Looking inside the audio unit view bundle shows the view bundle file structure:
Figure 1-3  The file structure of an audio unit view bundle
When a host application opens an audio unit, it can ask the audio unit if it has a custom view. If there is one, the audio unit can respond by providing the path to the view bundle. You can put the view bundle anywhere, including a network location. Typically, however, views are packaged as shown here. An audio unit bundle typically contains one audio unit, as described in this section. But a single audio unit bundle can contain any number of audio units. For example, Apple packages all of its audio units in one bundle, System/Library/Components/CoreAudio.component. The CoreAudio.component bundle includes a single file of executable code containing all of the Apple audio units, and another file containing all of the supplied custom views:
Figure 1-4  The file structure of the CoreAudio.component bundle, which packages all of the Apple audio units and their custom views
Audio unit usually refers to the executable code within the MacOS folder in the audio unit bundle, as shown in Figure 1-2 (page 17). This is the part that performs the audio work. Sometimes, as in the title of this document, audio unit refers in context to the entire audio unit bundle and its contents. In this case, the term audio unit corresponds to a user's view of a plug-in in the Mac OS X file system.

Audio unit view refers to the graphical user interface for an audio unit, as described in The Audio Unit View (page 63). As shown in Figure 1-2, the code for a custom view typically lives in its own bundle in the Resources folder inside the audio unit bundle. Views are optional, because the Audio Unit framework lets a host application create a generic view based on parameter and property code in the audio unit.

Audio unit bundle refers to the file system packaging that contains an audio unit and, optionally, a custom view. When this document uses audio unit bundle, it is the characteristics of the packaging, such as the file name extension and the Info.plist file, that are important. Sometimes, as in the description of where to install audio units, audio unit bundle refers to the contents as well as the packaging. In this case, it's analogous to talking about a folder while meaning the folder and its contents.
Tutorial: Using an Audio Unit in a Host Application

This tutorial demonstrates:

- Adding an audio unit to a running host application
- Using the audio unit
- Removing the audio unit from the running host application
Along the way, this tutorial shows you how to get started with the very useful AU Lab application. 1. Launch the AU Lab audio unit host application (in /Developer/Applications/Audio/) and create a new AU Lab document. Unless you've configured AU Lab to use a default document style, the Create New Document window opens. If AU Lab was already running, choose File > New to get this window.
Ensure that the configuration matches the settings shown in the figure: Built-In Audio for the Audio Device, Line In for the Input Source, and Stereo for Output Channels. Leave the window's Inputs tab unconfigured; you will specify the input later. Click OK.
A new AU Lab window opens, showing the output channel you specified.
At this point, AU Lab has already instantiated all of the available audio units on your computer, queried them to find out such things as how each can be used in combination with other audio units, and has then closed them all again. (More precisely, the Mac OS X Component Manager has invoked the instantiation and closing of the audio units on behalf of AU Lab. Component Manager Requirements for Audio Units (page 28), below, explains this.)
2. In AU Lab, choose Edit > Add Audio Unit Generator. A dialog opens from the AU Lab window to let you specify the generator unit to serve as the audio source.
In the dialog, ensure that the AUAudioFilePlayer generator unit is selected in the Generator pop-up. To follow this example, change the Group Name to Player. Click OK. You can change the group name at any time by double-clicking it in the AU Lab window.
The AU Lab window now shows a stereo input track. In addition, an inspector window has opened for the generator unit. If you close the inspector, you can reopen it by clicking the rectangular "AU" button near the top of the Player track.
3. Add one or more audio files to the Audio Files list in the player inspector window. Do this by dragging audio files from the Finder, as shown in the figure. Putting some audio files in the player inspector window lets you send audio through the AU Lab application, and through an audio unit that you add to the Player track. Just about any audio file will do. For this example, a music file works well.
Now AU Lab is configured and ready for you to add an audio unit. 4. To dynamically add an audio unit to the AU Lab host application, click the triangular menu button in the first row of the Effects section in the Player track in AU Lab, as shown in the figure.
A menu opens, listing all the audio units available on your system, arranged by category and manufacturer. AU Lab gets this list from the Component Manager, which maintains a registry of installed audio units.
Choose an audio unit from the pop-up. To follow this example, choose the AUParametricEQ audio unit from the Apple submenu. (This audio unit, supplied as part of Mac OS X, is a single-band equalizer with controls for center frequency, gain, and Q.) AU Lab asks the Component Manager to instantiate the audio unit you have chosen. AU Lab then initializes the audio unit. AU Lab also opens the audio unit's Cocoa generic view, which appears as a utility window:
You have now dynamically added the AUParametricEQ audio unit to the running AU Lab host application. 5. To demonstrate the features of the audio unit in AU Lab, click the Play button in the AUAudioFilePlayer inspector to send audio through the audio unit. Vary the sliders in the generic view to hear the audio unit working.
6. To remove the audio unit from the host application, once again click the triangular menu button in the first row of the Effects section in the Player track, as shown in the figure.
The Component Manager closes the audio unit on behalf of AU Lab. You have now dynamically removed the audio unit, and its features, from the running AU Lab host application.
Several pieces come into play when you create and use an audio unit:

- The audio unit bundle. The bundle wraps the audio unit and its view (if you provide a custom view), and provides identification for the audio unit that lets Mac OS X and the Component Manager use the audio unit.
- The audio unit itself. When you build your audio unit with the Core Audio SDK, as recommended, the audio unit inherits from the SDK's class hierarchy.
- The audio unit view.
- The Core Audio API frameworks.
- The Component Manager.
- The host application.
Refer to A Quick Tour of the Core Audio SDK (page 83) if you'd like to learn about the rest of the SDK.
To be usable, an audio unit must satisfy the following Component Manager requirements. It must:

- Be packaged as a component, as defined by the Component Manager
- Have a single entry point that the Component Manager recognizes
- Have a resource (.rsrc) file that specifies a system-wide unique identifier and version string
- Respond to Component Manager calls
Satisfying these requirements from scratch is a significant effort and requires a strong grasp of the Component Manager API. However, the Core Audio SDK insulates you from this. As demonstrated in the chapter Tutorial: Building a Simple Effect Unit with a Generic View (page 87), accommodating the Component Manager requires very little work when you use the SDK.
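For example, when you build on the SDK, the entry point typically reduces to a single macro invocation in your implementation file. A sketch, using the SDK's COMPONENT_ENTRY macro and the TremoloUnit class from the tutorial chapter:

// TremoloUnit.cpp
// Expands to the Component Manager entry point and the dispatch glue
// that routes Component Manager calls to the TremoloUnit class.
COMPONENT_ENTRY(TremoloUnit)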
Audio units go in one of two standard locations in the file system:

- ~/Library/Audio/Plug-Ins/Components/ (audio units installed here can be used only by the owner of the home folder)
- /Library/Audio/Plug-Ins/Components/ (audio units installed here can be used by all users on the computer)

It is up to you which of these locations you use or recommend to your users. The Mac OS X preinstalled audio units go in a location reserved for Apple's use:

/System/Library/Components/

The Component Manager maintains a cached registry of the audio units in these locations (along with any other plug-ins it finds in other standard locations). Only registered audio units are available to host applications. The Component Manager refreshes the registry on system boot, on user log-in, and whenever the modification timestamp of one of the three Components folders changes.

A host application can explicitly register audio units installed in arbitrary locations by using the Component Manager's RegisterComponent, RegisterComponentResource, or RegisterComponentResourceFile functions. Audio units registered in this way are available only to the host application that invokes the registration. This lets you use audio units to add features to a host application you are developing, without making your audio units available to other hosts.
The type specifies the general type of functionality provided by an audio unit. In so doing, the type also identifies the audio unit's plug-in API. In this way, the type code is programmatically significant. For example, a host application knows that any audio unit of type 'aufx' (for audio unit effect) provides DSP functionality. The Audio Unit Specification specifies the available type codes for audio units, as well as the plug-in API for each audio unit type.
The subtype describes more precisely what an audio unit does, but is not programmatically significant for audio units.
For example, Mac OS X includes an effect unit of subtype 'lpas', named to suggest that it provides low-pass filtering. If, for your audio unit, you use one of the subtypes listed in the AUComponent.h header file in the Audio Unit framework (such as 'lpas'), you are suggesting to users of your audio unit that it behaves like the named subtype. However, host applications make no assumptions about your audio unit based on its subtype. You are free to use any subtype code, including subtypes named with only lowercase letters.
The manufacturer code identifies the developer of an audio unit. Apple expects each developer to register a manufacturer code, as a creator code, on the Data Type Registration page. Manufacturer codes must contain at least one uppercase character. Once registered, you can use the same manufacturer code for all your audio units.
In addition to these four-character codes, each audio unit must specify a correctly formatted version number. When the Component Manager registers audio units, it picks the most recent version if more than one is present on a system. As a component, an audio unit identifies its version as an eight-digit hexadecimal number in its resource (.rsrc) file. As you'll see in Tutorial: Building a Simple Effect Unit with a Generic View (page 87), you specify this information using Xcode.

Here is an example of how to construct a version number. It uses an artificially large number to illustrate the format unambiguously. For a decimal version number of 29.33.40, the hexadecimal equivalent is 0x001d2128.

Figure 1-5  Constructing an audio unit version number (the original figure maps the decimal major version, minor revision, and dot/bug-fix release to their places in the hexadecimal equivalent)
The four most significant hexadecimal digits represent the major version number. The next two represent the minor version number. The two least significant digits represent the dot release number. When you release a new version of an audio unit, you must ensure that its version number has a higher value than the previous version; not equal to the previous version, and not lower. Otherwise, users who have a previous version of your audio unit installed won't be able to use the new version.
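Expressed in code, the packing works like this. The MakeAudioUnitVersion helper is illustrative, not part of the Core Audio SDK:

#include <CoreServices/CoreServices.h> // for UInt32

// Pack a decimal major.minor.bugfix version into the eight-digit
// hexadecimal audio unit format: four hex digits for the major
// version, two for the minor version, two for the dot release.
// The minor and bug-fix values must each be less than 256.
inline UInt32 MakeAudioUnitVersion (UInt32 major, UInt32 minor, UInt32 bugfix) {
    return (major << 16) | (minor << 8) | bugfix;
}

// MakeAudioUnitVersion (29, 33, 40) yields 0x001d2128, matching the
// 29.33.40 example above.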
Implementing the plug-in API for any given type of audio unit from scratch is a significant effort. It requires a strong grasp of the Audio Unit and Audio Toolbox framework APIs and of the Audio Unit Specification. However, the Core Audio SDK insulates you from much of this as well. Using the SDK, you need to implement only those methods and properties that are relevant to your audio unit. (You learn about the audio unit property mechanism in the next chapter, The Audio Unit (page 43).)
The Audio Unit Specification defines:

- The various Apple types defined for audio units, as listed in the AudioUnit component types and subtypes enumeration in the AUComponent.h header file in the Audio Unit framework
- The functional and behavioral requirements for each type of audio unit
- The plug-in API for each type of audio unit, including required and optional properties
You develop your audio units to conform to the Audio Unit Specification. You then test this conformance with the auval command-line tool, described in the next section. The Audio Unit Specification defines the plug-in API for the following audio unit types:
- Effect units ('aufx'), such as volume controls, equalizers, and reverbs, which modify an audio data stream
- Music effect units ('aumf'), such as loopers, which combine features of instrument units (such as starting and stopping a sample) with features of effect units
- Offline effect units ('auol'), which let you do things with audio that aren't practical in real time, such as time reversal or look-ahead level normalization
- Instrument units ('aumu'), which take MIDI and soundbank data as input and provide audio data as output, letting a user play a virtual instrument
- Generator units ('augn'), which programmatically generate an audio data stream or play audio from a file
- Data format converter units ('aufc'), which change characteristics of an audio data stream such as bit depth, sample rate, or playback speed
- Mixer units ('aumx'), which combine audio data streams
- Panner units ('aupn'), which distribute a set of input channels, using a spatialization algorithm, to a set of output channels
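Because the type code is programmatically significant, a host can use it to discover installed audio units through the Component Manager. As a sketch, here is how a host might count all registered effect units; the value 0 acts as a wildcard for the fields the host does not care about:

#include <CoreServices/CoreServices.h>
#include <AudioUnit/AudioUnit.h>

// Count every registered effect unit ('aufx'), regardless of
// subtype or manufacturer.
static long CountInstalledEffectUnits () {
    ComponentDescription description;
    description.componentType         = kAudioUnitType_Effect; // 'aufx'
    description.componentSubType      = 0;                     // any subtype
    description.componentManufacturer = 0;                     // any manufacturer
    description.componentFlags        = 0;
    description.componentFlagsMask    = 0;
    return CountComponents (&description);
}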
An audio unit, its view, and the supporting Core Audio APIs follow the model-view-controller (MVC) design pattern:

- The audio unit serves as the model, encapsulating all of the knowledge to perform the audio work
- The audio unit's view serves, naturally, as the view, displaying the audio unit's current settings and allowing a user to change them
- The Audio Unit Event API, and the code in an audio unit and its view that calls this API, corresponds to the controller, supporting communication between the audio unit, its view, and a host application
Here is a scenario where this matters. Suppose a user doesn't have the required hardware dongle for your (copy protected) audio unit. Perhaps he left it at home when he brought his laptop to a performance. If your audio unit invokes its copy protection on instantiation, this could prevent a host application from opening. If your audio unit invokes its copy protection on initialization, as recommended, the performer could at least use the host application.
Multiple Instantiation
An audio unit can be instantiated any number of times by a host application and by any number of hosts. More precisely, the Component Manager invokes audio unit instantiation on behalf of host applications. The Component Manager infrastructure ensures that each audio unit instance exists and behaves independently.

You can demonstrate multiple instantiation in AU Lab. First add one instance of the AUParametricEQ effect unit to an AU Lab document, as described above in Tutorial: Using an Audio Unit in a Host Application (page 20). Then invoke the pop-up menus in additional rows of the Effects section in the Player track. You can add as many one-band parametric equalizers to the track as you like. Each of these instances of the audio unit behaves independently, as you can see by the varied settings in the figure:

Figure 1-6  Multiple instantiation of audio units in AU Lab
Figure 1-7  The pull model in operation (the original figure shows a host application pulling on effect unit B, which pulls on effect unit A, which in turn pulls on the host's render callback; numbered arrows mark the sequence, starting with the host's call to the render method of effect unit B)
Here is how the pull proceeds in Figure 1-7:

1. The host application calls the render method of the final node (effect unit B) in the graph, asking for one slice worth of processed audio data frames.

2. The render method of effect unit B looks in its input buffers for audio data to process, to satisfy the call to render. If there is audio data waiting to be processed, effect unit B uses it. Otherwise, and as shown in the figure, effect unit B (employing a superclass in the SDK's audio unit class hierarchy) calls the render method of whatever the host has connected to effect unit B's inputs. In this example, effect unit A is connected to B's inputs, so effect unit B pulls on effect unit A, asking for a slice of audio data frames.

3. Effect unit A behaves just as effect unit B does. When it needs audio data, it gets it from its input connection, which was also established by the host. The host connected effect unit A's inputs to a render callback in the host. Effect unit A pulls on the host's render callback.

4. The host's render callback supplies the requested audio data frames to effect unit A.

5. Effect unit A processes the slice of data supplied by the host. Effect unit A then supplies the processed audio data frames that were previously requested (in step 2) to effect unit B.

6. Effect unit B processes the slice of data provided by effect unit A. Effect unit B then supplies the processed audio data frames that were originally requested (in step 1) to the host application. This completes one cycle of pull.
Audio units normally do not know whether their inputs and outputs are connected to other audio units, or to host applications, or to something else. Audio units simply respond to rendering calls. Hosts are in charge of establishing connections, and superclasses (for audio units built with the Core Audio SDK) take care of implementing the pull.
As an audio unit developer, you don't need to work directly with audio processing graphs except to ensure that your audio unit plays well with them. You do this, in part, by ensuring that your audio unit passes Apple's validation test, described in Audio Unit Validation with the auval Tool (page 38). You should also perform testing by hooking up your audio unit in various processing graphs using host applications, as described in Audio Unit Testing and Host Applications (page 38).
Processing
An audio unit that processes audio data, such as an effect unit, works in terms of rendering cycles. In each rendering cycle, the audio unit:
- Gets a slice of fresh audio data frames to process. It does this by calling the rendering callback function that has been registered in the audio unit.
- Processes the audio data frames.
- Puts the resulting audio data frames into the audio unit's output buffers.
An audio unit does this work at the beck and call of its host application. The host application also sets the number of audio data frames per slice. For example, AU Lab uses 512 frames per slice as a default, and you can vary this number from 24 to 4,096. See Testing with AU Lab (page 39). The programmatic call to render the next slice of frames can arrive from either of two places:
- From the host application itself, in the case of the host using the audio unit directly
- From the downstream neighbor of the audio unit, in the case of the audio unit being part of an audio processing graph
Audio units behave exactly the same way regardless of the calling context; that is, regardless of whether it is a host application or a downstream audio unit asking for audio data.
Resetting
Audio units also need to be able to gracefully stop rendering. For example, an audio unit that implements an IIR filter uses an internal buffer of samples. It uses the values of these buffered samples when applying a frequency curve to the samples it is processing. Say that a user of such an audio unit stops playing an audio file and then starts again at a different point in the file. The audio unit, in this case, must start with an empty processing buffer to avoid inducing artifacts. When you develop an audio unit's DSP code, you implement a Reset method to return the DSP state of the audio unit to what it was when the audio unit was first initialized. Host applications call the Reset method as needed.
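Here is a minimal sketch of this pattern, assuming an effect unit built on the SDK's AUKernelBase class. The FilterKernel class and its mX1 and mY1 member variables (the previous input and output samples of a hypothetical one-pole IIR filter) are illustrative:

#include "AUEffectBase.h" // from the Core Audio SDK

class FilterKernel : public AUKernelBase {
public:
    FilterKernel (AUEffectBase *inAudioUnit)
        : AUKernelBase (inAudioUnit), mX1 (0.0), mY1 (0.0) {}

    // Return the DSP state to what it was at initialization, so that
    // rendering can restart cleanly without artifacts from stale samples.
    virtual void Reset () {
        mX1 = 0.0;
        mY1 = 0.0;
    }

    // The required Process method is omitted here; the tutorial
    // chapter shows a complete kernel implementation.

private:
    double mX1, mY1; // previous input and output samples for the filter
};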
Parameter automation depends on three things:

- The ability of an audio unit to change its parameter values programmatically on request from a host application
- The ability of an audio unit view to post notifications as parameter values are changed by a user
- The ability of a host application to support recording and playback of parameter automation data
Some hosts that support parameter automation with audio units are Logic Pro, Ableton Live, and Sagan Metro. Parameter automation uses the Audio Unit Event API, declared in the AudioUnitUtilities.h header file as part of the Audio Toolbox framework. This thread-safe API provides a notification mechanism that supports keeping audio units, their views, and hosts in sync. To support parameter automation in your audio unit, you must create a custom view. You add automation support to the view's executable code, making use of the Audio Unit Event API to support some or all of the following event types:
- Parameter gestures, which include the kAudioUnitEvent_BeginParameterChangeGesture and kAudioUnitEvent_EndParameterChangeGesture event types
- Parameter value changes, identified by the kAudioUnitEvent_ParameterValueChange event type
- Property changes, identified by the kAudioUnitEvent_PropertyChange event type
In some unusual cases you may need to add support for parameter automation to the audio unit itself. For example, you may create a bandpass filter with adjustable upper and lower corner frequencies. Your audio unit then needs to ensure that the upper frequency is never set below the lower frequency. When an audio unit invokes a parameter change in a case like this, it needs to issue a parameter change notification. The Audio Unit View (page 63) and Defining and Using Parameters (page 50) give more information on parameter automation.
The auval tool tests:

- An audio unit's plug-in API, as defined by its programmatic type
- An audio unit's basic functionality, including such things as which audio data channel configurations are available, the time required to instantiate the audio unit, and the ability of the audio unit to render audio
The auval tool tests only an audio unit proper. It does not test any of the following:
- Audio unit views
- Audio unit architecture, in terms of using the recommended model-view-controller design pattern for separation of concerns
- Correct use of the Audio Unit Event API
- Quality of DSP, quality of audio generation, or quality of audio data format conversion
The auval tool can validate every type of audio unit defined by Apple. When you run it, it outputs a test log and summarizes the results with a pass or fail indication. For more information, refer to the auval built-in help system. To see the auval help text, enter the following command at a prompt in the Terminal application:
auval -h
- Evolution of the Core Audio frameworks and SDK
- Variations across host application versions
As host applications that recognize audio units proliferate, the task of testing your audio unit in all potential hosts becomes more involved. The situation is somewhat analogous to testing a website in various browsers: your code may perfectly fit the relevant specifications, but nonconformance in one or another browser requires you to compensate. With this in mind, the following sections provide an overview of host-based audio unit testing.
- Behavior, in terms of being found by a host, displayed in a menu, and opened
- View, both generic and custom
- Audible performance
- Interaction with other audio units when placed in an audio processing graph
- I/O capabilities, such as sidechains and multiple outputs, as well as basic testing of monaural and stereophonic operation
In Mac OS X v10.4 Tiger, AU Lab lets you test several types of audio units.
Varying the Host Application's Characteristics

AU Lab lets you control some of its hosting characteristics, which lets you test the behavior of your audio unit under varying conditions. For example, you can change the number of frames of audio data to process in each rendering cycle. You do this using Devices Preferences. In AU Lab, choose Preferences from the AU Lab menu. Click Devices to show Devices Preferences:
Click the Frames pop-up menu. You can choose the number of frames for your audio unit to process in each rendering cycle:
Click the disclosure triangle for Expert Settings. You can vary the slider to choose the percentage of CPU time to devote to audio processing. This lets you test the behavior of your audio unit under varying load conditions:
There are many third-party and open source applications that support audio units, among them Ableton Live, Amadeus, Audacity, Cubase, Digital Performer, DSP-Quattro, Peak, Rax, and Metro.
CHAPTER 2

The Audio Unit
When you develop an audio unit, you begin with the part that performs the audio work. This part exists within the MacOS folder inside the audio unit bundle as shown in Figure 1-2 (page 17). You can optionally add a custom user interface, or view, as described in the next chapter, The Audio Unit View (page 63). In this chapter you learn about the architecture and programmatic elements of an audio unit. You also learn about the steps you take when you create an audio unit.
Figure 2-1  Audio unit architecture for an effect unit (the original figure shows the input scope with input element, or bus, 0 carrying channels 0 and 1, a connection, a stream format, and other properties; a DSP block with Reset and Render entry points; and the output scope with output element 0, likewise carrying channels 0 and 1, a stream format, other properties, and a connection)
You use scopes when writing code that sets or retrieves values of parameters and properties. For example, Listing 2-1 shows an implementation of a standard GetProperty method, as used in the effect unit you build in Tutorial: Building a Simple Effect Unit with a Generic View (page 87):

Listing 2-1  Using scope in the GetProperty method
ComponentResult TremoloUnit::GetProperty (
    AudioUnitPropertyID inID,
    AudioUnitScope      inScope,   // the host specifies the scope
    AudioUnitElement    inElement,
    void                *outData
) {
    return AUEffectBase::GetProperty (inID, inScope, inElement, outData);
}
When a host application calls this method to retrieve the value of a property, the host specifies the scope in which the property is defined. The implementation of the GetProperty method, in turn, can respond to various scopes with code such as this:
if (inScope == kAudioUnitScope_Global) {
    // respond to requests targeting the global scope
} else if (inScope == kAudioUnitScope_Input) {
    // respond to requests targeting the input scope
} else {
    // respond to other requests
}
There are five scopes defined by Apple in the AudioUnitProperties.h header file in the Audio Unit framework, shown in Listing 2-2:

Listing 2-2  Audio unit scopes

enum {
    kAudioUnitScope_Global = 0,
    kAudioUnitScope_Input  = 1,
    kAudioUnitScope_Output = 2,
    kAudioUnitScope_Group  = 3,
    kAudioUnitScope_Part   = 4
};
Input scope: The context for audio data coming into an audio unit. Code in an audio unit, a host application, or an audio unit view can address an audio unit's input scope for such things as the following:

- An audio unit defining additional input elements
- An audio unit or a host setting an input audio data stream format
- An audio unit view setting the various input levels on a mixer audio unit
- A host application connecting audio units into an audio processing graph
Host applications also use the input scope when registering a render callback, as described in Render Callback Connections (page 47).
Output scope: The context for audio data leaving an audio unit. The output scope is used for most of the same things as input scope: connections, defining additional output elements, setting an output audio data stream format, and setting output levels in the case of a mixer unit with multiple outputs. A host application, or a downstream audio unit in an audio processing graph, also addresses the output scope when invoking rendering.
Global scope: The context for audio unit characteristics that apply to the audio unit as a whole. Code within an audio unit addresses its own global scope for setting or getting the values of such properties as latency, tail time, and supported numbers of channels.
Host applications can also query the global scope of an audio unit to get these values. There are two additional audio unit scopes, intended for instrument units, defined in AudioUnitProperties.h:
- Group scope: A context specific to the rendering of musical notes in instrument units
- Part scope: A context specific to managing the various voices of multitimbral instrument units
This version of Audio Unit Programming Guide does not discuss group scope or part scope.
Just as a host specifies the scope when calling into an audio unit, it also specifies the element, as shown in Listing 2-3:

Listing 2-3  Using element in the GetProperty method

ComponentResult TremoloUnit::GetProperty (
    AudioUnitPropertyID inID,
    AudioUnitScope      inScope,
    AudioUnitElement    inElement, // the host specifies the element here
    void                *outData
) {
    return AUEffectBase::GetProperty (inID, inScope, inElement, outData);
}
Elements are identified by integer numbers and are zero indexed. In the input and output scopes, element numbering must be contiguous. In the typical case, the input and output scopes each have one element, namely element (or bus) 0. The global scope in an audio unit is unusual in that it always has exactly one element. Therefore, the global scope's single element is always element 0.

A bus (that is, an input or output element) always has exactly one stream format. The stream format specifies a variety of characteristics for the bus, including sample rate and number of channels. Stream format is described by the audio stream description structure (AudioStreamBasicDescription), declared in the CoreAudioTypes.h header file and shown in Listing 2-4:

Listing 2-4  The audio stream description structure
struct AudioStreamBasicDescription {
    Float64 mSampleRate;       // sample frames per second
    UInt32  mFormatID;         // a four-char code indicating stream type
    UInt32  mFormatFlags;      // flags specific to the stream type
    UInt32  mBytesPerPacket;   // bytes per packet of audio data
    UInt32  mFramesPerPacket;  // frames per packet of audio data
    UInt32  mBytesPerFrame;    // bytes per frame of audio data
    UInt32  mChannelsPerFrame; // number of channels per frame
    UInt32  mBitsPerChannel;   // bit depth
    UInt32  mReserved;         // padding
};
typedef struct AudioStreamBasicDescription AudioStreamBasicDescription;
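As an illustration, here is roughly how a host might fill out this structure for the default audio unit stream format (noninterleaved, 32-bit floating-point, native-endian linear PCM, described later in this chapter) and apply it to an input bus, using the kAudioUnitProperty_StreamFormat property discussed next. The sample rate, channel count, and helper name are illustrative:

#include <AudioUnit/AudioUnit.h>

// Set noninterleaved, 32-bit floating-point, native-endian linear PCM
// on input element (bus) 0 of a previously opened audio unit.
static OSStatus SetDefaultInputStreamFormat (AudioUnit inUnit) {
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0; // illustrative sample rate
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked
                               | kAudioFormatFlagIsNonInterleaved;
    format.mBytesPerPacket   = sizeof (Float32);
    format.mFramesPerPacket  = 1;
    format.mBytesPerFrame    = sizeof (Float32); // per channel, noninterleaved
    format.mChannelsPerFrame = 2;                // stereo
    format.mBitsPerChannel   = 32;

    return AudioUnitSetProperty (
        inUnit,
        kAudioUnitProperty_StreamFormat,
        kAudioUnitScope_Input,
        0, // input element (bus) 0
        &format,
        sizeof (format)
    );
}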
An audio unit can let a host application get and set the stream formats of its buses using the kAudioUnitProperty_StreamFormat property, declared in the AudioUnitProperties.h header file. This property's value is an audio stream description structure.

Typically, you will need just a single input bus and a single output bus in an audio unit. When you create an effect unit by subclassing the AUEffectBase class, you get one input and one output bus by default. Your audio unit can specify additional buses by overriding the main class's constructor. You would then indicate additional buses using the kAudioUnitProperty_BusCount property, or its synonym kAudioUnitProperty_ElementCount, both declared in the AudioUnitProperties.h header file. You might find additional buses helpful if you are building an interleaver or deinterleaver audio unit, or an audio unit that contains a primary audio data path as well as a sidechain path for modulation data. A bus can have exactly one connection, as described next.
A host establishes a connection between two audio units using the kAudioUnitProperty_MakeConnection property, whose value is an audio unit connection structure (AudioUnitConnection), shown in Listing 2-5:

Listing 2-5  The audio unit connection structure

typedef struct AudioUnitConnection {
    AudioUnit sourceAudioUnit;    // the audio unit that supplies audio data
                                  //   to the audio unit whose connection
                                  //   property is being set
    UInt32    sourceOutputNumber; // the output bus of the source unit
    UInt32    destInputNumber;    // the input bus of the destination unit
} AudioUnitConnection;
The kAudioUnitProperty_MakeConnection property and the audio unit connection structure are declared in the AudioUnitProperties.h file in the Audio Unit framework. As an audio unit developer, you must make sure that your audio unit can be connected for it to be valid. You do this by supporting appropriate stream formats. When you create an audio unit by subclassing the classes in the SDK, your audio unit will be connectible. The default, required stream format for audio units is described in Commonly Used Properties (page 52). Figure 1-7 (page 35) illustrates that the entity upstream from an audio unit can be either another audio unit or a host application. Whichever it is, the upstream entity is typically responsible for setting an audio units input stream format before a connection is established. If an audio unit cannot support the stream format being requested, it returns an error and the connection fails.
For a render callback connection, a host registers a callback function with the audio unit; the callback has the AURenderCallback signature shown in Listing 2-6:

Listing 2-6  The render callback

typedef OSStatus (*AURenderCallback) (
    void                       *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp       *inTimeStamp,
    UInt32                     inBusNumber,
    UInt32                     inNumberFrames,
    AudioBufferList            *ioData
);
The host must explicitly set the stream format for the audio unit's input as a prerequisite to making the connection. The audio unit calls the callback in the host when it's ready for more audio data.
In contrast, for an audio processing graph connection, the upstream audio unit supplies the render callback. In a graph, the upstream audio unit also sets the downstream audio unit's input stream format. A host can retrieve processed audio data from an audio unit directly by calling the AudioUnitRender function on the audio unit, as shown in Listing 2-7:

Listing 2-7  The AudioUnitRender function
extern ComponentResult AudioUnitRender (
    AudioUnit                  ci,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp       *inTimeStamp,
    UInt32                     inOutputBusNumber,
    UInt32                     inNumberFrames,
    AudioBufferList            *ioData
);
The Core Audio SDK passes this function call into your audio unit as a call to the audio unit's Render method. You can see the similarity between the render callback and AudioUnitRender signatures, which reflects their coordinated use in audio processing graph connections. Like the render callback, the AudioUnitRender function is declared in the AUComponent.h header file in the Audio Unit framework.
Core Audio represents one buffer of audio data with the audio buffer structure (AudioBuffer), defined in CoreAudioTypes.h and shown in Listing 2-8:

Listing 2-8  The audio buffer structure

struct AudioBuffer {
    UInt32 mNumberChannels; // number of interleaved channels in the buffer
    UInt32 mDataByteSize;   // size, in bytes, of the buffer
    void   *mData;          // pointer to the buffer
};
typedef struct AudioBuffer AudioBuffer;
An audio buffer can hold a single channel, or multiple interleaved channels. However, most types of audio units, including effect units, use only noninterleaved data. These audio units expect the mNumberChannels field in the audio buffer structure to equal 1. Output units and format converter units can accept interleaved channels, represented by an audio buffer with the mNumberChannels field set to 2 or greater. An audio unit manages the set of channels in a bus as an audio buffer list structure (AudioBufferList), also defined in CoreAudioTypes.h, as shown in Listing 2-9:

Listing 2-9  The audio buffer list structure
struct AudioBufferList {
    UInt32      mNumberBuffers;                 // the number of buffers in the list
    AudioBuffer mBuffers[kVariableLengthArray]; // the list of buffers
};
typedef struct AudioBufferList AudioBufferList;
In the common case of building an n-to-n channel effect unit, such as the one you build in Tutorial: Building a Simple Effect Unit with a Generic View (page 87), the audio unit template and superclasses take care of managing channels for you. You create this type of effect unit by subclassing the AUEffectBase class in the SDK. In contrast, when you build an m-to-n channel effect unit (for example, a stereo-to-mono effect unit), you must write code to manage channels. In this case, you create your effect unit by subclassing the AUBase class. (As with the rest of this document, this consideration applies to version 1.4.3 of the Core Audio SDK, current at the time of publication.)
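For the n-to-n case, your AUKernelBase subclass sees one channel at a time through its Process method. Here is a minimal, gain-only sketch; the GainKernel class and the kParam_Gain parameter ID are hypothetical, while the method signature matches the AUKernelBase class in SDK v1.4.3:

// Process one channel of audio: read inFramesToProcess samples from
// inSourceP and write the same number of processed samples to inDestP.
void GainKernel::Process (
    const Float32 *inSourceP,
    Float32       *inDestP,
    UInt32        inFramesToProcess,
    UInt32        inNumChannels, // always 1 for an n-to-n kernel
    bool          &ioSilence
) {
    // Read the current parameter value once per slice; GetParameter
    // is inherited from AUKernelBase.
    Float32 gain = GetParameter (kParam_Gain);

    for (UInt32 frame = 0; frame < inFramesToProcess; ++frame) {
        inDestP[frame] = inSourceP[frame] * gain;
    }
}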
There are two common ways to start an audio unit project:

- Creating an audio unit Xcode project with a supplied template. In this case, creating the project gives you source files that define custom subclasses of the appropriate superclasses. You modify and extend these files to define the custom features and behavior of your audio unit.
- Making a copy of an audio unit project from the SDK, which already contains custom subclasses. In this case, you may need to strip out code that isn't relevant to your audio unit, as well as change symbols in the project to properly identify and refer to your audio unit. You then work in the same way you would had you started with an Xcode template.
All audio units also have characteristics, typically non-time varying and not directly settable by a user, called properties. A property is a key/value pair that refines the plug-in API of your audio unit by declaring attributes or behavior. For example, you use the property mechanism to declare such audio unit characteristics as sample latency and audio data stream format. Each property has an associated data type to hold its value. For more on properties, as well as definitions of latency and stream format, see Commonly Used Properties (page 52).

Host applications can query an audio unit about its parameters and about its standard properties, but not about its custom properties. Custom properties are for communication between an audio unit and a custom view designed in concert with the audio unit.

To get parameter information from an audio unit, a host application first gets the value of the audio unit's kAudioUnitProperty_ParameterList property, a property provided for you by superclasses in the SDK. This property's value is a list of the defined parameter IDs for the audio unit. The host can then query the kAudioUnitProperty_ParameterInfo property for each parameter ID.

Hosts and views can also receive parameter and property change information using notifications, as described in Parameter and Property Events (page 67).
An audio unit's GetParameterInfo method may be called:

- By the audio unit's view (custom if you provide one, generic otherwise) when the view is drawn on screen
- By a host application that is providing a generic view for the audio unit
- By a host application that is representing the audio unit's parameters on a hardware control surface
To make use of a parameter's current setting (as adjusted by a user) when rendering audio, you call the GetParameter method. This method is inherited from the AUEffectBase class. The GetParameter method takes a parameter ID as its one argument and returns the parameter's current value. You typically make this call within the audio unit's Process method to update the parameter value once for each render cycle. Your rendering code can then use the parameter's current value.

In addition to the GetParameterInfo method (for telling a view or host about a parameter's current value) and the GetParameter method (for making use of a parameter value during rendering), an audio unit needs a way to set its parameter values. For this it typically uses the SetParameter method, from the AUEffectBase class. There are two main times an audio unit calls the SetParameter method:
- During instantiation (in its constructor method) to set its default parameter values
- When running (when a host or a view invokes a parameter value change) to update its parameter values
The SetParameter method takes two method parameters: the ID of the parameter to be changed and its new value, as shown in Listing 2-10:

Listing 2-10  The SetParameter method
void SetParameter( UInt32 paramID, Float32 value );
The audio unit you build in Tutorial: Building a Simple Effect Unit with a Generic View (page 87) makes use of all three of these methods: GetParameterInfo, GetParameter, and SetParameter. An audio unit sometimes needs to invoke a value change for one of its parameters. It might do this in response to a change (invoked by a view or host) in another parameter. When an audio unit on its own initiative changes a parameter value, it should post an event notification. For example, in a bandpass filter audio unit, a user might lower the upper corner frequency to a value below the current setting of the frequency band's lower limit. The audio unit could respond by lowering the lower corner frequency appropriately. In such a case, the audio unit is responsible for posting an event notification about the self-invoked change. The notification informs the view and the host of the lower corner frequency parameter's new value. To post the notification, the audio unit follows a call to the SetParameter method with a call to the AUParameterListenerNotify method.
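To make the bandpass example concrete, here is a hypothetical sketch of a self-invoked parameter change followed by the notification; the class, method, and parameter names are illustrative, not from the SDK:

    // Keep the lower corner frequency at or below the upper corner frequency,
    // then tell the view and host about the self-invoked change.
    void BandpassUnit::EnforceCornerOrdering (Float32 newUpperCorner) {
        if (GetParameter (kParam_LowerCorner) > newUpperCorner) {
            SetParameter (kParam_LowerCorner, newUpperCorner);

            AudioUnitParameter changedParam;
            changedParam.mAudioUnit   = GetComponentInstance ();  // this audio unit
            changedParam.mParameterID = kParam_LowerCorner;
            changedParam.mScope       = kAudioUnitScope_Global;
            changedParam.mElement     = 0;

            // Passing NULL for the listener and sending object notifies all
            // registered listeners of the change.
            AUParameterListenerNotify (NULL, NULL, &changedParam);
        }
    }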
Host application developers provide parameter persistence by taking advantage of the SDK's kAudioUnitProperty_ClassInfo property. This property uses a CFPropertyListRef dictionary to represent the current settings of an audio unit.
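For example, a host might capture an audio unit's current state along these lines (a minimal sketch; error handling and writing the data to disk are omitted):

    // Ask the audio unit for its complete current settings.
    CFPropertyListRef classInfo = NULL;
    UInt32 dataSize = sizeof (classInfo);
    AudioUnitGetProperty (audioUnit,
                          kAudioUnitProperty_ClassInfo,
                          kAudioUnitScope_Global,
                          0,
                          &classInfo,
                          &dataSize);
    // The returned dictionary can be stored with the host's document and
    // later handed back to the audio unit with AudioUnitSetProperty.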
Commonly Used Properties

kAudioUnitProperty_StreamFormat
Declares the audio data stream format for an audio unit's input or output channels. A host application can set the format for the input and output channels separately. If you don't implement this property to describe additional stream formats, a superclass from the SDK declares that your audio unit supports the default stream format: non-interleaved, 32-bit floating point, native-endian, linear PCM.
kAudioUnitProperty_BusCount
Declares the number of buses (also called elements) in the input or output scope of an audio unit. If you don't implement this property, a superclass from the SDK declares that your audio unit uses a single input and output bus, each with an ID of 0.
kAudioUnitProperty_Latency
Declares the minimum possible time for a sample to proceed from input to output of an audio unit, in seconds. For example, an FFT-based filter must acquire a certain number of samples to fill an FFT window before it can calculate an output sample. An audio unit with a latency as short as two or three samples should implement this property to report its latency. If the sample latency for your audio unit varies, use this property to report the maximum latency. Alternatively, you can update the kAudioUnitProperty_Latency property value when latency changes, and issue a property change notification using the Audio Unit Event API. If your audio unit's latency is 0 seconds, you don't need to implement this property. Otherwise you should, to let host applications compensate appropriately.
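For example, an FFT-based effect that must buffer a full analysis window before producing output might report its latency with an override along these lines (a sketch; the window-size member is illustrative):

    virtual Float64 GetLatency () {
        // Latency in seconds: the samples needed to fill the FFT window.
        return (Float64) mFFTWindowSize / GetSampleRate ();
    }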
kAudioUnitProperty_TailTime
Declares the time, beyond an audio unit's latency, for a nominal-level signal to decay to silence at an audio unit's output after it has gone instantaneously to silence at the input. Tail time is significant for audio units performing an effect such as delay or reverberation. Apple recommends that all audio units implement the kAudioUnitProperty_TailTime property, even if its value is 0. If the tail time for your audio unit varies, such as for a variable delay, use this property to report the maximum tail time. Alternatively, you can update the kAudioUnitProperty_TailTime property value when tail time changes, and issue a property change notification using the Audio Unit Event API.
kAudioUnitProperty_SupportedNumChannels
Declares the supported numbers of input and output channels for an audio unit. The value for this property is stored in a channel information structure (AUChannelInfo), which is declared in the AudioUnitProperties.h header file:
typedef struct AUChannelInfo {
    SInt16 inChannels;
    SInt16 outChannels;
} AUChannelInfo;
The AUChannelInfo field values work as follows:

- Both fields are –1 (for example, inChannels = –1, outChannels = –1): any number of input and output channels, as long as the numbers match. This is the default case.
- One field is –1, the other is a positive number (for example, inChannels = –1, outChannels = 2): any number of input channels, exactly two output channels.
- One field is –1, the other is –2 (for example, inChannels = –1, outChannels = –2): any number of input channels, any number of output channels.
- Both fields are positive numbers (for example, inChannels = 2, outChannels = 6): exactly two input channels, exactly six output channels.
- A field is 0 (for example, inChannels = 0, outChannels = 2): no input channels, exactly two output channels (such as for an instrument unit with stereo output).
If you don't implement this property, a superclass from the SDK declares that your audio unit can use any number of channels provided the number on input matches the number on output.
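If your audio unit supports only certain configurations, a sketch of the corresponding override might look like this (the supported configurations shown are illustrative):

    virtual UInt32 SupportedNumChannels (const AUChannelInfo **outInfo) {
        // Support mono-to-mono and stereo-to-stereo only.
        static const AUChannelInfo supportedChannels[] = { {1, 1}, {2, 2} };
        if (outInfo != NULL) *outInfo = supportedChannels;
        return sizeof (supportedChannels) / sizeof (AUChannelInfo);
    }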
kAudioUnitProperty_CocoaUI
Declares where a host application can find the bundle and the main class for a Cocoa-based view for an audio unit. Implement this property if you supply a Cocoa custom view.

The kAudioUnitProperty_TailTime property is the most common one you'll need to implement for an effect unit. To do this:

1. Override the SupportsTail method from the AUBase superclass by adding the following method statement to your audio unit custom class definition:
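A minimal form of this override, matching the AUBase declaration, might read:

    virtual bool SupportsTail () { return true; }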
2. If your audio unit has a tail time other than 0 seconds, override the GetTailTime method from the AUBase superclass. For example, if your audio unit produces reverberation with a maximum decay time of 3000 ms, add the following override to your audio unit custom class definition:
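A sketch of that override, reporting the 3-second decay from the example:

    virtual Float64 GetTailTime () { return 3.0; }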
You add code to the GetProperty method to tell the view the current value of each custom property:

Listing 2-12  The GetProperty method from the SDK's AUBase class
virtual ComponentResult GetProperty (
    AudioUnitPropertyID  inID,
    AudioUnitScope       inScope,
    AudioUnitElement     inElement,
    void                 *outData
);
You would typically structure the GetPropertyInfo and GetProperty methods as switch statements, with one case per custom property. Look at the Filter::GetPropertyInfo and Filter::GetProperty methods in the FilterDemo project to see an example of how to use these methods. You override the SetProperty method to perform whatever work is required to establish new settings for each custom property. Each audio unit property must have a unique integer ID. Apple reserves property ID numbers between 0 and 63999. If you use custom properties, specify ID numbers of 64000 or greater.
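To make the pattern concrete, here is a hypothetical sketch of the switch-statement structure for a single custom property; the property ID and the member variable are illustrative, not part of the SDK:

    // An illustrative custom property ID; custom IDs must be 64000 or greater.
    enum { kMyCustomProperty_CurveData = 65536 };

    ComponentResult MyEffect::GetPropertyInfo (AudioUnitPropertyID inID,
                                               AudioUnitScope      inScope,
                                               AudioUnitElement    inElement,
                                               UInt32              &outDataSize,
                                               Boolean             &outWritable) {
        if (inScope == kAudioUnitScope_Global) {
            switch (inID) {
                case kMyCustomProperty_CurveData:
                    outDataSize = sizeof (mCurveData);  // illustrative member
                    outWritable = false;
                    return noErr;
            }
        }
        return AUEffectBase::GetPropertyInfo (inID, inScope, inElement,
                                              outDataSize, outWritable);
    }

    ComponentResult MyEffect::GetProperty (AudioUnitPropertyID inID,
                                           AudioUnitScope      inScope,
                                           AudioUnitElement    inElement,
                                           void                *outData) {
        if (inScope == kAudioUnitScope_Global) {
            switch (inID) {
                case kMyCustomProperty_CurveData:
                    memcpy (outData, &mCurveData, sizeof (mCurveData));
                    return noErr;
            }
        }
        return AUEffectBase::GetProperty (inID, inScope, inElement, outData);
    }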
Signal Processing
To perform DSP, you use an effect unit (of type 'aufx'), typically built as a subclass of the AUEffectBase class. AUEffectBase uses a helper class, AUKernelBase, to handle the DSP, and instantiates one kernel object (an instance of AUKernelBase) for each audio channel. Kernel objects are specific to n-to-n channel effect units subclassed from the AUEffectBase class. They are not part of other types of audio units.
The AUEffectBase class is strictly for building n-to-n channel effect units. If you are building an effect unit that does not employ a direct mapping of input to output channels, you subclass the AUBase superclass instead.

As described in Processing: The Heart of the Matter (page 36), there are two primary methods for audio unit DSP code: Process and Reset. You override the Process method to define the DSP for your audio unit. You override the Reset method to define the cleanup to perform when a user takes an action to end signal processing, such as moving the playback point in a sound editor window. For example, you ensure with Reset that a reverberation decay doesn't interfere with the start of play at a new point in a sound file. Tutorial: Building a Simple Effect Unit with a Generic View (page 87) provides a step-by-step example of implementing a Process method.

While an audio unit is rendering, a user can make realtime adjustments using the audio unit's view. Processing code typically takes into account the current values of parameters and properties that are relevant to the processing. For example, the processing code for a high-pass filter effect unit would perform its calculations based on the current corner frequency as set in the audio unit's view. The processing code gets this value by reading the appropriate parameter, as described in Defining and Using Parameters (page 50). Audio units built using the classes in the Core Audio SDK work only with constant bit rate (CBR) audio data. When a host application reads variable bit rate (VBR) data, it converts it to a CBR representation, in the form of linear PCM, before sending it to an audio unit.
Music Synthesis
An instrument unit (of type 'aumu'), in contrast to an effect unit, renders audio in terms of notes. It acts as a virtual music synthesizer. An instrument unit employs a bank of sounds and responds to MIDI control data, typically initiated by a keyboard. You subclass the AUMonotimbralInstrumentBase class for most instrument units. This class supports monophonic and polyphonic instrument units that can play one voice (also known as a patch or an instrument sound) at a time. For example, if a user chooses a piano voice, the instrument unit acts like a virtual piano, with every key pressed on a musical keyboard invoking a piano note. The Core Audio SDK class hierarchy also provides the AUMultitimbralInstrumentBase class. This class supports monophonic and polyphonic instrument units that can play more than one voice at a time. For example, you could create a multitimbral instrument unit that would let a user play a virtual bass guitar with their left hand while playing virtual trumpet with their right hand, using a single keyboard.
Music Effects
A music effect unit (of type 'aumf') provides DSP, like an effect unit, but also responds to MIDI data, like an instrument unit. You build a music effect unit by subclassing the AUMIDIEffectBase superclass from the SDK. For example, you would do this to create an audio unit that provides a filtering effect that is tuned according to the note pressed on a keyboard.
Audio Unit Life Cycle

Overview
The life cycle of an audio unit, as for any plug-in, consists of responding to requests. Each method that you override or write from scratch in your audio unit is called by an outside process, such as:

- The Mac OS X Component Manager, acting on behalf of a host application
- A host application itself
- An audio processing graph and, in particular, the downstream audio unit
- The audio unit's view, as manipulated by a user
You don't need to anticipate which process or context is calling your code. To the contrary, you design your audio unit to be agnostic to the calling context. Audio unit life cycle proceeds through a series of states, which include:

- Uninstantiated. In this state, there is no object instance of the audio unit, but the audio unit's class presence on the system is registered by the Component Manager. Host applications use the Component Manager registry to find and open audio units.
- Instantiated but not initialized. In this state, host applications can query an audio unit object for its properties and can configure some properties. Users can manipulate the parameters of an instantiated audio unit by way of a view.
- Initialized. Host applications can hook up initialized audio units into audio processing graphs. Hosts and graphs can ask initialized audio units to render audio. In addition, some properties can be changed in the initialized state.
- Uninitialized. The Audio Unit architecture allows an audio unit to be explicitly uninitialized by a host application. The uninitialization process is not necessarily symmetrical with initialization. For example, an instrument unit can be designed to still have access, in this state, to a MIDI sound bank that it allocated upon initialization.
Categories of Programmatic Events

Audio units respond to two main categories of programmatic events:

- Housekeeping events that the host application initiates. These include finding, opening, validating, connecting, and closing audio units. For these types of events, an audio unit built from the Core Audio SDK typically relies on code in its supplied superclasses.
- Operational events that invoke your custom code. These events, initiated by the host or by your audio unit's view, include initialization, configuration, rendering, resetting, real-time or offline changes to the rendering, uninitialization, reinitialization, and clean-up upon closing. For some simple audio units, some operational events (especially initialization) can also rely on code from SDK superclasses.
Property Configuration
When possible, an audio unit should configure its properties in its constructor method. However, audio unit properties can be configured at a variety of times and by a variety of entities. Each individual property is usually configured in one of the following ways:
- By the audio unit itself, typically during instantiation
- By the application hosting the audio unit, before or after audio unit initialization
- By the audio unit's view, as manipulated by a user, when the audio unit is initialized or uninitialized
This variability in configuring audio unit properties derives from the requirements of the various properties, the type of the audio unit, and the needs of the host application. For some properties, the SDK superclasses define whether configuration can take place while an audio unit is initialized or only when it is uninitialized. For example, a host application cannot change an audio unit's stream format (using the kAudioUnitProperty_StreamFormat property) unless it ensures that the audio unit is uninitialized. For other properties, such as the kAudioUnitProperty_SetRenderCallback property, the audio unit specification prohibits hosts from changing the property on an initialized audio unit, but there is no programmatic enforcement against it. For yet other properties, such as the kAudioUnitProperty_OfflineRender property, it is up to the audio unit to determine whether to require uninitialization before changing the property value. If the audio unit can handle the change gracefully while initialized, it can allow it. The audio unit specification details the configuration requirements for each Apple-defined property.
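For example, a host typically sets the stream format on an uninitialized audio unit along these lines (a sketch; error handling omitted):

    // Describe the default Core Audio format: non-interleaved, 32-bit
    // floating point, native-endian, linear PCM, here at 44.1 kHz in stereo.
    AudioStreamBasicDescription streamFormat = {0};
    streamFormat.mSampleRate       = 44100.0;
    streamFormat.mFormatID         = kAudioFormatLinearPCM;
    streamFormat.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked
                                     | kAudioFormatFlagIsNonInterleaved;
    streamFormat.mFramesPerPacket  = 1;
    streamFormat.mBytesPerPacket   = sizeof (Float32);
    streamFormat.mBytesPerFrame    = sizeof (Float32);
    streamFormat.mChannelsPerFrame = 2;
    streamFormat.mBitsPerChannel   = 32;

    AudioUnitSetProperty (audioUnit,
                          kAudioUnitProperty_StreamFormat,
                          kAudioUnitScope_Input,
                          0,                       // input bus 0
                          &streamFormat,
                          sizeof (streamFormat));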
Audio Unit Initialization and Uninitialization

An audio unit should perform its resource-intensive setup work at initialization time. For example:

- An instrument unit acquires a MIDI sound bank for the unit to use when responding to MIDI data
- An effect unit allocates memory buffers for use during rendering
- An effect unit calculates wave tables for use during rendering
Generally speaking, each of these operations should be performed in an override of the Initialize method from the AUBase class.
If you define an override of the Initialize method for an effect unit, begin it with a call to AUEffectBase::Initialize. This will ensure that housekeeping tasks, like proper channel setup, are taken care of for your audio unit. If you are setting up internal buffers for processing, you can find out how large to make them by calling the AUBase::GetMaxFramesPerSlice method. This accesses a value that your audio unit's host application defines before it invokes initialization. The actual number of frames per render call can vary. It is set by the host application using the inFramesToProcess parameter of the AUEffectBase::Process or AUBase::DoRender methods. Initialization is also the appropriate time to invoke an audio unit's copy protection. Copy protection can include such things as a password challenge or checking for the presence of a hardware dongle. The audio unit class hierarchy in the Core Audio SDK provides specialized Initialize methods for the various types of audio units. Effect units, for example, use the Initialize method in the AUEffectBase class. This method performs a number of important housekeeping tasks, including:
- Protecting the effect unit against a host application that attempts to connect it up in ways that won't work
- Determining the number of input and output channels supported by the effect unit, as well as the channel configuration to be used for the current initialization. (Effect units can be designed to support a variable number of input and output channels, and the number used can change from one initialization to the next.)
- Setting up or updating the kernel objects for the effect unit, ensuring they are ready to do their work
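Putting these points together, a minimal Initialize override for an effect unit might look like this sketch; the buffer member is illustrative:

    ComponentResult MyEffect::Initialize () {
        // Let the superclass handle channel setup and kernel housekeeping.
        ComponentResult result = AUEffectBase::Initialize ();
        if (result == noErr) {
            // Size internal buffers for the largest render call the host makes.
            mProcessingBuffer.resize (GetMaxFramesPerSlice ());
        }
        return result;
    }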
In many cases, such as in the effect unit you'll create in Tutorial: Building a Simple Effect Unit with a Generic View (page 87), effect units don't need additional initialization work in the audio unit's class. They can simply use the Initialize method from AUBase as is, by inheritance. The effect unit you'll build in the tutorial does this. In the specific case of an effect unit based on the AUEffectBase superclass, you can put resource-intensive initialization code into the constructor for the DSP kernel object. This works because kernels are instantiated during effect unit initialization. The example effect unit that you build later in this document describes this part of an effect unit's life cycle. Once an audio unit is instantiated, a host application can initialize and uninitialize it repeatedly, as appropriate for what the user wants to do. For example, if a user wants to change sampling rate, the host application can do so without first closing the audio unit. (Some other audio plug-in technologies do not offer this feature.)
Kernel Instantiation in n-to-n Effect Units

For an n-to-n effect unit built on the AUEffectBase class, kernel objects come into being as follows:

1. The effect unit gets instantiated.
2. The effect unit gets initialized.
3. During initialization, the effect unit instantiates an appropriate number of kernel objects.
This sequence of events makes the kernel object constructor a good place for code that you want invoked during audio unit initialization. For example, the tremolo unit in this document's tutorial builds its tremolo wave tables during kernel instantiation.
Audio Processing Graph Interactions

Audio processing graphs interact with audio units in three primary ways:

- Establishing input and output connections
- Breaking input and output connections
- Responding to rendering requests
A host application is responsible for making and breaking connections for an audio processing graph. Performing connection and disconnection takes place by way of setting properties, as discussed earlier in this chapter in Audio Processing Graph Connections (page 47). For an audio unit to be added to or removed from a graph, it must be uninitialized. Audio data flow in graphs proceeds according to a pull model, as described in Audio Processing Graphs and the Pull Model (page 34).
Audio Unit Processing

During rendering, an audio unit does one of the following:

- Processes audio (for example, effect units and music effect units)
- Generates audio from MIDI data (instrument units) or otherwise, such as by reading a file (generator units)
- Transforms audio data (format converter units), such as by changing sample rate, bit depth, encoding scheme, or some other audio data characteristic
In effect units built using the Core Audio SDK, the processing work takes place in a C++ method called Process. This method, from the AUKernelBase class, is declared in the AUEffectBase.h header file in the SDK. In instrument units built using the SDK, the audio generation work takes place in a method called Render, defined in the AUInstrumentBase class. In an effect unit, processing starts when the unit receives a call to its Process method. This call typically comes from the downstream audio unit in an audio processing graph. As described in Audio Processing Graph Interactions (page 61), the call is the result of a cascade originating from the host application, by way of the graph object, asking the final node in the graph to start.
The processing call that the audio unit receives specifies the input and output buffers as well as the amount of data to process, as shown in Listing 2-13:

Listing 2-13  The Process method from the AUKernelBase class

virtual void Process (
    const Float32  *inSourceP,
    Float32        *inDestP,
    UInt32         inFramesToProcess,
    UInt32         inNumChannels,
    bool           &ioSilence
);
For an example implementation of the Process method, see Tutorial: Building a Simple Effect Unit with a Generic View (page 87). Processing is the most computationally expensive part of an audio unit's life cycle. Within the processing loop, avoid the following actions:

- Mutually exclusive (mutex) resource locking
- Memory or resource allocation
Note: Some Core Foundation calls, such as CFRetain and CFRelease, employ mutex locks. For this reason, it's best to avoid Core Foundation calls during processing.
Closing
When a host is finished using an audio unit, it should close it by calling the Component Manager's CloseComponent function. This function invokes the audio unit's destructor method. Audio units themselves must take care of freeing any resources they have allocated. If you're using copy protection in your audio unit, you should end it only on object destruction.
CHAPTER 3

The Audio Unit View
Almost every audio unit needs a graphical interface to let the user adjust the audio unit's operation and see what it's doing. In Core Audio terminology, such a graphical interface is called a view. When you understand how views work and how to build them, you can add a great deal of value to the audio units you create.
Types of Views
There are two main types of views:
- Generic views provide a functional yet decidedly non-flashy interface. You get a generic view for free. It is built for you by the host application that opens your audio unit, based on parameter and property definitions in your audio unit source files.
- Custom views are graphical user interfaces that you design and build. Creating a great custom view may entail more than half of the development time for an audio unit. For your effort you can offer something to your users that is not only more attractive but more functional as well. You may choose to wait on developing a custom view until after your audio unit is working, or you may forgo the custom view option entirely. If you do create a custom view, you can use Carbon or Cocoa.
Separation of Concerns
From the standpoint of a user, a view is an audio unit. From your standpoint as a developer, the situation is a bit more subtle. You build a view to be logically separate from the audio unit executable code, yet packaged within the same bundle. To achieve this programmatic separation, Apple recommends that you develop your custom views so that they would work running in a separate address space, in a separate process, and on a separate machine from the audio unit executable code. For example, you pass data between the audio unit executable code and its view only by value, never by reference. Without impinging on this formal separation, however, it's often convenient to share a header file between an audio unit's executable code and its view. You'll see this done in the SDK's FilterDemo project. A shared header file can provide, for example, data types and constants for custom properties that an audio unit's executable code and its view can both use. Both generic and custom views make use of a notification mechanism to communicate with their associated audio unit executable code, as described in Parameter and Property Events (page 67).
The Generic View

The audio unit associated with this view has two continuously variable parameters, each represented in the view by a simple slider along with a text field showing the parameter's current value. The generic view for your audio unit will have a similar look. Table 3-1 describes where each user interface element in a generic view comes from. The "Source of value" column in the table refers to files you see in an audio unit Xcode project built using the Audio Unit Effect with Cocoa View template.

Table 3-1  User interface items in an audio unit generic view

- Audio unit name, at upper left of view utility window (example: Audio Unit: Filter): NAME defined constant in the <className>.r file
- Audio unit name, in title bar of view utility window and in pop-up (example: Filter): NAME defined constant in the <className>.r file
- Manufacturer name, at upper right of view utility window (example: Apple Demo): NAME defined constant in the <className>.r file
- Parameter name and range (example values: 22050, 12.00): GetParameterInfo method in the <className>.cpp file; a parameter's default, such as the kDefaultValue_ParamOne float value, comes from the <className>.h file
- Parameter measurement unit (example: Hz): measurement unit as specified in the GetParameterInfo method in the <className>.cpp file
- Parameter control (example: slider): the generic view mechanism presents a slider for this parameter because the parameter unit of measurement, as defined in the GetParameterInfo method, is linear gain
As shown next, a custom view can provide significantly more value and utility than a generic view.
Custom Views
When a host opens your audio unit, it asks if there's a custom view available. If so, it can use it as described in View Instantiation and Initialization (page 66). Here's the custom view for the same audio unit described in The Generic View, namely, the FilterDemo audio unit from the Core Audio SDK:
The primary feature of this custom view is a realtime frequency response curve. This makes the custom view (and, by association, the audio unit) more attractive and far more useful. Instead of seeing just a pair of numbers, a user can now see the frequency response, including how filter resonance and cutoff frequency influence the response. No matter what sort of audio unit you build, you can provide similar benefits to users when you include a custom view with your audio unit.
A custom view can offer advantages such as:

- The ability to hide unneeded detail, or to provide progressive disclosure of controls
- The ability to provide support for parameter automation
- Choice of user-interface controls, for example knobs, faders, or horizontal sliders
- Much more information for the user through real-time graphs, such as frequency response curves
- A branding opportunity for your company
The SDK's FilterDemo audio unit project is a good example to follow when creating a custom view for your audio unit. See Tutorial: Demonstrating Parameter Gestures and Audio Unit Events (page 69) later in this chapter for more on custom views.
View Instantiation and Initialization

Listing 3-1  A host application gets a Cocoa custom view from an audio unit

if (AudioUnitGetProperty (
        inAU,                          // the audio unit the host is checking
        kAudioUnitProperty_CocoaUI,    // the property the host is querying
        kAudioUnitScope_Global,
        0,
        cocoaViewInfo,
        &dataSize) == noErr) {
    CocoaViewBundlePath =    // the host gets the path to the view bundle
        (NSURL *) cocoaViewInfo -> mCocoaAUViewBundleLocation;
    factoryClassName =       // the host gets the view's class name
        (NSString *) cocoaViewInfo -> mCocoaAUViewClass[0];
}
If you do not supply a custom view with your audio unit, the host will build a generic view based on your audio unit's parameter and property definitions. Here is what happens, in terms of presenting a view to the user, when a host opens an audio unit:

1. The host application calls the GetProperty method on an audio unit to find out if it has a custom view, as shown in Listing 3-1. If the audio unit provides a Cocoa view, the audio unit should implement the kAudioUnitProperty_CocoaUI property. If the audio unit provides a Carbon view, the audio unit should implement the kAudioUnitProperty_GetUIComponentList property. The rest of this sequence assumes the use of a Cocoa custom view.
2. The host calls the GetPropertyInfo method for the kAudioUnitProperty_CocoaUI property to find out how many Cocoa custom views are available. As a shortcut, a host can skip the call to GetPropertyInfo; in this case, the host takes the first view in the view class array by using code such as shown in the listing above, with an array index of 0: factoryClassName = (NSString *) cocoaViewInfo -> mCocoaAUViewClass[0];. In this case, skip ahead to step 4.
3. The audio unit returns the size of the AudioUnitCocoaViewInfo structure as an integer value, indicating how many Cocoa custom views are available. Typically, developers create one view per audio unit.
4. The host examines the value of cocoaViewInfo to find out where the view bundle is and what the main view class is for the view (or for the specified view if the audio unit provides more than one).
5. The host loads the view bundle, starting by loading the main view class to instantiate it.
There are some rules about how to structure the main view class for a Cocoa view:
- The view must implement the AUCocoaUIBase protocol. This protocol specifies that the view class acts as a factory for views, and returns an NSView object using the uiViewForAudioUnit:withSize: method. This method tells the view which audio unit owns it, and provides a hint regarding screen size for the view in pixels (using an NSSize structure).
- If you're using a nib file to construct the view (as opposed to generating the view programmatically), the owner of the nib file is the main (factory) class for the view.
An audio units view should work whether the audio unit is simply instantiated or whether it has been initialized. The view should continue to work if the host uninitializes the audio unit. That is, a view should not assume that its audio unit is initialized. This is important enough in practice that the auval tool includes a test for retention of parameter values across uninitialization and reinitialization.
Parameter and Property Events

Listing 3-2  The AudioUnitEvent structure

typedef struct AudioUnitEvent {
    AudioUnitEventType mEventType;        // 1
    union {
        AudioUnitParameter mParameter;    // 2
        AudioUnitProperty  mProperty;     // 3
    } mArgument;
} AudioUnitEvent;
Here's how this structure works:

1. Identifies the type of the notification, as defined in the AudioUnitEventType enumeration.
2. Identifies the parameter involved in the notification, for notifications that are begin or end gestures or changes to parameters. (See Parameter Gestures (page 68).) The AudioUnitParameter data type is used by the Audio Unit Event API and not by the Audio Unit framework, even though it is defined in the Audio Unit framework.
3. Identifies the property involved in the notification, for notifications that are property change notifications.
A corresponding AudioUnitEventType enumeration lists the various defined AudioUnitEvent event types, shown in Listing 3-3:

Listing 3-3  The AudioUnitEventType enumeration

typedef enum {
    kAudioUnitEvent_ParameterValueChange        = 0,
    kAudioUnitEvent_BeginParameterChangeGesture = 1,
    kAudioUnitEvent_EndParameterChangeGesture   = 2,
    kAudioUnitEvent_PropertyChange              = 3
} AudioUnitEventType;

kAudioUnitEvent_ParameterValueChange
    Indicates that the notification describes a change in the value of a parameter.
kAudioUnitEvent_BeginParameterChangeGesture
    Indicates that the notification describes a parameter begin gesture; a parameter value is about to change.
kAudioUnitEvent_EndParameterChangeGesture
    Indicates that the notification describes a parameter end gesture; a parameter value has finished changing.
kAudioUnitEvent_PropertyChange
    Indicates that the notification describes a change in the value of an audio unit property.
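For example, a view might register for a parameter-change notification from its audio unit using the Audio Unit Event API along these lines (a sketch; the callback name and parameter ID are illustrative):

    // Create a listener that invokes MyEventListenerProc on the current run loop.
    AUEventListenerRef listener = NULL;
    AUEventListenerCreate (MyEventListenerProc,   // illustrative callback
                           this,                  // user data for the callback
                           CFRunLoopGetCurrent (),
                           kCFRunLoopDefaultMode,
                           0.05,                  // notification interval, seconds
                           0.05,                  // value-change granularity, seconds
                           &listener);

    // Describe the event to listen for: changes to one parameter.
    AudioUnitEvent event;
    event.mEventType                        = kAudioUnitEvent_ParameterValueChange;
    event.mArgument.mParameter.mAudioUnit   = audioUnit;
    event.mArgument.mParameter.mParameterID = kMyParameterID;  // illustrative
    event.mArgument.mParameter.mScope       = kAudioUnitScope_Global;
    event.mArgument.mParameter.mElement     = 0;

    AUEventListenerAddEventType (listener, NULL, &event);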
Parameter Gestures
User-interface events that signal the start or end of a parameter change are called gestures. These events can serve to pass notifications among a host, a view, and an audio unit that a parameter is about to be changed, or has just finished being changed. Like parameter and property changes, gestures are communicated using the Audio Unit Event API. Specifically, gestures use the kAudioUnitEvent_BeginParameterChangeGesture and kAudioUnitEvent_EndParameterChangeGesture event types, as shown in Listing 3-3 (page 68), above.
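A custom view might bracket a user's drag of a control with these gesture events, for example (a sketch; event is an AudioUnitEvent configured for the parameter being adjusted, as shown above):

    // Mouse down: announce that a parameter change is beginning.
    event.mEventType = kAudioUnitEvent_BeginParameterChangeGesture;
    AUEventListenerNotify (NULL, NULL, &event);

    // ... while dragging, the view sets the parameter value ...

    // Mouse up: announce that the parameter change has finished.
    event.mEventType = kAudioUnitEvent_EndParameterChangeGesture;
    AUEventListenerNotify (NULL, NULL, &event);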
Tutorial: Demonstrating Parameter Gestures and Audio Unit Events

In this tutorial you demonstrate parameter gestures and audio unit events by:

- Instantiating two views for an audio unit
- Performing adjustments in one of the views while observing the effect in the other view
Along the way, this tutorial:

- Introduces you to compiling an audio unit project with Xcode
- Shows how AU Lab can display a generic view and a custom view for the same audio unit
- Shows how a custom property supports communication between an audio unit and a custom view
Before you start, make sure that you've installed Xcode and the Core Audio SDK, both of which are part of Apple's Xcode Tools installation.
2. Click Build to build the audio unit project. (You may see some warnings about non-virtual destructors. Ignore these warnings.) Xcode creates a new build folder inside the FilterDemo project folder.
3. Open the build folder and look inside the Development target folder.
The newly built audio unit bundle is named FilterDemo.component, as shown in the figure. 4. Copy the FilterDemo.component bundle to the ~/Library/Audio/Plug-Ins/Components folder. In this location, the newly built audio unit is available to host applications.
5. Launch the AU Lab audio unit host application (in /Developer/Applications/Audio/) and create a new AU Lab document. Unless you've configured AU Lab to use a default document style, the Create New Document window opens. If AU Lab was already running, choose File > New to get this window.
Ensure that the configuration matches the settings shown in the figure: Built-In Audio for the Audio Device, Line In for the Input Source, and Stereo for Output Channels. Leave the window's Inputs tab unconfigured; you will specify the input later. Click OK.
A new AU Lab window opens, showing the output channel you specified.
6. Click the triangular menu button in the one row of the Effects section in the Master Out track in AU Lab, as shown in the figure.
In the menu that opens, choose the Filter audio unit from the Apple Demo group:
The custom view's frequency response curve is drawn in real time based on the audio unit's actual frequency response. The audio unit makes its frequency response data available to the custom view by declaring a custom property. The audio unit keeps the value of its custom property up to date. The custom view queries the audio unit's custom property to draw the frequency response curve.
8. Click the view type pop-up menu in one instance of the audio unit's view, as shown in the figure:
The view changes to the generic view, as shown in the next figure. You are now set up to demonstrate gestures and audio unit events.
9. Click and hold one of the sliders in the generic view, as shown in the figure. When you click, observe that the crosshairs in the custom view become highlighted in bright blue. They remain highlighted as long as you hold down the mouse button.
As you move the slider in the generic view, the frequency response curve in the custom view keeps pace with the new setting.
The highlighting and un-highlighting of the crosshairs in the custom view, when you click and release on a slider in the generic view, result from gesture events. The changes in the frequency response curve in the custom view, as you move a slider in the generic view, result from parameter change events.
10. Finally, click and hold at the intersection of the crosshairs in the custom view. When you click, observe that the sliders in the generic view become highlighted. As you move the crosshairs in the custom view, the sliders in the generic view keep pace with the new settings.
This demonstrates that both views, and the audio unit itself, remain in sync by way of audio unit events.
CHAPTER 4

A Quick Tour of the Core Audio SDK
You can build an audio unit from scratch using the Core Audio frameworks directly. However, as described throughout this document, Apple strongly encourages you to begin with the freely downloadable Core Audio software development kit, or SDK. Apple builds all of the audio units that ship in Mac OS X using the SDK. The SDK offers many advantages to audio unit developers. The SDK:
- Insulates you from almost all the complexity in dealing with the Mac OS X Component Manager
- Greatly simplifies development effort with a rich C++ class hierarchy. In many cases, you can create audio units with a few method overrides. This lets you build full-featured, commercial quality audio units without directly calling any of the APIs in the Core Audio frameworks.
- Provides a straightforward starting point with Xcode audio unit project templates.
AudioUnits

The AudioUnits folder contains Xcode projects for example audio units. You can use these projects in several ways:

- Directly using the audio units built by these projects
- Studying the source code to gain insight into how to design and implement audio units
- Using the projects as starting points for your own audio units
The DiagnosticAUs project builds three audio units that you may find useful for troubleshooting and analysis as you're developing your audio units:
- AUValidSamples detects samples passing through it that are out of range or otherwise invalid. You can choose this audio unit from the Apple_DEBUG group in an audio unit host such as AU Lab.
- AUDebugDispatcher facilitates audio unit debugging. You can choose this audio unit from the Acme Inc group in an audio unit host such as AU Lab.
- AUPulseDetector measures latency in host applications.
- The FilterDemo project builds a basic resonant low-pass filter with a custom view.
- The MultitapDelayAU project builds a multi-tap delay with a custom view.
- The SampleAUs project builds a pass-through audio unit that includes presets and a variety of parameter types.
- The SinSynth project builds a simple instrument unit that you can use in a host application like GarageBand.
Services

You can use the Services projects in several ways:

- Directly using the tools, audio units, and host applications built by these projects
- Studying the source code to gain insight into how Core Audio works, as well as insight into how to design and implement tools, audio units, and host applications
- Using the projects as starting points for your own tools, audio units, and host applications

The Services folder contains the following Xcode projects:

- AudioFileTools: A project that builds a set of eight command-line tools for playing, recording, examining, and manipulating audio files.
- AudioUnitHosting: A project that builds a Carbon-based audio unit host application. Useful in terms of sample code for host developers but deprecated as a host for testing your audio units. (For audio unit testing, use AU Lab.)
- AUMixer3DTest: A project that builds an application that uses a 3D mixer audio unit.
- AUViewTest: A project that builds a windowless application based on an audio processing graph. The graph includes an instrument unit, an effect unit, and an output unit. There's a menu item to open a view for the effect unit. The AUViewTest application uses a music player object to play a repeating sequence through the instrument unit.
- CocoaAUHost: A project that builds a Cocoa-based audio unit host application. This project is useful in terms of sample code for host developers but deprecated as a host for testing your audio units. (For audio unit testing, use AU Lab.)
- MatrixMixerTest: A project that builds an example mixer unit with a custom Cocoa view.
- OpenALExample: A project that builds an application based on the OpenAL API with Apple extensions. The application demonstrates control of listener position and orientation within a two-dimensional layout of audio sources.
CHAPTER 5

Tutorial: Building a Simple Effect Unit with a Generic View
This tutorial assumes you're familiar with the concepts described in the earlier chapters, including:

- Properties, parameters, and factory presets
- Audio unit types, subtypes, manufacturer IDs, and version numbers
- Audio unit life cycle, including instantiation, initialization, and rendering
- The role of the Core Audio SDK in audio unit development
- The role of the AU Lab application and the auval tool in audio unit development
Overview
Simple effect units operate on individual audio samples without considering one audio sample's relationship to another, and without using a processing buffer. You build such an effect unit in this chapter, further simplified by leaving out some advanced audio unit features: a custom view and custom properties. Simple effect units can do very basic DSP, such as the monaural and stereo tremolo effects described next.
Monaural tremolo is a continuous wavering produced by varying an audio channel's gain at low frequency, on the order of a few hertz. This figure illustrates monaural tremolo that varies from full gain to silence:
Figure 5-1  Monaural tremolo
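To make the effect concrete before you start building, here is an illustrative per-sample gain computation for sine-wave monaural tremolo. This is not the tutorial's code; the function name and the percent-based depth scaling are assumptions:

    #include <cmath>

    // depth runs 0-100 (percent); freq is in Hz; sampleRate in samples/second.
    Float32 TremoloGain (UInt64 sampleIndex, Float32 freq,
                         Float32 depth, Float32 sampleRate) {
        Float32 phase = 2.0f * (Float32) M_PI * freq * sampleIndex / sampleRate;
        Float32 wave  = 0.5f * (1.0f + std::sin (phase));  // oscillates 0 to 1
        // Depth 0% leaves gain at 1.0; depth 100% swings from silence to full gain.
        return 1.0f - (depth / 100.0f) * (1.0f - wave);
    }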
Stereo tremolo is similar but involves continuous left/right panning at a low frequency. In this chapter you design and build a monaural tremolo effect that uses the generic view. The steps, described in detail in the following sections, are:

1. Install the Core Audio development kit if you haven't already done so.
2. Perform some design work, including:
   - Specify the sort of DSP your audio unit will perform
   - Design the parameter interface
   - Design the factory preset interface
   - Determine the configuration information for your audio unit bundle, such as the subtype and manufacturer codes, and the version number
3. Create and configure the Xcode project.
4. Implement your audio unit:
   - Implement the parameter and preset interfaces.
   - Implement signal processing, the heart of your audio unit.
5. Validate and test your completed audio unit.
Your development environment for building any audio unit, simple or otherwise, should include the pieces described under Required Tools for Audio Unit Development (page 11) in the Introduction (page 9). You do not need to refer to the Xcode documentation or the Core Audio SDK documentation to complete the tasks in this chapter.
1. If you haven't already done so, download and install the most recent Core Audio SDK, available at http://developer.apple.com/sdk/. The Core Audio SDK installer places C++ superclasses, example Xcode projects, and documentation at this location on your system:
/Developer/Examples/CoreAudio
The SDK also installs Xcode project templates for audio units at this location on your system:
/Library/Application Support/Apple/Developer Tools/Project Templates/
2. After you have installed the SDK, confirm that Xcode recognizes the audio unit templates, as follows:
Launch Xcode and then choose File > New Project. In the Assistant window, confirm that there is an Audio Units group.
Figure 5-2
Having confirmed the presence of the Audio Units template group, click Cancel, and then quit Xcode. Note: If there is no Audio Units template group, check whether the SDK files were indeed installed as expected. If not, try installing again. Also make sure you are using the newest version of Xcode.
- Tremolo frequency: the number of tremolo cycles per second.
- Tremolo depth: the mix between the input signal and the signal with tremolo applied. At a depth of 0%, there is no tremolo effect. At a depth of 100%, the effect ranges from full amplitude to silence.
- Tremolo waveform: the shape of the tremolo effect, such as sine wave or square wave.
For each parameter, you specify:

- User interface name as it appears in the audio unit view
- Programmatic name, also called the parameter ID, used in the GetParameterInfo method in the audio unit implementation file
- Unit of measurement, such as gain, decibels, or hertz
- Minimum value
- Maximum value
- Default value
The following tables specify the parameter design. You'll use most of these values directly in your code. Specifying a "Description" is for your benefit while developing and extending the audio unit. You can reuse the description later in online help or a user guide.

Table 5-1  Specification of tremolo frequency parameter

    User interface name:  Frequency
    Description:          The frequency of the tremolo effect. When this parameter is set to 2 Hz, there are two cycles of the tremolo effect per second. This parameter's value is continuous, so the user can set any value within the available range. The user adjusts tremolo frequency with a slider.
    Programmatic name:    kTremolo_Frequency
    Unit of measurement:  Hz
    Minimum value:        0.5
    Maximum value:        10.0
    Default value:        2.0

Table 5-2  Specification of tremolo depth parameter

    User interface name:  Depth
    Description:          The depth, or intensity, of the tremolo effect. When set to 0%, there is no tremolo effect. When set to 100%, the tremolo effect ranges over each cycle from silence to unity gain. This parameter's value is continuous, so the user can set any value within the available range. The user adjusts tremolo depth with a slider.
    Programmatic name:    kTremolo_Depth
    Unit of measurement:  Percent
    Minimum value:        0.0
    Maximum value:        100.0
    Default value:        50.0

Table 5-3  Specification of tremolo waveform parameter

    User interface name:  Waveform
    Description:          The waveform that the tremolo effect follows. This parameter can take on a set of discrete values. The user picks a tremolo waveform from a menu.
    Programmatic name:    kTremolo_Waveform
    Default value:        Sine
It's easy to add, delete, and refine parameters later. For example, if you were creating an audio level adjustment parameter, you might start with a linear scale and later change to a logarithmic scale. For the tremolo unit you're building, you might later decide to add additional tremolo waveforms.
Table 5-4  Specification of Slow & Gentle factory preset

    Tremolo depth:     50%
    Tremolo waveform:  Sine

Table 5-5  Specification of Fast & Hard factory preset

    Preset name:       Fast & Hard
    Description:       Frenetic, percussive, and intense
    Tremolo depth:     90%
    Tremolo waveform:  Square
Collect Configuration Information for the Audio Unit Bundle

For your audio unit bundle, collect or decide on the following configuration information:

- Audio unit bundle English name, such as Tremolo Unit. Host applications will display this name to let users pick your audio unit.
- Audio unit programmatic name, such as TremoloUnit. You'll use this to name your Xcode project, which in turn uses this name for your audio unit's main class.
- Audio unit bundle version number, such as 1.0.2.
- Audio unit bundle brief description, such as "Tremolo Unit version 1.0, copyright 2006, Angry Audio." This description appears in the Version field of the audio unit bundle's Get Info window in the Finder.
- Audio unit bundle subtype, a four-character code that provides some indication (to a person) of what your audio unit does. For this audio unit, use 'tmlo'.
- Company English name, such as Angry Audio. This string appears in the audio unit generic view.
- Company programmatic name, such as angryaudio. You'll use this string in the reverse domain name-style identifier for your audio unit.
- Company four-character code, such as 'Aaud'. You obtain this code from Apple, as a creator code, by using the Data Type Registration page. For the purposes of this example, for our fictitious company named Angry Audio, we'll use 'Aaud'.
- Reverse domain name identifier, such as com.angryaudio.audiounit.TremoloUnit. The Component Manager will use this name when identifying your audio unit.
- Custom icon, for display in the Cocoa generic view; optional but a nice touch. This should be a so-called "Small" icon as built with Apple's Icon Composer application, available on your system in the /Developer/Applications/Utilities folder.
Now that you have set this so-called expert preference, Xcode places your company's name in the copyright notice in the source files of any new project.
Create and Configure the Xcode Project

1. Choose File > New Project.
2. In the New Project Assistant dialog, choose the Audio Unit Effect template and click Next.
3. Name the project TremoloUnit and specify a project directory. Then click Finish.
Xcode creates the project files for your audio unit and the Xcode project window opens. At this point, Xcode has used the audio unit project template file to create a subclass of the AUEffectBase class. Your custom subclass is named according to the name of your project. You can find your custom subclass's implementation file, TremoloUnit.cpp, in the AU Source group in the Xcode Groups & Files pane, shown next.
In later steps, you'll edit methods in your custom subclass to override the superclass's methods. TremoloUnit.cpp also contains a couple of methods from the AUKernelBase helper class; these are the methods that you later modify to perform digital signal processing.

4. With the AU Source group open as shown in the previous step, click TremoloUnitVersion.h. Then click the Editor toolbar button, if necessary, to display the text editor. There are three values to customize in this file: kTremoloUnitVersion, TremoloUnit_COMP_SUBTYPE, and TremoloUnit_COMP_MANF.
Scroll down in the editing pane to the definitions of TremoloUnit_COMP_SUBTYPE and TremoloUnit_COMP_MANF. Customize the subtype field with the four-character subtype code that you've chosen. In this example, 'tmlo' indicates (to developers and users) that the audio unit lets a user add tremolo.
Also customize the manufacturer name with the unique four-character string that identifies your company. Note: There is no #define statement for component type in the TremoloUnitVersion.h file because you specified the type, effect unit, when you picked the Xcode template for the audio unit. The audio unit bundle type is specified in the AU Source/TremoloUnit.r resource file. Now set the version number for the audio unit. In the TremoloUnitVersion.h file, just above the definitions for subtype and manufacturer, you'll see the definition statement for the kTremoloUnitVersion constant. By default, the template sets this constant's value to 1.0.0, as represented by the hexadecimal number 0x00010000. Change this, if you like. See Audio Unit Identification (page 29) for how to construct the hexadecimal version number. Save the TremoloUnitVersion.h file.

5. Click the TremoloUnit.r resource file in the "Source/AU Source" group in the Groups & Files pane. There are two values to customize in this file: NAME and DESCRIPTION.
- NAME is used by the generic view to display both your company name and the audio unit name.
- DESCRIPTION serves as the menu item for users choosing your audio unit from within a host application.
To work correctly with the generic view, the value for NAME must follow a specific pattern:
If you have set your company name using the Xcode expert preference as described earlier, it will already be in place in the NAME variable for this project; to follow this example, all you need to do is add a space between Tremolo and Unit in the audio unit name itself. The Xcode template provides a default value for DESCRIPTION. If you customize it, keep it short so that the string works well with pop-up menus. The figure shows a customized DESCRIPTION.
As you can see in the figure, the resource file uses a #include statement to import the Version header file that you customized in step 4, TremoloUnitVersion.h. The resource file uses values from that header file to define some variables such as component subtype (COMP_SUBTYPE) and manufacturer (COMP_MANUF). Save the TremoloUnit.r resource file.
6. Open the Resources group in the Groups & Files pane in the Xcode project window, and click on the InfoPlist.strings file.
Using the editor, customize the value for CFBundleGetInfoString using the value you've chosen for the audio unit brief description. The figure provides an example. This string appears in the Version field of the audio unit bundle's Get Info window in the Finder. Save the InfoPlist.strings file.

7. Open the Targets group in the Groups & Files pane in the Xcode project window. Double-click the audio unit bundle, which has the same name as your project (in this case, TremoloUnit).
In the Target Info window's Properties tab, provide values for Identifier, Creator, Version, and, optionally, a path to a Finder icon file for the bundle that you place in the bundle's Resources folder. The audio unit bundle identifier field should follow the pattern:
com.<company_name>.audiounit.<audio_unit_name>
For the Creator value, use the same four-character string used for the manufacturer field in step 4. Xcode transfers all the information from the Properties tab into the audio unit bundles Info.plist file. You can open the Info.plist file, if you'd like to inspect it, directly from this dialog using the "Open Info.plist as File" button at the bottom of the window. When finished, close the Info.plist file (if you've opened it) or close the Target Info window. 8. Now configure the Xcode projects build process to copy your audio unit bundle to the appropriate location so that host applications can use it.
In the project window, disclose the Products group and the Targets group, as shown in the figure, so that you can see the icon for the audio unit bundle itself (TremoloUnit.component) as well as the build phases (under Targets/TremoloUnit).
9. Now add a new build phase. Right-click (or control-click) the final build phase for TremoloUnit and choose Add > New Build Phase > New Copy Files Build Phase.
The new Copy Files build phase appears at the end of the list, and a dialog opens, titled Copy Files Phase for "TremoloUnit" Info.
Change the Destination pop-up to Absolute Path, as shown in the figure. Enter the absolute destination path for the built audio unit bundle in the Full Path field. Note: The copy phase will not work if you enter a tilde (~) character to indicate your home folder. In Xcode 2.4, the Full Path field will let you enter a path by dragging the destination folder into the text field only if you first click in the Full Path field. You can use either of the valid paths for audio unit bundles, as described in Audio Unit Installation and Registration (page 29). With the full path entered, close the dialog.
Now drag the TremoloUnit.component icon from the Products group to the new build phase.
You can later change the Copy Files location, if you want, by double clicking the gray Copy Files build phase icon. Alternatively, click the Copy Files icon and then click the Info button in the toolbar. At this point, you have the makings of a working audio unit. You have not yet customized it to do whatever it is that you'll have it do (in our present case, to provide a single-channel tremolo effect). It's a good idea to ensure that you can build it without errors, that you can validate it with the auval tool, and that you can use it in a host application. Do this in the next step.
10. Build the project. You can do this in any of the standard ways: click the Build button in the toolbar, or choose Build from the Build button's menu, or choose Build > Build from the main menu, or type command-B.
If everything is in order, your project will build without error. The copy files build phase that you added in the previous step ensures that a copy of the audio unit bundle gets placed in the appropriate location for the Component Manager to find it when a host application launches. The next step ensures that is so, and lets you test that it works in a host application.
Test the Unmodified Audio Unit

In this section you:

- Use the auval tool to verify that Mac OS X recognizes your new audio unit bundle
- Launch the AU Lab audio unit host application
- Configure AU Lab to test your new audio unit, and test it
Apple's Core Audio team provides AU Lab in the /Developer/Applications/Audio folder, along with documentation. You do not need to refer to AU Lab's documentation to complete this task. The auval command-line tool is part of a standard Mac OS X installation.

1. In Terminal, enter the command auval -a. If Mac OS X recognizes your new audio unit bundle, you see a listing similar to this one:
If your Xcode project builds without error, but you do not see the new audio unit bundle in the list reported by the auval tool, double check that you've entered the correct path in the Copy Files phase, as described in step 8 of Create and Configure the Xcode Project (page 93). Note: The next few steps in this tutorial will be familiar to you if you went through Tutorial: Using an Audio Unit in a Host Application (page 20) in Audio Unit Development Fundamentals.
2. Launch AU Lab and create a new AU Lab document. Unless you've configured AU Lab to use a default document style, the Create New Document window opens. If AU Lab was already running, choose File > New to get this window.
Ensure that the configuration matches the settings shown in the figure: Built-In Audio for the Audio Device, Line In for the Input Source, and Stereo for Output Channels. Leave the window's Inputs tab unconfigured; you will specify the input later. Click OK.
A new AU Lab window opens, showing the output channel you specified.
3. Choose Edit > Add Audio Unit Generator. A dialog opens from the AU Lab window to let you specify the generator unit to serve as the audio source.
In the dialog, ensure that the AUAudioFilePlayer unit is selected in the Generator pop-up. To follow this example, change the Group Name to Player. Click OK. Note: You can change the group name at any time by double-clicking it in the AU Lab window.
The AU Lab window now shows a stereo input track. In addition, an inspector window has opened for the player unit. If you close the inspector, you can reopen it by clicking the rectangular "AU" button near the top of the Player track.
4. Add an audio file to the Audio Files list in the player inspector window. Do this by dragging the audio file from the Finder, as shown. Putting an audio file in the player inspector window lets you send audio through the new audio unit. Just about any audio file will do, although a continuous tone is helpful for testing.
5. Click the triangular menu button in the first row of the Effects section in the Player track, as shown in the figure.
A menu opens, listing all the audio units available on your system, arranged by category and manufacturer. There is an Angry Audio group in the pop-up, as shown in the next figure. Choose your new audio unit from the Effects first row pop-up.
AU Lab opens your audio unit's Cocoa generic view, which appears as a utility window.
The generic view displays your audio unit's interface as it comes directly from the parameter and property definitions supplied by the Xcode template. The template defines an audio unit that provides level adjustment. Its view, built by the AU Lab host application, features a Gain control. You modify the view in a later step in this task by changing your audio unit's parameter definitions. Refer to Table 3-1 (page 64) for information on where you define each user interface element for the generic view.

6. Click the Play button in the AUAudioFilePlayer inspector to send audio through the unmodified audio unit. This lets you ensure that audio indeed passes through the audio unit. Vary the slider in the generic view, as you listen to the audio, to ensure that the parameter is working.

7. Save the AU Lab document for use later, giving it a name such as Tremolo Unit Test.trak. You will use it in the final section of this chapter, Test your Completed Audio Unit (page 133).

Next, you'll define your tremolo effect unit's parameter interface to give a user control over tremolo rate, depth, and waveform.
To implement the parameter interface, you do the following:
- Name the parameters and give them values in the custom subclass's header file
- Add statements to the audio unit's constructor method to set up the parameters when the audio unit is instantiated
- Override the GetParameterInfo method from the SDK's AUBase class, in the custom subclass's implementation file, to define the parameters
The code you implement here does not make use of the parameters, per se. It is the DSP code that you implement later, in Implement Signal Processing (page 123), that makes use of the parameters. Here, you are simply defining the parameters so that they will appear in the audio unit generic view and so that they're ready to use when you implement the DSP code.
#pragma mark ____TremoloUnit Parameter Constants

static CFStringRef kParamName_Tremolo_Freq      = CFSTR ("Frequency");  // 1
static const float kDefaultValue_Tremolo_Freq   = 2.0;                  // 2
static const float kMinimumValue_Tremolo_Freq   = 0.5;                  // 3
static const float kMaximumValue_Tremolo_Freq   = 20.0;                 // 4

static CFStringRef kParamName_Tremolo_Depth     = CFSTR ("Depth");      // 5
static const float kDefaultValue_Tremolo_Depth  = 50.0;
static const float kMinimumValue_Tremolo_Depth  = 0.0;
static const float kMaximumValue_Tremolo_Depth  = 100.0;

static CFStringRef kParamName_Tremolo_Waveform  = CFSTR ("Waveform");   // 6
static const int kSineWave_Tremolo_Waveform     = 1;
static const int kSquareWave_Tremolo_Waveform   = 2;
static const int kDefaultValue_Tremolo_Waveform = kSineWave_Tremolo_Waveform;

// menu item names for the waveform parameter
static CFStringRef kMenuItem_Tremolo_Sine       = CFSTR ("Sine");       // 7
static CFStringRef kMenuItem_Tremolo_Square     = CFSTR ("Square");     // 8

// parameter identifiers
enum {                                                                  // 9
    kParameter_Frequency    = 0,
    kParameter_Depth        = 1,
    kParameter_Waveform     = 2,
    kNumberOfParameters     = 3
};
Here's how this code works:
1. Provides the user interface name for the Frequency (kParamName_Tremolo_Freq) parameter.
2. Defines a constant for the default value for the Frequency parameter for the tremolo unit, anticipating a unit of Hertz to be defined in the implementation file.
3. Defines a constant for the minimum value for the Frequency parameter.
4. Defines a constant for the maximum value for the Frequency parameter.
5. Provides a user interface name for the Depth (kParamName_Tremolo_Depth) parameter. The following three lines define constants for the default, minimum, and maximum values for the Depth parameter.
6. Provides a user interface name for the Waveform (kParamName_Tremolo_Waveform) parameter. The following three lines define constants for the minimum, maximum, and default values for the Waveform parameter.
7. Defines the menu item string for the sine wave option for the Waveform parameter.
8. Defines the menu item string for the square wave option for the Waveform parameter.
9. Defines constants for identifying the parameters; defines the total number of parameters.
For each parameter you've defined in the TremoloUnit.h file, your audio unit needs:
- A corresponding SetParameter statement in the constructor method, as described next in Edit the Constructor Method
- A corresponding parameter definition in the GetParameterInfo method, as described later in Define the Parameters (page 115)
Note: At this point, building your audio unit will fail because you have not yet edited the implementation file, TremoloUnit.cpp, to use the new parameter definitions. You work on the TremoloUnit.cpp file in the next few sections.
TremoloUnit::TremoloUnit (AudioUnit component) : AUEffectBase (component) {
    CreateElements ();
    Globals () -> UseIndexedParameters (kNumberOfParameters);

    SetParameter (                      // 1
        kParameter_Frequency,
        kDefaultValue_Tremolo_Freq
    );
    SetParameter (                      // 2
        kParameter_Depth,
        kDefaultValue_Tremolo_Depth
    );
    SetParameter (                      // 3
        kParameter_Waveform,
        kDefaultValue_Tremolo_Waveform
    );

    #if AU_DEBUG_DISPATCHER
        mDebugDispatcher = new AUDebugDispatcher (this);
    #endif
}
Here's how this code works:
1. Sets up the first parameter for the audio unit, based on values from the header file. In this project, this parameter controls tremolo frequency. The SetParameter method is inherited from superclasses in the Core Audio SDK.
2. Sets up the second parameter for the audio unit, based on values from the header file. In this project, this parameter controls tremolo depth.
3. Sets up the third parameter for the audio unit, based on values from the header file. In this project, this parameter controls the tremolo waveform.
#pragma mark ____Parameters

ComponentResult TremoloUnit::GetParameterInfo (
    AudioUnitScope          inScope,
    AudioUnitParameterID    inParameterID,
    AudioUnitParameterInfo  &outParameterInfo
) {
    ComponentResult result = noErr;

    outParameterInfo.flags =                                            // 1
        kAudioUnitParameterFlag_IsWritable
        | kAudioUnitParameterFlag_IsReadable;

    if (inScope == kAudioUnitScope_Global) {                            // 2
        switch (inParameterID) {
            case kParameter_Frequency:                                  // 3
                AUBase::FillInParameterName (
                    outParameterInfo,
                    kParamName_Tremolo_Freq,
                    false
                );
                outParameterInfo.unit =                                 // 4
                    kAudioUnitParameterUnit_Hertz;
                outParameterInfo.minValue =                             // 5
                    kMinimumValue_Tremolo_Freq;
                outParameterInfo.maxValue =                             // 6
                    kMaximumValue_Tremolo_Freq;
                outParameterInfo.defaultValue =                         // 7
                    kDefaultValue_Tremolo_Freq;
                outParameterInfo.flags |=                               // 8
                    kAudioUnitParameterFlag_DisplayLogarithmic;
                break;
            case kParameter_Depth:                                      // 9
                AUBase::FillInParameterName (
                    outParameterInfo,
                    kParamName_Tremolo_Depth,
                    false
                );
                outParameterInfo.unit =                                 // 10
                    kAudioUnitParameterUnit_Percent;
                outParameterInfo.minValue     = kMinimumValue_Tremolo_Depth;
                outParameterInfo.maxValue     = kMaximumValue_Tremolo_Depth;
                outParameterInfo.defaultValue = kDefaultValue_Tremolo_Depth;
                break;
            case kParameter_Waveform:                                   // 11
                AUBase::FillInParameterName (
                    outParameterInfo,
                    kParamName_Tremolo_Waveform,
                    false
                );
                outParameterInfo.unit =                                 // 12
                    kAudioUnitParameterUnit_Indexed;
                outParameterInfo.minValue     = kSineWave_Tremolo_Waveform;
                outParameterInfo.maxValue     = kSquareWave_Tremolo_Waveform;
                outParameterInfo.defaultValue = kDefaultValue_Tremolo_Waveform;
                break;
            default:
                result = kAudioUnitErr_InvalidParameter;
                break;
        }
    } else {
        result = kAudioUnitErr_InvalidParameter;
    }
    return result;
}
Here's how this code works:
1. Adds two flags to all parameters for the audio unit, indicating to the host application that it should consider all the audio unit's parameters to be readable and writable.
2. All three parameters for this audio unit are in the global scope.
3. The first case in the switch statement, invoked when the view needs information for the kParameter_Frequency parameter, defines how to represent this parameter in the user interface.
4. Sets the unit of measurement for the Frequency parameter to Hertz.
5. Sets the minimum value for the Frequency parameter.
6. Sets the maximum value for the Frequency parameter.
7. Sets the default value for the Frequency parameter.
8. Adds a flag to indicate to the host that it should use a logarithmic control for the Frequency parameter.
9. The second case in the switch statement, invoked when the view needs information for the kParameter_Depth parameter, defines how to represent this parameter in the user interface.
10. Sets the unit of measurement for the Depth parameter to percentage. The following three statements set the minimum, maximum, and default values for the Depth parameter.
11. The third case in the switch statement, invoked when the view needs information for the kParameter_Waveform parameter, defines how to represent this parameter in the user interface.
12. Sets the unit of measurement for the Waveform parameter to indexed, allowing it to be displayed as a pop-up menu in the generic view. The following three statements set the minimum, maximum, and default values for the Waveform parameter. All three are required for proper functioning of the parameter's user interface.
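For context, here is a sketch of the host's side of this exchange (not part of the tutorial code): the host reads the kAudioUnitProperty_ParameterInfo property, which the component dispatches to your GetParameterInfo method. The function name is illustrative; the property, types, and call are standard Audio Unit API.

#include <stdio.h>
#include <AudioUnit/AudioUnit.h>

// Hypothetical host-side check: retrieves the information that
// GetParameterInfo supplies for the Frequency parameter (ID 0).
void PrintFrequencyParameterRange (AudioUnit tremoloUnit) {
    AudioUnitParameterInfo info;
    UInt32 size = sizeof (info);
    OSStatus result = AudioUnitGetProperty (
        tremoloUnit,
        kAudioUnitProperty_ParameterInfo,
        kAudioUnitScope_Global,
        0,                                  // kParameter_Frequency
        &info,
        &size
    );
    if (result == noErr) {
        // For the tremolo unit, this prints the Hertz range defined above.
        printf ("min %f, max %f, default %f\n",
            info.minValue, info.maxValue, info.defaultValue);
    }
}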
ComponentResult TremoloUnit::GetParameterValueStrings (
    AudioUnitScope          inScope,
    AudioUnitParameterID    inParameterID,
    CFArrayRef              *outStrings
) {
    if ((inScope == kAudioUnitScope_Global) &&
        (inParameterID == kParameter_Waveform)) {           // 1

        if (outStrings == NULL) return noErr;               // 2

        CFStringRef strings [] = {                          // 3
            kMenuItem_Tremolo_Sine,
            kMenuItem_Tremolo_Square
        };

        *outStrings = CFArrayCreate (                       // 4
            NULL,
            (const void **) strings,
            (sizeof (strings) / sizeof (strings [0])),      // 5
            NULL
        );
        return noErr;
    }
    return kAudioUnitErr_InvalidParameter;
}
Here's how this code works:
1. This method applies only to the waveform parameter, which is in the global scope.
2. When this method gets called by the AUBase::DispatchGetPropertyInfo method, which provides a null value for the outStrings parameter, just return without error.
3. Defines an array that contains the pop-up menu item names.
4. Creates a new immutable array containing the menu item names, and places the array in the outStrings output parameter.
5. Calculates the number of menu items in the array.
To implement the factory presets interface, you do the following:
- Name the factory presets and give them values
- Modify the TremoloUnit class declaration by adding method signatures for handling factory presets
- Edit the audio unit's constructor method to set a default factory preset
- Override the GetPresets method to set up a factory presets array
- Override the NewFactoryPresetSet method to define the factory presets
Note: The work you do here does not make use of the factory presets, per se. It is the DSP code that you implement in Implement Signal Processing (page 123) that makes use of the presets. Here, you are simply defining the presets so that they will appear in the audio unit generic view and so that they're ready to use when you implement the DSP code.
#pragma mark ____TremoloUnit Factory Preset Constants

static const float kParameter_Preset_Frequency_Slow = 2.0;      // 1
static const float kParameter_Preset_Frequency_Fast = 20.0;     // 2
static const float kParameter_Preset_Depth_Slow     = 50.0;     // 3
static const float kParameter_Preset_Depth_Fast     = 90.0;     // 4
static const float kParameter_Preset_Waveform_Slow              // 5
                        = kSineWave_Tremolo_Waveform;
static const float kParameter_Preset_Waveform_Fast              // 6
                        = kSquareWave_Tremolo_Waveform;

enum {
    kPreset_Slow    = 0,                                        // 7
    kPreset_Fast    = 1,                                        // 8
    kNumberPresets  = 2                                         // 9
};

static AUPreset kPresets [kNumberPresets] = {                   // 10
    {kPreset_Slow, CFSTR ("Slow & Gentle")},
    {kPreset_Fast, CFSTR ("Fast & Hard")}
};

static const int kPreset_Default = kPreset_Slow;                // 11
Here's how this code works:
1. Defines a constant for the frequency value for the Slow & Gentle factory preset.
2. Defines a constant for the frequency value for the Fast & Hard factory preset.
3. Defines a constant for the depth value for the Slow & Gentle factory preset.
4. Defines a constant for the depth value for the Fast & Hard factory preset.
5. Defines a constant for the waveform value for the Slow & Gentle factory preset.
6. Defines a constant for the waveform value for the Fast & Hard factory preset.
7. Defines a constant for the Slow & Gentle factory preset.
8. Defines a constant for the Fast & Hard factory preset.
9. Defines a constant representing the total number of factory presets.
10. Defines an array containing two Core Foundation string objects. The objects contain values for the menu items in the user interface corresponding to the factory presets.
11. Defines a constant representing the default factory preset, in this case the Slow & Gentle preset.
#pragma mark ____TremoloUnit

class TremoloUnit : public AUEffectBase {
public:
    TremoloUnit (AudioUnit component);
    ...
    virtual ComponentResult GetPresets (        // 1
        CFArrayRef *outData
    ) const;

    virtual OSStatus NewFactoryPresetSet (      // 2
        const AUPreset &inNewFactoryPreset
    );

protected:
    ...
};
Here's how this code works:
1. Declaration for the GetPresets method, overriding the method from the AUBase superclass.
2. Declaration for the NewFactoryPresetSet method, overriding the method from the AUBase superclass.
TremoloUnit::TremoloUnit (AudioUnit component) : AUEffectBase (component) {
    CreateElements ();
    Globals () -> UseIndexedParameters (kNumberOfParameters);

    // code for setting default values for the audio unit parameters

    SetAFactoryPresetAsCurrent (        // 1
        kPresets [kPreset_Default]
    );
}
Here's how this code works:
1. Sets the default factory preset.
#pragma mark ____Factory Presets

ComponentResult TremoloUnit::GetPresets (       // 1
    CFArrayRef *outData
) const {
    if (outData == NULL) return noErr;          // 2

    CFMutableArrayRef presetsArray = CFArrayCreateMutable (     // 3
        NULL,
        kNumberPresets,
        NULL
    );

    for (int i = 0; i < kNumberPresets; ++i) {  // 4
        CFArrayAppendValue (
            presetsArray,
            &kPresets [i]
        );
    }

    *outData = (CFArrayRef) presetsArray;       // 5
    return noErr;
}
Here's how this code works:
1. The GetPresets method accepts a single parameter, a pointer to a CFArrayRef object. This object holds the factory presets array generated by this method.
2. Checks whether factory presets are implemented for this audio unit.
3. Instantiates a mutable Core Foundation array to hold the factory presets.
4. Fills the factory presets array with values from the definitions in the TremoloUnit.h file.
5. Stores the factory presets array at the outData location.
OSStatus TremoloUnit::NewFactoryPresetSet (     // 1
    const AUPreset &inNewFactoryPreset
) {
    SInt32 chosenPreset = inNewFactoryPreset.presetNumber;      // 2

    if (                                                        // 3
        chosenPreset == kPreset_Slow ||
        chosenPreset == kPreset_Fast
    ) {
        for (int i = 0; i < kNumberPresets; ++i) {              // 4
            if (chosenPreset == kPresets[i].presetNumber) {
                switch (chosenPreset) {                         // 5
                    case kPreset_Slow:                          // 6
                        SetParameter (                          // 7
                            kParameter_Frequency,
                            kParameter_Preset_Frequency_Slow
                        );
                        SetParameter (                          // 8
                            kParameter_Depth,
                            kParameter_Preset_Depth_Slow
                        );
                        SetParameter (                          // 9
                            kParameter_Waveform,
                            kParameter_Preset_Waveform_Slow
                        );
                        break;
                    case kPreset_Fast:                          // 10
                        SetParameter (
                            kParameter_Frequency,
                            kParameter_Preset_Frequency_Fast
                        );
                        SetParameter (
                            kParameter_Depth,
                            kParameter_Preset_Depth_Fast
                        );
                        SetParameter (
                            kParameter_Waveform,
                            kParameter_Preset_Waveform_Fast
                        );
                        break;
                }
                SetAFactoryPresetAsCurrent (                    // 11
                    kPresets [i]
                );
                return noErr;                                   // 12
            }
        }
    }
    return kAudioUnitErr_InvalidPropertyValue;                  // 13
}
Here's how this code works:
1. This method takes a single argument of type AUPreset, a structure containing a factory preset name and number.
2. Gets the number of the desired factory preset.
3. Tests whether the desired factory preset is defined.
4. This for loop, and the if statement that follows it, allow for noncontiguous preset numbers.
5. Selects the appropriate case statement based on the factory preset number.
6. The settings for the Slow & Gentle factory preset.
7. Sets the Frequency audio unit parameter for the Slow & Gentle factory preset.
8. Sets the Depth audio unit parameter for the Slow & Gentle factory preset.
9. Sets the Waveform audio unit parameter for the Slow & Gentle factory preset.
10. The settings for the Fast & Hard factory preset. The three SetParameter statements that follow work the same way as for the other factory preset.
11. Updates the preset menu in the generic view to display the new factory preset.
12. On success, returns a value of noErr.
13. If the host application attempted to set an undefined factory preset, returns an error.
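For context, here is a sketch of how a host application triggers this method (not part of the tutorial code): the host reads the factory presets array and then writes one of its AUPreset entries to the kAudioUnitProperty_PresentPreset property, which the SDK dispatches to NewFactoryPresetSet. The function name is illustrative.

#include <CoreFoundation/CoreFoundation.h>
#include <AudioUnit/AudioUnit.h>

// Hypothetical host-side code: selects the first factory preset.
OSStatus SelectFirstFactoryPreset (AudioUnit tremoloUnit) {
    CFArrayRef presets = NULL;
    UInt32 size = sizeof (presets);
    OSStatus result = AudioUnitGetProperty (
        tremoloUnit,
        kAudioUnitProperty_FactoryPresets,
        kAudioUnitScope_Global,
        0,
        &presets,
        &size
    );
    if (result != noErr) return result;

    // The factory presets array holds pointers to AUPreset structures.
    AUPreset preset = *(AUPreset *) CFArrayGetValueAtIndex (presets, 0);
    result = AudioUnitSetProperty (
        tremoloUnit,
        kAudioUnitProperty_PresentPreset,
        kAudioUnitScope_Global,
        0,
        &preset,
        sizeof (preset)
    );
    CFRelease (presets);
    return result;
}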
To implement signal processing, you override two methods from the SDK's AUKernelBase class:
- The Process method, which performs the signal processing
- The Reset method, which returns the audio unit to its pristine, initialized state
Along the way, you make changes to the default TremoloUnitKernel class declaration in the TremoloUnit.h header file, and you modify the TremoloUnitKernel constructor method in TremoloUnit.cpp.
protected:
    class TremoloUnitKernel : public AUKernelBase {
    public:
        TremoloUnitKernel (AUEffectBase *inAudioUnit);  // 1
        virtual void Process (
            const Float32   *inSourceP,
            Float32         *inDestP,
            UInt32          inFramesToProcess,
            UInt32          inNumChannels,  // equal to 1
            bool            &ioSilence
        );
        virtual void Reset ();

    private:
        enum    {kWaveArraySize = 2000};                // 2
        float   mSine [kWaveArraySize];                 // 3
        float   mSquare [kWaveArraySize];               // 4
        float   *waveArrayPointer;                      // 5
        Float32 mSampleFrequency;                       // 6
        long    mSamplesProcessed;                      // 7
        enum    {sampleLimit = (int) 10E6};             // 8
        float   mCurrentScale;                          // 9
        float   mNextScale;                             // 10
    };
};
Here's how this code works (skip this explanation if you're not interested in the math):
1. The constructor signature in the Xcode template contains a call to the superclass's constructor, as well as empty braces representing the method body. Remove all of these because you implement the constructor method in the implementation file, as described in the next section.
2. The number of points in each wave table. Each wave holds one cycle of a tremolo waveform.
3. The wave table for the tremolo sine wave.
4. The wave table for the tremolo pseudo square wave.
5. The wave table to apply to the current audio input buffer.
6. The sampling frequency, or "sample rate" as it is often called, of the audio signal to be processed.
7. The number of samples processed since the audio unit started rendering, or since this variable was last set to 0. The DSP code tracks total samples processed because:
- The main processing loop is based on the number of samples placed into the input buffer
- But the DSP must take place independent of the input buffer size
8. To keep the value of mSamplesProcessed within a reasonable limit, there's a test in the code to reset it when it reaches this value.
9. The scaling factor currently in use. The DSP uses a scaling factor to correlate the points in the wave table with the audio signal sampling frequency, in order to produce the desired tremolo frequency. The kernel object keeps track of a current and a next scaling factor to support changing from one tremolo frequency to another without an audible glitch.
10. The desired scaling factor to use, resulting from a request by the user for a different tremolo frequency.
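For example, assuming a 44,100 Hz sample rate and a requested tremolo frequency of 2 Hz, one tremolo cycle spans 44,100 / 2 = 22,050 samples. The corresponding scaling factor is kWaveArraySize / 22,050, or 2,000 / 22,050, roughly 0.091, meaning the DSP code advances about 0.091 wave table points per audio sample and completes one pass through the table, and therefore one tremolo cycle, every 22,050 samples.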
The TremoloUnitKernel constructor method takes care of the following:
- Initializing member variables that require initialization
- Filling the two wave tables
- Getting the sample rate of the audio stream
A convenient location for the constructor is immediately below the TremoloUnitEffectKernel pragma mark.
Listing 5-11 Modifications to the TremoloUnitKernel constructor (TremoloUnit.cpp)
#pragma mark ____TremoloUnitEffectKernel
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//  TremoloUnit::TremoloUnitKernel::TremoloUnitKernel()
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TremoloUnit::TremoloUnitKernel::TremoloUnitKernel (AUEffectBase *inAudioUnit)
    : AUKernelBase (inAudioUnit),
      mSamplesProcessed (0),
      mCurrentScale (0)                                 // 1
{
    for (int i = 0; i < kWaveArraySize; ++i) {          // 2
        double radians = i * 2.0 * pi / kWaveArraySize;
        mSine [i] = (sin (radians) + 1.0) * 0.5;
    }
    for (int i = 0; i < kWaveArraySize; ++i) {          // 3
        double radians = i * 2.0 * pi / kWaveArraySize;
        radians = radians + 0.32;
        mSquare [i] = (
            sin (radians) +
            0.3 * sin (3 * radians) +
            0.15 * sin (5 * radians) +
            0.075 * sin (7 * radians) +
            0.0375 * sin (9 * radians) +
            0.01875 * sin (11 * radians) +
            0.009375 * sin (13 * radians) +
            0.8
        ) * 0.63;
    }
    mSampleFrequency = GetSampleRate ();                // 4
}
Here's how this code works:
1. The constructor method declarator and constructor-initializer. In addition to calling the appropriate superclass constructors, this code initializes two member variables.
2. Generates a wave table that represents one cycle of a sine wave, normalized so that it never goes negative and so that it ranges between 0 and 1. Along with the value of the Depth parameter, this sine wave specifies how to vary the audio gain during one cycle of sine wave tremolo.
3. Generates a wave table that represents one cycle of a pseudo square wave, normalized so that it never goes negative and so that it ranges between approximately 0 and approximately 1.
4. Gets the sample rate for the audio stream to be processed.
Outside the processing loop, take care of the work that needs to happen only once per call to the Process method:
- Declare all variables used in the method
- Get the current values of all the parameters as set by the user via the audio unit's view
- Check the parameters to ensure they're in bounds, and take appropriate action if they're not
- Perform calculations that don't require updating with each sample. In this project's case, this means calculating the scaling factor to use when applying the tremolo wave table.
Inside the processing loop, do only work that must be performed sample by sample:
- Perform calculations that must be updated sample by sample. In this case, this means calculating the point in the wave table to use for tremolo gain.
- Respond to parameter changes in a way that avoids artifacts. In this case, this means switching to a new tremolo frequency, if requested by the user, in a way that avoids any sudden jump in gain.
- Calculate the transformation to apply to each input sample. In this case, this means calculating a) the tremolo gain based on the current point in the wave table, and b) the current value of the Depth parameter.
- Calculate the output sample that corresponds to the current input sample. In this case, this means applying the tremolo gain and depth factor to the current input sample.
- Advance the indexes in the input and output buffers
- Advance other indexes involved in the DSP. In this case, this means incrementing the mSamplesProcessed variable.
Listing 5-12 The Process method (TremoloUnit.cpp)
void TremoloUnit::TremoloUnitKernel::Process (  // 1
    const Float32   *inSourceP,                 // 2
    Float32         *inDestP,                   // 3
    UInt32          inSamplesToProcess,         // 4
    UInt32          inNumChannels,              // 5
    bool            &ioSilence                  // 6
) {
    if (!ioSilence) {                           // 7
        const Float32 *sourceP = inSourceP;     // 8
        Float32 *destP = inDestP,               // 9
                inputSample,                    // 10
                outputSample,                   // 11
                tremoloFrequency,               // 12
                tremoloDepth,                   // 13
                samplesPerTremoloCycle,         // 14
                rawTremoloGain,                 // 15
                tremoloGain;                    // 16
        int     tremoloWaveform;                // 17

        tremoloFrequency = GetParameter (kParameter_Frequency);        // 18
        tremoloDepth     = GetParameter (kParameter_Depth);            // 19
        tremoloWaveform  = (int) GetParameter (kParameter_Waveform);   // 20

        if (tremoloWaveform == kSineWave_Tremolo_Waveform) {           // 21
            waveArrayPointer = &mSine [0];
        } else {
            waveArrayPointer = &mSquare [0];
        }

        if (tremoloFrequency < kMinimumValue_Tremolo_Freq)             // 22
            tremoloFrequency = kMinimumValue_Tremolo_Freq;
        if (tremoloFrequency > kMaximumValue_Tremolo_Freq)
            tremoloFrequency = kMaximumValue_Tremolo_Freq;

        if (tremoloDepth < kMinimumValue_Tremolo_Depth)                // 23
            tremoloDepth = kMinimumValue_Tremolo_Depth;
        if (tremoloDepth > kMaximumValue_Tremolo_Depth)
            tremoloDepth = kMaximumValue_Tremolo_Depth;

        if ((tremoloWaveform != kSineWave_Tremolo_Waveform)            // 24
                && (tremoloWaveform != kSquareWave_Tremolo_Waveform))
            tremoloWaveform = kSquareWave_Tremolo_Waveform;

        samplesPerTremoloCycle = mSampleFrequency / tremoloFrequency;  // 25
        mNextScale = kWaveArraySize / samplesPerTremoloCycle;          // 26

        // the sample processing loop ////////////////
        for (
            int i = inSamplesToProcess;                                // 27
            i > 0;
            --i
        ) {
            int index =                                                // 28
                static_cast<long> (mSamplesProcessed * mCurrentScale) % kWaveArraySize;

            if ((mNextScale != mCurrentScale) && (index == 0)) {       // 29
                mCurrentScale = mNextScale;
                mSamplesProcessed = 0;
            }

            if ((mSamplesProcessed >= sampleLimit) && (index == 0))    // 30
                mSamplesProcessed = 0;

            rawTremoloGain = waveArrayPointer [index];                 // 31

            tremoloGain  = (rawTremoloGain * tremoloDepth -            // 32
                                tremoloDepth + 100.0) * 0.01;
            inputSample  = *sourceP;                                   // 33
            outputSample = (inputSample * tremoloGain);                // 34
            *destP       = outputSample;                               // 35
            sourceP      += 1;                                         // 36
            destP        += 1;                                         // 37
            mSamplesProcessed += 1;                                    // 38
        }
    }
}
Here's how this code works:
1. The Process method signature. This method is declared in the AUKernelBase class.
2. The audio sample input buffer.
3. The audio sample output buffer.
4. The number of samples in the input buffer.
5. The number of input channels. This is always equal to 1 because there is always one kernel object instantiated per channel of audio.
6. A Boolean flag indicating whether the input to the audio unit consists of silence, with a TRUE value indicating silence.
7. Ignores the request to perform the Process method if the input to the audio unit is silence.
8. Assigns a pointer variable to the start of the audio sample input buffer.
9. Assigns a pointer variable to the start of the audio sample output buffer.
10. The current audio sample to process.
11. The current audio output sample resulting from one iteration of the processing loop.
12. The tremolo frequency requested by the user via the audio unit's view.
13. The tremolo depth requested by the user via the audio unit's view.
14. The number of audio samples in one cycle of the tremolo waveform.
15. The tremolo gain for the current audio sample, as stored in the wave table.
16. The adjusted tremolo gain for the current audio sample, considering the Depth parameter.
17. The tremolo waveform type requested by the user via the audio unit's view.
18. Gets the current value of the Frequency parameter.
19. Gets the current value of the Depth parameter.
20. Gets the current value of the Waveform parameter.
21. Assigns a pointer variable to the wave table that corresponds to the tremolo waveform selected by the user.
22. Performs bounds checking on the Frequency parameter. If the parameter is out of bounds, supplies a reasonable value.
23. Performs bounds checking on the Depth parameter. If the parameter is out of bounds, supplies a reasonable value.
24. Performs bounds checking on the Waveform parameter. If the parameter is out of bounds, supplies a reasonable value.
25. Calculates the number of audio samples per cycle of tremolo frequency.
26. Calculates the scaling factor to use for applying the wave table to the current sampling frequency and tremolo frequency.
27. The loop that iterates over the audio sample input buffer.
28. Calculates the point in the wave table to use for the current sample. This, along with the calculation of the mNextScale value in comment 26, is the only subtle math in the DSP for this effect.
29. Tests if the scaling factor should change, and if it's safe to change it at the current sample to avoid artifacts. If both conditions are met, switches the scaling factor and resets the mSamplesProcessed variable.
30. Tests if the mSamplesProcessed variable has grown to a large value, and if it's safe to reset it at the current sample to avoid artifacts. If both conditions are met, resets the mSamplesProcessed variable.
31. Gets the tremolo gain from the appropriate point in the wave table.
32. Adjusts the tremolo gain by applying the Depth parameter. With a depth of 100%, the full tremolo effect is applied. With a depth of 0%, there is no tremolo effect applied at all.
33. Gets an audio sample from the appropriate spot in the audio sample input buffer.
34. Calculates the corresponding output audio sample.
35. Places the output audio sample at the appropriate spot in the audio sample output buffer.
36. Increments the position counter for the audio sample input buffer.
37. Increments the position counter for the audio sample output buffer.
38. Increments the count of samples processed.
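To see how the gain adjustment in comment 32 behaves, expand the formula: tremoloGain = (rawTremoloGain * tremoloDepth - tremoloDepth + 100.0) * 0.01. With a depth of 100 (percent), this reduces to rawTremoloGain, so the gain follows the wave table exactly; with a depth of 0, it reduces to 1.0, leaving the signal untouched. Intermediate depths blend linearly between those two extremes.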
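The overridden Reset method is brief. Here is a minimal sketch, assuming, per the explanation that follows, that Reset simply restores the two member variables to the values they receive in the constructor:

void TremoloUnit::TremoloUnitKernel::Reset () {
    mCurrentScale       = 0;    // 1
    mSamplesProcessed   = 0;    // 2
}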
Here's how this code works:
1. Resets the mCurrentScale member variable to its freshly initialized value.
2. Resets the mSamplesProcessed member variable to its freshly initialized value.
Given the nature of the DSP your audio unit performs, its tail time is 0 seconds, so you don't need to override the GetTailTime method. In the AUBase superclass, this method reports a tail time of 0 seconds, which is what you want your audio unit to report.
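For a host to query the tail time property at all, the audio unit must declare that it supports the property. In the Core Audio SDK, this is done by overriding the SupportsTail method from the AUBase class; a minimal sketch, added to the public portion of the TremoloUnit class declaration in TremoloUnit.h:

    virtual bool SupportsTail () { return true; }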
1. In Terminal, enter the command to validate the tremolo unit. This consists of the auval command name followed by the -v flag to invoke validation, followed by the type, subtype, and manufacturer codes that identify the tremolo unit. The complete command for this audio unit is:
auval -v aufx tmlo Aaud
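If auval reports that it cannot find the component, you can list every audio unit installed on the system by entering auval -a, and then confirm that an entry with the aufx tmlo Aaud codes appears in the output.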
If everything is in order, auval should report that your new audio unit is indeed valid; the result appears in the last part of the log that auval creates.
(You may need to quit and reopen AU Lab for your completed audio unit's view to appear.) Notice the sliders for the Frequency and Depth parameters, the pop-up menu for the Waveform parameter, and the default preset displayed in the Factory Presets pop-up menu.
3. Click the Play button in the AUAudioFilePlayer utility window. If everything is in order, the file you have selected in the player will play through your audio unit and you'll hear the tremolo effect.
4. Experiment with the range of effects available with the tremolo unit: adjust the Frequency, Depth, and Waveform controls while listening to the output from AU Lab.
5. In the presets menu toward the upper right of the generic view, choose Show Presets.
Disclose the presets in the Factory group and verify that the presets that you added to your audio unit, using the NewFactoryPresetSet method, are present. Double-click a preset to load it.
You can add user presets to the audio unit as follows:
- Set the parameters as desired
- In the presets menu, choose Save Preset As
- In the dialog that appears, enter a name for the preset
This appendix describes the Core Audio SDK audio unit class hierarchy, including starting points for common types of audio units.
Figure 6-1 Core Audio SDK audio unit class hierarchy (the diagram shows the AUBase, AUOutputBase, AUMIDIBase, AUEffectBase, AUKernelBase, AUIOElement, AUInputElement, AUOutputElement, SynthElement, SynthPartElement, SynthGroupElement, SynthNoteList, and SynthNote classes)
When creating an audio unit with the Core Audio SDK, use the following starting points:
- For general, n-to-m channel effect units, start with the AUBase class
- For n-to-n channel effect units, which map each input channel to a corresponding output channel, start with the AUEffectBase class
- For monotimbral instrument units (either monophonic or polyphonic), start with the AUMonotimbralInstrumentBase class
- For multitimbral instrument units, start with the AUMultitimbralInstrumentBase class
- For format converter or generator audio units, start with the AUBase class
- For music effect units, start with the AUMIDIEffectBase class
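As an illustration of the second starting point, here is a skeleton of an n-to-n effect unit built on the AUEffectBase and AUKernelBase classes. The MyEffect and MyEffectKernel names are illustrative; Tutorial: Building a Simple Effect Unit with a Generic View (page 87) walks through a complete, working example.

#include <string.h>
#include "AUEffectBase.h"

class MyEffect : public AUEffectBase {
public:
    MyEffect (AudioUnit component) : AUEffectBase (component) {
        CreateElements ();
    }

    // Each kernel object processes one channel of audio.
    virtual AUKernelBase *NewKernel () { return new MyEffectKernel (this); }

    class MyEffectKernel : public AUKernelBase {
    public:
        MyEffectKernel (AUEffectBase *inAudioUnit) : AUKernelBase (inAudioUnit) {}
        virtual void Process (
            const Float32   *inSourceP,
            Float32         *inDestP,
            UInt32          inFramesToProcess,
            UInt32          inNumChannels,      // equal to 1
            bool            &ioSilence
        ) {
            // Pass the audio through unchanged; real DSP replaces this line.
            memcpy (inDestP, inSourceP, inFramesToProcess * sizeof (Float32));
        }
        virtual void Reset () {}
    };
};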
REVISION HISTORY
This table describes the changes to Audio Unit Programming Guide.

2007-10-31
Clarified discussions of audio unit parameters and automation in Supporting Parameter Automation (page 37), Control Code: Parameters, Factory Presets, and Properties (page 49), and Parameter and Property Events (page 67).
Clarified information regarding audio unit input and output channel counts in Table 2-1 (page 53).

2006-11-07
Added Provide Strings for the Waveform Pop-up Menu (page 117) section in Tutorial: Building a Simple Effect Unit with a Generic View (page 87) chapter.
Corrected Define the Factory Presets (page 122) section in Tutorial: Building a Simple Effect Unit with a Generic View (page 87) chapter.
Improved naming scheme for parameter and preset constants in Tutorial: Building a Simple Effect Unit with a Generic View (page 87) chapter.
Added link to the TremoloUnit sample code project in Introduction (page 9).

2006-08-07
First version.