
renderToFile stops background audio being played on iOS #2931

Open
thibauddavid opened this issue Sep 10, 2024 · 4 comments
thibauddavid commented Sep 10, 2024

macOS Version(s) Used to Build

macOS 14 Sonoma

Xcode Version(s)

Xcode 14

Description

Hi,

I'm using AudioEngine and PitchShifter to apply an effect on an audio file.
I'm rendering an output file using engine.renderToFile. However, while rendering is in progress, other apps that are playing audio get paused.
I thought renderToFile used offline rendering, in which case I don't see why the shared AudioSession is being tampered with.

Is this expected behavior, or am I missing something?

Here is what I'm doing (optional handling etc. removed for readability):

let engine = AudioEngine()
let inputFile = try AVAudioFile(forReading: inputURL)
let player = AudioPlayer(file: inputFile)
let pitchShifter = PitchShifter(player)
pitchShifter.shift = 10
engine.output = pitchShifter
try engine.renderToFile(
    outputFile,
    duration: inputFile.duration,
    prerender: { player.play() }
)

Thanks for helping

Crash Logs, Screenshots or Other Attachments (if applicable)

No response

@thibauddavid thibauddavid changed the title renderToFile stops background audio being played renderToFile stops background audio being played on iOS Sep 10, 2024
@NickCulbertson
Member

@thibauddavid
Author

Hi, yeah, I tried that and it worked, but I don't see why a supposedly offline rendering mode has to touch the audio session at all. It should only process audio frames.
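For context, the kind of session configuration that avoids interrupting other apps looks roughly like this (a sketch; the exact category and options used were not shown in this thread):

```swift
import AVFoundation

// Assumption: mixing with other apps' audio avoids the interruption.
// The exact settings used in this thread were not posted.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, options: [.mixWithOthers])
try session.setActive(true)
```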

@NickCulbertson
Member

In AudioKit's render() method, it looks like start() is called on AVAudioEngine after setting the .offline render mode, and the documentation for start() states:

open class AVAudioEngine : NSObject {
    ...
    /*
        On AVAudioSession supported platforms, this method may cause the audio session to be implicitly activated. It is recommended to configure and activate the app's audio session before starting the engine. For more information, see the `prepare` method above.
    */
    open func start() throws

The underlying engine is also reset(), so I'm guessing one of these calls is the culprit. (Render method for reference:)

func render(to audioFile: AVAudioFile,
                maximumFrameCount: AVAudioFrameCount = 4096,
                duration: Double,
                renderUntilSilent: Bool = false,
                silenceThreshold: Float = 0.00005,
                prerender: (() -> Void)? = nil,
                progress progressHandler: ((Double) -> Void)? = nil) throws
    {
        guard duration >= 0 else {
            throw NSError(domain: "AVAudioEngine ext", code: 1,
                          userInfo: [NSLocalizedDescriptionKey: "Seconds needs to be a positive value"])
        }

        // Engine can't be running when switching to offline render mode.
        if isRunning { stop() }
        try enableManualRenderingMode(.offline,
                                      format: audioFile.processingFormat,
                                      maximumFrameCount: maximumFrameCount)

        // This resets the sampleTime of offline rendering to 0.
        reset()
        try start()

        guard let buffer = AVAudioPCMBuffer(pcmFormat: manualRenderingFormat,
                                            frameCapacity: manualRenderingMaximumFrameCount)
        else {
            throw NSError(domain: "AVAudioEngine ext", code: 1,
                          userInfo: [NSLocalizedDescriptionKey: "Couldn't create buffer in renderToFile"])
        }

        // This is for users to prepare the nodes for playing, i.e player.play()
        prerender?()

        // Render until file contains >= target samples
        let targetSamples = AVAudioFramePosition(duration * manualRenderingFormat.sampleRate)
        let channelCount = Int(buffer.format.channelCount)
        var zeroCount = 0
        var isRendering = true

        while isRendering {
            if !renderUntilSilent, audioFile.framePosition >= targetSamples {
                break
            }
            let framesToRender = renderUntilSilent ? manualRenderingMaximumFrameCount
                : min(buffer.frameCapacity, AVAudioFrameCount(targetSamples - audioFile.framePosition))

            let status = try renderOffline(framesToRender, to: buffer)

            // Progress in the range of starting (0) - finished (1)
            var progress: Double = 0

            switch status {
            case .success:
                try audioFile.write(from: buffer)
                progress = min(Double(audioFile.framePosition) / Double(targetSamples), 1.0)
                progressHandler?(progress)
            case .cannotDoInCurrentContext:
                Log("renderToFile cannotDoInCurrentContext", type: .error)
                continue
            case .error, .insufficientDataFromInputNode:
                throw NSError(domain: "AVAudioEngine ext", code: 1,
                              userInfo: [NSLocalizedDescriptionKey: "render error"])
            @unknown default:
                Log("Unknown render result:", status, type: .error)
                isRendering = false
            }

            if renderUntilSilent, progress == 1, let data = buffer.floatChannelData {
                var rms: Float = 0.0
                for i in 0 ..< channelCount {
                    var channelRms: Float = 0.0
                    vDSP_rmsqv(data[i], 1, &channelRms, vDSP_Length(buffer.frameLength))
                    rms += abs(channelRms)
                }
                let value = (rms / Float(channelCount))

                if value < silenceThreshold {
                    zeroCount += 1
                    // check for consecutive buffers of below threshold, then assume it's silent
                    if zeroCount > 2 {
                        isRendering = false
                    }
                } else {
                    // Resetting consecutive threshold check due to positive value
                    zeroCount = 0
                }
            }
        }

        stop()
        disableManualRenderingMode()
    }
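The silence check in the loop above can be mimicked in pure Swift, replacing vDSP_rmsqv with a manual RMS computation (hypothetical helper names, for illustration only):

```swift
import Foundation

// Manual RMS of one channel's samples, standing in for vDSP_rmsqv above.
func rms(_ samples: [Float]) -> Float {
    guard !samples.isEmpty else { return 0 }
    let sumOfSquares = samples.reduce(Float(0)) { $0 + $1 * $1 }
    return (sumOfSquares / Float(samples.count)).squareRoot()
}

// Mirrors the zeroCount logic: rendering stops only after more than two
// consecutive buffers fall below the silence threshold.
func shouldStopRendering(buffers: [[Float]], silenceThreshold: Float = 0.00005) -> Bool {
    var zeroCount = 0
    for buffer in buffers {
        if rms(buffer) < silenceThreshold {
            zeroCount += 1
            if zeroCount > 2 { return true }
        } else {
            // Reset the consecutive count on any non-silent buffer.
            zeroCount = 0
        }
    }
    return false
}
```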

@thibauddavid
Author

Indeed. However, this seems like a bug to me: rendering to a file in offline mode shouldn't have side effects on the audio session.
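If the implicit activation in start() is indeed the cause, one possible mitigation (an assumption, not something confirmed in this thread) is to deactivate the session after rendering so other apps are told they can resume:

```swift
import AVFoundation

// Assumption: deactivating with .notifyOthersOnDeactivation after offline
// rendering lets apps whose audio was interrupted resume playback.
try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
```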
