How to access audio buffers in the iOS SDK for VAD and volume detection

This question originally came up in our Slack community and the thread has been consolidated here for long-term reference.

In the iOS SDK, I see that the library taps directly into the microphone.

Is there a way to pass a stream of audio buffers to the SDK to be sent to WebRTC?

Or can the SDK feed us the recorded buffers so we can use them for VAD, volume, sound detection, etc.?

Yes, you can capture audio buffers from the microphone. See the Swift SDK documentation:
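For reference, here is a minimal sketch of computing a volume level (RMS) and a naive energy-based VAD flag from microphone buffers. It uses Apple's `AVAudioEngine` from AVFoundation directly rather than any SDK-specific API; the `MicLevelMonitor` class name and the `0.02` threshold are illustrative assumptions, not part of the SDK.

```swift
import AVFoundation

// Sketch: tap the microphone input, compute per-buffer RMS volume,
// and derive a simple energy-based "voice likely" flag.
final class MicLevelMonitor {
    private let engine = AVAudioEngine()

    /// Called on the audio thread with (rmsLevel, isVoiceLikely).
    var onLevel: ((Float, Bool) -> Void)?

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            guard let self = self,
                  let channel = buffer.floatChannelData?[0] else { return }
            let frames = Int(buffer.frameLength)
            // Root-mean-square amplitude of the buffer's first channel.
            var sum: Float = 0
            for i in 0..<frames { sum += channel[i] * channel[i] }
            let rms = sqrt(sum / Float(max(frames, 1)))
            // Naive energy-based VAD: anything above the threshold counts as speech.
            let isVoiceLikely = rms > 0.02 // illustrative threshold, tune for your use case
            self.onLevel?(rms, isVoiceLikely)
        }
        engine.prepare()
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```

Note that installing your own tap on the input node may conflict with the SDK's own microphone capture; if the SDK exposes an audio-frame callback of its own, prefer that and run the same RMS computation over the buffers it delivers.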