I've had a lot of fun today experimenting with iOS and Audio Units, and found many useful resources along the way (this site included).
First of all, I'm confused about one thing: is it really necessary to create an audio graph with a mixer unit in order to record the sounds an application plays?
Or is it enough to play the sounds with ObjectAL https://github.com/kstenerud/ObjectAL-for-iPhone (or, more simply, with AVAudioPlayer calls) and to create a single Remote IO unit, addressed on the correct bus, with a recording callback?
Second, a more code-oriented question!
Since I'm not yet comfortable with Audio Unit concepts, I tried adapting Apple's MixerHost sample project http://developer.apple.com/library/ios/#samplecode/MixerHost/Introduction/Intro.html so that it can record the resulting mix. Obviously, I attempted this by following Michael Tyson's RemoteIO post http://atastypixel.com/blog/using-remoteio-audio-unit/.
I get an EXC_BAD_ACCESS in my callback function:
static OSStatus recordingCallback (void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData) {

    AudioBufferList *bufferList; // <- Fill this up with buffers (you will want to malloc it, as it's a dynamic-length list)

    EffectState *effectState = (EffectState *)inRefCon;
    AudioUnit rioUnit = effectState->rioUnit;

    OSStatus status;

    // BELOW I GET THE ERROR
    status = AudioUnitRender(rioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);

    if (noErr != status) { NSLog(@"AudioUnitRender error"); return noErr; }

    // Now, we have the samples we just read sitting in buffers in bufferList
    //ExtAudioFileWriteAsync(effectState->audioFileRef, inNumberFrames, bufferList);

    return noErr;
}
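From what I understand so far, the AudioBufferList probably needs to be allocated and its buffers filled in before AudioUnitRender can write into it. Something like this sketch is what I would try next (my own untested assumption, matching the 16-bit mono format I set up below):

// Untested sketch: allocate a one-buffer AudioBufferList for the
// 16-bit mono interleaved format configured below.
// (Calling malloc inside a render callback is probably a bad idea anyway;
// this only illustrates the structure I think AudioUnitRender expects.)
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
bufferList->mBuffers[0].mData = malloc(inNumberFrames * sizeof(SInt16));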
Before using the callback, I declared this in MixerHostAudio.h:
typedef struct {
    AudioUnit rioUnit;
    ExtAudioFileRef audioFileRef;
} EffectState;
and created these in the @interface:
AudioUnit iOUnit;
EffectState effectState;
AudioStreamBasicDescription iOStreamFormat;
...
@property AudioUnit iOUnit;
@property (readwrite) AudioStreamBasicDescription iOStreamFormat;
Then in the implementation file MixerHostAudio.m:
#define kOutputBus 0
#define kInputBus 1
...
@synthesize iOUnit; // the Remote IO unit
...
result = AUGraphNodeInfo(processingGraph,
                         iONode,
                         NULL,
                         &iOUnit);

if (noErr != result) {[self printErrorMessage: @"AUGraphNodeInfo" withStatus: result]; return;}

// Enable IO for recording
UInt32 flag = 1;
result = AudioUnitSetProperty(iOUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}

// Describe format
iOStreamFormat.mSampleRate       = 44100.00;
iOStreamFormat.mFormatID         = kAudioFormatLinearPCM;
iOStreamFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iOStreamFormat.mFramesPerPacket  = 1;
iOStreamFormat.mChannelsPerFrame = 1;
iOStreamFormat.mBitsPerChannel   = 16;
iOStreamFormat.mBytesPerPacket   = 2;
iOStreamFormat.mBytesPerFrame    = 2;

// Apply format
result = AudioUnitSetProperty(iOUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &iOStreamFormat,
                              sizeof(iOStreamFormat));

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}

result = AudioUnitSetProperty(iOUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &iOStreamFormat,
                              sizeof(iOStreamFormat));

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}

effectState.rioUnit = iOUnit;

// Set input callback ----> RECORDING
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
result = AudioUnitSetProperty(iOUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}
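For the file-writing part (the commented-out ExtAudioFileWriteAsync in the callback), my plan was to create the file along these lines, borrowing from BioAudio. Again, this is just a sketch with a placeholder path, reusing the same stream format, and I haven't wired it in yet:

// Untested sketch: create the destination file for effectState.audioFileRef
// as a CAF file using the same 16-bit mono PCM stream format.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"mix.caf"]; // placeholder path
result = ExtAudioFileCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path],
                                   kAudioFileCAFType,
                                   &iOStreamFormat,
                                   NULL,
                                   kAudioFileFlags_EraseFile,
                                   &effectState.audioFileRef);

if (noErr != result) {[self printErrorMessage: @"ExtAudioFileCreateWithURL" withStatus: result]; return;}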
But I don't know what's going wrong, or how to dig into it.
Note: the EffectState struct exists because I'm also trying to integrate the BioAudio project https://github.com/brennon/BioAudio and its ability to write a file from the buffers.
And third, is there an easier way to record the sound played by my iPhone app (i.e., excluding the microphone)?