We have learned how to play short audio samples, and now we are ready to stream sound. This recipe explains how to organize a buffer queue to allow on-the-fly sound generation and streaming.
We suppose that the reader is already familiar with our AudioSource and iWaveDataProvider classes described in the previous recipe.
First, we enrich iWaveDataProvider with two additional methods: IsStreaming(), which indicates that the data from this provider should be read in small chunks, and StreamWaveData(), which actually reads a single chunk:

  class iWaveDataProvider: public iObject
  {
    …
    virtual bool IsStreaming() const { return false; }
    virtual int  StreamWaveData( int Size ) { return 0; }
    …
  };
Next, we write a derived class that contains an intermediate buffer for decoded or generated sound data. It does not implement StreamWaveData(), but implements the GetWaveData() and GetWaveDataSize() methods:

  class StreamingWaveDataProvider: public iWaveDataProvider
  ...