How to get codec payload/stream data for live audio stream?

Hi,

I want to take multiple published audio streams (from Flash/RTMP), merge them, and then publish one new stream containing the combined audio.

Mostly I’m wondering which interfaces/methods to look at for acquiring the live audio stream’s codec data (Nellymoser or Speex from the Flash client). I think I saw a JSpeex sample that at least demonstrates how to create a stream from Java.

Thanks,

Tom

Does anyone know if the Wowza API allows for this? I’ve been thinking about an alternative setup: use localhost UDP/RTP to pass the published streams to a custom app, then have that app publish a new stream with the merged audio.
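
To illustrate, the forwarding side of that idea could be as simple as the sketch below (plain JDK; the port, the 127.0.0.1 target, and the framing are just placeholders, and a real setup would still need RTP headers/timestamps and a source for the payload bytes, e.g. an AMFPacket body):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Minimal sketch: forward one audio payload (for example the body of an
// AMFPacket) to a local mixer process listening on a UDP port. The port
// number and lack of RTP framing are simplifications for illustration.
public class LocalAudioForwarder
{
	private final DatagramSocket socket;
	private final InetAddress target;
	private final int port;

	public LocalAudioForwarder(int port) throws Exception
	{
		this.socket = new DatagramSocket();
		this.target = InetAddress.getByName("127.0.0.1");
		this.port = port;
	}

	// Send one payload as a single datagram to the local mixer process
	public void send(byte[] audioPayload) throws Exception
	{
		socket.send(new DatagramPacket(audioPayload, audioPayload.length, target, port));
	}
}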

You can use IMediaStreamActionNotify3.onCodecInfoAudio. Take a look at this example:

https://www.wowza.com/docs/how-to-use-imediastreamactionnotify3-interface-to-listen-for-rtmp-stream-events-includes-codec-info
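
Roughly, the wiring looks like this (an untested sketch; the module and class names are placeholders, and only the codec callbacks do anything here):

package com.mycompany.module; // hypothetical package name

import com.wowza.wms.amf.AMFPacket;
import com.wowza.wms.media.model.MediaCodecInfoAudio;
import com.wowza.wms.media.model.MediaCodecInfoVideo;
import com.wowza.wms.module.ModuleBase;
import com.wowza.wms.stream.IMediaStream;
import com.wowza.wms.stream.IMediaStreamActionNotify3;

public class ModuleAudioCodecInfo extends ModuleBase
{
	// Listener notified of stream events, including codec info
	class StreamListener implements IMediaStreamActionNotify3
	{
		public void onCodecInfoAudio(IMediaStream stream, MediaCodecInfoAudio codecInfoAudio)
		{
			// Fired when the audio codec of the published stream is detected
			getLogger().info("onCodecInfoAudio[" + stream.getName() + "]: " + codecInfoAudio.toString());
		}

		public void onCodecInfoVideo(IMediaStream stream, MediaCodecInfoVideo codecInfoVideo) { }
		public void onMetaData(IMediaStream stream, AMFPacket metaDataPacket) { }
		public void onPublish(IMediaStream stream, String streamName, boolean isRecord, boolean isAppend) { }
		public void onUnPublish(IMediaStream stream, String streamName, boolean isRecord, boolean isAppend) { }
		public void onPlay(IMediaStream stream, String streamName, double playStart, double playLen, int playReset) { }
		public void onPause(IMediaStream stream, boolean isPause, double location) { }
		public void onPauseRaw(IMediaStream stream, boolean isPause, double location) { }
		public void onSeek(IMediaStream stream, double location) { }
		public void onStop(IMediaStream stream) { }
	}

	// Attach the listener to every stream created in the application
	public void onStreamCreate(IMediaStream stream)
	{
		stream.addClientListener(new StreamListener());
	}
}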

Richard

I’m not sure if that is possible, but take a look at IHTTPStreamerCupertinoLivePacketizerDataHandler.onFillChunkDataPacket. This example shows how it is used:

https://www.wowza.com/docs/how-to-convert-ontextdata-events-in-a-live-or-vod-stream-to-timed-events-id3-tags-in-an-apple-hls-stream
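
The pattern from that article looks roughly like this (an untested sketch from memory; check the article for the exact package names and the data-handler method signature, and the module/class names here are placeholders):

package com.mycompany.module; // hypothetical package name

import com.wowza.wms.amf.AMFPacket;
import com.wowza.wms.application.IApplicationInstance;
import com.wowza.wms.httpstreamer.cupertinostreaming.livestreampacketizer.CupertinoPacketHolder;
import com.wowza.wms.httpstreamer.cupertinostreaming.livestreampacketizer.IHTTPStreamerCupertinoLivePacketizerDataHandler;
import com.wowza.wms.httpstreamer.cupertinostreaming.livestreampacketizer.LiveStreamPacketizerCupertino;
import com.wowza.wms.media.mp3.model.idtags.ID3Frames;
import com.wowza.wms.module.ModuleBase;
import com.wowza.wms.stream.livepacketizer.ILiveStreamPacketizer;
import com.wowza.wms.stream.livepacketizer.ILiveStreamPacketizerActionNotify;

public class ModuleChunkDataHandler extends ModuleBase
{
	// Called as data packets are added to an HLS (Cupertino) chunk
	class DataHandler implements IHTTPStreamerCupertinoLivePacketizerDataHandler
	{
		public void onFillChunkDataPacket(CupertinoPacketHolder holder, AMFPacket packet, ID3Frames id3Frames)
		{
			getLogger().info("onFillChunkDataPacket: size=" + packet.getSize() + " timecode=" + packet.getTimecode());
		}
	}

	// Attach the data handler whenever a Cupertino packetizer is created for a stream
	class PacketizerListener implements ILiveStreamPacketizerActionNotify
	{
		public void onLiveStreamPacketizerCreate(ILiveStreamPacketizer liveStreamPacketizer, String streamName)
		{
			if (liveStreamPacketizer instanceof LiveStreamPacketizerCupertino)
				((LiveStreamPacketizerCupertino)liveStreamPacketizer).setDataHandler(new DataHandler());
		}

		public void onLiveStreamPacketizerDestroy(ILiveStreamPacketizer liveStreamPacketizer) { }
		public void onLiveStreamPacketizerInit(ILiveStreamPacketizer liveStreamPacketizer, String streamName) { }
	}

	public void onAppStart(IApplicationInstance appInstance)
	{
		appInstance.addLiveStreamPacketizerListener(new PacketizerListener());
	}
}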

Richard

Thanks. I think I already played with that and found, in the debugger, that the audio codec info callback was only called once per published stream, not once for every chunk of audio data.

I’m not sure if there is a better way, but I was able to do this by creating a custom live packetizer, which gets a callback for each AMFPacket.
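
For anyone who finds this later, here is the kind of per-packet inspection that such a callback makes possible (a rough sketch; it assumes the packets carry standard FLV/RTMP audio tag data, and the packetizer wiring itself is omitted):

import com.wowza.wms.amf.AMFPacket;
import com.wowza.wms.vhost.IVHost;

// Rough sketch: inspect the audio payload carried by an AMFPacket.
// Assumes the standard FLV/RTMP audio tag layout, where the upper 4 bits of
// the first payload byte are the sound format (6 = Nellymoser, 10 = AAC, 11 = Speex).
public class AudioPacketInspector
{
	public static void inspect(AMFPacket packet)
	{
		if (packet.getType() != IVHost.CONTENTTYPE_AUDIO)
			return;

		byte[] data = packet.getData();
		if (data == null || data.length < 2)
			return;

		int soundFormat = (data[0] >> 4) & 0x0f;
		long timecode = packet.getTimecode();

		// The encoded audio frame itself starts at offset 1
		// (offset 2 for AAC, which carries an extra AACPacketType byte).
		int frameOffset = (soundFormat == 10) ? 2 : 1;
		int frameLength = data.length - frameOffset;

		System.out.println("audio packet: codec=" + soundFormat
				+ " timecode=" + timecode + " frameBytes=" + frameLength);
	}
}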