I’ve reached a working point with WebRTC and didn’t want to bother support about the undocumented nuances of the bitrate SDP config that Chrome requires. It took over a month to get that information right. Chrome ends up relying on the min-bitrate setting even though it generally shouldn’t need to be configured at all.
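For anyone hitting the same wall: the nuance I'm referring to is Chrome's nonstandard `x-google-*` bitrate attributes, which have to be appended to the video codec's `a=fmtp:` line via SDP munging before the description is applied. Below is a minimal sketch of that munging as a pure string function; the payload-type handling and kbps values are my own assumptions for illustration, not anything from Wowza's docs.

```typescript
// Hedged sketch: append Chrome's nonstandard x-google-* bitrate hints to the
// video codec's fmtp line in an SDP string, before setRemoteDescription().
// The payload-type lookup and kbps values are assumptions for illustration.
function setChromeBitrateHints(
  sdp: string,
  minKbps: number,
  maxKbps: number,
): string {
  const lines = sdp.split("\r\n");
  // First video payload type from the m-line,
  // e.g. "m=video 9 UDP/TLS/RTP/SAVPF 96 97" -> "96"
  const mLine = lines.find((l) => l.startsWith("m=video"));
  if (!mLine) return sdp; // audio-only SDP: nothing to do
  const pt = mLine.split(" ")[3];
  const hints = `x-google-min-bitrate=${minKbps};x-google-max-bitrate=${maxKbps}`;
  const fmtpPrefix = `a=fmtp:${pt} `;
  let found = false;
  const out = lines.map((l) => {
    if (l.startsWith(fmtpPrefix)) {
      found = true;
      return `${l};${hints}`; // extend the existing fmtp parameters
    }
    return l;
  });
  if (!found) {
    // No fmtp line for that payload type: insert one after the m-line.
    out.splice(out.indexOf(mLine) + 1, 0, `a=fmtp:${pt} ${hints}`);
  }
  return out.join("\r\n");
}
```

In the browser you'd run the remote answer's `sdp` through this before `pc.setRemoteDescription()`. These attributes are Chrome-specific and undocumented, so treat them as a workaround rather than a stable API.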
There is now a requirement to support HLS plus live CEA-608 captions: the captions are injected into the RTMP stream, and the packetizer passes them through to HLS. Somehow I also need to get them to WebRTC playback. I can't work out how that's possible unless Wowza supports a data channel, so the captions can be sent over the data channel via Wowza.
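In case it helps frame the question: if a data channel (from Wowza or from a side channel off my own signaling server) were available, the cues could be shipped as small JSON messages and rendered client-side. A minimal sketch of what I have in mind, where the cue shape and the channel interface are my own assumptions and not a Wowza API:

```typescript
// Hedged sketch: serializing CEA-608-style caption cues as JSON for delivery
// over a WebRTC data channel (or any side channel). The cue shape and the
// CueChannel interface are assumptions, not a Wowza or browser API.
interface CaptionCue {
  startMs: number;    // presentation time of the cue, in milliseconds
  durationMs: number; // how long the cue stays on screen
  text: string;
}

// Structural interface so this accepts an RTCDataChannel or a test double.
interface CueChannel {
  send(data: string): void;
}

function sendCue(channel: CueChannel, cue: CaptionCue): string {
  const payload = JSON.stringify({ type: "caption", ...cue });
  channel.send(payload);
  return payload;
}
```

On the viewer side you'd parse each message in the channel's `onmessage` handler and display `text` in sync with the video element's `currentTime`. The open question is whether anything on the Wowza side can originate those messages from the 608 data it already extracts for HLS.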
Is there a known hacky workaround for now if data channels are not supported? Could a stream target be possible in the future when publishing WebRTC?
I am about to find out whether OBS / FFmpeg can be used to publish WebRTC via a plugin that handles the WebSocket signaling. It needs some testing.