Which WebRTC topology does Wowza support for multiple peers? Mesh / MCU / SFU?

Hi, I’m a newbie in WebRTC and am considering it for voice conferencing in addition to video.

I can’t work out which WebRTC topology Wowza supports. The examples only cover peer-to-peer.

I think Wowza may have MCU capability, but there’s no information about it at all.

Which topology does Wowza support?

I have the same question as the OP. Maybe rewording it a bit might elicit a response this time?

Wowza’s docs show that Wowza Streaming Engine supports having each participant in a WebRTC session send one stream to it, and WSE will send that stream to every other participant. This is in contrast to the pure P2P (mesh) approach, where every participant connects to every other participant and sends n-1 streams.
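To put numbers on that difference, here’s a quick back-of-the-envelope calculation (plain arithmetic, nothing Wowza-specific):

```typescript
// Streams each participant uploads/downloads in an n-way call.
// Mesh: everyone connects directly to everyone else.
// Server relay (SFU/MCU-style): everyone connects only to the server.
function streamCounts(n: number) {
  return {
    meshUploadsPerClient: n - 1,    // one outbound copy per peer
    meshDownloadsPerClient: n - 1,  // one inbound stream per peer
    relayUploadsPerClient: 1,       // a single stream to the server
    relayDownloadsPerClient: n - 1, // server forwards everyone else's stream
    serverForwards: n * (n - 1),    // total streams the server must send out
  };
}

console.log(streamCounts(6));
// mesh: 5 up / 5 down per client; relay: 1 up / 5 down, server sends 30
```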

My question is, how does WSE do it? WebRTC is fundamentally a 1:1 technology – the sender listens to feedback from the receiver and finely tunes the stream’s bitrate to match available bandwidth. It does not – normally – use a set of predetermined bitrates.
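For anyone unfamiliar with that feedback loop, here’s roughly what it looks like from the browser’s side. Browsers run this adaptation automatically; the sketch below just makes the mechanism visible using standard APIs (an established `pc` connection is assumed):

```typescript
// Sketch: read the congestion controller's outgoing bandwidth estimate and
// cap the video encoder accordingly. Assumes `pc` is an established
// RTCPeerConnection with one video sender.
async function capToEstimate(pc: RTCPeerConnection) {
  const stats = await pc.getStats();
  let estimate: number | undefined;
  stats.forEach((report) => {
    // The active candidate pair carries the bandwidth estimate.
    if (report.type === "candidate-pair" && report.availableOutgoingBitrate) {
      estimate = report.availableOutgoingBitrate; // bits per second
    }
  });
  if (estimate === undefined) return;

  const sender = pc.getSenders().find((s) => s.track?.kind === "video");
  if (!sender) return;

  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}];
  }
  params.encodings[0].maxBitrate = Math.floor(estimate * 0.9); // leave headroom
  await sender.setParameters(params);
}

// Re-check every couple of seconds while the call is up:
// setInterval(() => capToEstimate(pc), 2000);
```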

So how does WSE turn a 1:1 tech into something where a single stream can be sent to multiple recipients?

One approach is to act as a Multipoint Conferencing Unit (MCU). If WSE is taking this approach, then when a stream is sent to WSE, it will be transrated n-1 times and sent along to every other participant. Each participant gets a stream whose overall bandwidth is tailored for their connection. This is great, but it requires a fairly powerful server and can’t scale very high.
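In pseudocode terms, here’s the shape of the work a per-recipient MCU does (a sketch; `decode`, `encode`, and `Recipient` are hypothetical stand-ins, not a Wowza API):

```typescript
// Sketch of MCU-style relaying: one decode per incoming frame, then one
// tailored encode per recipient. The codec calls are hypothetical.
type RawFrame = unknown; // placeholder for a decoded video frame

declare function decode(compressed: Uint8Array): RawFrame;
declare function encode(raw: RawFrame, opts: { targetBitrateBps: number }): Uint8Array;

interface Recipient {
  id: string;
  estimatedBandwidthBps: number;
  send(frame: Uint8Array): void;
}

function relayAsMcu(frame: Uint8Array, senderId: string, participants: Recipient[]) {
  const raw = decode(frame); // one decode per incoming frame
  for (const r of participants) {
    if (r.id === senderId) continue;
    // n-1 encodes per incoming frame: this is where the CPU cost lives,
    // and why a pure per-recipient MCU scales poorly.
    r.send(encode(raw, { targetBitrateBps: r.estimatedBandwidthBps }));
  }
}
```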

Another approach is to act as a Selective Forwarding Unit (SFU). In this approach, WSE would receive a stream and send it along to the other n-1 participants unchanged. But how does it ensure that the stream fits within each participant’s available bandwidth? The n-1 receivers will all send feedback about their receipt of the stream, and WSE will coalesce that feedback into one “worst case” feedback summary that’s fed back to the stream’s sender. Under this approach, WSE would do no transrating work, which means it scales well computationally, but every recipient receives a stream that’s tailored for the recipient with the worst bandwidth, so video quality can suffer.
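That feedback-coalescing piece is simple to express (a sketch with hypothetical names, not Wowza code):

```typescript
// Sketch: how an SFU might collapse per-receiver bandwidth estimates
// (e.g. from REMB / transport-cc feedback) into a single number to
// report back to the sender.
function coalesceFeedback(receiverEstimatesBps: number[]): number {
  // "Worst case": the slowest receiver sets the rate for everyone.
  return Math.min(...receiverEstimatesBps);
}

const estimates = [4_000_000, 2_500_000, 800_000]; // three receivers
console.log(coalesceFeedback(estimates)); // 800000: everyone gets 0.8 Mbps video
```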

There are other possibilities. WSE could act like an MCU, but instead of transrating to match each recipient’s bandwidth, it could transrate to just a handful of predetermined bitrates and send the “best-fit” version to each recipient (sort of like how HLS or DASH works).
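That “best-fit from a fixed ladder” selection is the same logic an HLS or DASH player uses (a sketch; the ladder values are hypothetical):

```typescript
// Sketch: pick the highest pre-encoded rendition that fits the client's
// measured bandwidth, like an HLS/DASH player choosing a variant.
const ladderBps = [400_000, 1_200_000, 3_000_000];

function bestFit(clientBandwidthBps: number): number {
  const fitting = ladderBps.filter((b) => b <= clientBandwidthBps);
  // If nothing fits, fall back to the lowest rung as a best effort.
  return fitting.length > 0 ? Math.max(...fitting) : ladderBps[0];
}

console.log(bestFit(2_000_000)); // 1200000
console.log(bestFit(200_000));   // 400000
```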

So… with all that laid out…

What DOES Wowza Streaming Engine actually do? I didn’t see anything in the docs explaining which approach is used, and it’s a really important question for understanding how WSE will scale, and how video quality will react if a user with a lousy connection tries to participate.

Great question, and thanks for being thorough. I will have our WebRTC engineer answer when he is in the office tomorrow. Thanks for your patience.

Hi,

Currently, Wowza Streaming Engine acts as a WebRTC peer, but only over one-to-one connections.

You can publish on a connection or you can play back, and Wowza treats the WebRTC connection like any other type of audio/video stream it manages.

Wowza does not transrate based on the client connection’s capability, and it does not support peer-to-peer connections within a WebRTC framework.

Wowza acts as a ‘unicast exploder’: it receives an inbound stream via WebRTC and makes it available to any WebRTC client that connects for playback, or the published WebRTC stream can be made available (if the codecs are supported) via HLS, RTSP, RTMP, etc.
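To illustrate the ‘unicast exploder’ idea: the same published stream can then be played back over several protocols. A hedged example using Wowza’s usual default URL patterns (hypothetical host and names; exact ports and paths depend on your configuration):

```
# Stream "myStream" published over WebRTC into application "live"
wss://wowza.example.com/webrtc-session.json                <- WebRTC signaling (publish/play)
http://wowza.example.com:1935/live/myStream/playlist.m3u8  <- HLS playback
rtmp://wowza.example.com:1935/live/myStream                <- RTMP playback
rtsp://wowza.example.com:1935/live/myStream                <- RTSP playback
```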

Andrew.

Hi… a typical MCU-focused vendor will decode all incoming streams and merge the decoded streams into one in a single encoding pass (one per layout template, times the ABR flavors). Clients can then choose between templates (usually there are no more than three: active speaker, grid, and active speaker + screen share). The MCU also handles routing, in that it decides which client should receive which flavor and template. It is far more efficient on bandwidth consumption (each client uploads only once and downloads only once) and, indeed, only a little more taxing on CPU. Here’s why it’s not “by far” more encoding:
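Rough numbers make that last point concrete (the template and flavor counts are illustrative):

```typescript
// Illustrative encode counts per frame for a 10-person call.
const n = 10;

// Per-recipient transrating (the "tailored" MCU described earlier):
// every incoming stream is re-encoded once per other participant.
const perRecipientEncodes = n * (n - 1); // 90

// Template-based MCU: one composited encode per template per ABR flavor,
// independent of the participant count.
const templates = 3;  // active speaker, grid, active speaker + screen share
const abrFlavors = 3; // illustrative
const templateEncodes = templates * abrFlavors; // 9

console.log({ perRecipientEncodes, templateEncodes });
```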


Okay, so to make sure I’m understanding correctly: if a WebRTC client is connected for playback, there’s no adaptive bitrate.

Is that also the case for ingest? That is, if a WebRTC client is trying to upload a 5 Mbps video stream over a 2 Mbps connection, will Wowza Streaming Engine send the client the feedback necessary for the client to know that packets are being dropped and that it should adapt to a lower bitrate?

On the publishing side, Wowza does support adaptive encoding in version 4.8.5.

On the playback side, Wowza does not support adaptive encoding. The bitrate is controlled by the publisher, and it is set to whatever Wowza sees on the connection with the publisher. Note that you could choose to transcode the incoming stream to create multiple quality renditions in Wowza, and then selectively choose the appropriate rendition for each user upon connection; for example, send a lower-resolution rendition to mobile. This is very different, though, from adaptive encoding on the publisher side.
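Client- or app-side, that “pick a rendition at connect time” logic might look like this (a sketch; the rendition names and the device check are hypothetical, not a Wowza API):

```typescript
// Sketch: choose which transcoded rendition to request when a viewer
// connects. Assumes the Transcoder produces these (hypothetical) outputs.
const renditions = {
  hd: "myStream_720p",
  mobile: "myStream_360p",
};

function renditionFor(userAgent: string): string {
  // Crude device check, just to illustrate "send lower resolution to mobile".
  return /Android|iPhone|iPad/i.test(userAgent)
    ? renditions.mobile
    : renditions.hd;
}

console.log(renditionFor("Mozilla/5.0 (iPhone; ...)")); // "myStream_360p"
```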

To me, Wowza seems to behave closer to an SFU, relaying streams between publisher and receiver. And because it sits in the middle, it can record streams and also support complex workflows where a WebRTC stream needs to be relayed to conventional streaming modes like HLS.