We have successfully implemented the configuration below in a lab environment, including a simulated WAN. During configuration, switching from a variable bitrate stream (2-8 Mbps with a 3.5 Mbps target) to a constant 3.5 Mbps stream caused issues, which we resolved by raising the sortBuffer from 500 to 4000. This was a best guess rather than a scientific process.
Is there any documentation that lists all of these settings and their purpose? For the case described below, can you suggest any improvements to the default config as per the setup instructions listed here: https://www.wowza.com/docs/how-to-publish-and-play-a-live-stream-mpeg-ts-based-encoder
In particular, I am interested in which settings are relevant in the rtp-live configuration when pulling an MPEG-TS stream and serving RTMP, especially as latency and/or packet drop increase.
Thanks
Andy
(we have around 20 Wowza perpetual licenses and I can provide the final digits of the license key to support if required)
Environment:
**Hardware Encoder:** Spinnaker 7100 HD publishing multicast UDP MPEG-TS, H.264 @ 3.5 Mbps, 1280x720, constant bit rate, with each packet being 1316 bytes in length (content)
**Media Server:** Wowza Media Server subscribing to the multicast stream and serving RTMP, in some cases also recording MP4 to disk
**Client:** Windows 7, digital signage solution using the ActiveX Flash plugin to connect to and play back the RTMP stream
With UDP streaming in from the encoder you have, packets can arrive out of order; the sort buffer holds x milliseconds of packets and sorts them into the right order. Without it, the server simply drops any late packets. It also aligns the video and audio: audio packets are a lot smaller, so an audio packet with a similar timecode to a video packet will arrive sooner.
You can also add a jitter buffer, which helps align everything and logs packet loss in the log files.
Both of these buffers add to the latency of the stream, so if latency is an issue you need to keep the settings as low as possible.
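For reference, the sort buffer size is set as a stream property in Application.xml, under the Application > Streams > Properties path mentioned later in this thread. A minimal sketch, assuming the usual Name/Value/Type property format and an Integer type; the value is in milliseconds (500 is the starting value from the original post), and any additional settings needed to enable the packet sorter (RTPDePacketizerWrapperPacketSorter in the logs below) are not shown, so check the MPEG-TS setup article for your version:

<Streams>
    <StreamType>rtp-live</StreamType>
    <!-- StorageDir, packetizers, etc. unchanged from the default rtp-live setup -->
    <Properties>
        <!-- milliseconds of incoming packets held for re-ordering and A/V alignment;
             raise this if late/out-of-order packets are being dropped, at the cost of latency -->
        <Property>
            <Name>sortBufferSize</Name>
            <Value>500</Value>
            <Type>Integer</Type>
        </Property>
    </Properties>
</Streams>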
Also take a look at the General Tuning article for any tips that might suit your implementation.
Not sure what network this is running on, but 3.5 Mbps is pretty hefty over the Internet. We suggest 400-600 Kbps.
Charlie
Thanks Roger
So how should we configure the jitter buffer: in addition to, or instead of, the sort buffer? Is one “better” than the other in this instance? For example, should I reset the sortBuffer to 500 and put all of the additional buffering in a jitter buffer of 3500 ms? Is the flow jitter buffer > sort buffer > client, or something else? Does the packet loss logging track packets dropped because they arrive too late for the buffer settings, or all missing packets (in the event they have been dropped at an upstream router)?
I’ll experiment tomorrow, but if there is an ideal balance between the buffers please advise. In this case latency is not an issue (up to a point); we want a small number of clients to be as close to perfect as possible. The tuning guide, although helpful, does not really apply in our case as we are not pushing the machine: average 5% CPU utilisation and plenty of spare memory on the heap. I think the places we can gain are the buffers and any network tuning.
UPDATE: Adding the jitter buffer logged packet loss, but it drops so many packets that the stream does not play. The player timecode display does increment sporadically, suggesting that some packets are getting through. I have tried various configurations to balance the sortBuffer against the jitter buffer, but I am struggling without any detail on how they relate to one another. The only configuration that plays at this point is the sortBuffer set to ~4000.
WARN server comment 2011-02-03 11:17:35 - - - - - 104.225 - - - - - - - - RTPDePacketizerWrapperPacketSorter.packetLoss[rtplive/definst/live.stream:mpegts]: last:57624 curr:57625
WARN server comment 2011-02-03 11:17:35 - - - - - 104.226 - - - - - - - - RTPDePacketizerWrapperPacketSorter.packetLoss[rtplive/definst/live.stream:mpegts]: last:57626 curr:57627
WARN server comment 2011-02-03 11:17:35 - - - - - 104.226 - - - - - - - - RTPDePacketizerWrapperPacketSorter.packetLoss[rtplive/definst/live.stream:mpegts]: last:57631 curr:57884
WARN server comment 2011-02-03 11:17:35 - - - - - 104.226 - - - - - - - - RTPDePacketizerWrapperPacketSorter.packetLoss[rtplive/definst/live.stream:mpegts]: last:57885 curr:57616
WARN server comment 2011-02-03 11:17:35 - - - - - 104.326 - - - - - - - - RTPDePacketizerWrapperPacketSorter.packetLoss[rtplive/definst/live.stream:mpegts]: last:57618 curr:57619
WARN server comment 2011-02-03 11:17:35 - - - - - 104.326 - - - - - - - - RTPDePacketizerWrapperPacketSorter.packetLoss[rtplive/definst/live.stream:mpegts]: last:57620 curr:57623
WARN server comment 2011-02-03 11:17:35 - - - - - 104.326 - - - - - - - - RTPDePacketizerWrapperPacketSorter.packetLoss[rtplive/definst/live.stream:mpegts]: last:57624 curr:57626
WARN server comment 2011-02-03 11:17:35 - - - - - 104.327 - - - - - - - - RTPDePacketizerWrapperPacketSorter.packetLoss[rtplive/definst/live.stream:mpegts]: last:57627 curr:57628
UPDATE: I have been discussing this issue with Charlie via the support channels. The conclusion is that, although sortBufferSize should only be required to sync audio and video packets, it has a direct correlation with successful playback of our high-bandwidth (3.5 Mbps) stream. The reason is unknown at this time.
Any use of the jitterBuffer with our MPEG-TS stream causes failure; the jitterBuffer does not work with this type of stream. All packets are logged as dropped when logging is activated.
Adjusting RTP > DatagramConfiguration > Incoming > ReceiveBufferSize has had no impact, positive or negative.
For our stream, setting Application > Streams > Properties > Property > sortBufferSize to 1500 has resulted in a stable, great-quality stream over an MPLS network.
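For anyone following along, here is a sketch of the final working settings as described above. The surrounding Application.xml structure is abbreviated and the Integer type is my assumption, so adapt it to your own file:

<Streams>
    <StreamType>rtp-live</StreamType>
    <!-- other Streams settings left as per the MPEG-TS setup article -->
    <Properties>
        <!-- 1500 ms sort buffer gave a stable 3.5 Mbps MPEG-TS stream over our MPLS network;
             no jitterBuffer, since it dropped every packet with this stream type -->
        <Property>
            <Name>sortBufferSize</Name>
            <Value>1500</Value>
            <Type>Integer</Type>
        </Property>
    </Properties>
</Streams>
<!-- RTP > DatagramConfiguration > Incoming > ReceiveBufferSize was left at its default,
     as adjusting it made no measurable difference -->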