Can you help me find out (or at least give some hints about) why one set of Cupertino segmenter parameters (ChunkDurationTarget / MaxChunkCount / PlaylistChunkCount) is better or worse than another for a given channel, and how to prove it?
Is there any criterion that can be calculated and that describes a given set of parameters in a clear way?
Audio and video packets from a live encoder enter the Wowza media server and are segmented into time-based chunks. The duration of each chunk (in milliseconds) is controlled by the cupertinoChunkDurationTarget setting. As the chunks are created, they’re added to the available chunk list. The maximum number of chunks stored in the available chunk list is controlled by the cupertinoMaxChunkCount setting. When an iOS device requests the stream, a playlist is returned that contains the [n] most recently added chunks. The number of items returned in the playlist ([n]) is controlled by the cupertinoPlaylistChunkCount setting.
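For reference, you can inspect the chunklist the segmenter actually produces. Here is a minimal sketch (the URL is a placeholder for your own host/application/stream, and it assumes the server permits cross-origin requests):

```javascript
// Fetch a Wowza HLS chunklist and report how many chunks it advertises
// and their durations. Substitute your own host/application/stream names.
const CHUNKLIST_URL = 'http://wowza.example.com:1935/live/myStream/chunklist.m3u8';

async function inspectChunklist(url) {
  const text = await (await fetch(url)).text();
  // Every media segment in an HLS playlist is preceded by #EXTINF:<duration>,
  const durations = [...text.matchAll(/#EXTINF:([\d.]+)/g)]
    .map(m => parseFloat(m[1]));
  console.log(`chunks in playlist: ${durations.length}`);      // ~ cupertinoPlaylistChunkCount
  console.log(`chunk durations (s): ${durations.join(', ')}`);  // ~ cupertinoChunkDurationTarget / 1000
}

inspectChunklist(CHUNKLIST_URL).catch(console.error);
```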
Each chunk is cut when a key-frame is received by Wowza, so a chunk can only end on a key-frame. Because of this, the key-frame frequency of the incoming stream influences the actual duration of the chunks: in practice, the chunk duration is the target rounded up to the next key-frame boundary.
Usually, the default values for these properties are modified when users want to achieve a lower latency between the live event and the playback of that stream. In this case, having a live stream with a key-frame frequency setting of 1 key-frame per second, and a cupertinoChunkDurationTarget property value of 2000, would produce chunks with a duration of 2 seconds. Since the player needs 3 chunks to start playing back the stream, you can get a latency of 6-7 seconds, depending on the network quality.
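As a rough back-of-the-envelope model (my own simplification, not an official Wowza formula), the effective chunk duration is the target rounded up to a key-frame boundary, and the start-up latency is roughly the number of chunks the player buffers times that duration:

```javascript
// Rough start-up latency estimate. All durations in milliseconds.
// This is a simplified model, not an official Wowza formula.
function estimateStartupLatency({ chunkDurationTarget, keyFrameIntervalMs, chunksBufferedBeforeStart }) {
  // Chunks can only be cut on key-frames, so the actual chunk duration
  // is the target rounded up to the next key-frame boundary.
  const effectiveChunkMs =
    Math.ceil(chunkDurationTarget / keyFrameIntervalMs) * keyFrameIntervalMs;
  return chunksBufferedBeforeStart * effectiveChunkMs;
}

// The example above: 1 key-frame/s, a 2000 ms target, 3 chunks buffered
// => 3 * 2000 = 6000 ms, matching the ~6-7 s latency (plus network overhead).
console.log(estimateStartupLatency({
  chunkDurationTarget: 2000,
  keyFrameIntervalMs: 1000,
  chunksBufferedBeforeStart: 3,
}));
```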
However, if the chunks are too small and the internet connection on the client side is poor, it can take more than 2 seconds to download a single chunk, and your playback clients might end up with buffering issues on their devices.
You must choose the configuration that best matches your requirements and the conditions of the network your server and clients are operating on.
So, the first criterion could be the delay between unpausing the player and the stream actually starting, and the second the total length of the intervals during which the video stalls, buffering the next chunks?
And for both, the smaller the better.
I need a JavaScript test page and a test stream to measure these values with different segmenter configurations.
That’s not always the case. If you use adaptive bitrate streaming (ABR), smaller chunks translate to faster switching between renditions. If the client’s network is not stable, the player might switch from one bitrate to another very often, causing a very uneven experience.
Smaller chunks also mean longer playlists and more computing power spent generating and parsing them, which consumes more power on mobile client devices.
Smaller chunk sizes also put a higher load on the network: more requests are made, and the ratio of actual stream data to metadata (playlists) shrinks.
You will have to trade off optimal video quality and viewing experience against a fast start and bandwidth/power optimizations.
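As for the test page you asked about, here is a minimal sketch of the measurement logic, assuming a page with a `<video id="player">` element pointing at your HLS playlist URL and a browser with native HLS support (e.g. Safari/iOS). It measures both of your criteria using the standard HTMLMediaElement events:

```javascript
// Assumes the page contains:
//   <video id="player" src="http://wowza.example.com:1935/live/myStream/playlist.m3u8"></video>
// (placeholder URL - use your own stream)
const video = document.getElementById('player');

let playRequestedAt = null; // when play() was called
let stallStartedAt = null;  // start of the current buffering stall
let totalStallMs = 0;

video.addEventListener('playing', () => {
  const now = performance.now();
  if (playRequestedAt !== null) {
    // Criterion 1: delay between pressing play and playback starting
    console.log(`startup delay: ${(now - playRequestedAt).toFixed(0)} ms`);
    playRequestedAt = null;
  }
  if (stallStartedAt !== null) {
    // Criterion 2: time spent stalled while buffering the next chunk
    totalStallMs += now - stallStartedAt;
    console.log(`total stall time: ${totalStallMs.toFixed(0)} ms`);
    stallStartedAt = null;
  }
});

// 'waiting' fires when playback halts because the buffer has run dry
video.addEventListener('waiting', () => {
  stallStartedAt = performance.now();
});

playRequestedAt = performance.now();
video.play().catch(console.error); // may require a user gesture due to autoplay policies
```

Run the same page against the same stream with different segmenter settings and compare the two numbers; smaller chunks should lower the start-up delay but, as noted above, can increase the total stall time on a poor connection.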