Richard, all,
we ended up modifying a few classes in OSMF's HTTP streaming management code. Here is a quick hint at the solution (sorry for my terrible English).
Let’s assume that the encoder has plenty of available bandwidth and no problems during the streaming; thus, if the fragment duration is set to 4 seconds, the encoder (and Wowza) will write a new fragment every 4 seconds.
The client starts playing the stream: it asks for the manifest, chooses the right playlist (based on the currently available bandwidth), opens the playlist_b.abst and gets the list of currently available fragments. It then starts downloading the fragments one after another and reloads a fresh copy of the playlist.abst file when it reaches (or approaches) the end of the fragment list.
Now, suppose that for whatever reason the player has a network problem which reduces its available bandwidth. It will load the fragments much more slowly; if its available bandwidth drops below the fragments’ bitrate it will start to accumulate delay.
If the delay grows enough, the player could be downloading a fragment, say the last element of the last downloaded playlist*.abst with ID 100, and then request a new playlist.abst. Wowza answers with a playlist starting from the first fragment still available in the current live window, say ID 105.
The player checks whether fragment “105” is the next playable one (whether its start time matches the last played fragment’s end time and whether the ID is consecutive); this check fails (the new ID is higher than the last played fragment’s, but the start time is much later than that fragment’s end time), so the player enters the playlist.abst reload loop, “hoping” to receive “the next fragment” (which in a live streaming scenario will fail again and again).
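To make the failure concrete, the continuity check roughly boils down to something like this (a simplified sketch with illustrative values and variable names, not the actual OSMF source):

// Fragment 100 was just played; fragment 105 is the first entry of the refreshed playlist.
var lastPlayedId:uint = 100;
var lastPlayedEndTime:Number = 400.0;   // seconds, with 4-second fragments
var candidateId:uint = 105;
var candidateStartTime:Number = 416.0;  // fragments 101-104 are missing: a 16-second gap

var isConsecutive:Boolean = (candidateId == lastPlayedId + 1);                      // false
var timesMatch:Boolean = (Math.abs(candidateStartTime - lastPlayedEndTime) < 0.1);  // false

if (!isConsecutive || !timesMatch)
{
    // OSMF concludes the next fragment is "not available yet" and keeps
    // reloading playlist.abst, waiting for fragment 101, which the live
    // window will never expose again.
}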
The idea is simple: when this check fails, there is no available fragment. We add the following checks:
- has the quality switched (-> our bandwidth has changed)? If so, pick the first available fragment and start from there;
- if not, ask for the next ID (the fragment immediately after the one you played) even if it’s not present in the fragment list.
This is a very naive approach and it’s not guaranteed to work; however, if you configure a larger value for sanjoseMaxChunkCount than for sanjosePlaylistChunkCount, this trick can work.
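Roughly, the modification looks like the sketch below. This is only an illustration of the idea, not the actual OSMF source: the names (nextFragment, currentQuality, lastPlayedQuality, lastPlayedId, firstAvailableFragmentId, fragmentById, buildFragmentFor) are assumptions used for clarity.

// Fallback applied when the normal "next fragment" check fails.
if (nextFragment == null) // the usual consecutive-ID / start-time lookup failed
{
    if (currentQuality != lastPlayedQuality)
    {
        // quality switch: restart from the first fragment the new playlist advertises
        nextFragment = fragmentById(firstAvailableFragmentId);
    }
    else
    {
        // same quality: blindly request lastPlayedId + 1 even if the playlist
        // no longer lists it; the server may still have it on disk when
        // sanjoseMaxChunkCount > sanjosePlaylistChunkCount
        nextFragment = buildFragmentFor(lastPlayedId + 1);
    }
}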
What do you think?
Regards,
Leonardo
Latcho, all,
I haven’t been authorized to share the code yet, but I can explain the background idea (we’re working on OSMF 1.6.1 pre, but I think the very same idea will also work in OSMF 2).
The core functions that calculate the next fragment to be played are in the org.osmf.net.httpstreaming.f4f.AdobeFragmentRunTable class. You should keep track of the currently played stream (“quality”) in order to understand, when you get into the loop, whether it is switching or not.
If the next-fragment detection fails, the function returns a null value (and this is what triggers the loop), because the current playlist doesn’t carry what the player thinks is the next fragment.
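A hedged sketch of the “keep track of the current quality” part (the field and function names here are assumptions for illustration, not real OSMF members):

// Remember which quality (stream index) produced the fragments we played, so that
// when the next-fragment lookup returns null we can tell a bandwidth switch apart
// from having fallen out of the live window.
private var _lastPlayedQuality:int = -1;

private function rememberQuality(quality:int):void
{
    _lastPlayedQuality = quality;
}

private function isQualitySwitch(requestedQuality:int):Boolean
{
    return _lastPlayedQuality != -1 && requestedQuality != _lastPlayedQuality;
}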
I think this should be enough for you to build your own workaround.
Let me know,
Leonardo
Hello Leonardo,
Thanks for your explanation.
I have the same problem over low bandwidth connections, so that explains why.
Would you mind sharing your workaround in code? Even if you think it’s naive, I think it’s clever that you figured it out!
I could use the code even today, even if you are not too sure about it, since I’m streaming football (the European Championship) right NOW and I hit the loop issue.
THANKS !
Kind Regards,
Stijn AKA latcho
As I understand it, Best Effort Fetch in OSMF 2.0 does something like that (it attempts to download a fragment which is not in the fragment list but should be) - http://sourceforge.net/apps/mediawiki/osmf.adobe/index.php?title=Best-Effort_Fetch . Unfortunately I can’t test it, because Wowza does not work with OSMF 2.0 at all - it starts looping immediately after the stream begins, and not even one fragment gets downloaded.
We discovered that playing with these variables can fix the initial bootstrap looping. Depending on your (DVR/caching) config you can lower them or raise them.
But to me it seems that keeping them low is the best setup if no caching of fragments is configured.
OSMFSettings.as
/**
* @private
*
* The amount of seconds OSMF will stay behind the live point in dvr scenarios.
*/
public static var hdsDVRLiveOffset:Number = 4;
/**
* @private
*
* The amount of seconds OSMF will stay behind the live point in the pure live scenario.
*/
public static var hdsPureLiveOffset:Number = 5;
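For quick experiments, these are public statics, so they can also be overridden from application code before the stream is loaded; a minimal sketch (assuming the class sits in org.osmf.utils as in stock OSMF; the values are only examples to tune for your fragment duration and caching setup):

import org.osmf.utils.OSMFSettings;

// Example values only: set these before creating/loading the media element.
OSMFSettings.hdsDVRLiveOffset = 4;   // seconds behind the live point in DVR scenarios
OSMFSettings.hdsPureLiveOffset = 5;  // seconds behind the live point in pure live scenarios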
The thing is, if you make them too high you easily fall out of the LIVE window, get a 404 on the fragment request and end up in the bootstrap loop.
If they are too low you might request a future fragment that has not been written to disk yet…
If you don’t have the best-effort fetch feature enabled (I did not test best-effort fetch yet), no DVR window enabled in your config, and low bandwidth on the client side, it seems that you can indeed easily fall out of the LIVE window (sometimes immediately), and there is no recovery by default. Mostly on low-bandwidth connections, if some extra buffering was needed during playback, the built-up delay towards live can quickly get you into trouble (especially if you combine it with a larger live offset).
Hi Leonardo,
Thanks for the info. Since we are in the middle of the event, it’s too dangerous to start fiddling in the OSMF core myself right now. Although I will look at it, a proven fix would help.
If you could share the code later I’d still be interested, and the community (or just me) will certainly appreciate it.
Thanks,
Stijn AKA latcho