So like a lot of other people, I'm trying to get the latency as low as possible on iOS for a live stream (not recorded). I've been looking at the samples here, and my understanding so far is that the key is adjusting cupertinoChunkDurationTarget and the keyframe interval. I've read that Apple's recommended chunk duration is 10 seconds, but I have two questions.
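In case it matters, here is roughly where I'm changing the chunk duration, in the cupertino LiveStreamPacketizer properties of Application.xml (just a sketch of my config from memory, so the surrounding structure may not be exact):

    <LiveStreamPacketizer>
        <Properties>
            <Property>
                <Name>cupertinoChunkDurationTarget</Name>
                <!-- target chunk duration in milliseconds; 10000 is the 10-second default, and this is what I'd drop to 1000 -->
                <Value>10000</Value>
                <Type>Integer</Type>
            </Property>
        </Properties>
    </LiveStreamPacketizer>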
- What is the downside (what negative results will I see) of setting cupertinoChunkDurationTarget = 1000 with a 1-second keyframe interval? Aside from the extra encoding horsepower and the increased bandwidth needed for the source stream, what will I see on the viewing side when I reduce these values? Is that why the stream sometimes freezes? Basically I'm trying to understand the consequences of reducing the chunk duration (my rough back-of-the-envelope math is at the bottom of this post).
- Remember, I'm talking about a live stream here (not recorded). Like others have mentioned, I've been able to get latency below 10 seconds if I start with a source that is 320x240, 10 fps, with a 1-second keyframe. However, if I start with a 640x480 source, the latency jumps to 20+ seconds. Does it make sense to have the server transcode the source down to 320x240 before it gets chunked for iOS, to help keep the latency down? Or does transcoding it down to 320x240 and then chunking it introduce more delay than I'd save compared to just sticking with the 640x480 source? I tried testing this, and I think I configured it correctly, but the delay was the same 20-30 seconds. What are my options if I want to stick with a 640x480 source stream? (One idea I'm considering is sketched at the end of this post.)
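Here's the back-of-the-envelope math I mentioned for question 1. It assumes the iOS player buffers roughly three chunks before it starts playback, which is what I've read here and elsewhere, so correct me if that's wrong:

    latency ≈ (buffered chunks × chunk duration) + encoding/segmenting/delivery overhead
            ≈ 3 × 10s + a few seconds ≈ 30+ seconds with the default 10-second chunks
            ≈ 3 × 1s  + a few seconds ≈ 4-5 seconds with 1-second chunks and a 1-second keyframe

If that's roughly right, it explains why shrinking the chunk duration helps so much, but it doesn't tell me what I give up on the playback side by doing it, which is really what I'm asking.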
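The idea I mentioned for question 2 (just a sketch; the URLs and stream names are made up and I haven't verified these settings are optimal) is to re-encode the 640x480 source myself with a tight keyframe interval before it ever hits the server, instead of having the server transcode it down to 320x240:

    ffmpeg -i rtmp://encoder.example.com/live/source640 \
        -c:v libx264 -preset veryfast -tune zerolatency \
        -s 640x480 -r 15 -g 15 -keyint_min 15 -sc_threshold 0 \
        -c:a aac -b:a 64k \
        -f flv rtmp://myserver.example.com/live/myStream

The -g 15 / -keyint_min 15 at 15 fps should force a keyframe every second, so the packetizer can cut 1-second chunks without waiting on a distant keyframe. Is that a reasonable approach, or is there something better to do on the server side?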
Thanks!