Hello, I was wondering if anyone could help with the issue we are having with
live iOS streaming.
We are using a hardware H264 encoder on a Texas Instruments DM365 SoC. The H264
stream it produces is muxed into an FLV container and then streamed to Wowza
over RTMP.
Streaming to Flash players works fine, but iOS devices such as iPod and iPad
have trouble displaying video served through HLS.
We have noticed that video on iOS plays if the H264 source connects to Wowza
AFTER the iOS device. If the iOS device connects to Wowza after the video
source has already connected, it does not display any video.
It seems that there is some information in the first segment which is absent from
the subsequent video segments, and which iOS needs in order to decode the video.
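For reference, a rough Python 3 sketch of how we can check this on the captured
FLV (file name stream_test.flv, linked further down; the standard FLV tag layout
is assumed). It walks the video tags and prints, for each one, whether it is a
keyframe and whether it is the AVC sequence header that carries SPS/PPS, so it is
easy to see whether that header ever repeats after the start:

# Rough sketch: walk FLV tags and report what each video tag contains.
# Assumes the standard FLV layout: 9-byte header, then repeated
# [PreviousTagSize (4)] [TagType (1)] [DataSize (3)] [Timestamp (3)]
# [TimestampExtended (1)] [StreamID (3)] [Data (DataSize bytes)].
import struct
import sys

def read_tags(path):
    with open(path, 'rb') as f:
        f.read(9)                   # 'FLV' signature, version, flags, header size
        f.read(4)                   # PreviousTagSize0
        while True:
            head = f.read(11)
            if len(head) < 11:
                break
            data_size = struct.unpack('>I', b'\x00' + head[1:4])[0]
            ts = struct.unpack('>I', head[7:8] + head[4:7])[0]   # extended byte is the MSB
            data = f.read(data_size)
            f.read(4)               # PreviousTagSize
            yield head[0], ts, data

path = sys.argv[1] if len(sys.argv) > 1 else 'stream_test.flv'
for tag_type, ts, data in read_tags(path):
    if tag_type != 9 or not data:                   # 9 = video tag
        continue
    frame_type = data[0] >> 4                       # 1 = keyframe, 2 = inter frame
    codec_id = data[0] & 0x0F                       # 7 = AVC
    if codec_id == 7:
        avc_packet_type = data[1]                   # 0 = sequence header (SPS/PPS), 1 = NALUs
        print('%8d ms  frame_type=%d  avc_packet_type=%d' % (ts, frame_type, avc_packet_type))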
We were able to successfully stream to iOS devices by using a software H264
encoder and a webcam with the following command:
ffmpeg -r 25 -f vfwcap -i 0 -r 25 -an -f flv -s 320x240 -vcodec
libx264 -b 512k -flags +loop -cmp +chroma -partitions
+parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0
-me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt
1024k -maxrate 512k -bufsize 512k -rc_eq 'blurCplx^(1-qComp)' -qcomp
0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 320:240 -g 30 -async
2 "rtmp://127.0.0.1/live/ffmpegstrem"
On the source, there are currently only a few parameters we can tweak on the
gstreamer element that controls the encoder (bitrate, CBR/VBR, intra-frame
interval), and the encoder defaults to high profile H264.
In the latest gstreamer plugin code there are options to set the encoder to
baseline (or main) profile, set the level to 3.1, and turn on insertion of AUD
NALs and SPS/PPS headers. Unfortunately, enabling these did not help: the stream
on iOS would still only play if the player got the first segment, as before.
We were unable to run Apple’s mediastreamvalidator on the m3u8 served by Wowza
- the validator would hang indefinitely with no output.
There is no audio muxed into the FLV. The video pipeline on the source nominally
runs at 30 fps, but we are actually outputting about 15 fps.
We have also tried manually streaming an FLV generated on the source by splitting
it into tags, removing the script tag which contains the metadata, and then
publishing the FLV tags in sequence to Wowza. After we reach the last tag, we
start streaming again from the first tag, but we modify the FLV container’s
timestamps so that the tags look like fresh data. We had the same issue with this
set-up - iOS would start playing only when it received the start of the H264
stream, that is, each time the cycle restarted, but not mid-stream.
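In case the details matter, the splitting and re-timestamping step is roughly
equivalent to the Python 3 sketch below. To keep it self-contained it just writes
the re-timed tags to a local file instead of publishing them to Wowza over RTMP,
so the output name (looped.flv), the pass count and the per-pass gap are only
illustrative:

# Simplified sketch of the tag-splitting / timestamp-rewriting step.
# It reads an FLV, drops the script (metadata) tag, and keeps emitting the
# remaining tags in a loop, bumping each tag's timestamp so that the second
# and later passes look like a continuation of the stream.
import struct

def split_tags(path):
    """Return the 9-byte FLV header and a list of (tag_type, timestamp, body) tuples."""
    tags = []
    with open(path, 'rb') as f:
        header = f.read(9)
        f.read(4)                                   # PreviousTagSize0
        while True:
            head = f.read(11)
            if len(head) < 11:
                break
            data_size = struct.unpack('>I', b'\x00' + head[1:4])[0]
            ts = (head[7] << 24) | (head[4] << 16) | (head[5] << 8) | head[6]
            body = f.read(data_size)
            f.read(4)                               # PreviousTagSize
            tags.append((head[0], ts, body))
    return header, tags

def write_tag(out, tag_type, ts, body):
    head = bytes([tag_type]) + struct.pack('>I', len(body))[1:]
    head += struct.pack('>I', ts & 0xFFFFFF)[1:] + bytes([(ts >> 24) & 0xFF])
    head += b'\x00\x00\x00'                         # StreamID is always 0
    out.write(head + body)
    out.write(struct.pack('>I', 11 + len(body)))    # PreviousTagSize

header, tags = split_tags('stream_test.flv')
tags = [t for t in tags if t[0] != 18]              # drop the script/metadata tag (type 18)
duration = max(ts for _, ts, _ in tags) + 40        # per-pass length plus a small gap, in ms

with open('looped.flv', 'wb') as out:
    out.write(header)
    out.write(struct.pack('>I', 0))                 # PreviousTagSize0
    for cycle in range(3):                          # three passes, re-timed on each pass
        for tag_type, ts, body in tags:
            write_tag(out, tag_type, ts + cycle * duration, body)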
The FLV obtained from the source is here: http://none.itekako.com/stream_test.flv
Is it possible that our stream is missing some data which should be inserted into
it periodically, such as IDR slices or SPS/PPS headers?
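If it helps to check this on the capture, here is a rough Python 3 sketch that
counts the NAL unit types inside each AVC NALU tag (it assumes the usual 4-byte
NAL length prefixes; the real length size comes from the AVC sequence header), so
it is easy to see whether IDR slices (type 5), SPS (7), PPS (8) or AUDs (9) ever
appear mid-stream:

# Rough sketch: count H264 NAL unit types per FLV video tag.
# Assumes length-prefixed NALUs with 4-byte length fields, which is the
# common case (the exact size is defined in the AVC sequence header).
import struct
from collections import Counter

NAL_NAMES = {1: 'non-IDR slice', 5: 'IDR slice', 6: 'SEI',
             7: 'SPS', 8: 'PPS', 9: 'AUD'}

def video_tags(path):
    with open(path, 'rb') as f:
        f.read(9); f.read(4)                        # FLV header + PreviousTagSize0
        while True:
            head = f.read(11)
            if len(head) < 11:
                break
            data_size = struct.unpack('>I', b'\x00' + head[1:4])[0]
            ts = (head[7] << 24) | (head[4] << 16) | (head[5] << 8) | head[6]
            data = f.read(data_size)
            f.read(4)                               # PreviousTagSize
            if head[0] == 9:                        # video tag
                yield ts, data

for ts, data in video_tags('stream_test.flv'):
    if (data[0] & 0x0F) != 7 or data[1] != 1:       # only AVC NALU packets
        continue
    nals = Counter()
    pos = 5                                         # skip FrameType/CodecID, AVCPacketType, CompositionTime
    while pos + 4 <= len(data):
        nal_len = struct.unpack('>I', data[pos:pos + 4])[0]
        pos += 4
        if pos >= len(data) or nal_len == 0:
            break
        nals[NAL_NAMES.get(data[pos] & 0x1F, 'other')] += 1
        pos += nal_len
    print('%8d ms  %s' % (ts, dict(nals)))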
Does Wowza change the H264 stream in any way when it transmuxes video? For
example, does it insert AUD NALs if there aren’t any in the input stream?
Thanks,
Bojan