Hi,
I’ve been searching all over the web and can’t seem to find much information about this. Is there a way to stream multicast video to an iOS device? If this isn’t supported by the built-in iOS video player, are there third-party apps that can do it?
Thanks
None that I know of. Also, since these devices rely on WiFi, multicast is definitely not recommended.
Perhaps it is undesirable because there is no guaranteed delivery of packets as there is with TCP unicast, and wireless is prone to transmission problems. But I suppose for a live transmission it’s no different from tuning into the radio: if you have bad reception, you’ll lose data.
Anyhow, it seems possible: http://www.wi-fiplanet.com/tutorials/article.php/3433451
Perhaps VLC can receive a multicast stream on an iPad: http://tldp.org/HOWTO/VideoLAN-HOWTO/x549.html
(I haven’t tried it)
It seems to me like WiFi would actually be a pretty good medium for multicast. Wouldn’t it be better to have a single multicast stream on a WiFi segment instead of dozens of individual unicast streams (one per user)?
I think when you plan to use any network and have options to choose from, you need to work out the best combination.
Multicast is great for serving an amorphous crowd of listeners. A high powered server isn’t required because the network is the distribution server. If all the network switches are multicast aware, it can be very efficient because the branches of the network tree will be automatically trimmed to forward the multicast stream to only branches with connected listeners.
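To make the "network as the distribution server" idea concrete, here is a minimal sketch of how a listener subscribes to a multicast group using Python's standard socket module; the group address and port are made up for illustration. Joining the group triggers IGMP, which is what lets multicast-aware switches trim the tree to only branches with listeners.

```python
import socket
import struct

GROUP = "239.1.1.1"   # hypothetical multicast group address
PORT = 5004           # hypothetical port

# Ordinary UDP socket, bound to the port the stream is sent on.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group: the kernel sends an IGMP membership report, which is
# how the network learns this branch has a listener.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# From here the receiver would loop on sock.recvfrom(...) to pull packets.
sock.close()
```

The sender side needs nothing special: it just sends UDP datagrams to the group address, once, regardless of how many listeners have joined.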
As has been pointed out, WiFi can be hit and miss. In a TCP network various schemes are employed to overcome this issue, mostly buffering. But if what you want is low latency for live performance, buffering doesn’t work for you. What you’re forced into is to just send blind and let broken packets be replaced by silence.
In a TCP network with hundreds of connected users, network bandwidth becomes a limiting factor, so complex compression schemes are used to reduce the required bandwidth. These schemes add latency because they encode blocks of samples to improve compression, and if even a single bit error occurs, the entire block is lost. Retransmission takes care of most packet drops.
But we want low latency. Loss of a block would create a long silence that would quickly become fatiguing to the audience. The answer is to use a coding scheme that is very simple; though it is not very efficient, using multicast keeps the required network bandwidth in check. Audio is normally encoded at the CD standard rate of 44,100 samples per second or the video standard rate of 48,000 samples per second. Suppose you decide to encode 8 stereo samples per packet. If one packet is lost, about 0.2 milliseconds of audio will be replaced by silence or by the value of the last good sample. If a packet arrives corrupted, just throw it away. The result may or may not be a noticeable pop, depending on the audio being encoded. Most listeners can tolerate this until the network error rate gets so high that the pops or silences make it unlistenable. I would argue that at that error rate, retransmission falls apart as a mitigating scheme too: so many packets are being retransmitted that they become the predominant traffic on the network and just make things worse.
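The arithmetic above, plus the "hold the last good sample" concealment, can be sketched like this (packet layout and names are illustrative, not a real protocol):

```python
SAMPLE_RATE = 44_100      # CD standard rate, samples per second
FRAMES_PER_PACKET = 8     # 8 stereo sample frames per packet, as above

# Audio lost when one packet is dropped:
loss_ms = FRAMES_PER_PACKET / SAMPLE_RATE * 1000
print(f"{loss_ms:.3f} ms of audio per lost packet")  # ~0.181 ms, i.e. about 0.2 ms

def conceal(packets):
    """Replace missing packets (None) by repeating the last good sample frame.

    Each packet is a list of (left, right) sample pairs; a lost or
    errored packet is represented as None and simply thrown away.
    """
    last_frame = (0, 0)   # silence until the first good packet arrives
    out = []
    for pkt in packets:
        if pkt is None:
            out.extend([last_frame] * FRAMES_PER_PACKET)
        else:
            out.extend(pkt)
            last_frame = pkt[-1]
    return out

# Example: one lost packet between two good ones still yields a
# continuous stream, with the gap filled by the last good sample.
good = [(100, -100)] * FRAMES_PER_PACKET
stream = conceal([good, None, good])
```

With so little audio per packet, the concealment is crude but the damage per loss is tiny, which is the whole point of keeping packets small.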
So, I think multicast on a WiFi network can be a good choice if a simple coding scheme like PCM (WAV) is chosen over complex block-oriented coding. Upwards of 1 Mbps of channel bandwidth will be occupied, compared to something like 32 Kbps for a more complex coder, but once there are more than about 40 listeners, multicast is more conservative of network bandwidth.
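As a back-of-envelope check of those numbers (assuming 16-bit stereo PCM at the CD rate, and taking 32 Kbps as the compressed rate):

```python
# Uncompressed stereo PCM: 44,100 samples/s x 2 channels x 16 bits
PCM_BPS = 44_100 * 2 * 16       # = 1,411,200 bps, i.e. ~1.4 Mbps
CODEC_BPS = 32_000              # assumed complex-coder rate from the post

# Unicast sends one copy per listener; multicast sends one copy total.
# Breakeven listener count for a single multicast PCM stream vs
# per-listener unicast compressed streams:
breakeven = PCM_BPS / CODEC_BPS
print(f"PCM: {PCM_BPS / 1e6:.2f} Mbps, breakeven ≈ {breakeven:.0f} listeners")
```

That works out to roughly 44 listeners, which matches the "more than 40 listeners" figure above.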