Richard
Thanks! I would agree with the other users about the m1.small. I’ve been surprised by how well some things work on it, but also disappointed by some of its other resources.
I have NEVER been able to see 150Mb/s out of a small instance. For example, for some time now I’ve been running a live event each Wednesday evening that requires one ORIGIN and several EDGE servers (the number of EDGE servers has been anywhere from 3 to 20, depending on demand for the event).
Each EDGE for this event receives two streams – a 350kbps stream and a 700kbps stream. These are all flash-based (all iOS traffic, for example, is pushed to a different EDGE setup).
The limitation I’ve seen over and over is not the bandwidth, but the number of users connected. Right around the 200-connection mark an EDGE server will start to bottleneck, and video will start to suffer for the clients connected to that EDGE. The bandwidth going OUT is usually around the 70Mb/s mark… maybe a tad higher. Of the 200 connections, most are on the 350kbps stream, with just 5-10 on the 700kbps stream.
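Just to sanity-check that 70Mb/s number, here’s a quick back-of-the-envelope calculation in Python (the 190/10 split is my own guess at the mix, based on the counts above):

# Rough estimate of EDGE egress at ~200 connections
low_viewers, low_kbps = 190, 350      # most viewers on the 350kbps stream
high_viewers, high_kbps = 10, 700     # a handful on the 700kbps stream

egress_mbps = (low_viewers * low_kbps + high_viewers * high_kbps) / 1000.0
print("Estimated egress: %.1f Mb/s" % egress_mbps)   # ~73.5 Mb/s

That lines up nicely with the "around 70Mb/s, maybe a tad higher" I see on the graphs.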
That said, I’m guessing the limitation with the m1.small is some sort of processing resource – as I can see the “processes in the run queue” in CACTI (these are Wowza 2.X AMIs) start to exceed the 1.0 mark. CACTI’s “processes in the run queue” is usually my favorite graph to watch for these setups, as that data is right on the mark in telling me whether a Wowza instance is going to have trouble handling all the video requests.
If I launch an m1.small instance and just let it sit (doing absolutely nothing), the “processes in the run queue” in CACTI will show a fluctuating number… jumping from 0.18 to 0.30 over the course of a minute or two, and then back down again… over and over. I’m not sure exactly what’s happening… but that’s what I’ve seen from the graphs.
So – I’ve learned to keep the connection count on those particular EDGE instances to around 150 max before launching additional EDGE servers. And even though all the EDGE servers are load balanced, some will show a moderately low “processes in the run queue” during a live event (usually around 0.25 - 0.35) while other EDGE servers seem to jump to 0.85 every couple of minutes. Odd.
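If anyone wants to turn that rule of thumb into something you can plug numbers into, here’s a tiny Python sketch (the 150-connection cap is just my own number from above, not anything official from Wowza or Amazon):

import math

MAX_CONNS_PER_EDGE = 150   # my own rule of thumb for an m1.small EDGE

def edges_needed(expected_viewers):
    # How many EDGE instances to launch for an expected audience size
    return max(1, int(math.ceil(expected_viewers / float(MAX_CONNS_PER_EDGE))))

for viewers in (300, 1000, 3000):
    print("%d viewers -> %d EDGE instances" % (viewers, edges_needed(viewers)))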
In another setup – also weekly – I’ve had an ORIGIN m1.small instance running 24/7. This one is for a church, and each week they’ll have 10 congregations (at once) presenting live with recording (live-record), which is also load-balanced to additional m1.small EDGE instances that run for one day. In addition to this, there are two congregations that push rtplive streams (not recorded and not load-balanced). All of these congregations have individual “text” chat modules (based on the simple text chat module Wowza provided), which are handled by the ORIGIN. All VOD is also handled by the ORIGIN.
So… the m1.small instance has been able to take 10 incoming live-record streams, record them and push them to the EDGE locations… take 2 incoming rtplive streams and push them to connected users (about 25 users connected to each one)… as well as handle all the text chat traffic and VOD playback (usually about a dozen VOD files) all at the same time. Not too shabby, if you ask me.
Until a month ago, this setup seemed to work just fine. Then they started to experience a few issues when another congregation started live-record streaming… so I think that was the straw that broke the camel’s back.
I upgraded them to a c1.medium instance – thinking that I could keep the “processes in the run queue” lower with this “High-CPU” instance… and WOW… what a difference. The extra compute units (the c1.medium gets five EC2 compute units across two virtual cores, versus the m1.small’s single unit) seem to have made all the difference. With the same load as the previous m1.small instance, this new c1.medium instance seems to be taking a nap when it comes to “processes in the run queue” in CACTI… reporting extremely low numbers during the most aggressive loads. This is a good thing.
With that information, I’ll continue to push the c1.medium instance to see where the next bottleneck is… be it bandwidth, CPU, processes, etc.
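For anyone curious, this is roughly the kind of thing I’ll be logging on the c1.medium to see which ceiling gets hit first – just a sketch that samples the 1-minute load average (which is basically the average number of processes in the run queue) and the outgoing bandwidth once a minute. It assumes a Linux instance with the traffic on eth0; adjust to taste:

import time

def one_minute_load():
    # First field of /proc/loadavg is the 1-minute load average
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

def tx_bytes(iface="eth0"):
    # In /proc/net/dev, the 9th field after the "iface:" prefix is total bytes transmitted
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                return int(line.split(":", 1)[1].split()[8])
    raise ValueError("interface %s not found" % iface)

prev = tx_bytes()
while True:
    time.sleep(60)
    cur = tx_bytes()
    egress_mbps = (cur - prev) * 8 / 1e6 / 60.0   # average Mb/s over the last minute
    print("load=%.2f  egress=%.1f Mb/s" % (one_minute_load(), egress_mbps))
    prev = cur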
Sorry for the long post – just thought someone out there might find the information helpful. Wish I had it when I started.
David L Good