I have a peculiar situation that turns the usual Wowza LB configuration upside down: instead of serving many, many clients, I need to serve many, many encoders. I have incoming RTMP connections from across the US and want to scale my Wowza farm horizontally when needed, dynamically bringing servers into and out of the farm. Right now I switch by hand between large virtual instances and even larger ones behind an AWS ELB, which is a pretty simple load balancer, so during switchovers I blip some encoders.
What I want is a smart load balancer on the way in that can stop sending new incoming connections to a server, so that I can retire that server once its last connection drops. During times of moderate load, I'd rather run two smaller servers (at lower cost) than one bigger server. Best case would be to launch a new server from an AWS AMI based on the load the balancer sees. I have this infrastructure in place for the rest of my server farms, and it works beautifully, scaling up under load and back down when the load is gone. However, these connections are being recorded, and splitting the recorded files for different segments of a single live broadcast across different servers is less than optimal.
I have begun investigating HAProxy as a possible solution. Anyone have experience with HAProxy and Wowza for incoming RTMP connections?
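For reference, here's the minimal sketch I've been toying with, assuming plain TCP pass-through on port 1935 (server names and IPs are placeholders). The admin socket is there so a server can be put into drain before retirement: it stops receiving new encoders while existing sessions run to completion.

```
# Sketch: HAProxy in plain TCP mode fronting RTMP ingest (names/IPs are placeholders)
global
    # Admin socket so a server can be drained before retirement
    stats socket /var/run/haproxy.sock mode 600 level admin

defaults
    mode    tcp
    timeout connect 5s
    # Encoder sessions are long-lived; keep client/server timeouts generous
    timeout client  1h
    timeout server  1h

frontend rtmp_in
    bind *:1935
    default_backend rtmp_ingest

backend rtmp_ingest
    # leastconn: each new encoder goes to the server with the fewest live connections
    balance leastconn
    server wowza1 10.0.0.11:1935 check
    server wowza2 10.0.0.12:1935 check
```

To retire wowza2, something like `echo "set server rtmp_ingest/wowza2 state drain" | socat stdio /var/run/haproxy.sock` should stop new encoders landing on it while existing sessions keep running; once its connection count hits zero, the instance can be terminated.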
I understand this is a very specific use case; the large majority of load-balancing configurations are viewer-based, which don't need quite the same stickiness. However, it is my use case and something I have toyed with solving for the long term.
Is there anyone who has experience here? How do I go about getting some thoughts from the Wowza team on this? I need direction more than implementation, but I am facing a crisis of overloaded incoming connections.
Hi @Bob Bohanek - did you ever have any success here?
We also have an application that is almost purely based around ingestion from multiple sources (playback is handled by our CDN). Our current solution is to provision a server with a pool of 5 hard-coded “slots” (one per incoming stream), which we spread across servers manually as we analyse demand.
Obviously this isn’t sustainable and is definitely not fail-proof - unfortunately the industry we’re supporting holds events that are “must succeed”, so we only get one shot per stream to get it right.
What I’d really like to do is build a farm of edge WSE servers, each capable of ingesting and transcoding, say, 3 concurrent sources, and then put a load balancer in front to catch ingest requests and route them to the least busy farm member. The biggest problem I have wrapping my head around is how to dynamically configure the edge server with the correct application details (i.e., stream target info, source authentication info, etc.) when an incoming stream starts. As you say in your post, recording is also an issue.
Would love to hear from you if you had any luck in this.
Hey @Bob Bohanek - did you ever end up finding a solution to this issue? We’re going crazy with the amount of ingest servers we’re having to run and I’d love to hear if you’ve found a decent way to load balance incoming RTMP streams.
Hi @Naoca, at Raskenlund we’ve dealt with this challenge multiple times, and I’ll be happy to discuss it further with you. Please contact me directly at hello@raskenlund.com
There are a few ways to go about this, and I would suggest you take a look at this blog to get a better idea of your options. The blog is expert advice from @Karel_Boek, who has several years of experience setting this up.
“When creating a scaling strategy, you should carefully consider the server load from incoming streams (ingest) as well as the server load for distributing those streams.”
@Michael_Van_Slambrou, if it’s on a cloud service (Azure, AWS, Google Cloud, etc.), then most often their TCP load balancer will work just fine. Typically these do round-robin balancing, which works well as long as you remember to sideline overloaded servers so that they’re not included in the round-robin rotation.
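If you end up running your own HAProxy instead of the cloud balancer, that sidelining can be automated with an agent check; a rough sketch, where the agent port (9777) and the drain policy are assumptions you would implement yourself:

```
backend rtmp_ingest
    balance roundrobin
    # A small agent on each Wowza box (port 9777 here, your choice) replies
    # "up\n" normally and "drain\n" once CPU or stream count crosses your
    # threshold; HAProxy then stops handing that server new connections.
    server wowza1 10.0.0.11:1935 check agent-check agent-port 9777 agent-inter 5s
    server wowza2 10.0.0.12:1935 check agent-check agent-port 9777 agent-inter 5s
```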
In some cases, an off-the-shelf TCP load balancer won’t work. In that case, you can build your own, based on, e.g., RTMP redirection (NB: make sure your publishing software/hardware supports redirection).
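If your publisher does handle redirects, the balancer logic can live in Wowza itself: a server-side module answers the RTMP connect with a redirect to whichever server you pick. A rough sketch; pickLeastBusyServer() is a hypothetical lookup against your own capacity registry, and redirectConnection() is the call from Wowza's RTMP redirect examples (verify it against your WSE version):

```java
import com.wowza.wms.amf.AMFDataList;
import com.wowza.wms.client.IClient;
import com.wowza.wms.module.ModuleBase;
import com.wowza.wms.request.RequestFunction;

public class ModuleIngestRedirect extends ModuleBase
{
    public void onConnect(IClient client, RequestFunction function, AMFDataList params)
    {
        // Hypothetical helper: ask your own registry which edge currently
        // has the most free ingest capacity.
        String target = pickLeastBusyServer();

        // Answer the RTMP connect with a redirect; the publishing encoder
        // must honor RTMP redirects for this to work.
        client.redirectConnection(target);
    }

    private String pickLeastBusyServer()
    {
        // Placeholder: query your capacity registry/API here.
        return "rtmp://10.0.0.12/live";
    }
}
```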