Hi there.
We’ve been using a load-balancing setup where a single primary server runs the origin, load balancer, and edge applications, and when we start up edge servers to handle a large load, they connect to it. This has been working well, but as we expand, the primary server ends up with a connection for every stream on every edge server. So for 20 streams and 20 edges, it has 400 connections.
To fix this, we’re first moving the load balancing and edge applications off of the primary server, to a new origin server. This server will have 5 edge-origin servers connecting to it, and each of those will have 5 edge servers connecting to them.
This all looks possible, except for the part where we start up edge servers. If I’m understanding correctly, we need to specify the edge-origin server to connect to in the loadbalancertargets.txt file. But if we’re only bringing the edge-origin servers up temporarily, their addresses will change often, and we’ll have to customize the edge servers’ startup package for each edge-origin.
Should I be using only one load balancer server, or do I need a load balancer server on each edge-origin machine? If I’m telling them all to use my main origin server for the load balancer, will they be able to connect to all the streams?
Is this correct, or is there an easier way to do all of this?
Thanks for the help.
Hi. You’re asking how to configure a load balancer in a LiveRepeater origin > edge-origin > edge configuration, correct? I think you’re on the right track. Let’s see about your questions:
- Should I be using only one load balancer server?
- …will clients be able to connect to all the streams?
- We’ll have to customize the startup package for the edge servers for each edge-origin.
The key here is understanding that the concept of the load balancer is independent of the concept of the LiveRepeater (edge-origin configuration).
Regarding best practices for dynamic IP addresses: short answer: don’t do that. Long answer: dynamic IP addresses are not conducive to the server paradigm, so you need a workaround. For some use cases you can use domain names and set a short TTL at your registrar when changing the A record to point to your new servers. Or you could include a script in these edge-origin and edge server startup packages that checks a webservice on your main server to get the info needed to configure themselves automatically; then you don’t have to alter the startup packages. Otherwise, isn’t AWS Elastic IP meant to keep the same IP across different server instantiations?
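As a rough illustration of that “script in the startup package” idea: the sketch below assumes your main server exposes a simple webservice that returns the edge-origin address a new edge should pull from, and that the edge application’s Application.xml holds that address in a Repeater/OriginURL element. The URL, file path, and element names are placeholders to adapt to your own setup.

```python
#!/usr/bin/env python
"""Hypothetical startup script for a new edge instance (sketch only).

Assumes the main server answers http://main.example.com/edge-origin with the
address of the edge-origin this instance should use, and that the edge
application's Application.xml stores it in a Repeater/OriginURL element.
"""
import urllib.request
import xml.etree.ElementTree as ET

CONFIG_SERVICE = "http://main.example.com/edge-origin"                  # assumed webservice
APP_XML = "/usr/local/WowzaMediaServer/conf/liveedge/Application.xml"   # assumed path

def fetch_assigned_origin():
    # Ask the main server which edge-origin this new edge should pull from.
    with urllib.request.urlopen(CONFIG_SERVICE, timeout=10) as resp:
        return resp.read().decode().strip()   # e.g. "rtmp://10.0.1.25/liveorigin"

def update_application_xml(origin_url):
    # Rewrite the OriginURL in the edge application's Application.xml.
    tree = ET.parse(APP_XML)
    node = tree.getroot().find(".//Repeater/OriginURL")
    if node is None:
        raise RuntimeError("Repeater/OriginURL not found; check Application.xml layout")
    node.text = origin_url
    tree.write(APP_XML)

if __name__ == "__main__":
    update_application_xml(fetch_assigned_origin())
    # Start (or restart) Wowza after this so the new OriginURL takes effect.
```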
#1: Yes, you will use one loadbalancer server (listener). This is the server that your domain name points to; the domain name you want customers to use for your service.
- It is possible to have nested loadbalancer listeners, for example if you need to use GeoLocation to keep certain clients in certain geographically located groups of edge servers.
#2: Yes, if that is your goal. Clients can connect to whatever origin or edge-origin streams you’ve referenced in your edge application.
#3: That’s correct. When you start a new edge application, you have to tell it what origin server to connect to.
To recap: Your loadbalancer senders point to your single loadbalancer listener. Your edge applications point to their group’s edge-origin applications. Your edge-origin applications point to your main origin server.
I suppose you have considered the Wowza Dynamic Load Balancer, but I’ll post a link in this thread in any case:
https://www.wowza.com/docs/how-to-get-dynamic-load-balancing-addon
The Load Balancer Listener’s HTTPProvider, “serverInfoXML”, provides the connection count for each edge in the cluster. You could build a system to scale your edge cluster around this.
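As a rough sketch of what such a system might look like: the script below polls the listener’s serverInfoXML provider and checks each edge’s connection count against a threshold. The URL, port, element name, and threshold are all assumptions to adjust against the actual serverInfoXML output of your install.

```python
#!/usr/bin/env python
"""Sketch of an autoscaling check built on the serverInfoXML HTTPProvider.

Assumptions: the listener answers on port 1935 with the "serverInfoXML"
request filter, and each edge entry in the returned XML carries its current
connection count in a <ConnectCount> element.
"""
import urllib.request
import xml.etree.ElementTree as ET

LISTENER_URL = "http://loadbalancer.example.com:1935/serverInfoXML"  # assumed URL
CONNS_PER_EDGE_LIMIT = 400   # example threshold before adding another edge

def edge_connection_counts():
    with urllib.request.urlopen(LISTENER_URL, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    # Element name is an assumption; inspect the real serverInfoXML output.
    return [int(el.text) for el in root.iter("ConnectCount")]

def needs_more_edges(counts):
    # Scale out when every edge is near its per-edge connection limit.
    return bool(counts) and min(counts) >= CONNS_PER_EDGE_LIMIT

if __name__ == "__main__":
    counts = edge_connection_counts()
    print("edge connection counts:", counts)
    if needs_more_edges(counts):
        print("all edges near capacity; start another edge instance")
```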
Richard
I’m not sure how Rackspace cloud servers can be started, but EC2 servers can be started manually through a web interface (the AWS Console) or through the EC2 tools API, and I would suppose there is an equivalent to both in Rackspace. They must also have a way of imaging a server: an image with Wowza installed and configured as an edge and Load Balancer Sender, with .stream files in place, etc., such that when it starts up, the LB Sender starts sending to the LB Listener, which will then start referring client requests to it. A new instance in this scenario will get new connections first because it will be the least-loaded edge.
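For illustration only, here is roughly how launching an instance from such a prebaked image could look with today’s boto3 library (the post above refers to the older EC2 tools and Console; the AMI ID and instance type below are placeholders for an image that already has Wowza configured as an edge and LB Sender):

```python
#!/usr/bin/env python
"""Illustrative sketch of launching a new edge from a prebaked image."""
import boto3

EDGE_AMI = "ami-0123456789abcdef0"   # placeholder: your preconfigured edge image
INSTANCE_TYPE = "m5.large"           # placeholder

def launch_edge():
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId=EDGE_AMI,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    print("started edge instance", instance_id)
    return instance_id

if __name__ == "__main__":
    launch_edge()
```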
Richard
Seems to me you’d have two options here:
- Via configuration: set the origin hostname in your Application.xml as soon as you have the origin IP available.
- Via code: since there’s no built-in way for your server instances to “see” each other as they are turned up, you’ll need to build out your own modules for this. One option would be to set up a socket service (outside of Wowza) that lets you relay messages between instances as they are turned up and down. Another option would be to use a key-value store with pub/sub functionality (such as Redis) and have each of your servers subscribe to the messages on the store and publish a message as it turns up (see the sketch after this list).
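Here is a minimal sketch of that Redis pub/sub option, assuming a reachable Redis instance and the redis-py package. The host, channel name, and message format are made up for illustration, and each server would still need to apply the announced address to its own Wowza configuration (for example by rewriting its OriginURL and restarting).

```python
#!/usr/bin/env python
"""Minimal sketch of the Redis pub/sub option.

Each server publishes its role and address when it comes up, and subscribes
so it can react when other instances (e.g. a new edge-origin) appear.
"""
import json
import socket
import redis

CHANNEL = "wowza-topology"                          # assumed channel name
r = redis.Redis(host="redis.example.com", port=6379)  # assumed Redis host

def announce(role):
    # Tell the other servers who we are and where to reach us.
    msg = {"role": role, "address": socket.gethostbyname(socket.gethostname())}
    r.publish(CHANNEL, json.dumps(msg))

def watch():
    # React when another instance announces itself.
    pubsub = r.pubsub()
    pubsub.subscribe(CHANNEL)
    for item in pubsub.listen():
        if item["type"] != "message":
            continue
        info = json.loads(item["data"])
        if info["role"] == "edge-origin":
            print("new edge-origin at", info["address"], "- update OriginURL and restart")

if __name__ == "__main__":
    announce("edge")
    watch()
```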
I forgot to respond to this, but thank you for your help.
We have a nice system set up where we scale up to:
1 root “ingest” server that takes the streams.
1 primary origin server that handles the initial load balancing requests and takes the streams from the root server.
5 edge-origin servers that act as middlemen to the edges and have their own load balancer.
25 edge servers, 5 for each edge-origin.
When we don’t have a lot of streams going, we go down to one of each.
Thanks again.
Minor correction on that: Each edge-origin server does not have its own load balancer. They don’t even talk to the load balancer at all. Each edge connects to the primary origin server that does load balancing, but gets the video feed from the edge-origin.
Hi Moresheth,
Thanks for your explanation. Are your servers physical or virtual? We are planning to have a similar architecture on cloud servers (Rackspace), but we have to figure out how to instantiate and configure new edge servers on demand in such a complex architecture (origin -> edge/origin -> edge -> clients). Our goal is to scale dynamically as the number of incoming client connections increases.
Thank you.
Yes, I know about it.
My concern (among others) is how to instantiate and configure new cloud servers dynamically on Rackspace. If I have an origin cloud server (we can suppose it is also the listener in the load balancing) that has just been instantiated to ingest a live stream, how can I instantiate new edge cloud servers and configure them to “see” the origin? I know I can prepare VM images with Wowza already configured and tuned, but some parameters (like the VM IPs) are known only when the servers are instantiated.
Thanks Richard.