Creating a production-worthy Docker image

Hi all,

We’re trying to create a production-worthy Docker image, but ran into multiple problems. We’ve managed to rewrite the container so that the Docker image can be customized (it’s IPv6-enabled, SSL-enabled, stores config, …).

But a few things give us a bad feeling about using Wowza and the Docker image.

  1. To have a successful Docker image, Wowza should provide customization via environment variables. Maybe it’s a good idea to let the community manage and expand the image (with GitHub and pull requests, for example). This would benefit all users of the Docker images. There are already a few customized Docker images out there, but since they are third party, company policy does not allow us to use them. Are there any plans to either add this customization or let the community handle it?
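To make the idea concrete, here is a minimal sketch of the kind of env-var-driven config rendering I mean. All variable names (WOWZA_CONF_DIR, WOWZA_LICENSE_KEY, WOWZA_ADMIN_PASSWORD) and file locations are my own assumptions for illustration, not an official Wowza interface:

```shell
# Sketch of an entrypoint helper that renders config from the environment.
# Variable names and paths are illustrative assumptions, not Wowza's API.
render_config() {
  conf="${WOWZA_CONF_DIR:-/usr/local/WowzaStreamingEngine/conf}"

  # Drop a license key into place if one was passed in.
  if [ -n "${WOWZA_LICENSE_KEY:-}" ]; then
    printf '%s\n' "$WOWZA_LICENSE_KEY" > "$conf/Server.license"
  fi

  # Rewrite the admin password line (assumed format: "user password group").
  if [ -n "${WOWZA_ADMIN_PASSWORD:-}" ]; then
    sed -i "s|^admin .*|admin $WOWZA_ADMIN_PASSWORD admin|" "$conf/admin.password"
  fi
}
```

A real entrypoint would call `render_config` and then `exec` the engine start script, so the JVM stays PID 1 and receives container signals.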

  2. Is there any way to manage the persistent data by means of config? I’ve managed to mount a volume with persistent data, but since the files are scattered, we had to persist a lot of directories, which also contain config rather than persistent data. I would argue this is bad practice, but I see no other way. Is there? Can you configure Wowza to store all persistent data in one folder on a volume, separate from the main config files?
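For reference, this is roughly what our mounts look like today. It is a deployment fragment, not something I’d recommend as-is; the paths are the default Wowza install layout and may differ in your image, and note how config and data end up mixed across the volumes:

```shell
# Illustrative only: named volumes over the directories we had to persist.
# Paths assume the default /usr/local/WowzaStreamingEngine layout.
docker run -d \
  -v wowza-conf:/usr/local/WowzaStreamingEngine/conf \
  -v wowza-content:/usr/local/WowzaStreamingEngine/content \
  -v wowza-logs:/usr/local/WowzaStreamingEngine/logs \
  wowzamedia/wowza-streaming-engine-linux
```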

  3. Is there a way to run Wowza redundantly? After searching the internet and the forums, the answer seems to be: no. But in a production environment, I feel this is necessary. Streams should be started on all instances, and when one instance is down, has crashed, or is restarted, this should not affect the end user. Right now a stream is not even active anymore after the container is restarted; manual (or programmatic) action is needed. This is far from optimal. Is this indeed how it works? No alternative setups? No databases, shared caches, or anything similar? Any plans to add this, or should we build our own failover mechanisms in the long term?

Thanks for your answers,

Tom

Hi @Tom De Dobbeleer, thanks for the feedback on this. Let me pass this information on and I will post again soon with some information for you.

Our product manager for Docker is out of the country until early next week for a work event, but I will follow up with you.

@Rose Power-Wowza Community Manager

Thanks Rose!

Another issue arose in a couple of stress tests this week: the container could not serve an HLS stream to 70+ clients (production settings). The stream terminated after a certain amount of time with a certain number of connections.

What we saw was that the JVM process memory built up until it reached its limits. But the JVM did not stop there, so the process was OOM-killed by Linux (“Memory cgroup out of memory: Kill process 313025 (java) score 999 or sacrifice child”). The cause was that the JVM did not restrict itself to the container memory limits.

The deeper cause of this is the old JVM running in the Docker container. The JVM shipped with the latest Wowza container (1.8.0_77) has no container support at all. So we updated the JVM to the latest OpenJDK 8 release, which supports containers out of the box (-XX:+UseContainerSupport was backported and is enabled by default from 8u191 onward).
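For anyone who wants to verify their own container, this one-liner shows whether the bundled JVM even knows the flag (it is absent entirely on 1.8.0_77, so no output there means no container support):

```shell
# Run inside the container: list the JVM's final flag values and look for
# container support. Empty output = the JVM predates the backport.
java -XX:+PrintFlagsFinal -version | grep UseContainerSupport
```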

So this is yet another problem that makes me a bit worried about using your Docker container, especially in production environments. We could move to a VM, but this is 2019, and I’d say containerized environments are not the future but the present :slight_smile:

Hi @Tom De Dobbeleer, would you mind submitting a support ticket on this? The senior engineers are asking if they can take a closer look and run some tests. Feedback and examples like this are so valuable to us and I hope you can find some time to submit. Thank you!

https://www.wowza.com/support/open-ticket

@Rose Power-Wowza Community Manager

We’re having some trouble creating support tickets at the moment, which is why I’m posting here. I will submit one once that issue is fixed (my colleagues opened a support ticket for it).

Any progress on your colleague’s ticket? We’ve managed to work around the environment variables, and we’re simply ignoring the persistent data problem at this point, but the current Docker image ships with OpenJDK 9.0.4+11, and I’m not sure UseContainerSupport was ever backported to that (probably not, as OpenJDK 9 reached end of life back in 2018).

Hey,

Yes, we’re running it in Docker at the moment, with the latest OpenJDK 11 (you can probably just use the latest OpenJDK). We also use an external volume for persistent data and ‘rewire’ the Wowza config on deploy; otherwise we couldn’t deploy a new instance with a new config, for example.

Maybe some advice after a couple of weeks in heavy production:

  • memory, memory, memory: we currently run with 32 GB of RAM and a 16 GB heap
  • to make sure Wowza can handle all our streams, we put a caching server (Caddy) in front of it
  • do not enable HTTPS on the streaming port; that’s a memory killer for sure
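To illustrate the fronting setup: a minimal Caddyfile along these lines terminates TLS in front of Wowza so the streaming port stays plain HTTP. Hostname and upstream are placeholders, and note that actual response caching needs a Caddy cache plugin; out of the box Caddy only proxies:

```
# Caddyfile sketch (Caddy v2). Placeholders: streams.example.com, wowza:1935.
# TLS terminates here; Wowza serves plain HTTP behind it.
streams.example.com {
    reverse_proxy wowza:1935
}
```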

Hope this helps,
Tom


Thanks so much for this update @Tom_De_Dobbeleer. Nice of you to do that!

Thanks, @Tom_De_Dobbeleer. I’ve got an image now that replaces the default OpenJDK 9 JRE with OpenJDK 11.0.8+10 (Ubuntu’s current OpenJDK 11), per the Wowza docs for replacing the JRE.
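The gist of that image is a small Dockerfile layer on top of the official one. This is only a sketch of my setup: the base tag, package name, and JAVA_HOME path are assumptions for a Ubuntu-based image, and the Wowza docs describe the exact JRE-switch steps for each version:

```dockerfile
# Sketch: replace the bundled JRE with the distro's OpenJDK 11.
# Base tag and paths are assumptions; check the Wowza JRE-replacement docs.
FROM wowzamedia/wowza-streaming-engine-linux:latest
RUN apt-get update \
 && apt-get install -y --no-install-recommends openjdk-11-jre-headless \
 && rm -rf /var/lib/apt/lists/*
# Point the engine at the new JRE instead of the bundled one.
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
```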

It doesn’t seem like this affects Wowza’s default heap size calculation: it’s still 70% of the hardware memory size, regardless of the limits set on the container. So I assume I still need to set the heap in Tune.xml by hand.
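In case it helps anyone else, the manual setting I mean is the HeapSize element in conf/Tune.xml. The value below is only an example; size it to your container’s memory limit:

```xml
<!-- conf/Tune.xml fragment: pin the heap explicitly instead of relying on
     the 70%-of-host default. 4000M is just an example value. -->
<HeapSize>-Xmx4000M</HeapSize>
```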

Also, it seems like setting -Xmx overrides UseContainerSupport. So is there still a memory benefit to updating to OpenJDK 11?

Hey,

You are absolutely right, I forgot about the -Xmx. I tried to delete the setting, but the software apparently requires it, as I wasn’t able to remove it.

When I started out using this container, it shipped an unsupported version of JDK 8. I’d rather manage the JVM myself and always use the latest supported version rather than a pinned one: improved memory and heap management, security patches… things I hope are beneficial. I don’t like stale things.

Maybe an RFC should be opened to ditch the -Xmx setting. We’ve already seen one crash in production due to bad memory management, and always moving to higher memory settings feels like patching a broken leg.
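If -Xmx ever becomes optional, the container-friendly alternative would be letting the JVM size the heap from the cgroup limit itself. This is my understanding of the JDK flags, not Wowza guidance: with no explicit -Xmx, a container-aware JVM (8u191+, 11+) derives the maximum heap from the container memory limit, and MaxRAMPercentage tunes that fraction; an explicit -Xmx always takes precedence and switches the container-based sizing off.

```shell
# Let the JVM derive the heap from the cgroup limit (no -Xmx), taking 75%
# of the container's memory. -XshowSettings:vm prints the resulting max heap.
java -XX:MaxRAMPercentage=75.0 -XshowSettings:vm -version
```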
