Reasons For ModuleTranscoderTimedSnapshot Interval Restrictions

I’m working with ModuleTranscoderTimedSnapshot and I notice that the interval parameter is limited to a minimum of 1 second; the module’s source enforces this restriction. I’m guessing the restriction is in place for a reason.
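For reference, the guard in the source looks roughly like the following. I’m paraphrasing from memory, and the property name here is illustrative rather than necessarily what the module actually uses:

// Read the configured interval, then clamp it to the 1-second floor
int snapshotInterval = appInstance.getProperties().getPropertyInt("snapshotInterval", 1000);
if (snapshotInterval < 1000)
    snapshotInterval = 1000; // anything faster than one snapshot per second is refused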

Why can we not generate images any quicker? Did you find that anything more frequent slows down handling of the live stream? Is it a problem with file I/O and the ability to write the images out any faster? Or can the server simply not encode the images any faster?

I’m curious about the reasoning behind the restriction, and what the consequences would be if I removed it.

Hi there.

It does use up CPU cycles: the more often you generate a snapshot, the more CPU is used, hence Wowza has limited this to one per second. It also should not be used to generate a real-time stream view, i.e. a snapshot every 25/50 ms, as some people have tried; it doesn’t work.

I hope this answers your question.

Kind regards,

Salvadore

So your experience is more that it will affect CPU usage and bog down the server in general? When you say people try a snapshot every 25/50 ms and it doesn’t work, what do you mean? Does the CPU peg at 100% and eventually fail to keep up? Say you wanted 5 fps (and had a powerful CPU to support it): is that reasonable?

So if I DID want a real-time stream view of images, say 5-10 fps, what would be the best way to approach that?

In digging deeper into this, it seems the line that takes the longest to complete is:

sourceVideo.grabFrame(new FrameGrabResult(streamName))

It can take anywhere from roughly 30 ms to several hundred ms. I’m assuming this varies based on the incoming frame data, whether or not it’s a keyframe, etc.
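Here is roughly how I’m timing it. This assumes the call is made inside a ModuleBase subclass so getLogger() is available, and that ITranscoderFrameGrabResult/onGrabFrame are the right callback names; I’d double-check those against the API docs:

// Start the clock when we request the frame, stop it when the grab callback fires
final long start = System.nanoTime();
sourceVideo.grabFrame(new ITranscoderFrameGrabResult() {
    public void onGrabFrame(TranscoderNativeVideoFrame videoFrame) {
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        getLogger().info("grabFrame completed in " + elapsedMs + " ms");
        // ... convert videoFrame to a BufferedImage and write it out as usual
    }
});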

Regardless of whether this is called from a timer, as in the timedsnapshot example, or from the onbeforescale event, it seems this is the potential “bottleneck” for how fast we can process the image. Anything coming in at 15-20 fps definitely has the potential to cause the processing to back up and create problems.
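One workaround I’m considering, wherever the call originates, is to drop new requests while a grab is still in flight rather than letting them queue up. A minimal sketch (requestSnapshot is my own helper, not part of the module):

import java.util.concurrent.atomic.AtomicBoolean;

private final AtomicBoolean grabInFlight = new AtomicBoolean(false);

private void requestSnapshot(TranscoderStreamSourceVideo sourceVideo, String streamName) {
    // If the previous grab has not completed, drop this request instead of queueing it
    if (!grabInFlight.compareAndSet(false, true))
        return;
    sourceVideo.grabFrame(new ITranscoderFrameGrabResult() {
        public void onGrabFrame(TranscoderNativeVideoFrame videoFrame) {
            try {
                // ... convert videoFrame and write it out under streamName,
                // as in the timedsnapshot example
            } finally {
                grabInFlight.set(false); // ready for the next request
            }
        }
    });
}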

Could you offer any more insight into this? Is there a way to reduce the processing time, or is there another part of the API that would be quicker at retrieving the raw frame data so it can be converted to an image?

Thanks!

Hi,

You will find the CPU at or near 100%, and because the snapshot is software driven you will see your server start to suffer in other areas as threads take longer to process other elements.

The module is designed as a ‘snapshot’ tool, not an fps-type system. If you want to grab frames more often, you need to look at the Transcoder APIs and hook into the onbeforescale event.
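Very roughly, that hook has the following shape. The class and method names here are from memory, and registering the listener on the transcoder session’s video track is omitted, so verify everything against the current Transcoder API docs:

// Grab frames from the scale step of the transcoder pipeline instead of a timer
public class SnapshotFrameNotifier extends TranscoderVideoDecoderNotifyBase {
    private final String streamName;
    private long frameCount = 0;

    public SnapshotFrameNotifier(String streamName) {
        this.streamName = streamName;
    }

    @Override
    public void onBeforeScaleFrame(TranscoderSessionVideo sessionVideo,
            TranscoderStreamSourceVideo sourceVideo, long frameCountTotal) {
        // Throttle: grab roughly every 5th frame (about 5 fps from a 25 fps source)
        if ((frameCount++ % 5) == 0)
            sourceVideo.grabFrame(new FrameGrabResult(streamName));
    }
}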

Andrew.