4.8.14 Cmaf Packetizer LL-HLS errors

Hi All,

Running 4.8.14 Trial and testing LL-HLS for a potential deployment.
Mostly it seems to be running OK (RTMP ingest > Origin > RTMP push to Edge), and I can pull LL-HLS streams from both origin and edge, having followed the config guide here > https://www.wowza.com/docs/deliver-apple-low-latency-hls-live-streams-using-wowza-streaming-engine
I have also removed the target A/V chunk-duration overrides from the guide above (400ms), so it should have defaulted back to 1000ms, in an attempt to resolve the issue below.
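For reference, the overrides I removed looked roughly like this, in the LiveStreamPacketizer properties in Application.xml (property names as I remember them from the guide linked above, so treat them as illustrative; values are in milliseconds):

```xml
<!-- Application.xml > LiveStreamPacketizer > Properties -->
<!-- The 400ms part-duration overrides that were removed; with them gone,
     the packetizer should fall back to its 1000ms default. -->
<Property>
    <Name>cmafLLSChunkDurationTargetAudio</Name>
    <Value>400</Value>
    <Type>Integer</Type>
</Property>
<Property>
    <Name>cmafLLSChunkDurationTargetVideo</Name>
    <Value>400</Value>
    <Type>Integer</Type>
</Property>
```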

The stream plays fine for anything up to 5-6 minutes before the test players at THEOplayer, JW Player and Akamai drop their connections with either a "This video cannot be played" error, or "https://domain/live/main.stream/chunklist_w1574532724_vo_sfm4s.m3u8?_HLS_msn=64&_HLS_part=5. Invalid URL." The time to failure is not repeatable and can be as quick as 20 seconds in.
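Decoding the query string on that failing URL shows what the player was blocking on. Per Apple's LL-HLS blocking playlist reload mechanism, `_HLS_msn` is the media sequence number and `_HLS_part` the partial-segment (chunk) index the player is waiting for. A quick sketch, nothing Wowza-specific:

```python
from urllib.parse import urlparse, parse_qs

# The chunklist URL the players fail on
url = ("https://domain/live/main.stream/chunklist_w1574532724_vo_sfm4s.m3u8"
       "?_HLS_msn=64&_HLS_part=5")

q = parse_qs(urlparse(url).query)
msn = int(q["_HLS_msn"][0])    # media sequence number the player is blocking on
part = int(q["_HLS_part"][0])  # partial segment (chunk) index within that segment

print(msn, part)  # 64 5
```

So the players are asking for segment 64, part 5, while the server log (below) is complaining about chunks of segment 62 at around the same time, which suggests the playlist and the packetizer's actual chunk window have drifted apart.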

At the same time, the server log fills with errors such as;

CmafWriterHandler.getAudioChunk[live/stream/main.stream][a] segment 62 chunk 3 does not exist

CmafWriterHandler.getVideoChunk[electrosonic_cdn/main.stream][v] segment 62 chunk 2 does not exist

The stream is coming in via RTMP, and OBS isn't reporting any loss (30 fps, keyframe interval 1s).

I’ve tried increasing the cmafMaxSegmentCount to 100 in an attempt to resolve the missing chunk error, but this hasn’t made any difference.
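For what it's worth, my mental model of why the bigger segment count didn't help (a toy sketch, not Wowza internals): `cmafMaxSegmentCount` should only matter when a chunk existed and was purged off the back of the window; if the packetizer never wrote the chunk in the first place (e.g. a brief timing stall), the lookup fails no matter how large the window is.

```python
from collections import deque

class SegmentWindow:
    """Toy model of a sliding segment/chunk window (illustrative, not Wowza code)."""

    def __init__(self, max_segments):
        self.max_segments = max_segments
        self.window = deque()  # entries: [segment_number, highest_chunk_published]

    def publish_chunk(self, seg, chunk):
        if not self.window or self.window[-1][0] != seg:
            self.window.append([seg, 0])
            if len(self.window) > self.max_segments:
                self.window.popleft()  # oldest segment purged from the window
        self.window[-1][1] = chunk

    def get_chunk(self, seg, chunk):
        for s, published in self.window:
            if s == seg:
                return chunk <= published  # only chunks already written exist
        return False  # segment purged, or never produced at all
```

Raising `cmafMaxSegmentCount` grows the window, which cures the "purged too early" case but not the "never produced" one, and the latter would fit the "segment 62 chunk 3 does not exist" errors if the packetizer briefly stalled.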

Any pointers would be greatly appreciated on this one! 🙂
Scott

Hmmm, thanks for all the details. Any issues with the network or processing speed?

If not, we have had some issues with streams suddenly dropping like this in 4.8.14 with LL-HLS. Can you please send in a ticket so we can test for a potential bug? It may not be an issue on your end.

In fairness, we are running a little below the recommended spec in terms of CPU clock (12 Xeon cores @ 2.6 GHz vs the 3.0 GHz advised), but we're not running production loads. The network is 10 GbE end to end, so I can't see a reason why I'd see issues there, as the links aren't congested.

The host systems are running as VMs - the host hardware isn't overcommitted, so that shouldn't be causing any issues, unless there is VM-specific configuration I need to watch out for?

I’ll stick a ticket in for it as you say in any case

Thanks