What is video latency?
Suppose you are watching an award show through a streaming service. Meanwhile, your neighbor is watching on a traditional television and starts loudly celebrating that their favorite show won, leaving you to wait another thirty seconds to see the award. Or worse, you get a Twitter notification spoiling the winner 15 seconds beforehand, killing the anticipation you had built up. This is video latency – the gap between when an event is broadcast and when you receive it.
A number of steps in the glass-to-glass journey affect video latency:
- Video encoding pipeline duration
- Ingest and packaging operations
- Network propagation and transport protocol
- Content delivery network (CDN)
- Segment length
- Player policies (buffering, playhead positioning, resilience)
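Each of these stages adds its own delay, and they accumulate into the end-to-end figure. A simple latency budget makes this concrete; the per-stage values below are illustrative assumptions, not measurements:

```python
# Illustrative glass-to-glass latency budget for traditional
# HLS/DASH delivery. All figures are assumed examples.
latency_budget_s = {
    "capture_and_encode": 1.0,   # video encoding pipeline
    "ingest_and_package": 0.5,   # origin ingest and packaging
    "network_transport": 0.5,    # propagation and transport protocol
    "cdn_delivery": 0.5,         # edge caching hops
    "segment_length": 6.0,       # a segment must complete before publishing
    "player_buffer": 12.0,       # e.g. player holds two 6 s segments
}

total = sum(latency_budget_s.values())
print(f"Estimated end-to-end latency: {total:.1f} s")  # → 20.5 s
```

Note how the segment length and the player's buffer dwarf every other stage, which is why the next sections focus on them.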
With traditional adaptive bitrate streaming, video latency depends mainly on media segment length. For example, if your media segments are six seconds long, your player is already at least six seconds behind live by the time it requests the first segment. This is compounded by buffering before playback actually starts, which further delays the first decoded video frame. Much of this latency comes from player policies rather than from pipeline problems.
Minimizing video latency in live streaming
Video providers can minimize live streaming latency by considering a few issues that may not be obvious at first.
The first of these is the deprecation of Flash and the Real-Time Messaging Protocol (RTMP). Flash-based applications using RTMP streaming once delivered low video latency, but with Flash deprecated and web browsers steadily removing support for the plug-in, content delivery networks have begun deprecating RTMP as well, forcing content providers to find alternatives.
The second is the tension between scale, reliability, and video latency: the larger the network and the farther video must be distributed, the higher the latency tends to be. An effective way to reduce it is switching to HTML5-friendly streaming technologies, such as HTTP Live Streaming (HLS), Dynamic Adaptive Streaming over HTTP (DASH, or MPEG-DASH), and the Common Media Application Format (CMAF). These technologies distribute media over HTTP, making segments cacheable so that CDNs can deliver them more efficiently.
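HTTP-based delivery works because each segment is an ordinary cacheable object. For illustration, a minimal HLS media playlist advertising two-second segments might look like the following (the file names and sequence numbers are invented):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:2.000,
segment120.ts
#EXTINF:2.000,
segment121.ts
#EXTINF:2.000,
segment122.ts
```

Because every segment is fetched with a plain HTTP GET, any CDN edge can cache and serve it like static content, which is what makes these formats scale.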
There are a few simple steps media providers can take to lower their video latency:
- Measure video latency at every step in the workflow
- Optimize your video encoding pipeline
- Choose the right segment duration for your requirements
- Build the appropriate architecture
- Optimize (or replace) your video player(s)
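Measurement is the step most often skipped. One common technique, sketched here with assumed helper names, is to burn a wall-clock timestamp into the video at the encoder (for example as an on-screen overlay or timed metadata) and compare it against the wall clock at the player:

```python
import time

def glass_to_glass_latency(encoded_timestamp_s: float) -> float:
    """Latency from capture to display, in seconds.

    encoded_timestamp_s is a wall-clock time stamped into the frame at
    the encoder; reading it back on the player side (OCR on an overlay,
    or parsing timed metadata) is assumed to happen elsewhere.
    """
    return time.time() - encoded_timestamp_s

# Example: a frame stamped 8.2 seconds ago measures roughly 8.2 s of latency.
stamped = time.time() - 8.2
print(f"{glass_to_glass_latency(stamped):.1f} s")
```

Measuring at every workflow step the same way (after encode, after packaging, at the CDN edge, at the player) shows exactly where the budget is being spent.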
Additionally, it is important to choose the right segment duration for video packaging. For example, you can achieve five-second latency with one-second segments, while two-second segments typically result in between seven and 10 seconds of video latency – unless you optimize the player's settings. The right choice depends on your requirements: if latency of seven seconds or below is not critical, two-second segments may serve you better, because raising the GOP length from one to two seconds to match them increases encoding quality at a constant bitrate.
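These figures can be reproduced with a back-of-the-envelope estimate; the one-second pipeline overhead and three-segment player buffer below are assumptions for illustration, not measured values:

```python
def estimated_latency(segment_s: float,
                      buffered_segments: int = 3,
                      pipeline_overhead_s: float = 1.0) -> float:
    """Rough live latency: encode/deliver overhead, plus one segment
    of publishing delay, plus the player's buffer depth."""
    return pipeline_overhead_s + segment_s + buffered_segments * segment_s

print(estimated_latency(1.0))  # 1 s segments → 5.0 s
print(estimated_latency(2.0))  # 2 s segments → 9.0 s, in the 7-10 s range
```

Shrinking the buffer (a player-policy change) shifts the result as much as shrinking the segments does, which is why segment duration and player tuning have to be considered together.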
Cloud-based media services such as AWS Elemental – Amazon Web Services' family of video processing and delivery services – can help address these problems. They reduce latency in live streaming by offering managed storage, shorter segment durations, configurable DVR windows, and packaging for HLS, DASH, and CMAF.
AWS Elemental lets you adapt your media workflows at any scale while maintaining reliability, with fully managed cloud services tailored to your needs.