Have you ever watched a live-streamed sports event on your tablet while it was also playing on your TV? Did you notice that the stream never reaches your internet device in real time?
In the internet streaming community, we talk about “glass-to-glass” latency: the time from the camera lens capturing a frame to that frame appearing on the viewer’s screen. At recently streamed events such as the Australian Open tennis tournament or the US presidential debates, the delay could reach 90 seconds by the time a consumer saw the footage on a mobile device.
At Unleash live, we connect multiple secure live camera feeds from drones, robots, and IP cameras, add A.I. analysis into the live stream, and broadcast to an authenticated global audience. We do all this with less than a second of latency to our servers and often less than 10 seconds to a global internet audience. Here is an example of a use case implemented together with NVIDIA AI.
Live streaming is very different from playing media stored as a file, whether that file is an mp4, mov, or webm.
Such a file sits on a server and can be delivered to the browser on request, a pattern often known as progressive download. Netflix and other on-demand streaming services are built around this file-based delivery experience.
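To make the contrast concrete, here is a minimal sketch of progressive playback in the browser; the file URL is a hypothetical placeholder. The browser fetches the file over plain HTTP (typically with range requests) and starts playing before the download completes.

```ts
// Minimal progressive-download playback: a plain file URL is enough.
// The URL below is a hypothetical placeholder.
const video = document.createElement("video");
video.src = "https://example.com/recorded-flight.mp4";
video.controls = true;
document.body.appendChild(video);

// The browser buffers ahead of the playhead on its own; no special
// server-side streaming software is involved.
video.play().catch((err) => console.warn("Autoplay was blocked:", err));
```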
Live-streamed media, in contrast, has no fixed duration. Rather than a static file, it is a stream of data that the server passes down the line to the browser, and it is usually adaptive: different formats are served depending on network speed and browser, and special server-side software is required to achieve this. The Mozilla Foundation provides a great overview of the complexity currently involved.
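As an illustration of the client-side piece, here is a sketch using hls.js, a popular open-source player library (the stream URL and element ID are hypothetical). It uses HLS, one of the adaptive formats discussed below: the browser pulls a manifest and then a continuous series of short media segments, switching quality levels as network conditions change.

```ts
import Hls from "hls.js";

// Hypothetical live stream manifest and player element.
const liveUrl = "https://example.com/live/stream.m3u8";
const video = document.getElementById("player") as HTMLVideoElement;

if (Hls.isSupported()) {
  // Most browsers: feed segments to the <video> tag via Media Source Extensions.
  const hls = new Hls();
  hls.loadSource(liveUrl); // fetch and parse the live manifest
  hls.attachMedia(video);  // append downloaded segments to the video element
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari plays HLS natively without extra libraries.
  video.src = liveUrl;
}
```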
Several factors induce delay: the sending device, the server, the CDN, SSL encryption, the receiving browser, and the video player.
Apple, with iOS and Safari, has long backed HLS (HTTP Live Streaming). It defaults to 10-second content chunks and requires a certain number of segments to build a meaningful playback buffer, which results in about 30 seconds of glass-to-glass delay from capture to final packet assembly. Introducing CDNs for greater scalability injects another 15–30 seconds of latency so the servers can cache the content in flight, not to mention any last-mile network congestion that might slow down a stream.
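A back-of-the-envelope sketch of that latency budget, assuming the common rule of thumb that a player buffers three segments before starting playback; the stage estimates are illustrative, not measured:

```ts
// Rough glass-to-glass budget for default HLS settings (illustrative numbers).
const segmentSeconds = 10;   // Apple's default chunk length
const bufferedSegments = 3;  // typical player buffer before playback starts
const captureAndEncode = 2;  // assumed camera/encoder/segmenter overhead
const cdnCaching = 15;       // in-flight caching; can reach ~30s

const playerBuffer = segmentSeconds * bufferedSegments;     // 30s
const total = captureAndEncode + playerBuffer + cdnCaching; // ~47s
console.log(`Estimated glass-to-glass latency: ~${total}s`);
```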
The other leading alternative is MPEG-DASH, short for Dynamic Adaptive Streaming over HTTP, from MPEG. Google strongly supports it through its Android operating system and Chrome browser. Its video chunks are often shorter than HLS’s, but other trade-offs, such as weaker DRM support, need to be considered.
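For comparison, here is a playback sketch with dash.js, the MPEG-DASH reference player (the manifest URL and element ID are placeholders). The .mpd manifest lists the available bitrates and segment timing, and the player adapts between them.

```ts
import * as dashjs from "dashjs";

// Hypothetical live DASH manifest and player element.
const video = document.getElementById("player") as HTMLVideoElement;
const player = dashjs.MediaPlayer().create();

// initialize(view, source, autoPlay): attach the element, load the
// manifest, and start adaptive playback.
player.initialize(video, "https://example.com/live/stream.mpd", true);
```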
You might ask: why, then, can I communicate in real time via Skype or Google Meet? These systems use a peer-to-peer communication architecture, based on WebRTC or other proprietary protocols, that establishes dedicated connections between a limited number of senders and receivers.
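A minimal sketch of the sender side of such a connection, using the browser’s standard WebRTC API; the signaling channel, which every app implements differently, is deliberately omitted:

```ts
// Media flows over a dedicated peer connection rather than being cut
// into HTTP segments, which is why the delay stays in the sub-second range.
async function startCall(): Promise<RTCSessionDescriptionInit> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Capture the local camera and microphone and hand each track to the peer.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Describe the session; a real app sends this offer to the remote peer
  // over its own signaling channel and then applies the peer's answer
  // with pc.setRemoteDescription(...).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  return offer;
}
```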
There is hope that AV1 will significantly improve the live video streaming experience: it promises roughly 30% better compression than current standards and is designed for both video streaming and real-time communications. The Alliance for Open Media, which aims for a royalty-free video format, has aligned all the major players behind it. With the first AV1 specification released in early 2018, we should see multiple implementations by year-end and generally available applications in early 2019. Optimized hardware is expected later next year, and strong browser and player support will follow soon after.
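Because that support is still rolling out, a player has to feature-detect before choosing AV1. Here is a sketch of that check in the browser; the codec string encodes profile, level, tier, and bit depth per the AV1 ISOBMFF binding:

```ts
// Feature-detect AV1 playback: profile 0, level 3.1, Main tier, 8-bit.
const av1Codec = 'video/mp4; codecs="av01.0.05M.08"';

// MediaSource.isTypeSupported covers streaming via Media Source Extensions;
// canPlayType covers direct file playback. An empty string means "no".
const streamingOk =
  typeof MediaSource !== "undefined" && MediaSource.isTypeSupported(av1Codec);
const fileOk = document.createElement("video").canPlayType(av1Codec);
console.log({ streamingOk, fileOk });
```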
As video streaming experiences rapidly improve, we will see new immersive experiences and collaboration opportunities emerge. At Unleash live, we bring these experiences to life now.
Visit unleashlive.com to learn more, or get in touch with a specialist team member to see how your business can take advantage of live-streaming video and A.I. analytics.