QoS monitoring in OTT: importance, challenges & best practices


As streaming broadcasters know, end-users don’t care about how their content reaches them; they care about what it looks like. They judge the quality of their OTT, premium pay-TV, and broadcast services based on their actual viewing experience: whether there’s high latency, poor image quality, or buffering. Understandably, delivering poor video quality of service (QoS) to customers paying premium prices can severely damage a brand’s image and affect revenues. In 2022, more than ever, churn is the name of the game in the war between streaming services, and poor quality sends customers running. This makes QoS monitoring one of the most important tasks for any streaming provider.

What is video quality of service & how is it different from QoE?

The three key factors in judging the quality of an OTT stream are latency, quality of experience (QoE), and quality of service (QoS). If you're in OTT operations or an OTT NOC team, these are the KPIs your job is judged against. Regarding the first, glass-to-glass latency in broadcasting is typically around 8-10 seconds. This means that OTT content providers need to match this figure or do better: it’s the gold standard set so far.

QoE and QoS, on the other hand, are interlinked, and most large broadcasting corporations have long had monitoring solutions of varying complexity deployed. As many of these antiquated technologies are physical devices, they’re expensive to scale and expand as companies extend their service offering geographically, and very expensive to maintain over time. This is why so many services are moving to the cloud for QoE and QoS monitoring. These solutions measure quality metrics and identify opportunities to improve stream quality, giving customers the best possible experience.

QoE is what the customer experiences on their stream (e.g. image quality, buffering). It's measurable via customer satisfaction surveys or other subjective metrics and is extremely important to a broadcaster's profits and customer retention. The global reach of online content means broadcasters are serving a more international, widely dispersed customer base. The accelerating, global mass consumption of OTT content and continued improvement in streaming tech also make QoE the big differentiating factor for consumers and providers alike.

QoS is a more objective measurement. It refers to the performance of a broadcasting network and includes important factors such as uptime, the probability of downtime, error rates, bandwidth, and latency. QoS focuses on the processes between the IP network and the streaming application rather than on the end-users themselves. When it comes to customer loyalty and brand reputation, QoS is crucial for providers of traffic-heavy content such as live-streamed sports and music events, and poor QoS greatly impacts QoE.
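
To make these metrics concrete, here's a minimal TypeScript sketch (the probe data model is our own illustration, not any particular vendor's) of how uptime, error rate, bandwidth, and latency figures can be rolled up from raw synthetic probe samples:

```typescript
// Hypothetical probe sample: one synthetic check of a stream endpoint.
interface ProbeSample {
  timestampMs: number;   // when the check ran
  ok: boolean;           // did the request succeed?
  latencyMs: number;     // time to first byte
  bytesPerSec: number;   // observed throughput
}

// Aggregate raw samples into the objective QoS figures named above.
function summariseQos(samples: ProbeSample[]) {
  const total = samples.length;
  const okSamples = samples.filter(s => s.ok);
  const failures = total - okSamples.length;
  const avg = (xs: number[]) =>
    xs.length ? xs.reduce((sum, x) => sum + x, 0) / xs.length : 0;
  return {
    uptimePercent: total ? ((total - failures) / total) * 100 : 0,
    errorRate: total ? failures / total : 0,
    avgLatencyMs: avg(okSamples.map(s => s.latencyMs)),
    avgBandwidthBps: avg(okSamples.map(s => s.bytesPerSec)),
  };
}
```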

The stakes keep rising, too: customers are increasingly paying for OTT. In fact, Conviva’s 2018 All-Screen Streaming Census report indicated that the global TV streaming audience more than doubled in 2018 compared to 2017, and growth within streaming audiences has continued to accelerate since.

Why QoS monitoring is essential for sustainable viewer growth & retention

A paid live stream stopping at a crucial moment can quickly lead to a Tweetstorm and prove extremely costly for the streaming provider: refunds, subscription cancellations, and the cost of attending to customer complaints.

Amid increased competition, QoS monitoring has become more important than ever. Online streaming viewers simply don’t have the patience they used to. Now, with an abundance of alternative options available to them, they just go elsewhere when quality of service is poor. Moreover, even if they don’t, their frustration and anger translate into customer complaints which cost providers millions of dollars to manage.

The numbers speak the truth: low QoS means customers churn

Here are a few statistics that illustrate the problem. According to CDN giant Akamai, as many as 50% of viewers will abandon a stream if it takes more than 5 seconds to load. 76% of viewers were also found to leave a service if their stream buffered several times. Quality of service is tied to customer loyalty, and the demand for quality will only rise as more options come onto the table.

“As many as 50% of viewers will abandon a stream if it takes more than 5 seconds to load.”

Let’s look at the example of the next big opportunity in OTT cloud streaming: the high-quality live broadcast event. Just a few minutes or even seconds of downtime during a sporting event could see a huge chunk of the viewership lose faith and go to the competition. Imagine a viewer paying $35 a month to watch their favourite football team, only to miss the game-winning penalty in the Champions League final because of a broken stream, a failure that better live stream monitoring could have prevented.

The ever-evolving landscape of OTT broadcasting means viewers now take the experience with them—for instance, audience figures for the Super Bowl LIII show the device landscape is continually shifting to mobile devices and connected TVs. If content stalls, buffers, or fails to play, consumers have a wealth of options they can turn to. Broadcasters providing a poor QoE will see users churn away in high numbers, leading to a huge downturn in revenue.

Cost of low QoS extends to expensive management of complaints

Even if disgruntled customers don’t churn, the sizable call centre cost that comes with user complaints is likely to put a dent in profits. Forrester estimates the average cost to service a customer complaint at $12 per call, meaning a serious QoE issue during a major sporting event that causes 100,000 people to complain could cost $1.2 million in call centre expenditure alone. And customers hate calling in the first place: that’s why we see Tweetstorms and attempts at contacting OTT providers via messaging when there are problems. Having to make the call further frustrates your customers and still costs you money.

Bottom line: with continually accelerating growth within streaming audiences, companies need to come as close to matching broadcast quality as possible, or even exceed it, if they want to survive and thrive in the incredibly competitive OTT live streaming market. The key to achieving this is preventing streaming problems rather than reacting to them.

Increased competition reinforces importance of top-notch QoS

Streaming consumption is up by 266% over the past three years. While this is good news for the industry overall, it has also meant that an increasing number of new players have emerged on the scene.

This includes Disney+, Apple TV+, HBO Max, Peacock, and Paramount+, creating a crowded field. Even in live streaming, competition is heating up, with Amazon streaming EPL matches, Yahoo covering NFL games, and Paramount+ offering live sports streaming.

Further competition comes from the gaming world. Google launched Stadia, its game streaming subscription service, and Netflix CEO Reed Hastings famously said the streaming service’s main rival is Fortnite, a statement quickly followed by the integration of games into the Netflix platform in late 2021.

It’s clear that subscription fatigue is already setting in and it will only get worse as more services are released. The more options available to consumers, the less they will put up with streams that buffer or don’t play at an exceedingly high quality.

The challenges of QoS monitoring for streaming operators


When Disney+ launched, from the moment the first of several million subscribers clicked “subscribe,” the home of the famous mouse had to be at the ready. As Fierce Video points out, the launch didn’t go without its hiccups. While Disney’s was a coding issue, any number of problems can arise in the complex streaming delivery chain.

This is especially the case when companies are providing the holy grail of OTT streaming: live events. Live streaming events only happen once. Cause users to miss a once-in-a-lifetime experience, and that could very well make a bad impression that lasts, well, a lifetime.

QoS problems such as outages, buffering, and slow load times can cause users to quit a service in droves. The expectation today is for online live streaming to come very close to broadcast TV in terms of quality, but this hasn’t been easy to implement across complex streaming infrastructures with many points of possible failure.

The main monitoring challenges are a lack of full end-to-end visibility due to outdated technology, and the difficulty of scaling and evolving the streaming monitoring technology stack, since many solutions, such as physical NOC systems, carry costs that rise linearly as the service expands. This is why the most innovative OTT providers have turned to virtual NOC toolsets that are fully cloud-based and allow them to identify failing or underperforming components well before they impact customers.

The visibility problem for QoS monitoring

While streaming feels ubiquitous today, many early providers that set out to test customer uptake and establish market share still retain basic development monitoring infrastructure. In other words, they only monitor basic metrics that simply confirm whether content is being delivered. This approach can be compared to waiting until your car breaks down before addressing its issues. Having so little visibility into content quality that major errors cannot be spotted and prevented before they happen is unsustainable.

Having access to a consistent metric for diagnosis of the delivery chain and reducing the time needed to fix complex issues is crucial for providing users with the QoS they demand and deserve. 

The solution to this problem? Cost-effective end-to-end visualisation and surveillance of the entire streaming delivery chain.

The challenge of scaling QoS monitoring

The two challenges in scaling QoS monitoring that will have any operator nodding in recognition are training and cost.

With streaming continuing to break records every year, monitoring solutions must offer the flexibility to scale and evolve without costs rising in step. However, many operators rely on a set of different, often disconnected solutions. This not only makes it difficult for operations teams to see the big picture but also slows down innovation. In this set-up, any update of existing solutions or introduction of new ones requires team members to be retrained and new integrations to be established. In teams that are already working against the clock to solve time-sensitive issues, spending hours training on complex new tools is simply not realistic. Both cost time and money and delay the roll-out of improvements. In a crowded arena where everyone is fighting for market share, this leaves broadcasters constantly behind when they ought to be at the forefront of QoS.

An effective solution to this is to implement a monitoring harness that easily plugs in different technologies to provide connection points and gather data in one place via APIs. 

QoS monitoring best practices for reaching near-broadcast quality


Ensure high QoS with active monitoring and passive in-player monitoring

There are two prevalent types of monitoring: passive in-player monitoring and active synthetic monitoring, such as Touchstream’s own VirtualNOC content availability monitoring. Each has its benefits when it comes to achieving near-broadcast quality.

Passive in-player monitoring software has trackers inserted into the player module to collect data on content playback on the client device in real time. This information is then sent back to an online collection point for processing and is usually only active during video playback. This type of monitoring gives you the advantage of seeing your viewers’ playback performance in real time. If an error occurs, it shows how many viewers are affected and the extent of the issue.
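
As a rough illustration of how such a tracker works, the sketch below hooks standard HTML5 video events and beacons them to a collector. It's a simplified example, not any vendor's actual SDK, and the collector endpoint is a placeholder:

```typescript
// Simplified passive in-player tracker: listens to standard HTML5 video
// events and beacons them to a (hypothetical) collection endpoint.
function attachPlaybackTracker(video: HTMLVideoElement, sessionId: string) {
  const beacon = (event: string, detail: Record<string, unknown> = {}) => {
    const payload = JSON.stringify({
      sessionId,
      event,
      positionSec: video.currentTime,
      timestamp: Date.now(),
      ...detail,
    });
    // sendBeacon survives page unloads better than fetch for telemetry.
    navigator.sendBeacon("https://collector.example.com/qoe", payload);
  };

  video.addEventListener("playing", () => beacon("playing"));
  video.addEventListener("waiting", () => beacon("rebuffer_start")); // buffering
  video.addEventListener("error", () =>
    beacon("fatal_error", { code: video.error?.code })
  );
}
```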

The drawback is that passive in-player monitoring operates only when an end-user is viewing the content. This means you're only notified of problems once users are experiencing them too, or have been for some time. In addition, passive in-player monitoring doesn’t reveal the root cause of an issue, which could originate in any part of the streaming chain. You'll never know whether the problem was caused at the origin, within the encoder, the CDN, or by the end user's ISP or Wi-Fi router.

Active monitoring continuously simulates a viewer playing a broadcaster's content from a specific location so that the stream can be tested 24/7. The sequence a player goes through during video playback is recorded and then executed repeatedly by automation, allowing the data to be collected centrally for processing.

With active monitoring, end-users don’t have to be viewing content for issues to be detected. As a result, problems can be seen early and fixed before they have a lasting impact on a broadcaster’s QoE. Touchstream’s active monitoring, for example, checks every bitrate of every channel and encoding format, and all are monitored from high-quality points of presence (PoPs) on diverse transit networks.
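
To illustrate the principle (this is a bare-bones sketch, not Touchstream's implementation; the playlist URL is a placeholder), a synthetic probe might repeatedly fetch an HLS playlist and its newest segment to confirm the stream is alive, whether or not anyone is watching:

```typescript
// Simplified synthetic probe: poll an HLS media playlist and fetch its
// newest segment, mimicking the request pattern of a real player.
async function probeHlsOnce(playlistUrl: string): Promise<boolean> {
  const res = await fetch(playlistUrl);
  if (!res.ok) return false;

  const playlist = await res.text();
  // Segment URIs are the non-comment lines of a media playlist.
  const segments = playlist
    .split("\n")
    .filter(line => line.trim() && !line.startsWith("#"));
  if (segments.length === 0) return false;

  const newest = new URL(segments[segments.length - 1], playlistUrl);
  const seg = await fetch(newest.toString());
  return seg.ok;
}

// Run the probe around the clock, whether or not anyone is viewing.
setInterval(async () => {
  const alive = await probeHlsOnce("https://example.com/live/stream.m3u8");
  if (!alive) console.error("ALERT: stream check failed", new Date());
}, 10_000);
```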

When it comes to active monitoring vs passive in-player monitoring, it’s not quite as simple as choosing one over the other. The best approach is to deploy both, as the two approaches complement rather than compete with each other. By leveraging the benefits of each, you can monitor as many points as possible across an OTT live streaming workflow, ensuring a QoE that rivals even broadcast.

Build in redundancy with multiple source encoders

Another useful way to prevent playback disruption in the first place is to build redundancy into the production workflow using multiple source encoders. With automated failover, if an encoder stops working or disconnects from the transcoder, another source can continue to supply the video. By monitoring both redundant paths, operators can fail over automatically or very quickly, and repair the failed path to ensure no impact on viewers.
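
The control logic behind automated failover can be sketched as follows. The health-check and input-switching functions are hypothetical stand-ins for whatever APIs your encoders and transcoder actually expose:

```typescript
// Illustrative automated failover between a primary and backup encoder.
type EncoderId = "primary" | "backup";

async function isEncoderHealthy(encoder: EncoderId): Promise<boolean> {
  // e.g. check that the contribution feed is present and recent.
  return true; // placeholder for a real health check
}

async function switchTranscoderInput(encoder: EncoderId): Promise<void> {
  console.log(`Switching transcoder input to ${encoder} encoder`);
}

let active: EncoderId = "primary";

// Watch both redundant paths; fail over the moment the active one dies,
// and raise an alert so the broken path can be repaired.
setInterval(async () => {
  if (await isEncoderHealthy(active)) return;
  console.error(`ALERT: ${active} encoder unhealthy, attempting failover`);
  const standby: EncoderId = active === "primary" ? "backup" : "primary";
  if (await isEncoderHealthy(standby)) {
    await switchTranscoderInput(standby);
    active = standby;
  }
}, 5_000);
```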

Provide low latency via first-rate CMAF

A key expectation for viewers is low latency. Broadcast glass-to-glass latency is typically 8-10 seconds. Applying a best-in-class low-latency CMAF solution can help OTT live streaming reach a similar latency. Chunked CMAF provides the lowest latency possible but makes the process more prone to error, meaning that monitoring and redundancy become doubly important.
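
Low-latency CMAF works by delivering each segment as a series of small chunks over HTTP chunked transfer encoding, so the player can start decoding before the segment has finished encoding. Here's a minimal sketch (the segment URL is a placeholder) of reading those chunks as they arrive using the Fetch Streams API:

```typescript
// Read chunks of an in-progress CMAF segment as they arrive instead of
// waiting for the whole file to download.
async function readCmafChunks(segmentUrl: string) {
  const res = await fetch(segmentUrl);
  if (!res.body) throw new Error("Streaming response body not supported");

  const reader = res.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // In a real player, each chunk (one or more CMAF fragments) would be
    // appended to a MediaSource SourceBuffer for immediate decoding.
    console.log(`Received chunk of ${value.byteLength} bytes`);
  }
}
```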

Obtain full QoS visibility with end-to-end monitoring in one dashboard

It’s important to bring all data together visually in the monitoring process, something that has historically been difficult given the complex workflow of OTT live streaming. For instance, Touchstream’s end-to-end live stream monitoring offers one dashboard for the entire delivery chain that can integrate data from external sources, including other monitoring tools, allowing a quick response and a 24/7 view of issues that could impact QoE.

In OTT live streaming, time is everything. Failing to pinpoint problems and fix them fast means you risk losing viewers to any one of a multitude of streaming alternatives. Thankfully, today there are enough options for high-quality redundant encoders coupled with specialist monitoring to track these problems and, crucially, solve them even before they become an on-screen issue for the viewer. 

Ensure flexibility and scalability with a monitoring harness

For many operators, dealing with constantly evolving streaming technologies as well as scaling challenges requires painful and resource-heavy adjustments. With a monitoring harness, you create a framework for gathering and visualising QoS monitoring data that can easily be updated and evolved. Whether you need to replace, upgrade, or add video streaming technology stack components, you simply plug them in via APIs. Meanwhile, the monitoring harness and its core functionalities, data gathering and visualisation, remain the same. The investment needed to adjust your monitoring solution and retrain your staff when your technology stack changes is kept to a minimum.
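
As a sketch of the idea (all names here are invented for illustration), the harness defines one stable interface that every monitoring source implements, so swapping or adding a tool means writing a small adapter rather than rebuilding the dashboard or retraining the team:

```typescript
// Sketch of a monitoring harness: every data source, old or new, is
// wrapped in an adapter that implements one stable interface.
interface MetricPoint {
  source: string;      // which tool produced this
  metric: string;      // e.g. "rebuffer_ratio", "segment_errors"
  value: number;
  timestampMs: number;
}

interface MonitoringSource {
  name: string;
  fetchMetrics(): Promise<MetricPoint[]>;
}

class MonitoringHarness {
  private sources: MonitoringSource[] = [];

  // Plugging in a new tool is one line; nothing else changes.
  register(source: MonitoringSource): void {
    this.sources.push(source);
  }

  // Core functionality stays the same regardless of what's plugged in:
  // gather everything into one place for a single dashboard.
  async collectAll(): Promise<MetricPoint[]> {
    const batches = await Promise.all(this.sources.map(s => s.fetchMetrics()));
    return batches.flat();
  }
}
```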

👉 You may like: Monitoring harness white paper

Quality of service monitoring has become a key factor for streaming operators’ success. Viewers are faced with a wide range of alternatives, so anything less than a smooth, low-latency, error-free experience won’t do. The technologies and frameworks needed to answer these high QoS demands are already out there; you just need to bring them together in one place via a monitoring harness.

Want to know more about how Touchstream can help you optimise QoS monitoring? Contact us here.