How Prime Video uses machine learning to ensure video quality

Streaming video can suffer from defects introduced during recording, encoding, packaging, or transmission, so most subscription video services — such as Amazon Prime Video — continually assess the quality of the content they stream.

Manual content review — known as eyes-on-glass testing — doesn’t scale well, and it presents its own challenges, such as variance in reviewers’ perceptions of quality. More common in the industry is the use of digital signal processing to detect anomalies in the video signal that frequently correlate with defects.

The initial version of Amazon Prime Video’s block corruption detector uses a residual neural network to produce a map indicating the probability of corruption at particular image locations, binarizes that map, and computes the ratio between the corrupted area and the total image area.

Three years ago, the Video Quality Analysis (VQA) group in Prime Video started using machine learning to identify defects in content captured from devices such as gaming consoles, TVs, and set-top boxes, in order to validate new application releases or offline changes to encoding profiles. More recently, we've been applying the same techniques to problems such as real-time quality monitoring of our thousands of channels and live events and to analyzing new catalogue content at scale.

Our team at VQA trains computer vision models to watch video and spot issues that may compromise the customer viewing experience, such as blocky frames, unexpected black frames, and audio noise. This enables us to process video at the scale of hundreds of thousands of live events and catalogue items.

An interesting challenge we face is the lack of positive cases in training data due to the extremely low prevalence of audiovisual defects in Prime Video offerings. We tackle this challenge with a dataset that simulates defects in pristine content. After using this dataset to develop detectors, we validate that the detectors transfer to production content by testing them on a set of actual defects.
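To illustrate the idea of simulating defects in pristine content, here is a minimal NumPy sketch that injects impulsive clicks into clean audio. The function name, click count, and amplitude are illustrative assumptions, not Prime Video's actual tooling:

```python
import numpy as np

def add_clicks(audio, sr, n_clicks=5, click_amp=0.9, seed=0):
    """Inject short impulsive clicks into clean audio.
    (Illustrative defect simulator; all parameters are assumptions.)"""
    rng = np.random.default_rng(seed)
    impaired = audio.copy()
    positions = rng.integers(0, len(audio), size=n_clicks)
    for p in positions:
        # Add a large positive or negative impulse, clipped to the valid range:
        impaired[p] = np.clip(impaired[p] + rng.choice([-1.0, 1.0]) * click_amp,
                              -1.0, 1.0)
    return impaired, positions

sr = 16000
t = np.arange(sr) / sr
clean = 0.3 * np.sin(2 * np.pi * 440 * t)  # one second of a 440 Hz tone
impaired, positions = add_clicks(clean, sr)
```

The clicks show up as sharp spikes in the waveform and as broadband vertical streaks in the spectrogram, as in the figures below.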

An example of how we introduce audio clicks into clean audio: waveforms (top) and spectrograms (bottom) of the clean audio and of the impaired audio with artificial clicks added.

We have built detectors for 18 different types of defect, including video freezes and stutters, video tearing, synchronization issues between audio and video, and problems with caption quality. Below, we look closely at three examples of defects: block corruption, audio artifacts, and audiovisual-synchronization problems.

Block corruption

One disadvantage of using digital signal processing for quality analysis is that it can have trouble distinguishing certain types of content from content with defects. For example, to a signal processor, crowd scenes or scenes with high motion can look like scenes with block corruption, in which impaired transmission displaces blocks of pixels within the frame or causes blocks of pixels to take on a single color value.

An example of block corruption

To detect block corruption, we use a residual neural network, a network whose skip connections let higher layers learn corrections (the residual error) to the outputs of the layers below. We replace the final layer of a ResNet18 network with a 1×1 convolution (conv6 in the network diagram).

The architecture of the block corruption detector.

The output of this layer is a 2-D map in which each element is the probability of block corruption in a particular image region. The dimensions of the map depend on the size of the input image: in the network diagram, a 224 x 224 x 3 image passes to the network, and the output is a 7 x 7 map. In the example below, we pass an HD image to the network, and the resultant map is 34 x 60 elements.
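As a rough sketch, a 1×1 convolution head over the final feature map is just a per-location linear map across channels followed by a sigmoid. Here it is in NumPy with illustrative shapes (512 channels and a 7 x 7 map, matching a 224 x 224 input to ResNet18); the random weights are placeholders, not the trained model:

```python
import numpy as np

def conv1x1_head(features, w, b):
    """Apply a 1x1 convolution plus sigmoid to a C x H x W feature map,
    yielding an H x W map of per-region corruption probabilities."""
    # A 1x1 convolution is a per-location linear map across channels:
    logits = np.tensordot(w, features, axes=([0], [0])) + b  # shape (H, W)
    return 1.0 / (1.0 + np.exp(-logits))                     # sigmoid

rng = np.random.default_rng(0)
feats = rng.standard_normal((512, 7, 7))  # stand-in for ResNet18 features
w = 0.01 * rng.standard_normal(512)       # placeholder 1x1-conv weights
prob_map = conv1x1_head(feats, w, b=0.0)  # 7 x 7 probability map
```

Because the head is fully convolutional, feeding it a larger feature map (from an HD frame) simply yields a larger output map, with no change to the weights.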

In the initial version of this tool, we binarized the map and calculated the corrupted-area ratio as corruptionArea = areaPositive/totalArea. If this ratio exceeded some threshold (0.07 proved to work well), then we marked the frame as having block corruption. (See animation, above.)
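The thresholding rule above can be sketched as follows; the 0.07 area-ratio threshold comes from the article, while the binarization cutoff of 0.5 is an assumption:

```python
import numpy as np

THRESHOLD_PROB = 0.5   # binarization cutoff (assumption)
THRESHOLD_AREA = 0.07  # corrupted-area ratio threshold from the article

def has_block_corruption(prob_map):
    """Binarize the probability map and flag the frame if the
    corrupted-area ratio (areaPositive / totalArea) exceeds the threshold."""
    binary = prob_map >= THRESHOLD_PROB
    corruption_area = binary.sum() / binary.size
    return bool(corruption_area > THRESHOLD_AREA)

# A 34 x 60 map, as produced for an HD input frame:
clean_map = np.full((34, 60), 0.1)
corrupt_map = clean_map.copy()
corrupt_map[:10, :20] = 0.9  # a corrupted region covering ~10% of the frame
```

With these values, `has_block_corruption(corrupt_map)` flags the frame, since 200 of 2,040 map elements (about 9.8% of the area) exceed the binarization cutoff.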

In the current version of the tool, however, we move the decision function into the model, so it’s learned jointly with the feature extraction.

Audio artifact detection

“Audio artifacts” are unwanted sounds in the audio signal, which may be introduced during recording or by data compression; in the latter case, they are the audio equivalent of block corruption. Sometimes, however, artifacts are introduced deliberately, for creative reasons.

To detect audio artifacts in video, we use a no-reference model, meaning that during training, it doesn’t have access to clean audio as a standard of comparison. The model, which is based on a pretrained audio neural network, classifies a one-second audio segment as either no defect, audio hum, audio hiss, audio distortion, or audio clicks.
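As an illustration of segment-level classification, here is a sketch that splits audio into one-second segments and scores each with a stand-in linear classifier over crude spectral-band features. The production model is a pretrained audio neural network; the features, weights, and class names here are all placeholders:

```python
import numpy as np

CLASSES = ["no_defect", "hum", "hiss", "distortion", "clicks"]

def segment_audio(audio, sr):
    """Split audio into non-overlapping one-second segments."""
    n = len(audio) // sr
    return audio[: n * sr].reshape(n, sr)

def classify_segment(segment, weights, bias):
    """Score one segment with a stand-in linear classifier over crude
    spectral-band features (placeholder for the pretrained audio network)."""
    spectrum = np.abs(np.fft.rfft(segment))
    feats = np.log1p(np.array([
        spectrum[:100].mean(),      # low band
        spectrum[100:1000].mean(),  # mid band
        spectrum[1000:].mean(),     # high band
    ]))
    scores = weights @ feats + bias
    return CLASSES[int(np.argmax(scores))]

rng = np.random.default_rng(0)
weights = rng.standard_normal((5, 3))  # placeholder parameters
bias = rng.standard_normal(5)

sr = 16000
audio = rng.standard_normal(int(2.5 * sr))  # 2.5 s of noise
segments = segment_audio(audio, sr)
labels = [classify_segment(s, weights, bias) for s in segments]
```

Note that the model is no-reference: each segment is classified on its own, with no clean audio to compare against.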

Currently, the model achieves a balanced accuracy of 0.986 on our proprietary simulated dataset. More on the model can be found in our paper “A no-reference model for detecting audio artifacts using pretrained audio neural networks”, which we presented at the 2022 IEEE Winter Conference on Applications of Computer Vision (WACV).

An example of video with distorted audio

Audio/video sync detection

Another common quality issue is the AV-sync, or lip-sync, defect, in which the audio is out of step with the video. Issues during broadcasting, reception, and playback can knock the audio and video out of sync.

To detect lip sync defects, we have built a detector — which we call LipSync — based on the SyncNet architecture from the University of Oxford.

The input to the LipSync pipeline is a four-second video fragment. It passes to a shot detection model, which identifies shot boundaries; a face detection model, which identifies the faces in each frame; and a face-tracking model, which identifies faces in successive frames as belonging to the same person.

Preprocessing pipeline to extract face tracks — four-second clips centered on a single face.

The outputs of the face-tracking model (known as face tracks) and the corresponding audio then pass to the SyncNet model, which aggregates predictions across face tracks to decide whether the clip is in sync, out of sync, or inconclusive, meaning either that no faces or face tracks were detected or that in-sync and out-of-sync predictions are equally numerous.
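The clip-level aggregation described above can be sketched in a few lines of Python; the label names and the tie-breaking rule are assumptions based on the description:

```python
from collections import Counter

def aggregate_sync(track_predictions):
    """Aggregate per-face-track sync predictions into a clip-level verdict.
    Label names and the tie rule are assumptions based on the description."""
    if not track_predictions:
        return "inconclusive"  # no faces / face tracks detected
    counts = Counter(track_predictions)
    if counts["in_sync"] > counts["out_of_sync"]:
        return "in_sync"
    if counts["out_of_sync"] > counts["in_sync"]:
        return "out_of_sync"
    return "inconclusive"      # equally many in-sync and out-of-sync predictions

verdict = aggregate_sync(["in_sync", "in_sync", "out_of_sync"])
```

Aggregating over several face tracks makes the verdict more robust than judging any single face in isolation.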

Future work

These are just a few of the detectors in our arsenal. In 2022, we continue to refine and improve our algorithms. In ongoing work, we're using active learning (which algorithmically selects particularly informative training examples) to continually retrain our deployed models.

To generate synthetic datasets, we are researching EditGAN, a new method that allows more precise control over the outputs of generative adversarial networks (GANs). We are also scaling our defect detectors with custom cloud-native applications on AWS and Amazon SageMaker, so that we can monitor all live events and video channels.




