Challenge on Quality Assessment of Compressed UGC Videos

Description

Video Quality Assessment (VQA) has been an active research field in both academia and industry over the last two decades. Recently, the growing popularity of video sharing applications and video conferencing systems has posed new challenges to the VQA field. Indeed, User Generated Content (UGC) videos in these applications exhibit quite different characteristics from Professionally Generated Content (PGC) videos. UGC videos are commonly captured by amateurs using smartphone cameras under various shooting conditions. The captured videos are often processed with special effects and aesthetic filters before being compressed and uploaded to video sharing applications.

With the assumption that pristine PGC videos possess perfect quality, Full-Reference (FR) VQA metrics predict the quality of processed videos by measuring the quality degradation between the reference and processed videos. However, this assumption is generally not valid for UGC videos, and there is a need to develop new techniques to close the gap between PGC and UGC videos.

This challenge focuses on estimating the quality of H.264/AVC compressed UGC videos. There are two tracks, depending on whether any information from the reference is used:

  1. The MOS track: an algorithm predicts the Mean Opinion Score (MOS) of compressed clips. Please note that in this track the test set includes both the "references" and their compressed versions.

  2. The DMOS track: the Differential Mean Opinion Score (DMOS) between the reference and the compressed clips should be predicted (an illustrative sketch follows this list).
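For clarity, the short sketch below illustrates how the two targets are conventionally related. The numeric values are made up, and the sign convention (DMOS as reference MOS minus compressed MOS) is an assumption; the released MOS/DMOS files define the exact convention used in this challenge.

    # Hypothetical example relating the two prediction targets, assuming the
    # conventional definition DMOS = MOS(reference) - MOS(compressed).
    mos_reference = 4.2   # made-up MOS of a reference clip
    mos_compressed = 3.1  # made-up MOS of one of its compressed versions

    dmos = mos_reference - mos_compressed  # quality degradation due to compression
    print(f"DMOS = {dmos:.1f}")            # prints: DMOS = 1.1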

Dates

The dates are tentative and may change in accordance with the main conference schedule.

  1. Registration (simply send an email to the contact below): anytime before model submission.

  2. Release of training and validation sets: January 1, 2021.

  3. Submission of models for evaluating on test set: April 29, 2021.

  4. Paper submission: May 15, 2021.

  5. Announcement of results on test set: May 31, 2021.

  6. Author notification: June 10, 2021.

  7. Camera-ready submission: June 16, 2021.

Dataset

We provide a UGC video quality dataset for this challenge. The dataset contains 6400 video clips for training and 800 video clips for validation. We hold out an additional 800 videos as the test set, which is not available to the public. The reference videos are collected from practical video sharing applications. Each reference video is compressed with H.264/AVC into seven bitstreams to cover a wide range of compression levels. We have conducted subjective tests to collect MOS scores for both the reference and compressed clips. Each clip was rated by at least 50 volunteers.

Participants will submit a trained model for the organizers to benchmark its performance on the test set. Training on additional data is allowed as long as it is mentioned in the submitted manuscript.

The MOS/DMOS files can be downloaded directly via the links below. However, the size of the dataset is around 34 GB, which is cumbersome to download via a single link, so we have prepared a shell script that fetches the videos with the "wget" command. A Python alternative is sketched after the list.

  1. Videos

  2. MOS

  3. DMOS
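If the official shell script is not convenient, the following minimal Python sketch does the same job. It assumes the download links have been saved to a plain-text file with one URL per line; the file name "video_urls.txt" and the output directory are hypothetical.

    # Minimal downloader sketch (Python 3). Assumes "video_urls.txt" lists one
    # download URL per line; parts already on disk are skipped so the script
    # can be re-run after an interrupted download.
    import os
    import urllib.request

    URL_LIST = "video_urls.txt"   # hypothetical list of the challenge links
    OUT_DIR = "ugc_videos"

    os.makedirs(OUT_DIR, exist_ok=True)
    with open(URL_LIST) as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        dest = os.path.join(OUT_DIR, os.path.basename(url))
        if os.path.exists(dest):
            print(f"skipping {dest} (already downloaded)")
            continue
        print(f"downloading {url}")
        urllib.request.urlretrieve(url, dest)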

Submission

The organizers will evaluate the performance of submitted models on the test set. Participants are expected to submit their source code or an executable compatible with typical Linux operating systems. Make sure to give instructions for setting up the environment.
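As a rough illustration only, a submission could expose a single scoring entry point such as the hypothetical script below; the script name, arguments, and the constant placeholder score are assumptions, not an official submission format.

    # predict.py -- hypothetical entry point the organizers could invoke on Linux.
    import argparse

    def predict_quality(video_path: str) -> float:
        """Stub for the participant's model; replace with real inference."""
        return 3.0  # constant placeholder score

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Score one compressed UGC clip")
        parser.add_argument("video", help="path to the input video file")
        args = parser.parse_args()
        print(f"{args.video}\t{predict_quality(args.video):.4f}")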

We will require participants to submit, through the ICME 2021 website, a paper that describes the technical details of the proposed method.

Evaluation

We identify a method as No-Reference (NR) if it predicts the quality of a compressed clip without any information from the reference. An NR method is suitable for the absolute score (MOS) track. Please note that an NR metric will be asked to evaluate the scores of both the reference and its compressed versions. If any information from the reference is used, the model will compete in the relative score (DMOS) track.

The submitted models will be evaluated by computing several commonly used metrics between subjective and objective scores on the test set. We will consider the following metrics: 1) Pearson Linear Correlation Coefficient (PLCC), 2) Spearman Rank Order Correlation Coefficient (SROCC), 3) Kendall Rank Order Correlation Coefficient (KROCC), and 4) Root Mean Squared Error (RMSE). The final decision will also take running time, model complexity, and CPU/GPU load into consideration.
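These metrics can be computed, for example, with numpy and scipy. The sketch below assumes the subjective and objective scores are aligned one-dimensional arrays, and it omits the nonlinear regression that benchmarks sometimes apply before computing PLCC and RMSE.

    # Sketch of the four evaluation metrics using numpy and scipy.stats.
    import numpy as np
    from scipy import stats

    def evaluate(subjective, objective):
        subjective = np.asarray(subjective, dtype=float)
        objective = np.asarray(objective, dtype=float)
        plcc, _ = stats.pearsonr(subjective, objective)     # linear correlation
        srocc, _ = stats.spearmanr(subjective, objective)   # monotonic (rank) correlation
        krocc, _ = stats.kendalltau(subjective, objective)  # pairwise rank agreement
        rmse = float(np.sqrt(np.mean((subjective - objective) ** 2)))
        return {"PLCC": plcc, "SROCC": srocc, "KROCC": krocc, "RMSE": rmse}

    # Example with made-up scores:
    print(evaluate([4.1, 3.2, 2.5, 1.8], [3.9, 3.4, 2.2, 2.0]))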

Awards

We will distribute a total of $2,300 in awards among the winners in each track.

  1. Winner: $1000

  2. Runner-up: $800

  3. 3rd Place: $500

According to the Grand Challenge policies, the organizers are not allowed to participate in the challenge. Teams from the same institutes as the organizers are welcome to participate but are not eligible for prizes.

Organizers

  1. Haiqiang Wang, Peng Cheng Laboratory, China.

  2. Gary Li, Peking University (Shenzhen), China.

  3. Shan Liu, Tencent Media Lab, China.

  4. C.-C. Jay Kuo, University of Southern California, USA.

Contact

Haiqiang Wang

wanghq03@pcl.ac.cn