Video Quality Assessment (VQA) has been an active research field in both academia and industry over the last two decades. Recently, the growing popularity of video sharing applications and video conferencing systems is posing new challenges to the VQA field. Indeed, User Generated Content (UGC) videos in these applications exhibit quite different characteristics than Professionally Generated Content (PGC) videos. UGC videos are commonly captured by amateurs using smartphone cameras under various shooting conditions. The captured videos are often processed with special effects and aesthetic filters before being compressed and uploaded to video sharing applications.
Under the assumption that pristine PGC videos possess perfect quality, Full-Reference (FR) VQA metrics predict the quality of processed videos by measuring the quality degradation between the reference and processed videos. However, this assumption is generally not valid for UGC videos, and there is a need to develop new techniques to close the gap between PGC and UGC videos.
This challenge focuses on estimating the quality of H.264/AVC compressed UGC videos. There are two tracks, depending on whether any information from the reference is used:
In the MOS track, an algorithm predicts the Mean Opinion Score (MOS) of compressed clips. Please note that in this track the test set includes both the "references" and their compressed versions.
In the DMOS track, an algorithm predicts the Differential Mean Opinion Score (DMOS) between the reference and its compressed clips.
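As a rough illustration of the quantity to be predicted (the challenge's exact protocol may differ), DMOS is commonly defined as the difference between the MOS of the reference and the MOS of the compressed clip, so that a larger value indicates a larger perceived degradation:

```python
def dmos(mos_reference: float, mos_compressed: float) -> float:
    """One common convention (an assumption, not the official challenge
    formula): DMOS is the drop in MOS caused by compression."""
    return mos_reference - mos_compressed

# Example: a reference rated 4.5 whose compressed version is rated 3.2
# yields a DMOS of 1.3.
print(dmos(4.5, 3.2))
```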
The dates are tentative and will change in accordance with the main conference.
Registration (simply send an email to the contact below): anytime before model submission.
Release of training and validation sets: January 1, 2021.
Submission of models for evaluating on test set: April 29, 2021.
Paper submission: May 15, 2021.
Announcement of results on test set: May 31, 2021.
Author notification: June 10, 2021.
Camera-ready submission: June 16, 2021.
We provide a UGC video quality dataset for this challenge. The dataset contains 6400 video clips for training and 800 video clips for validation. We hold out an additional 800 videos as a test set, which is not available to the public. The reference videos are collected from practical video sharing applications. Each reference video is compressed with H.264/AVC into seven bitstreams to cover a wide range of compression levels. We have conducted subjective tests to collect MOS scores for both the reference and compressed clips. Each clip is rated by at least 50 volunteers.
Participants will submit a trained model for the organizers to benchmark its performance on the test set. Training on additional data is allowed as long as it is mentioned in the submitted manuscript.
The MOS/DMOS files can be downloaded directly via the following links. However, the size of the dataset is around 34 GB, which is cumbersome to download via a single link, so we have prepared a shell script that downloads it with the "wget" command.
The organizers will evaluate the performance of submitted models on the test set. Participants are expected to submit their source code or executables compatible with typical Linux operating systems. Make sure to include instructions for setting up the environment.
We will require participants to submit a paper that describes technical details of the proposed method through the ICME 2021 website.
A method is classified as No-Reference (NR) if it predicts the quality of a compressed clip without using any information from the reference. NR methods are suitable for the MOS (absolute score) track. Please note that an NR metric will be asked to score both the reference and its compressed versions. If any information from the reference is used, the model will compete in the DMOS (relative score) track.
The submitted models will be evaluated by computing several commonly used metrics between the subjective and objective scores on the test set. We will consider the following metrics: 1) Pearson Linear Correlation Coefficient (PLCC), 2) Spearman Rank Order Correlation Coefficient (SROCC), 3) Kendall Rank Order Correlation Coefficient (KROCC), and 4) Root Mean Squared Error (RMSE). The final decision will also take running time, model complexity, and CPU/GPU load into consideration.
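The four metrics above can be computed with standard SciPy routines, as in the following sketch (a plain version; note that some benchmarks additionally fit a nonlinear logistic mapping between objective and subjective scores before computing PLCC and RMSE, which is not shown here):

```python
import numpy as np
from scipy import stats

def evaluate(mos, pred):
    """Compute PLCC, SROCC, KROCC, and RMSE between subjective scores
    (mos) and objective predictions (pred)."""
    mos = np.asarray(mos, dtype=float)
    pred = np.asarray(pred, dtype=float)
    plcc, _ = stats.pearsonr(pred, mos)     # linear correlation
    srocc, _ = stats.spearmanr(pred, mos)   # rank-order correlation
    krocc, _ = stats.kendalltau(pred, mos)  # pairwise rank agreement
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))
    return {"PLCC": plcc, "SROCC": srocc, "KROCC": krocc, "RMSE": rmse}

# Example with hypothetical scores: predictions that preserve the rank
# order of the MOS values yield SROCC and KROCC of 1.0.
print(evaluate([1, 2, 3, 4, 5], [1.0, 2.1, 2.9, 4.2, 4.8]))
```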
We will distribute a total of $2,300 in awards among the winners of each track.
3rd Place: $500
According to the Grand Challenge policies, the organizers are not allowed to participate in the challenge. Teams from the same institutions as the organizers are welcome to participate but are not eligible for prizes.
Haiqiang Wang, Peng Cheng Laboratory, China.
Gary Li, Peking University (Shenzhen), China.
Shan Liu, Tencent Media Lab, China.
C.-C. Jay Kuo, University of Southern California, USA.