Screen Content Coding Makes HEVC the Flexible Standard for Any Video Source
San Diego, CA, USA − The 114th MPEG meeting was held from 22–26 February 2016
Powerful new HEVC tools improve compression of text, graphics, and animation
The 114th MPEG meeting marked the completion of the Screen Content Coding (SCC) extensions to HEVC – the High Efficiency Video Coding standard. This powerful set of tools augments the compression capabilities of HEVC to make it the flexible standard for virtually any type of video source content that is commonly encountered in our daily lives.
Screen content is video containing a significant proportion of rendered (moving or static) graphics, text, or animation rather than, or in addition to, camera-captured video scenes. The new SCC extensions of HEVC greatly improve the compression of such content. Example applications include wireless displays, news and other television content with text and graphics overlays, remote computer desktop access, and real-time screen sharing for video chat and video conferencing.
The technical development of the SCC extensions was performed by the MPEG and VCEG video coding joint team JCT-VC, following a joint Call for Proposals issued in February 2014.
CfP issued for technologies to orchestrate capture and consumption of media across multiple devices
At its 114th meeting, MPEG issued a Call for Proposals (CfP) for Media Orchestration. The CfP seeks submissions of technologies that will facilitate the orchestration of devices and media, both in time (advanced synchronization, e.g. across multiple devices) and in space, where the media may come from multiple capture devices and may be consumed by multiple rendering devices. One example application is the coordination of consumer electronics devices to record a live event. The CfP for Media Orchestration can be found at http://mpeg.chiariglione.org/meetings/114.
User Description framework helps recommendation engines deliver better choices
At the 114th meeting, MPEG completed a standards framework (ISO/IEC 21000-22) to facilitate the narrowing of big-data searches so that recommendation engines can deliver better, personalized, and relevant choices to users. Understanding the personal preferences of a user, and the context within which that user
is interacting with a given application, facilitates the ability of that application to better respond to individual user requests. Having that information provided in a standard and interoperable format enables application providers to more broadly scale their services to interoperate with other application providers. Enter MPEG User Description (MPEG-UD). The aim of MPEG User Description is to ensure interoperability among recommendation services, which take into account the user and his/her context when generating recommendations for the user. With MPEG-UD, applications can utilize standard descriptors for users (user descriptor), the context in which the user is operating (context descriptor), recommendations (recommendation descriptor), and a description of a specific recommendation service that could eventually be consumed by the user (service descriptor).
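The relationship between the four descriptor types can be sketched with a toy model. Note that the class names, fields, and recommendation logic below are illustrative assumptions only; the actual MPEG-UD standard defines these descriptors as XML schemas:

```python
from dataclasses import dataclass, field

@dataclass
class UserDescriptor:
    """Who the user is and what they prefer (illustrative fields)."""
    user_id: str
    preferred_genres: list = field(default_factory=list)

@dataclass
class ContextDescriptor:
    """The situation in which the user is operating (illustrative fields)."""
    location: str
    time_of_day: str

@dataclass
class ServiceDescriptor:
    """Describes a recommendation service the user could consume."""
    name: str
    endpoint: str

@dataclass
class RecommendationDescriptor:
    """What a service recommends, given a user and a context."""
    items: list

def recommend(user: UserDescriptor, context: ContextDescriptor) -> RecommendationDescriptor:
    # Hypothetical service logic: combine user preferences with context.
    if context.time_of_day == "evening":
        return RecommendationDescriptor([g + " movie" for g in user.preferred_genres])
    return RecommendationDescriptor(["news briefing"])
```

The point of standardizing the descriptors rather than the service logic is that any service consuming a `UserDescriptor` and `ContextDescriptor` can interoperate with any application producing them.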
Publish/Subscribe Application Format is finalized
The Publish/Subscribe Application Format (PSAF, ISO/IEC 23000-16) has reached the final milestone of FDIS at this MPEG meeting. The PSAF enables a communication paradigm where publishers do not communicate information directly to intended subscribers but instead rely on a service that mediates the relationship between senders and receivers. In this paradigm, Publishers create and store Resources and their descriptions, and send Publications; Subscribers send Subscriptions. Match Service Providers (MSP) receive and match Subscriptions with Publications and, when a Match has been found, send Notifications to users listed in Publications and Subscriptions. This paradigm is enabled by three other MPEG technologies which have also reached their final milestone: Contract Expression Language (CEL), Media Contract Ontology (MCO) and User Description (UD). A PSAF Notification is expressed as a set of UD Recommendations.
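The mediated communication paradigm can be illustrated with a minimal in-memory sketch. The class names and the topic-based matching rule are hypothetical simplifications for illustration; actual PSAF Publications, Subscriptions, and Notifications are expressed using the MPEG technologies named in this announcement:

```python
from dataclasses import dataclass, field

@dataclass
class Publication:
    publisher: str
    topic: str
    resource: str  # reference to the stored Resource

@dataclass
class Subscription:
    subscriber: str
    topic: str

class MatchServiceProvider:
    """Mediates between Publishers and Subscribers: receives Publications
    and Subscriptions, matches them, and records Notifications."""

    def __init__(self):
        self.publications = []
        self.subscriptions = []
        self.notifications = []  # (subscriber, resource) pairs

    def publish(self, pub: Publication):
        self.publications.append(pub)
        self._match()

    def subscribe(self, sub: Subscription):
        self.subscriptions.append(sub)
        self._match()

    def _match(self):
        # Simplified matching rule: a Publication matches a Subscription
        # when their topics are equal.
        for pub in self.publications:
            for sub in self.subscriptions:
                if pub.topic == sub.topic:
                    note = (sub.subscriber, pub.resource)
                    if note not in self.notifications:
                        self.notifications.append(note)
```

The essential property of the paradigm is visible here: the publisher never addresses the subscriber directly; the MSP alone decides who is notified.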
CEL is a language for expressing contracts regarding digital licenses, i.e. the complete business agreements between the parties. MCO is an ontology for representing contracts dealing with rights on multimedia assets and intellectual-property-protected content in general. A specific vocabulary is defined in a model extension to represent the most common rights and constraints in the audiovisual context. PSAF contracts between Publishers or Subscribers and MSPs are expressed in CEL or MCO.
Augmented Reality Application Format reaches FDIS status
At the 114th MPEG meeting, the 2nd edition of ARAF, MPEG’s Application Format for Augmented Reality (ISO/IEC 23000-13), reached FDIS status and will soon be published as an International Standard. The MPEG ARAF enables augmentation of the real world with synthetic media objects by combining multiple existing MPEG standards within a single specific application format addressing certain industry needs. In particular, ARAF comprises three components referred to as scene, sensor/actuator, and media. The target applications include geolocation-based services, image-based object detection and tracking, audio recognition and synchronization, mixed and augmented reality games, and real-virtual interactive scenarios.
Genome compression progresses toward standardization
At its 114th meeting, MPEG has progressed its exploration of genome compression toward formal standardization. The 114th meeting included a seminar to collect additional perspectives on genome data standardization, and a review of technologies that had been submitted in response to a Call for Evidence. The purpose of that CfE, which had been previously issued at the 113th meeting, was to assess whether new technologies could achieve better performance in terms of compression efficiency compared with currently used formats.
In all, 22 tools were evaluated. The results demonstrate that, by integrating several of these tools, it is possible to improve compression by up to 27% with respect to the best state-of-the-art tool. With this evidence, MPEG has issued a Draft Call for Proposals (CfP) on Genomic Information Representation and Compression. The Draft CfP targets technologies for compressing raw and aligned genomic data and metadata for efficient storage and analysis.
As demonstrated by the results of the Call for Evidence, improved lossless compression of genomic data beyond the current state-of-the-art tools is achievable by combining and further developing them. The call also addresses lossy compression of the metadata, which makes up the dominant volume of the resulting compressed data. The Draft CfP seeks lossy compression technologies that can provide higher compression performance without affecting the accuracy of analysis application results. Responses to the Genomic Information Representation and Compression CfP will be evaluated prior to the 116th MPEG meeting in October 2016 (in Chengdu, China). An ad hoc group, co-chaired by Martin Golobiewski, convenor of Working Group 5 of ISO TC 276 (the ISO committee for Biotechnology), and Dr. Marco Mattavelli (of MPEG), will coordinate the receipt and pre-analysis of submissions received in response to the call. Detailed results of the CfE and the presentations shown during the seminar will soon be available as MPEG documents N16137 and N16147 at: http://mpeg.chiariglione.org/meetings/114.
MPEG evaluates responses to CfP for Compact Descriptors for Video Analysis
MPEG has received responses from three consortia to its Call for Proposals (CfP) on Compact Descriptors for Video Analysis (CDVA). This CfP addresses compact (i.e., compressed) video description technologies for search and retrieval applications, i.e. for content matching in video sequences. Visual content matching includes matching of views of large and small objects and scenes that is robust to partial occlusions as well as changes in vantage point, camera parameters, and lighting conditions. The objects of interest include those that are planar or non-planar, rigid or partially rigid, and textured or partially textured. CDVA aims to enable efficient and interoperable design of video analysis applications over large databases, for example broadcasters’ archives or videos available on the Internet. It is envisioned that CDVA will provide a complementary set of tools to the suite of existing MPEG standards, such as the MPEG-7 Compact Descriptors for Visual Search (CDVS). Evaluation showed that sufficient technology was received such that a standardization effort has been started. The final standard is expected to be ready in 2018.
Workshop on 5G/Beyond UHD Media
A workshop on 5G/Beyond UHD Media was held on February 24th, 2016 during the 114th MPEG meeting. The workshop was organized to acquire relevant information about the context in which MPEG technology related to video, virtual reality and the Internet of Things will be operating in the future, and to review the status of mobile technologies with the goal of guiding future codec standardization activity.
Dr. James Kempf of Ericsson reported on the challenges that Internet of Things devices face in a mobile environment. Dr. Ian Harvey of FOX discussed content creation for Virtual Reality applications. Dr. Kent Walker of Qualcomm promoted the value of unbundling technologies and creating relevant enablers. Dr. Jongmin Lee of SK Telecom explained challenges and opportunities in next-generation mobile multimedia services. Dr. Sudhir Dixit of the Wireless World Research Forum reported on the next-generation mobile 5G network and its challenges in support of UHD media. Emmanuel Thomas of TNO showed trends in 5G and future media consumption, using media orchestration as an example. Dr. Charlie Zhang of Samsung Research America focused his presentation on 5G key technologies and recent advances.
Verification test complete for Scalable HEVC and MV-HEVC
MPEG has completed verification tests of SHVC, the scalable form of HEVC. These tests confirm the major savings that can be achieved by Scalable HEVC’s nested layers of data from which subsets can be extracted and used on their own to provide smaller coded streams. These smaller subsets can still be decoded with good video quality, as contrasted with the need to otherwise send separate “simulcast” coded video streams or add an intermediate “transcoding” process that would add substantial delay and complexity to the system.
The verification tests for SHVC showed that scalable HEVC coding can save an average of 40–60% in bit rate for the same quality as with simulcast coding, depending on the particular scalability scenario. SHVC includes capabilities for using a “base layer” with additional layers of enhancement data that improve the video picture resolution, the video picture fidelity, the range of representable colors, or the dynamic range of displayed brightness. Aside from a small amount of intermediate processing, each enhancement layer can be decoded by applying the same decoding process that is used for the original non-scalable version of HEVC. This compatibility, retained for the core of the decoding process, will reduce the effort needed by industry to support the new scalable scheme.
Further verification tests were also conducted on MV-HEVC, where the Multiview Main Profile exploits the redundancy between different camera views using the same layering concept as scalable HEVC, with the same property of each view-specific layer being decodable by the ordinary HEVC decoding process. The results demonstrate that for the case of stereo (two views) video, a data rate reduction of 30% when compared to simulcast (independent HEVC coding of the views), and more than 50% when compared to the multi-view version of AVC (which is known as MVC), can be achieved for the same video quality.
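The reported percentages can be understood as a simple rate comparison between one layered (scalable or multiview) stream and independently coded simulcast streams. The bit rates in the example below are illustrative assumptions, not measured values from the verification tests:

```python
def savings_vs_simulcast(layered_rate_mbps, layer_rates_mbps):
    """Percentage bit-rate saving of a single layered stream versus
    sending each layer as an independent simulcast stream."""
    simulcast_rate = sum(layer_rates_mbps)
    return 100.0 * (1.0 - layered_rate_mbps / simulcast_rate)

# Illustrative example: a base layer plus one enhancement layer carried
# in a single 6 Mbit/s scalable stream, versus simulcast streams of
# 4 Mbit/s and 6 Mbit/s for the two operating points.
saving = savings_vs_simulcast(6.0, [4.0, 6.0])
print(f"{saving:.0f}% saving vs. simulcast")  # 40% saving vs. simulcast
```

The same comparison applies to MV-HEVC, with the layers being camera views rather than quality or resolution tiers.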
Exploring new Capabilities in Video Compression Technology
Three years after finishing the first version of the HEVC standard, this MPEG meeting marked the first full meeting of a new partnership to identify advances in video compression technology. At its previous meeting, MPEG and ITU-T’s VCEG had agreed to join together to explore new technology possibilities for video coding that lie beyond the capabilities of the HEVC standard and its current extensions. The new partnership is known as the Joint Video Exploration Team (JVET), and the team is working to explore both incremental and fundamentally different video coding technology that shows promise to potentially become the next generation in video coding standardization. The JVET formation follows MPEG’s workshops and requirements-gathering efforts that have confirmed that video data demands are continuing to grow and are projected to remain a major source of stress on network traffic – even as additional improvements in broadband speeds arise in the years to come. The groundwork laid at the previous meeting for the JVET effort has already borne fruit. The team has developed a Joint Exploration Model (JEM) for simulation experiments in the area, and initial tests of the first JEM version have shown a potential compression improvement over HEVC by combining a variety of new techniques. Given sufficient further progress and evidence of practicality, it is highly likely that a new Call for Evidence or Call for Proposals will be issued in 2016 or 2017 toward converting this initial JVET exploration into a formal project for an improved video compression standard.
How to contact MPEG, learn more, and find other MPEG facts
To learn about MPEG basics, discover how to participate in the committee, or find out more about the array of technologies developed or currently under development by MPEG, visit MPEG’s home page at
http://mpeg.chiariglione.org. There you will find information publicly available from MPEG experts past and present including tutorials, white papers, vision documents, and requirements under consideration for new standards efforts. You can also find useful information in many public documents by using the search window.
Examples of tutorials that can be found on the MPEG homepage include tutorials for High Efficiency Video Coding, Advanced Audio Coding, Universal Speech and Audio Coding, and DASH, to name a few. A rich repository of white papers can also be found and continues to grow. You can find these papers and tutorials for many of MPEG’s standards freely available. Press releases from previous MPEG meetings are also available. Journalists who wish to receive MPEG press releases by email should contact Dr. Christian Timmerer at Christian.firstname.lastname@example.org.
Future MPEG meetings are planned as follows:
No. 115, Geneva, CH, 30 May – 03 June 2016
No. 116, Chengdu, CN, 17 – 21 October 2016
No. 117, Geneva, CH, 16 – 20 January 2017
No. 118, Hobart, AU, 03 – 07 April 2017