Thursday, December 14, 2017

MPEG news: a report from the 120th meeting, Macau, China

MPEG Meeting Plenary
The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.

The MPEG press release comprises the following topics:
  • Point Cloud Compression – MPEG evaluates responses to call for proposal and kicks off its technical work 
  • The omnidirectional media format (OMAF) has reached its final milestone 
  • MPEG-G standards reach Committee Draft for compression and transport technologies of genomic data 
  • Beyond HEVC – The MPEG & VCEG call to set the next standard in video compression 
  • MPEG adds better support for mobile environment to MMT 
  • New standard completed for Internet Video Coding 
  • Evidence of new video transcoding technology using side streams 

Point Cloud Compression

At its 120th meeting, MPEG analysed the technologies submitted by nine industry leaders as responses to the Call for Proposals (CfP) for Point Cloud Compression (PCC). These technologies address the lossless or lossy coding of 3D point clouds with associated attributes such as colour and material properties. A point cloud is an unordered set of points in 3D space, typically captured using various setups of multiple cameras, depth sensors, LiDAR scanners, etc., but it can also be generated synthetically; point clouds are already in use in several industries. They have recently emerged as representations of the real world enabling immersive forms of interaction, navigation, and communication. Point clouds are typically represented by extremely large amounts of data, which poses a significant barrier for mass-market applications. Thus, MPEG issued a Call for Proposals seeking technologies that reduce the amount of point cloud data for the intended applications. After a formal objective and subjective evaluation campaign, MPEG selected three technologies as starting points for the test models for static, animated, and dynamically acquired point clouds. A key conclusion of the evaluation was that state-of-the-art point cloud compression can be significantly improved by leveraging decades of 2D video coding tools and combining 2D and 3D compression technologies. Such an approach provides synergies with existing hardware and software infrastructures for rapid deployment of new immersive experiences.
Although the initial selection of technologies for point cloud compression was concluded at the 120th MPEG meeting, the meeting can also be seen as a kick-off for the scientific evaluation and further development of these technologies, including their optimization. It is expected that various scientific conferences will focus on point cloud compression and may issue calls for grand challenges, for example at IEEE ICME 2018.
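
The core idea behind combining 2D and 3D compression, mentioned above, can be sketched in a few lines: project the 3D points onto 2D images (geometry/depth plus texture), which mature 2D video codecs handle very well. The following toy orthographic projection is our own illustration, not the MPEG test model; all names are assumptions.

```python
# Illustrative sketch (not the MPEG PCC test model): project 3D points
# onto an XY pixel grid, keeping the nearest depth value per pixel.
# The resulting depth image (plus a matching colour image) could then
# be fed to a standard 2D video codec such as HEVC.

def project_to_depth_map(points, width, height):
    """Orthographically project (x, y, z) points; nearest z wins per pixel."""
    depth = [[None] * width for _ in range(height)]
    for x, y, z in points:
        px, py = int(x), int(y)
        if 0 <= px < width and 0 <= py < height:
            if depth[py][px] is None or z < depth[py][px]:
                depth[py][px] = z
    return depth

cloud = [(1.2, 0.7, 5.0), (1.4, 0.6, 3.0), (2.8, 2.1, 4.2)]
depth = project_to_depth_map(cloud, width=4, height=4)
```

Real video-based PCC additionally has to handle occlusions (multiple projection patches) and the reconstruction of points from the decoded images, which is where much of the research effort lies.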

Omnidirectional Media Format (OMAF)

The potential of virtual reality (VR) is increasingly understood, but market fragmentation caused by the lack of interoperable formats for the storage and delivery of such content stifles VR's market potential. MPEG's recently started project, referred to as the Omnidirectional Media Format (OMAF), reached Final Draft of International Standard (FDIS) at its 120th meeting. It includes
  • equirectangular projection and cubemap projection as projection formats; 
  • signalling of metadata required for interoperable rendering of 360-degree monoscopic and stereoscopic audio-visual data; and 
  • a selection of audio-visual codecs for this application. 
It also includes technologies to arrange video pixel data in numerous ways to improve compression efficiency and reduce the size of video, a major bottleneck for VR applications and services. The standard also includes technologies for the delivery of OMAF content with MPEG-DASH and MMT.
MPEG has defined a format comprising a minimal set of tools to enable interoperability among implementers of the standard. Various aspects are deliberately excluded from the normative parts to foster innovation leading to novel products and services. This enables us, researchers and practitioners, to experiment with these new formats in various ways and to focus on informative aspects, where competition is typically found. For example, efficient means for encoding and packaging omnidirectional/360-degree media content and its adaptive streaming, including support for (ultra-)low latency, will become a big issue in the near future.
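
To give a feel for the equirectangular projection listed above: it maps a spherical viewing direction linearly to pixel coordinates, longitude to the horizontal axis and latitude to the vertical axis. The following minimal sketch (our own simplification; function name and degree-based convention are assumptions, and OMAF itself defines the mapping more precisely) shows that mapping.

```python
# Minimal sketch of the equirectangular mapping: a viewing direction
# (yaw: -180..180 deg, pitch: -90..90 deg) maps linearly to pixel
# coordinates of a width x height panorama.

def equirect_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction to (u, v) pixel coordinates."""
    u = (yaw_deg / 360.0 + 0.5) * width   # longitude -> horizontal
    v = (0.5 - pitch_deg / 180.0) * height  # latitude -> vertical
    return u, v

# The centre of the panorama corresponds to looking straight ahead:
centre = equirect_pixel(0, 0, 3840, 1920)
```

The well-known drawback, and one reason OMAF also specifies cubemap projection and flexible pixel arrangements, is that this mapping heavily oversamples the poles.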

MPEG-G: Compression and Transport Technologies of Genomic Data

The availability of high-throughput DNA sequencing technologies opens new perspectives in the treatment of several diseases, making possible new global approaches in public health known as “precision medicine”. While routine DNA sequencing in the doctor's office is not yet current practice, medical centers have begun to use sequencing to identify cancer and other diseases and to find effective treatments. As DNA sequencing technologies produce extremely large amounts of data and related information, the ICT costs of storage, transmission, and processing are also very high. The MPEG-G standard addresses the problem of efficient and economical handling of genomic data by providing new
  • compression technologies (ISO/IEC 23092-2) and 
  • transport technologies (ISO/IEC 23092-1), 
both of which reached Committee Draft level at the 120th meeting.
Additionally, the Committee Drafts for
  • metadata and APIs (ISO/IEC 23092-3) and 
  • reference software (ISO/IEC 23092-4) 
are scheduled for the next MPEG meeting and the goal is to publish Draft International Standards (DIS) at the end of 2018.
This new type of (media) content, which requires compression and transport technologies, is emerging within the multimedia community at large and, thus, input is welcome.

Beyond HEVC – The MPEG & VCEG Call to set the Next Standard in Video Compression

The 120th MPEG meeting marked the first major step toward the next generation of video coding standards in the form of a joint Call for Proposals (CfP) with ITU-T SG16's VCEG. After two years of collaborative informal exploration studies and a gathering of evidence that successfully concluded at the 118th MPEG meeting, MPEG and ITU-T SG16 agreed to issue the CfP for future video coding technology with compression capabilities that significantly exceed those of the HEVC standard and its current extensions. They also formalized an agreement on the formation of a joint collaborative team called the “Joint Video Experts Team” (JVET) to work on the development of the new planned standard, pending the outcome of the CfP, which will be evaluated at the 122nd MPEG meeting in April 2018. To evaluate the proposed compression technologies, formal subjective tests will be performed using video material submitted by proponents in February 2018. The CfP includes the testing of technology for 360° omnidirectional video coding and the coding of content with high dynamic range and wide colour gamut, in addition to conventional standard-dynamic-range camera content. Anticipating a strong response to the call, a “test model” draft design is expected to be selected in 2018, with development of a potential new standard completed in late 2020.
The major goal of a new video coding standard is to be better than its predecessor (HEVC). Typically, this “better” is quantified as 50%, meaning it should be possible to encode video at the same quality with half the bitrate, or at significantly higher quality with the same bitrate. However, at this time the Joint Video Experts Team (JVET) of MPEG and ITU-T SG16 faces competition from the Alliance for Open Media, which is working on AV1. In any case, we are looking forward to an exciting period from now until this new codec is ratified, and to seeing how it performs compared to AV1. Multimedia systems and applications will also benefit from new codecs, which will gain traction as soon as first implementations become available (note that AV1 is already available as open source and is continuously being developed further).
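
The “50% better” figure above boils down to a simple bitrate-saving calculation at (assumed) equal subjective quality. The sketch below is our own simplification for illustration; formal evaluations instead compute BD-rate over entire rate-distortion curves.

```python
# Simplified view of "X% better": percentage bitrate reduction of a new
# codec vs. a reference codec at (assumed) equal subjective quality.
# Real codec comparisons use BD-rate over rate-distortion curves.

def bitrate_saving(ref_kbps, new_kbps):
    """Return the bitrate reduction in percent relative to the reference."""
    return 100.0 * (1.0 - new_kbps / ref_kbps)

# A codec that is "50% better" encodes the same quality at half the bitrate:
saving = bitrate_saving(ref_kbps=4000, new_kbps=2000)
```

Note that the saving depends on content, resolution, and operating point, which is why the evidence evaluations report ranges rather than a single number.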

MPEG adds Better Support for Mobile Environment to MPEG Media Transport (MMT)

MPEG has approved the Final Draft Amendment (FDAM) to MPEG Media Transport (MMT; ISO/IEC 23008-1:2017), referred to as “MMT enhancements for mobile environments”. In order to reflect industry needs regarding MMT, which has been well adopted by broadcast standards such as ATSC 3.0 and Super Hi-Vision, the amendment addresses several important issues concerning the efficient use of MMT in mobile environments. For example, it adds a distributed resource identification message to facilitate multipath delivery and a transition request message to change the delivery path of an active session. The amendment also introduces the concept of an MMT-aware network entity (MANE), which may be placed between the original server and the client, and provides a detailed description of how to use it to both improve efficiency and reduce delivery delay. Additionally, the amendment provides a method to use WebSockets to set up and control an MMT session/presentation.

New Standard Completed for Internet Video Coding

A new standard for video coding suitable for the internet as well as other video applications was completed at the 120th MPEG meeting. The Internet Video Coding (IVC) standard was developed with the intention of providing the industry with an “Option 1” video coding standard. In ISO/IEC language, this refers to a standard for which patent holders have declared a willingness to grant licenses free of charge to an unrestricted number of applicants for all necessary patents on a worldwide, non-discriminatory basis and under other reasonable terms and conditions, to enable others to make, use, and sell implementations of the standard. At the time of completion of the IVC standard, the specification contained no identified necessary patent rights except those available under Option 1 licensing terms. During the development of IVC, MPEG removed from the draft standard any necessary patent rights that it was informed were not available under such Option 1 terms, and MPEG is optimistic about the outlook for the new standard. MPEG encourages interested parties to provide information about any other similar cases. The IVC standard has roughly similar compression capability to the earlier AVC standard, which has become the most widely deployed video coding technology in the world. Tests have been conducted to verify IVC's strong technical capability, and the new standard has also been shown to have relatively modest implementation complexity requirements.

Evidence of new Video Transcoding Technology using Side Streams

Following a “Call for Evidence” (CfE) issued by MPEG in July 2017, evidence was evaluated at the 120th MPEG meeting to investigate whether video transcoding technology has been developed for transcoding assisted by side data streams that is capable of significantly reducing the computational complexity without reducing compression efficiency. The evaluations of the four responses received included comparisons of the technology against adaptive bit-rate streaming using simulcast as well as against traditional transcoding using full video re-encoding. The responses span the compression efficiency space between simulcast and full transcoding, with trade-offs between the bit rate required for distribution within the network and the bit rate required for delivery to the user. All four responses provided a substantial computational complexity reduction compared to transcoding using full re-encoding. MPEG plans to further investigate transcoding technology and is soliciting expressions of interest from industry on the need for standardization of such assisted transcoding using side data streams.

MPEG currently works on two related topics, referred to as network-distributed video coding (NDVC) and network-based media processing (NBMP). Both activities involve the network, which is increasingly evolving into a highly distributed compute and delivery platform, as opposed to a mere bit pipe that delivers data as fast as possible from A to B. This development is also interesting in light of what is happening around 5G, which is much more than just a radio access technology. These activities are certainly worth monitoring, as they contribute to making networked media resources accessible or even programmable. In this context, I would like to refer the interested reader to the December'17 theme of the IEEE Computer Society Computing Now, which is about Advancing Multimedia Content Distribution.
Publicly available documents from the 120th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Gwangju, Korea, January 22-26, 2018. Feel free to contact Christian Timmerer for any questions or comments.
Some of the activities reported above are considered within the Call for Papers at 23rd Packet Video Workshop (PV 2018) co-located with ACM MMSys 2018 in Amsterdam, The Netherlands. Topics of interest include (but are not limited to):
  • Adaptive media streaming, and content storage, distribution and delivery 
  • Network-distributed video coding and network-based media processing 
  • Next-generation/future video coding, point cloud compression 
  • Audiovisual communication, surveillance and healthcare systems 
  • Wireless, mobile, IoT, and embedded systems for multimedia applications 
  • Future media internetworking: information-centric networking and 5G 
  • Immersive media: virtual reality (VR), augmented reality (AR), 360° video and multi-sensory systems, and their streaming 
  • Machine learning in media coding and streaming systems 
  • Standardization: DASH, MMT, CMAF, OMAF, MiAF, WebRTC, MSE, EME, WebVR, Hybrid Media, WAVE, etc.
  • Applications: social media, game streaming, personal broadcast, healthcare, industry 4.0, education, transportation, etc. 
Important dates
  • Submission deadline: March 1, 2018 
  • Acceptance notification: April 9, 2018 
  • Camera-ready deadline: April 19, 2018

Wednesday, October 25, 2017

University Assistant (m/f)

The Alpen-Adria-Universität Klagenfurt announces the following job vacancy (in accordance with § 107 Abs. 1 Universitätsgesetz 2002):

University Assistant (m/f)
(fixed-term employment for the period of 4 years, 40 hours/week (Uni-KV: B1))

at the Faculty for Technical Sciences, Department of Computer Science. The monthly minimum salary for this position, as stated in the collective agreement and according to the classification scheme, is €2,731 (pre-tax, 14 times per year), but may be higher due to previous employment periods eligible for inclusion and other earnings and remunerations. The estimated commencement of duties is the 1st of March, 2018.

Your duties:
  • Independent scientific research with the goal of obtaining a doctorate
  • Collaboration in research and teaching within the department's research group “Distributed Systems”
  • Independent scientific research within the field of distributed systems
  • Participation in student counselling
  • Collaboration on administrative tasks within the department and in university committees
  • Collaboration in public relations activities of the institute and faculty

The research group “Distributed Systems” conducts research in the fields of scientific high-performance computing, cloud computing, and multimedia systems. The goal is to publish in international, high-quality journals and conference proceedings and to cooperate with various commercial partners. With regard to teaching, additional fields such as computer networks, operating systems, distributed systems, and compiler construction are covered by our research group.

Your profile:
  • Master's or diploma degree in Technical Sciences in the field of Computer Science, completed at a domestic or foreign university (with good final grades)
  • Excellent knowledge and experience in: high-performance computing, cloud-computing, virtualisation, big data, energy efficiency
  • Fluency in English, both in written and oral form

All relevant documents for the application (including copies of all school certificates and performance records) have to be submitted via the online application form of the University of Klagenfurt no later than the 8th of November, mentioning reference number 607/17.

Desirable qualifications are:
  • Fluency in German, both in written and oral form
  • Excellent programming skills, especially C++ and Java
  • Experience in handling OpenStack
  • Relevant international and practical work experience
  • Social and communicative competences and ability to work in a team
  • Experience with university teaching and research activities

The goal of the position is to equip graduates of a master's or diploma programme with the necessary technical and scientific training to complete a doctorate or PhD in Technical Sciences. Applications from scientists already holding such a degree can therefore not be taken into further consideration.

The University of Klagenfurt lays special emphasis on increasing the number of women in senior and in academic positions and therefore strongly invites qualified women to apply for this position. In case of equal qualifications, female applicants will receive preferential consideration.

Furthermore, persons with disabilities or chronic illnesses who meet the required qualification criteria are also explicitly invited to apply for the position.

General information for applicants:

Additional information regarding the research group “Distributed Multimedia Systems” can be found online or by phone at +43-463-2700-3611 (Univ.-Prof. DI Dr. Radu Prodan).

The University of Klagenfurt cannot refund any travel or accommodation expenses that arise in connection with the admission procedure.

Saturday, October 14, 2017

Happy World Standards Day 2017

Today, on October 14, we celebrate World Standards Day, which is a good opportunity to review how standards impact our everyday life. In fact, many standards helped me create this blog post, ranging from web standards (W3C) to communication standards (IETF, ITU) and standards defining representation formats (e.g., JPEG, MPEG).

Perhaps you are wondering how such standards are created. In MPEG, for example and in a nutshell, new work items are proposed and discussed within the requirements subgroup, which typically issues a requirements document followed by a call for proposals. The responses to this call are discussed and evaluated according to predefined criteria and adopted into a working draft. Once the working draft becomes mature, MPEG may decide to issue a Committee Draft (CD), which goes out to national bodies for ballot. If national bodies agree on the CD, possibly with comments on how to improve it, the next stage is Draft International Standard (DIS), followed by Final Draft International Standard (FDIS), each accompanied by a ballot including comments. At the FDIS stage, mainly a yes/no vote is allowed and only purely editorial comments can be integrated before going to International Standard (IS), which is when the standard is finally published. [note: sometimes it's a bit more complicated, but that is another story; for the interested reader, I've documented the process when working on MPEG-DASH here]
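
The stage progression described above can be summarised as a simple ordered pipeline (stage abbreviations as used in the text; the ballots between stages, and the occasional complications, are omitted in this sketch):

```python
# The nominal ISO/MPEG stage progression from working draft to
# published standard, as a simple ordered lookup.
STAGES = ["WD", "CD", "DIS", "FDIS", "IS"]

def next_stage(current):
    """Return the stage following `current`, or None once published."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```

For instance, a CD that passes its national-body ballot advances to DIS, and a published IS has no further stage.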

This may sound like a very boring process, but it is also possible to win Engineering Emmy Awards, as HEVC did very recently; Leonardo Chiariglione also received the Charles F. Jenkins Lifetime Achievement Award (see his response to this award here). 

In this context, I often quote the following xkcd comic, which shows two sides of the coin. First, the obvious one: one should indeed not create the 15th competing standard, which may sound easy to avoid but is not (although there is also a positive aspect to this, see the end of this blog post). Second, standards should only define the minimum needed to enable interoperability and leave enough space for innovation and competition. However, it is not always clear from the beginning where to draw that line for a standard to become successful.
In the past couple of years I was heavily involved in the standardization of MPEG-DASH. In the beginning, we faced multiple competing formats (Adobe HDS, Apple HLS, Microsoft Smooth Streaming, etc.). MPEG-DASH was finally adopted by Adobe and Microsoft, leaving HLS as a competing format/standard (i.e., informational RFC 8216), which now utilizes MPEG's Common Media Application Format (CMAF) to allow a common media segment format to be used by both DASH and HLS. Thus, we did not create the 15th competing standard, and DASH/HLS/CMAF is an important step towards reducing market fragmentation, although it is not yet the end of the path.

I'd like to conclude with two quotes related to standards. The first is from one of my professors at university, who used to say "if you have sleeping problems, read a standard", which is true (they are boring to read for an outsider), but it is exciting to work on standards, as you basically define the path for future products and services. Finally, my favorite quote goes back to Andrew S. Tanenbaum's book on computer networks: "The nice thing about standards is that you have so many to choose from", which I interpret as a positive statement, as competition leads to innovation, which eventually leads to innovative products and services; that is what we want.

In this spirit: Happy World Standards Day!

Wednesday, September 13, 2017

Packet Video Workshop 2018

23rd Packet Video Workshop 2018
June 12, 2018, Amsterdam, The Netherlands
(co-located with ACM MMSys'18)

Workshop Co-Chairs
  • Ali C. Begen, Ozyegin University / Networked Media, Turkey (ali.begen at
  • Christian Timmerer, Alpen-Adria-Universität Klagenfurt / Bitmovin Inc., Austria (christian.timmerer at
Workshop TPC Co-Chairs
  • Roger Zimmermann, National University of Singapore (NUS), Singapore (rogerz at
  • Thomas Schierl, Fraunhofer Heinrich Hertz Institute (HHI), Germany (thomas.schierl at
The 23rd Packet Video Workshop (PV 2018) is devoted to presenting technological advancements and innovations in video and multimedia transmission over packet networks. The workshop provides a unique venue for people from the media coding and networking fields to meet, interact and exchange ideas. Its charter is to promote the research and development in both established and emerging areas of video streaming and multimedia networking. PV 2018 will be held in Amsterdam on June 12th. The workshop will be a single-track event and welcomes paper submissions from both cutting-edge research, and business and consumer applications. PV 2018 will be co-located with ACM MMSys, NOSSDAV, NetGames and MMVE.

PV 2018 seeks papers in all areas of media delivery over current IP and future networks. Authors are especially encouraged to submit papers with real-world experimental results and datasets.

Topics of interest include (but are not limited to)
  • Adaptive media streaming, and content storage, distribution and delivery
  • Network-distributed video coding and network-based media processing
  • Next-generation/future video coding, point cloud compression
  • Audiovisual communication, surveillance and healthcare systems
  • Wireless, mobile, IoT, and embedded systems for multimedia applications
  • Future media internetworking: information-centric networking and 5G
  • Immersive media: virtual reality (VR), augmented reality (AR), 360° video and multi-sensory systems, and their streaming
  • Machine learning in media coding and streaming systems
  • Standardization: DASH, MMT, CMAF, OMAF, MiAF, WebRTC, MSE, EME, WebVR, Hybrid Media, WAVE, etc.
  • Applications: social media, game streaming, personal broadcast, healthcare, industry 4.0, education, transportation, etc.
Important dates
  • Submission deadline: March 1, 2018
  • Acceptance notification: April 9, 2018
  • Camera-ready deadline: April 19, 2018

Submission instructions
Prospective authors are invited to submit an electronic version of full papers, in PDF format, up to six printed pages in length (double column ACM conference format) at the PV 2018 Web site. The authors are also encouraged to regularly check the PV 2018 web site for the latest information and updates. The proceedings will be published by ACM Digital Library.

Monday, September 4, 2017

MPEG news: a report from the 119th meeting, Turin, Italy

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.

The MPEG press release comprises the following topics:
  • Evidence of New Developments in Video Compression Coding
  • Call for Evidence on Transcoding for Network Distributed Video Coding
  • 2nd Edition of Storage of Sample Variants reaches Committee Draft
  • New Technical Report on Signalling, Backward Compatibility and Display Adaptation for HDR/WCG Video Coding
  • Draft Requirements for Hybrid Natural/Synthetic Scene Data Container

Evidence of New Developments in Video Compression Coding

At the 119th MPEG meeting, responses to the previously issued call for evidence were evaluated, and all of them successfully demonstrated evidence. The call requested responses for use cases of video coding technology in three categories:
  • standard dynamic range (SDR) — two responses;
  • high dynamic range (HDR) — two responses; and
  • 360° omnidirectional video — four responses.
The evaluation of the responses included subjective testing and an assessment of the performance of the “Joint Exploration Model” (JEM).

The results indicate significant gains over HEVC for a considerable number of test cases, with comparable subjective quality at 40-50% less bit rate compared to HEVC for the SDR and HDR test cases, with some positive outliers (i.e., higher bit rate savings). Thus, the MPEG-VCEG Joint Video Exploration Team (JVET) concluded that evidence exists of compression technology that may significantly outperform HEVC after further development to establish a new standard. As a next step, the plan is to issue a call for proposals at the 120th MPEG meeting (October 2017), with responses expected to be evaluated at the 122nd MPEG meeting (April 2018).

We are already witnessing an increase in research articles addressing video coding technologies with capabilities beyond HEVC, and this will only grow in the future. The main driving force is over-the-top (OTT) delivery, which calls for more efficient bandwidth utilization. However, competition is also increasing with the emergence of AOMedia's AV1, and we may observe an increasing number of articles in that direction as well, including evaluations thereof. Interestingly, the number of use cases is also growing (e.g., see the different categories above), which adds further challenges to the "complex video problem".

Call for Evidence on Transcoding for Network Distributed Video Coding

The call for evidence on transcoding for network distributed video coding targets interested parties possessing technology that provides transcoding of video at lower computational complexity than a full re-encode. The primary application is adaptive bitrate streaming, where the highest bitrate stream is transcoded into lower bitrate streams. It is expected that responses may use “side streams” (or side information; some may call it metadata) accompanying the highest bitrate stream to assist in the transcoding process. MPEG expects submissions for the 120th MPEG meeting, where compression efficiency and computational complexity will be assessed.

Transcoding has been discussed already for a long time and I can certainly recommend this article from 2005 published in the Proceedings of the IEEE. The question is, what is different now, 12 years later, and what metadata (or side streams/information) is required for interoperability among different vendors (if any)?

A Brief Overview of Remaining Topics...

  • The 2nd edition of storage of sample variants reaches Committee Draft and expands its usage to the MPEG-2 transport stream, whereas the first edition primarily focused on the ISO base media file format.
  • The new technical report for high dynamic range (HDR) and wide colour gamut (WCG) video coding comprises a survey of various signalling mechanisms, including backward compatibility and display adaptation.
  • MPEG issues draft requirements for a scene representation media container enabling the interchange of content for authoring and rendering rich immersive experiences, currently referred to as the hybrid natural/synthetic scene (HNSS) data container.

Other MPEG (Systems) Activities at the 119th Meeting

DASH is fully in maintenance mode, as only minor enhancements/corrections have been discussed, including contributions to conformance and reference software. The omnidirectional media format (OMAF) is certainly the hottest topic within MPEG Systems; it is currently between two stages (i.e., between DIS and FDIS) and, thus, a study of the DIS has been approved, and national bodies are kindly requested to take this into account when casting their votes (incl. comments). The study of the DIS comprises format definitions with respect to the coding and storage of omnidirectional media, including audio and video (aka 360°). The common media application format (CMAF) was ratified at the last meeting and awaits publication by ISO. In the meantime, the CMAF activity is focusing on conformance and reference software as well as amendments regarding various media profiles. Finally, requirements for a multi-image application format (MiAF) have been available since the last meeting, and at the 119th MPEG meeting a working draft was approved. MiAF will be based on HEIF, and the goal is to define additional constraints to simplify its file format options.

We have successfully demonstrated live 360° adaptive streaming as described here, but we expect various improvements from MPEG standards that are available or under development. Research aspects here include performance gains and evaluations with respect to bandwidth efficiency in open networks, as well as how these standardization efforts could be used to enable new use cases.

Publicly available documents from the 119th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Macau, China, October 23-27, 2017. Feel free to contact me for any questions or comments.

Monday, July 24, 2017

IEEE ICME 2017: Keynote at Workshop on Mobile Multimedia Computing, Hong Kong, Jul 14, 2017

Title: Dynamic Adaptive Streaming over HTTP: Overview, State-of-the-Art, and Challenges

Abstract: Real-time entertainment services deployed over the open, unmanaged Internet (streaming audio and video) now account for more than 70% of the evening traffic in North American fixed access networks, and it is assumed that this figure will reach 80% by 2020. The technology used for such services is commonly referred to as Dynamic Adaptive Streaming over HTTP and is widely adopted by various platforms such as YouTube, Netflix, and Flimmit, thanks to the standardization of MPEG-DASH. This presentation provides an overview of the MPEG-DASH standard and various implementation options, specifically on informative aspects, and reviews work-in-progress and future research directions.

Bio: Christian Timmerer is an Associate Professor with Alpen-Adria-Universität Klagenfurt, Austria, and his research focus is on immersive multimedia communication, streaming, adaptation, and quality of experience. He has authored over 150 publications in his research area and was the General Chair of WIAMIS 2008, QoMEX 2013, and ACM MMSys 2016. He participated in several EC-funded projects, notably, DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, and the COST Action IC1003 QUALINET. He also participated in ISO/MPEG work for several years, notably, in the areas of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH. He is a Co-Founder of Bitmovin and CIO | Head of Research and Standardization.


Thursday, July 13, 2017

DASH-IF awarded Grand Challenge on Dynamic Adaptive Streaming over HTTP at IEEE ICME 2017

Hong Kong, July 12, 2017

Real-time entertainment services such as streaming video and audio are currently accounting for more than 70% of the Internet traffic during peak hours. Interestingly, these services are all delivered over-the-top (OTT) of the existing networking infrastructure using the Hypertext Transfer Protocol (HTTP). The MPEG Dynamic Adaptive Streaming over HTTP (DASH) standard enables smooth multimedia streaming towards heterogeneous devices.

The aim of the DASH-IF Grand Challenge on Dynamic Adaptive Streaming over HTTP at IEEE ICME 2017 is to solicit contributions addressing end-to-end delivery aspects that will help improve the QoE while optimally using the network resources at an acceptable cost. Such aspects include, but are not limited to, content preparation for adaptive streaming, delivery over the Internet, and streaming client implementations. A special focus of the 2017 grand challenge is on virtual reality applications and services, including 360-degree videos.

We received the following submissions, which have been evaluated by DASH-IF members:
  • "Content Preparation and Cross-Device Delivery of 360° Video with 4k Field of View Using DASH" by Louay Bassbouss, Stefan Pham, Stephan Steglich, Martin Lasak
  • "A Hybrid P2P/Multi-Source Quality-Adaptive Live-Streaming Solution for high end-user's QoE" by Joachim Bruneau-Queyreix, Mathias Lacaud, Daniel Negru
  • "Efficient content preparation and distribution of 360VR sequences using MPEG-DASH technology" by Cesar Diaz, Julian Cabrera, Marta Orduna, Lara Munoz, Pablo Perex, Narciso Garcia
  • "Optimal Viewport Adaptive Streaming for 360-Degree Videos" by Zhimin Xu, Lan Xie, Xinggong Zhang, Han Hu, Yixuan Ban, Zongming Guo
The winner will be awarded €750 and the runner-up €250.

Each submission was presented at IEEE ICME 2017 within an oral session, which was very well attended. We also saw interesting demos after all submissions had been presented.


This year's award goes to the following papers:

WINNER: "A Hybrid P2P/Multi-Source Quality-Adaptive Live-Streaming Solution for high end-user's QoE" by Joachim Bruneau-Queyreix, Mathias Lacaud, Daniel Negru
C. Timmerer (left), Joachim Bruneau-Queyreix (middle), Axel Becker-Lakus (right)

RUNNER-UP: "Optimal Viewport Adaptive Streaming for 360-Degree Videos" by Zhimin Xu, Lan Xie, Xinggong Zhang, Han Hu, Yixuan Ban, Zongming Guo
C. Timmerer (left), Zongming Guo (middle), Axel Becker-Lakus (right)

We would like to congratulate all winners and hope to see you next year at IEEE ICME 2018.

Photos by Cigdem Turan (PolyU, Hong Kong).