Coded Representation of Immersive Media (MPEG-I) is the current MPEG effort to develop a suite of standards supporting immersive media products, services and applications.
MPEG-I currently has 11 parts, and more are being added:
- Part 1 – Immersive Media Architectures outlines possible architectures for immersive media services.
- Part 2 – Omnidirectional MediA Format specifies an application format that enables consumption of omnidirectional video, also known as Video 360 (see the projection sketch after this list). Version 2 is under development.
- Part 3 – Immersive Video Coding will specify the emerging Versatile Video Coding (VVC) standard.
- Part 4 – Immersive Audio Coding will specify metadata that enables immersive audio experiences beyond what is possible today with MPEG-H 3D Audio.
- Part 5 – Video-based Point Cloud Compression will specify a standard to compress dense static and dynamic point clouds.
- Part 6 – Immersive Media Metrics will specify parameters that are useful for immersive media services, and how they can be measured.
- Part 7 – Immersive Media Metadata will specify systems, video and audio metadata for immersive experiences. One example is the current 3DoF+ Video activity.
- Part 8 – Network-Based Media Processing will specify APIs to access remote media processing services (see the workflow sketch after this list).
- Part 9 – Geometry-based Point Cloud Compression will specify a standard to compress sparse static and dynamic point clouds (see the octree sketch after this list).
- Part 10 – Carriage of Point Cloud Data will specify how compressed point clouds are carried in the MP4 File Format (see the box-parsing sketch after this list).
- Part 11 – Implementation Guidelines for Network-Based Media Processing will provide the usual collection of implementation guidelines, in this case for Part 8.
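Omnidirectional video such as that addressed by Part 2 is typically stored as a projected 2D frame, with the equirectangular projection being the most common case. The sketch below illustrates only the basic projection geometry, not OMAF's normative definitions: it maps a viewing direction to pixel coordinates in an equirectangular frame.

```python
import math

def equirect_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction (yaw, pitch in degrees) to pixel
    coordinates in an equirectangular frame of size width x height.

    Illustrative only: longitude maps linearly to the horizontal
    axis and latitude to the vertical axis. OMAF specifies projection
    formats normatively; this is just the underlying relationship.
    """
    yaw = math.radians(yaw_deg)      # longitude, -pi .. pi
    pitch = math.radians(pitch_deg)  # latitude,  -pi/2 .. pi/2
    u = (yaw / (2 * math.pi) + 0.5) * width
    v = (0.5 - pitch / math.pi) * height
    return int(u) % width, min(int(v), height - 1)

# Example: the frame center corresponds to yaw = 0, pitch = 0.
print(equirect_pixel(0, 0, 3840, 1920))  # -> (1920, 960)
```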
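In broad terms, Part 8 revolves around a JSON-based workflow description that a media source submits to a workflow manager through the standard's APIs. The sketch below builds a highly simplified document of that kind; the field names (`name`, `tasks`, `connections`) are illustrative assumptions, not the standard's actual schema.

```python
import json

# A hypothetical, highly simplified workflow description in the
# spirit of NBMP's JSON-based workflow documents. The real schema is
# defined normatively in Part 8; these fields are assumptions.
workflow = {
    "name": "360-stitching-example",
    "tasks": [
        {"id": "stitch", "function": "video-stitcher"},
        {"id": "encode", "function": "video-encoder"},
    ],
    # Directed connection: the stitched output feeds the encoder.
    "connections": [{"from": "stitch", "to": "encode"}],
}

# A source would submit such a document to a remote workflow manager
# over the NBMP APIs; here we only serialize it for inspection.
print(json.dumps(workflow, indent=2))
```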
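Sparse point clouds, the target of Part 9, lend themselves to octree representations: a bounding cube is recursively subdivided and only occupied cells are signalled, so empty space costs nothing. The toy sketch below shows that intuition, not the G-PCC algorithm itself.

```python
def octree_occupancy(points, depth):
    """Return, per octree level, the set of occupied cells for a point
    cloud whose integer coordinates lie in [0, 2**depth).

    A toy illustration of the octree idea behind geometry-based
    coding: each level halves the cell size, and only occupied cells
    need to be represented.
    """
    levels = []
    for level in range(1, depth + 1):
        shift = depth - level
        occupied = {(x >> shift, y >> shift, z >> shift)
                    for x, y, z in points}
        levels.append(occupied)
    return levels

# Example: three points in a cube of side 2**3 = 8.
cloud = [(0, 0, 0), (1, 0, 0), (7, 7, 7)]
for lvl, cells in enumerate(octree_occupancy(cloud, 3), start=1):
    print(f"level {lvl}: {len(cells)} occupied cell(s)")
```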
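The MP4 File Format referenced by Part 10 structures a file as a sequence of boxes, each starting with a 32-bit big-endian size and a four-character type; carriage specifications define new box and track types within that same structure. The sketch below walks the generic top-level box layout of a file; the file name in the usage comment is hypothetical.

```python
import struct

def list_top_level_boxes(path):
    """Print the top-level boxes of an ISOBMFF (MP4) file.

    Only the generic box layout is parsed here: a 32-bit big-endian
    size, a 4-character type and, when size == 1, a 64-bit largesize.
    A size of 0 means the box extends to the end of the file.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:  # 64-bit largesize follows the type field
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            print(box_type.decode("ascii", "replace"), size)
            if size == 0:  # box extends to the end of the file
                break
            f.seek(size - header_len, 1)  # skip the box payload

# Usage (hypothetical file name):
# list_top_level_boxes("pointcloud.mp4")
```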