The development of the MPEG-V standard, Media context and control, started in 2006 from the consideration that MPEG media – audio, video, 3D graphics etc. – offer virtual experiences that may be a digital replica of the real world, a digital instance of a virtual world, or a combination of natural and virtual worlds. At that time, however, MPEG could not offer users any means to interact with those worlds.
MPEG undertook the task of providing standard interactivity technologies that allow a user to:
- Map their real-world sensor and actuator context to a virtual-world sensor and actuator context, and vice versa (a minimal mapping sketch follows this list), and
- Achieve communication between virtual worlds.
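To picture the first of these two tasks, here is a minimal sketch in Python of a real-to-virtual and virtual-to-real mapping. The class and function names (SensedInformation, ActuatorCommand, map_to_virtual, map_to_real) are hypothetical illustrations that only echo the Part 5 terminology introduced below; they are not data structures defined by MPEG-V.

```python
# Toy sketch of the bidirectional mapping between a real-world sensor/actuator
# context and a virtual-world one. Names are illustrative, not MPEG-V types.
from dataclasses import dataclass


@dataclass
class SensedInformation:
    """Data flowing from a real-world sensor towards the virtual world."""
    sensor_id: str
    kind: str         # e.g. "temperature", "light"
    value: float


@dataclass
class ActuatorCommand:
    """Command flowing from the virtual world towards a real-world actuator."""
    actuator_id: str
    kind: str         # e.g. "fan", "lamp"
    intensity: float  # normalised to 0..1


def map_to_virtual(si: SensedInformation) -> dict:
    """Turn sensed real-world data into a virtual-world property update."""
    return {"property": f"ambient_{si.kind}", "value": si.value}


def map_to_real(effect: dict) -> ActuatorCommand:
    """Turn a virtual-world effect (e.g. wind in a scene) into a command for a
    real actuator, clamping the intensity to the device range."""
    intensity = max(0.0, min(1.0, float(effect.get("strength", 0.0))))
    return ActuatorCommand(actuator_id="fan-01", kind="fan", intensity=intensity)


if __name__ == "__main__":
    print(map_to_virtual(SensedInformation("th-07", "temperature", 21.5)))
    print(map_to_real({"type": "wind", "strength": 0.8}))
```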
All the data streams indicated are specified in one or more of the seven MPEG-V parts:
- Part 1 – Architecture expands on the figure above
- Part 2 – Control information specifies the information needed to control devices (actuators and sensors) interoperably in real and virtual worlds
- Part 3 – Sensory information specifies the XML Schema-based Sensory Effect Description Language, used to describe sensory effects – such as light, wind, fog and vibration – that trigger human senses (see the sketch after this list)
- Part 4 – Virtual world object characteristics defines a base type of attributes and characteristics of virtual-world objects, shared by both avatars and generic virtual objects
- Part 5 – Data formats for interaction devices specifies the syntax and semantics of the data formats – Actuator Commands and Sensed Information – required to interoperably control interaction devices (actuators) and sense information from interaction devices (sensors) in real and virtual worlds
- Part 6 – Common types and tools specifies the syntax and semantics of the data types and tools used across the MPEG-V parts
- Part 7 – Conformance and reference software provides conformance testing and reference software for the technologies in the other parts.
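As an illustration of what a Part 3 description might look like, the sketch below assembles a simplified, SEDL-like list of sensory effects with Python's standard xml.etree.ElementTree. The element and attribute names (SensoryEffects, Effect, type, start, duration, intensity, color) are assumptions made for readability and do not reproduce the normative XML Schema of ISO/IEC 23005-3.

```python
# Minimal sketch of a SEDL-like description of sensory effects accompanying a
# media timeline. Element/attribute names are illustrative, not the normative
# ISO/IEC 23005-3 schema.
import xml.etree.ElementTree as ET

root = ET.Element("SensoryEffects")

# A light effect synchronised with the start of a scene.
ET.SubElement(root, "Effect", {
    "type": "light",
    "start": "00:00:05.000",     # when the effect begins on the media timeline
    "duration": "00:00:04.000",  # how long it lasts
    "intensity": "0.7",          # normalised 0..1
    "color": "#FFD27F",
})

# A wind effect overlapping the same scene.
ET.SubElement(root, "Effect", {
    "type": "wind",
    "start": "00:00:06.500",
    "duration": "00:00:02.000",
    "intensity": "0.4",
})

ET.indent(root)  # pretty-print (Python 3.9+)
print(ET.tostring(root, encoding="unicode"))
```

In the MPEG-V architecture, an adaptation engine would translate such device-independent effect descriptions into the Actuator Commands of Part 5, taking the control information of Part 2 into account.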