5.2 Objects filtering

The user may choose to render only the objects of interest while blurring or removing the other objects. Likewise, when bandwidth is limited, the encoder may choose to pack and transmit only the patches belonging to those objects of interest rather than sending all of them, or it may dedicate more bits to these patches to deliver the objects of interest at a higher resolution.
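As an illustration, the sketch below shows how an encoder-side packer might keep only the patches of selected objects under a bit budget; the Patch fields, object IDs, and selection policy are assumptions for illustration and are not part of the V-PCC or MIV syntax.

    from dataclasses import dataclass

    @dataclass
    class Patch:
        patch_id: int
        object_id: int   # hypothetical per-patch object ID
        size_bits: int   # estimated coded size of the patch

    def select_patches(patches, objects_of_interest, bit_budget):
        # Prioritize patches of the objects of interest, then fill the
        # remaining budget with other patches (illustrative policy only).
        prioritized = sorted(patches,
                             key=lambda p: p.object_id not in objects_of_interest)
        selected, used = [], 0
        for p in prioritized:
            if used + p.size_bits <= bit_budget:
                selected.append(p)
                used += p.size_bits
        return selected

    patches = [Patch(0, 3, 120_000), Patch(1, 5, 200_000), Patch(2, 7, 90_000)]
    kept = select_patches(patches, objects_of_interest={3, 7}, bit_budget=250_000)
    print([p.patch_id for p in kept])   # patches of objects 3 and 7 fit the budget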
5.3 Background rendering

The background is a special static object that can be rendered by itself from the related patches or synthesized from virtual/pre-rendered content. When rendering from patches, there may be regions of the background that are not visible in any of the input source views due to occlusions. Such hole regions can be filled using inpainting techniques. Another approach is to capture the scene ahead of time without any objects and stream a single image as metadata once per intra-period, so that it can be used to render the background content and the scene can be populated with the objects of interest afterwards. A synthetic background can be inserted as well, and objects can be augmented within it.
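A minimal sketch of the inpainting step, assuming OpenCV is available and that a binary mask of the occluded regions has already been derived from the patch coverage; the file names are placeholders.

    import cv2

    # background: image reconstructed from the background patches
    # hole_mask: 8-bit mask, non-zero where no source view saw the background
    background = cv2.imread("background_from_patches.png")
    hole_mask = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)

    # Fill the occluded regions with Telea inpainting (3-pixel radius).
    filled = cv2.inpaint(background, hole_mask, 3, cv2.INPAINT_TELEA)
    cv2.imwrite("background_filled.png", filled)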
5.4 Object-based scalability

Point-cloud objects and a separate background provide object-based scalability for adaptive streaming over varying network conditions. Patches belonging to unimportant point-cloud objects, e.g., objects too far away from the viewport, can be dropped entirely or encoded at a lower visual quality. It is the encoder's job to decide the relative importance of point-cloud objects using the contextual information available to it.
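One possible importance policy is sketched below, assuming each object exposes a bounding-box centre and the viewer position is known; the distance thresholds and quality tiers are illustrative choices, not something mandated by V-PCC or MIV.

    import math

    def quality_tier(object_center, viewer_position, near=10.0, far=50.0):
        # Map an object's distance from the viewport to a coding decision.
        dist = math.dist(object_center, viewer_position)
        if dist > far:
            return "drop"   # too far away: skip its patches entirely
        if dist > near:
            return "low"    # encode its patches at reduced quality
        return "high"       # close to the viewer: full quality

    # Three objects at increasing distance from a viewer at the origin.
    for center in [(2.0, 0.0, 1.0), (20.0, 5.0, 0.0), (80.0, 0.0, 0.0)]:
        print(center, quality_tier(center, (0.0, 0.0, 0.0)))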

5.5 Personalized 6DoF user experience

Viewers of object-based volumetric video can filter out uninteresting or unimportant objects and keep only the relevant or interesting ones, based on the bounding-box attributes of the point-cloud objects, even though all data/objects have been streamed to the client side. The decoder simply does not render patches if a viewer has filtered out the object those patches belong to. This allows a personalized experience in which viewers choose only the content that matters to them, e.g., show only the red-team players or only the offensive players.
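A client-side sketch of that rendering filter, assuming the per-object metadata carries labels such as a team or role next to each object ID; the attribute names and values are hypothetical.

    # Hypothetical per-object metadata delivered alongside the bitstream,
    # e.g. derived from bounding-box attributes and labels.
    object_labels = {
        1: {"team": "red",  "role": "offense"},
        2: {"team": "blue", "role": "defense"},
        3: {"team": "red",  "role": "defense"},
    }

    def should_render(patch_object_id, viewer_filter):
        # Render a patch only if its object matches every key of the filter.
        labels = object_labels.get(patch_object_id, {})
        return all(labels.get(k) == v for k, v in viewer_filter.items())

    # "Show me only the red team": patches of objects 1 and 3 are rendered.
    for oid in (1, 2, 3):
        print(oid, should_render(oid, {"team": "red"}))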
5.6 Object of interest

The same personalization idea can be extended to content creation: each object can be compressed at a different visual quality. If a viewer wants to focus on a specific point-cloud object, they can switch to the stream in which that object of interest is encoded at a higher visual quality.
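For example, a client could select among pre-encoded stream variants as sketched below; the variant list and its fields are assumptions for illustration rather than part of any MPEG-I manifest format.

    # Hypothetical pre-encoded variants of the same scene, each with one
    # object encoded at a higher visual quality than the rest.
    variants = [
        {"url": "scene_obj3_hq.bin", "hq_object_id": 3},
        {"url": "scene_obj7_hq.bin", "hq_object_id": 7},
        {"url": "scene_uniform.bin", "hq_object_id": None},
    ]

    def pick_variant(object_of_interest):
        # Choose the variant whose high-quality object matches the viewer's
        # focus, falling back to the uniform-quality stream.
        for v in variants:
            if v["hq_object_id"] == object_of_interest:
                return v
        return variants[-1]

    print(pick_variant(7)["url"])   # -> scene_obj7_hq.bin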
6. CONCLUSION

At the time of this paper's publication, the immersive media standardization effort is still ongoing. This paper presents an object-based point-cloud signaling solution that provides a simple way to meet multiple MPEG-I requirements for the emerging immersive media standards V-PCC and MIV. The approach addresses the future interoperability need for a standards-based signaling mechanism for uniquely identifiable objects in sports.

The requirements contributions to MPEG-I include signaling an object ID for each patch in V-PCC and adding per-patch signaling of an object ID as part of MIV. These contributions were adopted by MPEG and modified to be carried in an SEI message instead of in the patch data syntax structure. The object-based applications proposal to MPEG was adopted as part of MIV, adding the ability to signal an object ID per patch. These contributions address the needs of Intel Sports to optimize point-cloud compression and object identification for creating immersive media experiences in sports. The object-based approach also has very low impact when the feature is not used. In addition, the object-based point-cloud signaling can be used to create innovative visual experiences outside of sports by providing a simple way to identify points of interest, ultimately yielding higher-quality volumetric content.

To conclude, MPEG-I supports delivering object-based immersive media experiences for sports, and the approach can be applied to other use cases.










