ITU Journal on Future and Evolving Technologies, Volume 2 (2021), Issue 4 – AI and machine learning solutions in 5G and future networks




As proposed in [16], the CAVIAR framework concerns a specific category of 6G simulations that rely on virtual worlds and incorporate two subsystems: wireless communications and AI/ML. In the next paragraphs, we briefly review the CAVIAR framework, depicted in Fig. 1, and then focus on the important aspect of generating the communication channel corresponding to a given scene of the virtual world. We discuss how the Raymobtime methodology [12] fits well the demand for communication channels imposed by 6G CAVIAR simulations.

A CAVIAR simulation generates multimodal data for each discrete time t ∈ ℤ and can operate in two modes. The first mode is focused on online learning: the simulation and the neural network run simultaneously, creating an environment where data is transmitted in real time or in discrete samples with time stamps defined by the user. The second mode of operation records data in databases or text files, working as a tool for creating datasets. Along the simulation, the machine learning for communications (ML4COMM) engine operates on data organized as an episode E = [(x_1, y_1), ..., (x_T, y_T)], a sequence of tuples (x_t, y_t), t = 1, ..., T, of paired data, where x_t and y_t are sets with the input AI/ML parameters and the corresponding outputs, respectively. In supervised learning, y_t consists of desired labels for classification or regression, while for reinforcement learning y_t consists of rewards for the agents. The tuples (x_t, y_t) denote evolution over discrete time t. In our methodology, the outputs of the simulators are periodically stored as "snapshots" (or scenes) over time nT_sam, where T_sam is the sampling period and n ∈ ℤ.

The main steps in Fig. 1 can be summarized as follows. The environment is composed of a 3D scenery with fixed and mobile objects. These objects are created and placed with specialized tools and data from the Internet, as described in [12] and [17]. The positions and interactions among mobile objects are determined by a physics engine (for instance, the Unreal engine or the Simulation of Urban MObility (SUMO) traffic generator [18]).

Once the scene is complete, the environment is represented via sensors, such as LIDAR, which is simulated by the Blensor and Blender software, returning point cloud data (PCD) that maps the shapes of the 3D space around the sensor. It is possible to adjust the resolution of the PCD through a quantization process. A ray-tracing software (Remcom's Wireless InSite in Fig. 1) also captures the communication channel for the given scene. The sensor outputs constitute the episode input x_t, and the corresponding output y_t is obtained by a signal processing module. These episodes are what is stored in Raymobtime episodes [12], but in a CAVIAR simulation they can be created and used on the fly, if needed. The CAVIAR 6G virtual world simulator also incorporates a communication system that has some functionalities driven by the ML4COMM engine. The ML4COMM engine also relies on the scene description and can extract features from the raw sensor data to feed its AI/ML algorithms.

Fig. 1 illustrates the INLOOP CAVIAR framework, with the AI/ML module within the simulation loop. When the decisions of this module do not affect the environment, it can be convenient to split the simulation into two stages, with the first one being an OUTLOOP CAVIAR simulation that writes episode files to be later used for designing and assessing AI/ML models. The more evolved INLOOP simulation is required in cases such as a drone mission in which the AI/ML decisions will change the drone trajectory and, consequently, its wireless channel. In general, when the AI/ML model issues commands or actuator signals that effectively change the trajectories of mobile entities, or alter the environment or the communication system state (e.g., buffer occupation), the simulations may need to be INLOOP and the communication channels generated on the fly. In the simpler OUTLOOP simulation category, channels can be pre-computed and the communication simulation decoupled from the physics engine, as often done in AI/ML applied to beam selection [19, 12]. The next sections provide two examples to distinguish INLOOP and OUTLOOP CAVIAR simulations.

2.1  OUTLOOP CAVIAR simulation for beam selection

Beam selection is a classical application of AI/ML to communications [20, 21, 22]. The goal is to choose the best pair of beams for analog beamforming, with both the transmitter (Tx) and receiver (Rx) having antenna arrays with only one Radio Frequency (RF) chain and fixed beam codebooks. Fig. 2 illustrates beamforming from a Base Station (BS) to both vehicles and drones.

Fig. 2 – Beamforming from BS to both vehicles and drones.
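To make the episode structure concrete, the following minimal sketch records (x_t, y_t) tuples sampled every T_sam seconds and persists them as an episode file, as in the OUTLOOP data-recording mode. The class name, field names, and the JSON format are illustrative choices, not part of the CAVIAR specification.

```python
import json
from dataclasses import dataclass, field

@dataclass
class EpisodeRecorder:
    """Accumulates (x_t, y_t) tuples, one snapshot every T_sam seconds."""
    t_sam: float                       # sampling period T_sam
    snapshots: list = field(default_factory=list)

    def record(self, n, x_t, y_t):
        # Snapshot n corresponds to simulation time n * T_sam
        self.snapshots.append({"time": n * self.t_sam, "x": x_t, "y": y_t})

    def dump(self, path):
        # OUTLOOP mode: persist the episode to a text file so AI/ML
        # models can later be designed and assessed offline
        with open(path, "w") as f:
            json.dump(self.snapshots, f)

# Hypothetical usage: input x_t from sensors, output y_t from signal processing
rec = EpisodeRecorder(t_sam=0.1)
rec.record(0, {"rx_position": [10.0, 2.0, 1.5]}, {"best_beam_pair": 37})
rec.record(1, {"rx_position": [10.4, 2.0, 1.5]}, {"best_beam_pair": 37})
rec.dump("episode_0.json")
```

In supervised learning the stored y would hold labels (as the beam-pair index above); in reinforcement learning it would hold the agents' rewards instead.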


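The INLOOP/OUTLOOP distinction can also be sketched in code. In the toy example below, the environment class, the scalar "gain" model, and the policy are all hypothetical stand-ins for the virtual world, the channel generator, and the AI/ML model; only the control-flow difference between the two loops reflects the text.

```python
class ToyDroneEnv:
    """Toy stand-in for the virtual world: the agent's command moves the
    drone, which in turn changes the channel it observes."""
    def __init__(self):
        self.position = 0.0

    def observe(self):
        # Hypothetical channel gain that depends on the drone trajectory
        return {"position": self.position,
                "gain": 1.0 / (1.0 + abs(self.position - 5.0))}

    def apply(self, velocity):
        self.position += velocity

def run_inloop(env, policy, num_steps):
    # INLOOP: each decision alters the environment, so the scene (and
    # its channel) must be regenerated inside the simulation loop
    trace = []
    for _ in range(num_steps):
        x_t = env.observe()
        action = policy(x_t)
        env.apply(action)
        trace.append((x_t, action))
    return trace

def run_outloop(episode, model):
    # OUTLOOP: decisions do not affect the world, so pre-recorded
    # (x_t, y_t) pairs can simply be replayed from an episode file
    return [(model(x_t), y_t) for x_t, y_t in episode]

# INLOOP: fly toward position 5, then hover
policy = lambda x: 1.0 if x["position"] < 5.0 else 0.0
trace = run_inloop(ToyDroneEnv(), policy, 6)

# OUTLOOP: replay a pre-computed episode through a model
episode = [({"a": 1}, 0), ({"a": 2}, 1)]
preds = run_outloop(episode, lambda x: x["a"] % 2)
```

Note that `run_outloop` never touches the environment: this is what allows channels to be pre-computed and the communication simulation to be decoupled from the physics engine.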


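A baseline for the beam selection task described above is an exhaustive search over all Tx/Rx beam pairs, which also provides the labels an AI/ML model would be trained to predict. The sketch below assumes DFT codebooks and a random matrix as the channel; in a CAVIAR simulation the channel would instead come from ray tracing for the given scene.

```python
import numpy as np

def dft_codebook(n):
    # Columns are the n beams of an n-element DFT codebook
    return np.fft.fft(np.eye(n)) / np.sqrt(n)

def best_beam_pair(H, tx_codebook, rx_codebook):
    """Exhaustive search over all (Tx, Rx) beam pairs. With a single RF
    chain per side, the figure of merit is the magnitude of the
    effective channel |w^H H f| for combiner w and precoder f."""
    gains = np.abs(rx_codebook.conj().T @ H @ tx_codebook)  # (Nr, Nt) beams
    rx_idx, tx_idx = np.unravel_index(np.argmax(gains), gains.shape)
    return tx_idx, rx_idx, gains[rx_idx, tx_idx]

# Toy channel in place of the ray-traced one (illustrative sizes)
rng = np.random.default_rng(0)
Nt, Nr = 16, 8
H = rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))
tx_cb, rx_cb = dft_codebook(Nt), dft_codebook(Nr)
tx_idx, rx_idx, gain = best_beam_pair(H, tx_cb, rx_cb)
```

The winning pair index (tx_idx, rx_idx) is what would be stored as the output y_t of an OUTLOOP episode, while the model tries to predict it from sensor data such as positions or LIDAR PCD.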

© International Telecommunication Union, 2021