runtime environment where offline training of the ML model takes place. This refers to the training of the model before it is executed within the network. In addition to the data collected from the real network, offline training may also rely on synthesized data which can accurately reproduce the behavior of the real network environment. The training may include an evaluation stage to assess the performance of the model and validate that it is ready and reliable to be deployed in the live network environment. Offline training is necessary to obtain supervised learning models (e.g. deep neural networks, support vector machines, etc.) as well as reinforcement learning models (e.g. Q-learning, multi-armed bandit learning, deep RL). The training host component is likely to be part of the SMO layer.
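As a purely illustrative sketch of such a workflow (assuming a Python environment with scikit-learn; the data, model choice and acceptance threshold below are arbitrary and not prescribed by O-RAN or 3GPP), offline training with a validation gate before deployment could look as follows:

# Illustrative offline training of a supervised model with a validation gate
# before deployment to an inference host (hypothetical workflow, not a specified API).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthesized data standing in for measurements collected from the real network:
# four illustrative features per sample and a binary label (e.g. congestion yes/no).
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=2000) > 0).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = SVC(kernel="rbf")        # one of the supervised models mentioned in the text
model.fit(X_train, y_train)      # offline training within the training host

# Evaluation stage: assess performance and decide whether the model is ready
# and reliable enough to be deployed in the live network environment.
accuracy = accuracy_score(y_val, model.predict(X_val))
DEPLOYMENT_THRESHOLD = 0.9       # illustrative acceptance criterion
print(f"validation accuracy = {accuracy:.3f}, deploy = {accuracy >= DEPLOYMENT_THRESHOLD}")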
The ML inference host represents the runtime environment where the (previously trained and validated) ML model is executed and fed with online data to produce the outputs that will be used in the network operation. Multiple ML inference hosts can be in place; their location depends on aspects such as the purpose and type of the ML models being executed, their computational complexity, the availability and quantity of the data used, and the response time requirements (real-time or non-real-time) of the ML application. Hence, ML inference hosts can be placed within the SMO layer but also within the RAN nodes (i.e. near-RT RIC, O-CU, O-DU).
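As a rough sketch of the inference-host role (the class and field names below are hypothetical and do not correspond to any O-RAN-specified API), an inference host can be pictured as a thin runtime that wraps a validated model, consumes online samples and exposes the resulting outputs for retrieval by an actor:

# Hypothetical ML inference host: executes a previously validated model on online data
# and stores the outputs so that an actor can retrieve and consume them.
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

@dataclass
class MLInferenceHost:
    model: Callable[[Sequence[float]], int]   # validated model handed over by the training host
    location: str                             # e.g. "non-RT RIC", "near-RT RIC", "O-CU", "O-DU"
    outputs: List[int] = field(default_factory=list)

    def on_online_data(self, sample: Sequence[float]) -> int:
        """Feed one online measurement vector to the model and record its output."""
        result = self.model(sample)
        self.outputs.append(result)
        return result

# Example: a mobility-prediction model deployed in an inference host within the non-RT RIC;
# the model here simply picks the index of the neighbouring cell with the highest score.
host = MLInferenceHost(model=lambda scores: max(range(len(scores)), key=scores.__getitem__),
                       location="non-RT RIC")
predicted_cell = host.on_online_data([0.1, 0.7, 0.2])   # -> 1
print(predicted_cell, host.outputs)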

In turn, the actor represents the network entity (i.e. UE, O-CU, O-DU, near-RT RIC and non-RT RIC) that hosts the decision-making function that consumes the outputs of the ML inference host and takes actions. It is worth noting that the distinction between the ML inference host and the actor stems from the fact that these components may or may not be co-located as part of the same network entity. An example of non-co-location could be the case of a mobility prediction model executed in an inference host within the non-RT RIC that produces outputs (e.g. mobility patterns) that are retrieved and consumed by the near-RT RIC (i.e. the actor in this case) for enhanced RRM (e.g. handover decisions based on mobility patterns). In contrast, an example of co-location could be an RRM algorithm for mobility management that embeds a reinforcement learning model and is executed within the near-RT RIC, which in this case serves as both the inference host and the actor. The actions decided by the actor can be handled either internally within the actor (e.g. an RL-based RRM algorithm for mobility management within the near-RT RIC) or enforced on other network components via the different specified interfaces. For example, management configuration actions from an actor within the SMO layer on any RAN node can be conducted via the O1 interface, control actions on the O-CU/O-DU from an actor within the near-RT RIC can go over the E2 interface, and policy management configuration actions between the non-RT RIC and the near-RT RIC can be communicated over the A1 interface.
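The separation between inference host and actor, and the way actions are enforced over the interfaces named above, could be pictured along the following lines (again a hypothetical sketch: the interface names follow the text, but the dispatch logic and data structures are only illustrative):

# Hypothetical actor: consumes inference outputs and enforces the resulting actions,
# either internally or towards other components over the interfaces named in the text.
from enum import Enum

class Interface(Enum):
    O1 = "O1"              # management configuration from the SMO layer towards RAN nodes
    E2 = "E2"              # control actions from the near-RT RIC towards the O-CU/O-DU
    A1 = "A1"              # policy management between the non-RT RIC and the near-RT RIC
    INTERNAL = "internal"  # action handled inside the actor itself (co-located case)

def select_interface(actor: str, target: str) -> Interface:
    """Illustrative mapping of (actor, target) pairs to the interface carrying the action."""
    if actor == target:
        return Interface.INTERNAL
    if actor == "SMO":
        return Interface.O1
    if actor == "near-RT RIC" and target in ("O-CU", "O-DU"):
        return Interface.E2
    if actor == "non-RT RIC" and target == "near-RT RIC":
        return Interface.A1
    raise ValueError(f"no interface modelled for {actor} -> {target}")

# Non-co-located example from the text: the near-RT RIC (actor) retrieves a mobility
# pattern produced in the non-RT RIC (inference host) and enforces a handover over E2.
mobility_pattern = {"ue": "ue-17", "predicted_cell": "cell-3"}       # inference-host output
handover_action = {"ue": mobility_pattern["ue"], "target_cell": mobility_pattern["predicted_cell"]}
print(select_interface("near-RT RIC", "O-DU"), handover_action)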
3.2  Information models for network slice management

With regard to the management of network slicing in 5G networks, 3GPP specifications include information model definitions, referred to as Network Resource Models (NRMs), for the characterization of network slices [42], together with a set of management services (MnS) for network slice life-cycle management (e.g. a network slice provisioning MnS for network slice creation, modification and termination, performance monitoring services per slice, etc.) [43]. In addition, work is being conducted at 3GPP level to support SLA/SLS management [44], as well as closed-loop assurance solutions that allow a service provider to continuously deliver the expected level of communication service quality in a 5G network [45].
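Purely for illustration (the class and operation names below are hypothetical and do not reproduce the actual 3GPP MnS operations or NRM attribute names), a provisioning-style life-cycle interface covering the creation, modification and termination of network slices could be sketched as follows:

# Hypothetical sketch of network slice life-cycle operations (creation, modification,
# termination) in the spirit of a provisioning MnS; names and attributes are illustrative.
from dataclasses import dataclass
from typing import Dict
import uuid

@dataclass
class NetworkSlice:
    slice_id: str
    profile: Dict[str, object]   # simplified stand-in for NRM service profile attributes
    status: str = "active"

class SliceProvisioningService:
    """Minimal life-cycle manager for network slice instances."""

    def __init__(self) -> None:
        self.slices: Dict[str, NetworkSlice] = {}

    def create_slice(self, profile: Dict[str, object]) -> str:
        slice_id = str(uuid.uuid4())
        self.slices[slice_id] = NetworkSlice(slice_id, dict(profile))
        return slice_id

    def modify_slice(self, slice_id: str, changes: Dict[str, object]) -> None:
        self.slices[slice_id].profile.update(changes)

    def terminate_slice(self, slice_id: str) -> None:
        self.slices[slice_id].status = "terminated"

mns = SliceProvisioningService()
sid = mns.create_slice({"latency_ms": 10, "dl_throughput_mbps": 100})   # illustrative attributes
mns.modify_slice(sid, {"dl_throughput_mbps": 200})
mns.terminate_slice(sid)
print(mns.slices[sid].status)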
Fig. 3 provides an overview of the different types of information models that are relevant for network slice management and of the relations between them. The main idea behind the overall flow of the information models, as illustrated in Fig. 3, is that a network slice is conceived as a “product” offered by a Network Slice Provider (NSP) to a Network Slice Customer (NSC). In this respect, the GSMA Generic Slice Template (GST) is used as the SLA information associated with the network slice product for the communication between the NSC and the NSP through, e.g. a Business Support Systems (BSS) product order management Application Programming Interface (API).

The GSMA GST provides a standardized list of attributes (e.g. performance related, function related, etc.) that can be used to characterize different types of network slices [46]. The GST is generic and is not tied to any type of network slice or to any agreement between an NSC and an NSP. A Network Slice Type (NEST) is a GST filled with (ranges of) values. There are two kinds of NESTs: Standardized NESTs (S-NEST), i.e. NESTs with values established by standards organizations, working groups, fora, etc. (e.g. 3GPP, GSMA, 5GAA, 5G-ACIA); and Private NESTs (P-NEST), i.e. NESTs with values




