additionally include external factors like the device's residual battery [59] and so adjust to the devices' needs in real time. When MEC servers are UAVs, the hovering time is to be included in the energy model in the form E_h = P ⋅ T, where P is the power needed to hover and T the hovering time [92]. Besides hovering, a UAV consumes energy for flying, depending on its velocity and weight [86]; its accelerations have an equally significant impact on energy as well [78]. Some energy consumption terms can be ignored in the optimization when they are idle terms we cannot control, as is the case for server idle energy consumption or the energy consumption of links that is traffic independent [60, 58]. Moreover, some actions are negligible in comparison to others in the system, like the downloading energy consumption [53].
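
To make the shape of such an energy model concrete, the minimal Python sketch below aggregates transmission, computation and hovering terms for one offloaded task. The function names, the kappa coefficient and the choice to charge hovering over the upload-plus-processing interval are illustrative assumptions, not values or choices taken from [78, 86, 92].

# Hypothetical sketch of a per-task energy model for a UAV-mounted MEC server.
# All names and constants are illustrative assumptions, not taken from [78, 86, 92].

def transmission_energy(bits: float, tx_power_w: float, rate_bps: float) -> float:
    """Energy spent uploading `bits` at `rate_bps` with transmit power `tx_power_w`."""
    return tx_power_w * bits / rate_bps

def computation_energy(cpu_cycles: float, freq_hz: float, kappa: float = 1e-27) -> float:
    """Dynamic CPU energy, commonly modeled in the MEC literature as kappa * cycles * f^2."""
    return kappa * cpu_cycles * freq_hz ** 2

def hover_energy(hover_power_w: float, hover_time_s: float) -> float:
    """E_h = P * T, mirroring the hovering term discussed above."""
    return hover_power_w * hover_time_s

def task_energy(bits, rate_bps, tx_power_w, cpu_cycles, freq_hz, hover_power_w):
    """Total energy charged to one offloaded task. Idle and downlink terms are
    deliberately omitted, mirroring the simplifications discussed above."""
    t_busy = bits / rate_bps + cpu_cycles / freq_hz   # upload time + processing time
    return (transmission_energy(bits, tx_power_w, rate_bps)
            + computation_energy(cpu_cycles, freq_hz)
            + hover_energy(hover_power_w, t_busy))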

4.3.2   Latency

Latency is crucial in mission-critical applications where situations may be life or death, like in search and rescue. A task's latency comprises the processing time and the transmission time from the device to the edge and potentially to the cloud [79, 68, 54, 65]. The work [52] adds to it the compression time, present in systems with heavy tasks like video processing. We can also add the local or remote computational queuing delay [55, 64], caused by the continuous generation of tasks, which happens even while other tasks are being processed. The processing time depends on the CPU cycles required to complete the task and on the computing capacity, e.g., CPU cycles/second, allocated to the task [74, 69].
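
As a simple illustration, the sketch below decomposes a task's latency into these components; the parameter names and example values are assumptions for this sketch, not the notation of the cited works.

# Illustrative decomposition of a task's end-to-end latency.
# Parameter names and values are assumptions, not the notation of [52, 55, 64, 74].

def task_latency(bits, uplink_rate_bps, cpu_cycles, cpu_freq_hz,
                 compression_time_s=0.0, queuing_delay_s=0.0):
    """Latency = compression + transmission + queuing + processing."""
    t_tx = bits / uplink_rate_bps        # device-to-edge transmission time
    t_proc = cpu_cycles / cpu_freq_hz    # required cycles / allocated cycles per second
    return compression_time_s + t_tx + queuing_delay_s + t_proc

# Example: a 2 MB video frame, 20 Mbit/s uplink, 1e9 cycles, 2 GHz allocated.
print(task_latency(bits=2e6 * 8, uplink_rate_bps=20e6, cpu_cycles=1e9, cpu_freq_hz=2e9))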

The latency is equally affected by the data generation speed. When the generation rate exceeds the system's processing capacity, data accumulates in buffers and nodes no longer process tasks in real time. Wang et al. [61] refer to this as a blocking state and propose to adapt the resource allocation scheme depending on whether the system is in a blocking or a non-blocking state. Furthermore, the data generation is usually non-uniform across the system. This leads to varying workloads between servers, where some may be overloaded while others are free of tasks. It is therefore interesting to consider load balancing in the resource allocation [69], as well as the trade-off between computing and transmission time when moving a task to a less loaded but more distant node [65].

In addition, some devices process critical tasks or occupy a pivotal role in the system, and therefore need priority in their processing. A solution proposed in [52] is to minimize a weighted-sum delay over all devices, the weights reflecting the devices' importance in the system. Alternatively, [51] proposes to measure each task's priority with delay and reliability requirements. The downloading time from server to devices is commonly ignored, since the result data are smaller and downlinks have higher rates [62, 58]. Likewise, the transmission time between a base station and its associated MEC server is ignored [59]. Finally, as seen in Section 4.1, the partition of tasks can greatly reduce the processing time by parallelizing the processing. [45] shows that the dynamic placement of the task partitioning decision, i.e., the decision of which nodes process each part of the task, can also reduce the latency. Indeed, if the decision is taken on the requesting node, it consumes that node's much more constrained resources; but if it is taken on a remote MEC server, it may take more time to reach other MEC servers, depending on their placement relative to the device.

4.3.3   Reliability

As seen in Section 3, some tasks of mission-critical applications are of vital importance. Thus, the MEC must have a certain level of reliability to ensure that these tasks are processed. In wireless networks, reliability is seen as the probability of successfully transferring data within a given delay [93]. A first challenge in MEC networks is node failure. Redundancy of tasks is a relevant solution to mitigate this effect [94, 95]. However, it can burden the network if the redundancy takes more than the needed computing or communication resources. A node failure measurement helps ensure the minimum tasks' reliability while avoiding overuse of resources [77]. Another challenge is extreme events in server and UE processing queues. When queues are overloaded they may drop some critical tasks, and guaranteeing an average queuing delay is not sufficient to prevent that [96]. Thus, the work [64] uses the statistics of the extreme queue length to ensure reliability.
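
To see why an average-delay guarantee can fail critical tasks, the small simulation below contrasts the mean queuing delay with the probability of missing a deadline on a single overloaded queue. The M/M/1-style model, the arrival and service rates and the deadline are assumptions chosen for illustration only, not the formulation of [64] or [96].

# Why an average queuing-delay bound is not enough for critical tasks.
# The queue model, rates and deadline below are illustrative assumptions,
# not the model of [64] or [96].
import random

random.seed(0)
ARRIVAL_RATE = 9.0    # tasks per second offered to the server (assumed)
SERVICE_RATE = 10.0   # tasks per second the server can process (assumed)
DEADLINE_S = 2.0      # per-task delay budget (assumed)

def simulate_sojourn_delays(n_tasks=100_000):
    """Single-server FIFO queue: departure_n = max(departure_{n-1}, arrival_n) + service_n."""
    delays, departure, t = [], 0.0, 0.0
    for _ in range(n_tasks):
        t += random.expovariate(ARRIVAL_RATE)        # arrival time of the next task
        service = random.expovariate(SERVICE_RATE)   # its service time
        departure = max(departure, t) + service      # when it leaves the server
        delays.append(departure - t)                 # sojourn (queuing + service) delay
    return delays

delays = simulate_sojourn_delays()
mean_delay = sum(delays) / len(delays)
miss_ratio = sum(d > DEADLINE_S for d in delays) / len(delays)
print(f"mean delay = {mean_delay:.2f} s (comfortably under the {DEADLINE_S} s budget)")
print(f"P(miss)    = {miss_ratio:.2%} (a sizeable fraction of critical tasks is still late)")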

4.4 Methods

The chosen method for resource allocation has to offer a satisfactory compromise between precision, computational complexity and scalability, depending on the problem and its context. Some methods may be unable to solve a given problem [56] or to fulfill the system requirements. In addition, the method has to fit the scale of the system, not being too complex for large-scale systems, and its needs, for example when suboptimal results are sufficient.

4.4.1   Optimization methods

Classic mathematical optimization methods aim to solve problems optimally. Cao et al. [47] optimally solve the resource allocation in a three-node network to minimize the devices' energy consumption with the Lagrange duality method. Chen et al. [68] propose a scheme for resource allocation and task placement in ultra-dense networks that minimizes the task completion time. They solve the computational resource allocation part of the problem with the Karush–Kuhn–Tucker (KKT) conditions. Ren et al. [52] exploit the KKT conditions to allocate a MEC server's resources to users while minimizing the delay, where data is compressed locally by the user before sending. Even though classic mathematical optimization methods deliver optimal outcomes, they come with significant complexity. Thus they are suited to small-scale systems with few parameters. They are unsuited to large-scale systems, where the complexity is too high to handle: they will either fail to solve the problem or demand an unfeasible amount of time.
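
To give a flavor of what such a KKT-based allocation looks like, consider a toy problem (not the exact formulation of [52] or [68]): split a MEC server's CPU budget F among K tasks with cycle demands c_k so as to minimize the total processing delay sum_k c_k/f_k subject to sum_k f_k <= F. Stationarity of the Lagrangian yields the closed form f_k = F * sqrt(c_k) / sum_j sqrt(c_j), sketched below with assumed example values.

# Toy KKT-based CPU allocation (illustrative; not the exact problem of [52] or [68]).
# Minimize sum_k c_k / f_k  subject to  sum_k f_k <= F, f_k > 0.
# Setting d/df_k [c_k/f_k + lam * f_k] = 0 gives f_k = sqrt(c_k / lam), and the
# budget constraint fixes lam, yielding f_k = F * sqrt(c_k) / sum_j sqrt(c_j).
from math import sqrt

def allocate_cpu(cycle_demands, total_capacity_hz):
    """Closed-form KKT allocation: capacity proportional to sqrt(cycle demand)."""
    roots = [sqrt(c) for c in cycle_demands]
    norm = sum(roots)
    return [total_capacity_hz * r / norm for r in roots]

# Example with assumed values: three offloaded tasks sharing a 6 GHz MEC server.
demands = [1e9, 4e9, 9e9]            # CPU cycles required by each task
alloc = allocate_cpu(demands, 6e9)   # Hz allocated to each task -> 1, 2 and 3 GHz
delays = [c / f for c, f in zip(demands, alloc)]
print([f"{f / 1e9:.2f} GHz" for f in alloc], [f"{d:.2f} s" for d in delays])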