This procedure is carried out for two scenarios: 1) when the switch-to-controller placement is imbalanced (the switch-to-controller assignment is two and five switches for controllers one and two, respectively) and 2) when the switch-to-controller placement is balanced (the switch-to-controller assignment is three and four switches for controllers one and two, respectively).
In addition to switch-to-controller balancing, the control plane has several tuneable parameters, such as polling frequency and soft idle timeout [57]. Polling frequency specifies how frequently statistics requests are sent to the data plane. Soft idle timeout specifies the total time an inactive flow entry is stored in the flow tables before deletion. Tuning these parameters impacts control-plane overhead. Since the polling frequency parameter is expressed as an interval in seconds, increasing it (i.e. sending statistics requests less frequently) is likely to decrease control-plane overhead, at the expense of slower data-plane protection and restoration, while increasing the soft idle timeout keeps more flow rules in the flow tables and reduces control-plane overhead, with switch resource (e.g. memory and storage) exhaustion as a trade-off. In an operational environment, OpenFlow switches with TCAM (Ternary Content Addressable Memory) support are typically preferred for fast processing [58]. However, TCAM is very expensive and offers very limited memory space [59]. Therefore, the soft idle timeout can only be increased up to a certain threshold that keeps switch memory utilization at acceptable levels.
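To make this trade-off concrete, the back-of-envelope model below (our illustration, not taken from the paper; all workload numbers are hypothetical placeholders) counts the control messages produced by a given polling interval and soft idle timeout:

import math

def control_messages(duration_s, polling_interval_s, idle_timeout_s,
                     flows_per_s, mean_gap_s):
    # Stats-Request/Stats-Reply: one request/reply pair per poll, so a
    # longer polling interval (polling less often) sends fewer messages.
    stats_msgs = 2 * duration_s / polling_interval_s
    # A table miss (Packet-In + Flow-Mod + Packet-Out) occurs when a
    # flow's entry has already idled out. With exponentially distributed
    # packet gaps, P(gap > timeout) = exp(-timeout / mean_gap), so a longer
    # soft idle timeout keeps entries cached and avoids these round trips.
    miss_prob = math.exp(-idle_timeout_s / mean_gap_s)
    miss_msgs = 3 * duration_s * flows_per_s * miss_prob
    return stats_msgs + miss_msgs

# Sweep both parameters together over the same 5 s..40 s range as the paper.
for t in range(5, 45, 5):
    print(f"{t:2d} s -> {control_messages(600, t, t, 5, 20):8.1f} messages")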

To determine how the soft idle timeout affects control-plane overhead, we gradually increase the soft idle timeout and polling frequency (from 5 s to 40 s, in increments of 5 s) and measure the number of packets (i.e. Packet-In, Packet-Out, Flow-Mod, Stats-Request and Stats-Reply). In order to evoke control traffic, we generate 200 000 packets between two hosts (one connected to the node in Johannesburg and the other connected to a node in Cape Town). The duration, packet size and bandwidth are the same as for the switch-to-controller placement experiment. This experiment leveraged the results from the controller placement experiment (for the case when two control instances are deployed). In other words, two control instances were deployed at optimal locations to minimize propagation latency. Additionally, the ONOS mastership management module was activated to balance the switch-to-controller placement.
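The paper does not name the tool used to tally these message types; the sketch below shows one way to do it, assuming a packet capture of the controller-switch channel (e.g. taken with tcpdump) and Wireshark's OpenFlow 1.3 dissector via pyshark:

# Hypothetical tallying script: the capture file name and the use of
# pyshark are our assumptions, not the authors' tooling.
import collections
import pyshark

# OpenFlow 1.3 message type codes for the five message types of interest
# (Stats-Request/Reply are the OFPT_MULTIPART pair in OpenFlow 1.3).
OFPT = {10: "Packet-In", 13: "Packet-Out", 14: "Flow-Mod",
        18: "Stats-Request", 19: "Stats-Reply"}

counts = collections.Counter()
for pkt in pyshark.FileCapture("sweep.pcap", display_filter="openflow_v4"):
    msg_type = int(pkt.openflow_v4.type)
    if msg_type in OFPT:
        counts[OFPT[msg_type]] += 1
print(dict(counts))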
Failover is evaluated by shutting down one controller in the cluster and calling the “pingall” function. If no packet loss is observed, all hosts can reach each other and switch reassignment to the active controller was successful. We also take note of the time it takes for the controller to take mastership of the “controller-less” switches.
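A minimal Mininet sketch of this failover check (our reconstruction under stated assumptions: a stand-in topology, placeholder controller IPs, and the ONOS instance stopped out of band) could look like the following:

import time
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo   # stand-in for the SANReN topology

net = Mininet(topo=SingleSwitchTopo(2), build=False)
# Placeholder IPs: two ONOS instances forming the cluster.
net.addController('c1', controller=RemoteController, ip='10.0.0.1', port=6653)
net.addController('c2', controller=RemoteController, ip='10.0.0.2', port=6653)
net.build()
net.start()

input("Shut down one ONOS instance now (out of band), then press Enter...")
start = time.time()
while net.pingAll() > 0:   # pingAll() returns the packet-loss percentage
    time.sleep(1)          # retry until every host pair is reachable again
print(f"Failover completed in {time.time() - start:.1f} s")
net.stop()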
7.3 Results and discussion

This section presents and discusses the results obtained from following the procedures described above.

7.3.1 Controller placement

Fig. 11 and Fig. 12 present the results obtained from our analysis of the SANReN network. As per Fig. 11, our results show that the optimum controller location when one controller is deployed is Cape Town, since this node has the lowest average latency (88.78 ms). Conversely, the worst location to place the controller when one controller is deployed is Bloemfontein, since this location yields the highest average latency (164.4 ms).
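The placement computation behind Fig. 11 can be sketched as follows: for each candidate node, compute the mean shortest-path propagation latency to every other node and pick the minimum. The SANReN sites below are from the paper, but the link latencies are hypothetical placeholders, not the measured values:

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Pretoria", "Johannesburg", 1.0), ("Johannesburg", "Bloemfontein", 4.0),
    ("Bloemfontein", "Cape Town", 10.0), ("Johannesburg", "Durban", 6.0),
    ("Durban", "East London", 7.0), ("East London", "Port Elizabeth", 3.0),
    ("Port Elizabeth", "Cape Town", 8.0)])   # latencies in ms (placeholders)

def avg_latency(node):
    dist = nx.shortest_path_length(G, source=node, weight="weight")
    return sum(dist.values()) / (len(G) - 1)   # mean over the other nodes

best = min(G.nodes, key=avg_latency)
print(best, f"{avg_latency(best):.2f} ms")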
Fig. 12 presents the results obtained when two controllers are deployed. These results are interpreted as follows: the blue bars indicate a scenario where one controller is placed in Pretoria (a region belonging to cluster one, as described in Section 7.2.1), while the other controller's location is iterated between Johannesburg, East London, Port Elizabeth and Cape Town (regions belonging to cluster two). Similarly, the red and green bars indicate controller placement in Durban and Bloemfontein (regions belonging to cluster one), while the other controller is placed in all regions within cluster two.
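A companion sketch for the two-controller case (again with hypothetical link latencies, and with the cluster split following Section 7.2.1): enumerate every (cluster-one, cluster-two) controller pair, attach each node to its nearer controller, and rank the pairs by mean latency, mirroring the enumeration behind the bars in Fig. 12:

from itertools import product
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Pretoria", "Johannesburg", 1.0), ("Johannesburg", "Bloemfontein", 4.0),
    ("Bloemfontein", "Cape Town", 10.0), ("Johannesburg", "Durban", 6.0),
    ("Durban", "East London", 7.0), ("East London", "Port Elizabeth", 3.0),
    ("Port Elizabeth", "Cape Town", 8.0)])   # placeholder latencies in ms

cluster_one = ["Pretoria", "Durban", "Bloemfontein"]
cluster_two = ["Johannesburg", "East London", "Port Elizabeth", "Cape Town"]

def pair_latency(c1, c2):
    d1 = nx.shortest_path_length(G, source=c1, weight="weight")
    d2 = nx.shortest_path_length(G, source=c2, weight="weight")
    # Every switch attaches to whichever of the two controllers is closer.
    return sum(min(d1[n], d2[n]) for n in G.nodes) / len(G)

for c1, c2 in sorted(product(cluster_one, cluster_two),
                     key=lambda p: pair_latency(*p)):
    print(f"{c1:12s} + {c2:14s} -> {pair_latency(c1, c2):5.2f} ms")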

                                   Fig. 11 – Total average latency for the ONOS controller without clustering.