In implementing our prototype, we adopt OpenFlow's match-action convention as the programming abstraction and define our own API as follows:
<UEID, TEID><Action><Stat>,
where UEID can be a UE's IP address assigned by the MME through the signalling channel, TEID is a GTP-U tunnel endpoint identifier, and Action refers to creating, updating, or removing a GTP-U tunnel.
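As an illustration only, such an entry could be represented in C as in the following sketch; the structure and field names are our own assumptions (not the prototype's actual code), and Stat is assumed here to denote per-tunnel packet/byte counters.

    #include <stdint.h>
    #include <netinet/in.h>

    /* Action applied to the GTP-U tunnel identified by <UEID, TEID>.       */
    enum gtpu_action {
        GTPU_TUNNEL_CREATE,
        GTPU_TUNNEL_UPDATE,
        GTPU_TUNNEL_REMOVE
    };

    /* One <UEID, TEID><Action><Stat> entry in the match-action style:
     * <UEID, TEID> selects the flow, Action programs the tunnel, and
     * Stat (assumed to be counters) records what the entry has matched.    */
    struct gtpu_rule {
        struct in_addr   ueid;     /* UE IP address assigned by the MME     */
        uint32_t         teid;     /* GTP-U tunnel endpoint identifier      */
        enum gtpu_action action;   /* create / update / remove the tunnel   */
        struct {
            uint64_t packets;
            uint64_t bytes;
        } stat;
    };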
LTE network slice instances are isolated and do not interfere with one another. We demonstrated playback of YouTube videos on a smartphone (Nexus 5) connected to an OAI slice on FLARE [Ref. 7.3-6].
7.3.5 Functional Enhancement in Data Plane Using SDN Software Switches
SDN aims primarily at flexible networking enabled by software control. For example, OpenFlow, the most widely used SDN technology, specifies the OpenFlow Switch Specifications as the SBI, through which an SDN controller can impose packet-forwarding rules on OpenFlow-compliant switches. Unlike in conventional Layer-2/3 switches, the rules are not limited to a predetermined, proprietary set of packet-forwarding criteria but are programmed through openly defined interfaces. In this sense, the programmability of SDN resides largely in the control layer.
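For reference, an OpenFlow-style flow entry pairs a match over header fields with actions and counters, and is installed by the controller via FLOW_MOD messages. The following simplified C sketch is purely illustrative; the field names and layout are ours and do not reflect the OpenFlow Switch Specification's actual data structures.

    #include <stdint.h>

    /* Simplified, illustrative flow entry in the OpenFlow match-action
     * style (not the OpenFlow Switch Specification's wire format).         */
    struct flow_entry {
        uint32_t priority;            /* highest-priority matching entry wins */
        struct {
            uint8_t  eth_dst[6];      /* destination MAC address              */
            uint16_t eth_type;        /* e.g. 0x0800 for IPv4                 */
            uint32_t ipv4_dst;        /* destination IPv4 address             */
            uint32_t ipv4_dst_mask;   /* wildcard mask for ipv4_dst           */
        } match;
        enum { ACT_OUTPUT, ACT_DROP } action;
        uint32_t out_port;            /* used when action == ACT_OUTPUT       */
        uint64_t n_packets;           /* counters reported to the controller  */
        uint64_t n_bytes;
    };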
However, there have been technological developments in which the data plane is made programmable while the SDN framework (that is, the application-control-data layering and the NBI and SBI interfaces) is retained. An example is Protocol Oblivious Forwarding (POF), which is described in detail in the next section (7.4). It should also be mentioned that the SDN architecture discussed at ITU-T SG13 encompasses not only data forwarding functionality but also data processing functionality in the data plane.
Another approach to realizing data plane programmability in SDN is to enhance data plane functionality by combining SDN with NFV or, more broadly, with computational capabilities. As stated in Section 7.3.1, the synergy between SDN and NFV has been widely discussed. This is because both technologies abstract hardware and/or its capabilities and are thus complementary; used together, they enable flexible and sophisticated software-based control of packets. In fact, FLARE, described above, can incorporate this idea into its architecture.
Software-based SDN switches fit this approach well. There are, however, issues in doing so, the most notable being performance. As stated in Section 7.3.2, maintaining reasonable and predictable performance becomes key when a software switch running on general-purpose CPUs is considered.
7.3.6 Introduction to Lagopus [Ref. 7.3-7 and 7.3-8]
There are several SDN software switch products, both commercial and open source. Open-source software switches are especially useful for exploring cutting-edge network softwarization trends. At the same time, they are paving the way to commercial use as their functionality, performance, and reliability continue to improve.
‘Lagopus’ is an open-source software switch that runs on x86 CPUs and is fully compliant with the OpenFlow Switch Specifications. Its development started under the O3 Project, aiming at a switch with high performance, functional extensibility, and usability for wide-area network uses, including telecom carrier networks. (See Section 6.2.1 for more information about the O3 Project.) Its features include support for multiple WAN networking protocols, management protocols/interfaces, and large numbers of flow entries, to name a few [Ref. 7.3-7].
Regarding performance, Lagopus has a number of notable characteristics in its software architecture and design. The switch's software is divided into two main components: the switch agent and the data plane. The switch agent component provides a unified data store for configuring and managing switch resources and exposes interfaces to OpenFlow controllers. The data plane component is responsible for all the processing that packet forwarding involves. It utilizes Intel DPDK libraries to accelerate network I/O performance, which makes it possible to bypass packet processing in the Linux kernel and to access NIC packet buffers directly from userspace programs. It also exploits multiple CPU cores, using parallel processing techniques, to achieve fast, efficient handling of packet flows. Figure 7.3-4 shows the parallel processing architecture from ingress, through flow lookup and header modification, to egress. By dedicating specific CPU cores to the I/O receive (RX) and I/O transmit (TX) threads, the overheads in these threads can be substantially reduced. In addition, flow lookup is accelerated by employing fast flow-table lookup schemes as well as CPU caches, leading to an overall improvement in packet forwarding performance [Ref. 7.3-8].
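To give a concrete flavour of the DPDK-based I/O path described above, the following minimal C sketch shows a generic userspace polling loop pinned to a dedicated worker core. It is not Lagopus code (Lagopus separates RX, worker, and TX threads, as shown in Figure 7.3-4); the port and queue parameters are illustrative assumptions, and at least one worker lcore is assumed to be given to the EAL (e.g. -l 0-1).

    #include <stdlib.h>
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_debug.h>

    #define NUM_MBUFS   8191
    #define MBUF_CACHE  250
    #define RING_SIZE   1024
    #define BURST_SIZE  32

    /* I/O loop pinned to one dedicated worker lcore: poll the NIC RX ring
     * from userspace (no kernel network stack involved) and send packets
     * back out. A real switch would perform flow lookup and header
     * modification between the RX and TX bursts.                           */
    static int io_loop(void *arg)
    {
        uint16_t port = *(const uint16_t *)arg;
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;
            uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
            while (nb_tx < nb_rx)             /* free what the NIC refused  */
                rte_pktmbuf_free(bufs[nb_tx++]);
        }
        return 0;
    }

    int main(int argc, char **argv)
    {
        static uint16_t port = 0;             /* first DPDK-bound port      */
        struct rte_eth_conf port_conf = {0};

        /* Initialize DPDK's environment abstraction layer (hugepages,
         * lcores, poll-mode drivers).                                      */
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        /* Packet-buffer pool shared with the NIC, accessed from userspace. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool",
                NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                rte_socket_id());
        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

        /* One RX and one TX queue on the port, default configuration.      */
        if (rte_eth_dev_configure(port, 1, 1, &port_conf) != 0 ||
            rte_eth_rx_queue_setup(port, 0, RING_SIZE,
                                   rte_eth_dev_socket_id(port), NULL, pool) != 0 ||
            rte_eth_tx_queue_setup(port, 0, RING_SIZE,
                                   rte_eth_dev_socket_id(port), NULL) != 0 ||
            rte_eth_dev_start(port) != 0)
            rte_exit(EXIT_FAILURE, "port setup failed\n");

        /* Dedicate the first worker lcore entirely to the polling I/O loop. */
        rte_eal_remote_launch(io_loop, &port, rte_get_next_lcore(-1, 1, 0));
        rte_eal_mp_wait_lcore();
        return 0;
    }

Dedicating a core to such a busy-polling loop avoids interrupt handling and context switches, which is one source of the overhead reduction mentioned above.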