Figure 2 - Resource adjustment system model (edge cloud with a resource controller (RC) and containerized virtual network functions VNF 1 ... VNF N, serving end users EU 1 ... EU K)
objective was to optimize resource allocation and simultaneously satisfy the target performance requirements in terms of latency and resource utilization. The scheme can complete three related tasks (i.e., monitoring data collection and processing, resource adjustment decision making, and execution of the decision) within a one-second interval. As a use case of a latency-sensitive VNF, we have selected the Internet-of-Things directory service (IoT-DS) function [10] to evaluate the effectiveness of the proposed resource control scheme.
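To make the per-interval operation concrete, a minimal sketch of such a one-second monitor-decide-execute cycle is given below; the callables monitor_vnf, decide_adjustment, and apply_adjustment are hypothetical placeholders for illustration, not part of the implementation described above.

    import time

    CYCLE = 1.0  # one control interval, in seconds

    def control_loop(monitor_vnf, decide_adjustment, apply_adjustment):
        """Run the three per-interval tasks: collect, decide, execute."""
        while True:
            started = time.monotonic()
            metrics = monitor_vnf()                # monitoring data collection and processing
            decision = decide_adjustment(metrics)  # resource adjustment decision making
            apply_adjustment(decision)             # execution of the decision
            # Sleep for whatever remains of the one-second interval.
            time.sleep(max(0.0, CYCLE - (time.monotonic() - started)))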
Both supervised [6-8] and unsupervised [11] ML-based approaches can be employed for dynamic resource adjustment. Supervised learning approaches offer the advantages of offline training on a large data set and high prediction accuracy from the beginning of system operation. However, they may predict unknown input patterns less accurately and require tedious preparation of a training data set. Unsupervised ML techniques, on the other hand, are desirable because they reduce the need for human involvement in preparing a training data set and improve prediction accuracy on unseen input patterns. For example, the unsupervised reinforcement-learning model presented in [11] can dynamically adjust multipath TCP window sizes to avoid network congestion. This paper extends the supervised approach presented in [8] with an unsupervised one, in which multiple regression-based ML models (gradient boosting regression and extremely randomized trees) are retrained online at regular intervals from monitoring data collected from the running system. The live ML models are replaced by the newly trained models so that they can make more accurate resource adjustment decisions.
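As an illustration of this retraining step, the sketch below refits both regressor types on the most recent monitoring window and returns them for swapping in as the live models. It assumes the scikit-learn implementations (GradientBoostingRegressor, ExtraTreesRegressor); the feature layout and the window handling are our assumptions, not the authors' code.

    from sklearn.ensemble import ExtraTreesRegressor, GradientBoostingRegressor

    def retrain_models(X_window, y_window):
        """Refit both regressor types on the most recent monitoring window.

        X_window: recent feature rows (e.g., workload, allocated CPU, utilization).
        y_window: service latency observed for each row.
        """
        models = {
            "gradient_boosting": GradientBoostingRegressor(),
            "extra_trees": ExtraTreesRegressor(n_estimators=100),
        }
        for model in models.values():
            model.fit(X_window, y_window)
        return models

    # At every retraining interval the live models are swapped for the new ones,
    # e.g. live_models = retrain_models(X_recent, y_recent)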
3. DYNAMIC RESOURCE ADJUSTMENT SCHEME WITH RE-TRAINING

In this section, we discuss the system model and present the proposed scheme employing multiple regression models for virtual resource adjustment of latency-sensitive VNFs.

3.1. System model
The resource adjustment system model is shown in Figure 2. An edge cloud hosts N containerized VNFs in a server machine. The VNFs can be categorized into two classes: one supporting high-priority, latency-sensitive, and mission-critical services and the other supporting low-priority, latency-tolerant batch-processing applications (e.g., scientific computation tasks that do not have strict completion deadlines). The VNFs for batch-processing applications use the resources remaining after allocation to the high-priority, latency-sensitive VNFs. The resource controller (RC) continuously monitors and records each VNF's resource allocation and utilization status as well as performance metrics (e.g., service latency). The RC executes resource adjustment algorithms (e.g., our proposed regression models as well as the conventional algorithm [10]) to adjust the computational resources by using Docker update commands [12]. The round-trip time, or latency, of the service provided by a latency-sensitive VNF, measured at an end-user (EU) device, is the time elapsed from the instant the EU sends a service request to the edge cloud to the instant it receives the response from the VNF. This latency consists of two components: the round-trip communication latency between the EU and the VNF, and the computational latency of the VNF. By dynamically and accurately adjusting the computational resources allocated to the VNF, we can control the latter component.
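A minimal sketch of this adjustment step, assuming the Docker SDK for Python and an illustrative per-core quota convention, is shown below; the decision of how many cores to grant would come from the regression models or the conventional algorithm [10], which are not reproduced here.

    import docker  # Docker SDK for Python

    def set_cpu_allocation(container_name, cpu_cores):
        """Apply a CPU allocation to a running VNF container (docker update).

        cpu_quota/cpu_period express the allocation as a share of CPU time,
        e.g. cpu_cores = 1.5 grants 150% of one core to the container.
        """
        client = docker.from_env()
        container = client.containers.get(container_name)
        container.update(cpu_period=100000,
                         cpu_quota=int(cpu_cores * 100000))

    # Only the VNF's computational latency is affected by this allocation;
    # the EU-to-VNF communication latency component is outside the RC's control.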
3.2. Data set preparation and offline training

The data set preparation process includes system monitoring data collection and processing. The data is collected from the system running the real VNF offline with simulated patterns of possible input workloads. Various parameters are recorded, such as the workload (e.g., the number of service requests fed to the system per unit time), the amount of resource allocated

Figure 3 - Flowchart of offline training procedure (training data → regression model(s) → selection of single or combined prediction method → training execution → model evaluation; the loop repeats while training data is available, then the selected model is output)
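Following Figure 3, the sketch below trains the candidate regressors on the prepared data set, evaluates each single prediction method as well as a combined (averaged) prediction on held-out data, and returns the selected model. The averaging rule and the mean-absolute-error criterion are illustrative assumptions, not necessarily the selection method used in the actual procedure.

    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor, GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    def offline_training(X, y):
        """Training execution, model evaluation, and selection (cf. Figure 3)."""
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
        candidates = {
            "gradient_boosting": GradientBoostingRegressor().fit(X_tr, y_tr),
            "extra_trees": ExtraTreesRegressor(n_estimators=100).fit(X_tr, y_tr),
        }
        # Evaluate each single prediction method ...
        scores = {name: mean_absolute_error(y_val, m.predict(X_val))
                  for name, m in candidates.items()}
        # ... and a combined method that averages the two models' predictions.
        combined = np.mean([m.predict(X_val) for m in candidates.values()], axis=0)
        scores["combined"] = mean_absolute_error(y_val, combined)
        selected = min(scores, key=scores.get)
        return selected, candidates, scores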