Accordingly, as in numerous contemporary papers recently proposed in the literature [20, 8], the optimization of (14) is pursued by applying the gradient descent method, which limits the computational complexity.
4.2 Federated learning framework ∑ ∈ |Θ |
Algorithm 1 Client Side
1: for each NE $k$ involved in the learning process do
2:     update $w_k(t) = \hat{w}_k(t-1) - \eta \nabla F_k(\hat{w}_k(t-1))$;
3:     return $w_k(t)$ to the central server;
4: end for

Algorithm 2 Server Side
1: initialize $w_0$;
2: for each NE $k$ involved in the learning process, in parallel do
3:     receive and update $w_k(t)$;
4: end for
5: update the global model $w(t) = \frac{\sum_{k \in \mathcal{K}} |\Theta_k| \, w_k(t)}{\sum_{k \in \mathcal{K}} |\Theta_k|}$.
As represented in Fig. 2, the proposed FL framework consists of the client level, responsible for the distributed local data training, and of a server side. The server side is typically represented by a base station or a more general central unit, set up to improve the global learning model and to merge the locally trained EU models. The client and server sides interact with each other throughout a series of iteration rounds $t$. It is important to highlight that the EUs involved in the training procedure are a subset of the totality of the EUs.

The FL procedure consists of the following steps:
• Let $\mathcal{K}$ be the set of the EUs involved in the training process. In parallel, each EU belonging to $\mathcal{K}$, i.e., EU $k$, updates its local parameter vector $w_k(t)$, which depends on its local dataset $\Theta_k$, according to the following rule [8]:

    $w_k(t) = \hat{w}_k(t-1) - \eta \nabla F_k(\hat{w}_k(t-1))$,    (15)

  where $\eta$ is the learning rate and $\hat{w}_k(t-1)$ represents the term $w_k(t-1)$ after global aggregation.

• As detailed in [20], the server side computes the weighted average expressed by

    $w(t) = \frac{\sum_{k \in \mathcal{K}} |\Theta_k| \, w_k(t)}{\sum_{k \in \mathcal{K}} |\Theta_k|}$.    (16)
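To make the two rules concrete, the following is a minimal NumPy sketch of one FL iteration round combining the client update (15) with the server aggregation (16). The quadratic local loss, the client-sampling policy, and all names (local_update, aggregate, etc.) are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_hat, X_k, y_k, eta):
    """Client side, rule (15): one gradient step on the local dataset Theta_k."""
    grad = X_k.T @ (X_k @ w_hat - y_k) / len(y_k)  # gradient of an assumed local MSE loss
    return w_hat - eta * grad                      # w_k(t) = w_hat(t-1) - eta * grad

def aggregate(updates):
    """Server side, rule (16): average weighted by the local dataset sizes |Theta_k|."""
    total = sum(n_k for _, n_k in updates)
    return sum(n_k * w_k for w_k, n_k in updates) / total

# Synthetic local datasets Theta_k, one (features, labels) pair per EU.
dim, n_eus = 4, 10
datasets = [(rng.normal(size=(30, dim)), rng.normal(size=30)) for _ in range(n_eus)]

w = np.zeros(dim)                                  # global model w_0
for t in range(1, 21):                             # iteration rounds t
    K = rng.choice(n_eus, size=4, replace=False)   # only a subset of EUs per round
    updates = [(local_update(w, *datasets[k], eta=0.1), len(datasets[k][1]))
               for k in K]
    w = aggregate(updates)                         # broadcast as w_hat for round t+1
```

Under this weighting, clients holding more data pull the global model proportionally harder, which is exactly the effect of the $|\Theta_k|$ factors in (16).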
It is important to make evident that the EUs, in performing distributed data training according to the FL framework, achieve numerous advantages in terms of client privacy and limited exploitation of their computational resources. Training the data locally at the client's site helps users to keep their sensitive and personal information private, since uploading the EU parameter vector $w_k$ does not expose the client to any sort of privacy issue: more specifically, from $w_k$ it is not elementary to retrieve $\Theta_k$. Finally, each iteration round of the algorithm involves just a part of the whole set of EUs, reducing the message passing between the client and central server entities. Strongly connected with this aspect, the gradient descent algorithm is able to tackle the learning problem without implying an excessive resource consumption, thus meeting the limited computational capabilities intrinsic to each mobile device.

Algorithms 1 and 2 exhibit the pseudocode corresponding to the client and server sides, respectively.

4.3 VFs placement planning

Algorithm 3 VFs Placement Planning
1: Input: predicted application popularity vector p;
2: for each VF ∈ p do
3:     for each CN $h$ do
4:         if $h$ has enough SRBs then load the VF on $h$;
5:         else
6:             if the cloud has enough SRBs then
7:                 load the VF on the cloud;
8:             end if
9:         end if
10:    end for
11: end for

Once the FL framework is applied to obtain the SR prediction on the basis of the historical EUs' information, properly aggregated by the central server, the VFs' placement planning strategy starts. The placement acts on the basis of the VFs' popularity, expressed with the popularity vector p. The popularity vector p has length equal to the number of VF types and contains the VF types sorted in descending order of occurrence frequency within the pool of the whole network requests.
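As an illustration, p could be built from a predicted request trace by simple frequency counting; the trace and the VF type names below are invented for the example.

```python
from collections import Counter

# Hypothetical trace of predicted service requests, one VF type per entry.
requests = ["video", "cache", "video", "auth", "video", "cache"]

# Popularity vector p: VF types in descending order of occurrence frequency.
p = [vf for vf, _ in Counter(requests).most_common()]
print(p)   # ['video', 'cache', 'auth']
```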
In order to validate the benefits of the proposed framework for the VFs placement problem, we propose a straightforward placement strategy strictly dependent on p. Supposing that the predicted network SRs are given in terms of the VFs' popularity and expressed with the popularity vector p, the VFs' placement is realized through the following steps (a code sketch follows the list):

1. Process the popularity vector p starting from the most popular VF in p, hence from the most requested one;

2. Deploy it on the first CN with enough available SRBs to host it;

3. Deploy it on the cloud if it has enough available SRBs to host it;

4. If it can be loaded neither on the CNs nor on the cloud
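A minimal sketch of this greedy strategy is given below, under the assumption (ours, not the paper's) that each VF type declares an SRB demand and that each CN and the cloud expose a residual SRB budget; step 4 simply marks the VF as not placed.

```python
# Hypothetical greedy placement following steps 1-3; SRB capacities,
# per-VF SRB demands, and all names are illustrative assumptions.
def place_vfs(p, srb_demand, cn_free, cloud_free):
    """p: VF types by descending popularity; *_free: available SRBs."""
    placement = {}
    for vf in p:                                   # step 1: most popular VF first
        need = srb_demand[vf]
        # Step 2: first CN with enough available SRBs.
        cn = next((h for h, free in cn_free.items() if free >= need), None)
        if cn is not None:
            cn_free[cn] -= need
            placement[vf] = cn
        elif cloud_free >= need:                   # step 3: fall back to the cloud
            cloud_free -= need
            placement[vf] = "cloud"
        else:
            placement[vf] = None                   # step 4: cannot be loaded anywhere
    return placement

# Example usage with invented capacities.
print(place_vfs(["video", "cache", "auth"],
                {"video": 3, "cache": 2, "auth": 2},
                cn_free={"cn1": 4, "cn2": 2},
                cloud_free=2))
# {'video': 'cn1', 'cache': 'cn2', 'auth': 'cloud'}
```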