we can measure the SNR of a received signal prior to classification. To remove this requirement and also to save time, we merged the training sets of different SNR levels to create a more generalized model. The truncation cut-off that gives the smallest average loss was found to be −10 dB/Hz; therefore, we merged all the images for eight SNR levels denoised at this cut-off threshold. The training and test sets are eight times larger than those of the single-SNR sets. A classification accuracy of 98.8% (across all SNR levels) is achieved when using this model.
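As an illustration of this merging step, a minimal sketch in Python is given below; it assumes the denoised spectrogram images of each SNR level are stored as NumPy arrays, and all file names, array keys, and SNR values are hypothetical rather than taken from the actual implementation.

```python
import numpy as np

# Minimal sketch (assumed file layout): merge per-SNR spectrogram sets,
# all denoised at the -10 dB/Hz cut-off, into one generalized training set.
snr_levels_db = [-10, -5, 0, 5, 10, 15, 20, 25]       # hypothetical eight SNR levels

images, labels = [], []
for snr in snr_levels_db:
    data = np.load(f"spectrograms_snr_{snr}dB.npz")   # hypothetical file names
    images.append(data["images"])                     # shape (n_i, H, W, C)
    labels.append(data["labels"])                     # shape (n_i,)

# The merged set is roughly eight times the size of a single-SNR set.
X_train = np.concatenate(images, axis=0)
y_train = np.concatenate(labels, axis=0)

# Shuffle so that training batches mix all SNR levels.
perm = np.random.default_rng(0).permutation(len(X_train))
X_train, y_train = X_train[perm], y_train[perm]
```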
A confusion matrix of the merged model is given in Fig. 7. The major deficiency of the model is observed at (13, 3), where 14 out of 200 test samples belonging to class 3 are predicted as class 13. These two controllers belong to the same company, and both their time-series plots and spectrograms show a high visual resemblance.
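The error analysis behind Fig. 7 can be sketched with scikit-learn's confusion matrix; the model and test-set variables below are placeholders carried over from the hypothetical merging sketch above.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Sketch of the error analysis; `model`, `X_test`, `y_test` are assumed to
# exist from the training step above. In scikit-learn's convention, rows of
# the confusion matrix index the true class and columns the predicted class.
y_prob = model.predict(X_test)            # softmax outputs, shape (N, n_classes)
y_pred = np.argmax(y_prob, axis=1)

cm = confusion_matrix(y_test, y_pred)
print(cm[3, 13])                          # class-3 signals predicted as class 13
```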
We also tested the merged model with images at intermediate SNR levels ranging from −12 dB to 22 dB with increments of 5 dB. We used 30 images for each class at each SNR level, adding up to 3600 images, all previously unseen by the classifier. Our model gives 96.9% accuracy, as shown in Table 4. When we exclude the test data at −12 dB, the accuracy of the model increases to 99.3%, which indicates that almost all of the misclassification is associated with this particular SNR level.
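A possible form of this intermediate-SNR evaluation is sketched below; the SNR grid, file layout, and variable names are assumptions for illustration, not the actual evaluation code.

```python
import numpy as np

# Sketch of the intermediate-SNR evaluation; file names, the exact SNR grid,
# and the 30-images-per-class layout are assumptions for illustration only.
intermediate_snrs_db = np.arange(-12, 23, 5)          # assumed 5 dB steps from -12 dB

per_snr_acc = {}
for snr in intermediate_snrs_db:
    data = np.load(f"test_spectrograms_snr_{snr}dB.npz")
    X, y = data["images"], data["labels"]
    y_pred = np.argmax(model.predict(X), axis=1)
    per_snr_acc[int(snr)] = float(np.mean(y_pred == y))

overall_acc = np.mean(list(per_snr_acc.values()))                               # ~96.9% reported
acc_without_minus12 = np.mean([a for s, a in per_snr_acc.items() if s != -12])  # ~99.3% reported
```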
5.4 Classification accuracy vs. training set size
There are popular CNN models in the literature that can be applied to a wide variety of image classification problems via transfer learning, e.g., VGG16 or InceptionV3. These models have many hidden layers and have been trained on enormous data sets. Beyond these, it is common to come across CNN models in the literature that are even deeper and trained on even larger data sets. If the problem at hand is to accurately classify images of miscellaneous objects, e.g., humans, animals, or cars, then a deep model with a very large training set is indeed required, because such images show more diversity in terms of position, angle, ambiance, lighting, etc. However, in our case, the images that we classify are generated by well-defined methods from the outputs of quite robust electronic circuitry. Thus, the proposed models reach very high accuracy with as few as 100 training samples per class.
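For comparison, the transfer-learning route mentioned above could be set up as in the following sketch, which uses a frozen ImageNet-pretrained VGG16 backbone with a small classification head; the class count and head layers are assumptions, and this is not the lighter CNN proposed in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_classes = 14   # assumption: set to the number of in-library controller classes

# Frozen ImageNet-pretrained VGG16 backbone with a small classification head.
base = tf.keras.applications.VGG16(weights="imagenet",
                                   include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```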
To better explain why a low number of samples suffices for this particular problem, we examined the dependence of the accuracy on the training set size. Fig. 8 shows the accuracy of the classifier with respect to the sample size per class for different SNR levels. In these simulations, the same models that were optimized for 100 samples per class are used in all cases. Note that, in this figure, the x-axis denotes the size of the training set only; we did not shrink the validation set while tuning the training set sizes. All models have been validated with a test set of 25 samples per class unseen by the models before. Here it is seen that, for small training set sizes, the classification accuracy decreases, as expected. After roughly 50 samples per class, the accuracy saturates and begins to fluctuate. On the other hand, we see that the created models give reasonable accuracy even for a training set size as low as 20 samples per class, because these samples are created by devices that have a high level of consistency.
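The sweep summarized in Fig. 8 could be organized as in the sketch below, where a fixed number of samples per class is drawn, the model is retrained, and the accuracy is measured on the held-out test set; the helper function, size grid, and training settings are illustrative assumptions.

```python
import numpy as np

# Sketch of the training-set-size sweep behind Fig. 8; `build_model`, the
# size grid, and the training settings are illustrative assumptions.
def subsample_per_class(X, y, k, rng):
    """Draw k samples from each class without replacement."""
    idx = np.concatenate([rng.choice(np.where(y == c)[0], size=k, replace=False)
                          for c in np.unique(y)])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
accuracy_vs_size = {}
for k in [10, 20, 30, 50, 75, 100]:                   # samples per class (assumed grid)
    X_sub, y_sub = subsample_per_class(X_train, y_train, k, rng)
    model = build_model()                             # same architecture tuned for 100/class
    model.fit(X_sub, y_sub, validation_data=(X_val, y_val), epochs=20, verbose=0)
    y_pred = np.argmax(model.predict(X_test), axis=1)
    accuracy_vs_size[k] = float(np.mean(y_pred == y_test))
```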
For the practicality of the proposed system, the RF signal database should be updated as new products are introduced to the market. This also requires retraining of the CNN models, and hence fast training algorithms are needed. However, as explained above, the proposed system only requires a limited amount of training data, which in turn makes it a promising solution.
5.5 Out-of-library UAV controller signals

Finally, we investigate the behavior of the proposed algorithm when the receiver captures an out-of-library UAV controller signal. To do that, we tested our optimized CNN-based classifier on 40 signals from a Hubsan H501S X4 drone and compared the estimated probability density functions (pdfs) of the prediction uncertainty with those of the in-library test signals in Fig. 9. The output layer of the trained model gives a set of predictions for an incoming signal, where each element of the set corresponds to the estimated probability of that signal belonging to a particular class. A final decision on the class of the test signal is made based on the maximum probability, $p_{\max}$, in this set. We define the model uncertainty in Fig. 9 as $(1 - p_{\max})$.

Fig. 9 – Pdfs of the model uncertainty for in-library and out-of-library UAV classes. The model predicts in-library signals with high certainty. The prediction uncertainty increases when the model encounters an out-of-library controller.
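A minimal sketch of this uncertainty computation and the resulting out-of-library check is given below; the rejection threshold used here is an arbitrary placeholder, not a value reported in this work.

```python
import numpy as np

# Sketch of the uncertainty-based rejection rule: uncertainty = 1 - p_max,
# where p_max is the largest softmax probability. The 0.2 threshold below is
# an arbitrary placeholder, not a value reported in the paper.
def classify_with_rejection(model, X, uncertainty_threshold=0.2):
    probs = model.predict(X)                 # softmax outputs, shape (N, n_classes)
    p_max = probs.max(axis=1)
    uncertainty = 1.0 - p_max                # model uncertainty as plotted in Fig. 9
    labels = probs.argmax(axis=1)
    is_out_of_library = uncertainty > uncertainty_threshold
    return labels, uncertainty, is_out_of_library
```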
We observe that the two classes (i.e., in-library and out-of-library UAVs) are well separated in terms of the model uncertainty associated with each of them, and out-of-library UAV signals can be easily identified by a simple thresholding mechanism. The threshold can be placed




