4. RESULTS AND DISCUSSIONS

4.1 Comparative analysis of model performance

The model referred to in [11], which is CNN-based, achieves a training accuracy of 82.36% and a testing accuracy of 66.18%. In contrast, the baseline model for gesture classification [12] demonstrates higher accuracies, with a training accuracy of 98.6% and a testing accuracy of 72.62%. The gesture classification model that uses tensor extraction and attention mechanisms achieves greater accuracy still, with a training accuracy of 99.53% and a testing accuracy of 83.2%. Finally, the transfer learning model surpasses all others, achieving a training accuracy of 99.16% and a testing accuracy of 98.24%. These results underscore the effectiveness of gesture classification across different model architectures, with the dynamic learning approach excelling in both training and testing accuracy. Table 2 lists the comparative performance measures of the attempted models.
Table 2 – Comparison with existing models

Sl. no   Model Name                                                          Training Accuracy   Testing Accuracy
1        CNN model [12]                                                      82.36%              66.18%
2        Baseline CNN Model for Gesture Classification [13]                  98.6%               72.62%
3        Gesture Classification using Tensor extraction – Attention based    99.53%              83.2%
4        CNN Model with dynamic learning                                     99.16%              98.24%

The test accuracy, denoting the proportion of correctly classified instances when the model is assessed on previously unseen data, stands at 98.24%, while the training accuracy, the proportion of correctly classified instances within the training dataset, registers at 99.16%. The training loss of 2.63% indicates the extent of deviation between the model's predictions and the actual target values during training, and it guides the optimization of the model. The validation loss, which quantifies the discordance between the model's predictions and the actual target values on the validation data, is 5.9%, serving as a gauge of the model's generalization to unseen data.
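For illustration, the sketch below shows how such figures are typically read off a Keras-style training run; it is not the authors' code, and the names model, train_ds, val_ds, test_ds and the epoch count are hypothetical placeholders.

```python
# Hypothetical sketch (not the authors' implementation) of obtaining the
# reported metrics from a Keras training run.  Assumes the model was
# compiled with metrics=["accuracy"]; all argument names are placeholders.
from tensorflow import keras  # assumption: a TensorFlow/Keras workflow


def report_metrics(model: keras.Model, train_ds, val_ds, test_ds, epochs: int = 30):
    history = model.fit(train_ds, validation_data=val_ds, epochs=epochs)

    # Final-epoch training/validation figures.  The paper reports loss as a
    # percentage, so the raw loss values are scaled by 100 for the same form.
    train_acc = 100 * history.history["accuracy"][-1]
    train_loss = 100 * history.history["loss"][-1]
    val_loss = 100 * history.history["val_loss"][-1]

    # Test accuracy on data never seen during training.
    test_loss, test_acc = model.evaluate(test_ds)

    print(f"Training accuracy : {train_acc:.2f}%")
    print(f"Training loss     : {train_loss:.2f}%")
    print(f"Validation loss   : {val_loss:.2f}%")
    print(f"Test accuracy     : {100 * test_acc:.2f}%")
    return history
```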
Figure 2 – Experimental setup
4.2 Realtime testing and latency analysis
The latency is observed in predicting a gesture, from accepting the input until the state of the appliance is updated. Latency is calculated using Python's built-in time package. The observed latency is 0.195 seconds, which is the average delay over 20 consecutive predictions. The existing model stated in [14] has a latency of 0.312 seconds. A comparative analysis of latency is presented in Table 3.

Table 3 – Gesture recognition latency

Sl. No   Model Name                                  Accuracy   Latency
1        Long-term Memory Augmented Network [14]     97.3%      0.312 seconds
2        Proposed model                              99.16%     0.195 seconds
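The latency measurement described above can, in principle, be reproduced with a simple timing loop around the prediction-and-actuation path using Python's time module. The sketch below is a hypothetical reconstruction: the capture, predict and actuate callables are placeholders, and only the averaging over 20 consecutive predictions follows the text.

```python
# Hypothetical sketch of the latency measurement: the delay from accepting
# an input frame to updating the appliance state, averaged over 20
# consecutive predictions.  The three callables are placeholder hooks.
import time
from typing import Any, Callable


def measure_latency(capture: Callable[[], Any],
                    predict: Callable[[Any], int],
                    actuate: Callable[[int], None],
                    n_predictions: int = 20) -> float:
    delays = []
    for _ in range(n_predictions):
        frame = capture()                         # acquire the input gesture frame
        start = time.perf_counter()               # timing starts once the input is accepted
        gesture = predict(frame)                  # model inference
        actuate(gesture)                          # update the appliance state
        delays.append(time.perf_counter() - start)
    return sum(delays) / len(delays)              # average delay in seconds
```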
Figure 3(a) – Realtime implementation to turn ON the appliance

Figure 3(b) – Realtime implementation to turn OFF the appliance

A sample result of the real-time experimentation, depicted in Figure 3, shows gesture 4 being classified: it invokes the ON state of the light, whereas gesture 5 invokes the OFF state. The appliance control with gestures is shown in Table 4.
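How a predicted class drives the appliance is not detailed in this section; the snippet below is only a hypothetical illustration of the mapping implied by Figure 3 (gesture 4 switches the light ON, gesture 5 switches it OFF). The relay-driver callable is an assumption, since the hardware interface is not specified here.

```python
# Hypothetical illustration (not from the paper) of mapping predicted
# gesture labels to appliance states: gesture 4 -> ON, gesture 5 -> OFF.
# The relay/driver call is left abstract because the actual hardware
# interface is not described in this section.
from typing import Callable

GESTURE_TO_STATE = {4: True, 5: False}    # True = ON, False = OFF


def control_appliance(gesture: int, set_relay: Callable[[bool], None]) -> None:
    state = GESTURE_TO_STATE.get(gesture)
    if state is None:
        return                            # gestures without an assigned action are ignored
    set_relay(state)                      # drive the light ON or OFF
```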