Page 155 - Kaleidoscope Academic Conference Proceedings 2024
Innovation and Digital Transformation for a Sustainable World
the CNN with sparse cross-entropy underscores the commitment to achieving high accuracy in recognizing foundational elements like alphabets and numbers, aligning with the core objectives of the Alpha-Bit research initiative.
D. Integrating the TensorFlow Lite Model into Android Studio for the Alpha-Bit Application
Integrating a TensorFlow Lite model file into Android Studio for the Alpha-Bit application, developed with Kotlin and Jetpack Compose, involves a systematic process. First, the TensorFlow Lite model is prepared by training and optimizing it for character recognition, and then it is converted to the TensorFlow Lite format ('.tflite') to ensure compatibility with mobile devices. In Android Studio, the project setup includes adding the necessary dependencies for TensorFlow Lite in the 'build.gradle' file. The TensorFlow Lite model file is placed in the 'assets' folder within the 'main' directory of the Alpha-Bit project.
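As a concrete sketch of this setup step, the module-level Gradle file might look as follows; the dependency version is illustrative rather than the one used by Alpha-Bit, and the exact spelling of the no-compress option varies between Android Gradle Plugin versions:

```kotlin
// Module-level build.gradle.kts — illustrative sketch; the version number is an assumption.
android {
    // Keep AAPT from compressing the bundled .tflite model in src/main/assets,
    // so it can later be memory-mapped directly from the APK.
    androidResources {
        noCompress += "tflite"
    }
}

dependencies {
    // TensorFlow Lite interpreter for on-device inference
    implementation("org.tensorflow:tensorflow-lite:2.14.0")
}
```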
Within the Kotlin codebase of Alpha-Bit, the TensorFlow Lite interpreter is used to load the model. This involves creating an interpreter instance and implementing a function ('loadModelFile()') to load the TensorFlow Lite model file from the assets folder. The TensorFlow Lite model is then invoked for inference on input data, which, in Alpha-Bit's case, would typically be images of characters. The TensorFlow Lite interpreter runs the model on the input data, producing output predictions.
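A minimal sketch of this load-and-infer step is shown below. The file name 'model.tflite', the single-image input, and the class count of 36 are assumptions for illustration, not details taken from the Alpha-Bit source:

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.ByteBuffer
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Sketch only: "model.tflite" and the 36-class output are assumed, not from the paper.
class CharacterClassifier(context: Context) {

    private val interpreter = Interpreter(loadModelFile(context, "model.tflite"))

    // Memory-map the TensorFlow Lite model file from the assets folder.
    private fun loadModelFile(context: Context, assetName: String): MappedByteBuffer {
        context.assets.openFd(assetName).use { fd ->
            FileInputStream(fd.fileDescriptor).channel.use { channel ->
                return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
            }
        }
    }

    // Run the model on a preprocessed character image; returns one score per class.
    fun classify(inputImage: ByteBuffer): FloatArray {
        val output = Array(1) { FloatArray(36) } // batch of 1, 36 assumed classes
        interpreter.run(inputImage, output)
        return output[0]
    }
}
```

Memory-mapping via 'FileChannel.map' is the usual pattern here because the interpreter can read the model without copying it into the Java heap.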
To seamlessly integrate TensorFlow Lite with the Jetpack Compose UI, the TensorFlow Lite inference is incorporated within the Kotlin code that defines the UI components. This ensures real-time character recognition within the Jetpack Compose interface. For example, the recognized character can be displayed within a Composable function using the TensorFlow Lite inference results.
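To make that last step concrete, the score array returned by the interpreter can be reduced to a single character for display; the label ordering below (ten digits followed by the uppercase alphabet) is an assumption about how the model's output classes are arranged:

```kotlin
// Label set assumed to be the ten digits followed by the uppercase alphabet.
val LABELS: List<Char> = ('0'..'9') + ('A'..'Z')

// Pick the highest-scoring class from the interpreter's output and map it to its character.
fun recognizedCharacter(scores: FloatArray): Char {
    require(scores.size == LABELS.size) { "expected ${LABELS.size} scores, got ${scores.size}" }
    var best = 0
    for (i in scores.indices) {
        if (scores[i] > scores[best]) best = i
    }
    return LABELS[best]
}
```

In the Compose layer, the resulting character can be held in observable state that a Text composable reads, so each new inference updates the interface automatically.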
Throughout the integration process, thorough testing on various devices is conducted to ensure compatibility and optimal performance. Additionally, the TensorFlow Lite model and inference process are optimized to enhance efficiency on mobile devices.

This comprehensive integration aligns with the overarching goal of Alpha-Bit, leveraging TensorFlow Lite to augment character recognition capabilities within an Android application developed using Kotlin and Jetpack Compose. The seamless integration of machine learning models into the mobile application contributes to the vision of democratizing education through innovative OCR technologies.

VI. RESULTS AND DISCUSSION

This research focuses on the introduction of AI and computer science concepts to young children through educational games, activities, and technologies. While existing studies demonstrate promise in enhancing foundational AI/CS skills and knowledge, a notable research gap lies in the absence of comparative evaluations across various technological mediums. The inference drawn here suggests that utilizing smartphones as the primary technology interface may prove to be a more effective approach than some of the tools currently explored, for several compelling reasons:

Firstly, smartphones, being ubiquitous and mobile, transcend the confines of the classroom, enabling learning experiences to extend seamlessly into homes and flexible environments. The portability and widespread usage of smartphones empower children to engage in interactive AI/CS learning activities beyond traditional classroom settings. Research supports the advantages of mobile learning, indicating increased student engagement and personalized learning compared to exclusive reliance on conventional classroom instruction (Sung et al., 2016).

Secondly, smartphones offer an intuitive touch interface, allowing for direct interaction that aligns seamlessly with the hands-on, interactive nature of effective early childhood pedagogy. The tactile nature of touchscreens provides a natural and user-friendly way for young children to explore and manipulate visual content, surpassing the limitations associated with mouse-based or indirect command interfaces (Geist, 2014).

Fig. 4. Homepage of Alpha-Bit

Thirdly, smartphones, with their built-in AI capabilities, present an opportunity to leverage advanced technologies like Optical Character Recognition (OCR) for enhanced educational customization. OCR, utilizing deep learning convolutional neural networks, can recognize text in images and documents (Reddy & Suruliandi, 2020). Integrating OCR into educational apps on smartphones holds the potential for personalized and adaptive learning experiences. This technology could automatically detect children's worksheets or drawings, providing valuable insights into their progress.

A. System Requirements

1) Hardware:
• Processor: Intel Core i3 or equivalent
• RAM: 4GB or higher
• Storage: 50GB available disk space
• Display: Minimum resolution of 1280x720 pixels

2) Operating System:
• Windows 10 or later
• macOS 10.12 (Sierra) or later
• Linux distributions with kernel version 4.4 or later

3) Web Browser:
• Google Chrome (latest version recommended)
• Mozilla Firefox (latest version recommended)
• Microsoft Edge (latest version recommended)
• Safari (latest version recommended)