Page 775 - AI for Good Innovate for Impact
Partner: Center for Disability and Development in Ethiopia
2.2 Future work
Data collection and analysis for more general systems: Additional facial-expression and
eye-tracking datasets are required to supplement the existing training data, with particular
attention to the diversity of the facial data so that the system is inclusive of all users and less
biased. Collaboration with healthcare and research institutions is planned for further implementation.
Model development for a general human-computer interaction system: Continuous refinement
and optimization of the machine learning models for facial-expression and eye-tracking analysis,
targeting any system that currently requires physical interaction. This involves training the
models on larger and more diverse datasets to enhance accuracy, applicability, and robustness.
Iterative model development will enable the system to interpret user input effectively and
respond with the desired output.
Search engine optimization (SEO): SEO for websites and businesses can be made more effective
by employing eye-tracking data and analytics. This potential area of research is being
spearheaded by Microsoft and Google.
Market research and advertising testing: Face and eye tracking can play a major role in marketing.
To understand customer demand, eye trackers placed in markets measure how long a shopper
looks at a product. This helps manufacturers and producers make products that meet consumer
requirements.
3 Use Case Requirements
REQ-01: It is critical that the system enables individuals with hand disabilities to control digital
devices using only face and eye gestures in real time.
REQ-02: It is expected that the system utilizes accurate face and eye detection algorithms
(YOLOv5 and Dlib) operating at a minimum of 30 frames per second (FPS) to ensure smooth
user interaction.
REQ-03: It is critical that the system supports blink-based input (e.g., clicks) through eye aspect
ratio (EAR) analysis of Dlib facial landmarks.
REQ-04: It is critical that the system runs efficiently on low-cost, widely available hardware such
as laptops or Raspberry Pi with webcam support.
REQ-05: It is of added value that the system maintains robust performance across diverse
lighting conditions, skin tones, and facial structures.
REQ-06: It is expected that the system functions locally without requiring an internet connection
to ensure user privacy and security.
REQ-07: It is of added value that the system allows users to customize gesture mappings based
on their personal preferences and abilities.
REQ-08: It is critical that the interface remains user-friendly and easily operable by individuals
with limited mobility, requiring minimal setup and training.
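The blink-based input of REQ-03 rests on the eye aspect ratio, computed from the six eye-contour landmarks that Dlib's 68-point model provides per eye. The sketch below shows the standard EAR formula and a frame-counting blink trigger; the names, the 0.21 threshold, and the 3-frame minimum are illustrative assumptions, to be tuned per user rather than fixed values from this system.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) landmarks.

    `eye` is a (6, 2) sequence ordered p1..p6 around the eye contour,
    as in Dlib's 68-point model. EAR drops toward zero as the eye
    closes, so a sustained low EAR indicates a blink.
    """
    eye = np.asarray(eye, dtype=float)
    # Vertical distances between upper and lower eyelid landmarks.
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

class BlinkDetector:
    """Emit a click event when EAR stays below a threshold for a few frames.

    Threshold and frame count are assumed defaults, not calibrated values.
    """

    def __init__(self, threshold=0.21, min_frames=3):
        self.threshold = threshold
        self.min_frames = min_frames
        self._below = 0

    def update(self, ear):
        """Feed one EAR sample per frame; return True when a blink completes."""
        if ear < self.threshold:
            self._below += 1
            return False
        blinked = self._below >= self.min_frames
        self._below = 0
        return blinked
```

Requiring several consecutive low-EAR frames before firing filters out the brief dips caused by natural blinks and landmark jitter, so only deliberate, held blinks register as clicks.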
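Verifying the 30 FPS floor in REQ-02 calls for a frame-rate measurement inside the capture loop. A minimal sketch of a rolling FPS meter (class name and window size are illustrative; the clock is injectable so the logic can be checked without a camera):

```python
import time
from collections import deque

class FpsMeter:
    """Rolling frames-per-second estimate over the most recent frames."""

    def __init__(self, window=30, clock=time.perf_counter):
        # Keep only the last `window` frame timestamps.
        self._times = deque(maxlen=window)
        self._clock = clock

    def tick(self):
        """Record one processed frame; return the current FPS estimate."""
        self._times.append(self._clock())
        if len(self._times) < 2:
            return 0.0
        elapsed = self._times[-1] - self._times[0]
        # N timestamps span N - 1 frame intervals.
        return (len(self._times) - 1) / elapsed if elapsed > 0 else 0.0
```

In the detection loop, calling `tick()` once per processed frame and logging a warning whenever the estimate falls below 30 gives a cheap runtime check of the requirement on low-cost hardware such as a Raspberry Pi.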