Page 731 - AI for Good Innovate for Impact
those of whole-sentence attention but also offer stable solutions to noise, effectively mitigating
latency issues in online services.
XiLing's end-to-end speech recognition model directly maps speech to text, which has resulted
in over a 15% improvement in cloud-based recognition accuracy. The adoption of unsupervised
learning is set to become a standard in industrial applications, leading to a marked increase in
both whole-sentence and dialect recognition rates. This advancement enhances communication
accessibility and accuracy for a wider audience, marking a significant leap forward in the field
of speech recognition and sign language interpretation.
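"Directly maps speech to text" refers to end-to-end ASR, where a single network emits per-frame label probabilities that are collapsed into a transcript without a separate acoustic/lexicon pipeline. One common formulation is CTC; the minimal greedy decoder below is an illustrative sketch only (the toy vocabulary, blank id, and logits are assumptions, not XiLing's actual model):

```python
import numpy as np

BLANK = 0  # CTC blank token id (illustrative choice)

def ctc_greedy_decode(log_probs, id_to_char):
    """Collapse per-frame argmax labels: merge repeats, then drop blanks."""
    best = np.argmax(log_probs, axis=-1)  # (T,) best label per frame
    out, prev = [], BLANK
    for label in best:
        if label != prev and label != BLANK:
            out.append(id_to_char[label])
        prev = label
    return "".join(out)

# Toy example: 6 frames over a 4-symbol vocabulary {blank, 'a', 'b', 'c'}
vocab = {1: "a", 2: "b", 3: "c"}
logits = np.full((6, 4), -5.0)
for t, label in enumerate([1, 1, 0, 2, 2, 3]):  # frames: a a _ b b c
    logits[t, label] = 0.0
print(ctc_greedy_decode(logits, vocab))  # -> "abc"
```

The collapse rule (merge repeats before dropping blanks) is what lets the model emit the same character on consecutive frames without duplicating it in the transcript.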
Its scalability allows deployment across a wide range of platforms and scenarios. It supports
online platform deployment within hours, plug-and-play offline all-in-one machines, and
integration into various digital channels such as TVs, apps, websites, and WeChat mini-
programs, offering "barrier-free deployment" so the DHH community can access sign language
services in their daily lives. The platform's cost-effectiveness makes sign language services more
affordable, reaching a wider audience than human interpreters can.
Use Case Status: During the 2022 Winter Olympics, the XiLing-powered AI sign language
anchors provided 24/7 sign language live broadcasting and play-by-play for the DHH
community, garnering over 100 million views, including both live broadcasts and video replays.
[4]
Partners: Tianjin University of Technology School for the Deaf, language linguists,
and special education experts
2.2 Benefits of the use case
Good Health and Well-being: The AI-powered sign-language solution employs the Streaming
Multi-Layer Truncated Attention (SMLTA) model for online automatic speech recognition (ASR)
and neural-network translation, refined with linguistic expertise, to achieve over 98% speech
recognition accuracy and 98.5% sign-language fidelity. By deploying AI interpreters in hospitals
and telehealth platforms, it removes barriers that 97% of deaf patients face during medical
consultations, enabling round-the-clock access to healthcare information and services.
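The streaming property of SMLTA comes from restricting attention to a truncated segment of the input rather than the whole utterance, so per-frame cost and latency stay bounded as the utterance grows. The toy mask below illustrates that bounded attention span; fixed-length chunks here stand in for SMLTA's actual spike-based truncation, so treat it as a sketch of the idea, not the model itself:

```python
import numpy as np

def truncated_attention_mask(n_frames, chunk):
    """Boolean (query, key) mask: each frame may attend only to itself and
    earlier frames within its own fixed-length chunk, never to the whole
    sentence, keeping attention cost independent of utterance length."""
    mask = np.zeros((n_frames, n_frames), dtype=bool)
    for t in range(n_frames):
        start = (t // chunk) * chunk  # first frame of this frame's chunk
        mask[t, start:t + 1] = True   # causal within the current chunk
    return mask

m = truncated_attention_mask(6, chunk=3)
# Frame 4 (second chunk) sees only frames 3..4, not the full sentence:
print(m[4].astype(int))  # -> [0 0 0 1 1 0]
```

A whole-sentence attention model would instead allow every frame to see all six frames, which is why it cannot start emitting output until the utterance ends.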
Quality Education: Adaptive Natural Language Processing (NLP) algorithms tailor sign-language
outputs to regional dialects and classroom contexts. DHH students gain real-time 3D avatar
translations of STEM (Science, Technology, Engineering, Mathematics) lectures, breaking down
language barriers and ensuring that deaf learners receive the same depth of instruction as their
hearing peers. This promotes truly inclusive and equitable learning environments.
Decent Work and Economic Growth: By integrating AI interpreters into workplaces and customer-
facing services such as banks and government offices, the platform dismantles communication
obstacles that previously limited employment and entrepreneurship opportunities for deaf
individuals [5]. Organizations report smoother workflows and broader talent pools as DHH
staff participate fully in meetings, training, and client interactions.
Reduced Inequality: The solution standardizes sign-language support across urban centers
and remote regions by training on a corpus of over 11,000 gestures and regional variations.
Whether in legal aid clinics or municipal announcements, deaf users receive uniform, high-
quality interpretation—narrowing the service gap between city and countryside.