AI for Good Innovate for Impact
5 References
[1] Q. Zhu, J. Li, F. Yuan, and Q. Gan, “Continuous sign language recognition based on motor
attention mechanism and frame-level self-distillation,” Machine Vision and Applications,
vol. 36, no. 1, pp. 1–12, 2025. doi: 10.1007/s00138-024-01371-5.
[2] Z. Wang, D. Li, R. Jiang, and M. Okumura, “Continuous sign language recognition with multi-scale spatial-temporal feature enhancement,” IEEE Access, 2025. doi: 10.1109/ACCESS.2025.1234567.
[3] K. Hirooka, A. S. M. Miah, T. Murakami, Y. Akiba, Y. S. Hwang, and J. Shin, “Stack transformer based spatial-temporal attention model for dynamic multi-culture sign language recognition,” arXiv preprint, 2025. eprint: arXiv:2503.16855. [Online]. Available: https://arxiv.org/abs/2503.16855.
[4] V. K. Tanwar, G. Sharma, B. Raman, and R. Bhargava, “P2SLR: A privacy-preserving sign language recognition as-a-cloud service using deep learning for encrypted gestures,” Feb. 2022. doi: 10.36227/techrxiv.19064063.v1. [Online]. Available: http://dx.doi.org/10.36227/techrxiv.19064063.v1.
[5] S. Alyami and H. Luqman, “CLIP-SLA: Parameter-efficient CLIP adaptation for continuous sign language recognition,” arXiv preprint, 2025. eprint: arXiv:2504.01666. [Online]. Available: https://arxiv.org/abs/2504.01666.
[6]