Department of CSE (Data Science), ACE Engineering College, Telangana, India.
World Journal of Advanced Engineering Technology and Sciences, 2025, 15(02), 1362-1369
Article DOI: 10.30574/wjaets.2025.15.2.0593
Received on 31 March 2025; revised on 08 May 2025; accepted on 10 May 2025
Effective communication between deaf and hearing individuals remains a significant challenge due to fundamental differences in language modalities. GestureTalk presents a real-time, AI-driven communication system designed to bridge this gap by enabling seamless bidirectional interaction. The proposed solution leverages state-of-the-art technologies including Automatic Speech Recognition (ASR), Natural Language Processing (NLP), gesture detection using YOLO, and pose estimation with DWpose and MediaPipe. Spoken language is transcribed and translated into American Sign Language (ASL) gloss, then rendered as realistic 3D animated sign language via a virtual avatar. In the reverse direction, the system captures and interprets sign language gestures in real time, converting them into textual output for hearing users. Designed with real-time performance, high accuracy, and user accessibility in mind, GestureTalk serves as an inclusive communication interface, particularly suited for digital contexts such as video conferencing. The system offers a scalable and adaptable solution, contributing meaningfully to assistive technology and digital accessibility for the deaf and hard-of-hearing communities.
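To illustrate the speech-to-sign direction of the pipeline described above, the sketch below shows one common heuristic for converting transcribed English text into a rough ASL-style gloss: dropping function words and uppercasing the remaining content words. This is a minimal, hypothetical example for illustration only; the function name `text_to_gloss`, the `STOP_WORDS` set, and the rule set itself are assumptions, not the paper's actual NLP translation model.

```python
# Rough English-to-ASL-gloss heuristic: strip punctuation, drop a small
# set of function words, and uppercase the surviving content words.
# The rules and word list here are illustrative, not the system's model.

STOP_WORDS = {"is", "are", "am", "was", "were", "the", "a", "an", "to", "of"}

def text_to_gloss(sentence: str) -> str:
    """Convert an English sentence into a simplified ASL-style gloss."""
    tokens = [t.strip(".,!?") for t in sentence.lower().split()]
    content = [t for t in tokens if t and t not in STOP_WORDS]
    return " ".join(word.upper() for word in content)

print(text_to_gloss("The weather is nice today."))  # WEATHER NICE TODAY
```

In a full system, the gloss sequence produced by a step like this would then drive the avatar's sign animations; a production pipeline would use a learned translation model rather than fixed rules.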
Real-Time AI-Driven System; NLP; YOLO; DWpose; 3D Animated Sign Avatar; Inclusive and Scalable Solution
K Kiran Babu, Srikanth Banoth, Vijaya Lakshmi Muvvala, Mohammad Shafee and Shravan Kumar Ainala. Gesture talks: real-time sign language recognition and animation system using AI. World Journal of Advanced Engineering Technology and Sciences, 2025, 15(02), 1362-1369. Article DOI: https://doi.org/10.30574/wjaets.2025.15.2.0593.