Prashant Anand Srivastava, Senior Software Engineer at Amazon Lab126, CA.
World Journal of Advanced Engineering Technology and Sciences, 2025, 15(01), 1145-1152
Article DOI: 10.30574/wjaets.2025.15.1.0311
Received on 03 March 2025; revised on 08 April 2025; accepted on 11 April 2025
Vision-Language Models (VLMs) promise to bridge visual perception and natural language for truly intuitive robotic interaction, yet their real-world robustness remains underexplored. In this paper, we quantitatively evaluate state-of-the-art VLM performance: VLM-RT achieves 96.8% reasoning accuracy at 18.2 FPS, yet it degrades sharply under variable lighting (94.3% → 37.8% accuracy) and exhibits a 48.4-point recognition gap between Western and East Asian objects. We present a concise failure-mode analysis that links these deficits to core root causes (environmental variability, distributional bias, multimodal misalignment) and maps each to practical mitigation strategies. Building on this foundation, we propose a prioritized research roadmap spanning human-in-the-loop systems, continual learning, and embodied intelligence, and we define standardized metrics for fairness, privacy containment, and safety verification. Together, these contributions offer actionable benchmarks to guide the development of robust, trustworthy VLM-powered robots.
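The recognition gap quoted above can be read as a simple difference in per-group accuracy. The sketch below is a minimal, hypothetical illustration of such a fairness-style metric; the function and variable names (group_accuracy, recognition_gap, predictions, groups) are ours for illustration and do not appear in the paper.

```python
# Hypothetical sketch: a per-group accuracy metric and the resulting recognition gap,
# in the spirit of the 48.4-point Western vs. East Asian gap reported in the abstract.
from collections import defaultdict
from typing import Dict, Sequence


def group_accuracy(predictions: Sequence[str],
                   labels: Sequence[str],
                   groups: Sequence[str]) -> Dict[str, float]:
    """Return recognition accuracy per object group (e.g., cultural origin)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}


def recognition_gap(acc_by_group: Dict[str, float]) -> float:
    """Gap in percentage points between the best- and worst-recognized groups."""
    values = sorted(acc_by_group.values())
    return 100.0 * (values[-1] - values[0])


if __name__ == "__main__":
    # Toy example with two groups and deliberately unequal recognition rates.
    preds  = ["cup", "fork", "wok", "bowl", "chopsticks", "teapot"]
    labels = ["cup", "fork", "pan", "bowl", "spoon",      "kettle"]
    groups = ["western", "western", "east_asian", "east_asian",
              "east_asian", "east_asian"]
    acc = group_accuracy(preds, labels, groups)
    print(acc)                   # e.g. {'western': 1.0, 'east_asian': 0.25}
    print(recognition_gap(acc))  # 75.0 percentage points in this toy case
```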
Multimodal Representation; Zero-Shot Generalization; Embodied Cognition; Distributional Bias; Human-Robot Collaboration
Prashant Anand Srivastava. Innovations in visual language models for robotic interaction and contextual awareness: Progress, pitfalls and perspectives. World Journal of Advanced Engineering Technology and Sciences, 2025, 15(01), 1145-1152. Article DOI: https://doi.org/10.30574/wjaets.2025.15.1.0311.