Advancing human pose estimation with transformer models: An experimental approach
Los Angeles, California, United States of America.
Review
World Journal of Advanced Engineering Technology and Sciences, 2024, 12(02), 047–052
Article DOI: 10.30574/wjaets.2024.12.2.0261
Publication history:
Received on 16 May 2024; revised on 29 June 2024; accepted on 02 July 2024
Abstract:
This paper explores the integration of Transformer architectures into human pose estimation, a critical task in computer vision that involves detecting human figures and predicting their poses by identifying body joint positions. With applications ranging from enhancing interactive gaming experiences to advancing biomechanical analyses, human pose estimation demands high accuracy and flexibility, particularly in dynamic and partially occluded scenes. This study hypothesizes that Transformers, renowned for their ability to model long-range dependencies and attend to the most relevant parts of the input through self-attention mechanisms, can significantly outperform existing deep learning methods such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). We introduce the PoseTransformer, a hybrid model that combines the precise feature extraction capabilities of CNNs with the global contextual awareness of Transformers, aiming to set new standards for accuracy and adaptability in pose estimation tasks. The model's effectiveness is demonstrated through rigorous testing on benchmark datasets, showing substantial improvements over traditional approaches, especially in complex scenarios.
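The self-attention mechanism referenced above can be illustrated with a minimal NumPy sketch. This is not the authors' PoseTransformer implementation; it is a generic scaled dot-product self-attention over per-joint feature tokens, with all names and shapes chosen purely for illustration, showing how each joint representation can aggregate context from every other joint in a single step.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over joint tokens.

    x: (num_joints, d) feature vectors, e.g. one token per body joint.
    Wq, Wk, Wv: (d, d) learned projection matrices (random here).
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # pairwise joint-to-joint affinities
    # Numerically stable softmax over the joint axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # each output token mixes information from all joints
```

In a hybrid CNN-Transformer pipeline of the kind the abstract describes, `x` would typically come from a CNN backbone's feature maps, and blocks like this would be stacked with feed-forward layers and residual connections.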
Keywords:
Transformer architectures; Human pose estimation; Self-attention mechanisms; PoseTransformer; Convolutional Neural Networks (CNNs); Benchmark datasets
Copyright information:
Copyright © 2024. The author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.