Indian Institute of Technology Guwahati, India.
World Journal of Advanced Engineering Technology and Sciences, 2025, 15(01), 878-895
Article DOI: 10.30574/wjaets.2025.15.1.0315
Received on 01 March 2025; revised on 08 April 2025; accepted on 11 April 2025
As artificial intelligence increasingly permeates critical domains such as healthcare, financial services, transportation, and governance, the question of accountability has evolved from theoretical to urgent. This article examines the ethical complexities that arise when AI systems make consequential decisions affecting human lives, focusing on the challenges of assigning responsibility when these systems fail. Addressing the epistemological, normative, and material dimensions of AI accountability, the article investigates the distributed nature of responsibility across developers, users, and organizations. The discussion spans from the EU's comprehensive risk-based regulatory framework to the United States' sector-specific approach, identifying best practices for ethical AI development, including impact assessments, explainability by design, meaningful human oversight, robust testing protocols, and clear liability frameworks. The article ultimately argues for a multi-layered governance approach that balances innovation with accountability through complementary legal, technical, professional, economic, and educational mechanisms, ensuring that AI systems remain aligned with human values and subject to democratic oversight.
Accountability; Algorithmic Bias; Digital Ethics; Governance Frameworks; Technological Responsibility
Samuel Tatipamula. The ethics of AI decision-making: When should machines be accountable. World Journal of Advanced Engineering Technology and Sciences, 2025, 15(01), 878-895. Article DOI: https://doi.org/10.30574/wjaets.2025.15.1.0315.