Enhancing the transparency of data and ML models using explainable AI (XAI)
Department of Information Systems and Operations Management, The University of Texas at Arlington, Arlington, Texas, United States of America.
Research Article
World Journal of Advanced Engineering Technology and Sciences, 2024, 13(01), 397–406.
Article DOI: 10.30574/wjaets.2024.13.1.0428
Publication history:
Received on 07 August 2024; revised on 18 September 2024; accepted on 20 September 2024
Abstract:
This paper addresses the growing demand for explainability of Machine Learning (ML) models, particularly in settings where they inform critical decisions, such as healthcare, finance, and law. While typical ML models are often opaque, Explainable AI (XAI) provides a set of methods for making these models more transparent and therefore easier to interpret. The paper describes and analyzes model-agnostic approaches, intrinsically interpretable models, post-hoc explanation methods, and visualization tools, and demonstrates the application of XAI across various fields. It also discusses the need to balance accuracy and interpretability in order to build responsible and ethical AI.
Keywords:
Explainable AI (XAI); Model Transparency; Machine Learning Interpretability; Data-driven Decision-Making; AI Ethics; Model-Agnostic Techniques
Copyright information:
Copyright © 2024. The Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.