World Journal of Advanced Engineering Technology and Sciences
ISSN: 2582-8266 (Online)

Hybrid models combining explainable AI and traditional machine learning: A review of methods and applications


Ranjith Gopalan 1, Dileesh Onniyil 2, Ganesh Viswanathan 3, * and Gaurav Samdani 3

1 Principal Consultant, Cognizant Technologies Corp, Charlotte, NC, United States.

2 Director of Software Engineering, Lytx, Inc., Charlotte, NC, United States.

3 Department of Data Science and Business Analytics, UNC Charlotte, Charlotte, NC, United States.

Review Article

World Journal of Advanced Engineering Technology and Sciences, 2025, 15(02), 1388-1402

Article DOI: 10.30574/wjaets.2025.15.2.0635

DOI url: https://doi.org/10.30574/wjaets.2025.15.2.0635

Received on 28 March 2025; revised on 05 May 2025; accepted on 07 May 2025

The rapid advancements in artificial intelligence and machine learning have led to the development of highly sophisticated models capable of superhuman performance in a variety of tasks. However, the increasing complexity of these models has also resulted in them becoming "black boxes", where the internal decision-making process is opaque and difficult to interpret. This lack of transparency and explainability has become a significant barrier to the widespread adoption of these models, particularly in sensitive domains such as healthcare and finance. 

To address this challenge, the field of Explainable AI has emerged, focusing on developing new methods and techniques to improve the interpretability and explainability of machine learning models. This review paper aims to provide a comprehensive overview of the research exploring the combination of Explainable AI and traditional machine learning approaches, known as "hybrid models". 

This paper discusses the importance of explainability in AI and the necessity of combining interpretable machine learning models with black-box models to achieve the desired trade-off between accuracy and interpretability. It provides an overview of key methods and applications, integration techniques, implementation frameworks, evaluation metrics, and recent developments in the field of hybrid AI models.
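To make the accuracy/interpretability trade-off concrete, the sketch below implements a LIME-style local surrogate: a black-box model is perturbed around one instance, and a locally weighted linear model is fitted to its outputs so that the slopes act as feature attributions. This is an illustrative assumption, not code from the paper; the `black_box` function is a hypothetical stand-in for any trained opaque model.

```python
import numpy as np

# Hypothetical black-box model: a nonlinear function standing in for a
# trained neural network or ensemble (illustrative assumption).
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def lime_style_explanation(f, x0, n_samples=500, width=0.1, seed=0):
    """Fit a locally weighted linear surrogate around x0 (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance of interest with Gaussian noise.
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = f(X)
    # 2. Weight samples by proximity to x0 (RBF kernel).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * width ** 2))
    # 3. Solve weighted least squares on a [1, x] design matrix.
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # local feature attributions (slopes)

x0 = np.array([1.0, 1.0])
attributions = lime_style_explanation(black_box, x0)
# Near x0 the true local gradient is (2 * x0[0], 3) = (2, 3), so the
# fitted slopes should be close to those values.
```

The surrogate's slopes recover the black box's local behavior, which is the core idea behind post-hoc explainers such as LIME; SHAP pursues the same goal with game-theoretic attributions instead of a single weighted linear fit.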

The paper also delves into the challenges and limitations of implementing hybrid explainable AI systems, as well as future trends in the integration of explainable AI and traditional machine learning. Altogether, this paper serves as a reference for researchers and practitioners developing explainable and interpretable AI systems.

Keywords: Explainable AI (XAI), Traditional Machine Learning (ML), Hybrid Models, Interpretability, Transparency, Predictive Accuracy, Neural Networks, Ensemble Methods, Decision Trees, Linear Regression, SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), Healthcare Analytics, Financial Risk Management, Autonomous Systems, Predictive Maintenance, Quality Control, Integration Techniques, Evaluation Metrics, Regulatory Compliance, Ethical Considerations, User Trust, Data Quality, Model Complexity, Future Trends, Emerging Technologies, Attention Mechanisms, Transformer Models, Reinforcement Learning, Data Visualization, Interactive Interfaces, Modular Architectures, Ensemble Learning, Post-Hoc Explainability, Intrinsic Explainability, Combined Models


https://wjaets.com/sites/default/files/fulltext_pdf/WJAETS-2025-0635.pdf


Ranjith Gopalan, Dileesh Onniyil, Ganesh Viswanathan and Gaurav Samdani. Hybrid models combining explainable AI and traditional machine learning: A review of methods and applications. World Journal of Advanced Engineering Technology and Sciences, 2025, 15(02), 1388-1402. Article DOI: https://doi.org/10.30574/wjaets.2025.15.2.0635.



Copyright © Author(s). All rights reserved. This article is published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and source, a link to the license is provided, and any changes made are indicated.

