World Journal of Advanced Engineering Technology and Sciences
ISSN: 2582-8266 (Online)

Consistency models for distributed deep learning: Tradeoffs between convergence and communication


Anjan Kumar Dash *

Maulana Azad National Institute of Technology, India.

Review Article

World Journal of Advanced Engineering Technology and Sciences, 2025, 15(03), 436–445

Article DOI: 10.30574/wjaets.2025.15.3.0892

DOI URL: https://doi.org/10.30574/wjaets.2025.15.3.0892

Received on 21 April 2025; revised on 29 May 2025; accepted on 01 June 2025

Abstract

Ensuring model convergence in distributed deep learning systems often entails communication that is not strictly necessary. This article examines strong consistency, eventual consistency, and bounded staleness, covering both their theoretical foundations and their application across machine learning workloads. Experiments show that relaxed consistency models substantially reduce the amount of communication required, although they make outcomes more variable and can prolong training. The article describes a dynamic framework that adapts consistency requirements to the current training phase and observed gradient behavior, balancing efficiency with dependability. CNN and transformer models are compared, with CNNs tolerating relaxed consistency better. Compared with static approaches, the framework combines gradient-based adaptation, phase-based consistency switching, topology-aware communication, and automatic tuning of the staleness bound to improve distributed training on large datasets.

Keywords: Distributed Deep Learning; Consistency Models; Communication Efficiency; Adaptive Framework; Parameter Staleness
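To make the bounded-staleness idea from the abstract concrete, the sketch below shows one way a stale-synchronous scheme with an adaptively tuned staleness bound could be expressed in code. It is a minimal illustration under assumed names and thresholds (the BoundedStalenessServer class, the adapt_bound method, and the variance cutoffs are all invented for this sketch), not the implementation described in the paper; see the full text PDF below for the authors' actual framework.

```python
import numpy as np

class BoundedStalenessServer:
    """Toy parameter server enforcing a staleness bound (stale-synchronous parallel)."""

    def __init__(self, dim, num_workers, staleness_bound=4):
        self.params = np.zeros(dim)          # shared model parameters
        self.clocks = [0] * num_workers      # per-worker iteration counters
        self.staleness_bound = staleness_bound

    def can_proceed(self, worker_id):
        # A worker may run ahead of the slowest worker by at most the bound.
        # bound = 0 degenerates to fully synchronous (strong consistency);
        # removing the check entirely gives eventual consistency.
        return self.clocks[worker_id] - min(self.clocks) <= self.staleness_bound

    def push(self, worker_id, gradient, lr=0.01):
        self.params -= lr * gradient         # apply a possibly stale update
        self.clocks[worker_id] += 1

    def pull(self):
        return self.params.copy()

    def adapt_bound(self, grad_variance, low=1e-3, high=1e-1):
        # Illustrative auto-tuning: tighten the bound when gradients are
        # volatile, relax it when they are stable (thresholds are made up).
        if grad_variance > high:
            self.staleness_bound = max(0, self.staleness_bound - 1)
        elif grad_variance < low:
            self.staleness_bound += 1

# Minimal round-robin driver standing in for real distributed workers.
server = BoundedStalenessServer(dim=10, num_workers=4)
rng = np.random.default_rng(0)
for step in range(100):
    wid = step % 4
    if server.can_proceed(wid):              # a blocked worker would wait here
        grad = rng.normal(size=10)           # stand-in for a computed gradient
        server.push(wid, grad)
        server.adapt_bound(grad.var())
```

Setting the bound to zero recovers fully synchronous training, while removing the check altogether yields eventual consistency; the staleness bound is therefore a tunable middle ground between the two regimes, which is what makes it a natural target for the kind of automatic adaptation the abstract describes.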

Full text PDF: https://wjaets.com/sites/default/files/fulltext_pdf/WJAETS-2025-0892.pdf

Anjan Kumar Dash. Consistency models for distributed deep learning: Tradeoffs between convergence and communication. World Journal of Advanced Engineering Technology and Sciences, 2025, 15(03), 436–445. Article DOI: https://doi.org/10.30574/wjaets.2025.15.3.0892.



Copyright © Author(s). All rights reserved. This article is published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and source, a link to the license is provided, and any changes made are indicated.

