Zero trust in AI pipelines: Securing distributed model training and inference
Oluwatosin Oladayo Aramide, Network Engineer (Network Layers and Storage), MTS IV, Ireland
World Journal of Advanced Engineering Technology and Sciences, 2025, 16(01), 194–204
Article DOI: 10.30574/wjaets.2025.16.1.1207
Received on 21 May 2025; revised on 05 July 2025; accepted on 07 July 2025
As Artificial Intelligence (AI) and machine learning (ML) become more integrated into business operations, securing AI pipelines has become essential. This paper explains how the concept of Zero Trust can be applied to provide greater security for distributed AI model training, especially where federated learning or multi-region training is involved. Zero Trust principles, which focus on identity verification, tightly controlled access to sensitive data, and continuous monitoring, can protect sensitive data and preserve the integrity of machine learning models during both training and inference. The paper also discusses data-in-motion and data-at-rest security, particularly in GPU clusters and cloud-native systems, where the risk is higher. In addition, it examines the security of AI APIs and microservices using microservices security frameworks such as gRPC, Istio, and Envoy. Finally, integrating AI threat detection and auditing into continuous integration/continuous deployment (CI/CD) pipelines is discussed as a key strategy for proactively identifying and mitigating security threats. The article sets out best practices that enterprises should adopt to strengthen their AI/ML operations.
Keywords: Zero Trust; AI Security; Federated Learning; Access Control; Data Encryption; Threat Detection
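As a concrete illustration of the data-in-motion controls the abstract refers to, the sketch below opens a gRPC channel secured with mutual TLS, so that a training worker and a model-serving or aggregation endpoint each present certificates before any gradients or model updates are exchanged. This is a minimal sketch under assumed conditions: the certificate file paths, the target address, and the make_mtls_channel helper are illustrative, not taken from the paper; only the grpc library calls themselves are standard.

```python
# Minimal sketch (assumed setup): a Python gRPC channel secured with mutual TLS,
# so a federated-learning worker proves its identity before exchanging updates.
# File paths, the target address, and the helper name are hypothetical.
import grpc


def make_mtls_channel(target: str, ca_path: str, cert_path: str, key_path: str) -> grpc.Channel:
    """Build a gRPC channel where both peers authenticate with X.509 certificates."""
    with open(ca_path, "rb") as f:
        root_ca = f.read()          # CA that signed the server's certificate
    with open(cert_path, "rb") as f:
        cert_chain = f.read()       # this worker's client certificate
    with open(key_path, "rb") as f:
        private_key = f.read()      # this worker's private key

    creds = grpc.ssl_channel_credentials(
        root_certificates=root_ca,
        private_key=private_key,
        certificate_chain=cert_chain,
    )
    return grpc.secure_channel(target, creds)


# Example usage (hypothetical endpoint and paths):
# channel = make_mtls_channel("aggregator.example.com:8443",
#                             "ca.pem", "worker-cert.pem", "worker-key.pem")
```

In an Istio/Envoy service mesh, the same mutual-TLS handshake is typically enforced by the sidecar proxies rather than in application code, which keeps the Zero Trust policy uniform across all services in the pipeline.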
Oluwatosin Oladayo Aramide. Zero trust in AI pipelines: Securing distributed model training and inference. World Journal of Advanced Engineering Technology and Sciences, 2025, 16(01), 194–204. Article DOI: https://doi.org/10.30574/wjaets.2025.16.1.1207.