Indian Institute of Technology, Kanpur, India.
World Journal of Advanced Engineering Technology and Sciences, 2025, 15(03), 882–888
Article DOI: 10.30574/wjaets.2025.15.3.0855
Received on 16 April 2025; revised on 07 June 2025; accepted on 09 June 2025
Large language models (LLMs) have transformed content moderation capabilities for messaging platforms, offering marked improvements in accuracy, efficiency, and context awareness over traditional rule-based approaches. This article presents a comprehensive integrity enforcement system, implemented for the Business Platform of an American messaging service, that leverages transformer-based LLMs to detect and mitigate policy violations in real time. The system employs a multi-layered architecture encompassing data processing, LLM analysis, decision-making, and enforcement components, all designed to balance sophisticated language understanding with practical engineering constraints. Through extensive fine-tuning, optimization, and continuous learning frameworks, the implementation achieves substantial improvements in detecting impersonation attempts, spam, and policy violations while meeting acceptable latency targets. Despite challenges related to model bias, adversarial resilience, and resource requirements, the deployment demonstrates that LLM-powered content moderation can significantly enhance platform trust and user experience when properly integrated into messaging infrastructure. The findings offer valuable insights for integrity enforcement strategies across digital communication channels facing similar scale and accuracy challenges.
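The four-layer pipeline described above (data processing, LLM analysis, decision-making, enforcement) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and a keyword heuristic stands in for the fine-tuned transformer's violation score.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str   # policy outcome, e.g. "violation" or "allow"
    score: float # model confidence in [0, 1]
    action: str  # enforcement action taken

def preprocess(text: str) -> str:
    # Data-processing layer: normalize whitespace and casing before analysis.
    return " ".join(text.split()).lower()

def llm_score(text: str) -> float:
    # LLM-analysis layer (stubbed): a real system would call a fine-tuned
    # transformer; a toy marker list stands in for the model score here.
    spam_markers = ("free money", "click here", "verified support")
    return max(0.9 if marker in text else 0.0 for marker in spam_markers)

def decide(score: float, threshold: float = 0.5) -> str:
    # Decision-making layer: map the score to a policy label.
    return "violation" if score >= threshold else "allow"

def enforce(message: str, threshold: float = 0.5) -> Verdict:
    # Enforcement layer: run the stages in order and choose an action.
    clean = preprocess(message)
    score = llm_score(clean)
    label = decide(score, threshold)
    action = "block" if label == "violation" else "deliver"
    return Verdict(label=label, score=score, action=action)
```

In a production deployment each stage would be a separate service with its own latency budget; the linear call chain here only shows how the layers compose.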
Content Moderation; Large Language Models; Integrity Enforcement; Transformer Optimization; Adversarial Resilience; Fairness; Real-Time Processing
Aniruddha Zalani. LLM-powered real-time integrity enforcement for messaging platforms. World Journal of Advanced Engineering Technology and Sciences, 2025, 15(03), 882–888. Article DOI: https://doi.org/10.30574/wjaets.2025.15.3.0855.