Independent Researcher, USA.
World Journal of Advanced Engineering Technology and Sciences, 2025, 15(02), 2899–2907
Article DOI: 10.30574/wjaets.2025.15.2.0884
Received on 20 April 2025; revised on 27 May 2025; accepted on 30 May 2025
This article examines innovative threading architectures optimized for network packet processing on resource-constrained edge devices. As network functions increasingly migrate to the edge, traditional threading models designed for high-performance servers often create significant performance bottlenecks when deployed on limited hardware. The article analyzes the strengths and weaknesses of three primary threading models—run-to-completion, pipeline, and parallel approaches—and proposes hybrid solutions that adaptively combine their advantages. It introduces several key innovations: dynamic thread allocation that adjusts to changing traffic patterns, cache-aware thread scheduling that maximizes locality, lock-free synchronization mechanisms that reduce contention, and workload-aware pipeline adaptation that optimizes processing paths. Implementation considerations address thread creation overhead, queue management, memory access patterns, and performance diagnostics. Empirical testing demonstrates substantial improvements in throughput, latency, CPU utilization, and performance consistency across various workloads. These optimizations enable sophisticated network functions to be deployed on existing edge infrastructure without hardware upgrades, supporting the continued expansion of distributed network architectures in resource-constrained environments.
Edge Computing; Thread Optimization; Packet Processing; Resource-Constrained Hardware; Network Function Virtualization
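As a concrete illustration of the "lock-free synchronization mechanisms" the abstract mentions, the sketch below shows a minimal single-producer/single-consumer ring buffer using C11 atomics, a common way to pass packet descriptors between pipeline-stage threads without lock contention. All names (`spsc_ring`, `spsc_push`, `spsc_pop`, `RING_CAP`) are illustrative assumptions, not identifiers from the article.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RING_CAP 8  /* power of two so index masking replaces modulo */

typedef struct {
    _Atomic size_t head;       /* next slot to read (owned by consumer) */
    _Atomic size_t tail;       /* next slot to write (owned by producer) */
    uint64_t slots[RING_CAP];  /* packet descriptors, e.g. buffer addresses */
} spsc_ring;

/* Producer side: returns false when the ring is full. */
static bool spsc_push(spsc_ring *r, uint64_t v) {
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_CAP)   /* free-running counters: full */
        return false;
    r->slots[tail & (RING_CAP - 1)] = v;
    /* release store: the slot write is visible before the new tail */
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}

/* Consumer side: returns false when the ring is empty. */
static bool spsc_pop(spsc_ring *r, uint64_t *out) {
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)              /* empty */
        return false;
    *out = r->slots[head & (RING_CAP - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}
```

Because exactly one thread writes `tail` and one writes `head`, no compare-and-swap loop is needed; a pair of acquire/release operations per packet is the entire synchronization cost, which is why such queues suit the low-end hardware the article targets.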
Thilak Raj Surendra Babu. Threading models for network packet processing: Optimizing performance on low-end hardware. World Journal of Advanced Engineering Technology and Sciences, 2025, 15(02), 2899–2907. Article DOI: https://doi.org/10.30574/wjaets.2025.15.2.0884.