High-Probability Convergence in Decentralized Stochastic Optimization with Gradient Tracking

Published in arXiv preprint, 2026

Recommended citation: Armacki, A., Cai, H., & Sayed, A. H. (2026). High-Probability Convergence in Decentralized Stochastic Optimization with Gradient Tracking. arXiv preprint arXiv:2605.00281. https://arxiv.org/abs/2605.00281

TLDR: We establish high-probability convergence guarantees for decentralized stochastic gradient descent (DSGD) with the gradient tracking mechanism under a relaxed sub-Gaussian noise condition, achieving optimal rates with linear speed-up for both non-convex and Polyak-Łojasiewicz (PL) costs, while relaxing several strong assumptions used in prior work.
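
To make the gradient tracking mechanism concrete, here is a minimal sketch of DSGD with gradient tracking on a ring network. The quadratic local costs, Metropolis-style mixing weights, step size, and Gaussian gradient noise are illustrative assumptions, not the paper's exact setup or noise model; each agent holds an iterate `x[i]` and a tracker `y[i]` that estimates the network-average gradient.

```python
# Illustrative sketch of DSGD with gradient tracking (assumed setup:
# quadratic local costs f_i(x) = 0.5 * ||A_i x - b_i||^2, ring topology).
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, alpha, n_iters, noise_std = 8, 5, 0.05, 500, 0.1

# Hypothetical local data defining each agent's cost (illustration only).
A = rng.standard_normal((n_agents, dim, dim))
b = rng.standard_normal((n_agents, dim))

def stoch_grad(i, x):
    """Stochastic gradient of agent i's cost, with additive Gaussian noise."""
    return A[i].T @ (A[i] @ x - b[i]) + noise_std * rng.standard_normal(dim)

# Doubly stochastic mixing matrix for a ring: each agent averages
# equally with itself and its two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1 / 3

x = rng.standard_normal((n_agents, dim))                  # local iterates
g = np.array([stoch_grad(i, x[i]) for i in range(n_agents)])
y = g.copy()                                              # trackers, y^0 = g^0

for t in range(n_iters):
    x = W @ (x - alpha * y)                               # consensus + descent
    g_new = np.array([stoch_grad(i, x[i]) for i in range(n_agents)])
    y = W @ y + g_new - g                                 # track average gradient
    g = g_new

print("network disagreement:", np.linalg.norm(x - x.mean(axis=0)))
```

The tracker update `y = W @ y + g_new - g` is what distinguishes gradient tracking from plain DSGD: it lets each agent follow the average of all local gradients rather than only its own, correcting for heterogeneity across agents' costs.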