TY  - JOUR
T1  - Convergence of a Generalized Primal-Dual Algorithm with an Improved Condition for Saddle Point Problems
AU  - Jiang, Fan
AU  - Luo, Yueying
AU  - Cai, Xingju
AU  - Wang, Tanxing
JO  - Numerical Mathematics: Theory, Methods and Applications
VL  - 2
SP  - 463
EP  - 486
PY  - 2025
DA  - 2025/05
SN  - 18
DO  - https://doi.org/10.4208/nmtma.OA-2024-0105
UR  - https://global-sci.org/intro/article_detail/nmtma/24072.html
KW  - Generalized primal-dual algorithm, saddle point problem, convex programming, convergence rate
AB  - 
We consider a general convex-concave saddle point problem that frequently arises in large-scale image processing. First-order primal-dual algorithms have garnered significant attention due to their promising results in solving saddle point problems. Notably, these algorithms exhibit improved performance with larger step sizes, and a recent series of articles has successively enlarged the upper bound on the step sizes, thereby relaxing the condition that guarantees convergence. This paper analyzes the generalized primal-dual method proposed in [B. He, F. Ma, S. Xu, X. Yuan, SIAM J. Imaging Sci. 15 (2022)] and introduces an improved condition to ensure its convergence. This enhanced condition also encompasses the optimal upper bound on step sizes in the primal-dual hybrid gradient method. Under the enhanced condition, we establish both the global convergence of the iterates and the ergodic $\mathcal{O}(1/N)$ convergence rate for the objective function value of the generalized primal-dual algorithm.
ER  - 