Volume 6, Issue 4
A Sharp Uniform-in-Time Error Estimate for Stochastic Gradient Langevin Dynamics

Lei Li & Yuliang Wang

CSIAM Trans. Appl. Math., 6 (2025), pp. 711-759.

Published online: 2025-09

  • Abstract

Abstract. We establish a sharp uniform-in-time error estimate for stochastic gradient Langevin dynamics (SGLD), a widely used sampling algorithm. Under mild assumptions, we obtain a uniform-in-time $\mathcal{O}(\eta^2)$ bound on the Kullback-Leibler divergence between the SGLD iteration and the Langevin diffusion, where $\eta$ is the step size (or learning rate). Our analysis also remains valid for varying step sizes. Consequently, we derive an $\mathcal{O}(\eta)$ bound on the distance between the invariant measures of the SGLD iteration and the Langevin diffusion, in Wasserstein and total variation distances. Our result significantly improves upon existing analyses of SGLD in the literature.
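
For context, in generic notation not taken from the paper: given a target potential $U$ and an unbiased stochastic gradient estimate $\hat{\nabla}U$ (e.g. computed from a random minibatch), the SGLD iteration with step sizes $\eta_k$ and the Langevin diffusion it discretizes read

$$x_{k+1} = x_k - \eta_k\,\hat{\nabla}U(x_k) + \sqrt{2\eta_k}\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I_d),$$

$$\mathrm{d}X_t = -\nabla U(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t.$$

Note that a uniform-in-time $\mathcal{O}(\eta^2)$ bound on the Kullback-Leibler divergence is consistent with the stated $\mathcal{O}(\eta)$ bound in total variation: by Pinsker's inequality, $\mathrm{TV} \le \sqrt{\mathrm{KL}/2}$.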

  • AMS Subject Headings

65C20, 68Q25, 60H30

  • Copyright

COPYRIGHT: © Global Science Press

  • Keywords

Random batch, Euler-Maruyama scheme, Fokker-Planck equation, log-Sobolev inequality

  • DOI

https://doi.org/10.4208/csiam-am.SO-2024-0039