J. Comp. Math., 43 (2025), pp. 1045-1062.
Published online: 2025-09
Stochastic gradient descent (SGD) methods have gained widespread popularity for solving large-scale optimization problems. However, the inherent variance in SGD often leads to slow convergence rates. We introduce a family of unbiased stochastic gradient estimators that encompasses existing estimators from the literature and identify a gradient estimator that not only maintains unbiasedness but also achieves minimal variance. Compared with the existing estimator used in SGD algorithms, the proposed estimator demonstrates a significant reduction in variance. By utilizing this stochastic gradient estimator to approximate the full gradient, we propose two mini-batch stochastic conjugate gradient algorithms with minimal variance. Under the assumptions of strong convexity and smoothness on the objective function, we prove that the two algorithms achieve linear convergence rates. Numerical experiments validate the effectiveness of the proposed gradient estimator in reducing variance and demonstrate that the two stochastic conjugate gradient algorithms exhibit accelerated convergence rates and enhanced stability.
ISSN: 1991-7139
DOI: https://doi.org/10.4208/jcm.2505-m2025-0004
URL: http://global-sci.org/intro/article_detail/jcm/24470.html
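The abstract does not spell out the proposed minimal-variance estimator or the exact conjugate gradient update, so the sketch below only illustrates the general idea: a mini-batch stochastic conjugate gradient loop driven by a control-variate (SVRG-style) gradient estimator, which is unbiased and has lower variance near the snapshot than the plain mini-batch gradient. Everything here is an assumption for illustration, not the paper's algorithm: the function names (`svrg_style_estimator`, `stochastic_cg`), the Fletcher-Reeves coefficient, and the step and batch sizes are all hypothetical choices.

```python
import numpy as np

def svrg_style_estimator(grad_i, x, x_snap, full_grad_snap, batch):
    # Control-variate estimator (SVRG-style): unbiased for the full gradient at x,
    # with variance reduced relative to the plain mini-batch gradient near the snapshot.
    g = np.mean([grad_i(x, i) for i in batch], axis=0)
    g_snap = np.mean([grad_i(x_snap, i) for i in batch], axis=0)
    return g - g_snap + full_grad_snap

def stochastic_cg(grad_i, full_grad, x0, n, batch_size=32, step=0.05,
                  epochs=20, seed=0):
    # Mini-batch stochastic conjugate gradient driven by the estimator above.
    # A Fletcher-Reeves coefficient is used here purely for illustration;
    # the paper's precise direction update may differ.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        x_snap = x.copy()
        mu = full_grad(x_snap)               # full gradient at the snapshot
        g_prev, d = None, None
        for _ in range(max(1, n // batch_size)):
            batch = rng.choice(n, size=min(batch_size, n), replace=False)
            g = svrg_style_estimator(grad_i, x, x_snap, mu, batch)
            if d is None:
                d = -g                        # first step: steepest descent
            else:
                beta = (g @ g) / (g_prev @ g_prev)
                d = -g + beta * d             # conjugate direction update
            x = x + step * d
            g_prev = g
    return x

# Hypothetical usage: least squares, f(x) = (1/2n) * sum_i (a_i.x - b_i)^2
A = np.random.default_rng(1).standard_normal((200, 10))
b = A @ np.ones(10)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
full_grad = lambda x: A.T @ (A @ x - b) / len(b)
x_hat = stochastic_cg(grad_i, full_grad, np.zeros(10), n=len(b))
```

The control-variate term `g_snap - full_grad_snap` has zero mean, so the estimator stays unbiased, while its correlation with the mini-batch gradient shrinks the variance as the iterate approaches the snapshot; this is the generic mechanism behind the variance reduction the abstract describes, not the paper's specific estimator.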