Approximately solving high-dimensional partial differential equations (PDEs) is one of the most challenging problems in applied mathematics: most numerical approximation methods for PDEs in the scientific literature suffer from the so-called curse of dimensionality in the sense that the number of computational operations which the approximation scheme requires to achieve an approximation precision $ε > 0$ grows exponentially in the PDE dimension and/or the reciprocal of $ε.$ Recently, certain deep learning based methods for PDEs have been proposed, and numerical simulations of these methods suggest that deep artificial neural network (ANN) approximations might indeed overcome the curse of dimensionality, in the sense that the number of real parameters needed to describe the approximating deep ANNs grows at most polynomially in both the PDE dimension $d ∈ \mathbb{N}$ and the reciprocal of the prescribed approximation accuracy $ε > 0.$ A few rigorous mathematical results in the scientific literature now substantiate this conjecture by proving that deep ANNs overcome the curse of dimensionality in approximating solutions of PDEs. Each of these results establishes that deep ANNs overcome the curse of dimensionality in approximating suitable PDE solutions at a fixed time point $T > 0$ and on a compact cube $[a,b]^d$ in space, but none of them answers the question of whether the entire PDE solution on $[0,T] × [a,b]^d$ can be approximated by deep ANNs without the curse of dimensionality. Overcoming this issue is precisely the subject of this article. More specifically, the main result of this work proves, for every $a ∈ \mathbb{R}$ and $b ∈ (a,∞),$ that solutions of certain Kolmogorov PDEs can be approximated by deep ANNs on the space-time region $[0,T] × [a,b]^d$ without the curse of dimensionality.
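To make the notion of "at most polynomial growth of the parameter count" concrete, the following minimal sketch (not taken from the article; the layer widths chosen here are purely hypothetical) counts the real parameters of a fully connected feedforward ANN whose input is a space-time point $(t, x) ∈ [0,T] × [a,b]^d$, i.e. an input of dimension $d+1$, with a scalar output. If the hidden widths scale linearly with $d$, the total number of weights and biases grows only quadratically in $d$, in contrast to the exponential cost of classical grid-based schemes.

```python
def ann_parameter_count(d, hidden_widths):
    """Number of weights and biases of a fully connected feedforward ANN
    with input dimension d + 1 (one time variable plus d space variables),
    the given hidden layer widths, and a single scalar output.
    """
    dims = [d + 1] + list(hidden_widths) + [1]
    # Each layer contributes a weight matrix (dims[i] * dims[i+1] entries)
    # plus a bias vector (dims[i+1] entries).
    return sum(dims[i] * dims[i + 1] + dims[i + 1] for i in range(len(dims) - 1))

# Hypothetical architecture: two hidden layers of width 2d each.
# The parameter count works out to 6d^2 + 8d + 1, i.e. polynomial in d.
for d in (1, 10, 100, 1000):
    print(d, ann_parameter_count(d, hidden_widths=[2 * d, 2 * d]))
```

This is only an accounting illustration of the complexity measure used in such approximation results; the article's actual ANN constructions and the precise polynomial rates are established in its main theorem.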
DOI: https://doi.org/10.4208/cm.2308-m2021-0266
URL: http://global-sci.org/intro/article_detail/jcm/24266.html