Commun. Comput. Phys., 38 (2025), pp. 181-222.
Published online: 2025-07
This paper proposes a Kolmogorov high order deep neural network (K-HOrderDNN) for solving high-dimensional partial differential equations (PDEs), improving upon high order deep neural networks (HOrderDNNs). HOrderDNNs have been shown to outperform conventional DNNs on high-frequency problems by introducing a nonlinear transformation layer consisting of $(p+1)^d$ basis functions. However, the number of basis functions grows exponentially with the dimension $d$, resulting in the curse of dimensionality (CoD). Inspired by the Kolmogorov Superposition Theorem (KST), which expresses a multivariate function as a superposition of univariate functions and addition, K-HOrderDNN uses a HOrderDNN to efficiently approximate the univariate inner functions rather than approximating the multivariate function directly, reducing the number of introduced basis functions to $d(p+1)$. We theoretically demonstrate that the CoD is mitigated when the target functions belong to a dense subset of continuous multivariate functions. Extensive numerical experiments show that for high-dimensional problems ($d=10, 20, 50$) where HOrderDNNs ($p>1$) are intractable, K-HOrderDNNs ($p>1$) exhibit remarkable performance. Specifically, for $d=10$, K-HOrderDNN ($p=7$) achieves an error of $4.40\times 10^{-3}$, two orders of magnitude lower than that of HOrderDNN ($p=1$) (see Table 10); for high-frequency problems, K-HOrderDNNs ($p>1$) achieve higher accuracy with fewer parameters and faster convergence than HOrderDNNs (see Table 8).
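The parameter-count reduction the abstract describes can be made concrete with a small sketch. The snippet below is illustrative only, not the paper's implementation: it assumes the nonlinear transformation layer builds a degree-$p$ polynomial basis per input coordinate (here, Chebyshev polynomials via `numpy`, a hypothetical stand-in for the paper's basis choice), and compares the tensor-product count $(p+1)^d$ against the KST-style count $d(p+1)$.

```python
import numpy as np

def horderdnn_basis_count(p, d):
    # Tensor-product basis over d coordinates: (p+1)^d functions,
    # exponential in d -- the curse of dimensionality (CoD).
    return (p + 1) ** d

def k_horderdnn_basis_count(p, d):
    # KST-style decomposition: d univariate bases of p+1 functions each.
    return d * (p + 1)

def univariate_features(x, p):
    # Illustrative univariate transformation layer (assumed basis:
    # Chebyshev T_0..T_p per coordinate). x has shape (n, d);
    # the output has shape (n, d*(p+1)).
    feats = [np.polynomial.chebyshev.chebvander(x[:, j], p)
             for j in range(x.shape[1])]
    return np.concatenate(feats, axis=1)

# Example from the abstract: d = 10, p = 7.
print(horderdnn_basis_count(7, 10))    # 8**10 = 1073741824
print(k_horderdnn_basis_count(7, 10))  # 80
print(univariate_features(np.random.rand(4, 10), 7).shape)  # (4, 80)
```

For $d=10$ and $p=7$ the tensor-product layer would need over a billion basis functions, while the KST-style layer needs only 80, which is why the abstract reports HOrderDNNs ($p>1$) becoming intractable as $d$ grows.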