The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve generalization performance and to decrease the magnitude of the network weights. In this paper, weight boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for a BP neural network with one hidden layer, under the assumption that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with penalty during the training iterations is also guaranteed. Simulation results for a 3-bit parity problem are presented to support our theoretical results.
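To make the training procedure concrete, the following is a minimal sketch of one plausible instantiation of the method the abstract describes: online (per-sample) gradient descent with an L2 weight penalty on a one-hidden-layer sigmoid network, applied to the 3-bit parity problem with a fixed sample order within each epoch. The learning rate, penalty coefficient, hidden-layer size, and the use of unpenalized bias terms are illustrative assumptions, not values taken from the paper.

```python
# Sketch: online gradient method with an L2 penalty for a one-hidden-layer
# network on 3-bit parity. Hyperparameters are assumed, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# 3-bit parity: 8 binary inputs, target 1 iff the number of ones is odd.
X = np.array([[(i >> b) & 1 for b in range(3)] for i in range(8)], dtype=float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(3, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)

eta, lam = 0.5, 1e-4   # learning rate and penalty coefficient (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # Online updates: samples presented in a fixed order within each epoch,
    # matching the assumption in the paper's convergence theorems.
    for x, t in zip(X, y):
        x = x.reshape(1, -1)
        h = sigmoid(x @ W1 + b1)      # hidden activations
        out = sigmoid(h @ W2 + b2)    # network output

        # Gradients of the per-sample error 0.5*(out - t)^2 plus the
        # penalty 0.5*lam*(||W1||^2 + ||W2||^2) (biases left unpenalized).
        d_out = (out - t) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        W2 -= eta * (h.T @ d_out + lam * W2)
        b2 -= eta * d_out.ravel()
        W1 -= eta * (x.T @ d_h + lam * W1)
        b1 -= eta * d_h.ravel()

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred.ravel()), y.ravel())
```

The penalty term adds `lam * W` to each weight gradient, which is what keeps the weight magnitudes bounded during training; without it, the weights of a network trained on parity-type problems can grow without bound.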