Comparison of PSO and BP Algos for Training Feedforward NN
Nasser Mohammadi and Seyed Javad Mirabedini, Journal of Mathematics and Computer Science 12 (2014), 113-123
It’s fun in this period of computer science to look back at earlier moments in our ML history, both to flesh out our understanding of how we chose different methods and to see other routes we might have taken. YOLO for a bit, but then like, hey, there may be some other routes worth exploring, and the sun is still out and we are feeling exploratory.
Let’s take a look at this paper, which compares PSO and BP as weight-tuning optimizers for ANNs. TL;DR: the authors pit a couple of BP variants (Levenberg-Marquardt, plus some flavors of gradient descent) against PSO for training an MLP. PSO performs better, positioning it as a viable choice for training your ANN.
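To make the PSO-trains-a-network idea concrete, here’s a minimal sketch, not the paper’s actual setup: the XOR data, network sizes, swarm size, and the inertia/cognitive/social coefficients below are all illustrative assumptions on my part. Each particle is a flattened weight vector for a tiny MLP, fitness is the network’s MSE on the data, and the standard PSO velocity/position update searches the weight space with no gradients at all.

```python
# Sketch: PSO optimizing the weights of a one-hidden-layer MLP on XOR.
# All hyperparameters here are illustrative, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic sanity check for a small MLP.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out  # total weight count

def unpack(w):
    """Slice a flat particle position back into MLP weight matrices."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:]
    return W1, b1, W2, b2

def mse(w):
    """Fitness: mean squared error of the MLP defined by position w."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return np.mean((out - y) ** 2)

# Standard gbest PSO: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n_particles, iters = 30, 500
w_inertia, c1, c2 = 0.729, 1.494, 1.494  # common constriction-style constants

pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_fit = np.array([mse(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([mse(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print(f"best MSE: {mse(gbest):.4f}")  # typically approaches 0 on XOR
```

One thing this sketch makes obvious: nothing in the loop ever differentiates the network. The fitness function is a black box, which is part of PSO’s appeal as a trainer compared to BP.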