The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct outputs. The motivation for backpropagation is to train a multi-layer neural network so that it learns the internal representations needed to represent any arbitrary mapping of inputs to outputs. See Wikipedia and Neural Network Architectures & Deep Learning.
Neuron/perceptron: the basic unit of a neural network. It accepts a set of inputs and generates a prediction. Neural networks produce their predictions as a set of real values or Boolean decisions; each output value is generated by one of the neurons in the output layer.
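As a minimal sketch, a single neuron computes a weighted sum of its inputs and passes it through an activation function. The function name and layout here are illustrative, not the actual classes used in this repository:

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// Hypothetical sketch: one neuron computes a weighted sum of its inputs
// and passes it through an activation function (here tanh).
double neuron_output(const std::vector<double>& inputs,
                     const std::vector<double>& weights)
{
    // Weighted sum; a bias term would be an extra weight with input 1.0.
    const double sum = std::inner_product(inputs.begin(), inputs.end(),
                                          weights.begin(), 0.0);
    return std::tanh(sum);
}
```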
Activation function example: the non-linear hyperbolic tangent (tanh). It is zero-centered, which makes it easier to model inputs that are strongly negative, neutral, or strongly positive. See the video Types of activation functions.
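For illustration, tanh and its derivative, which backpropagation uses to scale the error gradient (the function names here are illustrative):

```cpp
#include <cmath>

// tanh squashes any real input into (-1, 1) and is zero centered.
double tanh_activation(double x) { return std::tanh(x); }

// Derivative of tanh, used by backpropagation to scale the error:
// if y = tanh(x), then tanh'(x) = 1 - y * y.
double tanh_derivative(double x)
{
    const double y = std::tanh(x);
    return 1.0 - y * y;
}
```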
Training sets: XOR, OR, AND, AND3, NAND and more can be found as text files in the data subdirectory.
Source code: David Miller's C++ code example, also available in the src-original directory. Associated video: David Miller, Neural Net in C++ Tutorial. Goal of this project: refactoring the David Miller example code to modern C++. Still under construction ...
Building the C++ code requires a C++20-capable compiler and CMake.
Go to the build directory and type:
```
cmake ..
make -j
cppcheck --enable=all --std=c++20 --verbose .
```
Training set: a set of inputs for which the correct outputs are known, used to train the neural network.
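One possible way to represent such a training set in C++ (hypothetical names, not necessarily the types used in this repository):

```cpp
#include <vector>

// Hypothetical types for a training set: each sample pairs known inputs
// with the correct (target) outputs the network should learn.
struct TrainingSample {
    std::vector<double> inputs;   // e.g. {1.0, 0.0} for XOR
    std::vector<double> targets;  // e.g. {1.0}
};

using TrainingSet = std::vector<TrainingSample>;
```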
Training XOR, topology:
- 2 inputs
- 1 hidden layer with 5 neurons
- 1 output neuron
Empty lines and single-line comments starting with # are allowed.
The training parameters momentum and learning_rate are optional; if omitted, hard-coded default values are used. Training always stops after 1,000,000 steps (hard coded).
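A sketch of how optional parameters with hard-coded fallbacks might look. The default values shown are illustrative (the real ones live in the source); only the 1,000,000-step cap is the one stated above:

```cpp
#include <cstddef>

// Illustrative only: the actual defaults are hard coded in the source.
struct TrainingParams {
    double momentum       = 0.5;       // fallback if "momentum:" is absent
    double learning_rate  = 0.15;      // fallback if "learning_rate:" is absent
    std::size_t max_steps = 1'000'000; // training always stops here
};
```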
For every layer (except the input layer) the activation function can be selected (see the sketch after this list):
- tanh
- sigmoid
- relu
- leaky_relu
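A possible name-to-function dispatch for these keywords. This is a hypothetical sketch; the repository's actual implementation may differ:

```cpp
#include <cmath>
#include <functional>
#include <stdexcept>
#include <string>

// Hypothetical dispatch for the activation keywords listed above.
std::function<double(double)> make_activation(const std::string& name)
{
    if (name == "tanh")       return [](double x) { return std::tanh(x); };
    if (name == "sigmoid")    return [](double x) { return 1.0 / (1.0 + std::exp(-x)); };
    if (name == "relu")       return [](double x) { return x > 0.0 ? x : 0.0; };
    if (name == "leaky_relu") return [](double x) { return x > 0.0 ? x : 0.01 * x; };
    throw std::invalid_argument("unknown activation function: " + name);
}
```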
Training script example:
```
# trainingXOR.txt
momentum: 0.5
learning_rate: 0.15
topology: 2 5 1
actionfs: inputs tanh tanh
in: 0.0 0.0
out: 0.0
in: 1.0 0.0
out: 1.0
in: 0.0 1.0
out: 1.0
in: 1.0 1.0
out: 0.0
show_max_inputs: 2
show_max_outputs: 1
output_names: XOR
```
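For illustration, a minimal standalone reader for the in:/out: pairs of such a script. This is hypothetical, not the repository's actual parser; it skips empty lines and # comments, matching the format described above:

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

int main()
{
    std::ifstream file("../data/trainingXOR.txt");
    std::vector<std::pair<std::vector<double>, std::vector<double>>> samples;

    std::string line;
    std::vector<double> inputs;
    while (std::getline(file, line)) {
        if (line.empty() || line.front() == '#') continue;  // skip blanks/comments
        std::istringstream iss(line);
        std::string tag;
        iss >> tag;
        std::vector<double> values;
        for (double v; iss >> v;) values.push_back(v);
        if (tag == "in:")  inputs = values;                       // remember inputs
        if (tag == "out:") samples.emplace_back(inputs, values);  // pair with targets
    }
    std::cout << "read " << samples.size() << " training samples\n";
}
```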
Go to the bin directory and run the code for training XOR:
```
./backpropnn ../data/trainingXOR.txt
```
At the end of training, output like the following is shown:

```
Results after training:
+0.000 +0.000  ===>  XOR +0.003
+1.000 +0.000  ===>  XOR +0.984
+0.000 +1.000  ===>  XOR +0.983
+1.000 +1.000  ===>  XOR +0.020
```