# Neural Network Trainer application that uses the matrix algebra based RProp algorithm

Dear Reader, I decided to publish an old version of the RProp algorithm that I wrote back in 2003, which demonstrates how the algorithm works when modified to use matrix algebra. The demo, both source files and binaries, can be downloaded from here: https://github.com/bulyaki/NeuralNetworkTrainer Originally this version was lacking a couple of libraries and I could not […]

# The matrix form of the RProp algorithm

Since the RProp algorithm uses if/else conditional statements to determine the update values, some special helper matrix functions and helper matrices must be introduced. These functions allow us to express the conditional statements more elegantly, using only matrix operations. Some of the matrices needed to make the above functions work: D: decision matrix, M: meta-gradient matrix, U: update […]
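The idea of replacing RProp's if/else branches with matrix operations can be sketched as follows. The names D (decision matrix) and U (update matrix) follow the excerpt above, but their exact definitions here are my own assumption, not necessarily the ones used in the post:

```python
import numpy as np

def rprop_matrix_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                      step_min=1e-6, step_max=50.0):
    """One RProp step written with matrix operations only (illustrative sketch).

    grad, prev_grad, step are arrays of the same shape as the weight matrix.
    The eta/step constants are the commonly cited RProp defaults, assumed here.
    """
    # D > 0 where the gradient kept its sign, D < 0 where it flipped --
    # this plays the role of the if/else condition, but elementwise.
    D = np.sign(grad * prev_grad)
    # Grow the step where D > 0, shrink it where D < 0, keep it where D == 0.
    factor = np.where(D > 0, eta_plus, np.where(D < 0, eta_minus, 1.0))
    step = np.clip(step * factor, step_min, step_max)
    # U: the weight update, pointing against the gradient's sign only.
    U = -np.sign(grad) * step
    return U, step

# Example: one weight kept its gradient sign, one flipped, one went to zero.
g_prev = np.array([0.3, -0.2, 0.1])
g_now = np.array([0.4, 0.2, 0.0])
U, step = rprop_matrix_step(g_now, g_prev, np.full(3, 0.1))
```

The single `np.where` cascade stands in for the three-way if/elif/else of the scalar algorithm, so the whole layer updates in a handful of vectorized operations.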

# The matrix form of the Backpropagation algorithm

In a multi-layered neural network, weights and neural connections can be treated as matrices: the neurons of one layer form the columns, and the neurons of the other layer form the rows of the matrix. The figure below shows a network and its parameter matrices. The meanings of the vectors and matrices above: ninl […]

# The Resilient Propagation (RProp) algorithm

The RProp algorithm is a supervised learning method for training multi-layered neural networks, first published in 1994 by Martin Riedmiller. The idea behind it is that the magnitudes of the partial derivatives can have harmful effects on the weight updates. It implements an internal adaptive algorithm that focuses only on the signs of the […]
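The sign-only adaptation can be sketched per weight as below. This is a minimal illustration of the widely published update rule, not code from the trainer itself; the constants are the commonly cited defaults:

```python
def rprop_update(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One per-weight RProp step (illustrative sketch of the published rule)."""
    if grad * prev_grad > 0:
        # Gradient kept its sign: we are on a stable slope, accelerate.
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:
        # Sign flipped: we jumped over a minimum, shrink the step.
        step = max(step * eta_minus, step_min)
    # Move against the gradient's sign; its magnitude is ignored entirely.
    delta = -step if grad > 0 else (step if grad < 0 else 0.0)
    return delta, step
```

Because only the sign of the derivative enters the update, a huge or tiny gradient magnitude cannot blow up or stall the step size, which is exactly the danger the algorithm was designed to avoid.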

# Common extensions to Backpropagation

Preconditioning weights: The outcome and speed of a learning process are influenced by the initial state of the network. However, it is impossible to tell in advance which initial condition will be ideal. The commonly accepted way is initializing the weights with uniformly distributed random numbers on the (0,1) interval. Preconditioning data: Very often the training set […]
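Both preconditioning steps are short in code. The weight initialization follows the (0,1) uniform scheme described above; the data step shown here, rescaling each input feature to zero mean and unit variance, is one common choice and my own assumption about what the truncated excerpt goes on to describe:

```python
import numpy as np

rng = np.random.default_rng(42)

# Preconditioning weights: uniformly distributed random values on (0, 1).
W = rng.uniform(0.0, 1.0, size=(4, 3))

# Preconditioning data (assumed variant): standardize each feature column.
def standardize(X):
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

# Toy training set with features on very different scales.
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])
Xs = standardize(X)
```

After standardization every feature contributes on a comparable scale, so no single input dominates the weighted sums during early training.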

# Creating COM servers with CMake

I recently came across a problem where I needed to use CMake to create a Visual Studio ATL COM DLL project using the MIDL compiler. I haven't found an example anywhere I looked that provides a fully working solution, only bits and pieces here and there. So here I am posting a fully working […]

# The Backpropagation algorithm

If there are multiple layers in a neural network, the inner layers have neither target values nor errors of their own. This problem remained unsolved until the 1970s, when mathematicians found the backpropagation algorithm to be applicable to this particular problem. Backpropagation provides a way to train neural networks with any number of hidden layers. The neurons don't […]
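The key step, giving a hidden layer an error term even though it has no target, can be sketched for a tiny two-layer network. The network shape, sigmoid activation, and squared-error loss are illustrative assumptions, not taken from the post:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
x = np.array([0.2, 0.8])                 # input
W1 = rng.standard_normal((3, 2))         # input -> hidden weights
W2 = rng.standard_normal((1, 3))         # hidden -> output weights

h = sigmoid(W1 @ x)                      # hidden activations
y = sigmoid(W2 @ h)                      # output
t = np.array([1.0])                      # a target exists only for the output

# Output layer: error comes directly from the target.
delta_out = (y - t) * y * (1 - y)
# Hidden layer: no target, so its error is the output error propagated
# backwards through the weights -- the heart of backpropagation.
delta_hid = (W2.T @ delta_out) * h * (1 - h)

# Gradients for a gradient-descent step on each weight matrix.
grad_W2 = np.outer(delta_out, h)
grad_W1 = np.outer(delta_hid, x)
```

The same backward product `W.T @ delta` repeats once per layer, which is why the method extends to any number of hidden layers.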

# Feedforward neural networks

I am only covering feedforward neural networks in this project. General features of such networks are: neurons are grouped into one input layer, one or more hidden layers and an output layer; neurons do not connect to each other within the same layer; each neuron of a hidden layer connects to all the neurons of […]

# A brief introduction to artificial neural networks

This post will try to give you a brief introduction to artificial neural networks, or at least to some types of them. I will skip the introduction to biological neural networks, as I am neither a biologist nor a doctor, and I prefer not to write about what I do not fully understand. Overview of artificial […]

# Intro

Dear Reader, My aim with this blog is to publish some of the work I did in the field of Artificial Neural Networks. Between 2001 and 2004 I developed a Neural Network trainer software that was able to distribute its workload among multiple networked workstations. I think the idea deserves to be preserved for future reference. In the […]