A Brief Summary of Neural Network Types

I hope to expand on this article quite a bit in the future, and provide code samples and diagrams as well. The networks listed here are the neural network types that will be introduced with Encog 2.0. Encog makes use of simple building blocks for neural networks, so other types are likely supported as well. However, these are the ones for which Encog provides wizards to create the network automatically for you.

Feed Forward Neural Network - A simple neural network type where synapses connect an input layer to zero or more hidden layers, and finally to an output layer. The feedforward neural network is one of the most common types of neural network in use, and it is suitable for many types of problems. Feedforward neural networks are often trained with simulated annealing, genetic algorithms, or one of the propagation techniques.
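Until I can add proper Encog code samples, here is a rough sketch of the forward pass through such a network. The weights are made up purely for illustration (a real network would learn them during training), and a sigmoid is assumed as the activation function:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    # Each row of w_hidden holds the weights feeding one hidden neuron;
    # the hidden outputs then feed the output layer the same way.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_output]

# Illustrative weights only: 2 inputs, 2 hidden neurons, 1 output neuron.
out = forward([1.0, 0.5],
              [[0.2, -0.4], [0.7, 0.1]],
              [[0.5, -0.3]])
```

The signal flows strictly forward, which is exactly what distinguishes this type from the recurrent networks described further down.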

Self Organizing Map (SOM) - A neural network that contains two layers and implements a winner-take-all strategy in the output layer. Rather than taking the output of individual neurons, the neuron with the highest output is considered the winner. SOMs are typically used for classification, where the output neurons represent groups into which the input data is to be classified. SOMs are usually trained with a competitive learning strategy.
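The winner-take-all idea and one competitive learning step can be sketched in a few lines. The weight values and learning rate below are made up for illustration, and distance (rather than raw output) is used to pick the winner, which is a common formulation:

```python
# Hypothetical 2-input, 3-output SOM: one weight row per output neuron.
weights = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]

def winner(sample):
    # Winner-take-all: the neuron whose weight vector lies closest wins.
    dists = [sum((w - x) ** 2 for w, x in zip(row, sample)) for row in weights]
    return dists.index(min(dists))

def train_step(sample, rate=0.3):
    # Competitive learning: nudge only the winner's weights toward the input.
    w = winner(sample)
    weights[w] = [wi + rate * (x - wi) for wi, x in zip(weights[w], sample)]

train_step([1.0, 0.0])
```

Over many such steps each output neuron's weight vector drifts toward the cluster of inputs it wins, which is how the groups form.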

Hopfield Neural Network - A simple single layer recurrent neural network. The Hopfield neural network is trained with a special algorithm that teaches it to recognize patterns. The Hopfield network indicates that a pattern is recognized by echoing it back. Hopfield neural networks are typically used for pattern recognition.
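To make "echoing it back" concrete, here is a toy sketch using Hebbian training and bipolar (+1/-1) neurons, which is one standard way a Hopfield network is formulated; the stored pattern and noisy input are made up for illustration:

```python
def train(patterns, n):
    # Hebbian rule: w[i][j] accumulates pattern[i]*pattern[j]; no self-connections.
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    # Repeatedly threshold each neuron's weighted input until the state settles.
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0 else -1
                 for i in range(len(state))]
    return state

w = train([[1, -1, 1, -1]], 4)
echoed = recall(w, [1, -1, 1, 1])  # a noisy version of the stored pattern
```

Given the corrupted input, the network settles back to the stored pattern, which is the "echo" that signals recognition.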

Simple Recurrent Network (SRN) Elman Style - A recurrent neural network that has a context layer. The context layer holds the previous output from the hidden layer and echoes that value back to the hidden layer's input. The hidden layer therefore always receives input from its own previous iteration's output. Elman neural networks are generally trained using genetic algorithms, simulated annealing, or one of the propagation techniques. Elman neural networks are typically used for prediction.
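The context-layer mechanism can be sketched as follows. All the weights here are invented for illustration; the point is only where the context value comes from and where it feeds back in:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ElmanSketch:
    # Toy 1-input, 2-hidden, 1-output Elman network with fixed example weights.
    def __init__(self):
        self.context = [0.0, 0.0]               # holds the previous hidden output
        self.w_in = [0.5, -0.5]                 # input -> hidden
        self.w_ctx = [[0.3, 0.1], [0.1, 0.3]]   # context -> hidden
        self.w_out = [0.8, 0.2]                 # hidden -> output

    def step(self, x):
        hidden = [sigmoid(self.w_in[i] * x +
                          sum(c * w for c, w in zip(self.context, self.w_ctx[i])))
                  for i in range(2)]
        self.context = hidden                   # echo hidden output back next step
        return sigmoid(sum(h * w for h, w in zip(hidden, self.w_out)))

net = ElmanSketch()
first = net.step(1.0)
second = net.step(1.0)  # same input, different output: the context is the memory
```

Feeding the same input twice produces two different outputs, because the second step also sees the first step's hidden output through the context layer, and that short-term memory is what makes the network useful for prediction.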

Simple Recurrent Network (SRN) Jordan Style - A recurrent neural network that has a context layer. The context layer holds the previous output from the output layer and echoes that value back to the hidden layer's input. The hidden layer therefore always receives input from the previous iteration's output layer. Jordan neural networks are generally trained using genetic algorithms, simulated annealing, or one of the propagation techniques. Jordan neural networks are typically used for prediction.

Simple Recurrent Network (SRN) Self Organizing Map - A recurrent self organizing map (RSOM) that has an input and output layer, just as a regular SOM does. However, the RSOM also has a context layer. This context layer echoes the previous iteration's output back to the input layer of the neural network. RSOMs are trained with a competitive learning algorithm, just as a non-recurrent SOM is. RSOMs can be used to classify temporal data, or for prediction.

Feedforward Radial Basis Function (RBF) Network - A feedforward network with an input layer, a hidden layer, and an output layer. The hidden layer is based on a radial basis function; the RBF generally used is the Gaussian function. Several RBFs in the hidden layer allow the RBF network to approximate a more complex activation function than a typical feedforward neural network. RBF networks are used for pattern recognition. They can be trained using genetic algorithms, simulated annealing, or one of the propagation techniques. Other means must be employed to determine the structure of the RBFs used in the hidden layer.
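A sketch of the forward pass through such a network, for a single scalar input. The centers, widths, and output weights are invented for illustration; determining them is exactly the "other means" the paragraph above refers to:

```python
import math

def gaussian(x, center, width):
    # Radial basis: the response falls off with distance from the center.
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def rbf_forward(x, centers, widths, out_weights):
    # Hidden layer: one Gaussian per center; output: a weighted sum of them.
    hidden = [gaussian(x, c, w) for c, w in zip(centers, widths)]
    return sum(h * w for h, w in zip(hidden, out_weights))

# Made-up centers/widths/weights; in practice these come from training
# or from a separate structure-selection step.
y = rbf_forward(0.5, centers=[0.0, 1.0], widths=[0.5, 0.5], out_weights=[1.0, -1.0])
```

Each hidden neuron responds strongly only near its own center, so the network builds its overall response out of localized bumps rather than the global sigmoids of an ordinary feedforward network.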
