Next, we will examine a simple neural network that will learn the XOR operator. The XOR operator was covered in chapter 1. You will see how to use several classes from this neural network. These classes are provided in the companion download available with the purchase of this book. Appendix A describes how to obtain this download. Later in the chapter, you will be shown how these classes were constructed.

**Listing 5.1: The XOR Problem (XOR.cs)**

```csharp
// Introduction to Neural Networks for C#, 2nd Edition
// Copyright 2008 by Heaton Research, Inc.
// http://www.heatonresearch.com/online/introduction-neural-networks-cs-edition-2
//
// ISBN13: 978-1-60439-009-4
// ISBN:   1-60439-009-3
//
// This class is released under the:
// GNU Lesser General Public License (LGPL)
// http://www.gnu.org/copyleft/lesser.html
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using HeatonResearchNeural.Feedforward;
using HeatonResearchNeural.Feedforward.Train;

namespace Chapter5XOR
{
    /// <summary>
    /// Chapter 5: The Feedforward Backpropagation Neural Network
    ///
    /// Solve the classic XOR problem.
    /// </summary>
    class XOR
    {
        /// <summary>
        /// Input for the XOR function.
        /// </summary>
        public static double[][] XOR_INPUT = {
            new double[2] { 0.0, 0.0 },
            new double[2] { 1.0, 0.0 },
            new double[2] { 0.0, 1.0 },
            new double[2] { 1.0, 1.0 } };

        /// <summary>
        /// Ideal output for the XOR function.
        /// </summary>
        public static double[][] XOR_IDEAL = {
            new double[1] { 0.0 },
            new double[1] { 1.0 },
            new double[1] { 1.0 },
            new double[1] { 0.0 } };

        /// <summary>
        /// Create, train and use a neural network for XOR.
        /// </summary>
        /// <param name="args">Not used</param>
        static void Main(string[] args)
        {
            FeedforwardNetwork network = new FeedforwardNetwork();
            network.AddLayer(new FeedforwardLayer(2));
            network.AddLayer(new FeedforwardLayer(3));
            network.AddLayer(new FeedforwardLayer(1));
            network.Reset();

            // train the neural network
            Train train = new HeatonResearchNeural.Feedforward.Train
                .Backpropagation.Backpropagation(
                network, XOR_INPUT, XOR_IDEAL, 0.7, 0.9);

            int epoch = 1;

            do
            {
                train.Iteration();
                Console.WriteLine("Epoch #" + epoch
                    + " Error:" + train.Error);
                epoch++;
            } while ((epoch < 5000) && (train.Error > 0.001));

            // test the neural network
            Console.WriteLine("Neural Network Results:");

            for (int i = 0; i < XOR_IDEAL.Length; i++)
            {
                double[] actual =
                    network.ComputeOutputs(XOR_INPUT[i]);
                Console.WriteLine(XOR_INPUT[i][0]
                    + "," + XOR_INPUT[i][1]
                    + ", actual=" + actual[0]
                    + ",ideal=" + XOR_IDEAL[i][0]);
            }
        }
    }
}
```

The last few lines of output from this program are shown here.

```
Epoch #4997 Error:0.006073963240271441
Epoch #4998 Error:0.006073315333403568
Epoch #4999 Error:0.0060726676304029325
Neural Network Results:
0.0,0.0, actual=0.0025486129720869773,ideal=0.0
1.0,0.0, actual=0.9929280525379659,ideal=1.0
0.0,1.0, actual=0.9944310234678858,ideal=1.0
1.0,1.0, actual=0.007745179145434604,ideal=0.0
```

As you can see, the network ran through nearly 5,000 training epochs, producing a final error of just over half a percent in only a few seconds. The results were then displayed. The neural network produced a number close to zero for the inputs 0,0 and 1,1, and a number close to one for the inputs 1,0 and 0,1.

This program is very easy to construct. First, a two-dimensional **double** array is created to hold the input for the neural network. These values are the training sets for the neural network.

```csharp
public static double[][] XOR_INPUT = {
    new double[2] { 0.0, 0.0 },
    new double[2] { 1.0, 0.0 },
    new double[2] { 0.0, 1.0 },
    new double[2] { 1.0, 1.0 } };
```

Next, a two-dimensional **double** array is created to hold the ideal output for each of the training sets given above.

```csharp
public static double[][] XOR_IDEAL = {
    new double[1] { 0.0 },
    new double[1] { 1.0 },
    new double[1] { 1.0 },
    new double[1] { 0.0 } };
```

It may seem as though a single dimensional **double** array would suffice for this task. However, neural networks can have more than one output neuron, which would produce more than one **double** value. This neural network has only one output neuron.
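To see why the two-dimensional shape matters, consider how the ideal array might look if the network had two output neurons instead of one. The following declaration is purely a hypothetical sketch for illustration; it is not part of Listing 5.1, and the values chosen are arbitrary.

```csharp
// Hypothetical ideal data for a network with TWO output neurons.
// Each inner array holds one expected value per output neuron,
// so every training set now supplies two ideal values.
public static double[][] TWO_OUTPUT_IDEAL = {
    new double[2] { 0.0, 1.0 },
    new double[2] { 1.0, 0.0 },
    new double[2] { 1.0, 0.0 },
    new double[2] { 0.0, 1.0 } };
```

Using a jagged array of **double** arrays keeps the same data layout regardless of how many output neurons the network has.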

A **FeedforwardNetwork** object is now created. This is the main object for the neural network. Layers will be added to this object.

```csharp
FeedforwardNetwork network = new FeedforwardNetwork();
```

The first layer to be added will be the input layer. A **FeedforwardLayer** object is created. The value of two specifies that there will be two input neurons.

```csharp
network.AddLayer(new FeedforwardLayer(2));
```

The second layer to be added will be a hidden layer. If no additional layers are added beyond this layer, then it will be the output layer. The first layer added is always the input layer, the last layer added is always the output layer. Any layers added between those two layers are the hidden layers. A **FeedforwardLayer** object is created to serve as the hidden layer. The value of three specifies that there will be three neurons in the hidden layer.
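This ordering rule means that deeper networks are built simply by adding more layers between the first and last calls. The following sketch, which is not part of Listing 5.1 and uses arbitrary layer sizes, shows how a network with two hidden layers would presumably be constructed with the same classes:

```csharp
// Hypothetical deeper network: the first AddLayer call creates the
// input layer, the last creates the output layer, and the two
// middle calls create hidden layers. Layer sizes are arbitrary.
FeedforwardNetwork deeper = new FeedforwardNetwork();
deeper.AddLayer(new FeedforwardLayer(2)); // input layer
deeper.AddLayer(new FeedforwardLayer(4)); // first hidden layer
deeper.AddLayer(new FeedforwardLayer(3)); // second hidden layer
deeper.AddLayer(new FeedforwardLayer(1)); // output layer
deeper.Reset();
```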

```csharp
network.AddLayer(new FeedforwardLayer(3));
```

The final layer to be added will be the output layer. A **FeedforwardLayer** object is created. The value of one specifies that there will be a single output neuron.

```csharp
network.AddLayer(new FeedforwardLayer(1));
```

Finally, the neural network is reset. This randomizes the weight and threshold values. This random neural network will now need to be trained.

```csharp
network.Reset();
```

The backpropagation method will be used to train the neural network. To do this, a **Backpropagation** object is created.

```csharp
Train train = new HeatonResearchNeural.Feedforward.Train
    .Backpropagation.Backpropagation(
    network, XOR_INPUT, XOR_IDEAL, 0.7, 0.9);
```

The training object requires several arguments to be passed to its constructor. The first argument is the network to be trained. The next two arguments are the **XOR_INPUT** and **XOR_IDEAL** arrays, which provide the training sets and expected results. Finally, the learning rate and momentum are specified.

The learning rate specifies how fast the neural network will learn. It is expressed as a fraction between zero and one; here, a value of 0.7 is used. The momentum specifies how much of an effect the previous training iteration will have on the current iteration. The momentum is also a fraction, and is usually a value near one. To use no momentum in the backpropagation algorithm, specify a value of zero. The learning rate and momentum values will be discussed further later in this chapter.
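To make the roles of these two parameters concrete, backpropagation with momentum is commonly written as the following weight update. This is the standard textbook formulation, not a formula quoted from the listing.

```latex
\Delta w_{ij}(t) = -\eta \, \frac{\partial E}{\partial w_{ij}}
                   + \alpha \, \Delta w_{ij}(t-1)
```

Here $\eta$ is the learning rate (0.7 in this example), $\alpha$ is the momentum (0.9), $E$ is the network error, and $\Delta w_{ij}(t-1)$ is the change applied to the same weight on the previous iteration. With $\alpha = 0$, the update reduces to plain gradient descent.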

Now that the training object is set up, the program will loop through training iterations until the error rate is small, or it performs 5,000 epochs, or iterations.

```csharp
int epoch = 1;

do
{
    train.Iteration();
    Console.WriteLine("Epoch #" + epoch
        + " Error:" + train.Error);
    epoch++;
} while ((epoch < 5000) && (train.Error > 0.001));
```

To perform a training iteration, simply call the **Iteration** method on the training object. The loop will continue until the error is smaller than one-tenth of a percent, or the program has performed 5,000 training iterations.

Finally, the program will display the results produced by the neural network.

```csharp
Console.WriteLine("Neural Network Results:");

for (int i = 0; i < XOR_IDEAL.Length; i++)
{
```

As the program loops through each of the training sets, that training set is presented to the neural network. To present a pattern to the neural network, the **ComputeOutputs** method is used. This method accepts a **double** array of input values. This array must be the same size as the number of input neurons or an exception will be thrown. This method returns an array of **double** values the same size as the number of output neurons.

```csharp
double[] actual = network.ComputeOutputs(XOR_INPUT[i]);
```

The output from the neural network is displayed.

```csharp
Console.WriteLine(XOR_INPUT[i][0]
    + "," + XOR_INPUT[i][1]
    + ", actual=" + actual[0]
    + ",ideal=" + XOR_IDEAL[i][0]);
    }
}
```

This is a very simple neural network. It uses the default sigmoid activation function. As you will see in the next section, other activation functions can be specified.
