Artificial Intelligence (AI) is the field of computer science that attempts to give computers humanlike abilities. One of the primary means by which computers are endowed with such abilities is the neural network. The human brain is the ultimate example of a neural network: it consists of a network of over a hundred billion interconnected neurons. Neurons are individual cells that can process small amounts of information and then activate other neurons to continue the process.
The term neural network, as it is normally used, is actually a misnomer. What computers simulate is more precisely an artificial neural network, which in turn models a biological one. However, most publications use the term “neural network” rather than “artificial neural network,” and this book follows that convention. Unless the term “neural network” is explicitly prefixed with “biological” or “artificial,” you can assume that an artificial neural network is intended. To explore this distinction, you will first be shown the structure of a biological neural network.
To construct a computer capable of “humanlike thought,” researchers used the only working model they had available—the human brain. To construct an artificial neural network, however, the brain is not considered as a whole; that would be far too complex. Rather, the individual cells that make up the human brain are studied. At the most basic level, the human brain is composed primarily of neuron cells.
A neuron cell, as seen in Figure 1.1, is the basic building block of the human brain. It accepts signals through its dendrites. When a neuron accepts a signal, that neuron may fire. When a neuron fires, a signal is transmitted over the neuron’s axon. Ultimately, the signal leaves the neuron through the axon terminals and is transmitted to other neurons or nerves.
Figure 1.1: A Neuron Cell
The signal transmitted by a neuron is an analog signal. Most modern computers are digital machines, and thus require a digital signal. A digital computer processes information as either on or off. This is the basis of the binary digits zero and one. The presence of an electrical signal represents a value of one, whereas the absence of an electrical signal represents a value of zero. Figure 1.2 shows a digital signal.
Figure 1.2: A Digital Signal
Some of the early computers were analog rather than digital. An analog computer uses a much greater range of values than zero or one. This greater range is achieved by increasing or decreasing the voltage of the signal. Figure 1.3 shows an analog signal. Though analog computers are useful for certain simulation activities, they are not suited to processing the large volumes of data that digital computers typically process. Because of this, nearly every computer in use today is digital.
Figure 1.3: Sound Recorder Shows an Analog File
Biological neural networks are analog. As you will see in the next section, simulating analog neural networks on a digital computer can present some challenges. Neurons accept an analog signal through their dendrites, as seen in Figure 1.1. Because this signal is analog, its voltage will vary. If the voltage is within a certain range, the neuron will fire. When a neuron fires, a new analog signal is transmitted from the firing neuron to other neurons. This signal is conducted over the firing neuron’s axon. The regions of input and output are called synapses. Later, in Chapter 3, “Using Multilayer Neural Networks,” you will be shown that the synapses are the interface between your program and the neural network.
By firing or not firing, a neuron is making a decision. These are extremely low level decisions. It takes the decisions of a large number of such neurons to read this sentence. Higher level decisions are the result of the collective input and output of many neurons.
These decisions can be represented graphically by charting the input and output of neurons. Figure 1.4 shows the input and output of a particular neuron. As you will be shown in Chapter 3, there are different types of neurons that have differently shaped output graphs. As you can see from the graph shown in Figure 1.4, this neuron will fire at any input greater than 0.5 volts.
Figure 1.4: Activation Levels of a Neuron
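The firing behavior graphed in Figure 1.4 can be sketched in a few lines of Java. This is an illustrative fragment only, not part of the JOONE API; the class name ThresholdNeuron is invented for this example, and the 0.5 threshold matches the neuron shown in the figure.

```java
// Illustrative sketch of a threshold ("step") neuron, as graphed in
// Figure 1.4: any input greater than 0.5 causes the neuron to fire.
// This is plain Java for illustration, not a JOONE class.
public class ThresholdNeuron {
    private final double threshold;

    public ThresholdNeuron(double threshold) {
        this.threshold = threshold;
    }

    // Returns 1.0 (fire) when the input exceeds the threshold, else 0.0.
    public double activate(double input) {
        return input > threshold ? 1.0 : 0.0;
    }

    public static void main(String[] args) {
        ThresholdNeuron neuron = new ThresholdNeuron(0.5);
        System.out.println(neuron.activate(0.2)); // does not fire: 0.0
        System.out.println(neuron.activate(0.8)); // fires: 1.0
    }
}
```

A step function like this is the simplest possible activation function; later chapters use smoother functions whose output graphs curve rather than jump.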
As you can see, a biological neuron is capable of making basic decisions. This model is what artificial neural networks are based on. You will now be shown how this model is simulated using a digital computer.
This book will show you how to create neural networks using the Java programming language. You will be introduced to the Java Object Oriented Neural Engine (JOONE). JOONE is an open source neural network engine written completely in Java. JOONE is distributed under the GNU Lesser General Public License, or LGPL. This license means that JOONE may be freely used in both commercial and non-commercial projects without royalties, so long as you mention that you used JOONE. For more information on the LGPL, visit the website http://www.gnu.org/copyleft/lesser.html. JOONE will be used in conjunction with many of the examples in this book and will be introduced in Chapter 3. More information about JOONE can be found at http://jooneworld/.
To simulate a biological neural network, JOONE gives you several objects that approximate the portions of a biological neural network. It provides several types of neurons from which to construct your networks. These neurons are then connected together with synapse objects. The synapses connect the layers of an artificial neural network just as real synapses connect a biological neural network. Using these objects, you can construct complex neural networks to solve problems.
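To make the layer-and-synapse idea concrete before Chapter 3 introduces JOONE’s actual classes, here is a minimal plain-Java sketch. The class and method names are invented for illustration; in this toy model a “synapse” is just a weight matrix that scales each neuron’s output on its way to the next layer, and each receiving neuron fires when its weighted input sum exceeds 0.5.

```java
// Minimal sketch of one layer feeding another through a weighted synapse.
// Names here are illustrative only; JOONE's own layer and synapse classes
// (introduced in Chapter 3) are far richer than this.
public class TinyNetwork {
    // weights[in][out] scales the signal from input neuron `in`
    // to output neuron `out` -- the role a synapse plays.
    static double[] propagate(double[] inputs, double[][] weights) {
        double[] outputs = new double[weights[0].length];
        for (int out = 0; out < outputs.length; out++) {
            double sum = 0.0;
            for (int in = 0; in < inputs.length; in++) {
                sum += inputs[in] * weights[in][out];
            }
            // Threshold activation: fire (1.0) when the weighted sum
            // exceeds 0.5, otherwise stay silent (0.0).
            outputs[out] = sum > 0.5 ? 1.0 : 0.0;
        }
        return outputs;
    }

    public static void main(String[] args) {
        double[] inputs = {1.0, 0.0};
        // Arbitrary example weights chosen for this illustration.
        double[][] weights = {{0.6, 0.1}, {0.4, 0.9}};
        double[] result = propagate(inputs, weights);
        System.out.println(java.util.Arrays.toString(result)); // [1.0, 0.0]
    }
}
```

Notice that the first output neuron fires (weighted sum 0.6 exceeds 0.5) while the second does not (weighted sum 0.1). Adjusting the weights changes which neurons fire, and learning, as later chapters show, is largely a matter of adjusting these weights.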