Hello, this is my first post. I'm a programmer and I've been reading about NNs over the last few days. I've also ordered Introduction to Neural Networks for Java, 2nd Edition through Amazon, but it won't arrive for three weeks, so I'm hoping to find answers somewhere else in the meantime.
Before someone points me straight to Google: I've looked and didn't find much. Most of the feedforward examples out there are outdated and can't be applied directly.
I found this link where this question was discussed:
I would start by training the SOM with your training data. It will learn to cluster just as it always does. Then, I would take your entire training set, run it through the SOM, and produce a 2nd training set that has the output from your SOM as the input, and the ideal values from your original training set. The ideal values were not used with the SOM, as it is unsupervised. This 2nd training set can now be used as the training set for your feedforward network. Once this training is done you have your trained SOM and trained feedforward. Any time you want to use them just present input to the SOM, take the output and present it as input to the feedforward, then the output from the feedforward is the output from the "compound" neural network.
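If it helps, that advice maps to something like the following sketch in Encog 3. This is only an outline, not a definitive implementation: `som`, `originalSet`, and `feedforward` are assumed to exist and be trained already, and exact class names may differ across Encog versions.

```java
import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataPair;
import org.encog.ml.data.basic.BasicMLDataPair;
import org.encog.ml.data.basic.BasicMLDataSet;

// Build the second, supervised training set: the trained SOM's output
// becomes the feedforward network's input, paired with the original
// ideal values that the (unsupervised) SOM never saw.
BasicMLDataSet stage2 = new BasicMLDataSet();
for (MLDataPair pair : originalSet) {
    MLData somOutput = som.compute(pair.getInput());
    stage2.add(new BasicMLDataPair(somOutput, pair.getIdeal()));
}
// Train the feedforward network on stage2. At run time, chain them:
// MLData answer = feedforward.compute(som.compute(rawInput));
```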
I'm thinking this could be a solution, but there is one thing bothering me. Let's say I feed the SOM all the letters and numbers. The number 5 is recognized as 6, so the output from my SOM would be 6. I would then be passing the image for 6 as input and the image of 5 as ideal. But the number 6 is also recognized as 6. So wouldn't the feedforward end up adjusting the same weights for both 5 and 6?
I'm really new to NNs, so maybe my questions sound a bit silly. I'm mainly looking for guidelines; I don't want to spend a lot of time trying out different possibilities only to realize later that my approach was completely wrong. So I would appreciate any recommendations on what sort of networks to use for my problem.
Basically, what I do is extract letters from images. I put each letter into a 15*15 grid and run the network to predict which letter it is by comparing, pixel by pixel, whether the RGB values are black or white. After a few seconds I know the exact value of the predicted letter, whether the prediction was right or wrong. I want to use this information to train the network to make as few mistakes as possible, so I've been digging into feedforward networks to see if I can do this.
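In case it's useful context, flattening the 15*15 black/white grid into the network's input vector can be as simple as this (a hypothetical helper of my own; I'm using 1.0 for a black pixel and 0.0 for a white one):

```java
public class GridFlattener {
    /** Flatten a 15x15 black/white grid into a 225-element input vector. */
    public static double[] gridToInput(boolean[][] grid) {
        double[] input = new double[15 * 15];
        for (int row = 0; row < 15; row++) {
            for (int col = 0; col < 15; col++) {
                // 1.0 for a black pixel, 0.0 for a white one
                input[row * 15 + col] = grid[row][col] ? 1.0 : 0.0;
            }
        }
        return input;
    }

    public static void main(String[] args) {
        boolean[][] grid = new boolean[15][15];
        grid[0][1] = true; // one black pixel at row 0, column 1
        double[] v = gridToInput(grid);
        System.out.println(v.length); // prints 225
        System.out.println(v[1]);     // prints 1.0
    }
}
```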
BTW, for my feedforward approach I have 225 input neurons (15*15), 150 hidden, and 55 output (I have 55 characters at the moment and was planning to add a few more). I hope I got this right, but I might be wrong, so I'm posting it as well...
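For reference, that 225-150-55 topology would typically be set up like this in Encog 3, following the standard Encog examples (the sigmoid activations are my assumption; pick whatever suits your pixel encoding):

```java
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;

BasicNetwork network = new BasicNetwork();
network.addLayer(new BasicLayer(null, true, 225));                    // input: 15*15 pixels
network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 150)); // hidden layer
network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 55)); // one neuron per character
network.getStructure().finalizeStructure();
network.reset(); // randomize the initial weights
```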
Now my main problem is that at first I only have the images; nothing has been processed yet, so I don't have the ideal values, only the inputs. Is there a way to train a feedforward network the way a SOM is trained, and then later adjust the weights via backpropagation / Manhattan update rule / resilient propagation?
I tried training a resilient propagation network like a SOM, but I got really bad results.
I tried building the training set just as for a SOM: MLDataSet.add(new BasicMLDataPair(item, null));
I also tried building it with BasicNeuralDataSet, as the Encog 2.5 documentation suggests, but I got the same results.
I also have a few more doubts that came up along the way. If any of you want to answer, don't feel pressured to address the whole post; an answer to any one of my doubts is quite useful to me.
So, moving on: does BasicNetwork.classify(MLData input) work the same way for a feedforward network as it does for a SOM? That is, can I expect an int returned the same way, indicating which output neuron fired?
Is BasicNeuralDataSet deprecated? Are there any differences between it and MLDataSet, and which one should I be using?
Which of the feedforward propagation methods is best suited for my problem? At the moment I've selected resilient propagation, but I'm not quite sure whether I should stick with it.
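For context on why I picked it: resilient propagation is a common default in Encog because it needs no learning-rate tuning. The usual training loop I've seen looks roughly like this (a sketch only; `network` and `trainingSet` are assumed, and the error threshold and epoch cap are arbitrary values, not recommendations):

```java
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

ResilientPropagation train = new ResilientPropagation(network, trainingSet);
int epoch = 0;
do {
    train.iteration(); // one pass over the training set
    epoch++;
} while (train.getError() > 0.01 && epoch < 5000);
train.finishTraining();
```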
Also, to adjust the weights, do I have to run the whole training again, or can I just train on an MLDataSet containing only the input/output pair I want?
Well, that's it. I hope I didn't ask for too much.