What are the learning rules and learning paradigms in neural networks? Explain them briefly.
Solution
There are three major learning paradigms:
Supervised Learning
A learning algorithm falls under this category if the desired output is provided to the network along with the input while training. By providing the neural network with both an input and its target output, it is possible to calculate an error based on the difference between the target output and the actual output.
Unsupervised Learning
In this paradigm the neural network is only given a set of inputs, and it is the network's responsibility to find some kind of pattern within those inputs without any external aid. This type of learning is often used in data mining and by many recommendation algorithms, which can predict a user's preferences based on the preferences of other, similar users they have grouped together.
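To make this concrete, here is a minimal sketch of grouping unlabelled inputs, assuming a simple two-cluster k-means-style loop; the data points, the number of groups and the initial centre guesses are illustrative assumptions, not taken from the question.

```python
# A minimal sketch of unsupervised grouping (two-cluster k-means style).
# The points, cluster count and initial centres are illustrative assumptions.

# Unlabelled inputs only: no target outputs are provided.
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),
          (5.0, 5.1), (5.2, 4.8), (4.9, 5.3)]

centroids = [points[0], points[3]]  # crude initial guesses for two group centres

for _ in range(10):
    # Assign each point to its nearest centre, i.e. group similar inputs together.
    clusters = [[], []]
    for p in points:
        dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
        clusters[dists.index(min(dists))].append(p)

    # Move each centre to the mean of its group (keep it if the group is empty).
    centroids = [
        (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c)) if c else centroids[i]
        for i, c in enumerate(clusters)
    ]

print(centroids)  # two group centres discovered without any labels
```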
Reinforcement Learning
Reinforcement learning is similar to supervised learning in that some feedback is given; however, instead of a target output, a reward is given based on how well the system performed. The aim of reinforcement learning is to maximize the reward the system receives through trial and error.
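As a rough illustration of reward-driven trial and error, the sketch below assumes a two-armed bandit: the hidden reward probabilities, the exploration rate and the averaging update are illustrative assumptions, not taken from the question.

```python
# A minimal sketch of trial-and-error reward maximization (epsilon-greedy bandit).
# The reward probabilities, exploration rate and step count are illustrative assumptions.
import random

reward_prob = [0.3, 0.7]   # hidden chance of reward for each of two actions
value = [0.0, 0.0]         # the system's running estimate of each action's reward
counts = [0, 0]

for step in range(1000):
    # Trial and error: mostly pick the action currently believed best, sometimes explore.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = value.index(max(value))

    # No target output is given, only a reward for the action that was tried.
    reward = 1.0 if random.random() < reward_prob[action] else 0.0

    # Move the estimate for the chosen action towards the observed reward.
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the estimate for action 1 typically approaches 0.7
```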
Implementing Supervised Learning
As mentioned earlier, supervised learning is a technique that uses a set of input-output pairs to train the network. The idea is to provide the network with examples of inputs and outputs, then let it find a function that correctly maps the inputs we provided to the correct outputs. If the network has been trained on a good range of training data, then once it has finished learning we should even be able to give it a new, unseen input and it should map it to a correct output.
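As a minimal sketch of this idea, the loop below fits a single weight so that output = weight * input, trains on a few input-output pairs, and then maps an unseen input; the dataset, the one-parameter model and the learning rate are illustrative assumptions, not taken from the question.

```python
# A minimal sketch of supervised learning on input-output pairs.
# The training pairs, the one-parameter model and the learning rate are
# illustrative assumptions.

# Each training example is an (input, target output) pair (here, target = 2 * input).
training_pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def model(x, w):
    """A one-parameter 'network': output = weight * input."""
    return w * x

w = 0.5
learning_rate = 0.05

for epoch in range(100):
    for x, target in training_pairs:
        actual = model(x, w)
        error = target - actual          # error = target output - actual output
        w += learning_rate * error * x   # nudge the weight to reduce the error

# A well-trained model should also map a new, unseen input correctly.
print(model(4.0, w))  # close to 8.0
```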
The Perceptron Learning Rule
The perceptron learning rule works by finding out what went wrong in the network and making slight corrections to hopefully prevent the same errors from happening again. First we take the network's actual output and compare it to the target output in our training set. If the actual output and the target output don't match, we know something went wrong, and we update each weight in proportion to the error: the learning rate times the error times the input connected to that weight is added to the weight. This is how the perceptron learning rule works.
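Below is a minimal sketch of this rule, assuming a single neuron with a step activation trained on the logical AND function; the dataset, learning rate and number of epochs are illustrative assumptions, not taken from the question.

```python
# A minimal sketch of the perceptron learning rule on the logical AND function.
# The dataset, learning rate and epoch count are illustrative assumptions.

def step(z):
    """Threshold activation: output 1 if the weighted sum is non-negative, else 0."""
    return 1 if z >= 0 else 0

# Training set of input-output pairs (logical AND).
training_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]  # one weight per input
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for inputs, target in training_data:
        # Actual output of the network for this example.
        weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
        actual = step(weighted_sum)

        # Compare the actual output with the target output; the difference is the error.
        error = target - actual

        # Perceptron learning rule: adjust each weight in proportion to the error
        # and the input that contributed to it.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print(weights, bias)  # after training the perceptron classifies AND correctly
```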
