Thursday, February 16, 2012

HEBBIAN LEARNING :::: THE TEACHER OF A NEURON ::::

HEBBIAN BREEZE::::

The development of various methodologies to train a neuron has a significant place in the history of learning algorithms, producing good neuron citizens for the artificial intelligence community. Hebbian learning is one of the most primitive forms of learning. It is usually classed as unsupervised learning, although supervised variants of the Hebbian rule, like the one described below, are also in use.

My first encounter with Hebbian learning methodologies ended in a mishap: I misunderstood all of its basic concepts. But the timely intervention of good friends and lecturers made the concept of training a neuron, and what learning really means, clear to me to a good extent.

As an instance of Hebbian learning adaptation, we can use the Iris data set, which has three groups: Iris-virginica, Iris-versicolor and Iris-setosa. As human beings, if we were given all the parts of each group of flowers, we would naturally start learning to identify the parts of each flower. We would keep an eye on certain parts of the flower, especially the inflorescence or the petal colour. At last, when the time period is over, we would be put to a test in which we have to pick the flower of the right group under all kinds of combination tests: either identify the flower's group from its petal colour, or place a given flower into the right group.

The whole phenomenon can be copied and abstracted onto a single neuron. Here we have to become a neuron. The sense organs of an abstract neuron are its inputs, received from the external environment in the form of numbers (normalised values). We have to look at certain inputs with utmost affection (weights). We have to be trained with the data set of each group to estimate the final weights of the required group (updating the weights with the Hebbian equation). Thus we learn to identify the 'flowers' (group them successfully into the matched group).
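To make the mapping concrete, here is a minimal MATLAB sketch of a supervised Hebbian update of the form w = w + eta * d * x. Note that the matrix X, the labels and the learning rate here are hypothetical stand-ins for normalised Iris measurements, not actual data set entries:

% Supervised Hebbian training sketch (hypothetical normalised data).
X = [0.22 0.63 0.08 0.04       % Iris-setosa samples (hypothetical)
     0.17 0.42 0.07 0.04
     0.75 0.50 0.63 0.54       % Iris-versicolor samples (hypothetical)
     0.58 0.50 0.59 0.54];
d = [1 1 -1 -1];               % desired output: +1 for setosa, -1 otherwise
eta = 0.5;                     % learning rate
w = zeros(1, 4);               % one weight per input (sense organ)
for i = 1:size(X, 1)
    w = w + eta * d(i) * X(i,:);   % Hebbian update for each pattern
end
w                              % final weights, stored for reference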

The whole process of Hebbian learning is a systematic one: each data set has to be trained separately, and the final updated weights are stored for reference. These final weights are the key to identifying missing properties to some extent, i.e. with the desired output and the weights we are able to estimate a missing input value, and it will be close to one of the values predefined as inputs. That is how we say a neuron has learned to identify the group, to some extent, by performing a comparison with the desired output.
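A small illustration of this idea (all numbers are hypothetical): since the neuron computes y = w(1)*x(1) + w(2)*x(2) + w(3)*x(3), fixing the desired output lets us solve for one missing input:

w = [0.4 0.9 1.3];             % final weights after training (hypothetical)
x = [1 0.63 NaN];              % bias input, one known input, one missing
d = 1;                         % desired output for the group
x(3) = (d - w(1:2) * x(1:2)') / w(3);   % solve d = w*x' for the missing value
x(3)                           % close to one of the predefined input values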

Saturday, February 11, 2012

ARTIFICIAL INTELLIGENCE::AWESOME ABSTRACTION OF HUMAN BRAIN

Artificial intelligence is one of the most reforming branches of computer science. When man started thinking about how his own brain works, and how he can adopt the features of the brain to solve the most complicated problems, artificial intelligence emerged. In fact, many eminent researchers, with their outstanding inventions, found numerous models of the human thought process.

Computational intelligence, artificial neural networks, the design of neurons and ganglia, etc. are the most challenging tasks happening at the back-end of all this research.

Most neural network studies start with the study of biological neurons, the major components of the human brain, where amazing, mind-boggling mysteries are happening. We have to wonder, with wide-open eyes, to believe that these neurons are cells which carry more than gigabytes of information about various learning processes right from a human being's birth. It is the place where all information exchanges happen. If the neuron has this much importance in our life, it is better to fold our hands and give worshipful respect to the creator of all these structures at this moment.

Most classification algorithms with neurons include the perceptron model, alpha-LMS, steepest descent, back-propagation, etc. The perceptron model involves training a neuron with inputs from the external environment, with a specialised weight given to each of these inputs. When we remap the perceptron onto our daily routine, it can be visualised as the process of learning a new environment.

In this process of learning we naturally give more importance to certain inputs, and we constantly observe these inputs in the course of our learning. A similar thing happens in perceptron training. In each iteration we update the importance of certain inputs by increasing their weights, and reduce the importance of others, through the process of weight updating. There are different systematic ways to do this critical process: the alpha-LMS approach, back-propagation, steepest descent, etc. Each of these methods has its own merits and demerits, and the demerits are pointers to further research in this area.
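For reference, and in the notation of the code further below (a sketch of the rules as implemented, not the book's exact notation): the perceptron rule leaves the weights unchanged when a training point is classified correctly, and otherwise moves them by the input pattern,

w = w + eta * p(k)   (desired output 1, but the neuron fired 0)
w = w - eta * p(k)   (desired output 0, but the neuron fired 1)

whereas alpha-LMS updates on every presentation with a normalised error step,

w = w + eta * (d(k) - y(k)) * p(k) / ||p(k)||^2.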

One simple classification problem using the perceptron is described below. The MATLAB simulation code is also included; it contains the weight-updating process done with the perceptron approach as well as with alpha-LMS.

The problem is to design a neuron, using the perceptron approach and also alpha-LMS, which can distinguish between two triangles located on either side of a line. The output is the final set of weights that we give to each input pattern, and the inputs are the various training points. It is assumed that the expected output is 1 if the point is above the line and 0 if the point is below the line. The image of the problem specification is shown above.


The MATLAB code for perceptron learning and alpha-LMS is shown below. If we execute it, we get the weights at which the neuron is ready to classify whether a point is above or below the line.

The perceptron code:
 
p = [1 2 2
    1 2 .75
    1 2.25 .75
    1 2.75 .75
    1 2.125 .75
    1 3 .75
    1 3.25 .75
    1 3.5 .75
    1 4 .75
    1 2.25 .5
    1 2.5 .25
    1 2.75 0
    1 3 -.25
    1 3.25 -.25
    1 4 .25
    1 1 1
    1 1.125 1
    1 1.25 1
    1 1.25 1
    1 1.5 1
    1 2 1
    1 2.25 1
    1 2.5 1
    1 2.75 1
    1 3 1
    1 1.25 1.25
    1 1.5 .5
    1 1.75 1.75
    1 2 2
    1 2.25 1.75
    1 2.5 1.5
    1 2.75 .5
    1 3 1
    1 2.125 1.75];
% p has 34 rows, so d needs 34 labels; the first point (2,2) also
% appears as row 29 with label 1, so it is labelled 1 here as well.
d = [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];
w = [0 0 0];        % initial weights: [bias w1 w2]
eta = 1;            % learning rate
n = size(p, 1);     % number of training points
up = zeros(1, n);   % per-point flag: 1 if the weights were updated
epoch = 0;
maxepochs = 1000;   % guard in case the data is not linearly separable
update = 1;
while update == 1 && epoch < maxepochs
    for i = 1:n
        y = p(i,:) * w';               % net input of the neuron
        if y >= 0 && d(i) == 0         % fired 1 for a class-0 point
            w = w - eta * p(i,:);
            up(i) = 1;
        elseif y <= 0 && d(i) == 1     % fired 0 for a class-1 point
            w = w + eta * p(i,:);
            up(i) = 1;
        else
            up(i) = 0;                 % correctly classified
        end
    end
    numberofupdates = up * up';        % misclassifications this pass
    if numberofupdates > 0
        update = 1;
    else
        update = 0;                    % converged: stop training
    end
    epoch = epoch + 1;
end


The alpha-LMS code :


p = [1 2 2
    1 2 .75
    1 2.25 .75
    1 2.75 .75
    1 2.125 .75
    1 3 .75
    1 3.25 .75
    1 3.5 .75
    1 4 .75
    1 2.25 .5
    1 2.5 .25
    1 2.75 0
    1 3 -.25
    1 3.25 -.25
    1 4 .25
    1 1 1
    1 1.125 1
    1 1.25 1
    1 1.25 1
    1 1.5 1
    1 2 1
    1 2.25 1
    1 2.5 1
    1 2.75 1
    1 3 1
    1 1.25 1.25
    1 1.5 .5
    1 1.75 1.75
    1 2 2
    1 2.25 1.75
    1 2.5 1.5
    1 2.75 .5
    1 3 1
    1 2.125 1.75];
% Same labelling fix as in the perceptron script: 34 rows, 34 labels.
d = [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];
w = [0 0 0];        % initial weights: [bias w1 w2]
eta = 1;            % learning rate
n = size(p, 1);
up = zeros(1, n);
epoch = 0;
maxepochs = 1000;   % guard in case the data is not linearly separable
update = 1;
while update == 1 && epoch < maxepochs
    for i = 1:n
        y = p(i,:) * w';                 % net input of the neuron
        ek = d(i) - y;                   % error for this pattern
        pk = p(i,:) / norm(p(i,:))^2;    % input scaled by its squared length
        w = w + eta * ek * pk;           % alpha-LMS weight update
        if y >= 0 && d(i) == 0           % still misclassified
            up(i) = 1;
        elseif y <= 0 && d(i) == 1
            up(i) = 1;
        else
            up(i) = 0;
        end
    end
    numberofupdates = up * up';          % misclassifications this pass
    if numberofupdates > 0
        update = 1;
    else
        update = 0;
    end
    epoch = epoch + 1;
end
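Once either script has finished, the final weights can be used to classify a new point; a minimal check, assuming the same encoding with a leading 1 as the bias input (the point (2.5, 1.6) is a hypothetical example):

q = [1 2.5 1.6];               % bias input plus a new point (2.5, 1.6)
if q * w' >= 0
    disp('above the line (class 1)')
else
    disp('below the line (class 0)')
end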

The above views are according to the descriptions in the neural networks book by Dr. Sathish Kumar, and are subject to change in case of any error in the concepts; pointers to errors are most welcome.