Information about KSOM

Published on August 5, 2014

Author: vijayakorupu

Source: authorstream.com


Artificial Neural Networks: Kohonen Self-Organising Maps (KSOM)
K. Vijaya Lakshmi

Introduction

The main property of a neural network is its ability to learn from its environment and to improve its performance through learning. So far we have considered supervised, or active, learning: learning with an external "teacher", or supervisor, who presents a training set to the network. But another type of learning also exists: unsupervised learning.

In contrast to supervised learning, unsupervised or self-organised learning does not require an external teacher. During the training session, the neural network receives a number of different input patterns, discovers significant features in these patterns, and learns how to classify input data into appropriate categories. Unsupervised learning tends to follow the neuro-biological organisation of the brain. Unsupervised learning algorithms aim to learn rapidly and can be used in real time.

Competitive Learning

In competitive learning, neurons compete among themselves to be activated. Whereas in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time. The output neuron that wins the "competition" is called the winner-takes-all neuron.

The basic idea of competitive learning was introduced in the early 1970s. In the late 1980s, Teuvo Kohonen introduced a special class of artificial neural networks called self-organising feature maps. These maps are based on competitive learning.

What Is a Self-Organising Feature Map?

Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, somatosensory, etc.) and are associated with different sensory inputs.
We can say that each sensory input is mapped into a corresponding area of the cerebral cortex. The cortex is a self-organising computational map in the human brain.

Feature-mapping Kohonen model (figure)

The Kohonen Network

The Kohonen model provides a topological mapping: it places a fixed number of input patterns from the input layer onto the output, or Kohonen, layer, which typically contains many more neurons than the input layer, arranged in a low-dimensional (usually two-dimensional) grid. Training in the Kohonen network begins with a winner's neighbourhood of a fairly large size; as training proceeds, the neighbourhood size gradually decreases.

Architecture of the Kohonen network (figure)

SOM Architecture

- Two layers of neurons: an input layer and an output (map) layer.
- Each output neuron is connected to each input neuron, making the network fully connected.
- The output map usually has two dimensions; one- and three-dimensional maps are also used.
- Neurons in the output map can be laid out in different patterns, typically rectangular or hexagonal.
- SOMs are competitive networks: neurons in the network compete with each other. Other kinds of competitive network exist, e.g. ART.

SOM Algorithm

- Each output neuron is connected to each neuron in the input layer, so each output neuron has an incoming connection weight vector.
- The dimensionality of this weight vector is the same as the dimensionality of the input vector, so we can measure the Euclidean distance between them.
- The winning node is the one with the least distance, i.e. the lowest value of D.
- Outputs from a SOM are binary: a node is either the winner or it is not, and only one node can win.

SOM Training

- Training is based on rewarding the winning node; this is a form of competitive learning.
- The winner's weights are adjusted to be closer to the input vector. Why not made exactly equal?
Because we want the output map to learn regions, not individual examples.

An important concept in SOM training is the "neighbourhood": the output-map neurons that adjoin the winner. The neighbourhood size describes how far out from the winner the neighbours can be, and the neighbours' weights are also modified.

The number of neighbours is affected by the shape of the map: rectangular grids give each neuron 4 immediate neighbours, hexagonal grids give 6. Both the neighbourhood size and the learning rate are reduced gradually during training.

The overall effect of training is that groups, or "clusters", form in the output map. These clusters represent spatially nearby regions in input space. Since the dimensionality of the output map is less than the dimensionality of the input space, the SOM performs vector quantisation.

The lateral connections are used to create a competition between neurons. The neuron with the largest activation level among all neurons in the output layer becomes the winner, and it is the only neuron that produces an output signal; the activity of all other neurons is suppressed in the competition. The lateral feedback connections produce excitatory or inhibitory effects, depending on the distance from the winning neuron. This is achieved by the use of a Mexican hat function, which describes the synaptic weights between neurons in the Kohonen layer.

The Mexican hat function of lateral connection (figure)

In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbourhood are allowed to learn.
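The two ingredients just described, picking the winner by Euclidean distance and finding its grid neighbourhood, can be sketched in Python. This is a minimal sketch using NumPy; the array shapes, function names and sample values are illustrative, not part of the original presentation:

```python
import numpy as np

def find_winner(x, weights):
    """Index of the output neuron whose weight vector is closest
    (in Euclidean distance) to the input vector x.
    `weights` has one row per output neuron."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def grid_neighbours(winner, rows, cols, radius):
    """Map coordinates within `radius` grid steps of the winner on a
    rectangular grid (city-block distance: radius 1 gives the winner
    plus its 4 immediate neighbours)."""
    wr, wc = winner
    return {(r, c)
            for r in range(rows) for c in range(cols)
            if abs(r - wr) + abs(c - wc) <= radius}

# Example: four map neurons, two-dimensional inputs
weights = np.array([[0.9, 0.9],
                    [0.5, 0.5],
                    [0.1, 0.1],
                    [0.9, 0.1]])
x = np.array([0.45, 0.55])
print(find_winner(x, weights))                  # neuron 1 is nearest
print(len(grid_neighbours((5, 5), 10, 10, 1)))  # winner + 4 neighbours = 5
```

On a hexagonal grid the same idea applies, but the distance function would count 6 immediate neighbours rather than 4.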
If a neuron does not respond to a given input pattern, then learning cannot occur in that particular neuron. The competitive learning rule defines the change Δw_ij applied to synaptic weight w_ij as

    Δw_ij = α(x_i − w_ij)   if neuron j wins the competition
    Δw_ij = 0               otherwise

where x_i is the input signal and α is the learning rate parameter.

The overall effect of the competitive learning rule is to move the synaptic weight vector W_j of the winning neuron j towards the input pattern X. The matching criterion is equivalent to the minimum Euclidean distance between vectors. The Euclidean distance between a pair of n-by-1 vectors X and W_j is defined by

    d = ||X − W_j|| = sqrt( Σ_{i=1..n} (x_i − w_ij)² )

where x_i and w_ij are the i-th elements of the vectors X and W_j, respectively.

To identify the winning neuron j_X that best matches the input vector X, we may apply the condition

    ||X − W_{j_X}|| = min_j ||X − W_j||,   j = 1, 2, ..., m

where m is the number of neurons in the Kohonen layer.

Suppose, for instance, that a two-dimensional input vector X is presented to a three-neuron Kohonen network and that, by the minimum-distance Euclidean criterion, neuron 3 is the winner. Its weight vector W_3 is then updated according to the competitive learning rule, so the updated weight vector at iteration (p + 1) is

    W_3(p + 1) = W_3(p) + α[X − W_3(p)]

and the weight vector W_3 of the winning neuron 3 becomes closer to the input vector X with each iteration.

Competitive Learning Algorithm

Step 1: Initialisation. Set initial synaptic weights to small random values, say in the interval [0, 1], and assign a small positive value to the learning rate parameter α.

Step 2: Activation and similarity matching.
Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best-matching) neuron j_X at iteration p, using the minimum-distance Euclidean criterion

    ||X − W_{j_X}(p)|| = min_j ||X − W_j(p)||,   j = 1, 2, ..., m

where n is the number of neurons in the input layer and m is the number of neurons in the Kohonen layer.

Step 3: Learning. Update the synaptic weights

    w_ij(p + 1) = w_ij(p) + Δw_ij(p)

where Δw_ij(p) is the weight correction at iteration p. The weight correction is determined by the competitive learning rule:

    Δw_ij(p) = α[x_i − w_ij(p)]   if j ∈ Λ_j(p)
    Δw_ij(p) = 0                  otherwise

where α is the learning rate parameter and Λ_j(p) is the neighbourhood function centred around the winner-takes-all neuron j_X at iteration p.

Step 4: Iteration. Increase iteration p by one, go back to Step 2, and continue until the minimum-distance Euclidean criterion is satisfied or no noticeable changes occur in the feature map.

Competitive Learning in the Kohonen Network

To illustrate competitive learning, consider a Kohonen network with 100 neurons arranged as a two-dimensional lattice of 10 rows and 10 columns. The network is required to classify two-dimensional input vectors: each neuron in the network should respond only to the input vectors occurring in its region. The network is trained with 1000 two-dimensional input vectors generated randomly in a square region in the interval between −1 and +1, with the learning rate parameter α equal to 0.1.

Initial random weights; network after 100, 1000 and 10,000 iterations (figures)

Kohonen SOMs: The Basic Idea

- Make a two-dimensional array, or map, and randomise it.
- Present training data to the map and let the cells on the map compete to win in some way; Euclidean distance is usually used.
- Stimulate the winner and some neighbours in the "neighbourhood".
- Repeat this many times.
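The basic idea just outlined, which is Steps 1 to 4 of the algorithm applied to the 10-by-10 example, can be sketched end to end in Python. This is a minimal sketch; the linear decay schedules for the learning rate and neighbourhood radius are illustrative choices, not the only possibility:

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols, dim = 10, 10, 2
n_iterations = 1000
alpha0, radius0 = 0.1, 5.0

# Step 1: initialisation -- small random weights in [0, 1]
weights = rng.random((rows, cols, dim))

# Grid coordinates of each map neuron, for neighbourhood distances
coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                              indexing="ij"), axis=-1)

# Training inputs: random points in the square [-1, 1] x [-1, 1]
data = rng.uniform(-1.0, 1.0, size=(n_iterations, dim))

for p, x in enumerate(data):
    # Decay the learning rate and neighbourhood radius over time
    frac = p / n_iterations
    alpha = alpha0 * (1.0 - frac)
    radius = max(radius0 * (1.0 - frac), 1.0)

    # Step 2: winner-takes-all neuron by the Euclidean criterion
    d = np.linalg.norm(weights - x, axis=-1)
    winner = np.unravel_index(np.argmin(d), d.shape)

    # Step 3: update the winner and its grid neighbourhood
    grid_dist = np.linalg.norm(coords - np.array(winner), axis=-1)
    in_hood = (grid_dist <= radius)[..., None]   # neighbourhood mask
    weights += alpha * in_hood * (x - weights)   # delta-w learning rule

    # Step 4: next iteration (loop continues)

# After training, nearby map neurons hold nearby input-space weights
print(weights.shape)  # (10, 10, 2)
```

Because each update moves a weight vector a fraction alpha of the way towards the current input, the weights stay inside the square covered by the data and gradually spread out to cover it, which is the clustering effect described above.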
The result is a two-dimensional "weight" map.

Competitive Learning: Hard and Soft

There are two types of competitive learning: hard and soft. Hard competitive learning is essentially a winner-takes-all scenario: the winning neuron is the only one that receives any training response. (This is arguably closer to most supervised learning models.) Soft competitive learning is essentially a "share with your neighbours" scenario. This is closer to the real cortical-sheet model, and it is what the KSOM and other unsupervised connectionist methods use.

Feature vector presentation, update strategy and a trained KSOM (figures)

Applications: General

- Oil and gas exploration
- Satellite image analysis
- Data mining
- Document organisation
- Stock price prediction (Zorin, 2003)
- Technique analysis in football and other sports (Barlett, 2004)
- Spatial reasoning in GIS applications
- As a pre-clustering engine for other ANN architectures

Over 3000 journal-documented applications, at last count.

Next time: satellite image analysis. (Questions?)

Conclusion

- Kohonen SOMs are competitive networks.
- SOMs learn via an unsupervised algorithm.
- SOM training is based on forming clusters.
- SOMs perform vector quantisation.
