Neural Network Concept in Artificial Intelligence

Abstract

Since the 1980s there have been renewed research efforts dedicated to neural networks. The present interest is largely due to the difficult problems confronted by artificial intelligence, to a deeper understanding of how the brain works, and to recent developments in theoretical models, technologies and algorithms. One motivation of neural network research is the desire to build a new breed of powerful computers to solve a variety of problems that have proved very difficult for conventional computers. Another motivation is the desire to develop cognitive models that can serve as an alternative approach to artificial intelligence. Human brain functions have not yet been successfully simulated in an AI system. Some existing neural networks, on the other hand, have shown potential for these abilities. Using self-organization capabilities, neural networks are able to acquire and organize knowledge through learning in response to external stimuli. This paper addresses many techniques used in neural networks and possible applications in artificial intelligence. Some general information about hybrid intelligent systems is also provided.
Introduction

There have been a variety of neural network models developed by researchers of different backgrounds, from different points of view and with different aims and applications. At their core, however, neural networks are emulations of biological neural systems. With such an emulation it is hoped that some brain abilities, such as generalization and attention focusing, can be simulated.

A neural network can be defined in many ways. From the structural point of view, a neural network can be defined as a directed network (or graph) with its nodes representing neurons. Generally speaking, neural networks are specified by 1) node (processing unit) characteristics, 2) network topology, and 3) learning paradigm.

Node characteristics: The nodes of a directed network are called processing units. The state of a unit represents the potential of a neuron; it is also called its activation. The state of a neuron is affected by its previous state, the total accumulated input signal, and the activation function. As the signal generated at a neuron cell body is transmitted down the axon and then distributed to the synapses, the properties of this transmission path may affect the signal that ultimately arrives at the synapses. This is described by the output function of the unit. A unit thus combines three functions (a minimal sketch of such a unit is given at the end of this section):

1. Input Function: The synapses and the signal modulations are simulated by links and link functions in neural networks. The signals received by a unit may come from different types of output signals (binary, continuous, symbolic), undergo modulations by different link functions, and the links can be of different types (inhibitory, excitatory). Ports for input signals are called sites. All incoming links impinge upon the sites rather than directly upon the units themselves.

2. Activation Function: The total stimulus coming to a unit is defined as the sum of all signals coming from other units. The total stimulus to node i can be written as s_i^total = Σ_j s_ij, where s_ij is the signal at the j-th site of the i-th unit. The activation state of a unit can be either discrete or continuous. In the discrete case, the states usually take values of 0 and 1, or -1 and +1. In the continuous case the states are very often scaled to the range [0, 1]. The state of a unit is a function of its input and its previous state: a_i(t+1) = f_i^a(s_i^total, a_i(t)), where f_i^a denotes the activation function of the i-th node.

3. Output Function: The output of a unit is a function of its state. In some models the output of a unit exactly equals its activation. Very often, however, the output function is some sort of threshold function, so that the unit has no effect on other units unless its activation rises above a certain value.

Although these key terms stay the same, neural networks differ in network structure, algorithm and purpose. In the following section, different types of neural networks and their areas of use will be discussed.
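To make the three node functions concrete before turning to specific architectures, here is a minimal Python sketch of a single processing unit, assuming a weighted-sum input function, a binary threshold activation and an identity output function. The names (Unit, step) are illustrative, not from the paper:

```python
import numpy as np

def step(x, threshold=0.0):
    """Discrete activation: 1 if the total stimulus exceeds the threshold."""
    return 1 if x > threshold else 0

class Unit:
    def __init__(self, weights, threshold=0.0):
        self.weights = np.asarray(weights, dtype=float)  # link weights w_ij
        self.threshold = threshold
        self.state = 0  # activation a_i

    def update(self, inputs):
        # Input function: total stimulus s_i^total = sum_j w_ij * s_ij
        s_total = float(np.dot(self.weights, inputs))
        # Activation function: new state from the total stimulus
        self.state = step(s_total, self.threshold)
        # Output function: identity, the unit emits its activation
        return self.state

unit = Unit(weights=[0.5, -0.3, 0.8], threshold=0.2)
print(unit.update([1, 1, 1]))  # 0.5 - 0.3 + 0.8 = 1.0 > 0.2 -> 1
```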
Neural Network Architectures

Hopfield: A Hopfield network is a fully connected network: each unit receives input from all other units. There is no distinction between input units, hidden units and output units. When an input pattern is presented, all units take their initial state from the input pattern. When the network reaches a stable state, the output is represented by the states of the N units as a binary word of N bits. If we denote the link weight between units i and j by w_ij and assume that the output of a unit equals its activation, then the total signal received by unit i is s_i^total = Σ_{j≠i} w_ij a_j. For each unit there is a fixed threshold T_i, and the state change of unit i is determined by a_i = 1 if s_i^total > T_i and a_i = 0 otherwise. Hopfield networks store information in the local stable points of their state space, which are referred to as attractors. Information retrieval is performed by state evolution: the information is retrieved when the state evolution reaches a local stable point. The Hopfield structure is very effective in the implementation of associative memories. Associative memory works much as our mind does: if we are looking for someone's name, for instance, it helps to know where we met this person or what they look like. With this information as input, our memory will usually come up with the right name. A memory is called associative if it permits the recall of information based on partial knowledge of its contents. Associative memory is a well-known tool in the AI field, and the Hopfield network architecture is closely related to AI through its implementation of associative memory.
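As an illustration of this recall-by-state-evolution, the following is a hedged sketch of Hopfield-style associative recall, assuming bipolar (-1/+1) states, Hebbian weight learning and zero thresholds T_i:

```python
import numpy as np

def train(patterns):
    """Hebbian weights: w_ij = sum over patterns of x_i * x_j, zero diagonal."""
    w = patterns.T @ patterns
    np.fill_diagonal(w, 0)
    return w

def recall(w, state, steps=20):
    """Asynchronous state evolution toward a stored attractor."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            # s_i^total = sum over j != i of w_ij * a_j (diagonal is zero)
            s_total = w[i] @ state
            state[i] = 1 if s_total > 0 else -1
    return state

stored = np.array([[1, -1, 1, -1, 1, -1]])
w = train(stored)
noisy = np.array([1, -1, 1, -1, -1, -1])  # partial / corrupted pattern
print(recall(w, noisy))                   # evolves back to the stored attractor
```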
Boltzmann: The Boltzmann machine divides all network nodes into three groups: input nodes, output nodes, and hidden nodes. For learning, the network is run in two "phases": the clamped phase and the free phase. During the clamped phase, the input units are set to their corresponding values (depending on the input we want to give the network) and the output nodes are set to whatever values the network should produce for this input. During the free phase, only the values of the input units are set. The Boltzmann machine uses a simple modification of Hopfield's evolution formula a_i(t+1) = sgn[s_i^total(t)]. The evolution formula of the Hopfield network is deterministic; in the stochastic case, the system can only determine the probability for a unit to take one of the values -1 or +1. The probability of unit i taking the value +1, regardless of its previous state, is P(a_i(t+1) = +1) = 1 / (1 + e^(-s_i^total / T)), where T is a parameter acting like the temperature of a physical system. In the Boltzmann machine there is no guarantee of reaching the global energy minimum. The Boltzmann model is similar to the Hopfield model, but the Boltzmann machine can have hidden units, which allow it, given enough units, to learn arbitrary functions.
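The stochastic rule above can be sketched directly; the loop over several temperatures below is illustrative, not part of the original formulation:

```python
import math, random

def stochastic_update(s_total, T):
    """Unit takes value +1 with probability 1 / (1 + exp(-s_total / T))."""
    p_plus = 1.0 / (1.0 + math.exp(-s_total / T))
    return 1 if random.random() < p_plus else -1

# At high temperature the unit behaves almost randomly; as T decreases it
# approaches the deterministic sign rule a_i(t+1) = sgn[s_i^total(t)].
for T in (10.0, 1.0, 0.1):
    samples = [stochastic_update(s_total=0.5, T=T) for _ in range(1000)]
    print(T, sum(1 for s in samples if s == 1) / 1000.0)
```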
Multi-layered network: A multi-layered network is a feedforward network. Three or more layers of artificial neurons are used, with one layer representing the input data and one layer representing the corresponding output. Between these layers, one or more intermediate layers contain a variable number of nodes that give the network sufficient complexity to represent complex, non-linear relationships between inputs and outputs. Multi-layered perceptrons are, in theory, capable of solving a wide range of problems. However, as the scale of many problems increases, or requirements change, multi-layered perceptrons may fail to learn or become impractical to implement. An alternative is known as a Master-Slave architecture, an associative learning paradigm: using competitive and suggestive learning, inputs are distributed across all available categorization units without the need for a priori knowledge.
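The following is a minimal sketch of a forward pass through such a multi-layered feedforward network; the layer sizes and the sigmoid non-linearity are assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, layers):
    """layers is a list of (weight matrix, bias vector) pairs."""
    a = x
    for w, b in layers:
        a = sigmoid(w @ a + b)  # each layer: weighted sum, then non-linearity
    return a

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),  # input (3) -> hidden (4)
          (rng.normal(size=(2, 4)), np.zeros(2))]  # hidden (4) -> output (2)
print(forward(np.array([0.2, -0.7, 1.0]), layers))
```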
Kohonen: Kohonen learning is an enhancement of competitive learning that extends the competition over spatial neighborhoods. While competitive learning updates only the weights of the winning output, Kohonen learning also updates the weights within a neighborhood of the winning output. The network type is self-organizing, and the associated training is unsupervised.
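A sketch of one Kohonen learning step on a one-dimensional map, in which the winning unit and its spatial neighbors all move toward the input; the learning rate and neighborhood radius are illustrative parameters:

```python
import numpy as np

def kohonen_step(weights, x, lr=0.1, radius=1):
    # Winner: the unit whose weight vector is closest to the input
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    for i in range(len(weights)):
        if abs(i - winner) <= radius:            # within the neighborhood
            weights[i] += lr * (x - weights[i])  # move toward the input
    return winner

rng = np.random.default_rng(1)
weights = rng.random((5, 2))    # 5 map units, 2-D inputs
for x in rng.random((100, 2)):  # unsupervised: no target labels
    kohonen_step(weights, x)
print(weights)
```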
Adaptive Resonance Theory: The term "resonance" refers here to the so-called resonant state of the network, in which a category prototype vector matches the current input vector closely enough that the orienting subsystem does not generate a reset signal to the second layer. In this case, the activity pattern in the first layer causes the same second-layer node to be selected, which in turn sends the same prototype vector down to the first layer, which again matches the current input closely enough, and so on. The network learns only in its resonant state. An ART network can develop stable clusterings of arbitrary sequences of input patterns by self-organization.
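A rough sketch of the matching cycle described above, using a simplified ART-1-style match ratio as the vigilance test; this specific criterion and the helper name art_classify are assumptions, not the full ART model:

```python
import numpy as np

def art_classify(x, prototypes, vigilance=0.75):
    """Try prototypes in order of activation; accept the first that resonates."""
    order = np.argsort([-(p @ x) for p in prototypes])  # best candidates first
    for j in order:
        # Vigilance test: how much of the input does the prototype match?
        match = np.sum(np.minimum(prototypes[j], x)) / max(np.sum(x), 1)
        if match >= vigilance:  # resonance: no reset signal, learn here
            prototypes[j] = np.minimum(prototypes[j], x)  # refine prototype
            return j
        # otherwise: reset, try the next candidate
    prototypes.append(x.copy())  # no resonance anywhere: new category
    return len(prototypes) - 1

protos = []
print(art_classify(np.array([1, 1, 0, 0]), protos))  # -> 0 (new category)
print(art_classify(np.array([1, 1, 1, 0]), protos))  # match 2/3 < 0.75 -> 1
```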
Hybrid Intelligent Systems

The architectures presented above provide suitable environments for different types of AI problems. Neural networks can also be combined with other systems, such as Expert Systems, Fuzzy Systems and Genetic Algorithms; the resulting system is called a hybrid intelligent system. There are five types of models that represent the integration of such intelligent systems: stand-alone models, transformations, loose coupling, tight coupling and full integration. As the names suggest, the models differ in the level of integration between the two systems. Neural networks, as components of hybrid intelligent systems, help to improve the intelligence level of both the other system and themselves. It is possible to construct hybrid systems that mitigate the limitations of the individual technologies and exploit their strengths, producing systems more powerful than those that could be built with a single technology. The idea is that neural networks are powerful tools on their own, but combining them with other systems yields even more powerful learning tools in AI.
Conclusion

Neural networks are a form of non-symbolic artificial intelligence. There are many different types of neural networks; however, most of them consist of simple processing units (called neurons or nodes) which are highly interconnected. The strength of neural networks in processing data comes from their ability to distribute information in parallel through the neurons, enabling them to perform quite complex tasks relatively rapidly. The most common architectures are the Hopfield network, the Boltzmann machine, the multi-layered network, the Kohonen network and Adaptive Resonance Theory. Problems in AI have different features, so to obtain an optimal solution one must select a suitable architecture for the neural network implementation intended to solve a given problem. The main reason for using neural networks is to attack complex problems in the field of AI. There are other types of systems besides neural networks: Expert Systems, Fuzzy Systems and Genetic Algorithms can be integrated with neural networks to solve complex problems in machine learning; such systems are called hybrid intelligent systems. Neural networks are very powerful tools for intelligent systems. Although there are limitations in terms of the complexity of the neural circuit and the lack of representation in very complex systems, research to improve the performance and capabilities of neural networks is ongoing.