Kontrol sistemlerinde yapay sinir ağı uygulamaları
Title translation not available.
- Thesis No: 55536
- Advisor: PROF. DR. LEYLA GÖREN
- Thesis Type: Master's
- Subjects: Computer Engineering and Computer Science and Control
- Keywords: Not specified.
- Year: 1996
- Language: Turkish
- University: İstanbul Teknik Üniversitesi
- Institute: Fen Bilimleri Enstitüsü
- Department: Not specified.
- Discipline: Not specified.
- Number of Pages: 36
Abstract
ABSTRACT In this thesis, the CMAC (Cerebellar Model Arithmetic Computer) approach, which belongs to the increasingly widely used field of Artificial Neural Networks, is studied and simulation results for several applications are presented. Chapter 1 gives a brief overview of artificial neural networks. Chapter 2 presents the neurological model on which CMAC is based, examines the mapping and learning algorithms used, describes its main properties, and compares it with other artificial neural network models. In Chapter 3, CMAC is applied to learning single-variable functions, solving inverse kinematics equations, and system identification, and the simulation results obtained are presented.
Abstract (Translation)
SUMMARY

ARTIFICIAL NEURAL NETWORK APPLICATIONS IN CONTROL SYSTEMS

The recent resurgence of interest in neural networks has its roots in the recognition that the brain performs computations in a different manner than conventional digital computers. Computers are extremely fast and precise at executing the sequences of instructions formulated for them. The human information processing system, by contrast, is composed of neurons switching at speeds about a million times slower than computer gates; yet humans are more efficient than computers at computationally complex tasks such as speech understanding. Moreover, not only humans but even animals can process visual information better than the fastest computers. The computational promise of neural networks rests on the hope that some of the flexibility and power of the human brain can be reproduced by artificial means.

Network computation is performed by a dense mesh of computing nodes and connections, which operate collectively and simultaneously on most or all of the data and inputs. The basic processing elements of neural networks are called artificial neurons, or simply neurons; they are also often called nodes. Neurons act as summing and nonlinear mapping junctions, and in some cases can be regarded as threshold units that fire when their total input exceeds a certain bias level. Neurons usually operate in parallel and are configured in regular architectures. They are often organized in layers, and feedback connections both within a layer and toward adjacent layers are allowed. Each connection strength is expressed by a numerical value called a weight, which can be modified. The attractive feature of neural networks is their learning capability: they can learn complex nonlinear relationships through a training procedure that changes the connection weights with a learning algorithm. Different architectures and different learning algorithms are used for different purposes.

The CMAC (Cerebellar Model Arithmetic Computer) algorithm, a less widely known artificial neural network technique, imitates the human cerebellum. The cerebellum, which is attached to the midbrain and nestles under the visual cortex, is intimately involved in the control of rapid, precise, coordinated movements of the limbs, hands and eyes. Injury to the cerebellum results in motor deficiencies such as overshoot in reaching for objects, lack of coordination, and inability to execute delicate tasks or track precisely with the eyes. Input to the cerebellum arrives in the form of sensory and proprioceptive feedback from the muscles, joints and skin, together with commands from higher-level motor centres concerning the movement to be performed. This input constitutes an address whose contents are the muscle actuator signals required to carry out the desired movement. At each point in time the input addresses an output which drives the muscle control circuits; the resulting motion produces a new input, and the process is repeated. The result is a trajectory of the limb through space: at each point on the trajectory the state of the limb is sent to the cerebellum as input, and the cerebellar memory responds with actuator signals which drive the limb to the next point on the trajectory.
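The summing, thresholding and weight-adjustment behaviour described above can be illustrated with a minimal sketch. The following is only a toy, perceptron-style example; it is not code from the thesis, and the function names, learning rate and the AND task are illustrative choices.

```python
# Minimal sketch of one artificial neuron as a summing / threshold unit
# (illustrative only; not taken from the thesis).

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs followed by a hard threshold (the unit "fires" or not)."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if s > 0.0 else 0.0

def train_step(inputs, weights, bias, target, rate=0.1):
    """Learning by modifying the connection weights in proportion to the output error."""
    error = target - neuron(inputs, weights, bias)
    new_weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + rate * error
    return new_weights, new_bias

# Example: learning the logical AND of two inputs
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for x, t in samples:
        w, b = train_step(x, w, b, t)
print([neuron(x, w, b) for x, _ in samples])   # expected: [0.0, 0.0, 0.0, 1.0]
```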
With N input variables, each having R distinguishable levels, there are R^N possible inputs. It is assumed that the brain does not have enough cells to store a control value for every one of these inputs and therefore uses a memory-management algorithm to solve the problem.

The theory of CMAC was first proposed by J. S. Albus. CMAC is a model of neuromuscular control in which control functions for many degrees of freedom can be computed simultaneously in a table look-up fashion rather than by the mathematical solution of simultaneous equations. The CMAC algorithm can be classified as an advanced look-up-table technique with learning capability. Normal look-up-table techniques transform an input vector into a pointer that indexes a table, so each input vector is uniquely coupled to a single weight (table location). Such techniques suffer from several problems:

- The input signals must be quantized, which adds quantization noise; the quantization interval cannot be chosen too small, because the table size would grow.
- The size of the table increases exponentially with the number of inputs.
- Output values corresponding to nearby input vectors are not correlated, because every weight is uniquely coupled to one point of the quantized input space. Smooth, continuous functions produce similar outputs for nearby inputs, but normal look-up tables do not exploit this property.

Despite these disadvantages, look-up-table techniques have two major advantages:

- A function of any shape can be learned, because the function is stored numerically, so linearity is not a constraint.
- They can be very fast when simple addressing algorithms are used.

CMAC extends the normal look-up-table method with distributed storage and a generalization algorithm. Distributed storage means that a CMAC output value is stored as the sum of a number of weights. The generalization algorithm exploits this feature in a clever way: nearby input vectors address many weights in common. The number of selected (addressed) weights (memory locations) is called p, the generalization number. In normal CMAC applications p > 1; for p = 1 the algorithm reduces to a simple look-up table. As a result of generalization, CMAC memory addresses in the same neighborhood are not independent: storing data at any point alters the values stored at neighboring points, so training is not required for every possible input vector in the input space.

The CMAC table is trained by taking the difference between the actual and desired output and distributing equal shares of that difference to the p addressed memory locations (see Part 2 for details). Adjusting p locations instead of all the weights of the entire network is a significant advantage of CMAC over other neural network algorithms; the result is fast learning and fast response. Because function values are simply stored, CMAC can generate a reasonable output only in a region where learning has been performed. Compared with normal look-up methods, CMAC offers memory savings and generalization. Compared with other neural network algorithms, CMAC can be trained in significantly shorter time, because for each training sample only a small subset of the memory table is adjusted instead of the entire network; the computational effort is lower, but the memory consumption is higher.

In Part 3, CMAC is used for the following tasks (a minimal code sketch of the idea follows this list):

- learning single-variable functions and demonstrating its properties,
- solving inverse kinematics equations,
- system identification.
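The sketch below illustrates the CMAC mechanism just described for one input variable: the input is quantized, p overlapping tilings address p weights (distributed storage and generalization, since neighbouring inputs share most of their weights), the output is the sum of those weights, and training distributes equal shares of the output error over them. It is an assumption-laden reconstruction, not the thesis implementation; the class name, parameter values and the sine target function are my own choices.

```python
# Minimal 1-D CMAC sketch (illustrative only; not the thesis code).
import math

class CMAC1D:
    def __init__(self, p=8, resolution=200, x_min=0.0, x_max=1.0, beta=1.0):
        self.p = p                      # generalization number: weights addressed per input
        self.res = resolution           # number of distinguishable input levels R
        self.x_min, self.x_max = x_min, x_max
        self.beta = beta                # learning rate (1.0 = full one-shot correction)
        # one weight table per overlapping tiling (distributed storage)
        n_cells = resolution // p + 2
        self.w = [[0.0] * n_cells for _ in range(p)]

    def _active_cells(self, x):
        """Quantize x and return the p (tiling, cell) pairs it addresses."""
        q = int((x - self.x_min) / (self.x_max - self.x_min) * (self.res - 1))
        q = max(0, min(self.res - 1, q))
        # tiling i is shifted by i quantization steps and its cells are p levels wide,
        # so nearby inputs share most of their p addressed weights (generalization)
        return [(i, (q + i) // self.p) for i in range(self.p)]

    def predict(self, x):
        # the output is the sum of the p addressed weights
        return sum(self.w[i][c] for i, c in self._active_cells(x))

    def train(self, x, target):
        # distribute equal shares of the output error over the p addressed weights
        error = target - self.predict(x)
        for i, c in self._active_cells(x):
            self.w[i][c] += self.beta * error / self.p

# Example: learning a single-variable function, in the spirit of Part 3
net = CMAC1D()
for _ in range(50):
    for k in range(200):
        x = k / 199.0
        net.train(x, math.sin(2 * math.pi * x))
print(round(net.predict(0.25), 3))   # should be close to sin(pi/2) = 1.0
```

With p = 1 the same code degenerates to a plain look-up table, matching the remark above that generalization disappears for p = 1.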
Similar Theses
- Çok düzeyli statik bellek gözesi ve kohonen türü yapay sinir ağına uygulanması
Multiple valued static storage cell and its application to Kohonen type neural network
NURETTİN YAMAN ÖZELÇİ
PhD
Turkish
1999
Electrical and Electronics Engineering
İstanbul Teknik Üniversitesi
PROF. DR. UĞUR ÇİLİNGİROĞLU
- Yapay zekâ yöntemleri kullanılarak akıllı ev sistemi geliştirilmesi
Development of smart home system using artificial intelligence methods
CEVDET TAMER GÜVEN
Master's
Turkish
2020
Computer Engineering and Computer Science and Control
Mersin Üniversitesi
Department of Computer Engineering
ASST. PROF. DR. MEHMET ACI
- Hava araçları kokpitlerinde makine öğrenmesi tabanlı tahmine dayalı kullanıcı arayüzü
Machine learning prediction based UI for aircraft cockpit
BİLGE TOPAL
Master's
Turkish
2024
Computer Engineering and Computer Science and Control
İstanbul Teknik Üniversitesi
Department of Computer Science and Engineering
PROF. DR. BEHÇET UĞUR TÖREYİN
- Rüzgar-fotovoltaik hibrit güç sistemlerinin yapay sinir ağları ile kontrolü
Artificial neural networks for controlling wind-PV power systems
KERİM KARABACAK
PhD
Turkish
2016
Electrical and Electronics Engineering
Ege Üniversitesi
Department of Solar Energy
ASSOC. PROF. DR. NUMAN SABİT ÇETİN
- FPGA tabanlı uzun kısa-süreli bellek yapay sinir ağı ile darbesel sinyal tespiti
FPGA based long short-term memory artificial neural network for pulse signal detection
ERDOĞAN BERKAY TEKİNCAN
Master's
Turkish
2022
Computer Engineering and Computer Science and Control
Başkent Üniversitesi
Department of Computer Engineering
ASST. PROF. DR. TÜLİN ERÇELEBİ AYYILDIZ
DR. NİZAM AYYILDIZ