
Rekürsif en küçük kare kafes filtreleri

Recursive least squares lattice filters

  1. Thesis No: 22074
  2. Author: SADIK ARSLAN
  3. Advisors: DOÇ. DR. AHMET H. KAYRAN
  4. Thesis Type: Master's
  5. Subjects: Electrical and Electronics Engineering
  6. Keywords: Not specified.
  7. Year: 1992
  8. Language: Turkish
  9. University: İstanbul Teknik Üniversitesi
  10. Institute: Fen Bilimleri Enstitüsü
  11. Department: Not specified.
  12. Discipline: Not specified.
  13. Page Count: 95

Abstract

There are three basic approaches in adaptive filter theory. Two of these are the approach based on Wiener filter theory and the approach based on Kalman filter theory. Both of these approaches are statistical in origin, that is, they rely on prior statistical knowledge. The third approach is based on the classical method of least squares and is distinguished from the other two by its deterministic formulation. Before recursive least-squares lattice filters, the main subject of this study, are taken up, the underlying method of least squares is examined.

The design of a Wiener filter requires statistical information about the data to be processed. The filter is optimum only when this statistical information matches that of the data actually being processed. The method of least squares is used to solve the linear filtering problem without requiring any prior statistical information.

In presenting the method of least squares, the deterministic normal equation is derived. The solution of this equation yields the optimum least-squares filter coefficients; solving it, however, requires a matrix inversion. Recursive algorithms concentrate on estimating these coefficients without this inversion.

As a foundation for the recursive least-squares algorithm, forward and backward linear prediction are explained, starting from linear prediction. For each type of prediction, the normal and augmented normal equations are derived separately, approaching the algorithm step by step.

The problem of computational complexity is solved by using a multistage lattice predictor structure to implement the adaptive filter. The recursive least-squares lattice filter algorithm is derived by obtaining its order-update and time-update recursions, and with the help of this algorithm the joint-process estimation algorithm is then obtained. The computer experiments performed show that the algorithm gives satisfactory results.
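
For reference, the deterministic normal equation mentioned above can be written compactly as follows. The symbols are generic textbook notation (time-averaged correlation matrix, cross-correlation vector and tap-weight vector), not necessarily the notation used in the thesis:

\[
\boldsymbol{\Phi}(n)\,\hat{\mathbf{w}}(n) = \mathbf{z}(n),
\qquad
\boldsymbol{\Phi}(n) = \sum_{i=1}^{n} \mathbf{u}(i)\,\mathbf{u}^{T}(i),
\qquad
\mathbf{z}(n) = \sum_{i=1}^{n} \mathbf{u}(i)\,d(i),
\]

so that the optimum least-squares tap weights are \(\hat{\mathbf{w}}(n) = \boldsymbol{\Phi}^{-1}(n)\,\mathbf{z}(n)\). The explicit matrix inversion in this solution is exactly what the recursive algorithms treated in the thesis avoid.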

Abstract (Translation)

An adaptive filter is an adaptive system: it requires a recursive operation that starts from some predetermined set of initial conditions. Recursive algorithms also reduce hardware cost, because if a nonrecursive algorithm were used to compute the system parameters, the hardware would have to be more parallel, that is, more complex and more expensive.

A recursive algorithm makes it possible for the filter to perform satisfactorily in an environment where complete knowledge of the relevant signal characteristics is not available. The algorithm starts from predetermined conditions representing complete ignorance about the environment. Yet, in a stationary environment, it is found that after successive iterations the algorithm converges to the optimum Wiener solution in some statistical sense. In a nonstationary environment, the algorithm offers a tracking capability, whereby it can follow time variations in the statistics of the input data, provided that the variations are sufficiently slow.

As a direct consequence of applying a recursive algorithm, whereby the parameters of an adaptive filter are updated from one iteration to the next, the parameters become data dependent. This means that an adaptive filter is a nonlinear device. In another context, an adaptive filter is often referred to as linear in the sense that the estimate of a quantity of interest is obtained adaptively at the output of the filter as a linear combination of the available set of observations applied to the filter input.

A wide variety of recursive algorithms have been developed in the literature for the operation of adaptive filters. In the final analysis, the choice of one algorithm over another is determined by various factors. Six basic factors are usually considered: rate of convergence, misadjustment, robustness, computational requirements, structure, and numerical properties. It is useful to explain these factors briefly. Rate of convergence is defined as the number of iterations the algorithm requires to converge close enough to the optimum solution. Misadjustment is defined as the dimensionless ratio of the steady-state value of the average excess mean-squared error to the minimum mean-squared error. Robustness refers to the ability of the algorithm to operate satisfactorily with ill-conditioned data. Computational requirements are the number of operations (multiplications, divisions and additions/subtractions) required to make one complete iteration of the algorithm, the size of the memory needed to store the data and the program, and the investment required to program the algorithm on a computer. Structure refers to the structure of information flow in the algorithm, which determines the manner in which it is implemented in hardware. For example, an algorithm whose structure exhibits high modularity, parallelism or concurrency is well suited for implementation using very-large-scale integration. The last factor is numerical properties. When an algorithm is implemented numerically, inaccuracies are produced by round-off noise and representation errors in the computer. There are two issues of concern: the manner in which an error introduced at an arbitrary point in the algorithm propagates to future time instants, and the effect and amplification of round-off noise on the output. For example, certain algorithms are known to be unstable with respect to such errors, which makes them unsuitable for continuous adaptation unless some special rescue devices are incorporated.
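
For reference, the misadjustment defined above can be written compactly (in standard notation, not necessarily the thesis's own) as

\[
\mathcal{M} = \frac{J_{\mathrm{ex}}(\infty)}{J_{\min}},
\]

where \(J_{\mathrm{ex}}(\infty)\) is the steady-state value of the average excess mean-squared error and \(J_{\min}\) is the minimum mean-squared error of the corresponding Wiener solution.
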
There are three basic approaches to adaptive filter theory, each of which offers desirable features of its own. Two of these are the approach based on Wiener filter theory and the approach based on Kalman filter theory; the theory behind both of these filters is statistically oriented. The third approach, based on the classical method of least squares, differs from the other two in that it is deterministic in its formulation right from the start. The method of least squares minimizes an index of performance that consists of the sum of weighted error squares, where the error or residual is defined as the difference between some desired response and the actual filter output. Depending on the structure used to implement the adaptive filter, three basically different classes of adaptive filtering algorithms originating from the method of least squares can be identified: recursive least-squares, recursive least-squares lattice (the subject of this thesis), and QR-decomposition least-squares lattice algorithms.

The design of a Wiener filter requires a priori information about the statistics of the data to be processed. The filter is optimum only when the statistical characteristics of the input data match the a priori information on which the design of the filter is based. The method of least squares is used to solve the linear filtering problem without invoking any assumptions on the statistics of the input applied to the filter. To illustrate the basic idea of least squares, suppose there is a set of real-valued measurements u(1), u(2), ..., u(N), made at times t1, t2, ..., tN, respectively, and the requirement is to construct a curve that fits these points in some optimum fashion. If the time dependence of this curve is denoted by f(t_i), the curve is chosen so as to minimize the sum of the squared differences between f(t_i) and u(i) for i = 1, 2, ..., N; hence the name of the method.

The method of least squares may be viewed as the deterministic counterpart of Wiener filter theory. Basically, Wiener filters are derived from ensemble averages, with the result that one filter (optimum in a probabilistic sense) is obtained for all realizations of the operational environment, which is assumed to be wide-sense stationary. The method of least squares, on the other hand, yields a different filter for each collection of input data.

The study begins by deriving the deterministic form of the normal equation defining the tap weights of a linear transversal filter that produces an estimate of some desired response from a set of inputs and is optimum in the least-squares sense. The resulting filter is referred to as the least-squares filter. The study continues by presenting the linear prediction problem. Linear prediction may be viewed as a special type of parameter estimation that is well suited to the method of least squares; here the least-squares transversal filter is used as the linear predictor. Linear prediction and its two forms, forward and backward linear prediction, are fundamental to well-known least-squares algorithms such as the fast transversal filter (FTF) algorithm and to the subject of this thesis, the recursive least-squares lattice filters.
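
To make the deterministic least-squares problem above concrete, the sketch below forms the time-averaged correlation matrix and cross-correlation vector from a block of data and solves the normal equation directly for the tap weights of a transversal filter. It is only an illustration: the function name, the exponential weighting factor `lam` and the regularization constant `delta` are assumptions of this sketch, not quantities taken from the thesis.

```python
import numpy as np

def least_squares_tap_weights(u, d, M, lam=1.0, delta=1e-6):
    """Solve the deterministic normal equation Phi w = z for an M-tap
    transversal filter, given input samples u and desired response d.

    lam is an optional exponential weighting factor and delta a small
    regularization term; both are illustrative choices.
    """
    N = len(u)
    Phi = delta * np.eye(M)      # time-averaged correlation matrix
    z = np.zeros(M)              # time-averaged cross-correlation vector
    for n in range(M - 1, N):
        x = u[n - M + 1:n + 1][::-1]   # tap-input vector [u(n), ..., u(n-M+1)]
        w = lam ** (N - 1 - n)         # exponential weight
        Phi += w * np.outer(x, x)
        z += w * x * d[n]
    # One explicit matrix inversion (solve); recursive algorithms such as
    # RLS and the LSL filter avoid exactly this step.
    return np.linalg.solve(Phi, z)

# Example: identify a known 3-tap FIR filter from noisy observations.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
h = np.array([0.5, -0.3, 0.1])
d = np.convolve(u, h)[:len(u)] + 0.01 * rng.standard_normal(len(u))
print(least_squares_tap_weights(u, d, M=3))   # should be close to h
```
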
Some information about these prediction methods is useful before going further. In forward linear prediction, the set of inputs u(i-1), u(i-2), ..., u(i-M+1), u(i-M) is used to make a linear prediction of u(i). According to this notation, the prediction is made at time (i-1), one step into the future. The forward linear prediction error is defined as the difference between the desired response u(i) and the predictor output produced by the tap inputs u(i-1), u(i-2), ..., u(i-M). When the method of least squares is used to design the predictor, its tap weights are chosen so as to minimize the sum of forward prediction-error squares, that is, the forward prediction-error energy. This way of designing a predictor is referred to as the forward linear prediction method.

In backward linear prediction, the set of inputs u(i-M+1), ..., u(i-1), u(i) is used to make a linear prediction of u(i-M). According to this notation, the prediction is made at time (i-M+1), one step into the past. The backward linear prediction error is defined as the difference between the desired response u(i-M) and the predictor output produced by the tap inputs u(i-M+1), ..., u(i-1), u(i). When the method of least squares is used to design the predictor, its tap weights are chosen so as to minimize the sum of backward prediction-error squares, that is, the backward prediction-error energy. This second way of designing a predictor is referred to as the backward linear prediction method.

In this thesis the normal equations for forward and backward linear prediction are presented in the deterministic sense. From these, the augmented normal equations for forward and backward linear prediction, which are used to construct the recursive least-squares lattice algorithm, are determined.

The issue of computational complexity is resolved by using a multistage lattice predictor as the structural basis for implementing the adaptive filter. This predictor consists of a cascade of stages, each in the form of a lattice; hence the name. An important property of the multistage lattice predictor is that its individual stages are decoupled from each other in a time-averaged sense. This property is exploited in the derivation of the recursive least-squares lattice (LSL) algorithm, which involves both time updates and order updates. The algorithm converges rapidly, is robust, and is computationally efficient. Moreover, the highly pipelined, modular structure of the multistage lattice predictor and its associated parts makes the recursive LSL algorithm well suited for implementation on a single silicon chip using very-large-scale integration technology.

The FTF algorithm uses a maximum of four transversal filters with a common input, one of which defines the impulse response of the adaptive filter. This thesis describes another class of exact least-squares algorithms based on a different, modular structure, the multistage lattice predictor. They are known collectively as least-squares lattice algorithms and involve both order-update and time-update recursions. These algorithms are as efficient as the FTF algorithm in that they essentially realize the same rate of convergence at the expense of a computational cost that increases linearly with the number of adjustable tap weights. The class of LSL algorithms is also just as robust as the RLS and FTF algorithms in that they are essentially insensitive to variations in the eigenvalue spread of the correlation matrix of the input signal.
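
The order-update and time-update recursions mentioned above can be sketched in code. The version below follows one common textbook formulation of the recursive LSL predictor using a posteriori prediction errors; the function name, variable names, forgetting factor `lam` and initialization constant `delta` are assumptions made for illustration and may differ from the thesis's own derivation and notation.

```python
import numpy as np

def lsl_prediction_errors(u, order, lam=0.99, delta=1e-2):
    """Recursive least-squares lattice (LSL) predictor, sketched in the
    a-posteriori-error form. Returns the final-order forward and backward
    prediction errors for each sample of u."""
    M = order
    F = np.full(M + 1, delta)        # forward prediction-error energies F_m(n)
    B_prev = np.full(M + 1, delta)   # backward energies at the previous step, B_m(n-1)
    delta_c = np.zeros(M)            # cross-correlations Delta_{m-1}(n)
    b_prev = np.zeros(M + 1)         # backward a posteriori errors at n-1
    gamma_prev = np.ones(M + 1)      # conversion factors at n-1

    f_out = np.zeros(len(u))
    b_out = np.zeros(len(u))

    for n, x in enumerate(u):
        f = np.zeros(M + 1)          # forward a posteriori errors at n
        b = np.zeros(M + 1)          # backward a posteriori errors at n
        B = np.zeros(M + 1)          # backward energies B_m(n)
        gamma = np.ones(M + 1)       # conversion factors at n

        # zeroth-order quantities (time update only)
        f[0] = b[0] = x
        F[0] = lam * F[0] + x * x
        B[0] = F[0]
        gamma[0] = 1.0

        # one lattice stage per order m (order updates)
        for m in range(1, M + 1):
            delta_c[m - 1] = (lam * delta_c[m - 1]
                              + b_prev[m - 1] * f[m - 1] / gamma_prev[m - 1])
            kf = -delta_c[m - 1] / B_prev[m - 1]   # forward reflection coefficient
            kb = -delta_c[m - 1] / F[m - 1]        # backward reflection coefficient
            f[m] = f[m - 1] + kf * b_prev[m - 1]
            b[m] = b_prev[m - 1] + kb * f[m - 1]
            F[m] = F[m - 1] - delta_c[m - 1] ** 2 / B_prev[m - 1]
            B[m] = B_prev[m - 1] - delta_c[m - 1] ** 2 / F[m - 1]
            gamma[m] = gamma[m - 1] - b[m - 1] ** 2 / B[m - 1]

        f_out[n], b_out[n] = f[M], b[M]
        b_prev, B_prev, gamma_prev = b, B, gamma

    return f_out, b_out
```

Each pass through the inner loop is one lattice stage: the cross-correlation term drives the forward and backward reflection coefficients, while the prediction-error energies and the conversion factor carry the exact least-squares time updates.
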
The lattice predictors in this thesis may operate in nonstationary environments, though with time-invariant reflection coefficients. As such, they are basically different from lattice predictors that are based on the assumption of an underlying stationary environment.

Because of the basic structural differences between the LSL and FTF algorithms, these two algorithms present the relevant information in different ways. The FTF algorithm presents information about the input data in the form of instantaneous values of transversal filter coefficients. By contrast, the LSL algorithm presents the information in the form of a corresponding set of reflection coefficients. Such a difference may influence the choice of one algorithm over the other, depending on the application of interest.

Finally, the least-squares lattice algorithm is suitable for environments in which no prior statistical information about the input data is available. It makes no difference whether the input data is a stationary stochastic process or not, because the algorithm presents a deterministic solution to the adaptive linear filtering problem. Moreover, this algorithm converges rapidly and is highly pipelined due to its structure.

In this work there are two computer experiments that use the adaptive LSL algorithm: one is linear prediction and the other is adaptive channel equalization. Sample outputs of both experiments are given at the end of the paper.
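
As a rough illustration of a linear-prediction experiment of the kind mentioned above (the signal model, filter order and parameters of the thesis's actual experiments are not reproduced here), one could drive the hypothetical `lsl_prediction_errors` sketch from the previous section with a low-order autoregressive process and check that the forward prediction-error power settles near the variance of the driving noise:

```python
import numpy as np

# Hypothetical test signal: an AR(2) process driven by unit-variance white noise.
rng = np.random.default_rng(1)
N = 2000
v = rng.standard_normal(N)
u = np.zeros(N)
for n in range(2, N):
    u[n] = 1.2 * u[n - 1] - 0.8 * u[n - 2] + v[n]

# Assumes the lsl_prediction_errors() sketch defined earlier is in scope.
f_err, _ = lsl_prediction_errors(u, order=2, lam=0.995)
print("prediction-error power (last 500 samples):", np.mean(f_err[-500:] ** 2))
# For a well-converged order-2 predictor this should approach the
# driving-noise variance (about 1.0 here).
```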

Similar Theses

  1. Uyarlamalı süzgeçler

    Adaptive filters

    RIDVAN AYSEL

    Master's

    Turkish

    1994

    Electrical and Electronics Engineering, İstanbul Teknik Üniversitesi

    PROF.DR. AHMET H. KAYRAN

  2. Doğrusal olmayan sistemlerin yapay sinir ağları kullanılarak kimliklendirilmesi

    Identification of nonlinear systems by using artificial neural networks

    HASAN ZORLU

    Master's

    Turkish

    2004

    Electrical and Electronics Engineering, Erciyes Üniversitesi

    Department of Electronics Engineering

    Y.DOÇ.DR. ŞABAN ÖZER

  3. Gain-scheduled air path system control and compressor air mass flow estimation in diesel engines

    Dizel motorlarda kazanç ayarlamalı hava yolu sistemi kontrolü ve kompresör hava kütlesi kestirmesi

    ERCAN ATAM

    PhD

    English

    2010

    Mechanical Engineering, Boğaziçi Üniversitesi

    Department of Mechanical Engineering

    PROF. İ. EMRE KÖSE

  4. Robust adaptive learning approach of artificial neural networks

    Yapay sinir ağları için sağlam adaptif öğrenme yaklaşımı

    ALAA ALI HAMEED HAMEED

    PhD

    English

    2017

    Computer Engineering Sciences-Computer and Control, Selçuk Üniversitesi

    Department of Computer Engineering

    YRD. DOÇ. DR. BARIŞ KOÇER

  5. Modelling, identification and passivity-based control of 6 dof industrial robot

    6 serbestlik dereceli endüstriyel robotun modellenmesi, tanılanması ve pasiflik tabanlı kontrolü

    MEHMET ALİ AKBULUT

    Master's

    English

    2020

    Computer Engineering Sciences-Computer and Control, İstanbul Teknik Üniversitesi

    Department of Mechanical Engineering

    DOÇ. DR. ZEKİ YAĞIZ BAYRAKTAROĞLU