
Değişken bağımlılığı ve döngülerde paralellik

Data dependency and parallelism in loops

  1. Thesis No: 19253
  2. Author: GÜLDEN ÇEVİK
  3. Advisor: Assoc. Prof. Dr. BÜLENT ÖRENCİK
  4. Thesis Type: Master's
  5. Subjects: Electrical and Electronics Engineering
  6. Keywords: Not specified.
  7. Year: 1991
  8. Language: Turkish
  9. University: İstanbul Teknik Üniversitesi
  10. Institute: Fen Bilimleri Enstitüsü
  11. Department: Not specified.
  12. Discipline: Not specified.
  13. Pages: 70

Abstract

Compilers that automatically detect concurrency in loops use one of several synchronization methods. Some of these methods are listed below; a minimal C sketch of each strategy follows the abstract.

  1. One method is to synchronize on every data dependence (random synchronization). The compiler looks for data dependences that cross from one iteration to another and adds an appropriate synchronization primitive for each such dependence. Random placement of synchronization is very flexible but may require many synchronization points; depending on how the system implements synchronization, this may require too many synchronization registers. There are two ways to improve the speedup of random synchronization: one is to reorder the statements to improve overlap, and the other is to eliminate covered dependences.

  2. Another method is to divide the loop into segments. Synchronization is added so that segment S of iteration i is executed only after segment S of iteration i-1 has completed; the iterations are thus pipelined through the processors. The segments are created so that every data dependence either stays within the same pipeline segment or goes to a lexically later segment, so the pipeline method allows only lexically forward dependences. Random synchronization is very flexible, but optimizing it is very difficult in the general case; pipeline synchronization is more restricted but easier to implement, and the number of synchronization points is controlled, whereas with random synchronization it may grow uncontrollably. However, because of the segmentation, pipelining may unnecessarily serialize code that need not be synchronized.

  3. A third method is to place barriers at various points of the loop. No iteration can pass a barrier until all iterations have reached it. This strategy also allows only lexically forward dependences; when the only dependence relation is lexically forward, barrier synchronization allows more parallelism than pipelining. Problems with barrier synchronization occur when temporary variables are used in the loop: if a statement using a temporary is executed for all iterations in parallel, the value belonging to a single iteration is lost. This problem can be solved by using iteration-local variables, and a compiler that translates serial loops into concurrent loops will recognize the need for such iteration-local variables.

  4. A fourth method is to find the sections of the loop where communication is concentrated and turn these sections into critical sections. An advantage of critical sections is that they handle backward dependences. The compiler's goal is to insert as few critical sections as necessary and to make them as small as possible, because speedup is limited by the size of the largest critical section. When there are only a few lexically backward dependences, critical sections provide as much parallelism as random synchronization.

  5. Another method is to divide the loop, according to a specific algorithm, into blocks and to execute each block for all iterations on a separate processor. The algorithm specifies which blocks can execute concurrently. This method also handles backward dependences.

Powerful compiler systems, such as the Parafrase Analyzer at the University of Illinois, PFC at Rice University, and PTRAN at IBM, use these mechanisms.
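
Below is a minimal sketch of random (point-to-point) synchronization, in C with C11 atomics and POSIX threads. The loop, the array names, and the post/wait helpers are hypothetical illustrations, not taken from the thesis: the single cross-iteration flow dependence, a[i-1] -> a[i], receives one post/wait pair, i.e. one synchronization flag ("register") per produced value.

/* Random synchronization: one post/wait pair per cross-iteration
 * dependence.  Hypothetical loop: a[i] = a[i-1] + i. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define N 8
static long a[N + 1];
static atomic_int ready[N + 1];            /* one sync flag per value of a[i] */

static void post(int i) { atomic_store_explicit(&ready[i], 1, memory_order_release); }
static void wait_for(int i) {
    while (!atomic_load_explicit(&ready[i], memory_order_acquire))
        ;                                  /* spin until the producer posts */
}

static void *iteration(void *arg) {
    int i = (int)(long)arg;
    wait_for(i - 1);                       /* wait: about to read a[i-1] */
    a[i] = a[i - 1] + i;                   /* loop body */
    post(i);                               /* post: a[i] is now available */
    return NULL;
}

int main(void) {
    pthread_t t[N + 1];
    a[0] = 1;
    post(0);                               /* iteration 1's input is ready */
    for (int i = 1; i <= N; i++)           /* one thread per iteration, for clarity */
        pthread_create(&t[i], NULL, iteration, (void *)(long)i);
    for (int i = 1; i <= N; i++)
        pthread_join(t[i], NULL);
    printf("a[%d] = %ld\n", N, a[N]);      /* prints a[8] = 37 */
    return 0;
}

Every further cross-iteration dependence would need another flag array, which is exactly how the number of synchronization points can grow uncontrollably.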
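
Next, a minimal pipeline-synchronization sketch under the same assumptions (a hypothetical two-segment loop body; all names invented for illustration). Segment S1 of iteration i may start only after S1 of iteration i-1 has left that segment, and likewise for S2, so iterations flow through the segments like pipeline stages; note that only lexically forward dependences (here S1 -> S2) fit this scheme.

/* Pipeline synchronization: per-segment "last iteration done" counters. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define N 6
static atomic_int seg_done[2];             /* highest iteration finished, per segment
                                              (zero-initialized: iteration 0 counts as done) */
static long x[N + 1], y[N + 1];

static void enter(int s, int i) {          /* wait until iteration i-1 left segment s */
    while (atomic_load_explicit(&seg_done[s], memory_order_acquire) < i - 1)
        ;
}
static void leave(int s, int i) {          /* record that iteration i finished segment s */
    atomic_store_explicit(&seg_done[s], i, memory_order_release);
}

static void *iteration(void *arg) {
    int i = (int)(long)arg;
    enter(0, i);                           /* --- segment S1 --- */
    x[i] = x[i - 1] + 1;                   /* dependence stays inside S1 */
    leave(0, i);
    enter(1, i);                           /* --- segment S2 --- */
    y[i] = x[i] * 10;                      /* lexically forward dependence S1 -> S2 */
    leave(1, i);
    return NULL;
}

int main(void) {
    pthread_t t[N + 1];
    for (int i = 1; i <= N; i++)
        pthread_create(&t[i], NULL, iteration, (void *)(long)i);
    for (int i = 1; i <= N; i++)
        pthread_join(t[i], NULL);
    printf("y[%d] = %ld\n", N, y[N]);      /* prints y[6] = 60 */
    return 0;
}

Only two counters are needed regardless of how many dependences each segment contains, which is why the number of synchronization points stays controlled.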
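
A minimal barrier sketch follows (POSIX barriers; the loop and data are invented for illustration). All iterations execute statement S1, meet at the barrier, and only then execute S2, which reads a value written by S1 of another iteration; the temporary t is iteration-local (one copy per thread's stack), which is the fix the abstract describes for the lost-temporary problem.

/* Barrier synchronization with an iteration-local temporary. */
#include <pthread.h>
#include <stdio.h>

#define N 4
static pthread_barrier_t bar;
static long a[N] = {1, 2, 3, 4}, b[N], c[N];

static void *iteration(void *arg) {
    int i = (int)(long)arg;
    long t = a[i] * 2;                     /* iteration-local temporary */
    b[i] = t;                              /* S1: publish b[i] */
    pthread_barrier_wait(&bar);            /* every S1 done before any S2 starts */
    c[i] = b[i] + (i > 0 ? b[i - 1] : 0);  /* S2: lexically forward dependence */
    return NULL;
}

int main(void) {
    pthread_t t[N];
    pthread_barrier_init(&bar, NULL, N);   /* N parties must arrive */
    for (int i = 0; i < N; i++)
        pthread_create(&t[i], NULL, iteration, (void *)(long)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    printf("c[%d] = %ld\n", N - 1, c[N - 1]);  /* prints c[3] = 14 */
    pthread_barrier_destroy(&bar);
    return 0;
}

Had t been a single shared variable, concurrent iterations would overwrite it and the per-iteration value would be lost, exactly as the abstract notes.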

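A minimal critical-section sketch (a hypothetical reduction loop; names invented). The update of the shared variable sum is the lexically backward dependence; only that update is fenced by the mutex, while the independent work stays outside, keeping the critical section as small as possible since the largest critical section bounds the speedup.

/* Critical section around the communication-heavy statement only. */
#include <pthread.h>
#include <stdio.h>

#define N 4
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long a[N] = {1, 2, 3, 4};
static long sum;

static void *iteration(void *arg) {
    int i = (int)(long)arg;
    long v = a[i] * a[i];                  /* independent work, fully parallel */
    pthread_mutex_lock(&lock);             /* critical section: shared update only */
    sum += v;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++)
        pthread_create(&t[i], NULL, iteration, (void *)(long)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    printf("sum = %ld\n", sum);            /* prints sum = 30 */
    return 0;
}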
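
Finally, a minimal block-partitioning sketch. The dependence analysis is assumed here, not computed: suppose the analysis shows the loop body splits into two statement blocks with no dependence between them, so each block runs over all iterations on its own processor (all names hypothetical).

/* Block partitioning: independent statement blocks run concurrently. */
#include <pthread.h>
#include <stdio.h>

#define N 4
static long a[N] = {1, 2, 3, 4}, c[N] = {5, 6, 7, 8}, b[N], d[N];

static void *block1(void *arg) {           /* block B1, all iterations */
    (void)arg;
    for (int i = 0; i < N; i++)
        b[i] = a[i] * 2;
    return NULL;
}

static void *block2(void *arg) {           /* block B2, independent of B1 */
    (void)arg;
    for (int i = 0; i < N; i++)
        d[i] = c[i] + 1;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, block1, NULL);
    pthread_create(&t2, NULL, block2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("b[0]=%ld d[0]=%ld\n", b[0], d[0]);   /* prints b[0]=2 d[0]=6 */
    return 0;
}

A block containing a backward dependence can simply be kept serial on one processor while the other blocks proceed, which is how this scheme handles backward dependences.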

Similar Theses

  1. İstanbul'daki yeme bağımlılığı bulunan bireylerin pozitif ve negatif duygu durumlarının incelenmesi

    Examination of the relationship between food addiction and positive-negative emotions in Istanbul

    AYŞEGÜL ÖZKULA

    Master's

    Turkish

    2019

    Nutrition and Dietetics, Üsküdar Üniversitesi

    Clinical Psychology Department

    Asst. Prof. Dr. HÜSEYİN ÜNÜBOL

  2. Erken erişkinlerde internet bağımlılığı ve cinsel kompulsiyonlar arasındaki ilişkide yalnızlığın aracı rolünün incelenmesi

    Mediation of loneliness in the relationship between adults' internet addiction and sexual compulsions

    YASEMİN YALÇIN

    Master's

    Turkish

    2019

    Psychology, Üsküdar Üniversitesi

    Clinical Psychology Department

    Assoc. Prof. Dr. GÜL ERYILMAZ

  3. Karışık madde bağımlısı olgularının erken dönem uyumsuz şemaları, başa çıkma tutumları ve benlik saygıları yönünden karşılaştırılması

    Comparison of mixed-substance abuse cases in terms of patients' early maladjustment schemas, coping behavior, and self-esteem

    ŞİMAL ÇAVDAR

    Master's

    Turkish

    2016

    Psychology, Üsküdar Üniversitesi

    Clinical Psychology Department

    Asst. Prof. Dr. CEMAL ONUR NOYAN

  4. Üniversite öğrencilerinde yeme bağımlılığının dürtüsellik ve beden algısıyla ilişkisi

    Relationship between food addiction and impulsivity and body image among university students

    EZGİ GENEL

    Master's

    Turkish

    2018

    Psychology, Üsküdar Üniversitesi

    Clinical Psychology Department

    Asst. Prof. Dr. FATMA DUYGU KAYA YERTUTANOL

  5. External shocks and macroeconomic policies in Africa

    Afrika'daki dışsal şoklar ve makroekonomik politikalar

    YACOUBA KASSOURI

    PhD

    English

    2020

    Economics, Erciyes Üniversitesi

    Economics Department

    Prof. Dr. HALİL ALTINTAŞ