Değişken bağımlılığı ve döngülerde paralellik
Data dependency and parallelism on loops
- Thesis No: 19253
- Advisor: Assoc. Prof. Dr. BÜLENT ÖRENCİK
- Thesis Type: Master's
- Subjects: Electrical and Electronics Engineering
- Keywords: Not specified.
- Year: 1991
- Language: Turkish
- University: İstanbul Teknik Üniversitesi
- Institute: Fen Bilimleri Enstitüsü
- Department: Not specified.
- Discipline: Not specified.
- Page Count: 70
Abstract
Compilers that automatically detect concurrency in loops use one of several synchronization methods. Some of these methods are listed below.

1) One method is to synchronize on every data dependency (random synchronization). The compiler looks for data dependences that cross from one iteration to another and adds an appropriate synchronization primitive for each such dependency. Random placement of synchronization is very flexible but may require many synchronization points; depending on how the system implements synchronization, this may require too many synchronization registers. There are two ways to improve the speedup of random synchronization: one is to reorder the statements to improve overlap, and the other is to eliminate covered dependencies. (A post/wait sketch of this method follows this list.)

2) Another method is to divide the loop into segments. Synchronization is added so that segment S of iteration i is executed only after segment S of iteration i-1 is completed; the iterations are pipelined through the processors. The segments are created so that all data dependency relationships stay in the same pipeline segment or go to a lexically later segment, so the pipeline method allows only lexically forward dependencies. Random synchronization is very flexible, but optimizing that strategy is very difficult in the general case. Pipeline synchronization is more restricted but easier to implement, and the number of synchronization points is controlled, whereas with random synchronization it may grow uncontrollably. However, because of its segmentation, pipelining may unnecessarily break code that need not be synchronized.

3) A third method is to place barriers at various points of the loop. No iteration can pass beyond a barrier until all iterations reach it. This strategy, too, allows only lexically forward dependencies. When the only dependence relation is lexically forward, barrier synchronization allows more parallelism than pipelining. Problems with barrier synchronization occur when temporary variables are used in the loop: if a temporary variable is assigned in a statement that is executed for all iterations, the value for each single iteration is lost. This problem can be solved by using iteration-local, dimensioned variables; a compiler that translates serial loops into concurrent loops will recognize the need for such iteration-local variables. (A barrier sketch with an iteration-local temporary appears at the end of this abstract.)
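As an illustration of method 1, the following is a minimal sketch (not from the thesis; the loop body and the array names a, b, d are hypothetical) of post/wait synchronization on a single cross-iteration dependence, written in C with OpenMP 4.5 doacross directives standing in for the synchronization primitives:

```c
/* One cross-iteration dependence: S2 reads a[i-1], which S1 of the
   previous iteration writes. depend(source) plays the role of the
   post after S1; depend(sink: i-1) is the wait before S2.
   Compile with e.g. `gcc -fopenmp` (requires OpenMP >= 4.5). */
#include <stdio.h>

#define N 16

int main(void) {
    double a[N], b[N], d[N];
    for (int i = 0; i < N; i++) b[i] = i;

    #pragma omp parallel for ordered(1)
    for (int i = 0; i < N; i++) {
        a[i] = b[i] * 2.0;                       /* S1: defines a[i]      */
        #pragma omp ordered depend(source)       /* post: a[i] is ready   */
        #pragma omp ordered depend(sink: i - 1)  /* wait on iteration i-1 */
        d[i] = (i > 0) ? a[i - 1] + b[i] : b[i]; /* S2: uses a[i-1]       */
    }

    for (int i = 0; i < N; i++)
        printf("d[%d] = %g\n", i, d[i]);
    return 0;
}
```

A compiler using random synchronization would insert one such post/wait pair per cross-iteration dependence, which is why the number of synchronization points (and synchronization registers) can grow quickly.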
Abstract (continued)
4) A fourth method is to find the sections of the loop where communication is concentrated and turn these sections into critical sections. An advantage of critical sections is that they handle backward dependencies. The compiler's goal is to insert as few critical sections as necessary and to make them as small as possible, because speedup is limited by the size of the largest critical section. When there are only a few lexically backward dependencies, critical sections provide as much parallelism as random synchronization. (A sketch is given below.)

5) Another method is to divide the loop, according to a specific algorithm, into blocks and execute them for all iterations on separate processors. The algorithm specifies which blocks can execute concurrently. This method also handles backward dependencies.

Powerful compiler systems, such as the Parafrase Analyzer at the University of Illinois, PFC at Rice University, and PTRAN at IBM, use these mechanisms.
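As an illustration of method 4, here is a minimal C/OpenMP sketch (not from the thesis; the argmax loop and the names a, best, besti are hypothetical). The bulk of each iteration runs in parallel, and only the small section that communicates across iterations through best and besti, a lexically backward dependence, is placed in a critical section:

```c
/* The running maximum is a backward dependence: each iteration's test
   reads values written by earlier iterations. Keeping the critical
   section small matters, since speedup is limited by its size.
   Compile with e.g. `gcc -fopenmp`. */
#include <stdio.h>

#define N 1000

int main(void) {
    double a[N];
    for (int i = 0; i < N; i++)
        a[i] = (double)((i * 37) % 1009);  /* distinct test data */

    double best  = -1.0;
    int    besti = -1;

    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double w = a[i] * a[i];            /* independent work, fully parallel */
        #pragma omp critical               /* small communicating section      */
        {
            if (w > best) { best = w; besti = i; }
        }
    }
    printf("max a[i]^2 = %g at i = %d\n", best, besti);
    return 0;
}
```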
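Returning to the barrier method of item 3, the sketch below (again hypothetical, in C with OpenMP) uses the implicit barrier between two worksharing loops as the barrier point, and shows why the scalar temporary of the serial loop must be expanded into an iteration-local, dimensioned variable:

```c
/* Serial original, with a scalar temporary t and a lexically forward
   cross-iteration dependence (S3 reads a[i-1], written by S2):
       for (i = 0; i < N; i++) {
           t = b[i] * 2.0;                     // S1
           a[i] = t;                           // S2
           c[i] = t + (i > 0 ? a[i-1] : 0);    // S3
       }
   With a barrier between S2 and S3, all writes to a[] finish before any
   read of a[i-1]; but t must survive the barrier separately for every
   iteration, so the scalar becomes the dimensioned array t[i].
   Compile with e.g. `gcc -fopenmp`. */
#include <stdio.h>

#define N 16

double a[N], b[N], c[N];
double t[N];   /* scalar temporary expanded to an iteration-local variable */

int main(void) {
    for (int i = 0; i < N; i++) b[i] = i;

    #pragma omp parallel
    {
        #pragma omp for                     /* phase 1: S1 and S2 */
        for (int i = 0; i < N; i++) {
            t[i] = b[i] * 2.0;
            a[i] = t[i];
        }
        /* implicit barrier here: no iteration enters phase 2 until
           every iteration has finished phase 1 */
        #pragma omp for                     /* phase 2: S3 */
        for (int i = 0; i < N; i++)
            c[i] = t[i] + (i > 0 ? a[i - 1] : 0.0);
    }

    for (int i = 0; i < N; i++)
        printf("c[%d] = %g\n", i, c[i]);
    return 0;
}
```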
Similar Theses
- İstanbul'daki yeme bağımlılığı bulunan bireylerin pozitif ve negatif duygu durumlarının incelenmesi
Examination of the relationship between food addiction and positive-negative emotions in Istanbul
AYŞEGÜL ÖZKULA
Master's
Turkish
2019
Nutrition and Dietetics, Üsküdar Üniversitesi, Department of Clinical Psychology
Asst. Prof. Dr. HÜSEYİN ÜNÜBOL
- Erken erişkinlerde internet bağımlılığı ve cinsel kompulsiyonlar arasındaki ilişkide yalnızlığın aracı rolünün incelenmesi
Mediation of loneliness in the relationship between adults' internet addiction and sexual compulsions
YASEMİN YALÇIN
Master's
Turkish
2019
Psychology, Üsküdar Üniversitesi, Department of Clinical Psychology
Assoc. Prof. Dr. GÜL ERYILMAZ
- Karışık madde bağımlısı olgularının erken dönem uyumsuz şemaları, başa çıkma tutumları ve benlik saygıları yönünden karşılaştırılması
Comparison of mixed-substance abuse cases in terms of patients' early maladjustment schemas, coping behavior, and self-esteem
ŞİMAL ÇAVDAR
Master's
Turkish
2016
Psychology, Üsküdar Üniversitesi, Department of Clinical Psychology
Asst. Prof. Dr. CEMAL ONUR NOYAN
- Üniversite öğrencilerinde yeme bağımlılığının dürtüsellik ve beden algısıyla ilişkisi
Relationship between food addiction and impulsivity and body image among university students
EZGİ GENEL
Master's
Turkish
2018
Psychology, Üsküdar Üniversitesi, Department of Clinical Psychology
Asst. Prof. Dr. FATMA DUYGU KAYA YERTUTANOL
- External shocks and macroeconomic policies in Africa
Afrika'daki dışsal şoklar ve makroekonomik politikalar
YACOUBA KASSOURI