Implementation of a DSM system based on the read-replication algorithm
(Original Turkish title: "Okunabilir kopyalama algoritmalı DSM sisteminin gerçeklenmesi")
- Thesis No: 75312
- Advisor: Assoc. Prof. Dr. TAKUHİ NADİA ERDOĞAN
- Thesis Type: Master's
- Subjects: Computer Engineering and Computer Science and Control
- Keywords: Not specified.
- Year: 1998
- Language: Turkish
- University: İstanbul Teknik Üniversitesi (Istanbul Technical University)
- Institute: Fen Bilimleri Enstitüsü (Institute of Science and Technology)
- Department: Control and Computer Engineering
- Discipline: Not specified.
- Page Count: 83
Abstract
ABSTRACT In this study, a Distributed Shared Memory (DSM) system based on the Read-Replication algorithm was developed. This algorithm was chosen on the assumption that programs running on the system would change variable values (write operations) rarely and perform read operations frequently. As the update model, a model resembling the release consistency model was used; it was chosen because it performs updates faster and more reliably. As the update protocol, the Write-Invalidate protocol was used, on the grounds that it requires fewer message transfers and makes the system work more efficiently. The system was designed to run on machines with Windows 95 installed, on a Windows NT network. Java was chosen as the programming language; thanks to Java's platform independence, the system can also be run on a network of machines running the Unix operating system with minor changes to the program. Detailed information on the development stages of the program and its user manual is given in Chapter 5.
Abstract (Translation)
SUMMARY

1. INTRODUCTION

As the need for more computing power demanded by new applications constantly increases, systems with multiple processors are becoming a necessity. However, programming such systems still requires significant effort and skill. The commercial success of multiprocessor and distributed systems will depend heavily on how favourable the programming paradigms they offer are. In this direction, numerous ongoing research efforts are focused on an increasingly attractive class of parallel computer systems: distributed shared memory systems, which are the main topic of this work.

Processor speeds and memory speeds are not the same; there is a relatively large gap between them. As a result, the memory system organization becomes one of the most critical design decisions a computer architect has to make. According to the memory system organization, systems with multiple processors can be classified into two large groups: shared memory systems and distributed memory systems.

In a shared memory system (often called a tightly coupled multiprocessor), a single global physical memory is equally accessible to all processors. The ease of programming due to the simple and general programming model is the main advantage of this kind of system. However, such systems typically suffer from increased contention in accessing the shared memory, especially in a single-bus topology, which limits their scalability, and the design of the memory system is a very complex issue.

A distributed memory system (often called a multicomputer) consists of a collection of autonomous processing nodes, each having an independent flow of control and a local memory module. Communication between processes residing on different nodes is achieved through a message-passing model, via a general interconnection network. Such a programming model imposes a significant burden on the programmer and induces considerable software overhead.
On the other hand, these systems are claimed to have better scalability and cost-effectiveness. A relatively new concept, distributed shared memory (DSM), tries to combine the best of these two approaches. A DSM system logically implements a shared memory model on a physically distributed memory system. This approach hides the mechanism of communication between remote sites from the application writer, so the ease of programming and portability typical of shared memory systems, as well as the scalability and cost-effectiveness of distributed memory systems, can be achieved with less engineering effort. Figure 1 depicts the DSM concept. Considerable research effort has recently been devoted to the building of DSM systems, although the vast majority of DSM systems are implemented only as experimental prototypes in the research laboratories of universities.

Figure 1: DSM System

2. DSM Implementation Schemes: Software and Hardware

In recent years many DSM systems have been designed in the research labs of universities. These systems can be classified as software and hardware systems. Both schemes have advantages and disadvantages, which are briefly explained below with representative sample systems.

2.1. The Software Approach

The early research projects that explored the relatively original concept of DSM started with the idea of hiding the message-passing mechanism in loosely coupled systems, typically on a network of workstations. The goal was to provide the abstraction of shared memory to the programmer. Some software-based solutions try to integrate the DSM mechanism using an existing virtual memory management system. Software support for DSM is generally more flexible and convenient for experiments than hardware implementations, but in many cases it cannot compete with hardware-level DSM in performance.
Nevertheless, the majority of DSM systems described in the open literature are based on software mechanisms, since networks of workstations are becoming more popular and powerful. The DSM concept therefore seems to be an appropriate and relatively low-cost way of using them as parallel computers. Some examples of software-based DSM systems are given below.

IVY: IVY [3] is a DSM system which implements the sequential consistency model on a ring of Apollo workstations running a modified version of the Aegis operating system. IVY uses the write-invalidate update protocol and implements multiple-reader/single-writer semantics. The granularity of access is a 1 Kbyte page; for access detection to shared memory locations, the virtual memory primitives are used. Write accesses and first read accesses to a shared page cause page faults; the page fault handler acquires the page from the current holder.

TreadMarks: TreadMarks [4] implements distributed shared memory at the user level on a network of workstations. Users can allocate shared memory using a special memory allocator. TreadMarks uses a lazy release consistency memory model. This kind of consistency model for DSM systems, called lazy release consistency (LRC), is currently evaluated in Munin and TreadMarks. LRC reduces memory-coherence-related communication with mechanisms similar to the entry consistency developed for the Midway system; its performance and correctness issues are discussed in detail in [4].

CRL: CRL [6] is an all-software, relaxed-consistency DSM system developed at MIT. It is implemented as a user-level library. It has been shown to be competitive with hardware DSM on a system with low-latency, high-bandwidth message passing.

2.2. The Hardware Approach

Hardware-level implementation of DSM mechanisms can be seen as a natural extension of the cache coherence mechanisms used in shared-memory multiprocessors with private caches.
The hardware approach has two very important advantages: complete transparency to the programmer, and generally better performance than other approaches. Since hardware implementations typically use a smaller unit of sharing (e.g., a cache block), they are less susceptible to false sharing and thrashing effects. Hardware implementations are particularly superior for applications with a high level of fine-grain sharing. These solutions are predominantly based on directory schemes, in order to achieve scalability. The use of the snooping method is limited to systems with an appropriate type of network (e.g., bus, ring) and to smaller bus-based system components (clusters).

3. Design Issues of a Software-Based DSM System

The main problems that every DSM approach has to address are: a) mapping a logically shared address space onto the physically distributed memory modules, b) locating and accessing a needed data item, and c) preserving a coherent view of the overall shared address space. The crucial objective in solving these problems is minimizing the average access time to the shared data. To achieve this goal, two strategies for distributing shared data are most frequently applied: replication and migration. Replication allows multiple copies of the same data item to reside in different local memories, in order to increase the parallelism in accessing logically shared data. Migration implies a single copy of a data item which has to be moved to the accessing site, counting on the locality of reference in parallel applications. Besides that, just as in shared-memory systems with private caches, systems with distributed shared memory have to deal with the consistency problem when replicated copies of the same data exist. In order to preserve a coherent view of the shared address space, a read operation must return the most recently written value.
Therefore, when one of multiple copies of a data item is written, the others become stale and have to be invalidated or updated, depending on the applied coherence policy. Although some coherence semantics, briefly explained below, provide the most natural view of the shared address space, various weaker forms of memory consistency can be applied in order to reduce latency. As a consequence of the applied strategies and the distribution of the shared address space across different memories, on a memory reference the data item and its copies have to be located and managed according to a mechanism appropriate for the architecture. The solutions to the above problems are incorporated into a DSM algorithm that can be implemented at the hardware and/or software level. The implementation level of a DSM mechanism is regarded as the basic design decision, since it profoundly affects system performance. The other important issues include the structure and granularity of shared data, the memory consistency model that determines the allowable memory access orderings, and the coherence policy (invalidate or update). Thus, the selection of the algorithm, consistency model and coherence protocol is very critical in designing a good DSM system.

3.1. DSM Algorithms

The overall performance of a DSM system is highly dependent on the correspondence between the applied DSM algorithm and the access patterns generated by the application. These algorithms are classified according to their read and write accesses:

a) Central-server algorithm
b) Migration algorithm, SRSW (Single Reader/Single Writer)
c) Read-replication algorithm, MRSW (Multiple Reader/Single Writer)
d) Full-replication algorithm, MRMW (Multiple Reader/Multiple Writer)

3.1.1. Central-Server Algorithm

In the Central-Server Algorithm, a central server maintains all the shared data. It services read requests from other nodes or clients by returning the data items to them.
It updates the data on write requests from clients and returns acknowledgment messages. A timeout can be employed to resend requests in case of failed acknowledgments. Duplicate write requests can be detected by associating sequence numbers with write requests. A failure condition is returned to the application trying to access shared data after several retransmissions without a response. Although the central-server algorithm is simple to implement, the central server can become a bottleneck. To overcome this problem, shared data can be distributed among several servers. In such a case, clients must be able to locate the appropriate server for every data access. Multicasting data access requests is undesirable, as it does not reduce the load at the servers compared to the central-server scheme. A better way to distribute data is to partition the shared data by address and use a mapping function to locate the appropriate server.

3.1.2. Migration Algorithm

In the Migration Algorithm, the data is shipped to the location of the data access request, allowing subsequent accesses to the data to be performed locally. The migration algorithm allows only one node to access a shared data item at a time. This is a single-reader/single-writer protocol, since only the threads executing on one host can read or write a given data item at any time.

3.1.3. Read-Replication Algorithm

One disadvantage of the migration algorithm is that only the threads on one host can access data contained in the same block at any given time. Replication can reduce the average cost of read operations, since it allows read operations to be executed simultaneously and locally (with no communication overhead) at multiple hosts. However, some of the write operations may become more expensive, since the replicas may have to be invalidated or updated to maintain consistency.
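The central-server scheme above, including the partitioning of shared data across several servers by a mapping function, can be sketched roughly as follows. This is a minimal illustration under assumed names (CentralServerSketch, serverFor), not code from the thesis.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the central-server idea: one server holds all shared data and
// services read/write requests from clients. To avoid the server becoming a
// bottleneck, shared data can be partitioned by address across several
// servers using a simple mapping function. All names here are illustrative.
public class CentralServerSketch {
    private final Map<String, Object> store = new HashMap<>();

    // A client read request: the server simply returns the item.
    public synchronized Object read(String name) {
        return store.get(name);
    }

    // A client write request: the server updates the item and, in a real
    // system, would return an acknowledgment message to the client.
    public synchronized void write(String name, Object value) {
        store.put(name, value);
    }

    // Partitioning shared data by address: a mapping function locates the
    // appropriate server for a given item, so clients need not multicast.
    public static int serverFor(String name, int numServers) {
        return Math.floorMod(name.hashCode(), numServers);
    }

    public static void main(String[] args) {
        CentralServerSketch server = new CentralServerSketch();
        server.write("x", 42);
        System.out.println(server.read("x")); // prints 42
        System.out.println("server id: " + serverFor("x", 4));
    }
}
```

The mapping function is deterministic, so every client independently computes the same server for a given data item without any extra messages.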
Nevertheless, if the ratio of reads over writes is large, the extra expense of the write operations may be more than offset by the lower average cost of the read operations. Replication can be added naturally to the migration algorithm by allowing either one site to hold a read/write copy of a particular block, or multiple sites to hold read-only copies of that block. This type of replication is referred to as multiple-readers/single-writer replication. For a read operation on a data item in a block that is currently not local, it is necessary to communicate with remote sites to first acquire a read-only copy of that block, and to change the access rights of any writable copy to read-only, if necessary, before the read operation can complete. For a write operation to data in a block that is either not local or for which the local host has no write permission, all copies of the same block held at other sites must be invalidated before the write can proceed. The read-replication algorithm is consistent because a read access always returns the value of the most recent write to the same location. In this algorithm, the DSM must keep track of the location of all copies of data blocks. One way to do this is to have the owner node of a data block keep track of all the nodes that have a copy of the block. Alternatively, a distributed linked list may be used to keep track of all the nodes that have a copy of the block.

3.1.4. Full-Replication Algorithm

The full-replication algorithm is an extension of the read-replication algorithm. It allows multiple nodes to have both read and write access to shared data blocks (the multiple-readers/multiple-writers protocol). Because many nodes can write shared data concurrently, access to shared data must be controlled to maintain its consistency. One possible way to keep the replicated data consistent is to globally sequence the write operations.
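The global write sequencing just mentioned can be sketched as below. This is a hedged illustration (the class names WriteSequencer and Site are assumptions, not the thesis's code) showing sequence-number assignment at the sequencer and gap detection at each site.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a single global gap-free sequencer for the full-replication
// (MRMW) algorithm: every intended modification is sent to the sequencer,
// which stamps it with the next sequence number before it is multicast.
public class WriteSequencer {
    private final AtomicInteger next = new AtomicInteger(0);

    // Assign the next sequence number to an incoming modification.
    public int assign() {
        return next.incrementAndGet();
    }
}

// Each site applies modifications strictly in sequence-number order; a gap
// means a modification was missed or arrived out of order, so the site
// requests a retransmission (a negative-acknowledgment protocol).
class Site {
    private int expected = 1;

    // Returns true if the modification is applied; false means a gap was
    // detected and a retransmission of 'expected' must be requested.
    public boolean deliver(int seq) {
        if (seq == expected) {
            expected++;
            return true;
        }
        return false; // gap: negative acknowledgment
    }
}
```

A site that returns false simply buffers or discards the out-of-order message and asks the sender to retransmit the missing sequence number, which is exactly the negative-acknowledgment behaviour described below.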
A simple strategy based on sequencing uses a single global gap-free sequencer, which is a process executing on a host participating in the DSM. When a process attempts a write to shared memory, the intended modification is sent to the sequencer. The sequencer assigns the next sequence number to the modification and multicasts the modification with this sequence number to all sites. Each site processes broadcast write operations in sequence-number order. When a modification arrives at a site, the sequence number is verified as the next expected one. If a gap in the sequence numbers is detected, either a modification was missed or a modification was received out of order, in which case a retransmission of the modification message is requested. In effect, this strategy implements a negative acknowledgment protocol.

4.1. Consistency Models

In a DSM system data replication is very important, as it improves the performance of the system. The system has to know which machine has the latest correct value of a data item, which is governed by the system's consistency semantics. The following consistency models may be applied.

4.1.1. Strict Consistency

The most stringent consistency model is called strict consistency. It is defined by the following condition: "Any read to a memory location X returns the value stored by the most recent write operation to X" [7]. This definition implicitly assumes the existence of absolute global time, so that the determination of "most recent" is unambiguous. Uniprocessors have traditionally observed strict consistency. In summary, when memory is strictly consistent, all writes are instantaneously visible to all processes and an absolute global time order is maintained. If a memory location is changed, all subsequent reads from that location see the new value, no matter how soon after the change the reads are done, and no matter which processes are doing the reading and where they are located.
Similarly, if a read is done, it gets the then-current value, no matter how quickly the next write is done.

4.1.2. PRAM Consistency and Processor Consistency

In causal consistency, concurrent writes are permitted to be seen in a different order on different machines, although causally related ones must be seen in the same order by all machines. The next step in relaxing memory is to drop the latter requirement. Doing so gives PRAM consistency, which is subject to the condition [7]: "Writes done by a single process are received by all other processes in the order in which they were issued, but writes from different processes may be seen in a different order by different processes." PRAM stands for Pipelined RAM, because writes by a single process can be pipelined; that is, the process does not have to stall waiting for each one to complete before starting the next one.

4.1.3. Weak Consistency

Although processor consistency can give better performance than the stronger models, it is still unnecessarily restrictive for many applications, because it requires that writes originating in a single process be seen everywhere in order. Not all applications require even seeing all writes, let alone seeing them in order. Consider the case of a process inside a critical section reading and writing some variables in a tight loop. Even though other processes are not supposed to touch the variables until the first process has left its critical section, the memory has no way of knowing when a process is in a critical section and when it is not, so it has to propagate all writes to all memories in the usual way. A memory model exhibits weak consistency if it has three properties:

1. Accesses to synchronization variables are sequentially consistent.
2. No access to a synchronization variable is allowed to be performed until all previous writes have completed everywhere.
3. No data access (read or write) is allowed to be performed until all previous accesses to synchronization variables have been performed.

4.1.4. Release Consistency

Weak consistency has the problem that, when a synchronization variable is accessed, the memory does not know whether this is being done because the process has finished writing the shared variables or is about to start reading them. Consequently, it must take the actions required in both cases, namely making sure that all locally initiated writes have been completed, as well as gathering in all writes from other machines. If the memory could tell the difference between entering a critical region and leaving one, a more efficient implementation might be possible. To provide this information, two kinds of synchronization variables or operations are needed instead of one. Release consistency provides these two kinds. Acquire accesses are used to tell the memory system that a critical region is about to be entered. Release accesses say that a critical region has just been exited. These accesses can be implemented either as ordinary operations on special variables or as special operations. [5]

5. Coherency Protocols

In a DSM system the consistency of the data is very important and can be provided by one of the models explained above. A DSM system also has to select a coherency protocol, which controls how the data in other processors is updated. This can be established by one of two coherency protocols: the Write-Invalidate Protocol or the Write-Update Protocol.

5.1. Write-Invalidate Protocol

This protocol is commonly implemented in the form of multiple-reader/single-writer sharing. At any time, a data item may either be:

- accessed in read-only mode by one or more processes, or
- read and written by a single process.

An item that is currently accessed in read-only mode can be copied indefinitely to other processes.
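The two sharing modes of write-invalidate (read-only at one or more sites, or writable at exactly one site) can be sketched as a small state holder. This is an illustrative sketch under assumed names, not the thesis's implementation; the invalidation on write corresponds to the multicast invalidate message described in the protocol.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of one data block under multiple-reader/single-writer sharing with
// write-invalidate: before a site may write, every other copy is invalidated,
// so no process can read stale data. Names are illustrative only.
public class WriteInvalidateBlock {
    private final Set<Integer> readers = new HashSet<>(); // sites with read-only copies
    private Integer writer = null;                        // site with the writable copy

    public synchronized void read(int site) {
        // Downgrade any writable copy to read-only before granting a new reader.
        if (writer != null) {
            readers.add(writer);
            writer = null;
        }
        readers.add(site);
    }

    public synchronized void write(int site) {
        // Invalidate every other copy (a multicast in a real system),
        // then grant exclusive write access to this site.
        readers.clear();
        writer = site;
    }

    // Number of valid copies currently in the system.
    public synchronized int copies() {
        return readers.size() + (writer != null ? 1 : 0);
    }
}
```

After a write, exactly one valid copy remains; reads then repopulate the set of read-only copies, which is why this scheme pays off when the read/write ratio is high.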
When a process attempts to write to it, a multicast message is sent to all other copies to invalidate them, and this is acknowledged before the write can take place; the other processes are thereby prevented from reading stale data. Any processes attempting to access the data item are blocked if a writer exists. Eventually, control is transferred from the writing process, and other accesses may take place once the update has been sent. The effect is to process all accesses to the item on a first-come-first-served basis. This scheme achieves sequential consistency. Under the invalidation scheme, updates are only propagated when data are read, and several updates can take place before communication is necessary. Against this must be placed the cost of invalidating read-only copies before a write can occur. In the multiple-reader/single-writer scheme described, this is potentially expensive. But if the read/write ratio is sufficiently high, then the parallelism obtained by allowing multiple simultaneous readers offsets this cost. Where the read/write ratio is relatively small, a single-reader/single-writer scheme can be more appropriate: i.e., one in which at most one process may be granted read-only access at a time.

5.2. Write-Update Protocol

In the write-update protocol, the updates made by a process are made locally and multicast to all other replica managers possessing a copy of the data item, which immediately modify the data read by local processes. Processes read the local copies of data items without the need for communication. In addition to allowing multiple readers, several processes may write the same data item at the same time; this is known as multiple-reader/multiple-writer sharing. The memory consistency model that is implemented with write-update depends on several factors, mainly the multicast ordering property.
Sequential consistency can be achieved by using multicasts that are totally ordered and that do not return until the update message has been delivered locally. All processes then agree on the order of updates. The set of reads that take place between any two consecutive updates is well defined, and their ordering is immaterial to sequential consistency. Reads are cheap in the write-update option. However, ordered multicast protocols are relatively expensive to implement in software.

6. Design and Implementation of the DSM System

A DSM system based on the Read-Replication Algorithm was designed and implemented. This is a new-generation DSM system implementation designed for x86 machines running Windows 95 on a Microsoft Windows NT 4.0 network. The program uses selective multicasting to reduce the number of messages passed between processes. One of the main disadvantages of DSM systems is the large number of messages they introduce on the network; one DSM system can be said to be better than another if it reduces message traffic. To decrease the number of messages, it is necessary to send messages only to target addresses (and not to all addresses) in the network. Even though broadcasting is the easier way to implement a DSM system, the infrastructure was designed to use "selective multicast" to reduce the message traffic on the network.

Traditionally, there was a big gap between the performance achievable on engineering workstations and on Personal Computers (PCs). As a result, both hardware and software DSM systems have been implemented on Unix-based computers. Recent increases in PC performance, the exceptionally low cost of PCs relative to engineering workstations, and the introduction of advanced PC network programming make networks of PCs an attractive alternative for large scientific computations. Windows NT differs substantially from Unix. The major differences that directly affect DSM implementation and performance are:
- Windows NT has native multithreading support built into the operating system.
- Exception handling is implemented through structured exception handling.
- Windows NT implements TCP/IP through the WinSock user-level library.

One of the significant features of the Windows NT system is its native support for multithreaded operation. Windows NT provides support for multiple lightweight threads executing within the same process address space. The Win32 API provides a rich set of calls to address threading issues, including support for thread priority manipulation and synchronization. Standard Unix does not provide lightweight threads, although there are several lightweight thread packages, such as Pthreads, that are available to run on top of Unix.

The DSM system is implemented in the Java programming language. This choice is due to Java being an object-oriented (the facilities of Java are essentially those of C++), distributed (it has libraries for coping with TCP/IP protocols), portable (it is platform independent), and multithreaded language [8]. In addition, network programming in Java is very easy. Java offers socket-based communications that enable applications to view networking as if it were file I/O: a program can read from a socket or write to a socket as simply as it reads from or writes to a file. Java provides stream sockets and datagram sockets. With a stream socket, a process establishes a connection to another process. A datagram socket, in contrast, is a connectionless service: packets can be lost, duplicated, or arrive out of sequence, so extra programming is needed to overcome these problems. The main terms used in this work should first be understood explicitly.

6.1. The DSM Algorithm

The following data structures and terms are used to define the DSM algorithm.
DSM Table: Every node in the DSM system owns a data structure called the DSM table, which keeps a description of each shared data block to which references are made from that node. The table on node i has entries for blocks which are either created in the local memory of node i, or which were created in the local memory of some other node j but are accessed by processes on node i. The structure of the DSM table is given in Table 1.

Messages: The communication between the processes which implement the DSM algorithm is carried by messages. Messages carry requests and resulting information between processes on the same node or on different nodes of the DSM system. A single message format is used for simplicity; subfields of the message are evaluated according to the request code. The message format is given in Table 2.

Owner: Every data block has a unique owner node, which has the right to write to that block. If a process wants to write to a data block, it must first take the ownership of that block.

Copyset: If a data block is in the local memory of its owner node, then that node holds a reference to a copyset, which is a list of the IP addresses and process ids of the other nodes that have read access to that data block.

Node List: Every node has to hold the addresses of the other nodes. To accomplish this, when a node introduces itself to the existing DSM system for the first time, it broadcasts a message to all IP addresses in the network to identify itself and to receive the addresses of the other nodes already present in the system. This is important for the multicasting property of the DSM algorithm. When a node is creating a data block, or wants to get a data block that is not in the local DSM table, it sends messages to the nodes present in the node list by multicasting.
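Based on the description above, one DSM-table entry might look like the following Java sketch. The exact layout of Table 1 is not reproduced in this summary, so all field names here are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of one entry in a node's DSM table, combining the terms
// defined above (owner, copyset, probable owner, status). Field names are
// assumptions; the thesis's Table 1 may differ.
public class DsmTableEntry {
    String name;          // identifier of the shared variable
    String type;          // one of the nine basic types (integer, string, ...)
    Object value;         // the local copy of the data
    String probableOwner; // IP address of the probable owner node
    boolean owned;        // true if this node is the current owner
    String status;        // e.g. "available" or "locked"

    // Kept only on the owner node: IP addresses / process ids of the other
    // nodes holding read-only copies of this block.
    List<String> copyset = new ArrayList<>();

    DsmTableEntry(String name, String type, Object value) {
        this.name = name;
        this.type = type;
        this.value = value;
        this.owned = true;           // the creating node starts as the owner
        this.status = "available";
    }
}
```

When ownership migrates on a write, `owned` would be cleared on the old owner, the copyset handed over, and `probableOwner` updated so that future requests can be forwarded along the probable-owner chain described next.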
Probable Owner: In designing a DSM system there is another big problem to overcome: finding the owner of a data block. A simple way to do this is to broadcast a request for the owner of the desired data block, but this has the obvious problem of increasing network traffic. A more efficient method is for each node to keep track of the probable owner of each data block. When a node needs to copy a data block, it requests the block from the probable owner. If the probable owner cannot satisfy the request, it forwards the request to its own probable owner of the data block. The probable owner information must be kept up to date so that the search is guaranteed to complete. A dynamic distributed manager can reduce the average number of messages required to find the owner of a data block, compared to a fixed distributed manager, if the probable owner is usually correct. In every message the original requesting process's address is also sent, so that when the message reaches the owner node, it can send the requested data directly to the process which is waiting for it. The elapsed time for finding the real owner node may be seen as a problem: if there are n processes running on the DSM system, every replication operation produces one new probable owner, and to find the real owner a message may be sent n-1 times around a ring, so message traffic in the network is increased. To overcome this problem, an update time (UT) is chosen: every UT, the owner node multicasts the address of the real owner to the nodes holding copies of the data block, thus minimizing message traffic.

Status: Every data block has a status field. A node decides what operations (read, write, locked, ...) it can perform according to this status flag. When a node tries to access a data block for the first time, it should check the status flag of that data block. If the status is locked, then it should find the owner node and get the updated data.

6.2. Implementation of the DSM Algorithm
The DSM algorithm is implemented by two main processes on each node of the system: the Listener Process (LP) and the Message Interpreter Process (MIP).

Listener Process: The main function of the LP is to wait for messages on a specified port of the node. These messages may come from a user program running on that node or from the MIP of another node. After the LP receives a message, it transfers it directly to the MIP without performing any operation on it.

Message Interpreter Process: When introducing a new node to the DSM system, the most important thing is the production of the node list. To achieve this, at creation, the MIP sends a broadcast message to all nodes in the network. If a reply comes from another node, the MIP adds that process's address to its node list; if no message comes, the node list is set to empty. After setting the node list, the DSM system can do selective multicasting, which is one of the most important activities of the program. After a node enters the DSM system, it waits for messages from the network on a specified port (e.g., port 5000). When a message arrives, it first checks the type of the message, then partitions the message according to its request type and extracts the necessary information (message type, variable type, variable name size, variable name, variable value, message IP, etc.). Afterwards it creates a new thread with this data to serve the request of the message. Multiple messages are served concurrently by this multithreaded scheme. All threads have the right to access the DSM table of the node. If a thread cannot obtain the necessary data from the local DSM table, it has to ask for the data from other nodes; the thread uses selective multicasting or broadcasting, depending on the type of the request. Concurrent accesses are prevented through a mutual exclusion mechanism to ensure the data consistency of the DSM system.
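The thread-per-message dispatch with mutual exclusion described above can be sketched roughly as follows. Names are illustrative assumptions; the real MIP parses the message fields and updates the DSM table, where this sketch only bumps a counter under a lock.

```java
// Sketch of the Message Interpreter Process dispatch: one new thread per
// incoming message, with mutual exclusion around the shared DSM table
// (in the real system, Java "synchronized" methods on the table object).
public class MipDispatchSketch {
    private final Object tableLock = new Object(); // stands in for the DSM table's lock
    private int served = 0;                        // stand-in for real table updates

    // One new thread is created per incoming message, as in the MIP.
    public Thread serve(String message) {
        Thread t = new Thread(() -> {
            synchronized (tableLock) { // only one thread touches the table at a time
                served++;
            }
        });
        t.start();
        return t;
    }

    public int served() {
        synchronized (tableLock) {
            return served;
        }
    }

    // Wait for worker threads without propagating checked exceptions.
    public static void awaitAll(Thread... threads) {
        for (Thread t : threads) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        MipDispatchSketch mip = new MipDispatchSketch();
        awaitAll(mip.serve("read x"), mip.serve("write y"));
        System.out.println(mip.served()); // prints 2
    }
}
```

Each worker thread dies after completing its operation, matching the description that a thread is killed once its request is served.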
Access to the DSM table is controlled by "synchronized" methods of JAVA, which guarantee that only one thread can execute any "synchronized" method at any time. A thread is killed after completing its operation. Another important point is the multicasting routine. When a process sends a selective multicast message to "n" nodes, it waits for "n" replies to its message. Therefore this waiting must take place on a different port than the original listening port (in this example, port 5000). The waiting port is decided in the selective multicast routine: it selects a random port number at the beginning, then sends this port number along with the multicast message to the other nodes, so they can easily reply to the multicasting node.

[Figure 2: The message passing relationship between server processes and user programs]

6.3. The User Interface.

The user interface consists of the following function calls. Every function call has a parameter named "err", which is used to determine the type of error (if one exists), so the programmer can handle the problem according to this error type and continue without interruption. The usage of the function calls and error codes is shown in the programming sample.

DSM.create(name, type, err): Name is the identifier of the shared variable; type gives the type of the variable. There are 9 basic types that can be defined in the system: integer, string, char, boolean, float, long, byte, short, and double. On receiving a message, the MIP process of a node first refers to its DSM tables to see if the variable exists and checks the status of the data block to see if it is available for the client process, then takes the data together with its ownership.

DSM.remove(name, err): Name is the identifier of the variable. Only the owner process of a data block can remove it from the DSM tables.
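The synchronized table access can be sketched as follows; `DsmTable` and its methods are illustrative stand-ins, not the thesis code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of DSM-table access guarded by JAVA "synchronized" methods: only one
// thread can execute any synchronized method of a DsmTable instance at a time.
public class DsmTable {
    private final Map<String, Object> table = new HashMap<>();

    public synchronized void put(String name, Object value) {
        table.put(name, value);
    }

    public synchronized Object read(String name) {
        return table.get(name);
    }

    public static void main(String[] args) throws InterruptedException {
        final DsmTable t = new DsmTable();
        // Several threads update the table concurrently; the intrinsic lock
        // serializes their accesses and keeps the table consistent.
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            final int id = i;
            workers[i] = new Thread(() -> t.put("var" + id, id * 10));
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(t.read("var2")); // prints 20
    }
}
```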
When a DSM.remove message arrives, the server first checks the ownership of that data block. If the requesting node is not the owner, the MIP process finds the data block, obtains the ownership, and then removes the variable from the table.

DSM.read(name, vartype, err): Name is the identifier of the variable. Vartype is a variable whose type is the same as the requested variable's type, determining the type of the returned value; if vartype is an integer variable, an integer value is returned. On receiving a read message, the MIP process first checks its own table; if it does not hold the data block, or holds it but its status is not available, it sends a message to the probable owner. If there is no sign of that data block at all, a selective multicast message is sent and the data block is fetched from its owner.

DSM.put(name, vartype, err): Name is the identifier of the variable. Vartype is the value of the variable to be set in the table and can be one of the nine types listed above. When a put message is received, the server process first checks whether the variable is in its own table. If it is not found, an error is raised, because before a variable can be changed it must first have been read with the DSM.readw() command; as a result of that read, the ownership of the variable is also registered in the DSM tables. If the variable is found in the DSM tables, the server changes its value and sets the status of the data block to available.

DSM.lock(name, err): This call is used when a data block needs to be locked for various reasons, especially while changing it.

DSM.unlock(name, err): When the programmer wants to unlock a data block, he should use this call, so that other programs can read that block again.

A programming sample: The main goal of the sample program is to show the usage of the functions in the DSM library.
The program computes the sum of the numbers from 1 up to the value of the shared variable, which is set to 15 in the sample.

public class firstsample {
  public static void main(String args[]) {
    int d1 = 0;
    int j = 0;
    int sum = 0;
    int hata[] = new int[5];
    DSM.create("san1", "int", hata);
    if (hata[1] != 0) return;        // stop on a create error
    DSM.lock("san1", hata);
    DSM.put("san1", 15, hata);
    DSM.unlock("san1", hata);
    DSM.lock("san1", hata);
    d1 = DSM.read("san1", d1, hata); // d1 now holds the shared value
    for (j = 1; j <= d1; j++)        // sum the numbers 1..d1
      sum = sum + j;
    DSM.unlock("san1", hata);
    System.out.println("sum = " + sum);
  }
}
Similar Theses
- Bir dağıtılmış ortak bellek sisteminin gerçeklenmesi
Implementation of a distributed shared memory system
YUNUS EMRE SELÇUK
Master's Thesis
Turkish
2000
Computer Engineering and Computer Science and Control, İstanbul Teknik Üniversitesi, DOÇ. DR. NADİA ERDOĞAN
- Çoklu etmen ortamında nesne tabanlı dağıtık bellek paylaşımı
Distributed object sharing in the multi-agent environment
METEHAN PATACI
Master's Thesis
Turkish
2014
Computer Engineering and Computer Science and Control, İstanbul Teknik Üniversitesi, Department of Computer Engineering
PROF. DR. NADİA ERDOĞAN
- An Alternative read-only table replication system in- Oracle database
Oracle veri tabanında salt-okunabilir kopyalama sistemi
KENAN ÇİFTÇİ
Master's Thesis
English
2001
Computer Engineering and Computer Science and Control, İstanbul Teknik Üniversitesi, DOÇ. DR. TAKUHİ NADİA ERDOĞAN
- Deception and duplication: Eminem's and The Weeknd's representations of faustian deal, shadow archetype, and Hollywood industry
Aldatma ve kopyalama: Eminem'in ve The Weeknd'in Faust anlaşması, gölge arketipi ve Hollywood endüstrisi temsilleri
SERENAY ÇEVİK
Master's Thesis
English
2022
American Culture and Literature, Dokuz Eylül Üniversitesi, Department of American Language and Literature
DR. ÖĞR. ÜYESİ EVRİM ERSÖZ KOÇ
- İlkokul öğrencilerinin yazma hatalarının düzeltilmesi: Bir eylem araştırması
Correcting primary school students' writing errors: An action research
SEMA EKMEKCİ
Master's Thesis
Turkish
2022
Education and Training, İnönü Üniversitesi, Department of Primary Education
DOÇ. DR. BAŞAK KASA AYTEN