Advanced Operating System research papers 2020/2021


Welcome, dear readers. In light of the pandemic that the whole world has gone through, many countries decided to suspend studies and to replace exams with research-based evaluation. So today we share an Advanced Operating System research paper that may help you in your studies.
This research was prepared by me: Mostafa Fawzi (CSI, Egypt).

Table of Contents
1. Abstract
2. Introduction
3. Process Synchronization Introduction
4. Test-and-Set
5. WAIT and SIGNAL
6. Semaphores
7. Conclusion
8. References

1. Abstract

Generally, this research project is concerned with process synchronization in software and presents the following three synchronization mechanisms:
Process Synchronization Introduction
Semaphore Mechanism
Test-and-Set Mechanism
WAIT and SIGNAL Mechanism

It explains, clearly and simply, the concept, advantages and disadvantages, applications, and working of these three synchronization mechanisms.

2. Introduction

Process synchronization is the coordination, or mutual understanding, between two or more processes.
In simple words, synchronization governs how processes share system resources; for example, two trains sharing a single track must be mutually synchronized.
The success of process synchronization depends on the operating system's ability to make a resource unavailable to other processes while it is being used by one of them. The resources can include printers and other I/O devices, a storage location, or a data file. Essentially, the resource must be locked away from other processes until it is released, and a waiting process is only allowed to use the resource once it is released.
Synchronization is essential here: a mistake could leave a job waiting forever (starvation) or, if it is a key resource, trigger a deadlock.
It is the same in a busy ice cream store: customers take a numbered ticket to be served, and the clerks pull a chain to advance the number displayed on the wall as each customer is served. But what happens if the clerks and the displayed number are not synchronized? Chaos!
Yes, this is the case of the missed, endlessly waiting customer.

3. Process Synchronization Introduction

Process Synchronization is a way of coordinating processes that use shared data; it occurs among cooperating processes in an operating system. Cooperating processes are processes that share resources. When multiple processes execute concurrently, process synchronization helps maintain the consistency of shared data and the orderly execution of the cooperating processes. Processes must be designed to eliminate conflicts in concurrent access to shared data. Data inconsistency can lead to what is known as a race condition. A race condition occurs when two or more operations execute at the same time, are not scheduled in the proper sequence, and do not exit the critical section correctly. [A]
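To make the race condition concrete, here is a minimal C sketch, assuming POSIX threads; the names counter and worker are illustrative, not from the references. Two threads increment a shared counter with no synchronization, so some updates are usually lost.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared data, not protected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* expected 2000000, usually less */
    return 0;
}

The mechanisms discussed below (Test-and-Set, WAIT and SIGNAL, and semaphores) are ways of making such an update safe.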

4. Test-and-Set

Test-and-Set (TS) is a single, indivisible machine instruction that was introduced by IBM for its multiprocessing System 360/370 computers.
In a single machine cycle, it tests whether a key is available and, if it is, sets it to unavailable. [1]

How it works
The actual key is a single bit in a storage location that can contain a 0 (if the key is free) or a 1 (if busy). We can think of TS as a function subprogram that has one parameter (the storage location) and returns one value (the condition code), with the difference that it takes only a single machine cycle.
Therefore, before entering a critical region, a process (Process 1) tests the condition code using the TS instruction. If no other process is in this critical region, Process 1 is allowed to enter and the condition code is changed from 0 to 1. Later, when Process 1 exits the critical region, the condition code is reset to 0. If, on the other hand, Process 1 finds a busy condition code, it is placed in a waiting loop where it keeps testing the condition code and waiting until the region is free.
This is called busy waiting: it not only wastes valuable processor time, it also relies on the process itself to repeatedly check the key, something that is better managed by the operating system or the hardware. [1]
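As a concrete illustration, here is a minimal sketch of a busy-waiting lock built on a test-and-set style operation, using the C11 atomic_flag_test_and_set in place of the IBM TS instruction; the names key, ts_lock, and ts_unlock are my own, not from the textbook.

#include <stdatomic.h>

static atomic_flag key = ATOMIC_FLAG_INIT;    /* 0 = free, 1 = busy */

void ts_lock(void)
{
    /* atomic_flag_test_and_set returns the old value and sets the flag
       to 1 in one indivisible step; spin while it reports "busy". */
    while (atomic_flag_test_and_set(&key))
        ;                                     /* busy waiting loop */
}

void ts_unlock(void)
{
    atomic_flag_clear(&key);                  /* reset the key to 0 (free) */
}

A thread would call ts_lock() before entering its critical region and ts_unlock() when leaving it, exactly as Process 1 does with the condition code above.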

Advantages and drawbacks

It is easy to implement and works well for a small number of processes.
When many processes are waiting to enter a critical region, starvation can occur because access is granted arbitrarily; some processes may be favoured over others unless a first-come, first-served policy is put in place.
The waiting processes remain in unproductive, resource-consuming busy waiting loops, which also require context switching.

Applications:
You use it any time you want to write data to a memory location after doing some work and need to be sure another thread has not overwritten the destination since you started.
Put another way, it is used when you need to take a shared value, do something with it, and update it, provided that no other thread has changed the value in the meantime. [B]

5. WAIT and SIGNAL

WAIT and SIGNAL is a modification of test-and-set that is designed to remove busy waiting. WAIT and SIGNAL are two new operations that are mutually exclusive and become part of the process scheduler's operations.
How it works
WAIT is activated when a process encounters a busy condition code. WAIT sets the process control block (PCB) to the blocked state and links it to the queue of processes waiting to enter this particular critical region. The process scheduler then selects another process for execution.
SIGNAL is activated when a process exits the critical region and the condition code is set to "free." It checks the queue of processes waiting to enter this critical region, selects one, and sets it to the READY state.
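The blocking behaviour can be sketched in user space with POSIX threads: a process that finds the region busy blocks instead of spinning, and the signalling process wakes one waiter. This is only an approximation of the scheduler-level mechanism described above, and the names busy, region_wait, and region_signal are my own.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  freed = PTHREAD_COND_INITIALIZER;
static bool busy = false;                 /* condition code: busy or free */

void region_wait(void)                    /* WAIT: block while the region is busy */
{
    pthread_mutex_lock(&m);
    while (busy)
        pthread_cond_wait(&freed, &m);    /* blocked; the scheduler runs others */
    busy = true;
    pthread_mutex_unlock(&m);
}

void region_signal(void)                  /* SIGNAL: mark the region free, wake a waiter */
{
    pthread_mutex_lock(&m);
    busy = false;
    pthread_cond_signal(&freed);          /* move one waiting process to READY */
    pthread_mutex_unlock(&m);
}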

Advantages and drawbacks:
Eventually, the process scheduler chooses which process will run next. Adding the WAIT and SIGNAL operations frees the system from the busy waiting problem and gives control back to the operating system, which can then run other jobs while the waiting processes sit idle (in WAIT). [1]

Applications:
Locking and unlocking are helpful when you have performed a reduction, such as a dot product (where many partial computations are needed), because the order in which the sum is accumulated does not matter, so a thread can simply be released once it has contributed its part.
If, however, you are solving a PDE over time, the next time step cannot start until the previous step has finished, even if the data are otherwise free to be modified; this kind of dependency is where a WAIT/SIGNAL is needed. [C]

6. Semaphores

A semaphore is a non-negative integer variable that is used as a binary signal, a flag.
One of the best-known semaphores was the signalling device used by railroads to indicate whether a section of track was clear. When the semaphore's arm was raised, the track was clear and the train was allowed to proceed; when the arm was lowered, the track was busy and the train had to wait until the arm was raised. It had only two positions, up or down (on or off). [1]

Types of Semaphore:
Binary Semaphores
Counting Semaphores

How it works
If we let s be a semaphore variable, then the V operation on s is simply to increment s by 1. 
The action can be stated as: 
V(s): s := s + 1
This in turn necessitates a fetch, increment, and store sequence. Like the test-and-set operation, the increment operation must be performed as a single indivisible action to avoid deadlocks; that is, s cannot be accessed by any other process during the operation.
The operation P on s is to test the value of s and, if it is not 0, to decrement it by 1. The action can be stated as: P(s): if s > 0 then s := s – 1
This involves a test, fetch, decrement, and store sequence. Again, this sequence must be performed as an indivisible action in a single machine cycle, or arranged so that the process cannot take action until the operation (test or increment) is completed.

The test or increment operations are performed by the operating system in response to calls issued by any one process naming a semaphore as a parameter (this alleviates the process of having control). If s = 0, it means that the critical region is busy, and the process calling the test operation must wait until the operation can be executed, which is not until s > 0.
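The following is a small sketch of the P and V operations on a counting semaphore, using a POSIX mutex and condition variable to keep the test and increment indivisible; the type csem and its fields are illustrative, not from the textbook.

#include <pthread.h>

typedef struct {
    int s;                               /* the non-negative semaphore value */
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} csem;

void V(csem *sem)                        /* V(s): s := s + 1 */
{
    pthread_mutex_lock(&sem->lock);
    sem->s = sem->s + 1;                 /* fetch, increment, store as one action */
    pthread_cond_signal(&sem->nonzero);  /* wake one process waiting for s > 0 */
    pthread_mutex_unlock(&sem->lock);
}

void P(csem *sem)                        /* P(s): wait until s > 0, then s := s - 1 */
{
    pthread_mutex_lock(&sem->lock);
    while (sem->s == 0)
        pthread_cond_wait(&sem->nonzero, &sem->lock);
    sem->s = sem->s - 1;                 /* test, fetch, decrement, store */
    pthread_mutex_unlock(&sem->lock);
}

Initializing s to 1 makes the semaphore guard a single critical region: a process calls P before entering and V after leaving.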

As shown in Table 6.3 of [1], P3 is placed in the WAIT state (for the semaphore) in state 4. In states 6 and 8, the value of s is reset to 1, meaning that the critical region becomes free whenever a process leaves it; this in effect signals one of the blocked processes to wake up, enter the critical region, and reset s to 0. In state 7, P1 and P2 are not trying to do processing in that critical region, and P4 is still blocked.

After state 5 of Table 6.3, the longest-waiting process, P3, was the one selected to enter the critical region, but this is not necessarily the case unless the system uses a first-in, first-out selection policy. In fact, which job is processed next depends on the algorithm used by this portion of the process scheduler. [1]

Advantages and drawbacks
Advantages:
The principle of mutual exclusion is upheld by using semaphores, since only one process can access the critical section at a time.

A process does not have to keep checking on its own whether it may enter the critical section, so no processing time is wasted on busy waiting. [F]
Drawbacks:
Simple algorithms require more than one semaphore.
Semaphores are too low level.
The programmer must keep track of all calls to wait and to signal the semaphore.
Semaphores are used for both condition synchronization and mutual exclusion.[E]


Applications
Operating systems often distinguish between counting and binary semaphores. The value of a counting semaphore can range over an unrestricted domain, while the value of a binary semaphore can only be 0 or 1. Binary semaphores therefore behave similarly to mutex locks; in fact, on systems that do not provide mutex locks, binary semaphores can be used instead to provide mutual exclusion. [2]
There are also three typical uses of semaphores, the first two of which are sketched in the example after this list:
mutual exclusion: Mutex (i.e., Mutual Exclusion) locks.
count-down lock: keep in mind that semaphores have a counter.
notification: indicate that an event has occurred.
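For instance, here is a brief usage sketch with POSIX semaphores (sem_init, sem_wait, sem_post), where a binary semaphore guards a critical section and a counting semaphore guards a pool of identical resources; the names mutex, pool, and POOL_SIZE are illustrative.

#include <semaphore.h>
#include <stdio.h>

#define POOL_SIZE 3

sem_t mutex;                             /* binary semaphore: value 0 or 1 */
sem_t pool;                              /* counting semaphore: 0 .. POOL_SIZE */

int main(void)
{
    sem_init(&mutex, 0, 1);              /* 1 = critical section is free */
    sem_init(&pool,  0, POOL_SIZE);      /* POOL_SIZE resources available */

    sem_wait(&mutex);                    /* P: enter the critical section */
    puts("inside the critical section");
    sem_post(&mutex);                    /* V: leave, let another process in */

    sem_wait(&pool);                     /* acquire one of the pooled resources */
    puts("using one pooled resource");
    sem_post(&pool);                     /* return it to the pool */

    sem_destroy(&mutex);
    sem_destroy(&pool);
    return 0;
}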

7. Conclusion

Finally, I hope that this research has fulfilled the basic idea behind the project's goal by presenting these mechanisms clearly and weighing them against well-known criteria. Process synchronization is the task of coordinating process execution so that no two processes can access the same shared data and resources at the same time.

8. References

Textbooks:
1. Ann McIver McHoes and Ida M. Flynn, Understanding Operating Systems, Course Technology, Cengage Learning, 2011.
2. Abraham Silberschatz, Peter B. Galvin, and Greg Gagne, Operating System Concepts, 9th Edition, Wiley, 2012.

Web:
A. Process Synchronization in Operating Systems: Definition & Mechanisms. https://study.com/academy/lesson/process-synchronization-in-operating-systems-definition-mechanisms.html, 2019.
B. What is Test-and-Set used for? https://stackoverflow.com/questions/120937/what-is-test-and-set-used-for/, 2008.
C. What are the benefits of using wait() and signal()? https://stackoverflow.com/questions/17779271/what-are-the-benefits-of-using-wait-and-signal, 2013.
D. Jianhui Yue, CS 4411 - Operating Systems. http://www.csl.mtu.edu/cs4411.choi/www/Resource/Semaphore.pdf, 2002.
E. CS 551: Distributed Operating Systems - Disadvantages of Semaphores. https://www.cs.colostate.edu/~cs551/CourseNotes/ConcurrentConstructs/DisAdvSems.html, 2003.
F. What is semaphore and what are its types? https://afteracademy.com/blog/what-is-semaphore-and-what-are-its-types, 2019.

