Solution: Operating System Concepts by Galvin and Silberschatz
Solved by Abhishek Pharkya
Part 1: Theory
What is the primary difference between a kernel-level context switch between processes
(address spaces) and a user-level context switch?
The primary difference is that kernel-level context switches involve execution of OS
code, and as such require crossing the boundary between user- and kernel-land twice.
When the kernel switches between two different address spaces, it must save the
registers as well as the address space. Saving the address space involves saving pointers
to the page tables, segment tables, and whatever other data structures the CPU uses to
describe an address space. When switching between two user-level threads, only the
user-visible registers need to be saved and the kernel need not be entered. The overhead
of a kernel-level context switch is therefore much higher than that of a user-level context
switch.
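For illustration (this sketch is an addition, not part of the solution set), the POSIX ucontext API below performs a user-level switch by saving and restoring register state and the stack pointer; the names worker and worker_stack are arbitrary.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;
    static char worker_stack[64 * 1024];

    static void worker(void) {
        puts("worker: running after a user-level context switch");
        /* Save this context's registers and resume main's. */
        swapcontext(&worker_ctx, &main_ctx);
    }

    int main(void) {
        getcontext(&worker_ctx);
        worker_ctx.uc_stack.ss_sp = worker_stack;
        worker_ctx.uc_stack.ss_size = sizeof worker_stack;
        worker_ctx.uc_link = &main_ctx;        /* where to go if worker returns */
        makecontext(&worker_ctx, worker, 0);

        /* Switch to worker by saving main's registers and loading worker's. */
        swapcontext(&main_ctx, &worker_ctx);
        puts("main: back again");
        return 0;
    }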
Does spawning two user-level threads in the same address space guarantee that the
threads will run in parallel on a 2-CPU multiprocessor? If not, why?
No; the two user-level threads may run on top of the same kernel thread. There are, in
fact, many reasons why two user-level threads may not run in parallel on a 2-CPU MP.
First, there may be many other processes running on the MP, so no other CPU is
available to execute the threads in parallel. Second, both threads may be scheduled on
the same CPU because the OS does not provide an efficient load balancer to move either
thread to a vacant CPU. Third, the programmer may restrict the CPUs on which each
thread may execute, as the sketch below illustrates.
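As an illustration of the third point (again, an added sketch rather than part of the answer), the Linux-specific program below pins both threads to a single CPU with the non-portable pthread affinity call, so they can never run in parallel even on a multiprocessor. The function name work and the choice of CPU 0 are arbitrary; compile with -pthread.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *work(void *arg) {
        printf("thread %ld running on CPU %d\n", (long)arg, sched_getcpu());
        return NULL;
    }

    int main(void) {
        cpu_set_t one_cpu;
        CPU_ZERO(&one_cpu);
        CPU_SET(0, &one_cpu);                  /* allow only CPU 0 */

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(one_cpu), &one_cpu);

        /* Both threads are restricted to CPU 0, so even on a 2-CPU
         * machine they cannot execute in parallel. */
        pthread_t t1, t2;
        pthread_create(&t1, &attr, work, (void *)1L);
        pthread_create(&t2, &attr, work, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }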
Name three ways to switch between user mode and kernel mode in a general-purpose
operating system.
The three ways to switch between user mode and kernel mode in a general-purpose
operating system are in response to a system call, an interrupt, or a signal. A system call
occurs when a user program in user space explicitly calls a kernel-defined "function", so
the CPU must switch into kernel mode. An interrupt occurs when an I/O device on a
machine raises an interrupt to notify the CPU of an event; in this case kernel mode is
necessary to allow the OS to handle the interrupt. Finally, a signal occurs when one
process wants to notify another process that some event has happened, such as a
segmentation fault having occurred or a child process being killed. When this happens,
the OS executes the default signal handler for this type of signal.
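Two of these three transitions can be triggered directly from a small user program, as in the sketch below (an added illustration; the handler name on_sigint is arbitrary). The third, a hardware interrupt, cannot be demonstrated from user code because it is raised by a device.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void on_sigint(int sig) {
        /* The CPU entered kernel mode to deliver SIGINT; the kernel then
         * arranged for this handler to run back in user mode. */
        (void)sig;
        static const char msg[] = "caught SIGINT\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
    }

    int main(void) {
        /* 1. System call: getpid() traps into the kernel and back. */
        printf("pid = %d\n", (int)getpid());

        /* 2. Signal: install a handler, then send ourselves SIGINT
         *    (kill() is itself another system call). */
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;
        sigaction(SIGINT, &sa, NULL);
        kill(getpid(), SIGINT);
        return 0;
    }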
Consider a uniprocessor kernel that user programs trap into using system calls. The
kernel receives and handles interrupt requests from I/O devices. Would there be any need
for critical sections within the kernel?
Yes. Assume a user program enters the kernel through a trap. While the operating system
code is running, the machine receives an interrupt. Now, the interrupt handler may
modify global data structures that the kernel code was in the middle of modifying.
Therefore, while there is only one thread running inside the kernel at any given time, the
kernel may not be re-entrant if access to global data structures is not protected through
the use of appropriate mutexes.
Name the pros and cons of busy waiting. Can busy waiting be avoided altogether in a
synchronization primitive?
One of the pros of busy waiting is that it is efficient when the expected wait time is less
than the overhead of context switching out of and back to the waiting thread; this
happens when a critical section protects just a few lines of code. It is also good in that a
thread can simply stay on the CPU rather than having to give up the CPU before its
quantum expires. The biggest con of busy waiting is that it burns CPU cycles without
accomplishing anything: by definition, a busy wait just spins on the CPU until the lock
becomes available, and those cycles could perhaps be used for some other computation.
It is important to note that busy waiting is never good on a uniprocessor. If there is only
one CPU in the system, then there is no chance that the lock will be released while the
thread is spinning; in the best case cycles are truly wasted, in the worst the system
deadlocks. However, busy waiting can be highly effective on MPs. This leads to a second
con: busy waiting leads to an important difference between UP and MP code. Busy
waiting can be avoided in a synchronization primitive if the primitive always yields the
CPU whenever it finds the lock unavailable.

Can a mutual exclusion algorithm be based on assumptions about the relative speed of
processes, i.e. that some processes may be "faster" than others in executing the same
section of code?
No, mutual exclusion algorithms cannot be based on assumptions about the relative
speed of processes. There are many factors that determine the execution time of a given
section of code, all of which affect the relative speed of processes. A process that is 10x
faster through a section of code one time may be 10x slower the next time.

If processes do actually differ in their speed, does this pose any other problems for the
system? Hint: think of fairness.
It depends. Fairness in the eyes of the operating system is in terms of the resources it
provides to each process. As long as each process is given the same resources, fairness is
achieved regardless of the efficiency each thread has with respect to those resources. One
problem that may come up if two threads execute at greatly different speeds is that they
may have to synchronize with each other periodically; in that case the faster thread
would have to wait for the slower thread to reach the checkpoint before it could continue.

Why does disabling interrupts affect the responsiveness of a system, i.e. the ability of the
system to respond quickly to user requests?
Disabling interrupts reduces the responsiveness of a system because the system will be
unable to notice and react to external events, such as device interrupts or user input,
until interrupts are enabled again.
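Returning to the busy-waiting question above, the following sketch (an added illustration using C11 atomics and POSIX sched_yield; the type and function names are arbitrary) shows a test-and-set spin lock. spin_lock() busy-waits, while spin_lock_yielding() gives up the CPU whenever the lock is unavailable, which is the alternative described above.

    #include <sched.h>          /* sched_yield() */
    #include <stdatomic.h>

    /* usage: spinlock_t lock = { ATOMIC_FLAG_INIT }; */
    typedef struct { atomic_flag held; } spinlock_t;

    static void spin_lock(spinlock_t *l) {
        /* Busy wait: only sensible if the lock holder runs on another CPU. */
        while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
            ;
    }

    static void spin_lock_yielding(spinlock_t *l) {
        /* Instead of spinning, let the scheduler run someone else
         * (ideally the current lock holder). */
        while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
            sched_yield();
    }

    static void spin_unlock(spinlock_t *l) {
        atomic_flag_clear_explicit(&l->held, memory_order_release);
    }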

Using similar reasoning, you can see that the parent in program 2 waits for the child to
return, and the child exits immediately. In this case only one value gets printed out,
which is the number 6 (from the parent process).

Note: this problem comes from the Fall '03 midterm. Write down the sequence of context
switches that would occur in Nachos if the main thread were to execute the following
code. Assume that the CPU scheduler runs threads in FIFO order with no time slicing and
that all threads have the same priority. The willjoin flag is used to signify that the thread
will be joined by its parent. In your answer use the notation "childX->childY" to signify a
context switch; for example, "child1->child2" signifies a context switch from child1 to
child2. Assume that the join causes the calling thread to release the CPU and sleep in
some waiting queue until the child completes its execution.

    void Thread::SelfTest2() {
        Thread *t1 = new Thread("child 1", willjoin);
        Thread *t2 = new Thread("child 2", willjoin);
        t1->Fork((VoidFunctionPtr) &Thread::Yield, t1);
        t2->Fork((VoidFunctionPtr) &Thread::Yield, t2);
        t2->Join();
        t1->Join();
    }

This will cause the following sequence of context switches to occur:
main->t1->t2->t1->t2->main. This sequence occurs because when the main thread
Fork()s two children, they do not begin executing until main relinquishes the CPU, which
happens when main calls Join(). Fork() has put t1 at the head of the run queue and t2
behind it, so when main calls Join(), Nachos context switches to t1, causing the execution
of t1. t1 executes Yield() and in so doing puts itself at the tail of the run queue and
relinquishes the CPU. When this happens, Nachos looks for the next thread on the run
queue, which is t2, so Nachos context switches to t2. As with t1, t2 yields the CPU and
Nachos context switches back to t1. t1 finishes executing and exits. When this happens,
Nachos takes t2 off the run queue and context switches to it, allowing it to finish
execution and exit. This whole time main has not been able to execute, because it was
not on the run queue: it was stuck in Join() waiting for t2 to finish. Now that t2 has
finished, Nachos context switches back to main, which is able to pass the t1->Join()
because t1 has already finished as well.

Prove that, in the bakery algorithm (Section 7.2.2), the following property holds: if Pi is
in its critical section and Pk (k != i) has already chosen its number[k] != 0, then
(number[i], i) < (number[k], k).
Note: there are differences in the implementation of the bakery algorithm between the
book and the lectures. This solution follows the book's assumptions; it is not difficult to
adapt it to the assumptions made in the lecture notes. Suppose that Pi is in its critical
section, and Pk (k != i) has already chosen its number[k]. There are two cases:

  1. Pk had already chosen its number when Pi did the last test before entering its critical
     section. In this case, if (number[i], i) < (number[k], k) does not hold then, since the
     two pairs cannot be equal, (number[i], i) > (number[k], k). But if that were true, Pi
     could not get into its critical section before Pk does: Pi would keep looping at the
     last while statement until the condition is changed by Pk when it exits its critical
     section. Note that if Pk takes a number again, it must be larger than Pi's.
  2. Pk had not yet chosen its number when Pi did the last test before entering its critical
     section. In this case, since Pk had not chosen its number while Pi was in its critical
     section, Pk must have chosen its number later than Pi. According to the algorithm, Pk
     can only get a bigger number than Pi, so (number[i], i) < (number[k], k) holds.

Too Much Milk: Two robotic roommates, A and B, buy milk using the following
processes (note that the two roommates are not doing the exact same thing, and that their
brains work using general-purpose microprocessors!):

Roommate A:

    NoteA = TRUE;
    while (NoteB == TRUE)
        ;
    if (NoteB == FALSE) {
        if (NoMilk) {
            BuyMilk();
        }
    }
    NoteA = FALSE;

Roommate B:

    NoteB = TRUE;
    if (NoteA == FALSE) {
        if (NoMilk) {
            BuyMilk();
        }
    }
    NoteB = FALSE;

Is there any chance that the two roommates buy too much milk for the house? If not, do
you see any other problems with this solution?
This is a correct solution to the "Too Much Milk" problem: the robotic roommates will
never buy too much milk. There are, however, other problems with this solution. First, it
is asymmetric in that the two roommates do not execute the same code, which is a
problem. Second, it involves busy waiting by roommate A.
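For contrast, both problems go away if the check-and-buy sequence is protected by a lock, so that the two roommates run identical code with no busy waiting. The sketch below is an added illustration using a POSIX mutex; it is not part of the original answer.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t milk_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool noMilk = true;

    static void buyMilk(void) { noMilk = false; printf("bought milk\n"); }

    /* Both roommates run the same code: the lock makes the
     * check-and-buy sequence atomic, so at most one buys milk. */
    static void *roommate(void *arg) {
        (void)arg;
        pthread_mutex_lock(&milk_lock);
        if (noMilk)
            buyMilk();
        pthread_mutex_unlock(&milk_lock);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, roommate, NULL);
        pthread_create(&b, NULL, roommate, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }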

Part 2: Problems

This problem comes from a past midterm. You have been hired by your professor to be a
grader for CSCI 444/544. Below you will find a solution given by a student to a
concurrency assignment. For the proposed solution, mark it either as i) correct, if it has
no flaws, or ii) incorrect, if it does not work at all or works for some cases but not all. Assume
condition variables with the semantics defined in the lectures. If the solution is incorrect,
explain everything wrong with it, and add a minimal amount of code to correct the
problem. Note that you must not implement a completely different solution in this case --
use the code that we give you as a base.

Here is the synchronization problem: a particular river crossing is shared by Microsoft
employees and Linux hackers. A boat is used to cross the river, but it only seats three
people and must always carry a full load. In order to guarantee the safety of the Linux
hackers, you cannot put one Linux hacker and two Microsoft employees in the same boat
(because the Microsoft guys will gang up and corrupt the pure mind of the Linux hacker),
but all other combinations are legal. Two procedures are needed: MSDudeArrives and
LinuxDudeArrives, called by a Microsoft employee or a Linux hacker when he/she
arrives at the river bank. The procedures arrange the arriving MS and Linux dudes in safe
boatloads; once the boat is full, one thread calls RowBoat, and after the call to RowBoat
the three procedure invocations return. There should also be no undue waiting, which
means that MS and Linux dudes should not wait if there are enough of them for a safe
boatload.

Here's the proposed solution:

    int numLinuxDudes = 0, numMSDudes = 0;
    Lock *mutex;
    Condition *linuxwait, *mswait;

    void RowBoat() { printf("Row, row, row your boat"); }

    void LinuxDudeArrives() {
        mutex->Acquire();
        if (numLinuxDudes == 2) {
            /* Fix: numLinuxDudes -= 2; */
            linuxwait->Signal();
            linuxwait->Signal();
            RowBoat();
        } else if (numMSDudes == 1 && numLinuxDudes == 1) {
            /* Fix: numMSDudes--; numLinuxDudes--; */
            mswait->Signal();
            linuxwait->Signal();
            RowBoat();
        } else {
            numLinuxDudes++;
            linuxwait->Wait(mutex);
            numLinuxDudes--;              /* Remove this line to fix */
        }
        mutex->Release();
    }

    void MSDudeArrives() {
        mutex->Acquire();
        if (numMSDudes == 2) {

When the system switches back to Linux3, Linux3 signals Linux1 and Linux2 and calls
RowBoat. But consider what happens to Linux1 and Linux2: they both go back to the
ready queue, but according to the semantics of condition variables they must reacquire
the mutex, and this will happen later than Linux4. Eventually Linux3 releases the mutex
and Linux4 gets in (Linux1 and Linux2 are behind him in the mutex's queue). Linux4
sees numLinuxDudes == 2 (remember that Linux1 and Linux2 are still inside the wait,
trying to reacquire the mutex), so he sends two useless signals and mistakenly calls
RowBoat, even though he is the only one on the shore. A similar scenario occurs if
Microsoft1 arrives instead of Linux4: Microsoft1 will likewise send two useless signals
and call RowBoat without enough people for the boat. All this happens because the
counts of Linux/Microsoft dudes are updated not before the members of a boatload are
signaled, but after. To fix this code, you need to update numLinuxDudes and
numMSDudes right before the Linux/MS dudes are signaled to enter the boat, as shown
by the "Fix" comments in the code above.

The solution has another problem: it does not always prevent unnecessary waiting. An
order of arrivals that causes waiting even though a boat could be sent is the following:

    MSDudeArrives
    MSDudeArrives
    LinuxDudeArrives
    LinuxDudeArrives

With this order of arrivals, two Linux dudes should be sent across the river with one MS
dude, but that will not happen. LinuxDudeArrives() only checks that (numMSDudes == 1)
in its second if statement, when it should really check that (numMSDudes >= 1), because
as long as there are any MS dudes at this point a boat can be sent, not just when there is
exactly one.

Write a solution to the dining philosophers problem using locks and condition variables.
Your solution must prevent philosopher starvation.

    /* Each philosopher is a thread that does the following: */
    philosopherMain(int myId, Table *table) {
        while (1) {
            table->startEating(myId);
            eat();
            table->doneEating(myId);
        }
    }

    typedef enum stick_status { STICK_FREE, STICK_USED } stick_status;
    const static int NOT_WAITING = -999;

    class Table {
    public:
        Table(int nSeats);
        void startEating(int philosopherId);
        void doneEating(int philosopherId);
    private:
        mutex lock;
        cond cv;
        int nseats;
        stick_status stickStatus[];
        int entryTime[];
        int currentTime;
    };

    void Table::startEating(int id) {
        lock.acquire();
        entryTime[id] = currentTime++;   /* take a "ticket" to prevent starvation */
        while (!okToGo(id)) {
            cv.wait(&lock);
        }
        stickStatus[stickLeft(id)] = STICK_USED;
        stickStatus[stickRight(id)] = STICK_USED;
        entryTime[id] = NOT_WAITING;
        lock.release();
    }

    void Table::doneEating(int id) {
        lock.acquire();
        stickStatus[stickLeft(id)] = STICK_FREE;
        stickStatus[stickRight(id)] = STICK_FREE;
        cv.broadcast(&lock);
        lock.release();
    }

    int Table::okToGo(int id) {
        assert(lock is held on entry);
        /*
         * OK to go if both the left and right sticks are free AND, for each
         * stick, my neighbor is not waiting or my neighbor's waiting number
         * is larger than mine.
         */
        if (stickStatus[stickLeft(id)] != STICK_FREE) {
            return 0;
        }
        if (stickStatus[stickRight(id)] != STICK_FREE) {
            return 0;
        }
        if (entryTime[seatRight(id)] != NOT_WAITING &&
            entryTime[seatRight(id)] < entryTime[id]) {
            return 0;
        }
        if (entryTime[seatLeft(id)] != NOT_WAITING &&
            entryTime[seatLeft(id)] < entryTime[id]) {
            return 0;
        }
        return 1;
    }

    Stack::Stack() {
        stackp = MaxStackSize;
        e = new Exception();
        sem = new Semaphore(1);
    }

    int Stack::Pop(void) {
        P(sem);
        if (stackp == MaxStackSize) {
            e->SetErrorMsg("Popping empty stack");
            e->SetErrorLocation("Stack::Pop()");
            throw(e);
            /* Error: before throwing the exception, we must release the lock
             * (i.e. V(sem)), or the stack object will never be accessible to
             * any process again. */
        }
        V(sem);
        return s[stackp++];
        /* Error: we are incrementing stackp after releasing the lock! */
    }

    void Stack::Push(int item) {
        P(sem);
        if (stackp == 0) {
            e->SetErrorMsg("Pushing to a full stack");
            e->SetErrorLocation("Stack::Push()");
            throw(e);
            /* Error: before throwing the exception, we must release the lock
             * (i.e. V(sem)), or the stack object will never be accessible to
             * any process again. */
        }
        s[--stackp] = item;
        V(sem);
    }

Consider the following program fragment:

    P(s1);
    a++;
    P(s2);
    b++;

    V(s2);
    V(s1);

s1, s2, s3 and s4 are semaphores. All variables are automatic, that is, each thread has a
local copy of a and b that it modifies. Now, consider two threads running this fragment
of code simultaneously: can there be a deadlock? Why, or why not?
Yes, there can be a deadlock. Consider the scenario where thread 1 starts by locking
semaphore s1 and then gets switched out while thread 2 runs. Thread 2 may start running
with variable a set to 0 (remember, because the variables are automatic, each thread has
its own independent set). Thread 2 will acquire semaphore s2, then proceed to acquire s3.
Then, if (b < 0 && a <= 0), it will try to acquire s1. This forces thread 2 to wait for
thread 1, which holds this semaphore. Now, thread 1 may proceed but will block when it
tries to acquire semaphore s3. This forms a cyclic wait, and a deadlock occurs.

Solve problems 7.8 and 7.9 from the textbook.

7.8 The Sleeping-Barber Problem. A barbershop consists of a waiting room with n chairs
and the barber room containing the barber chair. If there are no customers to be served,
the barber goes to sleep. If a customer enters the barbershop and all chairs are occupied,
then the customer leaves the shop. If the barber is busy but chairs are available, then the
customer sits in one of the free chairs. If the barber is asleep, the customer wakes up the
barber. Write a program to coordinate the barber and the customers.

    customers = 0;
    barbers = 1;
    mutex = 1;
    waiting = 0;

    void barber(void)
    {
        while (true) {
            wait(customers);
            wait(mutex);
            waiting = waiting - 1;
            signal(barbers);
            signal(mutex);
            cut_hair();
        }
    }

    void customer(void)
    {
        wait(mutex);
        if (waiting < NUMBER_OF_CHAIRS) {
            waiting++;
            signal(customers);
            signal(mutex);
            wait(barbers);
            get_haircut();
        } else {
            signal(mutex);
        }
    }
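For reference, here is a runnable POSIX translation of the same protocol (an added sketch, not part of the original solution). NUMBER_OF_CHAIRS, the number of customer threads, and the stub cut_hair()/get_haircut() bodies are arbitrary choices, and the barbers semaphore is initialized to 0 here so that a customer proceeds only after the barber has actually taken a waiting customer, whereas the pseudocode above starts it at 1.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUMBER_OF_CHAIRS 5

    static sem_t customers;          /* counts waiting customers, initially 0 */
    static sem_t barbers;            /* counts available barbers, initially 0 */
    static sem_t mutex;              /* protects 'waiting', initially 1 */
    static int waiting = 0;

    static void cut_hair(void)    { puts("barber: cutting hair"); sleep(1); }
    static void get_haircut(void) { puts("customer: getting haircut"); }

    static void *barber(void *arg) {
        (void)arg;
        for (;;) {
            sem_wait(&customers);    /* sleep until a customer arrives */
            sem_wait(&mutex);
            waiting = waiting - 1;
            sem_post(&barbers);      /* the barber is now taking a customer */
            sem_post(&mutex);
            cut_hair();
        }
        return NULL;
    }

    static void *customer(void *arg) {
        (void)arg;
        sem_wait(&mutex);
        if (waiting < NUMBER_OF_CHAIRS) {
            waiting++;
            sem_post(&customers);    /* wake the barber if he is asleep */
            sem_post(&mutex);
            sem_wait(&barbers);      /* wait until the barber is free */
            get_haircut();
        } else {
            sem_post(&mutex);        /* shop full: leave */
        }
        return NULL;
    }

    int main(void) {
        sem_init(&customers, 0, 0);
        sem_init(&barbers, 0, 0);
        sem_init(&mutex, 0, 1);

        pthread_t b, c[8];
        pthread_create(&b, NULL, barber, NULL);
        for (int i = 0; i < 8; i++)
            pthread_create(&c[i], NULL, customer, NULL);
        for (int i = 0; i < 8; i++)
            pthread_join(c[i], NULL);
        /* The barber thread loops forever; the process exits once all
         * customers have been served or have left. */
        return 0;
    }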