
In Chapter 5, we discussed possible race conditions on various kernel data structures. Most scheduling algorithms maintain a run queue, which lists processes eligible to run on a processor. On multicore systems, there are two general options: (1) each processing core has its own run queue, or (2) a single run queue is shared by all processing cores. What are the advantages and disadvantages of each of these approaches?

Solution

The main advantage of giving each processing core its own run queue is that there is no contention over a single run queue when the scheduler runs simultaneously on two or more processors. When a scheduling decision must be made for a processing core, the scheduler need look no further than that core's private run queue. A disadvantage of a single shared run queue is that it must be protected with locks to avoid a race condition: a processing core may be available to run a thread, yet it must first acquire the lock to retrieve the thread from the shared queue. However, load balancing is generally not an issue with a single run queue, whereas when each processing core has its own run queue, some form of load balancing between the different run queues is required, as illustrated in the sketch below.
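A minimal C sketch may make the trade-off concrete. The names below (struct task, pick_next_shared(), pick_next_percore(), NR_CPUS) are hypothetical and chosen only for illustration, not taken from the text or any real kernel: the shared queue must take a lock on every dequeue, while the per-core queues avoid that lock but would need a separate load-balancing step (not shown) to migrate tasks between cores.

#include <pthread.h>
#include <stddef.h>

struct task {
    int tid;
    struct task *next;
};

/* Option (2): one run queue shared by all cores. Every dequeue must
 * acquire the same lock, so cores can contend with one another.
 * Assume rq->lock was initialized with pthread_mutex_init(). */
struct shared_rq {
    pthread_mutex_t lock;
    struct task *head;
};

struct task *pick_next_shared(struct shared_rq *rq)
{
    pthread_mutex_lock(&rq->lock);   /* possible cross-core contention */
    struct task *t = rq->head;
    if (t)
        rq->head = t->next;
    pthread_mutex_unlock(&rq->lock);
    return t;                        /* NULL means nothing is runnable */
}

/* Option (1): one run queue per core. Each queue is touched only by
 * its own core, so no lock is needed on the fast path, but the queues
 * can become unbalanced without periodic load balancing. */
#define NR_CPUS 4

struct percore_rq {
    struct task *head;
} run_queues[NR_CPUS];

struct task *pick_next_percore(int cpu)
{
    struct task *t = run_queues[cpu].head;
    if (t)
        run_queues[cpu].head = t->next;
    return t;
}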