Any help on any question is appreciated. Thanks!
4. (2 points) Describe any benefits of using shared memory over message passing communication.
5. (2 points) Describe any benefits of using message passing over shared memory for communication.
6. (2 points) In what ways is the microkernel approach to OS structure similar to the modular approach? In what ways do they differ?
10. (4 points) There are five active processes P1, P2, P3, P4, and P5 given below. Apply multilevel queue scheduling (MQS) consisting of two queues (queue 1 has absolute priority over queue 2). Processes DO NOT change queues. NOTE: this is not MLFQ.

Process  CPU burst time  Arrival time  Priority queue
P1       12              0             2
P2       7               3             1
P3       9               6             2
P4       9               12            2
P5       4               13            1

Both queues use Round Robin scheduling, with Tq1 = 5 (priority queue 1) and Tq2 = 4 (priority queue 2). Show the Gantt chart and calculate individual and average waiting time, individual and average response time, and individual and average turnaround time.
11. (9 points) Consider this set of processes. Construct Gantt charts for the scheduling algorithms indicated below and compute the individual and average waiting time, individual and average response time, and individual and average turnaround time for each algorithm.

FCFS (non-preemptive), SJF (non-preemptive), and RR (TQ = 4):

Process  Arrival time  Burst time
P1       0             10
P2       2             8
P3       5             14
P4       7             6
P5       9             7

Provide Gantt charts and results for all three algorithms.
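As a starting point for the FCFS case, the bookkeeping can be sketched in Python (a sketch only, with the process names and table taken from the question; SJF and RR need the same metrics but a different dispatch rule). FCFS is non-preemptive: run processes in arrival order, each to completion. Waiting time = start - arrival, response time equals waiting time under FCFS, and turnaround time = completion - arrival.

```python
# Sketch: compute FCFS scheduling metrics for the process table in question 11.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time) tuples.
    Returns (name, waiting, response, turnaround) per process."""
    time, rows = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)   # CPU may idle until the process arrives
        time = start + burst         # non-preemptive: run the full burst
        waiting = start - arrival    # under FCFS, response time == waiting time
        rows.append((name, waiting, waiting, time - arrival))
    return rows

procs = [("P1", 0, 10), ("P2", 2, 8), ("P3", 5, 14), ("P4", 7, 6), ("P5", 9, 7)]
rows = fcfs(procs)
for row in rows:
    print(row)
print("average waiting:", sum(r[1] for r in rows) / len(rows))     # 15.0
print("average turnaround:", sum(r[3] for r in rows) / len(rows))  # 24.0
```

The same loop read off as a Gantt chart gives P1 0-10, P2 10-18, P3 18-32, P4 32-38, P5 38-45.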
Solution
Shared memory versus Message passing programming model
Shared Memory Model
In the shared memory programming model, tasks share a common address space, which they read and write asynchronously.
Mechanisms such as locks and semaphores may be used to control access to the shared memory.
An advantage of this model from the programmer’s point of view is that the notion of data “ownership” is lacking, so there is no need to specify explicitly the communication of data between tasks. Program development can often be simplified.
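The shared memory model above can be illustrated with Python's multiprocessing module (a minimal sketch; the worker/counter names are illustrative). Two processes increment one counter that lives in shared memory, and a lock serializes the updates so none are lost:

```python
# Sketch of the shared memory model: workers update one shared variable,
# with a Lock controlling access to it.
from multiprocessing import Process, Value, Lock

def worker(counter, lock, times):
    for _ in range(times):
        with lock:                # mutual exclusion around the shared variable
            counter.value += 1    # ordinary read-modify-write on shared memory

def run_demo(n_workers=2, times=1000):
    counter = Value("i", 0)       # an integer allocated in shared memory
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, times))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value

if __name__ == "__main__":
    print(run_demo())  # 2000: with the lock, no increments are lost
```

Note that neither worker "sends" anything: both simply read and write the same address, which is exactly the lack of explicit data ownership described above.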
An important disadvantage in terms of performance is that it becomes more difficult to understand and manage data locality.
Keeping data local to the processor that works on it conserves memory accesses, cache refreshes, and bus traffic, all of which occur when multiple processors use the same data. Unfortunately, controlling data locality is hard to understand and may be beyond the control of the average user.
Implementations:
On shared memory platforms, the native compilers translate user program variables into actual memory addresses, which are global.
No common distributed-memory implementations of this model currently exist. However, as mentioned previously in the Overview section, the KSR ALLCACHE approach provided a shared memory view of data even though the physical memory of the machine was distributed.
Message Passing Model
The message passing model exhibits the following characteristics:
A set of tasks that use their own local memory during computation. Multiple tasks can reside on the same physical machine and/or across an arbitrary number of machines.
Tasks exchange their data through communications by sending and receiving messages.
Data transfer usually requires cooperative operations to be performed by each process. For example, a send operation must have a matching receive operation.
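The matching send/receive pairing can be sketched with Python's multiprocessing queues (an illustration, not an MPI program; the producer/consumer names are made up). Each process only touches its own local data; every put by the sender is matched by a get on the receiver's side:

```python
# Sketch of the message passing model: tasks keep local state and exchange
# data only through explicit send (put) / receive (get) operations.
from multiprocessing import Process, Queue

def producer(q):
    for n in (1, 2, 3):
        q.put(n)           # "send": the value is copied into the channel
    q.put(None)            # sentinel message: tells the receiver we are done

def consumer(q, out):
    total = 0              # purely local state; no other task can touch it
    while True:
        msg = q.get()      # "receive": blocks until a matching send arrives
        if msg is None:
            break
        total += msg
    out.put(total)         # send the result back

def run_demo():
    q, out = Queue(), Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q, out))
    p1.start(); p2.start()
    p1.join(); p2.join()
    return out.get()

if __name__ == "__main__":
    print(run_demo())  # 6
```

Because the consumer works on copies delivered in messages, no lock is needed: there is no shared variable for the two processes to race on.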
Implementations:
From a programming perspective, message passing implementations usually comprise a library of subroutines that are embedded in source code. The programmer is responsible for determining all parallelism.
Historically, a variety of message passing libraries have been available since the 1980s. These implementations differed substantially from each other, making it difficult for programmers to develop portable applications.
In 1992, the MPI Forum was formed with the primary goal of establishing a standard interface for message passing implementations.
Part 1 of the Message Passing Interface (MPI) was released in 1994. Part 2 (MPI-2) was released in 1996. The MPI specifications are available on the web.
MPI is now the "de facto" industry standard for message passing, replacing virtually all other message passing implementations used for production work. Most, if not all, of the popular parallel computing platforms offer at least one implementation of MPI. A few offer a full implementation of MPI-2.
For shared memory architectures, MPI implementations usually do not use a network for task communications. Instead, they use shared memory (memory copies) for performance reasons.
It's a fairly fundamental difference. In a shared memory model, multiple workers all operate on the same data. This opens up a lot of the concurrency issues that are common in parallel programming.
Message passing systems make workers communicate through a messaging system. Messages keep everyone separated, so that workers cannot modify each other's data.
By analogy, let's say we are working with a team on a project together. In one model, we are all crowded around a table, with all of our papers and data laid out. We can only communicate by changing things on the table. We have to be careful not to all try to work on the same piece of data at once, or it will get confusing and things will get mixed up.
In a message passing model, we all sit at our own desks, with our own set of papers. When we want to, we can pass a paper to someone else as a "message", and that worker can now do what they want with it. We only ever have access to whatever is in front of us, so we never have to worry that somebody will reach over and change one of the numbers while we are in the middle of adding them up.
In the shared memory model, memory is shared by cooperating processes, which can exchange information by reading and writing data; in message passing, communication takes place by means of messages exchanged between the cooperating processes.
With shared memory, processes operate on the same data concurrently; with message passing, each process only ever works on its own private copies.
A message passing facility provides two operations: send(message) and receive(message). Messages can be of fixed or variable size.
Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. Message passing is also easier to implement than shared memory for interprocess communication.
In shared-memory systems, system calls are required only to establish shared-memory regions. Once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.
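This "set up once, then plain memory access" pattern can be sketched with Python's multiprocessing.shared_memory (Python 3.8+; a minimal single-process sketch): creating the SharedMemory object is where the kernel is involved, while subsequent reads and writes go through the buffer like ordinary memory:

```python
# Sketch: kernel involvement only at setup/teardown of the shared region;
# reads and writes in between are routine memory accesses.
from multiprocessing import shared_memory

def demo():
    # System call territory: the kernel creates and maps the shared region.
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        shm.buf[0] = 42          # ordinary memory write: no kernel assistance
        value = shm.buf[0]       # ordinary memory read
    finally:
        shm.close()
        shm.unlink()             # kernel involved again, only to tear down
    return value

if __name__ == "__main__":
    print(demo())  # 42
```

By contrast, every send/receive in a message passing system typically crosses into the kernel, which is the performance difference discussed below.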
Faster:
Shared memory allows maximum speed and convenience of communication, since it can be done at memory speeds within a computer. Shared memory can be faster than message passing, as message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.
Message passing models (Erlang, for example) don't have any shared state; all synchronization and communication is done by exchanging messages. Shared memory models communicate by reading from and writing to shared memory blocks, which are protected by semaphores or the like.


