

Pir Mehr Ali Shah

Arid Agriculture University, Rawalpindi


Office of the Controller of Examinations
Mid Exam / Spring 2020 (Paper Duration 48 hours)
To be filled by Teacher
Course No.: CS-583                     Course Title: Operating System
Total Points: 36                       Date of Exam: 25-June-2020
Degree: BS(SE)                         Semester: 4th        Section: A, Eve

Q. No.           1   2   3   4   5   6   7   8   9   10   Obtained / Total Marks
Marks Obtained   __  __  __  __  __  __  __  __  __  __   __ /36 = __ /18

Total Marks in Words:
Name of the teacher who taught the course: Dr. Kashif Sattar
Signature of teacher / Examiner:

To be filled by Student

Registration No.: 18-ARID-3040 Name: Sheikh Babar Ali

Note:
1. The approximate paper-solving time is 2.5 hours; however, submission is allowed for up to 48 hours.
2. Try to upload the paper as soon as possible to avoid a DOS error at the server due to bulk submissions in the late hours.
3. There are a total of five (5) questions in the paper.
4. Draw clearly and add diagrams/screenshots in your answer sheet (Word file) wherever required.
5. Make sure you have filled in your Name and Reg. No. in the space provided above.
6. Upload the paper only through a PC or laptop; mobile submission is not allowed.

Answer the following questions.

Q. No. 1: (Points: 2+2+2=6)


a. Keeping in mind the various definitions of an operating system, consider whether the operating system should include applications such as web browsers and mail programs. Argue both that it should and that it should not, and support your answers.

b. How does the distinction between kernel mode and user mode function as a rudimentary form of protection (security)?

c. Describe three general methods for passing parameters to the operating system.

Solution. Q. No. 1:
A. An argument in favor of including popular applications with the operating system is that if the application is embedded within the operating system, it is likely to be better able to take advantage of features in the kernel and therefore have performance advantages over an application that runs outside of the kernel.
Arguments against embedding applications within the operating system, however, typically dominate:
1. The applications are applications, and not part of an operating system.
2. Any performance benefits of running within the kernel are offset by security vulnerabilities.
3. It leads to a bloated operating system.

B.
The distinction between kernel mode and user mode provides a rudimentary form of protection in the following manner. Certain instructions could be executed only when the CPU is in kernel mode. Similarly, hardware devices could be accessed only when the program is executing in kernel mode. Control over when interrupts could be enabled or disabled is also possible only when the CPU is in kernel mode. Consequently, the CPU has very limited capability when executing in user mode, thereby enforcing protection of critical resources.
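
As an illustration (not part of the original answer, and assuming GCC or Clang on x86-64 Linux), the short sketch below tries to run a privileged instruction from user mode; the CPU refuses to execute it and raises a general-protection fault, which the kernel delivers to the process as SIGSEGV.

#include <signal.h>
#include <unistd.h>

/* Async-signal-safe handler: only write() and _exit(). */
static void on_fault(int sig) {
    (void)sig;
    static const char msg[] = "privileged instruction refused in user mode\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(0);
}

int main(void) {
    signal(SIGSEGV, on_fault);
    /* HLT may be executed only in kernel mode; at user level the CPU
       raises a general-protection fault instead of halting. */
    __asm__ volatile ("hlt");
    return 1;   /* never reached */
}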

C.
Three general methods are used to pass parameters to the operating system:
1. Simplest: pass the parameters in registers.
   • In some cases, there may be more parameters than registers.
2. Parameters are stored in a block, or table, in memory, and the address of the block is passed as a parameter in a register.
3. Parameters are placed, or pushed, onto the stack by the program and popped off the stack by the operating system.
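
As a concrete illustration of the first method (not part of the original answer, and assuming x86-64 Linux with glibc), the syscall() wrapper below places the system-call number and the three write() arguments in registers before trapping into the kernel.

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    static const char msg[] = "parameters passed in registers\n";
    /* The wrapper loads SYS_write and the three arguments into rax,
       rdi, rsi and rdx, then executes the syscall instruction. */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}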

Q. No. 2: (Points: 2+2+2=6)


a. It is sometimes difficult to achieve a layered approach if two components of the operating system are dependent on each other. Identify a scenario in which it is unclear how to layer two system components that require tight coupling of their functionalities.

b. Some computer systems provide multiple register sets. Describe what happens when a context switch occurs if the new context is already loaded into one of the register sets. What happens if the new context is in memory rather than in a register set and all the register sets are in use?

c. Assume that an operating system maps user-level threads to the kernel using the many-to-many model and that the mapping is done through LWPs. Furthermore, the system allows developers to create real-time threads for use in real-time systems. Is it necessary to bind a real-time thread to an LWP? Explain.

Solution. Q. No. 2:
A.
The virtual memory subsystem and the storage subsystem are typically tightly coupled and require careful design in a layered system because of the following interactions. Many systems allow files to be mapped into the virtual memory space of an executing process. On the other hand, the virtual memory subsystem typically uses the storage system to provide the backing store for pages that do not currently reside in memory. Also, updates to the file system are sometimes buffered in physical memory before they are flushed to disk, thereby requiring careful coordination of the usage of memory between the virtual memory subsystem and the file system.
For example, two components of an OS that are closely related to each other are the virtual memory subsystem and the storage system. Having such strong dependencies between components makes it difficult to divide them into layers.
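
To make the coupling concrete, here is a minimal POSIX sketch (not part of the original answer; the file path is only an example and must name a readable, non-empty file). Mapping a file into the address space with mmap() leaves the virtual memory subsystem responsible for fetching file pages on demand, so the two subsystems have to cooperate.

#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);      /* example file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* The mapping is backed by the file: touching it triggers demand paging. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, st.st_size, stdout);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
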
B.
The CPU's current-register-set pointer is changed to point to the set containing the new context, which takes very little time. If the new context is in memory, one of the contexts already in a register set must be chosen and moved to memory, and the new context must be loaded from memory into the set. This process takes a little more time than on systems with one set of registers, depending on how a replacement victim is selected.

C.
Yes. Timing is crucial to real-time applications. If a thread is marked as real-time but is not bound to an LWP, the thread may have to wait to be attached to an LWP before running. Consider what happens if a real-time thread is running (is attached to an LWP) and then proceeds to block. While the real-time thread is blocked, the LWP it was attached to has been assigned to another thread. When the real-time thread has been scheduled to run again, it must first wait to be attached to an LWP.
By binding an LWP to a real-time thread, we ensure that the thread will be able to run with minimal delay once it is scheduled.
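
As an illustration (not part of the original answer), the POSIX-threads sketch below shows one way such a binding is requested in practice: on an M:N implementation such as classic Solaris, creating the thread with system contention scope (PTHREAD_SCOPE_SYSTEM) effectively dedicates an LWP to it, so it does not have to wait for a free LWP. The function rt_work is a hypothetical placeholder for the real-time work.

#include <pthread.h>
#include <stdio.h>

static void *rt_work(void *arg) {
    (void)arg;
    /* ... time-critical processing would go here ... */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* Request system contention scope: the thread competes for the CPU
       directly on its own kernel-level entity (an LWP on M:N systems). */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    if (pthread_create(&tid, &attr, rt_work, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}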

Q. No. 3: (Points: 5+1+1+1=8)


Consider the following set of processes, with the length of the CPU burst time given in
milliseconds:
Process   Burst Time   Priority
P1        2            2
P2        1            1
P3        8            4
P4        4            3
P5        5            2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.

a. Draw four Gantt charts that illustrate the execution of these processes using the following scheduling algorithms: FCFS, SJF, non-preemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 2).

b. What is the waiting time of each process for each of these scheduling algorithms?

c. What is the turnaround time of each process for each of the scheduling algorithms in part a?

d. Which of the algorithms results in the minimum average waiting time (over all processes)?

Solution. Q. No. 3:

FCFS (First Come First Serve)


P1 P2 P3 P4 P5
0 2 3 11 15 20
Performance Statistics:
Process   Arrival Time   Burst Time   Priority   Finish Time   Turnaround Time   Waiting Time
P1        0              2            2          2             2                 0
P2        0              1            1          3             3                 2
P3        0              8            4          11            11                3
P4        0              4            3          15            15                11
P5        0              5            2          20            20                15
Average                                                        10.2              6.2

SJF (Non-Preemptive)
P2 P1 P4 P5 P3
0  1  3  7  12 20
Performance Statistics:
Process   Arrival Time   Burst Time   Priority   Finish Time   Turnaround Time   Waiting Time
P1        0              2            2          3             3                 1
P2        0              1            1          1             1                 0
P3        0              8            4          20            20                12
P4        0              4            3          7             7                 3
P5        0              5            2          12            12                7
Average                                                        8.6               4.6

SJF (Preemptive)
In this case all the processes arrive at the same time, so the schedule turns out to be the same as for non-preemptive SJF.
P2 P1 P4 P5 P3
0  1  3  7  12 20
Process   Arrival Time   Burst Time   Priority   Finish Time   Turnaround Time   Waiting Time
P1        0              2            2          3             3                 1
P2        0              1            1          1             1                 0
P3        0              8            4          20            20                12
P4        0              4            3          7             7                 3
P5        0              5            2          12            12                7
Average                                                        8.6               4.6

Non-preemptive Priority
P2 P1 P5 P4 P3
0 1 3 8 12 20
Performance Statistics:
Process   Arrival Time   Burst Time   Priority   Finish Time   Turnaround Time   Waiting Time
P1        0              2            2          3             3                 1
P2        0              1            1          1             1                 0
P3        0              8            4          20            20                12
P4        0              4            3          12            12                8
P5        0              5            2          8             8                 3
Average                                                        8.8               4.8
Round Robin (quantum = 2)
P1 P2 P3 P4 P5 P3 P4 P5 P3 P5 P3
0  2  3  5  7  9  11 13 15 17 18 20
Performance Statistics:
Process   Arrival Time   Burst Time   Priority   Finish Time   Turnaround Time   Waiting Time
P1        0              2            2          2             2                 0
P2        0              1            1          3             3                 2
P3        0              8            4          20            20                12
P4        0              4            3          13            13                9
P5        0              5            2          18            18                13
Average                                                        11.2              7.2

Average Waiting Time:

Algorithm                   Average Waiting Time
FCFS                        6.2
SJF (non-preemptive)        4.6
SJF (preemptive)            4.6
Non-preemptive priority     4.8
Round Robin (q = 2)         7.2
Therefore, SJF (either the non-preemptive or the preemptive algorithm, since they give the same result in this case) has the minimum average waiting time.
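
As a quick cross-check of the FCFS and non-preemptive SJF averages above, here is a small C sketch (not part of the original answer sheet). It assumes all processes arrive at time 0, so waiting time equals start time and turnaround time equals completion time.

#include <stdio.h>

static void report(const char *name, const int burst[], const int order[], int n) {
    int t = 0;
    double wt_sum = 0, tat_sum = 0;
    for (int i = 0; i < n; i++) {
        int p = order[i];
        wt_sum  += t;            /* waiting time = start time (arrival = 0) */
        t       += burst[p];
        tat_sum += t;            /* turnaround time = completion time */
    }
    printf("%-22s avg TAT = %4.1f   avg WT = %4.1f\n", name, tat_sum / n, wt_sum / n);
}

int main(void) {
    int burst[] = {2, 1, 8, 4, 5};   /* P1..P5 */
    int fcfs[]  = {0, 1, 2, 3, 4};   /* arrival order        */
    int sjf[]   = {1, 0, 3, 4, 2};   /* shortest burst first */
    report("FCFS", burst, fcfs, 5);                 /* 10.2 and 6.2 */
    report("SJF (non-preemptive)", burst, sjf, 5);  /* 8.6 and 4.6  */
    return 0;
}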

Q. No. 4: (Points: 6+2=8)


The following UNIX terminal output shows the currently available processes, where PID is the process ID and PRI represents the priority of the process (a smaller number means a higher priority). The arrival time and burst time requirement are given in front of each process. If two processes have the same priority, select the process with the smaller arrival time. Keep in mind that the process IDs are not always incremental, so pay attention to the arrival times.
a) Design a Gantt chart using a priority-based preemptive scheduling algorithm.
b) Calculate the average Turnaround Time (TAT) for all the processes.
Solution. Q. No. 4:

P6 P10 P8 P7 P9 P3 P5 P2 P1 P4
0 14 19 27 34 47 51 69 75 77 87
Performance Statistics:
Process   Arrival Time   Burst Time   Priority   Finish Time   Turnaround Time   Waiting Time
P1        43             2            63         77            34                32
P2        36             6            46         75            39                33
P3        20             4            31         51            31                27
P4        60             10           62         87            27                17
P5        30             18           63         69            39                21
P6        0              14           46         14            14                0
P7        15             7            31         34            19                12
P8        12             8            31         27            15                7
P9        15             13           31         47            32                19
P10       10             5            31         19            9                 4

Avg Turnaround Time = (34+39+31+27+39+14+19+15+32+9)/10 = 25.9 units
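
As a cross-check of this average (not part of the original answer), the small C sketch below recomputes TAT = Finish Time - Arrival Time for the ten processes in the table above and averages the results.

#include <stdio.h>

int main(void) {
    /* P1..P10, copied from the performance-statistics table */
    int arrival[10] = {43, 36, 20, 60, 30,  0, 15, 12, 15, 10};
    int finish[10]  = {77, 75, 51, 87, 69, 14, 34, 27, 47, 19};
    double sum = 0;
    for (int i = 0; i < 10; i++) {
        int tat = finish[i] - arrival[i];
        printf("P%-2d TAT = %d\n", i + 1, tat);
        sum += tat;
    }
    printf("average TAT = %.1f units\n", sum / 10);   /* 25.9 */
    return 0;
}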


Q. No. 5: (Points: 2+2+4=8)
Consider the given diagram of a multi-level queue (MLQ), where the ready queue is partitioned into separate queues. Each queue has a fixed assignment of process types. No priority is given to processes with small burst times. If you assign RR, use a time quantum from the set {2, 4, 8, 16}.
a) List the possible scheduling algorithms you would implement on each queue.
b) Which scheduling algorithm would you implement on the MLQ itself?
c) Explain your answer with the help of one example, using Gantt charts for each queue.

Solution. Q. No. 5:

A.
There is no restriction or formula dictating which algorithm to apply to each type of process; the choice is ours, but the following assignment is generally used:
System processes => Round Robin
Interactive processes => Shortest Job First
Batch system processes => First Come First Serve

B.
The scheduling algorithms I will implement on the MLQ itself are:
System processes => Shortest Job First (SJF)
Interactive processes => Shortest Job First (SJF)
Batch system processes => First Come First Serve (FCFS)

There is no restriction against applying a different algorithm to each type of process; we also have the authority to apply the same algorithm to different types of processes.
C.
System process
Processes Burst time
P1 2
P2 4
P3 8

Interactive process
Processes Burst time
P4 2
P5 4
P6 8

Batch processes
Processes Burst time
P7 2
P8 2
P9 2

Since system processes have the highest priority, the CPU executes them first, then the interactive processes, and then the batch processes.
Gantt chart
P1 P2 P3 P4 P5 P6 P7 P8 P9
0  2  6  14 16 20 28 30 32 34
If the CPU is executing a batch process and an interactive process arrives, the CPU switches to execute the interactive process.
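
The following small C sketch (not part of the original answer sheet) reproduces the Gantt chart above under the stated assumptions: all processes are available at time 0, each higher-priority queue is drained before the next one starts, SJF is applied to the system and interactive queues, and FCFS to the batch queue.

#include <stdio.h>
#include <stdlib.h>

typedef struct { const char *name; int burst; } Proc;

static int by_burst(const void *a, const void *b) {
    return ((const Proc *)a)->burst - ((const Proc *)b)->burst;   /* SJF order */
}

/* Run one queue to completion, printing each slice of the Gantt chart. */
static int run_queue(Proc *q, int n, int use_sjf, int t) {
    if (use_sjf) qsort(q, n, sizeof(Proc), by_burst);   /* FCFS keeps given order */
    for (int i = 0; i < n; i++) {
        printf("%s: %d -> %d\n", q[i].name, t, t + q[i].burst);
        t += q[i].burst;
    }
    return t;
}

int main(void) {
    Proc sys[]   = {{"P1", 2}, {"P2", 4}, {"P3", 8}};
    Proc inter[] = {{"P4", 2}, {"P5", 4}, {"P6", 8}};
    Proc batch[] = {{"P7", 2}, {"P8", 2}, {"P9", 2}};
    int t = 0;
    t = run_queue(sys,   3, 1, t);   /* system queue: SJF      */
    t = run_queue(inter, 3, 1, t);   /* interactive queue: SJF */
    t = run_queue(batch, 3, 0, t);   /* batch queue: FCFS      */
    printf("total time: %d\n", t);   /* 34, matching the chart above */
    return 0;
}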
