Itp109 Midterm 5-9
4. Round Robin (RR) Scheduling
It is a preemptive version of the FCFS algorithm. The process that enters the Ready queue first gets to be executed by the CPU first but is given a time limit. This limit is called the time quantum or time slice. The process enters the rear of the queue after its time slice expires.
Example:
A set of processes with their respective arrival times at the ready queue and the length of their next CPU
burst are given below.
Solution:
a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. At t = 3, process B
arrives and process A consumes its first time slice. Process A goes back to the ready queue.
b. Processes B (CPU burst of 4) and A (CPU burst of 5) are inside the ready queue (in that order) by
the time process A finishes executing at t = 3. The CPU scheduler will select process B to execute next.
Process B will start executing at t = 3. Process C arrives at the ready queue at t = 4. At t = 6, process D
arrives and process B consumes its first time slice. Process B goes back to the ready queue.
c. Processes A (CPU burst of 5), C (CPU burst of 5), D (CPU burst of 3), and B (CPU burst of 1) are inside the ready queue (in that order) by the time process B finishes executing at t = 6. The CPU scheduler will select process A to execute next. Process A will start executing at t = 6 and consumes its second time slice at t = 9. Process A goes back to the ready queue.
d. Processes C (CPU burst of 5), D (CPU burst of 3), B(CPU burst of 1), and A (CPU burst of 2) are
inside the ready queue (in that order) by the time process A finishes executing at t = 9. The CPU
scheduler will select process C to execute next. Process E arrives at the ready queue at t = 10. At t = 12,
process C consumes its first time slice. Process C goes back to the ready queue.
e. Processes D (CPU burst of 3), B (CPU burst of 1), A (CPU burst of 2), E (CPU burst of 2), and C
(CPU burst of 2) are inside the ready queue (in that order) by the time process C finishes executing at t =
12. The CPU scheduler will select process D to execute next. Process D will start executing at t = 12 and
ends at t = 15.
f. Processes B (CPU burst of 1), A (CPU burst of 2), E (CPU burst of 2), and C (CPU burst of 2) are inside the ready queue (in that order) by the time process D finishes executing at t = 15. The CPU scheduler will select process B to execute next. Process B will start executing at t = 15 and ends at t = 16.
g. Processes A (CPU burst of 2), E (CPU burst of 2), and C (CPU burst of 2) are inside the ready queue (in that order) by the time process B finishes executing at t = 16. The CPU scheduler will select process A to execute next. Process A will start executing at t = 16 and ends at t = 18.
h. Processes E (CPU burst of 2) and C (CPU burst of 2) are inside the ready queue (in that order) by
the time process A finishes executing at t = 18. The CPU scheduler will select process E to execute next.
Process E will start executing at t = 18 and ends at t = 20.
i. Process C (CPU burst of 2) is the only process remaining inside the ready queue at t = 20. The
CPU scheduler will select process C to execute next. Process C will start executing at t = 20 and ends at t
= 22.
The following are the waiting times for each of the five processes:
WTA = (0 - 0) + (6 - 3) + (16 - 9) = 10 ms
WTB = (3 - 3) + (15 - 6) = 9 ms
WTC = (9 - 4) + (20 - 12) = 13 ms
WTD = 12 - 6 = 6 ms
WTE = 18 - 10 = 8 ms
The following are the turnaround times for each of the five processes:
TAA = 18 - 0 = 18 ms
TAB = 16 - 3 = 13 ms
TAC = 22 - 4 = 18 ms
TAD = 15 - 6 = 9 ms
TAE = 20 - 10 = 10 ms
The RR scheduling algorithm guarantees that each process gets a fair share of CPU time. It prioritizes fast response time rather than waiting time and turnaround time. This algorithm is ideal for multitasking systems.
Caution must be observed in deciding the time slice since it directly affects the number of context switches that will occur. Too small a time slice results in many context switches, while too large a time slice makes RR perform much like FCFS. It is recommended that the time slice be greater than 80% of all the CPU burst times, so that most processes complete their CPU burst within a single time slice.
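The step-by-step RR schedule above can be reproduced with a short simulation. The sketch below is illustrative (the function name, tuple layout, and return format are choices made here, not part of the handout); it queues new arrivals ahead of a preempted process, matching the rule used in the example.

```python
from collections import deque

def round_robin(procs, quantum):
    """Round Robin simulation. procs: list of (name, arrival, burst) sorted
    by arrival time. Returns {name: (waiting_time, turnaround_time)}."""
    remaining = {n: b for n, _, b in procs}
    arrival = {n: a for n, a, _ in procs}
    ready, entered, stats = deque(), {}, {}
    t, i = 0, 0

    def admit(until):
        # move every process that has arrived by `until` into the ready queue
        nonlocal i
        while i < len(procs) and procs[i][1] <= until:
            n = procs[i][0]
            ready.append(n)
            entered[n] = procs[i][1]
            i += 1

    while i < len(procs) or ready:
        if not ready:
            t = max(t, procs[i][1])          # CPU idles until the next arrival
        admit(t)
        name = ready.popleft()
        stats.setdefault(name, [0, 0])
        stats[name][0] += t - entered[name]  # waiting time for this stint
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        admit(t)          # arrivals during the slice enter before the preempted process
        if remaining[name] > 0:
            ready.append(name)               # back to the rear of the queue
            entered[name] = t
        else:
            stats[name][1] = t - arrival[name]   # turnaround time
    return {n: tuple(v) for n, v in stats.items()}
```

Running it with the example's processes (A: 0/8, B: 3/4, C: 4/5, D: 6/3, E: 10/2) and a time slice of 3 ms reproduces the waiting and turnaround times computed above.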
5. Priority Scheduling
This algorithm may be non-preemptive or preemptive. The CPU scheduler chooses the process with the
highest priority to be executed next. Each process is assigned a priority which is usually expressed as an
integer. In this discussion, the lower the value of the integer, the higher its priority.
In preemptive priority scheduling, the currently executing process is preempted if a higher priority
process arrives in the ready queue. The preempted process will enter the rear of the queue. If two or
more processes have the same priority, the FCFS algorithm may be used.
Example:
A set of processes with their respective arrival times at the ready queue and the length of their next CPU
burst are given below.
a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst
of 8 so it will end at t = 8.
b. Processes B, C, and D are inside the ready queue (in that order) by the time process A finishes
executing at t = 8. The CPU scheduler will select process C to execute next since it has the highest
priority among the three. It will start executing at t = 8. It has a CPU burst of 5 so it will end at t = 13.
c. Processes B, D, and E are inside the ready queue (in that order) by the time process C finishes executing at t = 13. The CPU scheduler will choose between D and E because both have the same priority. Since process D arrived first, it will be executed next. It will start executing at t = 13. It has a CPU burst of 3 so it will end at t = 16.
d. Processes B and E remain inside the ready queue (in that order) at t = 16. The CPU scheduler will
select process E to execute next. It will start executing at t = 16. It has a CPU burst of 2 so it will end at t
= 18.
e. Process B is the only process remaining inside the ready queue at t = 18. The CPU scheduler will
select process B to execute next. It will start executing at t = 18. It has a CPU burst of 4 so it will end at t
= 22.
The following are the waiting times for each of the five processes:
WTA = 0 - 0 = 0 ms
WTB = 18 - 3 = 15 ms
WTC = 8 - 4 = 4 ms
WTD = 13 - 6 = 7 ms
WTE = 16 - 10 = 6 ms
The following are the turnaround times for each of the five processes:
TAA = 8 - 0 = 8 ms
TAB = 22 - 3 = 19 ms
TAC = 13 - 4 = 9 ms
TAD = 16 - 6 = 10 ms
TAE = 18 - 10 = 8 ms
If preemptive priority scheduling is used instead, the solution is as follows:
a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst of 8 but it cannot be assumed that it will end at t = 8.
b. Process B arrives at the ready queue at t = 3. Since process B has a higher priority, it will
preempt process A. Process A goes back to the ready queue. Process B will start executing at t = 3 but it
cannot be assumed that it will end at t = 7.
c. Process C arrives at the ready queue at t = 4. Since process C has a higher priority, it will preempt process B. Process B goes back to the ready queue. Process C will start executing at t = 4. Since it has the highest priority, it is safe to assume that it will end at t = 9.
d. Processes A (CPU burst of 5), B (CPU burst of 3), and D (CPU burst of 3) are inside the ready
queue (in that order) by the time process C finishes executing at t = 9. The CPU scheduler will select
process D to execute next. Process D will start executing at t = 9 but it cannot be assumed that it will end
at t = 12.
e. Process E arrives at the ready queue at t = 10. Since E has the same priority as D, it cannot
preempt D. Process D will continue to execute and it will end at t = 12.
f. Processes A (CPU burst of 5), B (CPU burst of 3), and E (CPU burst of 2) are inside the ready
queue (in that order) by the time process D finishes executing at t = 12. These will be executed
according to their priorities.
The following are the waiting times for each of the five processes:
WTA = (0 - 0) + (17 - 3) = 14 ms
WTB = (3 - 3) + (14 - 4) = 10 ms
WTC = 4 - 4 = 0 ms
WTD = 9 - 6 = 3 ms
WTE = 12 - 10 = 2 ms
The following are the turnaround time for each of the five processes:
TAA = 22 - 0 = 22 ms
TAB = 17 - 3 = 14 ms
TAC = 9 - 4 = 5 ms
TAD = 12 - 6 = 6 ms
TAE = 14 - 10 = 4 ms
In priority scheduling, the waiting time and turnaround time of higher priorities are minimized.
However, this may lead to starvation wherein low-priority processes may wait indefinitely in the ready
queue. To solve this, a method called aging can be used wherein the priority of a process gradually
increases the longer it stays in the ready queue.
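The aging idea can be shown with a small selection-function sketch. Everything here is an illustrative assumption: the function name, the linear aging rate, and the tuple layout are choices made for this example, not a standard formula.

```python
def pick_next(ready, now, aging_rate=1):
    """Non-preemptive priority selection with aging. Lower value = higher
    priority. ready: list of (name, arrival_time, base_priority).
    The effective priority value drops the longer a process has waited,
    so low-priority processes cannot starve indefinitely."""
    def effective(p):
        name, arrival_time, base = p
        return base - aging_rate * (now - arrival_time)  # waiting improves priority
    # ties are broken by arrival time (FCFS), as in the examples above
    return min(ready, key=lambda p: (effective(p), p[1]))
```

With aging enabled, a process with base priority 9 that has waited since t = 0 can overtake a priority-3 process that arrived at t = 20; with `aging_rate=0` the scheduler degenerates to plain priority selection.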
6. Multilevel Feedback Queue
This may be used by the operating system to determine the behavior pattern of a process. It uses several ready queues, each with a different priority. The figure below shows an example of a multilevel feedback queue.
Based on the given figure, queue Q0 has the highest priority while Qn has the lowest. The queues follow the RR algorithm except for the lowest queue, which follows the FCFS algorithm. Each RR queue is assigned a time slice that increases as its priority decreases.
All processes enter at the rear of the queue with the highest priority. A
process will be moved to the next lower-priority queue if it does not
complete its execution or request for an I/O operation within the given
time slice. Processes in the lower-priority queue can only execute if there
are no processes in the higher-priority queue. An executing process in the
lower-priority queue can also be preempted if a new process arrives at
the higher-priority queue.
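The demotion rule described above can be sketched as follows. This is a minimal illustration under simplifying assumptions (no I/O requests, no arrivals while running, and the class/method names are invented for this example); it only shows how exhausting a time slice moves a process down one level.

```python
from collections import deque

class MLFQ:
    """Minimal multilevel feedback queue sketch: one queue per level, each
    with its own time slice (growing as priority decreases). A process that
    does not finish within its slice is demoted one level."""
    def __init__(self, slices):
        self.slices = slices                        # e.g. [3, 6, 12]
        self.queues = [deque() for _ in slices]

    def admit(self, name, burst):
        # all processes enter at the rear of the highest-priority queue
        self.queues[0].append((name, burst))

    def run(self):
        order = []                                  # (name, level, ms used)
        while any(self.queues):
            # lower-priority queues run only when higher ones are empty
            level = next(i for i, q in enumerate(self.queues) if q)
            name, left = self.queues[level].popleft()
            used = min(self.slices[level], left)
            order.append((name, level, used))
            left -= used
            if left > 0:                            # demote (or stay at bottom)
                self.queues[min(level + 1, len(self.queues) - 1)].append((name, left))
        return order

sched = MLFQ([3, 6, 12])
sched.admit("A", 10)
sched.admit("B", 2)
print(sched.run())   # → [('A', 0, 3), ('B', 0, 2), ('A', 1, 6), ('A', 2, 1)]
```

Process A, with a 10 ms burst, is demoted twice before finishing, while the short process B completes within its first slice and never leaves Q0.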
OPERATING SYSTEMS HANDOUT 9
CPU Scheduling
Non-preemptive and Preemptive CPU Scheduling
CPU scheduling may be classified as:
1. Non-preemptive Scheduling
In non-preemptive scheduling, the CPU cannot be taken away from its currently executing process. The only time the CPU scheduler can assign
the CPU to another process in the ready queue is when the currently executing process terminates or enters into a Blocked state.
2. Preemptive Scheduling
In preemptive scheduling, the CPU can be taken away from its currently executing process. The currently executing process is sent back to the
ready queue and the CPU scheduler assigns the CPU to another process. This will happen if:
a. An interrupt occurred so the current process has to stop executing. The CPU is then assigned to execute the ISR of the requesting device
or process.
b. The priority of the process that enters the ready queue is higher than the currently executing process.
c. The time limit of a process for using the CPU has been exceeded. The CPU is then assigned to execute another process even though the running process has not yet completed its current CPU burst.
Preemptive scheduling is ideally used for interactive or real-time computing systems. Non-preemptive scheduling is good for batch processing
systems only. However, non-preemptive scheduling has fewer context switches than preemptive scheduling; therefore, the former has less
overhead.
CPU Scheduling Performance Criteria
The performance of a scheduling algorithm is usually judged against the following criteria:
1. CPU Utilization
The percentage of time the CPU is kept busy doing useful work is maximized.
2. Throughput
The number of processes completed per unit of time is maximized.
3. Turnaround Time
The time between the point a process is submitted and the time it finishes executing is minimized.
4. Response Time
The time between the submission of a request and the start of the system’s first response is minimized.
5. Waiting Time
The time a process spends inside the ready queue waiting to be executed by the CPU is minimized. Many consider this a good measure of a scheduling algorithm since the order of process execution directly affects it.
Note that a scheduling algorithm cannot optimize all the performance criteria because of conflicts; some criteria may be optimized while others are compromised. A scheduling algorithm is therefore usually designed to optimize only one or two criteria, depending on the type of computer system being used.
Fairness, meaning all processes are given equal opportunity to use the CPU, is another performance criterion.
1. First-Come, First-Served (FCFS)
It is a non-preemptive scheduling algorithm wherein the process that enters the Ready queue first gets to be executed by the CPU first. In choosing the next process to be executed, the CPU scheduler selects the process at the front of the Ready queue. Processes enter at the rear of the Ready queue.
Example:
A set of processes with their respective arrival times at the ready queue and the length of their next CPU burst are given below.
In the succeeding solutions, Gantt charts are used to illustrate the time each process starts and ends executing.
Solution:
a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst of 8 so it will end at t = 8.
b. Processes B, C, and D are inside the ready queue (in that order) by the time process A finishes executing at t = 8. The CPU scheduler will
select process B to execute next. It will start executing at t = 8. It has a CPU burst of 4 so it will end at t = 12.
c. Processes C, D, and E are inside the ready queue (in that order) by the time process B finishes executing at t = 12. The CPU scheduler will
select process C to execute next. It will start executing at t = 12.
It has a CPU burst of 5 so it will end at t = 17.
d. Processes D and E remain inside the ready queue (in that order) at t = 17. The CPU scheduler will select process D to execute next. It will
start executing at t = 17. It has a CPU burst of 3 so it will end at t = 20.
e. Process E is the only process remaining inside the ready queue at t = 20. The CPU scheduler will select process E to execute next. It will
start executing at t = 20. It has a CPU burst of 2 so it will end at t = 22.
The following are the waiting times for each of the five processes:
WTA = 0 - 0 = 0 ms
WTB = 8 - 3 = 5 ms
WTC = 12 - 4 = 8 ms
WTD = 17 - 6 = 11 ms
WTE = 20 - 10 = 10 ms
In the FCFS algorithm, if processes with longer CPU bursts arrive at the ready queue ahead of shorter processes, the waiting times of the shorter processes will be large. This increases the average waiting time of the system.
The following are the turnaround times for each of the five processes:
TAA = 8 - 0 = 8 ms
TAB = 12 - 3 = 9 ms
TAC = 17 - 4 = 13 ms
TAD = 20 - 6 = 14 ms
TAE = 22 - 10 = 12 ms
The FCFS scheduling algorithm is simple. Since choosing the next process to be executed is straightforward, the execution of the CPU scheduler
can be very fast.
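Because FCFS never preempts, its waiting and turnaround times follow directly from the arrival order. The short sketch below (function name and return format are choices made here, not from the handout) computes both metrics for the example:

```python
def fcfs(procs):
    """FCFS metrics. procs: list of (name, arrival, burst) in arrival order.
    Returns {name: (waiting_time, turnaround_time)}."""
    t, out = 0, {}
    for name, arrival, burst in procs:
        start = max(t, arrival)              # CPU may sit idle until arrival
        t = start + burst                    # runs its whole burst, no preemption
        out[name] = (start - arrival, t - arrival)
    return out

print(fcfs([("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]))
```

The output matches the waiting times (0, 5, 8, 11, 10 ms) and turnaround times (8, 9, 13, 14, 12 ms) computed above.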
However, FCFS favors CPU-bound processes, which generally yields a high average waiting time. CPU-bound processes tend to enter the ready queue often and have long CPU burst times, while I/O-bound processes spend most of their time in the Blocked state. When I/O-bound processes enter the ready queue, CPU-bound processes are already there. This can cause I/O-bound processes to wait for a long period of time and stack up at the rear of the ready queue. This is called the convoy effect. Because of this, the FCFS scheduling algorithm is not suitable for time-sharing and interactive computer systems.
2. Shortest Process First (SPF)
This algorithm is also called Shortest Job First (SJF). It is a non-preemptive scheduling algorithm wherein the process with the shortest CPU burst time is the one that will be executed first. If two or more processes have the same CPU burst time, the FCFS algorithm may be used.
Example:
A set of processes with their respective arrival times at the ready queue and the length of their next CPU burst are given below.
Solution:
a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst of 8 but it cannot be assumed that it will
end at t = 8.
b. Processes B, C, and D are inside the ready queue (in that order) by the time process A finishes executing at t = 8. The CPU scheduler will
select process D to execute next. It will start executing at t = 8. It has a CPU burst of 3 so it will end at t = 11.
c. Processes B, C, and E are inside the ready queue (in that order) by the time process D finishes executing at t = 11. The CPU scheduler will
select process E to execute next. It will start executing at t = 11. It has a CPU burst of 2 so it will end at t = 13.
d. Processes B and C remain inside the ready queue (in that order) at t = 13. The CPU scheduler will select process B to execute next. It will
start executing at t = 13. It has a CPU burst of 4 so it will end at t = 17.
e. Process C is the only process remaining inside the ready queue at t = 17. The CPU scheduler will select process C to execute next. It will
start executing at t = 17. It has a CPU burst of 5 so it will end at t = 22.
The following are the waiting times for each of the five processes:
WTA = 0 - 0 = 0 ms
WTB = 13 - 3 = 10 ms
WTC = 17 - 4 = 13 ms
WTD = 8 - 6 = 2 ms
WTE = 11 - 10 = 1 ms
The following are the turnaround times for each of the five processes:
TAA = 8 - 0 = 8 ms
TAB = 17 - 3 = 14 ms
TAC = 22 - 4 = 18 ms
TAD = 11 - 6 = 5 ms
TAE = 13 - 10 = 3 ms
SPF favors I/O-bound processes. It is optimal in terms of waiting time and turnaround time since shorter processes are executed ahead of the
longer processes.
However, the exact CPU burst of each process is practically impossible to determine in advance. One solution is to use exponential averaging, in which the next CPU burst of a process is predicted based on its past CPU bursts. With this method, however, the operating system cannot predict the CPU burst time if there is no historical data. A solution to this can be found in Multilevel Feedback Queues, discussed under that topic.
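Exponential averaging predicts the next burst as τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the most recent measured burst and τ(n) the previous prediction. The sketch below is illustrative; the initial guess `tau0` and α = 0.5 are assumed defaults, not values from the handout.

```python
def predict_next_burst(history, alpha=0.5, tau0=10):
    """Exponential averaging of CPU bursts:
        tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
    history: measured bursts, oldest first. tau0: initial guess used when
    there is no historical data yet (an assumed system default)."""
    tau = tau0
    for t in history:
        tau = alpha * t + (1 - alpha) * tau   # recent bursts weigh more
    return tau

print(predict_next_burst([6]))       # → 8.0
print(predict_next_burst([6, 4]))    # → 6.0
```

With α = 0.5, each new measurement and the running prediction contribute equally; a larger α makes the predictor react faster to recent bursts.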
Since SPF favors I/O-bound processes, it may take time for CPU-bound processes to be executed. And if a very large CPU-bound process finally gets executed, it will monopolize the CPU since SPF is non-preemptive. Because of this, the SPF scheduling algorithm may not be suitable for time-sharing and interactive computer systems since fast response times are not always guaranteed.
3. Shortest Remaining Time First (SRTF)
It is a preemptive version of the SPF algorithm. The currently executing process is preempted if a process whose CPU burst time is shorter than the remaining CPU burst time of the running process arrives in the ready queue. The preempted process will enter the rear of the queue.
Example:
A set of processes with their respective arrival times at the ready queue and the length of their next CPU burst are given below.
Solution:
a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst of 8 but it cannot be assumed that it will
end at t = 8.
b. Process B arrives at the ready queue at t = 3. Process B has a CPU burst of 4 while process A has a remaining CPU burst of 5. Since
process B is shorter, it will preempt process A. Process A goes back to the ready queue. Process B will start executing at t = 3 but it cannot be
assumed that it will end at t = 7.
c. Process C arrives at the ready queue at t = 4. Process C has a CPU burst of 5 while process B has a remaining CPU burst of 3. Since
process B is shorter, it will continue its execution. Process D arrives at the ready queue at t = 6. Process D has a CPU burst of 3 while process B
has a remaining CPU burst of 1. Since process B is shorter, it will continue its execution. Process B will end at t = 7.
d. Processes A (CPU burst of 5), C (CPU burst of 5), and D (CPU burst of 3) are inside the ready queue (in that order) by the time process B
finishes executing at t = 7. The CPU scheduler will select process D to execute next. Process D will start executing at t = 7 but it cannot be
assumed that it will end at t = 10.
e. Process E arrives in the ready queue by the time process D finishes executing at t = 10.
f. Processes A (CPU burst of 5), C (CPU burst of 5), and E (CPU burst of 2) are inside the ready queue (in that order) by the time process D
finishes executing at t = 10. The CPU scheduler will select process E to execute next. Process E will start executing at t = 10. Since no new process
arrives, it will end at t = 12.
g. Processes A and C remain inside the ready queue (in that order) at t = 12. Process A and C have the same CPU burst time. Since process
A is ahead of process C, the CPU scheduler will select process A to execute next. It will start executing at t = 12. It has a CPU burst of 5 so it will
end at t = 17.
h. Process C is the only process remaining inside the ready queue at t = 17. The CPU scheduler will select process C to execute next. It will
start executing at t = 17. It has a CPU burst of 5 so it will end at t = 22.
The following are the waiting times for each of the five processes:
WTA = (0 - 0) + (12 - 3) = 9 ms
WTB = 3 - 3 = 0 ms
WTC = 17 - 4 = 13 ms
WTD = 7 - 6 = 1 ms
WTE = 10 - 10 = 0 ms
The following are the turnaround times for each of the five processes:
TAA = 17 - 0 = 17 ms
TAB = 7 - 3 = 4 ms
TAC = 22 - 4 = 18 ms
TAD = 10 - 6 = 4 ms
TAE = 12 - 10 = 2 ms
The SRTF scheduling algorithm is optimal in terms of minimizing waiting time. It is ideal for interactive systems.
In addition to the problem of determining the exact CPU burst time of processes, another burden is tracking the remaining CPU burst time of running processes. It is also expected that SRTF will have more context switches compared to FCFS and SPF. For these reasons, the CPU scheduler will have additional overhead.
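The SRTF example above can be checked with a small step-by-step simulation. This is a sketch under stated assumptions: it advances time in 1 ms steps, keeps the ready set in a heap ordered by remaining burst (ties broken by arrival, matching step g above), and the function name and return format are invented for this example.

```python
import heapq

def srtf(procs):
    """SRTF simulation in 1 ms steps. procs: list of (name, arrival, burst).
    Returns {name: completion_time}; waiting and turnaround times follow
    from the arrival times."""
    procs = sorted(procs, key=lambda p: p[1])
    ready, done, t, i = [], {}, 0, 0
    while i < len(procs) or ready:
        # admit every process that has arrived by time t
        while i < len(procs) and procs[i][1] <= t:
            name, arrival, burst = procs[i]
            heapq.heappush(ready, (burst, arrival, name))  # tie-break by arrival
            i += 1
        if not ready:
            t = procs[i][1]                  # CPU idles until the next arrival
            continue
        left, arrival, name = heapq.heappop(ready)
        t += 1                               # run for one time unit, then re-check
        if left - 1 > 0:
            heapq.heappush(ready, (left - 1, arrival, name))
        else:
            done[name] = t
    return done
```

Running it with the example's processes (A: 0/8, B: 3/4, C: 4/5, D: 6/3, E: 10/2) yields completion times 17, 7, 22, 10, and 12 ms respectively, matching the walkthrough.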
Topic 6 - 7: File Management (Part I & II)
Objectives
You will be able to describe:
• The fundamentals of file management and the structure of the file management system
• File-naming conventions, including the role of extensions
• The difference between fixed-length and variable-length record format
• The advantages and disadvantages of contiguous, noncontiguous, and indexed file storage techniques
• Comparisons of sequential and direct file access
Contiguous Storage
• Records stored one after another
– Advantages:
• Any record can be found once the starting address and size are known
• Direct access is easy since every part of the file is stored in the same compact area
– Disadvantages:
• Files can’t be expanded easily, and fragmentation occurs
(Figure 8.7: Contiguous storage)
Noncontiguous Storage
• Allows files to use any available disk storage space
• A file’s records are stored in a contiguous manner if there is enough empty space
• Any remaining records, and all other additions to the file, are stored in other sections of the disk (extents)
– Extents are linked together with pointers
– The physical size of each extent is determined by the OS (usually 256 bytes)
Access Methods
• Dictated by a file’s organization
• Most flexibility is allowed with indexed sequential files and least with sequential
• A file organized in sequential fashion can support only sequential access to its records
– Records can be of fixed or variable length
(Figure 8.11: (a) Fixed-length records; (b) Variable-length records)
• The File Manager uses the address of the last byte read to access the next sequential record
• The current byte address (CBA) must be updated every time a record is accessed
Levels in a File Management System (continued)
• Verification occurs at every level of the file management system:
– Directory level: the file system checks to see if the requested file exists
– The access control verification module determines whether access is allowed
– The logical file system checks to see if the requested byte address is within the file’s limits
– The device interface module checks to see whether the storage device exists
Access Control
• Each file management system has its own method to control file access
• Types:
– Access control matrix
– Access control lists
– Capability lists
– Lockword control
Access Control Lists
• Contain the names of only those users who may use the file; those denied any access are grouped under “WORLD”
• The list is shortened by putting users into categories:
– SYSTEM: personnel with unlimited access to all files
– OWNER: absolute control over all files created in own account
– GROUP: all users belonging to the appropriate group have access
– WORLD: all other users in the system
Capability Lists
• List every user and the files to which each has access
• Can control access to devices as well as to files
(Table 8.3: Capability Lists)
Lockwords
• A lockword is similar to a password but protects a single file
• Advantages:
– Requires the smallest amount of storage for file protection
• Disadvantages:
– Can be guessed by hackers or passed on to unauthorized users
– Generally doesn’t control the type of access to the file; anyone who knows the lockword can read, write, execute, or delete the file
Data Compression
• A technique used to save space in files
• Methods for data compression:
– Records with repeated characters: repeated characters are replaced with a count (e.g., ADAMSbbbbbbbbbb => ADAMSb10; 300000000 => 3#)
– Repeated terms: compressed by using symbols to represent the most commonly used words (e.g., in a university’s student database, common words like student, course, grade)
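The repeated-characters method is essentially run-length encoding. The sketch below is illustrative only: the `#` marker, the minimum run length, and the output format are assumptions made for this example, not the exact notation used in the handout's examples.

```python
def compress_repeats(record, marker="#", min_run=4):
    """Run-length sketch of the 'repeated characters' method: a long run of
    one character is replaced by the character, a marker, and the run length.
    Marker and threshold are illustrative choices, not a standard format."""
    out, i = [], 0
    while i < len(record):
        j = i
        while j < len(record) and record[j] == record[i]:
            j += 1                      # scan to the end of the current run
        run = j - i
        if run >= min_run:
            out.append(f"{record[i]}{marker}{run}")   # compress long runs
        else:
            out.append(record[i] * run)               # keep short runs as-is
        i = j
    return "".join(out)

print(compress_repeats("300000000"))          # → "30#8"
print(compress_repeats("ADAMS" + "b" * 10))   # → "ADAMSb#10"
```

The saving grows with the run length: the eight-zero run shrinks to three characters, at the cost of reserving the marker character.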
Summary
• The File Manager controls every file in the system
• Processes user commands (read, write, modify, create, delete, etc.) to interact with any file
• Manages access control procedures to maintain the integrity and security of the files under its control
• The File Manager must accommodate a variety of file organizations, physical storage allocation schemes, record types, and access methods
Summary (continued)
• Each level of the file management system is implemented with structured and modular programming techniques
• Verification occurs at every level of the file management system
• Data compression saves space in files
• Linux specifies five types of files used by the system
• VFS maintains an interface between system calls related to files and the file management code
Topic 5: Main Memory Management (Part II)
Objectives
You will be able to describe:
• The basic functionality of the memory allocation methods covered in this chapter: paged, demand paging, segmented, and segmented/demand paged memory allocation
• The influence that these page allocation methods have had on virtual memory
• The difference between a first-in first-out page replacement policy, a least-recently-used page replacement policy, and a clock page replacement policy
• Swapping process: to move in a new page, a resident page must be swapped back into secondary storage; this involves:
– Copying the resident page to the disk (if it was modified)
– Writing the new page into the empty page frame
– It requires close interaction between hardware components, software algorithms, and policy schemes
• Page fault handler: the section of the operating system that determines:
– Whether there are empty page frames in memory; if so, the requested page is copied from secondary storage
– Which page will be swapped out if all page frames are busy; this decision is directly dependent on the predefined policy for page removal
Figure 3.8: Working of a FIFO algorithm for a job with four pages (A, B, C, D) as it’s processed by a system with only two available page frames
Figure 3.9: Working of an LRU algorithm for a job with four pages (A, B, C, D) as it’s processed by a system with only two available page frames
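The behavior the two figures illustrate can be reproduced with a short fault counter. This is a sketch (the function name, the list-based frame model, and the reference strings are choices made here): it counts page faults for a reference string under FIFO or LRU with a fixed number of frames.

```python
def count_faults(refs, frames, policy="FIFO"):
    """Count page faults for a page reference string with `frames` page
    frames, under FIFO or LRU replacement. `memory` is kept in eviction
    order: index 0 is the next victim."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            if policy == "LRU":
                # a hit refreshes recency, so the page moves to the safe end
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) >= frames:
            memory.pop(0)        # evict: oldest (FIFO) or least recently used (LRU)
        memory.append(page)
    return faults
```

For example, with two frames and the reference string A B A C B, FIFO incurs 3 faults while LRU incurs 4; which policy wins depends on the reference pattern, although LRU is usually slightly better in practice.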
Segmented/Demand Paged Memory Allocation
• Subdivides segments into pages of equal size, smaller than most segments and more easily manipulated than whole segments. It offers:
– The logical benefits of segmentation
– The physical benefits of paging
• Removes the problems of compaction, external fragmentation, and secondary storage handling
• The addressing scheme requires the segment number, the page number within that segment, and the displacement within that page
Segmented/Demand Paged Memory Allocation (continued)
• This scheme requires the following four tables:
– Job Table: lists every job in process (one for the whole system)
– Segment Map Table: lists details about each segment (one for each job)
– Page Map Table: lists details about every page (one for each segment)
– Memory Map Table: monitors the allocation of the page frames in main memory (one for the whole system)
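The (segment, page, displacement) addressing scheme can be sketched as a table walk. Everything structural here is an assumption made for illustration: the dictionary-based tables, the 256-byte page size, and the function name stand in for the Segment Map Table and Page Map Table lookups described above.

```python
def translate(addr, smt, pmts, page_size=256):
    """Sketch of segmented/demand-paged address translation.
    addr: (segment, page, displacement). smt maps segment -> page map table
    id; pmts maps (pmt_id, page) -> frame number, where None means the page
    is not resident (a page fault). Table layouts are illustrative only."""
    seg, page, disp = addr
    pmt_id = smt[seg]                    # Segment Map Table lookup
    frame = pmts.get((pmt_id, page))     # Page Map Table lookup
    if frame is None:
        raise LookupError("page fault: page must be brought into memory first")
    return frame * page_size + disp      # physical address

smt = {0: "pmt0"}
pmts = {("pmt0", 0): 5, ("pmt0", 1): None}
print(translate((0, 0, 40), smt, pmts))   # frame 5 → 5*256 + 40 = 1320
```

A reference to a non-resident page (here page 1 of segment 0) raises instead of returning an address, standing in for the transfer of control to the page fault handler.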
Virtual Memory
• Allows programs to be executed even though they are not stored entirely in memory
• Requires cooperation between the Memory Manager and the processor hardware
• Advantages of virtual memory management:
– Job size is not restricted to the size of main memory
– Memory is used more efficiently
– Allows an unlimited amount of multiprogramming
– Eliminates external fragmentation and minimizes internal fragmentation
– Allows the sharing of code and data
– Facilitates dynamic linking of program segments
• Disadvantages:
– Increased processor hardware costs
– Increased overhead for handling paging interrupts
– Increased software complexity to prevent thrashing
Figure 3.18: Virtual memory management in Linux
Figure 3.19: An example of the Buddy algorithm in Linux
Summary
• Paged memory allocation allows efficient use of memory by allocating jobs in noncontiguous memory locations
• Increased overhead and internal fragmentation are problems in paged memory allocation
• In the demand paging scheme, a job is no longer constrained by the size of physical memory
• The LRU scheme results in slightly better efficiency compared to the FIFO scheme
• The segmented memory allocation scheme solves the internal fragmentation problem
Summary (continued)
• Segmented/demand paged memory allocation removes the problems of compaction, external fragmentation, and secondary storage handling
• Associative memory can be used to speed up the process
• Virtual memory allows programs to be executed even though they are not stored entirely in memory
• A job’s size is no longer restricted to the size of main memory by using the concept of virtual memory
• The CPU can execute instructions faster with the use of cache memory