L-2.1.3 Swapping, Fragmentation - Compaction
Swapping, Fragmentation & Compaction
Swapping
Swapping is a mechanism in which a process can be temporarily swapped out of main memory (moved) to secondary storage (disk), making that memory available to other processes.
• Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images
• Roll out, roll in – a swapping variant used with priority-based scheduling algorithms; a lower-priority process is swapped out so that a higher-priority process can be loaded and executed (see the sketch after this list)
• Major part of swap time is transfer time; total transfer time is directly proportional
to the amount of memory swapped
• Does the swapped-out process need to swap back in to the same physical addresses? It depends on the address-binding method
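The roll-out/roll-in idea can be sketched in a few lines. The process records, the fixed number of memory slots, and the evict-the-lowest-priority policy below are illustrative assumptions for this sketch, not a real scheduler's data structures:

```python
# Sketch: roll out, roll in for priority-based scheduling.
# Process records and the eviction policy are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    priority: int  # larger value = higher priority

memory: list[Process] = []          # processes currently resident in main memory
backing_store: list[Process] = []   # swapped-out images on disk
MEMORY_SLOTS = 3                    # pretend main memory holds three images

def admit(proc: Process) -> None:
    """Load proc, rolling out the lowest-priority resident process if needed."""
    if len(memory) >= MEMORY_SLOTS:
        victim = min(memory, key=lambda p: p.priority)
        if victim.priority < proc.priority:
            memory.remove(victim)
            backing_store.append(victim)   # roll out the lower-priority process
        else:
            backing_store.append(proc)     # no room; the newcomer waits on disk
            return
    memory.append(proc)                    # roll in / load the process

for p in [Process(1, 5), Process(2, 3), Process(3, 4), Process(4, 9)]:
    admit(p)
print("resident:", [p.pid for p in memory])         # pid 2 was rolled out
print("on disk:", [p.pid for p in backing_store])
```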
Swapping Time
• A 100 MB process swapping to a hard disk with a transfer rate of 50 MB/sec has a swap-out time of 2000 ms (2 seconds)
• Plus the swap-in of a same-sized process
• Total context-switch swapping component time of 4000 ms (4 seconds)
• Alternatively, assume the user process is 2048 KB in size and the standard hard disk where swapping takes place has a data transfer rate of about 1 MB per second. The actual transfer of the 2048 KB process to or from memory will then take 2048 KB / 1024 KB per second = 2 seconds (2000 ms) each way
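The arithmetic above can be packaged as a tiny helper; the function name is made up for this sketch, and the sample figures simply restate the numbers from these bullets:

```python
# Sketch only: estimate the swapping component of a context switch.

def swap_time_ms(process_size_mb: float, transfer_rate_mb_per_s: float) -> float:
    """Time to move one process image between memory and the backing store."""
    return process_size_mb / transfer_rate_mb_per_s * 1000.0  # milliseconds

out_ms = swap_time_ms(100, 50)              # 100 MB process, 50 MB/s disk -> 2000 ms
total_ms = out_ms + swap_time_ms(100, 50)   # plus swapping in a same-sized process
print(f"swap out: {out_ms:.0f} ms, total swap cost per context switch: {total_ms:.0f} ms")

# 2048 KB process on a 1 MB/s (1024 KB/s) disk: 2 seconds each way
print(f"2048 KB at 1 MB/s: {swap_time_ms(2048 / 1024, 1):.0f} ms per transfer")
```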
Swapping: Advantages and Disadvantages
Advantages of Swapping
1. It helps the CPU manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks simultaneously; therefore, processes do not have to wait very long before they are executed.
4. It improves main memory utilization.
Disadvantages of Swapping
1. If the computer system loses power during substantial swapping activity, the user may lose all information related to the program.
2. If the swapping algorithm is not good, swapping can increase the number of page faults and decrease overall processing performance.
Note:
1. In a single-tasking operating system, only one process occupies the user program area of memory and stays in memory until the process is complete.
2. In a multitasking operating system, when all the active processes cannot fit in main memory at the same time, a process is swapped out of main memory so that other processes can enter it.
Fragmentation
• Statistical analysis of first fit reveals that, given N allocated blocks, another 0.5N blocks will be lost to fragmentation; that is, one-third of memory may be unusable -> the 50-percent rule
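To see external fragmentation appear in practice, here is a small first-fit simulation. The memory size, the random mix of allocations and frees, and the hole-list bookkeeping are arbitrary assumptions chosen only to make holes visible; this is not the analysis behind the 50-percent rule itself:

```python
# Sketch: first-fit allocation over a hole list, to make external fragmentation visible.
import random

MEMORY_SIZE = 1000
holes = [(0, MEMORY_SIZE)]   # (start, size) free regions, kept sorted by start
allocations = {}             # pid -> (start, size)

def first_fit(pid, size):
    """Allocate from the first hole large enough, splitting off the remainder."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            allocations[pid] = (start, size)
            if hole_size > size:
                holes[i] = (start + size, hole_size - size)
            else:
                holes.pop(i)
            return True
    return False  # no single hole is big enough: external fragmentation

def free(pid):
    """Return a block to the hole list and merge it with adjacent holes."""
    holes.append(allocations.pop(pid))
    holes.sort()
    merged = [holes[0]]
    for start, size in holes[1:]:
        last_start, last_size = merged[-1]
        if last_start + last_size == start:   # adjacent holes -> coalesce
            merged[-1] = (last_start, last_size + size)
        else:
            merged.append((start, size))
    holes[:] = merged

random.seed(0)
pid = 0
for _ in range(200):
    if allocations and random.random() < 0.5:
        free(random.choice(list(allocations)))     # release a random block
    else:
        pid += 1
        first_fit(pid, random.randint(20, 120))    # request a random-sized block

free_total = sum(size for _, size in holes)
largest = max((size for _, size in holes), default=0)
print(f"{len(holes)} holes, {free_total} units free, largest hole {largest} units")
```

After many allocations and frees, the free space is typically scattered across several holes, so a request larger than the biggest single hole fails even though enough total memory is free.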
Compaction
If a program is moved out or terminates, it creates a hole (i.e., a contiguous unused area) in main memory. When a new process is to be moved in, it may be allocated one of the available holes. It is quite possible that main memory has too many small holes at a certain time. In such a situation, none of these holes is really large enough to be allocated to a new process that may be moving in: the main memory is too fragmented. It is, therefore, essential to attempt compaction. Compaction means the OS relocates the existing programs into contiguous regions, creating a free area large enough to allocate to a new process.
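A minimal sketch of that relocation, assuming a made-up segment table of (pid, start, size) entries: every allocated segment is slid toward address 0 so that all free space ends up as one hole at the top of memory.

```python
# Sketch: compaction of a segment table. The table and memory size are
# illustrative assumptions, not a real OS data structure.

MEMORY_SIZE = 1000

def compact(segments: list[tuple[int, int, int]]) -> tuple[list, tuple[int, int]]:
    """Slide every allocated segment toward address 0, in address order,
    leaving one large free hole at the top of memory."""
    next_free = 0
    relocated = []
    for pid, start, size in sorted(segments, key=lambda s: s[1]):
        relocated.append((pid, next_free, size))   # new base address for pid
        next_free += size
    hole = (next_free, MEMORY_SIZE - next_free)    # single contiguous hole
    return relocated, hole

# Three segments with small holes between them
segments = [(1, 0, 100), (2, 300, 150), (3, 700, 200)]
new_segments, hole = compact(segments)
print(new_segments)        # [(1, 0, 100), (2, 100, 150), (3, 250, 200)]
print("free hole:", hole)  # (450, 550): one 550-unit hole instead of three small ones
```

Sliding segments like this only works when addresses are bound at execution time, which is exactly the dynamic-relocation requirement noted in the points below.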
Compaction
• Reduce external fragmentation by compaction: shuffle memory contents to place all free memory together in one large block
• Compaction is possible only if relocation is dynamic, and is done at execution time
  – I/O problem: latch the job in memory while it is involved in I/O
  – Or do I/O only into OS buffers
https://www.youtube.com/watch?v=SqYigYLFvcI
https://www.youtube.com/watch?v=ZN-baY3x85o
https://www.youtube.com/watch?v=buRdtPIieOM