Opesys2 04


MEMORY MANAGEMENT

INTRODUCTION TO MEMORY MANAGEMENT

 The sharing of the CPU by several processes requires
that the operating system keep several processes
(including the OS itself) in main memory at the same
time.

 The operating system should therefore have algorithms
for facilitating the sharing of main memory among
these processes (memory management).

THE CONCEPT OF ADDRESS BINDING

 Usually, a program resides on a disk as a binary
executable file. The program must then be brought
into main memory before the CPU can execute it.

 Depending on the memory management scheme, the
process may be moved between disk and memory
during its execution. The collection of processes on
the disk that are waiting to be brought into memory for
execution forms the job queue or input queue.

 The normal procedure is to select one of the processes
in the job queue and to load the process into memory.

Memory Management 1
 A user process may reside in any part of the physical
memory. Thus, although the address space of the
computer starts at 0, the first address of the user
process does not need to be 0.

 Multistep processing of a user program

[Diagram: a source program is translated by the compiler or
assembler into an object module (compile time); the linkage
editor combines it with other object modules into a load
module (load time); the loader, together with the system
library and dynamically loaded system libraries (dynamic
linking), produces the in-memory binary image (execution or
run time).]
 Addresses in a source program are generally symbolic
(such as LOC or ALPHA). A compiler will typically
bind these symbolic addresses to relocatable addresses
(such as 14 bytes from the beginning of a certain
module). The linkage editor or loader will then bind
the relocatable addresses to absolute addresses (such as
18000H). Each binding is a mapping from one address
space to another.

 The binding of instructions and data to memory
addresses may be done at any step along the way:

1. Compile Time. If it is known at compile time
where the process will reside in memory, then
absolute code can be generated.

For example, if it is known that a user process
resides starting at location R, then the generated
compiler code will start at that location and
extend up from there.

If, at some later time, the starting location
changes, then it will be necessary to recompile the
code.

2. Load Time. If it is not known at compile time
where the process will reside in memory, then the
compiler must generate relocatable code. In this
case, final binding is delayed until load time. If the
starting address changes, then the OS must reload
the user code to incorporate this changed value.

3. Execution Time. If the process can be moved
during its execution from one memory segment to
another, then binding must be delayed until run
time. Special hardware must be available for this
scheme to work. Most general-purpose operating
systems use this method.
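The difference between compile-time and load-time binding can be sketched in a few lines. This is a hypothetical illustration, not a real toolchain: the "compiler" emits relocatable offsets, and a loader binds them to absolute addresses by adding the load base.

```python
# Hypothetical sketch of load-time address binding. Relocatable code is
# represented as a list of offsets from the start of the module; the
# loader binds them to absolute addresses by adding the load base.

def bind_at_load_time(relocatable_offsets, load_base):
    """Bind relocatable addresses to absolute addresses at load time."""
    return [load_base + offset for offset in relocatable_offsets]

# "14 bytes from the beginning of the module" becomes an absolute address.
offsets = [0, 14, 28]
image_at_18000 = bind_at_load_time(offsets, 0x18000)

# If the starting location changes, the code is simply re-bound at the
# new base by reloading -- no recompilation is needed.
image_at_20000 = bind_at_load_time(offsets, 0x20000)
```

With compile-time binding, the absolute addresses would be baked into the code itself, so a change of base would force recompilation rather than a reload.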

 To obtain better memory-space utilization, dynamic
loading is often used.

With dynamic loading, a routine is not loaded until it is
called. All routines are kept on disk in a relocatable
load format.

Whenever a routine is called, the relocatable linking
loader is called to load the desired routine into
memory and to update the program’s address tables to
reflect this change.

The advantage of dynamic loading is that an unused
routine is never loaded. This scheme is particularly
useful when large amounts of code are needed to
handle infrequently occurring cases, such as error
routines. In this case, although the total program size
may be large, the portion that is actually used (and
hence actually loaded) may be much smaller.
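A minimal sketch of this idea follows; all names are hypothetical. A routine is "loaded" (here, simply constructed) only on its first call, and an address table is updated so that later calls go straight to the in-memory copy.

```python
# Sketch of dynamic loading: routines stay "on disk" until first use.

loaded = {}                          # address table: routine name -> routine
load_count = {"error_handler": 0}    # counts loader invocations

def load_routine(name):
    """Stands in for the relocatable linking loader."""
    load_count[name] += 1
    return lambda: f"{name} running"

def call(name):
    if name not in loaded:           # not yet in memory: load it now
        loaded[name] = load_routine(name)
    return loaded[name]()            # later calls use the loaded copy

call("error_handler")
call("error_handler")                # second call: no reload
```

An error routine that is never called is never loaded, which is exactly the saving the scheme aims for.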

LOGICAL AND PHYSICAL ADDRESS SPACE

 An address generated by the CPU is commonly
referred to as a logical (virtual) address whereas an
address seen by the memory unit is commonly referred
to as a physical address.

[Diagram: the CPU issues a logical address; the memory unit
receives a physical address.]

 The set of logical addresses generated by a program is
called the logical address space while the set
of all physical addresses corresponding to these logical
addresses is known as the physical address space.

 The compile-time and load-time address-binding
schemes result in an environment where the logical
and physical addresses are the same; hence, no mapping
of logical addresses to physical addresses is necessary.

 However, the execution-time address-binding scheme
results in different logical and physical addresses.

 The run-time mapping from logical to physical
addresses is done by a hardware device called the
memory-management unit (MMU).

 The hardware support necessary for this scheme is
similar to the ones discussed earlier. The base register
is now the relocation register. The value in the
relocation register is added to every address generated
by the user process at the time it is sent to memory.

For example, if the base is at 14000, then an attempt
by the user to address location 0 is dynamically
relocated to location 14000; an access to location 346
is mapped to location 14346.

[Diagram: the CPU sends logical address 346 to the MMU; the
MMU adds the value 14000 held in the relocation register and
sends physical address 14346 to memory.]
 Notice that the user program never sees the real
physical addresses. The program can create a pointer
to location 346, store it in memory, manipulate it,
compare it to other addresses – all as the number 346.
Only when it is used as a memory address is it
relocated relative to the base register.

The user program deals with logical addresses. The
memory-mapping hardware converts logical addresses
into physical addresses. The final location of a
referenced memory address is not determined until a
reference is made.

 There are now two different types of addresses: logical
addresses (in the range 0 to max) and physical
addresses (in the range R + 0 to R + max for a base
value of R). The user generates only logical addresses
and thinks that the process runs in locations 0 to max.
The user program supplies the logical addresses; these
must be mapped to physical addresses before they are
used.
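The relocation-register mapping is simple enough to capture in one function; the numbers below reuse the 14000 example above.

```python
# Sketch of dynamic relocation with a relocation (base) register.
# The user program only ever sees logical addresses 0..max; the MMU
# adds the base on every memory reference.

RELOCATION_REGISTER = 14000

def mmu(logical_address):
    """Map a logical address to its physical address."""
    return RELOCATION_REGISTER + logical_address

physical = mmu(346)    # logical 346 is relocated to 14346
```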

SWAPPING

 A process needs to be in memory to be executed. A
process can be temporarily swapped out of memory to
a fast secondary storage (fixed disk) and then brought
back into memory for continued execution.

[Diagram: processes P1 and P2 are swapped out of the user
space of main memory to secondary storage and swapped
back in; the operating system occupies the rest of main
memory.]

 For example, assume a multiprogrammed environment
with a round-robin CPU-scheduling algorithm. When
a quantum expires, the memory manager will start to
swap out the process that just finished, and to swap in
another process to the memory space that has been
freed.

 Take note that the quantum must be sufficiently large
that reasonable amounts of computing are done
between swaps.

 The context-switch time in swapping is fairly high.

Example:

Size of User Process = 1 MB
                     = 1,048,576 bytes
Transfer Rate of
Secondary Storage    = 5 MB/sec
                     = 5,242,880 bytes/sec

The actual transfer of the 1 MB process to or
from memory takes:

1,048,576 / 5,242,880 = 200 ms

Assuming that no head seeks are necessary and an
average latency of 8 ms, the swap time takes 208
ms. Since it is necessary to swap out and swap in,
the total swap time is then about 416 ms.
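The same arithmetic, written out:

```python
# Reproducing the swap-time calculation above: a 1 MB process at
# 5 MB/sec takes 200 ms to transfer; adding 8 ms average latency gives
# 208 ms per transfer, and a swap-out plus swap-in totals 416 ms.

process_size = 1_048_576        # bytes (1 MB)
transfer_rate = 5_242_880       # bytes per second (5 MB/sec)
latency = 0.008                 # 8 ms average latency, no head seeks

transfer_time = process_size / transfer_rate    # 0.2 s = 200 ms
one_way_swap = transfer_time + latency          # 0.208 s = 208 ms
total_swap_time = 2 * one_way_swap              # 0.416 s = 416 ms
```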

 For efficient CPU utilization, the execution time for
each process must be long relative to the swap
time. Thus, in a round-robin CPU-scheduling
algorithm, for example, the time quantum should be
substantially larger than 416 ms.

 Swapping is constrained by other factors as well. A
process to be swapped out must be completely idle.

Of particular concern is any pending I/O. A process
may be waiting for an I/O operation when it is desired
to swap that process to free up its memory. However,
if the I/O is asynchronously accessing the user memory
for I/O buffers, then the process cannot be swapped.

Assume that the I/O operation of process P1 was
queued because the device was busy. Then if P1 was
swapped out and process P2 was swapped in, the I/O
operation might attempt to use memory that now
belongs to P2.

 Normally a process that is swapped out will be
swapped back into the same memory space that it
occupied previously. If binding is done at assembly or
load time, then the process cannot be moved to
different locations. If execution-time binding is being
used, then it is possible to swap a process into a
different memory space, because the physical
addresses are computed during execution time.

MULTIPLE PARTITIONS

 In an actual multiprogrammed environment, many
different processes reside in memory, and the CPU
switches rapidly back and forth among these processes.

 Since the size of a typical process is much smaller than
that of main memory, the operating system divides
main memory into a number of partitions wherein
each partition may contain exactly one process.

[Diagram: main memory divided into the operating system's
area and three partitions holding Processes 1, 2, and 3.]

 The degree of multiprogramming is bounded by the
number of partitions.

 When a partition is free, the operating system selects a
process from the job queue and loads it into the free
partition. When the process terminates, the partition
becomes available for another process.

 There are two major memory management schemes
possible in handling multiple partitions:

1. Multiple Contiguous Fixed Partition Allocation

Example:

MFT Technique (Multiprogramming with a
Fixed number of Tasks) originally used by
the IBM OS/360 operating system.

2. Multiple Contiguous Variable Partition
Allocation

Example:

MVT Technique (Multiprogramming with a
Variable number of Tasks)

 Fixed Regions (MFT)

In MFT, the region sizes are fixed, and do not change
as the system runs.

As jobs enter the system, they are put into a job queue.
The job scheduler takes into account the memory
requirements of each job and the available regions in
determining which jobs are allocated memory.

Example:

Assume a 32K main memory divided into the
following partitions:

12K for the operating system
2K for very small processes
6K for average processes
12K for large jobs

[Diagram: memory from 0 to 32K, with the operating system
in the first 12K, followed by User Partition 1 (2K), User
Partition 2 (6K), and User Partition 3 (12K).]

 The operating system places jobs or processes entering
the memory in a job queue in a predetermined manner
(such as first-come first-served).

 The job scheduler then selects a job to place in
memory depending on the memory available.

Example:

[Diagram: the same 32K memory with the OS in 12K and user
partitions of 2K, 6K, and 12K; the job queue holds Job 1 (5K),
Job 2 (2K), Job 3 (3K), Job 4 (7K), Job 5 (7K), and so on.]

A typical memory management algorithm would:

1. Assign Job 1 to User Partition 2
2. Assign Job 2 to User Partition 1
3. Job 3 (3K) needs User Partition 2 (6K) since it is
too small for User Partition 3 (12K). Since Job 1
is still using this partition, Job 3 should wait for
its turn.
4. Job 4 cannot use User Partition 3 since it will go
ahead of Job 3 thus breaking the FCFS rule. So it
will also have to wait for its turn even though
User Partition 3 is free.

This algorithm is known as the best-fit only algorithm.
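The best-fit-only rule can be sketched as follows, using the 2K/6K/12K partitions from the example. Under this rule each job targets the smallest partition that can hold it, and must wait for that partition even if a larger one is free.

```python
# Sketch of the best-fit-only rule for MFT. Partition sizes follow the
# example above: partition 1 = 2K, partition 2 = 6K, partition 3 = 12K.

partitions = {1: 2, 2: 6, 3: 12}    # partition number -> size in K

def best_fit_only(job_size_k):
    """Return the partition number a job must wait for
    (the smallest partition large enough), or None if no partition fits."""
    fitting = [(size, num) for num, size in partitions.items()
               if size >= job_size_k]
    if not fitting:
        return None
    return min(fitting)[1]          # smallest size that fits

# Jobs 1 (5K) and 3 (3K) both target partition 2; Job 2 (2K) fits
# partition 1; Job 4 (7K) targets partition 3.
assignments = [best_fit_only(j) for j in (5, 2, 3, 7)]
```

Note that the function only picks the target partition; the FCFS queueing behavior (Job 4 waiting behind Job 3 even though partition 3 is free) is a separate policy decision.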

 One flaw of the best-fit only algorithm is that it forces
other jobs (particularly those at the latter part of the
queue) to wait even though there are some free memory
partitions.

 An alternative to this algorithm is the best-fit available
algorithm. This algorithm allows small jobs to use a
much larger memory partition if it is the only partition
left. However, the algorithm still wastes some
valuable memory space.

 Another option is to allow jobs that are near the rear of
the queue to go ahead of other jobs that cannot proceed
due to any mismatch in size. However, this will break
the FCFS rule.

 Other problems with MFT:

1. What if a process requests more memory?

Possible Solutions:

A] kill the process
B] return control to the user program with an
“out of memory” message
C] reswap the process to a bigger partition, if
the system allows dynamic relocation

2. How does the system determine the sizes of the
partitions?

3. MFT results in internal and external
fragmentation which are both sources of memory
waste.

Internal fragmentation occurs when a process
requiring m memory locations resides in a partition
with n memory locations where m < n. The
difference between n and m (n - m) is the amount
of internal fragmentation.

External fragmentation occurs when a partition is
available, but is too small for any waiting job.

Partition size selection affects internal and
external fragmentation since if a partition is too
big for a process, then internal fragmentation
results. If the partition is too small, then external
fragmentation occurs. Unfortunately, with a
dynamic set of jobs to run, there is probably no
one right partition size for memory.

Example:

[Diagram: User Partition 1 (10K), User Partition 2 (4K), User
Partition 3 (4K), and User Partition 4 (4K); the job queue
holds Job 1 (7K), Job 2 (3K), Job 3 (6K), Job 4 (6K), and so on.]

Only Jobs 1 and 2 can enter memory (at partitions 1
and 2). During this time:

I.F. = (10K - 7K) + (4K - 3K)
     = 4K

E.F. = 8K

Therefore:

Memory Utilization = 10/22 x 100
                   = 45.5%

What if the system partitions memory as 10:8:4 or
7:3:6:6?
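The fragmentation figures above can be reproduced with a short sketch (partition and job sizes taken from the example):

```python
# Internal/external fragmentation for the example above: partitions of
# 10K, 4K, 4K, 4K, with Job 1 (7K) in partition 1 and Job 2 (3K) in
# partition 2; partitions 3 and 4 sit empty but fit no waiting job.

partitions = [10, 4, 4, 4]       # sizes in K
placements = {0: 7, 1: 3}        # partition index -> size of resident job in K

internal_frag = sum(partitions[i] - job for i, job in placements.items())
external_frag = sum(size for i, size in enumerate(partitions)
                    if i not in placements)
utilization = sum(placements.values()) / sum(partitions) * 100
```

Swapping in the 10:8:4 or 7:3:6:6 partitionings asked about above only requires changing the `partitions` list and the placements.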

 Variable Partitions (MVT)

In MVT, the system allows the region sizes to vary
dynamically. It is therefore possible to have a variable
number of tasks in memory simultaneously.

Initially, the operating system views memory as one
large block of available memory called a hole. When
a job arrives and needs memory, the system searches
for a hole large enough for this job. If one exists, the
OS allocates only as much as is needed, keeping the
rest available to satisfy future requests.

Example:

Assume that memory has 256K locations with the
operating system residing at the first 40K locations.
Assume further that the following jobs are in the job
queue:

JOB   MEMORY   COMPUTE TIME
 1     60K      10 units
 2    100K       5 units
 3     30K      20 units
 4     70K       8 units
 5     50K      15 units

The system again follows the FCFS algorithm in
scheduling processes.

Example Memory Allocation and Job Scheduling for
MVT

[Diagram: three snapshots of memory. Initially the OS
occupies 0-40K, Job 1 occupies 40K-100K, Job 2 occupies
100K-200K, and Job 3 occupies 200K-230K, leaving
230K-256K free. After 5 time units Job 2 terminates and
Job 4 is loaded at 100K-170K. After 5 more time units Job 1
terminates and Job 5 is loaded at 40K-90K.]

This example illustrates several points about MVT:

1. In general, there is at any time a set of holes, of
various sizes, scattered throughout memory.

2. When a job arrives, the operating system searches
this set for a hole large enough for the job (using
the first-fit, best-fit, or worst-fit algorithm).

First Fit   Allocate the first hole that is large
            enough. This algorithm is
            generally faster and empty spaces
            tend to migrate toward higher
            memory. However, it tends to
            exhibit external fragmentation.

Best Fit    Allocate the smallest hole that is
            large enough. This algorithm
            produces the smallest leftover hole.
            However, it may leave many holes
            that are too small to be useful.

Worst Fit   Allocate the largest hole. This
            algorithm produces the largest
            leftover hole. However, it tends to
            scatter the unused portions over
            non-contiguous areas of memory.
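The three placement policies differ only in which fitting hole they pick, as this sketch over a list of (base, size) holes shows. The hole list is hypothetical, chosen so the three policies give different answers.

```python
# First-fit, best-fit, and worst-fit hole selection over a list of
# (base address, size) pairs. Sizes are in K; values are illustrative.

def first_fit(holes, size):
    """First hole that is large enough."""
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    """Smallest hole that is large enough (smallest leftover hole)."""
    fitting = [h for h in holes if h[1] >= size]
    return min(fitting, key=lambda h: h[1]) if fitting else None

def worst_fit(holes, size):
    """Largest hole (largest leftover hole)."""
    fitting = [h for h in holes if h[1] >= size]
    return max(fitting, key=lambda h: h[1]) if fitting else None

holes = [(40, 30), (100, 70), (200, 40)]   # hypothetical free list
```

For a 35K request, first fit and worst fit both pick the 70K hole at 100, while best fit picks the 40K hole at 200, leaving the smaller leftover.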

3. If the hole is too large for a job, the system splits
it into two: the operating system gives one part to
the arriving job and returns the other to the set of
holes.

4. When a job terminates, it releases its block of
memory and the operating system returns it to the
set of holes.

5. If the new hole is adjacent to other holes, the
system merges these adjacent holes to form one
larger hole.

It is important for the operating system to keep track of
the unused parts of user memory or holes by
maintaining a linked list. A node in this list will have
the following fields:

1. the base address of the hole
2. the size of the hole
3. a pointer to the next node in the list

Internal fragmentation does not exist in MVT but
external fragmentation is still a problem. It is possible
to have several holes with sizes that are too small for
any pending job.

The solution to this problem is compaction. The goal
is to shuffle the memory contents to place all free
memory together in one large block.

Example:

[Diagram: before compaction, the OS occupies 0-40K, Job 5
occupies 40K-90K, a 10K hole follows, Job 4 occupies
100K-170K, a 30K hole follows, Job 3 occupies 200K-230K,
and a 26K hole follows. After compaction, Job 4 occupies
90K-160K, Job 3 occupies 160K-190K, and the three holes
merge into one 66K hole at 190K-256K.]

Compaction is possible only if relocation is dynamic,
and is done at execution time.
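Compaction can be sketched as sliding every allocated block toward low memory so that the free space coalesces into a single hole at the top. The block sizes below come from the example (OS 40K, Job 5 50K, Job 4 70K, Job 3 30K in 256K of memory); the function itself is a simplified illustration, viable only because dynamic relocation lets each block's base change at run time.

```python
# Sketch of compaction: slide blocks down in base-address order and
# report the single merged hole that remains at the top of memory.

def compact(blocks, memory_size, start=0):
    """blocks: list of (name, base, size) in K.
    Returns (relocated blocks, (hole base, hole size))."""
    next_base = start
    relocated = []
    for name, _, size in sorted(blocks, key=lambda b: b[1]):
        relocated.append((name, next_base, size))   # new base after the slide
        next_base += size
    hole = (next_base, memory_size - next_base)     # one merged hole on top
    return relocated, hole

blocks = [("OS", 0, 40), ("job5", 40, 50), ("job4", 100, 70), ("job3", 200, 30)]
relocated, hole = compact(blocks, 256)
```

This reproduces the after-compaction picture: Job 4 moves to 90K, Job 3 to 160K, and the 10K + 30K + 26K holes merge into one 66K hole at 190K.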

PAGING

 MVT still suffers from external fragmentation when
available memory is not contiguous, but fragmented
into many scattered blocks.

 Aside from compaction, paging can minimize external
fragmentation. Paging permits a program’s memory to
be non-contiguous, thus allowing the operating system
to allocate physical memory to a program wherever
possible.

 In paging, the operating system divides main memory
into fixed-sized blocks called frames. The system also
breaks a process into blocks called pages where the
size of a memory frame is equal to the size of a process
page. The pages of a process may reside in different
frames in main memory.

 Every address generated by the CPU is a logical
address. A logical address has two parts:

1. The page number (p) indicates the page in
which the word resides.

2. The page offset (d) selects the word within
the page.

 If the size of a process (its logical address space) is 2^m,
and a page size is 2^n addressing units (bytes or words),
then the high-order m – n bits of a logical address
designate the page number, and the n lower-order bits
designate the page offset. Thus, the logical address is
as follows:

page number     page offset
 p (m - n bits)  d (n bits)

Example:

Process Size = 32,768 bytes
Page Size    = 2,048 bytes

Total Number of Pages = 32,768 / 2,048
                      = 16 pages

Number of bits in the logical address
                      = 15 bits

Out of the 15 bits, the higher 4 bits represent the
page number while the remaining 11 bits
represent the page offset.

page number   page offset
 p (4 bits)    d (11 bits)

The address 010 1000 1110 0011 represents word
227 of page 5.
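Splitting the address is just a shift and a mask, as this sketch of the 15-bit example shows:

```python
# Splitting a 15-bit logical address into page number and offset for
# 2,048-byte (2**11) pages: the high 4 bits give the page, the low 11
# bits give the offset.

OFFSET_BITS = 11                              # page size = 2**11 = 2048

def split(logical_address):
    page = logical_address >> OFFSET_BITS
    offset = logical_address & ((1 << OFFSET_BITS) - 1)
    return page, offset

page, offset = split(0b010100011100011)       # the address from the example
```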

 The operating system translates this logical address
into a physical address in main memory where the
word actually resides. This translation process is
possible through the use of a page table.

 The page number is used as an index into the page
table. The page table contains the base address of each
page in physical memory.

[Diagram: the CPU issues logical address (p, d); the page
number p indexes the page table to obtain frame number f,
and the physical address (f, d) is sent to main memory.]

This base address is combined with the page offset to
define the physical memory address that is sent to the
memory unit.

 The page size (like the frame size) is defined by the
hardware. The size of a page is typically a power of 2
varying between 512 bytes and 16 MB per page,
depending on the computer architecture.

Example:

Main Memory Size     = 32 bytes
Process Size         = 16 bytes
Page or Frame Size   = 4 bytes
No. of Process Pages = 4 pages
No. of MM Frames     = 8 frames

[Diagram: logical memory holds pages 0-3; the page table
maps page 0 to frame 1, page 1 to frame 4, page 2 to frame 3,
and page 3 to frame 7 of physical memory.]

[Diagram: a 16-byte logical memory holding the values a-p in
pages 0-3, and a page table mapping page 0 to frame 5,
page 1 to frame 6, page 2 to frame 1, and page 3 to frame 2.
The 32-byte physical memory shows i-l in frame 1 (bytes 4-7),
m-p in frame 2 (bytes 8-11), a-d in frame 5 (bytes 20-23), and
e-h in frame 6 (bytes 24-27).]

Logical address 0 is page 0, offset 0. Indexing into the
page table, it is seen that page 0 is in frame 5. Thus,
logical address 0 maps to physical address 20 (5 x 4 +
0). Logical address 3 (page 0, offset 3) maps to
physical address 23 (5 x 4 + 3). Logical address 4 is
page 1, offset 0; according to the page table, page 1 is
mapped to frame 6. Thus logical address 4 maps to
physical address 24 (6 x 4 + 0).
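The translation walked through above fits in a few lines:

```python
# Page-table translation for the 16-byte process above: page size 4,
# with pages 0-3 mapped to frames 5, 6, 1 and 2.

PAGE_SIZE = 4
page_table = [5, 6, 1, 2]      # page number -> frame number

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

# logical 0 -> physical 20, logical 3 -> 23, logical 4 -> 24
```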

Main Memory Size = 32 bytes

No. of bits in the physical address = 5
No. of Frames = 8
Frame Size    = 4

Physical Address Format:

A4 A3 A2 A1 A0
[frame number: A4 A3 A2]  [frame offset: A1 A0]

Process Size = 16 bytes

No. of bits in the logical address = 4
No. of Pages = 4
Page Size    = 4

Logical Address Format:

A3 A2 A1 A0
[page number: A3 A2]  [page offset: A1 A0]

[Diagram: the same example in binary. Logical memory
addresses 0000-1111 hold a-p in pages 0-3; the page table
maps page 00 to frame 101, page 01 to frame 110, page 10 to
frame 001, and page 11 to frame 010. When the CPU sends
logical address 01 01 (page 01, offset 01), it is translated to
physical address 110 01, which holds the value f in frame 6
of the 32-byte physical memory.]

 There is no external fragmentation in paging since the
operating system can allocate any free frame to a
process that needs it. However, it is possible to have
internal fragmentation if the memory requirements of a
process do not happen to fall on page boundaries. In
other words, the last page may not completely fill up a
frame.

Example:

Page Size    = 2,048 bytes
Process Size = 72,766 bytes

No. of Pages = 36 pages
(35 full pages plus 1,086 bytes)

Internal Fragmentation = 2,048 - 1,086 = 962 bytes
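The same figures, computed directly:

```python
# Internal fragmentation in paging for the example above: a 72,766-byte
# process with 2,048-byte pages needs 36 pages, and the last page uses
# only 1,086 bytes of its frame.
import math

page_size = 2048
process_size = 72766

pages_needed = math.ceil(process_size / page_size)                 # 36
last_page_bytes = process_size - (pages_needed - 1) * page_size    # 1086
internal_frag = page_size - last_page_bytes                        # 962
```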

 In the worst case, a process would need n pages plus
one byte. It would be allocated n + 1 frames, resulting
in an internal fragmentation of almost an entire frame.

 If process size is independent of page size, internal
fragmentation is expected to average one-half page per
process. This consideration suggests that small page
sizes are desirable. However, overhead is involved in
each page-table entry, and this overhead is reduced as
the size of the pages increases. Also, disk I/O is more
efficient when the amount of data being transferred is
larger.

 Each operating system has its own methods for storing
page tables. Most allocate a page table for each
process. A pointer to the page table is stored with the
other register values (like the program counter) in the
PCB. When the dispatcher is told to start a process, it
must reload the user registers and define the correct
hardware page-table values from the stored user page
table.

The options in implementing page tables are:

1. Page Table Registers

In the simplest case, the page table is
implemented as a set of dedicated registers.
These registers should be built with high-speed
logic to make page-address translation efficient.
The advantage of using registers in implementing
page tables is fast mapping. Its main
disadvantage is that it becomes expensive for
large logical address spaces (too many pages).

2. Page Table in Main Memory

The page table is kept in memory and a Page
Table Base Register (PTBR) points to the page
table. The advantage of this approach is that
changing page tables requires changing only this
register, substantially reducing context-switch
time. However, two memory accesses are needed
to access a word.

3. Associative Registers

The standard solution is to use a special, small,
fast-lookup hardware cache, variously called
Associative Registers or Translation Look-aside
Buffers (TLB).

The associative registers contain only a few of the
page-table entries. When a logical address is
generated by the CPU, its page number is
presented to a set of associative registers that
contain page numbers and their corresponding
frame numbers. If the page number is found in
the associative registers, its frame number is
immediately available and is used to access
memory.

If the page number is not in the associative
registers, a memory reference to the page table
must be made. When the frame number is
obtained, it can be used to access memory (as
desired). In addition, the page number and frame
number are added to the associative registers, so
that they can be found quickly on the next
reference.
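This hit/miss behavior can be sketched with two dictionaries, one playing the in-memory page table and one the associative registers; the page-table contents reuse the earlier example.

```python
# Sketch of a TLB in front of an in-memory page table: a hit avoids the
# extra memory reference, a miss walks the page table and caches the pair.

page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # full table, kept "in memory"
tlb = {}                                 # associative registers: page -> frame
table_references = [0]                   # counts page-table memory accesses

def lookup_frame(page):
    if page in tlb:                      # TLB hit: frame available at once
        return tlb[page]
    table_references[0] += 1             # TLB miss: one extra memory access
    frame = page_table[page]
    tlb[page] = frame                    # cache for the next reference
    return frame

lookup_frame(1)    # miss: walks the page table
lookup_frame(1)    # hit: served from the associative registers
```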

 Another advantage in paging is that processes can
share pages, thereby reducing overall memory
consumption.

Example:

Consider a system that supports 40 users, each of
whom executes a text editor. If the text editor
consists of 150K of code and 50K of data space,
then the system would need 200K x 40 = 8,000K
to support the 40 users.

However, if the text editor code is reentrant (pure
code that is non-self-modifying), then all 40 users
can share this code. The total memory
consumption is therefore 150K + 50K x 40 =
2,150K only.

 It is important to remember that in order to share a
program, it has to be reentrant, which implies that it
never changes during execution.

SEGMENTATION

 Because of paging, there are now two ways of viewing
memory. These are the user’s view (logical memory)
and the actual physical memory. There is a necessity
of mapping logical addresses into physical addresses.

 Logical Memory

A user or programmer views memory as a collection of
variable-sized segments, with no necessary ordering
among the segments.

Therefore, a program is simply a set of subroutines,
procedures, functions, or modules.

[Diagram: a logical address space containing a stack segment,
a subroutine segment, a symbol table segment, a Sqrt
segment, and a main program segment.]

 Each of these segments is of variable-length; the size is
intrinsically defined by the purpose of the segment in
the program. The user is not concerned whether a
particular segment is stored before or after another
segment. The OS identifies elements within a segment
by their offset from the beginning of the segment.

Example:

The Intel 8086/88 processor has four segments:

1. The Code Segment
2. The Data Segment
3. The Stack Segment
4. The Extra Segment

 Segmentation is the memory-management scheme that
supports this user’s view of memory. A logical
address space is a collection of segments (of variable
sizes). Each segment has a name and a length.
Addresses specify the name of the segment or its base
address and the offset within the segment.

Example:

To access an instruction in the Code Segment of
the 8086/88 processor, a program must specify
the base address (the CS register) and the offset
within the segment (the IP register).

 The mapping of a logical address into a physical address is
possible through the use of a segment table.

[Diagram: the CPU issues logical address (s, d); segment
number s indexes the segment table to fetch the segment's
limit and base. If d < limit, the base is added to d and the
result is sent to main memory; otherwise the hardware traps
with an addressing error.]

 A logical address consists of two parts: a segment
number s, and an offset into the segment, d. The
segment number is an index into the segment table.
Each entry of the segment table has a segment base
and a segment limit. The offset d must be between 0
and limit.

Example:

[Diagram: a logical address space with five segments and a
segment table holding (limit, base) pairs: segment 0 (1000,
1400), segment 1 (400, 6300), segment 2 (400, 4300),
segment 3 (1100, 3200), and segment 4 (1000, 4700).
Physical memory shows segment 0 at 1400-2400, segment 3
at 3200-4300, segment 2 at 4300-4700, segment 4 at
4700-5700, and segment 1 at 6300-6700.]

A reference to segment 3, byte 852, is mapped to 3200
(the base of segment 3) + 852 = 4052. A reference to
byte 1222 of segment 0 would result in a trap to the
operating system since this segment is only 1000 bytes
long.
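The limit check and base addition above can be sketched with the segment table from the example; raising `MemoryError` here stands in for the hardware trap.

```python
# Segmented address translation using the (limit, base) pairs from the
# example. An offset at or beyond the segment limit traps.

segment_table = [(1000, 1400), (400, 6300), (400, 4300),
                 (1100, 3200), (1000, 4700)]

def translate(s, d):
    """Map segment number s, offset d to a physical address."""
    limit, base = segment_table[s]
    if not 0 <= d < limit:
        raise MemoryError("trap: addressing error")   # offset outside segment
    return base + d

physical = translate(3, 852)     # 3200 + 852 = 4052

try:
    translate(0, 1222)           # segment 0 is only 1000 bytes long
    trapped = False
except MemoryError:
    trapped = True
```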

