Interrupts are signals or events that occur during the execution of a program and cause the normal flow of the
program to be temporarily suspended. Interrupts are typically generated by hardware devices, such as timers,
input/output controllers, or external devices, to request attention or notify the processor about a specific event
that requires immediate action.
When an interrupt occurs, the processor temporarily stops executing the current instruction and transfers control
to a specific interrupt handler routine. This routine, also known as an interrupt service routine (ISR) or interrupt
handler, is responsible for handling the interrupt and performing the necessary actions associated with it.
Interrupts serve various purposes, such as:
1. Input/Output (I/O) handling: Interrupts allow devices to asynchronously request attention from the processor.
For example, when a keyboard key is pressed, an interrupt is generated to handle the input.
2. Timer events: Interrupts can be used to handle timer events, such as scheduling tasks or implementing time-
based operations.
3. Exception handling: Interrupts can be triggered by exceptional conditions, such as divide-by-zero errors or
invalid memory access. The interrupt handler can then handle these exceptions appropriately.
4. Interprocess communication: Interrupts can be used to facilitate communication between processes or threads
in a multitasking environment.
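The following is a small user-space analogue of an interrupt handler, written as a POSIX signal handler in Python (the signal is only an analogy for a hardware interrupt; the handler plays the role of the ISR):

    import signal
    import time

    # Handler ("ISR"): normal flow is suspended, the handler runs, then execution resumes.
    def handler(signum, frame):
        print(f"caught signal {signum} -- event serviced, resuming normal work")

    signal.signal(signal.SIGINT, handler)    # register the handler for Ctrl+C (SIGINT)

    for i in range(5):
        print("doing normal work", i)
        time.sleep(1)                        # press Ctrl+C here to trigger the handler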
#Logical Address Space :- An address generated by the CPU is known as a “Logical Address”.
-The user can access the logical address of a process.
-The user has only indirect access to the physical address, through the logical address.
-A logical address does not exist physically; hence it is also known as a virtual address.
-These logical addresses are usually expressed as numeric values.
-A logical address can be changed.
-The operating system and hardware work together to translate these logical addresses into physical addresses, which
correspond to specific locations in the physical memory chips.
-The set of all logical addresses that are generated by any program is referred to as Logical Address Space.
(To understand logical addresses, consider an analogy: imagine a large library with thousands of books arranged on
shelves. Each book has a unique identification number assigned to it. Now, if you want to find a specific book,
you don't need to know exactly where it is physically located on a shelf. Instead, you can use its identification
number to search for it in the library's catalog or database. The identification number serves as a logical
address for the book.)
#Physical Address Space :- An address loaded into the memory-address register of physical memory. A
physical address is also known as a real address because it refers to the actual location of data or instructions in
the physical memory of a computer system. It can be reached by a user indirectly (through the logical address) but never directly. It is
typically expressed as a numeric value which identifies the memory chip, the specific memory module, and the row, column,
and bit location within that module. It is the location in main memory, physically, and it is computed by the Memory
Management Unit (MMU). The set of all physical addresses corresponding to the logical addresses is commonly known
as the Physical Address Space.
# The runtime mapping from virtual to physical addresses is done by a hardware device called the memory-management
unit (MMU).
SWAPPING
-Swapping is a mechanism in which a process can be swapped temporarily out of main memory to secondary storage
(disk) and make that memory available to other processes. At some later time, the system swaps back the process from
the secondary storage to main memory and its execution can be continued where it left off.
-Swap-out and swap-in are done by the medium-term scheduler (MTS).
-Swapping is necessary because main memory is limited and thus has to be freed up for other processes. It is done, for example,
when a high-priority process arrives or when an I/O operation has to be performed.
-The relocation register contains the value of the smallest physical address (the base address);
the limit register contains the range of logical addresses (e.g., relocation = 100040 & limit = 74600).
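A minimal sketch of this relocation/limit check, using the example values above (the function name and the Python form are illustrative, not from the notes):

    RELOCATION = 100040   # relocation (base) register: smallest physical address of the process
    LIMIT      = 74600    # limit register: range of valid logical addresses

    def mmu_translate(logical_address):
        # Every logical address is first compared with the limit register;
        # only then is the relocation register added to form the physical address.
        if logical_address >= LIMIT:
            raise MemoryError("trap: addressing error (logical address beyond limit)")
        return logical_address + RELOCATION

    print(mmu_translate(50000))   # 50000 + 100040 = 150040
    # mmu_translate(80000) would trap, since 80000 >= LIMIT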
NEED : We need several user processes to reside in memory simultaneously. Therefore, we need to consider
how to allocate available memory to the processes that are in the input queue waiting to be brought into
memory.
-In contiguous memory allocation, each process is contained in a single contiguous block of memory.
A). Fixed Partitioning: the number of partitions is fixed. The partitions can be of different or equal sizes. They are fixed
because, once decided, they cannot be changed.
LIMITATIONS:
3) Low degree of multi-programming: in fixed partitioning, the degree of multiprogramming is fixed and very low
because the size of a partition cannot be varied according to the size of the process.
B). Dynamic Partitioning: partitions are created at load time and sized to fit each process, so their number and sizes are
not fixed. Its main problem is external fragmentation, which can be tackled in the following ways:
1) Compaction (tackles external fragmentation): through compaction, we can minimize the probability of
external fragmentation.
- All the free partitions are made contiguous, and all the loaded partitions are brought together.
-By applying this technique, we can store bigger processes in memory. The free partitions are
merged and can now be allocated to some other process. This technique is also called defragmentation.
Limitation: the efficiency of the system decreases during compaction, since all the free spaces have to be moved
from several places to a single place; normal multiprogramming is effectively paused while this happens.
2) Various algorithms implemented by the operating system for filling holes (a small sketch follows this list):
a) FIRST FIT: allocate the process to the first hole that is big enough. Searching always starts from the
beginning of the free list, so every allocation re-examines the holes considered earlier.
PROS: simple and easy to implement; fast (low time complexity). CONS: internal fragmentation.
b) NEXT FIT: same as first fit, but the search does not start from the first position; it starts from where the
previous search ended. Hence it is usually faster than first fit.
c) BEST FIT: allocate the process to the smallest hole that is big enough.
Less internal fragmentation, but high external fragmentation.
Slow, as the whole list of free holes must be searched.
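A minimal sketch of first fit and best fit over a list of free-hole sizes (the hole sizes and function names are made up for illustration):

    def first_fit(holes, size):
        # Return the index of the first hole that is big enough, or None.
        for i, hole in enumerate(holes):
            if hole >= size:
                return i
        return None

    def best_fit(holes, size):
        # Return the index of the smallest hole that is still big enough, or None.
        candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
        return min(candidates)[1] if candidates else None

    holes = [100, 500, 200, 300, 600]   # free hole sizes in KB (hypothetical)
    print(first_fit(holes, 212))        # -> 1 (the 500 KB hole is the first one big enough)
    print(best_fit(holes, 212))         # -> 3 (the 300 KB hole leaves the least unused space)

Next fit would be the same loop as first fit, except that it remembers where the previous search stopped and resumes from there.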
The main disadvantage of dynamic partitioning is external fragmentation; therefore we need a more flexible mechanism
to load processes into the partitions.
1) PAGING: Paging is a memory-management scheme that permits the physical address space of a process to be
non-contiguous (i.e., a single process can be allocated to different blocks of physical memory).
-It avoids external fragmentation and the need for compaction.
-The idea is to divide physical memory into fixed-sized blocks called frames and to divide logical memory into blocks
of the same size called pages.
IMPORTANT: the process is divided into pages, main memory is divided into frames, and page size = frame size.
-Page size is usually determined by the processor architecture.
PAGE TABLE : A data structure that stores which page is mapped to which frame. The CPU generates the logical
address, which is mapped to the physical address through the page table.
The page table is stored in main memory at the time of process creation, and its base address is stored in the process control
block (PCB).
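A minimal sketch of paging translation (the 4 KB page size and the page-table contents below are assumed for illustration):

    PAGE_SIZE  = 4096                    # page size = frame size (assumed 4 KB here)
    page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number (hypothetical mapping)

    def translate(logical_address):
        # Split the logical address into (page number, offset), then map the
        # page number to a frame number through the page table.
        page_number, offset = divmod(logical_address, PAGE_SIZE)
        frame = page_table[page_number]   # in real hardware this is an extra memory reference
        return frame * PAGE_SIZE + offset

    print(hex(translate(0x1A10)))         # page 1, offset 0xA10 -> frame 2 -> 0x2a10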
Advantages of paging:
1) There is no external fragmentation, since non-contiguous allocation of the pages of the process is
allowed into any random free frames of physical memory.
2) Internal fragmentation is minimal (it occurs only in the last page of a process).
Disadvantages of paging:
1) Each process has a separate page table, so every access to a desired location in physical memory needs an
extra memory reference to the page table; because of this, translation is slow.
2) Address mapping is hidden from the user and managed by the OS; therefore it is OS-centric.
3) The OS must maintain a frame table.
Translation Look-aside buffer (TLB)
-Hardware support to speed up the paging process.
-When we retrieve a physical address using the page table, after getting the frame number corresponding to the page
number, we put an entry for that (page number -> frame number) mapping into the TLB. The next time, we can get the
value directly from the TLB without referencing the actual page table, which makes the paging process faster.
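A small sketch of a TLB sitting in front of the page table (the dictionary-based TLB, page size, and mappings are simplifications for illustration; a real TLB is a small associative hardware cache with its own replacement policy):

    PAGE_SIZE  = 4096
    page_table = {0: 5, 1: 2, 2: 7}      # full page table kept in main memory (hypothetical)
    tlb        = {}                       # cache of recently used page -> frame entries

    def translate(logical_address):
        page, offset = divmod(logical_address, PAGE_SIZE)
        if page in tlb:                   # TLB hit: no page-table reference needed
            frame = tlb[page]
        else:                             # TLB miss: walk the page table, then cache the entry
            frame = page_table[page]
            tlb[page] = frame
        return frame * PAGE_SIZE + offset

    translate(0x1A10)                     # miss: page table consulted, entry cached in the TLB
    translate(0x1F00)                     # hit: same page, served directly from the TLB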
Segmentation: An important aspect of memory management that becomes unavoidable with paging is the separation
of the user’s view of memory from the actual physical memory. Paging divides every process into pages
regardless of the fact that a process can have related parts or functions which need to be loaded in the same
page.
- The operating system does not care about the user's view of the process. It may divide the same function across different
pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of
the system.
-It is better to have segmentation, which divides the process into segments. Each segment contains the same type of
functions: for example, the main function can be included in one segment and the library functions in another segment.
Segmentation is a memory-management technique where the logical address space is a collection of segments, and these
segments are based on the user’s view of logical memory. Segments are of variable size.
USER’s view of memory :- A programmer thinks of the program as being stored with each logical unit kept together,
e.g., the whole addition function in one place, and every routine kept as a unit. This is possible in segmentation, but not in paging.
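A minimal sketch of pure segmentation translation using a segment table of (base, limit) pairs (the table values are hypothetical):

    # Hypothetical segment table: segment number -> (base, limit)
    segment_table = {
        0: (1400, 1000),   # e.g. the main function
        1: (6300,  400),   # e.g. library functions
        2: (4300,  400),
    }

    def translate(segment, offset):
        # The offset is checked against the segment's limit, then added to its base.
        base, limit = segment_table[segment]
        if offset >= limit:
            raise MemoryError("trap: offset beyond segment limit")
        return base + offset

    print(translate(2, 53))    # -> 4353 (byte 53 of segment 2)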
Segmented Paging
Pure segmentation is not very popular and is not used in many operating systems. However,
segmentation can be combined with paging to get the best features of both techniques.
In segmented paging, the process is divided into variable-size segments, which are further divided into fixed-size
pages (physical memory itself is still divided into frames).
TRANSLATION OF LA TO PA USING TABLES: The CPU generates a logical address which is divided into two parts:
segment number and segment offset. The segment offset must be less than the segment limit. The offset is further
divided into a page number and a page offset. To find the exact entry in that segment's page table, the page number is
added to the page table base.
The resulting frame number, together with the page offset, is used to access main memory and get the desired word in the page of
that segment of the process.
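A minimal sketch of this two-level translation (the page size, table contents, and names are assumptions for illustration):

    PAGE_SIZE = 1024                      # fixed page size within every segment (assumed)

    # Hypothetical tables: each segment has a limit and its own page table.
    segment_table = {                     # segment number -> (segment limit, page table)
        0: (3000, {0: 9, 1: 4, 2: 11}),
        1: (1500, {0: 6, 1: 2}),
    }

    def translate(segment, seg_offset):
        # The segment offset is checked against the segment limit, then split
        # into (page number, page offset) and mapped through that segment's page table.
        limit, page_table = segment_table[segment]
        if seg_offset >= limit:
            raise MemoryError("trap: offset beyond segment limit")
        page, page_offset = divmod(seg_offset, PAGE_SIZE)
        frame = page_table[page]
        return frame * PAGE_SIZE + page_offset

    print(translate(0, 2100))   # segment 0, page 2, offset 52 -> frame 11 -> 11316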
VIRTUAL MEMORY
Virtual memory is a technique that allows the execution of processes that are not completely in main memory.
It gives the user the illusion of having a very big main memory. This is done by treating a part of secondary memory
as if it were main memory.
-In this scheme, the user can load processes bigger than the available main memory, under the illusion that
enough memory is available to load the process.
VIRTUAL MEMORY = even when process size > main memory size, the process can still execute.
-Instead of loading one big process into main memory, the operating system loads, from more than one process, only
those parts/pages that the CPU requires for its execution.
In this scheme, whenever some pages need to be loaded into main memory for execution and memory is not
available for that many pages, then instead of stopping those pages from entering main memory, the OS searches for
the areas of RAM that have been least used recently or not referenced at all and copies them to secondary memory
to make space for the new pages in main memory.
Since all of this happens automatically, it makes the computer appear to have unlimited RAM.
-Each user program can take less physical memory (because we do not put the whole program into main memory,
only some parts of it), so more programs can be run at the same time, with a
corresponding increase in CPU utilization and throughput.
-In demand paging, the pages of a process which are least used get stored in the secondary memory.
-A page is copied into main memory only when a demand for it is made, i.e., when a page fault occurs. There are various
page-replacement algorithms which are used to determine the pages that will be replaced.
-A program called the lazy swapper, also known as the pager, is used for this purpose.
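A minimal sketch of demand paging (the frame count, reference string, and the FIFO eviction used when no free frame remains are assumptions for illustration; the replacement algorithms themselves are covered below):

    NUM_FRAMES  = 3
    memory      = []        # pages currently resident in main memory (at most NUM_FRAMES)
    page_faults = 0

    def access(page):
        # A page is brought in from secondary storage only when it is actually referenced.
        global page_faults
        if page in memory:
            return                      # page already resident: no fault
        page_faults += 1                # page fault: the pager loads the page from disk
        if len(memory) < NUM_FRAMES:
            memory.append(page)         # a free frame is available
        else:
            memory.pop(0)               # no free frame: evict a page (FIFO here, see below)
            memory.append(page)

    for p in [0, 1, 2, 0, 3]:           # only the referenced pages ever get loaded
        access(p)
    print(page_faults)                  # -> 4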
PAGE REPLACEMENT
-Page replacement is the process of replacing one page with another in memory when there is no free frame.
-The page-replacement algorithm decides which memory page is to be replaced: some allocated page is
swapped out of its frame, and the new page is swapped into the freed frame.
ALGOS:
1) FIFO:-
- Allocate a frame to each page in the order it comes into memory, replacing the oldest page when no free frame remains.
- Easy to implement.
- Performance is not always good (shows Belady's anomaly).
- Belady’s anomaly is present: in the FIFO page-replacement algorithm, the number of page faults can increase when the
number of frames is increased.
2) OPTIMAL:-
-In this we replace the page that will not be used for the longest period of time.
- Lowest page-fault rate; no Belady’s anomaly.
- Difficult to implement, as the OS requires future knowledge of the reference string, which is practically impossible.
-Used as a benchmark for comparing other algorithms.
3)LRU:-
-The page which has not been used for the longest time in main memory is the one selected for replacement.
-It is like the optimal page-replacement algorithm looking backwards in time.
-Implemented using counters or a stack.
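A small sketch comparing FIFO and LRU on a sample reference string (the reference string and frame count are illustrative; the fault counts in the comments were worked out by hand for this example):

    def fifo_faults(refs, num_frames):
        # FIFO: the page that has been in memory the longest is the victim.
        frames, faults = [], 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == num_frames:
                    frames.pop(0)          # evict the oldest page
                frames.append(page)
        return faults

    def lru_faults(refs, num_frames):
        # LRU: the page not used for the longest time is the victim.
        frames, faults = [], 0
        for page in refs:
            if page in frames:
                frames.remove(page)        # refresh: move to the most-recently-used end
            else:
                faults += 1
                if len(frames) == num_frames:
                    frames.pop(0)          # evict the least recently used page
            frames.append(page)
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # sample reference string
    print(fifo_faults(refs, 3))            # -> 10
    print(lru_faults(refs, 3))             # -> 9 (fewer faults than FIFO here)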
THRASHING
When the degree of multiprogramming is raised too high, each process gets very few frames, so all the pages it has in
memory are in active use. The processes then keep demanding their other pages from memory, and page replacement is
needed again and again; this triggers a chain reaction of page faults. This situation, in which CPU utilization
diminishes due to very high paging activity, is called thrashing.
//That is, suppose memory has been filled with pages of all the processes, say page 2 of P1, page 5 of P2, page 8 of P2,
and so on. Now the different processes keep demanding different pages, so the CPU spends all its time just servicing
page faults, which is called thrashing.//
Causes of thrashing:
1. High degree of multiprogramming.
2. Lack of frames.
3. Page replacement policy.