Course Notes: Operating Systems


Unit-I Operating System

DEFINITION OF OPERATING SYSTEM:

An Operating System (OS) is software that acts as an interface between the computer hardware and the user.

Every computer system must have at least one operating system to run other programs. Applications like browsers, MS Office, Notepad, games, etc., need an environment in which to run and perform their tasks.

ROLE OF AN OPERATING SYSTEM

1. Memory Management
2. Processor Management
3. Device Management
4. File Management
5. Security
6. Control over system performance
7. Job accounting
8. Error detecting aids
9. Coordination between other software and users

Memory Management

Memory management refers to the management of primary memory, or main memory. Main memory is a large array of words or bytes where each word or byte has its own address. Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management –

• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.
• In multiprogramming, the OS decides which process will get memory when and
how much.
• Allocates the memory when a process requests it to do so.
• De-allocates the memory when a process no longer needs it or has been
terminated.

Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management −
• Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process no longer requires it.

Device Management

An Operating System manages device communication via the devices' respective drivers. It does the following activities for device management −
• Keeps track of all devices. The program responsible for this task is known as the I/O controller.
• Decides which process gets the device, when, and for how much time.
• Allocates devices in an efficient way.
• De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.

An Operating System does the following activities for file management −


• Keeps track of information: location, usage, status, etc. These collective facilities are often known as the file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.

Security

By means of passwords and similar techniques, the OS prevents unauthorized access to programs and data.

Control over system performance

Recording delays between a request for a service and the response from the system.

Job accounting
Keeping track of time and resources used by various jobs and users.

Error detecting aids

Production of dumps, traces, error messages, and other debugging and error detecting
aids.

Coordination between other software and users

Coordination and assignment of compilers, interpreters, assemblers, and other software to the various users of the computer system.

TYPES OF OPERATING SYSTEM


The Operating System is responsible for managing various resources of the system.
1. Batch Operating System
2. Time-Sharing Operating System
3. Distributed Operating System
4. Embedded Operating System
5. Real-time Operating System

1. Batch Operating System

In a Batch Operating System, similar jobs are grouped together into batches with the help of an operator, and these batches are executed one by one.

Advantages:
1. The overall time taken by the system to execute all the programs will be reduced.
2. The Batch Operating System can be shared between multiple users.

Disadvantages:
1. Manual interventions are required between two batches.
2. The CPU utilization is low because the time taken in loading and unloading of
batches is very high as compared to execution time.

2. Time-Sharing Operating System

In a Time-Sharing (multi-tasking) Operating System, more than one process is executed at a time with the help of the time-sharing concept. In the time-sharing environment, we decide on a slice of time called the time quantum; when a process starts its execution, it runs for only that amount of time, after which the other processes are each given a chance for the same amount of time.

Advantages:
1. Since an equal time quantum is given to each process, each process gets an equal opportunity to execute.
2. The CPU is kept busy in most cases, which is desirable.

Disadvantages:
1. A process with a higher priority will not get the chance to execute first, because an equal opportunity is given to each process.

3. Distributed Operating System

In a Distributed Operating System, there are several systems, each with its own CPU, main memory, secondary memory, and resources. These systems are connected to each other through a shared communication network. Each system can perform its tasks individually.

Advantages:
1. Since the systems are connected to each other, the failure of one system cannot stop the execution of processes, because other systems can continue the execution.
2. Resources are shared between the systems.
3. The load on the host computer gets distributed, which in turn increases efficiency.

Disadvantages:
1. Since the data is shared among all the computers, extra effort is needed to make the data secure and accessible only to certain computers.
2. If there is a problem in the communication network, the whole communication breaks down.

4. Embedded Operating System

An Embedded Operating System is designed to perform a specific task for a particular device which is not a computer.

Advantages:
1. Since it is dedicated to a particular job, it is fast.
2. Low cost.
3. These consume less memory and other resources.

Disadvantages:
1. Only one job can be performed.
2. It is difficult to upgrade and is barely scalable.

5. Real-time Operating System

Real-time Operating Systems are used in situations where we are dealing with real-time data. As soon as the data arrives, the process should be executed with no delay, i.e., there should be no buffering delays. There are two types of Real-time Operating System:

1. Hard Real-time: In this type, even a small delay can lead to drastic consequences. When the time constraint is critical, we use a Hard Real-time system.
2. Soft Real-time: Here, the time constraint is less strict, but we are still dealing with real-time data.

Advantages:
1. There is maximum utilization of devices and resources.
2. These systems are almost error-free.

Disadvantages:
1. The algorithms used in a Real-time Operating System are very complex.
2. Specific device drivers are needed to respond to interrupts as soon as possible.

MAJOR OS COMPONENTS

An operating system is a large and complex system that can only be created by partitioning it into small pieces. Each piece should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions.

Components:

1. File Management
2. Process Management
3. I/O Device Management
4. Network Management
5. Main Memory management
6. Secondary-Storage Management
7. Security Management
8. Other Important Activities

File Management

A file is a set of related information defined by its creator. It commonly represents programs, in both source and object forms, and data. Data files can be numeric, alphabetic, or alphanumeric.

Function of file management in OS:

The operating system has the following important activities in connection with file management:

• File and directory creation and deletion.
• Manipulation of files and directories.
• Mapping files onto secondary storage.
• Backing up files on stable storage media.

Process Management

The process management component is a procedure for managing the many processes that run simultaneously on the operating system. Every running application program has one or more processes associated with it.

For example, when you use a browser like Google Chrome, there is a process running for that browser program. The OS also has many processes running, which perform various functions.

All these processes are handled by process management, which keeps them running efficiently, manages the memory allocated to them, and shuts them down when needed.

A process is executed sequentially; at most one instruction is executed on behalf of the process at any time.

Functions of process management in OS:

The following are functions of process management.

• Process creation and deletion.
• Process suspension and resumption.
• Process synchronization.
• Process communication.

I/O Device Management

One of the important uses of an operating system is to hide the variations of specific hardware devices from the user.

Functions of I/O management in OS:

• It offers a buffer-caching system.
• It provides general device-driver code.
• It provides drivers for particular hardware devices.
• Only the device driver needs to know the peculiarities of a specific device.

Network Management

Network management is the process of administering and managing computer networks.


It includes performance management, fault analysis, provisioning of networks, and
maintaining the quality of service.

A distributed system is a collection of computers/processors that do not share memory or a clock. In this type of system, each processor has its own local memory, and the processors communicate with each other using various communication lines, such as fiber optics or telephone lines.

The computers in the network are connected through a communication network, which
can be configured in a number of different ways.

With the help of network management, the network can be fully or partially connected,
which helps users to design routing and connection strategies that overcome connection
and security issues.

Functions of Network management:

• Distributed systems let you combine computing resources that vary in size and function. They may involve microprocessors, minicomputers, and many general-purpose computer systems.
• A distributed system also offers the user access to the various resources the
network shares.
• Access to shared resources can speed up computation and improve data availability and reliability.

Main Memory management

Main memory is a large array of words or bytes, each with its own address. Memory is accessed through a sequence of reads or writes of specific memory addresses.

In order to execute a program, it should be mapped to absolute addresses and loaded into memory. The selection of a memory management method depends on several factors.

However, it is mainly based on the hardware design of the system. Each algorithm
requires corresponding hardware support. Main Memory offers fast storage that can be
accessed directly by the CPU. It is costly and hence has a lower storage capacity. However,
for a program to be executed, it must be in the main Memory.

Functions of Memory management in OS:

An Operating System performs the following functions for Memory Management:

• It keeps track of primary memory: which parts are in use and by whom, and which parts are not in use.
• In a multiprogramming system, the OS decides which process will get memory and how much.
• It allocates memory when a process requests it.
• It de-allocates memory when a process no longer requires it or has been terminated.

Secondary-Storage Management

The most important task of a computer system is to execute programs. These programs, along with the data they access, must be in main memory during execution.

Main memory is too small to store all data and programs permanently, so the computer system offers secondary storage to back up main memory.

Today, modern computers use hard drives/SSDs as the primary storage for both programs and data. However, secondary-storage management also works with removable storage devices, such as USB flash drives and CD/DVD drives.

Programs such as assemblers and compilers are stored on the disk until they are loaded into memory, and they then use the disk as both a source and a destination for their processing.

Functions of Secondary storage management in OS:

Here, are major functions of secondary storage management in OS:

• Storage allocation
• Free space management
• Disk scheduling

Security Management

The various processes in an operating system need to be secured from each other's
activities.

For that purpose, various mechanisms can be used to ensure that processes that want to operate on files, memory, the CPU, and other hardware resources have proper authorization from the operating system.

OPERATING SYSTEM OPERATIONS

An operating system is a construct that allows user application programs to interact with the system hardware. The operating system by itself does not do useful work; rather, it provides an environment in which different applications and programs can do useful work.

The major operations of the operating system are process management, memory
management, device management and file management. These are given in detail as
follows:

Process Management

The operating system is responsible for managing processes, i.e., assigning the processor to one process at a time. This is known as process scheduling. The different
algorithms used for process scheduling are FCFS (first come first served), SJF (shortest
job first), priority scheduling, round robin scheduling etc.

There are many scheduling queues that are used to handle processes in process
management. When the processes enter the system, they are put into the job queue. The
processes that are ready to execute in the main memory are kept in the ready queue. The
processes that are waiting for the I/O device are kept in the device queue.

Memory Management

Memory management plays an important part in an operating system. It deals with memory and with the moving of processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are –

• The operating system assigns memory to processes as required. This can be done using the best-fit, first-fit, and worst-fit algorithms (a first-fit sketch follows this list).
• All memory is tracked by the operating system, i.e., it knows which memory parts are in use by processes and which are empty.
• The operating system de-allocates memory from processes as required. This may happen when a process has been terminated or no longer needs the memory.
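To make the fit algorithms concrete, here is a minimal first-fit sketch in C; the hole and request sizes are hypothetical figures chosen only for illustration:

    #include <stdio.h>

    /* First-fit sketch: place each request in the first free hole large
       enough to hold it, shrinking the hole by the allocated amount. */
    int main(void) {
        int hole[]    = { 100, 500, 200, 300, 600 };  /* free hole sizes (KB) */
        int request[] = { 212, 417, 112, 426 };       /* allocation requests  */
        int nh = 5, nr = 4;

        for (int r = 0; r < nr; r++) {
            int placed = 0;
            for (int h = 0; h < nh && !placed; h++) {
                if (hole[h] >= request[r]) {
                    printf("request %d KB -> hole of %d KB\n", request[r], hole[h]);
                    hole[h] -= request[r];            /* shrink the chosen hole */
                    placed = 1;
                }
            }
            if (!placed)
                printf("request %d KB could not be allocated\n", request[r]);
        }
        return 0;
    }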

Device Management

There are many I/O devices handled by the operating system such as mouse, keyboard,
disk drive etc. There are different device drivers that can be connected to the operating
system to handle a specific device. The device controller is an interface between the
device and the device driver. The user applications can access all the I/O devices using
the device drivers, which are device specific codes.

File Management

Files are used to provide a uniform view of data storage by the operating system. All files are mapped onto physical devices that are usually non-volatile, so data is safe in case of system failure.
The files can be accessed by the system in two ways i.e. sequential access and direct
access –

• Sequential Access
The information in a file is processed in order using sequential access. The file's records are accessed one after another. Most programs, such as editors and compilers, use sequential access.
• Direct Access
In direct access (or relative access), files can be accessed at random for read and write operations. The direct-access model is based on the disk model of a file, since disks allow random access.

OPERATING SYSTEM SERVICES


An Operating System supplies different kinds of services to both the users and the programs. It provides application programs (which run within the operating system) an environment in which to execute freely, and it provides users with services to run various programs in a convenient manner.

Common services offered by almost all operating systems:

• User Interface
• Program Execution
• File system manipulation
• Input / Output Operations
• Communication
• Resource Allocation
• Error Detection
• Accounting
• Security and protection

User Interface

The user interface usually comes in one of three forms, subdivided depending on the interface. These are:

• Command-line interface
• Batch-based interface
• Graphical user interface

Let's get to know each of them in brief.

The command-line interface (CLI) deals with text commands and a technique for entering those commands. In the batch interface (BI), commands and directives to control those commands are entered into files, and those files are executed.

Program Execution

The operating system must have the capability to load a program into memory and
execute that program. Furthermore, the program must be able to end its execution, either
normally or abnormally / forcefully.

File System Manipulation

Programs need to read and write files and directories. The file-handling portion of the operating system also allows users to create and delete files by a specific name (along with an extension), search for a given file, and list file information. Some programs also comprise permissions management for allowing or denying access to files or directories based on file ownership.
Input/Output Operations

A program that is currently executing may require I/O, which may involve a file or another I/O device. For efficiency and protection, users cannot govern the I/O devices directly. So, the OS provides a means to perform I/O operations, i.e., read or write operations on any file or device.

Communication

A process may need to exchange information with other processes. Processes executing on the same computer system or on different computer systems can communicate using operating system support. Communication between two processes can be done using shared memory or via message passing.

Resource Allocation

When multiple jobs are running concurrently, resources must be allocated to each of them. Resources can be CPU cycles, main memory, file storage, and I/O devices. CPU scheduling routines are used here to establish how best the CPU can be used.

Error detection

Errors may occur within the CPU, memory hardware, I/O devices, and the user program. For each type of error, the OS takes adequate action to ensure correct and consistent computing.

Accounting

This service of the operating system keeps track of which users are using how much, and what kinds of, computer resources, whether for billing or simply to accumulate usage statistics.

Protection and Security

Protection involves ensuring that all access to system resources is controlled. To make a system secure, users need to authenticate themselves to the system before using it (usually via a login ID and password).

SYSTEM CALLS
A system call is a method for a computer program to request a service from the kernel of the operating system on which it is running; it is the way a user program interfaces with the operating system. The program requests a service, and the OS responds by invoking a series of system calls to satisfy the request. A system call can be written in assembly language.
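As a concrete illustration, the short C program below invokes the POSIX write() system call (through its C library wrapper) instead of going through a higher-level routine such as printf():

    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "Hello via a system call\n";
        /* write() traps into the kernel's write system call;
           file descriptor 1 is standard output. */
        write(1, msg, strlen(msg));
        return 0;
    }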

Types of System Calls

1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Process Control

Process control system calls are used to direct processes. Examples include creating, loading, executing, ending, aborting, and terminating processes, etc.

File Management

File management system calls are used to handle files. Examples include creating files, deleting files, opening, closing, reading, and writing, etc.

Device Management

Device management system calls are used to deal with devices. Examples include reading from and writing to a device, getting device attributes, and releasing a device, etc.

Information Maintenance

Information maintenance system calls are used to maintain information. Examples include getting or setting system data and getting or setting the time or date, etc.

Communication

Communication system calls are used for communication between processes. Examples include creating and deleting communication connections and sending and receiving messages, etc.

SYSTEM PROGRAMS
System programs provide an environment where programs can be developed and
executed. In the simplest sense, system programs also provide a bridge between the user
interface and system calls. In reality, they are much more complex. For example, a
compiler is a complex system program.

System Programs Purpose

System programs serve as part of the operating system. They traditionally lie between the user interface and the system calls. The user's view of the system is actually defined by system programs, not system calls, because system programs are what users interact with, and they are closer to the user interface.

In the operating system hierarchy, system programs sit between the user interface and the system calls.

Types of System Programs

System programs can be divided into six categories. These are given as follows:

Status Information

The status-information system programs provide required data on the current or past status of the system. This may include the system date, system time, available memory, disk space, logged-in users, etc.

Communications

These system programs are needed for system communications such as web browsers.
Web browsers allow systems to communicate and access information from the network
as required.

File Manipulation

These system programs are used to manipulate system files. This can be done using
various commands like create, delete, copy, rename, print etc. These commands can
create files, delete files, copy the contents of one file into another, rename files, print them
etc.

Program Loading and Execution

The system programs that deal with program loading and execution make sure that programs can be loaded into memory and executed correctly. Loaders and linkers are prime examples of this type of system program.

File Modification

System programs that are used for file modification basically change the data in the file
or modify it in some other way. Text editors are a big example of file modification system
programs.

Application Programs

Application programs can perform a wide range of services as per the needs of the users.
These include programs for database systems, word processors, plotting tools,
spreadsheets, games, scientific applications etc.

OPERATING SYSTEM STRUCTURE

1. Simple Structure
2. Monolithic Approach
3. Layered Approach
4. Microkernels

Simple Structure

• Operating systems such as MS-DOS and the original UNIX did not have well-defined structures.
• There was no CPU execution mode (user and kernel), so errors in applications could cause the whole system to crash.

Monolithic Approach

• Functionality of the OS is invoked with simple function calls within the kernel,
which is one large program.
• Device drivers are loaded into the running kernel and become part of the kernel.

Layered Approach

This approach breaks up the operating system into different layers.

• This allows implementers to change the inner workings, and increases modularity.
• As long as the external interface of the routines doesn't change, developers have more freedom to change the inner workings of the routines.
• With the layered approach, the bottom layer is the hardware, while the highest
layer is the user interface.
o The main advantage is simplicity of construction and debugging.
o The main difficulty is defining the various layers.

o The main disadvantage is that the OS tends to be less efficient than other
implementations.

Microkernels

This structures the operating system by removing all nonessential portions of the kernel
and implementing them as system and user level programs.

• Generally they provide minimal process and memory management, and a communications facility.
• Communication between components of the OS is provided by message passing.

The benefits of the microkernel are as follows:

• Extending the operating system becomes much easier.
• Any changes to the kernel tend to be fewer, since the kernel is smaller.
• The microkernel also provides more security and reliability.

The main disadvantage is poor performance, due to the increased system overhead of message passing.

PROCESS

Definition

A process is a program in execution. A process is more than the program code: it includes the program counter, the process stack, the contents of the process registers, etc.

The purpose of the process stack is to store temporary data, such as subroutine
parameters, return address and temporary variables.

Process States in OS

1. New: The process is being created.
2. Running: The process is being executed.
3. Waiting: The process is waiting for some event to occur.
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.

PROCESS SCHEDULING

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Process Scheduling Queues

The OS maintains all PCBs in process scheduling queues. The OS maintains a separate queue for each of the process states, and PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues –

1. Job queue − This queue keeps all the processes in the system.
2. Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
3. Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.

Schedulers

Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types –

• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler

Long Term Scheduler

It is also called the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, making them eligible for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems often have no long-term scheduler. The long-term scheduler acts when a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

OPERATIONS ON PROCESSES

There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows.

Process Creation

Processes need to be created in the system for different operations. This can be done by
the following events.

• User request for process creation
• System initialization
• Execution of a process-creation system call by a running process
• Batch job initialization

A process may be created by another process using fork(). The creating process is called
the parent process and the created process is the child process. A child process can have
only one parent but a parent process may have many children. Both the parent and child
processes have the same memory image, open files, and environment strings. However,
they have distinct address spaces.
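A minimal C sketch of fork() under these semantics: the call returns 0 in the child and the child's PID in the parent, and the parent waits for the child to terminate:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();               /* create a child process */
        if (pid < 0) {
            perror("fork");               /* creation failed */
            return 1;
        } else if (pid == 0) {
            printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        } else {
            wait(NULL);                   /* parent waits for the child */
            printf("parent: pid=%d, child=%d\n", getpid(), pid);
        }
        return 0;
    }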

Process Preemption

In preemption, an interrupt mechanism suspends the currently executing process, and the short-term scheduler determines the next process to execute. Preemption makes sure that all processes get some CPU time for execution.

Process Blocking

A process is blocked if it is waiting for some event to occur. This event is often I/O, since I/O operations are carried out by devices and do not require the processor. After the event is complete, the process goes back to the ready state.

Process Termination

After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.

A child process can be terminated by its parent process if its task is no longer relevant. The child process sends its status information to the parent process before it terminates. Also, when a parent process is terminated, its child processes are terminated as well, since child processes cannot run if their parent is terminated.

INTER PROCESS COMMUNICATION

Interprocess communication is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred, or the transfer of data from one process to another.

Synchronization in Interprocess Communication

Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess control mechanism or handled by the communicating processes. Some of the methods to provide synchronization are as follows –

Semaphore

A semaphore is a variable that controls access to a common resource by multiple processes. The two types of semaphores are binary semaphores and counting semaphores.

Mutual Exclusion

Mutual exclusion requires that only one process thread can enter the critical section at a
time. This is useful for synchronization and also prevents race conditions.

Barrier

A barrier does not allow individual processes to proceed until all the processes reach it.
Many parallel languages and collective routines impose barriers.

Spinlock

This is a type of lock. The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known as busy waiting because the process
is not doing any useful operation even though it is active.

Approaches for Inter-Process Communication



Pipes

A pipe is widely used for communication between two related processes. It is a half-duplex method: data flows in only one direction, from the first process to the second. In order to achieve full-duplex communication, a second pipe is needed.
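The following C sketch shows the half-duplex pipe just described: after fork(), the child writes into one end and the parent reads from the other:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                     /* fd[0]: read end, fd[1]: write end */
        char buf[32];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }
        if (fork() == 0) {             /* child writes ... */
            close(fd[0]);
            write(fd[1], "ping", 5);   /* 5 bytes: "ping" plus the NUL */
            close(fd[1]);
        } else {                       /* ... parent reads */
            close(fd[1]);
            read(fd[0], buf, sizeof buf);
            printf("parent received: %s\n", buf);
            close(fd[0]);
        }
        return 0;
    }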

Message Passing

It is a mechanism for processes to communicate and synchronize. Using message passing, processes communicate with each other without resorting to shared variables. The IPC mechanism provides two operations:

• send(message) – the message size can be fixed or variable
• receive(message)

Message Queues

A message queue is a linked list of messages stored within the kernel. It is identified by a message queue identifier. This method offers communication between single or multiple processes with full-duplex capability.
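A minimal System V message-queue sketch in C; error handling is omitted for brevity, and the struct msgbuf layout is user-defined (only the leading long type field is required):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct msgbuf { long mtype; char mtext[64]; };

    int main(void) {
        /* IPC_PRIVATE creates a fresh queue for this process tree */
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
        struct msgbuf out = { 1, "hello" }, in;

        msgsnd(qid, &out, strlen(out.mtext) + 1, 0);  /* enqueue in kernel */
        msgrcv(qid, &in, sizeof in.mtext, 1, 0);      /* dequeue type-1 msg */
        printf("received: %s\n", in.mtext);

        msgctl(qid, IPC_RMID, NULL);                  /* remove the queue */
        return 0;
    }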

Direct Communication

In this type of inter-process communication, processes must name each other explicitly. In this method, a link is established between one pair of communicating processes, and between each pair only one link exists.

Indirect Communication

Indirect communication is established only when processes share a common mailbox, and each pair of processes may share several communication links. A link can be associated with many processes and may be bidirectional or unidirectional.

Shared Memory

Shared memory is a region of memory shared between two or more processes, established so that all of the processes can access it. Access to this memory must be synchronized so that the processes are protected from each other.
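The sketch below obtains a shared region with an anonymous shared mapping, one of several ways to get shared memory on POSIX systems; a real program would add the synchronization discussed above:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* An anonymous MAP_SHARED mapping is visible to parent and child. */
        int *counter = mmap(NULL, sizeof *counter, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *counter = 0;
        if (fork() == 0) {         /* child writes into the shared page */
            *counter = 42;
            _exit(0);
        }
        wait(NULL);                /* parent then sees the child's update */
        printf("counter = %d\n", *counter);
        munmap(counter, sizeof *counter);
        return 0;
    }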

FIFO

A FIFO (named pipe) is used for communication between two unrelated processes. It is a full-duplex method, which means that the first process can communicate with the second process and vice versa.
Unit-II Operating System

THREADS

A thread is a single sequential flow of execution of the tasks of a process, so it is also known as a thread of execution or thread of control. Threads execute within a process in any operating system.

Apart from this, there can be more than one thread inside a process. Each thread of the same process uses a separate program counter, a stack of activation records, and control blocks. A thread is often referred to as a lightweight process.

Types of Threads

1. Kernel-level thread.
2. User-level thread.

Kernel-level threads are recognized by the operating system. There is a thread control block and a process control block in the system for each thread and each process when kernel-level threads are used. Kernel-level threads are implemented by the operating system.

The kernel knows about all the threads and manages them. The kernel-level thread approach offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads, and context-switch time is longer for kernel threads. However, if a kernel thread performs a blocking operation, another thread can continue execution.

Benefits of Threads

1. Enhanced throughput of the system: When the process is split into many
threads, and each thread is treated as a job, the number of jobs done in the unit
time increases. That is why the throughput of the system also increases.

2. Effective Utilization of Multiprocessor system: When you have more than one
thread in one process, you can schedule more than one thread in more than one
processor.

3. Faster context switch: Context switching between threads takes less time than context switching between processes; a process context switch means more overhead for the CPU.

4. Responsiveness: When the process is split into several threads, and when a
thread completes its execution, that process can be responded to as soon as
possible.

5. Communication: Communication between multiple threads is simple because the threads share the same address space, whereas communication between two processes requires dedicated communication mechanisms.

6. Resource sharing: Resources can be shared between all threads within a process,
such as code, data, and files. Note: The stack and register cannot be shared
between threads. There is a stack and register for each thread.

MULTITHREADING MODELS

Multithreading allows an application to divide its task into individual threads. In multithreading, the same process or task can be carried out by a number of threads; that is, more than one thread performs the task. With the use of multithreading, multitasking can be achieved.

Types of Multithreading models

1. Many-to-one multithreading model
2. One-to-one multithreading model
3. Many-to-many multithreading model

Many-to-one multithreading model

The many-to-one model maps many user-level threads to one kernel thread. This type of relationship facilitates an effective context-switching environment and is easily implemented, even on simple kernels with no thread support.

One-to-one multithreading model

The one-to-one model maps a single user-level thread to a single kernel-level thread. This
type of relationship facilitates the running of multiple threads in parallel. However, this
benefit comes with its drawback.

The generation of every new user thread requires creating a corresponding kernel thread, which causes overhead and can hinder the performance of the parent process. The Windows series and Linux operating systems try to tackle this problem by limiting the growth of the thread count.

Many-to-many multithreading model

In this type of model, there are several user-level threads and several kernel-level
threads. The number of kernel threads created depends upon a particular application.

The developer can create threads at both levels, though their numbers need not be the same. The many-to-many model is a compromise between the other two models. In this model, if any thread makes a blocking system call, the kernel can schedule another thread for execution.

Also, this model does not introduce the complexity present in the previous models. Though it allows the creation of multiple kernel threads, true concurrency cannot be achieved by this model, because the kernel can schedule only one process at a time.

THREAD LIBRARIES

A thread library provides a collection of functions useful for creating and controlling threads. Programmers can access these thread libraries through an application programming interface (API). A thread library can be a user-level library or a kernel-level library.

If the thread library is implemented in user space, then the code and data of the thread library reside in user space. In this case, invoking any function from the thread library is a simple function call, not a system call.
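For example, with the Pthreads library (the POSIX thread API, typically compiled with -pthread), creating and waiting for threads is just a pair of library calls:

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function; arg identifies the thread. */
    static void *worker(void *arg) {
        printf("thread %ld running in the shared address space\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (long i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);   /* wait for both threads to finish */
        return 0;
    }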

THREADING ISSUES

1. The fork() and exec() system calls
2. Signal handling
3. Thread cancellation
4. Thread-local storage
5. Scheduler activation

The fork() and exec() system calls

If a thread calls fork(), is the entire process copied, or is the new process single-threaded? The answer depends on the system. If the new process calls exec() immediately, there is no need to copy all the other threads; if it does not, the whole process should be copied.

Signal Handling

Whenever a multithreaded process receives a signal, to which thread should that signal be delivered? There are four main options for signal delivery:

1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a particular thread to receive all the signals for the process.

Thread Cancellation

Threads that are no longer required can be cancelled by another thread using one of two techniques:
1. Asynchronous cancellation
2. Deferred cancellation

Asynchronous Cancellation

This means cancelling the thread immediately. Reclaiming allocated resources and handling inter-thread data transfer can be challenging with asynchronous cancellation.

Deferred Cancellation

In this method, a flag is set indicating that the thread should cancel itself when feasible. It is up to the cancelled thread to check this flag intermittently and exit gracefully when it sees that the flag is set.

Thread Local Storage

The benefit of using threads in the first place is that most data is shared among the threads; however, threads sometimes also need thread-specific data. The major thread libraries (Pthreads, Win32, and Java) provide support for such thread-specific data, called thread-local storage (TLS).
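A small C sketch of thread-local storage using the __thread storage class (a GCC/Clang extension; C11 spells it _Thread_local): each thread gets its own copy of the variable:

    #include <pthread.h>
    #include <stdio.h>

    static __thread int tls_counter = 0;   /* one copy per thread */

    static void *worker(void *arg) {
        (void)arg;
        tls_counter++;                     /* touches only this thread's copy */
        printf("my tls_counter = %d\n", tls_counter);  /* always prints 1 */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }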

Scheduler Activation

Numerous implementations of threads provide a virtual processor as an interface between user threads and kernel threads, specifically for the two-tier model. The virtual processor is called a lightweight process (LWP). Kernel threads and LWPs have a one-to-one correspondence, and the number of available kernel threads can be changed dynamically.

BASIC CONCEPTS OF SCHEDULING

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Scheduling Queues

The OS maintains all PCBs in process scheduling queues. The OS maintains a separate queue for each of the process states, and PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues –

• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.

SCHEDULING CRITERIA

There are several different criteria to consider when trying to select the "best" scheduling
algorithm for a particular situation and environment, including:

1. CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste zero CPU cycles. On a real system, CPU usage should range from about 40% (lightly loaded) to 90% (heavily loaded).

2. Throughput - Number of processes completed per unit time. This may range from 10 per second to 1 per hour, depending on the specific processes.

3. Turnaround time - Time required for a particular process to complete, from submission time to completion (wall-clock time).

4. Waiting time - How much time processes spend in the ready queue waiting their
turn to get on the CPU.

5. Load average - The average number of processes sitting in the ready queue waiting their turn to get onto the CPU. Reported as 1-minute, 5-minute, and 15-minute averages by "uptime" and "who".

6. Response time - The time taken in an interactive program from the issuance of a command to the beginning of a response to that command.

In general, one wants to optimize the average value of a criterion (maximize CPU utilization and throughput, and minimize all the others). However, sometimes one wants to do something different, such as minimizing the maximum response time.

Sometimes it is more desirable to minimize the variance of a criterion than its actual value, i.e., users are more accepting of a consistent, predictable system than an inconsistent one, even if it is a little slower.

SCHEDULING ALGORITHMS

To decide which process to execute first and which to execute last so as to achieve maximum CPU utilization, computer scientists have defined the following algorithms:

1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling

First Come First Serve (FCFS)

FCFS stands for First Come First Serve. It is the simplest CPU scheduling algorithm. In this type of algorithm, the process that requests the CPU first gets the CPU allocation first. This scheduling method can be managed with a FIFO queue.

As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the queue. When the CPU becomes free, it is assigned to the process at the head of the queue.

Characteristics of FCFS method:

• It is a non-preemptive scheduling algorithm.
• Jobs are always executed on a first-come, first-served basis.
• It is easy to implement and use.
• However, this method is poor in performance, and the general wait time is quite high; the worked example below illustrates this.
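A worked FCFS example in C, using the classic hypothetical burst times of 24, 3, and 3 for three processes that all arrive at time 0 and are served in order:

    #include <stdio.h>

    int main(void) {
        int burst[] = { 24, 3, 3 };        /* CPU burst times, in ms */
        int n = 3, start = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            total_wait += start;            /* waiting time = start time */
            total_tat  += start + burst[i]; /* turnaround = completion   */
            start      += burst[i];
        }
        /* Prints 17.00 and 27.00: short jobs stuck behind a long one. */
        printf("average waiting time    = %.2f\n", (double)total_wait / n);
        printf("average turnaround time = %.2f\n", (double)total_tat / n);
        return 0;
    }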

Shortest Remaining Time

SRT stands for Shortest Remaining Time. It is also known as preemptive SJF scheduling. In this method, the CPU is allocated to the process that is closest to completion. This method prevents a newer ready-state process from holding up the completion of an older process.

Characteristics of SRT scheduling method:

• This method is mostly applied in batch environments where short jobs are
required to be given preference.
• This is not an ideal method to implement it in a shared system where the required
CPU time is unknown.
• Each process is associated with the length of its next CPU burst; the operating system uses these lengths to schedule the process with the shortest possible time.

Priority Based Scheduling

Priority scheduling is a method of scheduling processes based on priority. In this method, the scheduler selects the tasks to work on according to their priority.

Priority scheduling allows the OS to make priority assignments. Processes with higher priority are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority can be decided based on memory requirements, time requirements, etc.

Round-Robin Scheduling

Round robin is one of the oldest and simplest scheduling algorithms. The name of this algorithm comes from the round-robin principle, where each person gets an equal share of something in turn. It is mostly used for scheduling in multitasking systems. This algorithm provides starvation-free execution of processes.

Characteristics of Round-Robin Scheduling

• Round robin is a hybrid, clock-driven model.
• The time slice assigned for a specific task should be small; however, it may vary for different processes.
• It provides a bounded response time, since each process is guaranteed a turn on the CPU within a specific time limit.

Shortest Job First

SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest execution time is selected for execution next. This scheduling method can be preemptive or non-preemptive. It significantly reduces the average waiting time of processes awaiting execution.

Characteristics of SJF Scheduling

• Each job is associated with the length of time it needs to complete.
• When the CPU is available, the process or job with the shortest completion time is executed first.
• It is commonly implemented with a non-preemptive policy.
• This algorithm is useful for batch-type processing, where waiting for jobs to complete is not critical.
• It improves throughput by executing shorter jobs first, which generally have shorter turnaround times.

Multiple-Level Queues Scheduling

This algorithm separates the ready queue into various separate queues. In this method,
processes are assigned to a queue based on a specific property of the process, like the
process priority, size of the memory, etc.
However, this is not an independent scheduling OS algorithm as it needs to use other
types of algorithms in order to schedule the jobs.

Characteristic of Multiple-Level Queues Scheduling:

• Multiple queues are maintained, one for each group of processes with common characteristics.
• Each queue may have its own scheduling algorithm.
• Priorities are assigned to each queue.

MULTIPLE PROCESSORS SCHEDULING

Multiple processor scheduling, or multiprocessor scheduling, focuses on designing the system's scheduling function for more than one processor. Multiple CPUs share the load (load sharing) in multiprocessor scheduling so that various processes run simultaneously. In general, multiprocessor scheduling is more complex than single-processor scheduling. When the many processors are identical, any process can be run at any time on any of them.

The multiple CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices, so we can say the system is tightly coupled. These systems are used when we want to process a bulk amount of data, mainly in areas such as satellite systems and weather forecasting.

There are cases when the processors are identical, i.e., homogenous, in terms of their
functionality in multiple-processor scheduling. We can use any processor available to run
any process in the queue.

Multiprocessor systems may be heterogeneous (different kinds of CPUs) or homogeneous (the same CPU). There may be special scheduling constraints, such as devices connected via a private bus to only one CPU.

REAL-TIME CPU SCHEDULING

Real-time systems are systems that carry out real-time tasks. These tasks need to be performed immediately, with a certain degree of urgency. In particular, these tasks are related to controlling certain events or reacting to them. Real-time tasks can be classified as hard real-time tasks and soft real-time tasks.

A hard real-time task must be performed at a specified time; missing the deadline could lead to huge losses. In soft real-time tasks, a specified deadline can occasionally be missed, because the task can be rescheduled or completed after the specified time.

In real-time systems, the scheduler is considered the most important component; it is typically a short-term task scheduler. The main focus of this scheduler is to reduce the response time associated with each process rather than merely handling deadlines.
Unit-III Operating System

BACKGROUND IN PROCESS SYNCHRONIZATION

On the basis of synchronization, processes are categorized as one of the following two
types:

Independent Process : Execution of one process does not affect the execution of other processes.

Cooperative Process : Execution of one process affects the execution of other processes. The process synchronization problem arises in the case of cooperative processes because resources are shared among them.

CRITICAL SECTION PROBLEM

A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables that need to be synchronized to maintain the consistency of data variables.

In the entry section, a process requests entry into the critical section.

Any solution to the critical section problem must satisfy three requirements:

1. Mutual Exclusion : If a process is executing in its critical section, then no other process is allowed to execute in the critical section.

2. Progress : If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will enter the critical section next, and the selection cannot be postponed indefinitely.

3. Bounded Waiting : A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.

SYNCHRONIZATION HARDWARE

Process synchronization refers to coordinating the execution of processes so that no two processes simultaneously access the same shared data and resources. A problem occurs when two processes running simultaneously share the same data or variable.

There are three hardware approaches to solving process synchronization problems:

1. Test and Set
2. Swap
3. Unlock and Lock

Test and Set

In Test and Set, the shared variable is a lock that is initialized to false.

The TestAndSet instruction atomically returns the old value of the lock and sets the lock to true. Mutual exclusion is ensured here: as long as the lock is set to true, other processes cannot enter, and their loops continue.

However, after one process completes, any other process may go in, as no queue is maintained; a sketch of such a lock follows.
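A C11 sketch of a test-and-set spinlock built on atomic_flag, whose test-and-set operation behaves like the hardware instruction described above:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;   /* starts out false */

    void acquire(void) {
        /* atomic_flag_test_and_set returns the OLD value and sets the
           flag: spin while another process already holds the lock. */
        while (atomic_flag_test_and_set(&lock))
            ;                                     /* busy wait */
    }

    void release(void) {
        atomic_flag_clear(&lock);                 /* set lock back to false */
    }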

Swap

In this algorithm, instead of directly setting the lock to true, a key is first set to true and then swapped with the lock.

Similar to Test and Set, when there are no processes in the critical section, the lock turns false and allows other processes to enter. Hence, mutual exclusion and progress are ensured, but bounded waiting is not ensured, for the very same reason.

Unlock and Lock

In addition to Test and Set, this algorithm uses waiting[i] to check whether any processes are waiting. The waiting processes effectively form a queue for the critical section.

Unlike the previous algorithms, a process leaving the critical section does not simply set the lock to false; it first checks the queue for any waiting processes. Only if no processes are waiting in the queue is the lock set to false, so that any process can then enter. This provides bounded waiting.

MUTEX LOCKS

A mutex is a mutual exclusion object that synchronizes access to a resource. It is created with a unique name at the start of a program. The mutex locking mechanism ensures that only one thread can acquire the mutex and enter the critical section, and this thread releases the mutex only when it exits the critical section.

It is a special type of binary semaphore used for controlling access to the shared resource.
It includes a priority inheritance mechanism to avoid extended priority inversion
problems. It allows current higher priority tasks to be kept in the blocked state for the
shortest time possible. However, priority inheritance does not correct priority inversion
but only minimizes its effect.

Advantages of Mutex

1. A mutex is just a simple lock, obtained before entering the critical section and released on leaving it.
2. Since only one thread is in its critical section at any given time, there are no race conditions, and data always remains consistent (see the sketch below).
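A short Pthreads sketch of a mutex protecting a shared counter; without the lock/unlock pair, the two threads would race on counter and the final value would be unpredictable:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&m);     /* enter the critical section */
            counter++;                  /* only one thread updates at a time */
            pthread_mutex_unlock(&m);   /* leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 */
        return 0;
    }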

SEMAPHORE

A semaphore is simply a non-negative variable shared between threads. A semaphore is a signaling mechanism: a thread waiting on a semaphore can be signaled by another thread.

A semaphore uses two atomic operations, sketched in code below:

1. Wait: The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, no decrement is performed and the caller waits.

2. Signal (for process synchronization): The signal operation increments the value of its argument S.
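A busy-waiting sketch of the two operations in C; in a real implementation, both bodies must execute atomically, which plain C alone does not guarantee:

    /* Pseudocode-style sketch; wait() and signal() here are the textbook
       operations, not the POSIX functions of the same name. */
    typedef struct { int value; } semaphore;

    void wait(semaphore *S) {
        while (S->value <= 0)
            ;              /* busy-wait until the semaphore is positive */
        S->value--;        /* then take one unit of the resource */
    }

    void signal(semaphore *S) {
        S->value++;        /* release one unit, letting a waiter proceed */
    }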

SEMAPHORE USAGES

In the case of a single 4 KB buffer, we can separate it into four buffers of 1 KB. A
semaphore can be associated with each of these four buffers, allowing producers and
consumers to work on different buffers at the same time.

Advantages of Semaphore

1. A counting semaphore can allow more than one thread into the critical section, up to
its initial count.
2. Semaphores are machine-independent.
3. Semaphores are implemented in the machine-independent code of the microkernel.
4. A binary semaphore does not allow multiple processes to enter the critical section at
once.
5. As there is no busy waiting in a blocking semaphore, a waiting process sleeps instead
of spinning, so process time and resources are not wasted.
6. They allow flexible management of resources.

MONITORS

The monitor is one of the ways to achieve process synchronization. Monitors are
supported by programming languages to achieve mutual exclusion between processes; for
example, Java synchronized methods, together with the wait() and notify() constructs
Java provides.

A monitor is a collection of condition variables and procedures combined together in a
special kind of module or package.

Processes running outside the monitor cannot access the internal variables of the
monitor, but they can call the procedures of the monitor.

Only one process at a time can execute code inside the monitor.
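C has no built-in monitors, but the usual emulation pairs one mutex (the monitor lock)
with condition variables. A small sketch of a "monitor" whose procedure blocks until a
shared count becomes positive; all names are illustrative:

#include <pthread.h>

/* monitor state: accessed only through the procedures below */
static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;     /* monitor lock */
static pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;  /* condition variable */
static int count = 0;

void deposit(void)                     /* monitor procedure */
{
    pthread_mutex_lock(&mon);          /* only one thread runs inside */
    count++;
    pthread_cond_signal(&nonzero);     /* wake one waiter, if any */
    pthread_mutex_unlock(&mon);
}

void withdraw(void)                    /* monitor procedure: waits until count > 0 */
{
    pthread_mutex_lock(&mon);
    while (count == 0)
        pthread_cond_wait(&nonzero, &mon);  /* releases mon while waiting */
    count--;
    pthread_mutex_unlock(&mon);
}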

Advantages of Monitor:

Monitors have the advantage of making parallel programming easier and less error-prone
than using techniques such as semaphores.

DINING PHILOSOPHERS SOLUTIONS USING MONITORS

Prerequisite: Monitor, Process Synchronization

Dining-Philosophers Problem – N philosophers seated around a circular table



• There is one chopstick between each pair of adjacent philosophers
• A philosopher must pick up the two chopsticks nearest to it in order to eat
• A philosopher must pick up one chopstick first, then the second one, not both at once

We need an algorithm for allocating these limited resources (chopsticks) among several
processes (philosophers) such that the solution is free from deadlock and free from
starvation.

There exist algorithms to solve the Dining Philosophers Problem, but they may run into a
deadlock situation. Also, a deadlock-free solution is not necessarily starvation-free.
Semaphores can result in deadlock due to programming errors. Monitors alone are not
sufficient to solve this; we need monitors with condition variables.

DINING PHILOSOPHERS SOLUTIONS USING MONITORS

We illustrate monitor concepts by presenting a deadlock-free solution to the dining-
philosophers problem. The monitor is used to control access to the state variables and
condition variables; it only controls when a philosopher may enter and exit the critical
segment. This solution imposes the restriction that a philosopher may pick up her
chopsticks only if both of them are available.

To code this solution, we need to distinguish among three states in which we may find a
philosopher. For this purpose, we introduce the following data structure:

THINKING – the philosopher does not want access to either chopstick.
HUNGRY – the philosopher wants to enter the critical section, i.e., wants to eat.
EATING – the philosopher has got both chopsticks, i.e., has entered the section.
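The textbook monitor solution can be sketched in C by emulating the monitor with a mutex
and one condition variable per philosopher. pickup, putdown, and test are the usual
textbook procedure names; the initialization helper and other details are assumptions:

#include <pthread.h>

#define N 5

enum { THINKING, HUNGRY, EATING };

static int             state[N];
static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;  /* monitor lock */
static pthread_cond_t  self[N];        /* philosopher i blocks on self[i] */

void philo_init(void)
{
    for (int i = 0; i < N; i++) {
        state[i] = THINKING;
        pthread_cond_init(&self[i], NULL);
    }
}

/* let philosopher i eat if she is hungry and neither neighbour is eating */
static void test(int i)
{
    if (state[i] == HUNGRY &&
        state[(i + N - 1) % N] != EATING &&
        state[(i + 1) % N] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void pickup(int i)                     /* called before eating */
{
    pthread_mutex_lock(&mon);
    state[i] = HUNGRY;
    test(i);                           /* try to take both chopsticks */
    while (state[i] != EATING)         /* both not available: wait */
        pthread_cond_wait(&self[i], &mon);
    pthread_mutex_unlock(&mon);
}

void putdown(int i)                    /* called after eating */
{
    pthread_mutex_lock(&mon);
    state[i] = THINKING;
    test((i + N - 1) % N);             /* left neighbour may now eat */
    test((i + 1) % N);                 /* right neighbour may now eat */
    pthread_mutex_unlock(&mon);
}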

IMPLEMENTATION OF MONITOR USING SEMAPHORES

We now consider a possible implementation of the monitor mechanism using semaphores.
For each monitor, a semaphore mutex (initialized to 1) is provided. A process must
execute wait(mutex) before entering the monitor and must execute signal(mutex) after
leaving the monitor.

Since a signaling process must wait until the resumed process either leaves or waits, an
additional semaphore, next, is introduced, initialized to 0. The signaling processes can
use next to suspend themselves. An integer variable next_count is also provided to count
the number of processes suspended on next. Thus, each external function F is replaced by:
wait(mutex);
    ...
    body of F
    ...
if (next_count > 0)
    signal(next);
else
    signal(mutex);
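The standard textbook construction also implements each condition variable x with a
semaphore x_sem (initialized to 0) and an integer x_count (initialized to 0). In the
same pseudocode style, x.wait() and x.signal() become:

x.wait():
    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;

x.signal():
    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }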

MEMORY MANAGEMENT
BACKGROUND
Memory management is the functionality of an operating system which handles or manages
primary memory and moves processes back and forth between main memory and disk during
execution. Memory management keeps track of each and every memory location, regardless
of whether it is allocated to some process or free.
It checks how much memory is to be allocated to processes. It decides which process will
get memory at what time. It tracks whenever some memory gets freed or unallocated and
correspondingly updates the status.
SWAPPING
When a process is to be executed, it must reside in main memory. Swapping is the
technique of temporarily moving a process out of main memory (which is fast compared to
secondary memory) into secondary memory. Swapping allows more processes to be run than
can fit into memory at one time.

The major part of swap time is transfer time, and the total transfer time is directly
proportional to the amount of memory swapped. Swapping is also known as roll out, roll
in, because if a higher-priority process arrives and wants service, the memory manager
can swap out a lower-priority process and then load and execute the higher-priority
process.
After the higher-priority work finishes, the lower-priority process is swapped back into
memory and continues its execution.

CONTIGUOUS MEMORY ALLOCATION :


The main memory must accommodate both the operating system and the various user
processes. Therefore, the allocation of memory becomes an important task in the
operating system.

The memory is usually divided into two partitions: one for the resident operating system
and one for the user processes. We normally need several user processes to reside in
memory simultaneously. Therefore, we need to consider how to allocate available
memory to the processes that are in the input queue waiting to be brought into memory.
In contiguous memory allocation, each process is contained in a single contiguous
segment of memory.

PAGING

Paging is a technique used for non-contiguous memory allocation. It is a fixed-size
partitioning scheme. In paging, both main memory and secondary memory are divided into
equal fixed-size partitions. The partitions of secondary memory and of main memory are
called pages and frames, respectively.

Paging is a memory management method used to fetch processes from secondary memory into
main memory in the form of pages. In paging, each process is split into parts, where the
size of each part is the same as the page size. The size of the last part may be less
than the page size. The pages of a process are stored in the frames of main memory
depending upon their availability.
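The translation paging implies can be sketched with simple integer arithmetic; the 4 KB
page size and the toy page table below are illustrative assumptions:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                /* assumed page size */

/* toy page table: page_table[p] holds the frame number for page p */
static uint32_t page_table[] = { 5, 2, 7, 0 };

uint32_t translate(uint32_t logical)
{
    uint32_t page   = logical / PAGE_SIZE;   /* page number */
    uint32_t offset = logical % PAGE_SIZE;   /* page offset */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void)
{
    /* logical address 8200 = page 2, offset 8 -> frame 7, offset 8 = 28680 */
    printf("physical = %u\n", translate(8200));
    return 0;
}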

SEGMENTATION

Segmentation is another non-contiguous memory allocation scheme, like paging. Unlike
paging, in segmentation the process is not divided arbitrarily into fixed-size pages; it
is a variable-size partitioning scheme. Unlike in paging, in segmentation secondary and
main memory are not divided into partitions of equal size.

The partitions of secondary memory are called segments. The details concerning every
segment are held in a table called the segment table. The segment table contains two
main pieces of data about a segment: the Base, which is the base address of the segment,
and the Limit, which is the length of the segment.

In segmentation, the CPU generates a logical address that contains a segment number and
a segment offset. If the segment offset is less than the limit, the address is valid;
otherwise a trap is raised because the address is invalid.
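The base/limit check can be sketched the same way; the toy segment table values are
illustrative:

#include <stdio.h>
#include <stdint.h>

struct segment { uint32_t base, limit; };

/* toy segment table */
static struct segment seg_table[] = {
    { 1000, 400 },   /* segment 0: bytes 1000..1399 */
    { 5000, 100 },   /* segment 1: bytes 5000..5099 */
};

/* returns the physical address, or -1 for an invalid (trapping) access */
long translate(uint32_t segno, uint32_t offset)
{
    if (offset >= seg_table[segno].limit)
        return -1;                     /* offset beyond limit: trap */
    return (long)seg_table[segno].base + offset;
}

int main(void)
{
    printf("%ld\n", translate(0, 399));   /* valid   -> 1399 */
    printf("%ld\n", translate(1, 200));   /* invalid -> -1   */
    return 0;
}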

VIRTUAL MEMORY

Virtual Memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory.

The addresses a program may use to reference memory are distinguished from the
addresses the memory system uses to identify physical storage sites, and program
generated addresses are translated automatically to the corresponding machine
addresses.

DEMAND PAGING

The process of loading the page into memory on demand (whenever page fault occurs)
is known as demand paging.

The process includes the following steps:

1. If the CPU tries to reference a page that is currently not available in main
memory, the hardware generates an interrupt indicating a memory access fault (a
page fault).

2. The OS puts the interrupted process in a blocked state. For execution to proceed,
the OS must bring the required page into memory.
3. The OS will search for the required page on the backing store (secondary memory).
4. The required page is brought from the backing store into a frame of physical
memory. Page replacement algorithms are used to decide which page in physical
memory to replace when no frame is free.
5. The page table is updated accordingly.
6. A signal is sent to the CPU to continue the program execution, and the process is
placed back into the ready state.
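The steps above might be condensed into a handler like the following sketch; every
helper here (find_free_frame, choose_victim, and so on) is a hypothetical stub standing
in for OS-internal machinery, not a real API:

#include <stdio.h>

/* hypothetical stand-ins for OS-internal machinery */
static int  find_free_frame(void)             { return -1; }  /* -1: memory full */
static int  choose_victim(void)               { return 3; }   /* replacement choice */
static void write_back_if_dirty(int frame)    { (void)frame; }
static void read_page_from_disk(int p, int f) { printf("load page %d into frame %d\n", p, f); }
static void update_page_table(int p, int f)   { (void)p; (void)f; }
static void mark_process_ready(void)          { puts("process back to ready state"); }

void handle_page_fault(int faulting_page)
{
    int frame = find_free_frame();            /* steps 3-4: find a home for the page */
    if (frame < 0) {
        frame = choose_victim();              /* no free frame: run replacement */
        write_back_if_dirty(frame);
    }
    read_page_from_disk(faulting_page, frame);   /* blocking I/O (process blocked) */
    update_page_table(faulting_page, frame);     /* step 5 */
    mark_process_ready();                        /* step 6 */
}

int main(void) { handle_page_fault(7); return 0; }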

Advantages:

1. More processes may be maintained in the main memory: Because we are going to
load only some of the pages of any particular process, there is room for more
processes. This leads to more efficient utilization of the processor because it is
more likely that at least one of the more numerous processes will be in the ready
state at any particular time.

2. Process may be larger than all of main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can
be executed because of demand paging. The OS itself loads pages of a process in
main memory as required.

3. It allows greater multiprogramming levels by using less of the available (primary)
memory for each process.

COPY ON WRITE

Copy on Write, or simply COW, is a resource management technique. One of its main uses
is in the implementation of the fork system call, where parent and child initially share
the same virtual memory pages.

In UNIX-like operating systems, the fork() system call creates a duplicate of the parent
process, called the child process.

The idea behind copy-on-write is that when a parent process creates a child process,
both processes initially share the same pages in memory, and these shared pages are
marked as copy-on-write. If either process tries to modify a shared page, only then is a
copy of that page created, and the modification is made on that process's copy, thus not
affecting the other process.

Suppose, for example, that a process P creates a new process Q and then process P
modifies page 3. Only page 3 is copied for P; all other shared pages remain shared
between P and Q.
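Copy-on-write is transparent to the program, but its effect is easy to observe with the
real POSIX fork(): after the child writes to a shared variable, each process sees its
own copy.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int page3 = 42;              /* stands in for the shared page */

int main(void)
{
    pid_t pid = fork();      /* parent and child now share pages, marked COW */
    if (pid == 0) {
        page3 = 99;          /* write triggers the copy: child gets its own page */
        printf("child sees %d\n", page3);   /* 99 */
        return 0;
    }
    wait(NULL);
    printf("parent sees %d\n", page3);      /* still 42 */
    return 0;
}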

PAGE REPLACEMENT

Page replacement algorithms are the techniques by which an operating system decides
which memory pages to swap out (write to disk) when a page of memory needs to be
allocated. Page replacement happens whenever a page fault occurs and a free frame cannot
be used for the allocation, either because none is available or because the number of
free frames is lower than required.

When a page that was selected for replacement and paged out is referenced again, it has
to be read in from disk, and this requires waiting for I/O completion. The time spent
waiting for such page-ins determines the quality of a page replacement algorithm: the
less time waiting for page-ins, the better the algorithm.

A page replacement algorithm looks at the limited information about accessing the pages
provided by hardware, and tries to select which pages should be replaced to minimize
the total number of page misses, while balancing it with the costs of primary storage and
processor time of the algorithm itself. There are many different page replacement
algorithms. We evaluate an algorithm by running it on a particular string of memory
references (a reference string) and computing the number of page faults.

PAGE REPLACEMENT ALGORITHM

First In First Out (FIFO) algorithm

• Oldest page in main memory is the one which will be selected for replacement.
• Easy to implement, keep a list, replace pages from the tail and add new pages at
the head.

Optimal Page algorithm

• An optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms. Such an algorithm exists and has been called OPT or MIN.
• Replace the page that will not be used for the longest period of time. This requires
knowing the future reference string, so OPT is mainly used as a benchmark for other
algorithms.

Least Recently Used (LRU) algorithm

• The page which has not been used for the longest time in main memory is the one
selected for replacement.
• Easy to implement: keep a list ordered by recency of use and replace the page whose
last use lies furthest in the past.
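A small simulation of FIFO replacement that counts page faults over a reference string;
the reference string and frame count are arbitrary example values:

#include <stdio.h>
#include <stdbool.h>

#define FRAMES 3

int main(void)
{
    int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4 };  /* example reference string */
    int n = sizeof refs / sizeof refs[0];
    int frames[FRAMES];
    int next = 0, loaded = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < loaded; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {
            faults++;
            if (loaded < FRAMES) {
                frames[loaded++] = refs[i];   /* a free frame is available */
            } else {
                frames[next] = refs[i];       /* evict the oldest page */
                next = (next + 1) % FRAMES;
            }
        }
    }
    printf("page faults: %d\n", faults);      /* 7 for this string */
    return 0;
}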

ALLOCATION OF FRAMES

1. The absolute minimum number of frames that a process must be allocated is dependent
on the system architecture, and corresponds to the worst-case number of pages that
could be touched by a single (machine) instruction.

2. If an instruction (and its operands) spans a page boundary, then multiple pages
could be needed just for the instruction fetch.

3. If the memory locations referenced by an instruction's operands can themselves span
page boundaries, then multiple pages could be needed for operand access as well.

The worst case involves indirect addressing, particularly where multiple levels of indirect
addressing are allowed. Left unchecked, a pointer to a pointer to a pointer to a pointer to
a . . . could theoretically touch every page in the virtual address space in a single machine
instruction, requiring every virtual page be loaded into physical memory simultaneously.

For this reason, architectures place a limit (say, 16) on the number of levels of
indirection allowed in an instruction. This is enforced with a counter initialized to
the limit and decremented with every level of indirection in an instruction; if the
counter reaches zero, an "excessive indirection" trap occurs. This example would still
require a minimum frame allocation of 17 per process.

THRASHING

Thrashing in computing is an issue caused when virtual memory is in use. It occurs when
the computer spends most of its time rapidly exchanging pages between main memory and
the hard disk, to the exclusion of most application-level processing. As main memory
fills up, additional pages need to be swapped in and out of virtual memory.

The swapping causes a very high rate of hard disk access. Thrashing can continue for a
long duration until the underlying issue is addressed, and while it lasts, system
throughput collapses because the processor spends nearly all of its time waiting on the
disk.

I/O HARDWARE
One of the important jobs of an Operating System is to manage various I/O devices
including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-
mapped screen, LED, Analog-to-digital converter, On/off switch, network connections,
audio I/O, printers etc.

I/O devices can be divided into two categories

Block devices − A block device is one with which the driver communicates by sending
entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc.

Character devices − A character device is one with which the driver communicates by
sending and receiving single characters (bytes, octets). For example, serial ports,
parallel ports, sound cards, etc.

Device Controllers

Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.

The Device Controller works like an interface between a device and a device driver. I/O
units (Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an
electronic component where electronic component is called the device controller.

Any device connected to the computer is connected by a plug and socket, and the socket
is connected to a device controller. In the usual model for connecting the CPU, memory,
controllers, and I/O devices, the CPU and the device controllers all use a common bus
for communication.

APPLICATION I/O INTERFACE

I/O Interface

An interface is needed whenever the CPU wants to communicate with I/O devices. The
interface is used to interpret the address generated by the CPU. Thus, to share
information between the CPU and the I/O devices, an interface is used, which is called
the I/O interface.

Various applications of I/O Interface

One application of the I/O interface is that a program can open any file without any
detailed information about it, i.e., without knowing even the basic properties of the
underlying device. The interface also makes it possible to add new devices to the
computer system without disrupting the operating system.

It can also be used to abstract away differences among I/O devices by identifying a few
general kinds. Each general kind is accessed through a standardized set of functions,
which is called an interface.

Character-stream or Block

Both character-stream and block devices transfer data in the form of bytes. The
difference between them is that a character-stream device transfers bytes in a linear
way, i.e., one after another, whereas a block device transfers a whole block of bytes as
a single unit.

Sequential or Random Access

A sequential device transfers data in a fixed order determined by the device, whereas a
random-access device allows the user to instruct the device to seek to any of its data
storage locations.

Synchronous or Asynchronous

A synchronous device performs data transfers with predictable response times, in
coordination with other aspects of the system. An asynchronous device exhibits irregular
or unpredictable response times that are not coordinated with other computer events.

Sharable or Dedicated

A sharable device can be used concurrently by several processes or threads, whereas a
dedicated device cannot.

Speed of Operation

Device speeds range from a few bytes per second to a few gigabytes per second.

Read-write, read only, write-only

Different devices perform different operations: some support both input and output,
while others support only one data transfer direction, either input or output.

KERNEL I/O SUBSYSTEM

Kernel I/O Subsystem is responsible to provide many services related to I/O. Following
are some of the services provided.

Scheduling − The kernel schedules a set of I/O requests to determine a good order in
which to execute them. When an application issues a blocking I/O system call, the
request is placed on the queue for that device. The kernel I/O scheduler rearranges the
order of the queue to improve the overall system efficiency and the average response
time experienced by the applications.

Buffering − The kernel I/O subsystem maintains a memory area known as a buffer that
stores data while they are transferred between two devices or between a device and an
application. Buffering is done to cope with a speed mismatch between the producer and
consumer of a data stream, or to adapt between devices that have different data transfer
sizes.

Caching − The kernel maintains cache memory, a region of fast memory that holds copies
of data. Access to the cached copy is more efficient than access to the original.

Spooling and Device Reservation − A spool is a buffer that holds output for a device,
such as a printer, that cannot accept interleaved data streams. The spooling system
copies the queued spool files to the printer one at a time. In some operating systems,
spooling is managed by a system daemon process; in others, it is handled by an in-kernel
thread.

Error Handling − An operating system that uses protected memory can guard against
many kinds of hardware and application errors.

COMMUNICATION WITH I/O DEVICES

Data transfers between the CPU and I/O devices may be handled by a variety of
techniques. Some techniques use the CPU as an intermediate path; others transfer the
data directly to and from the memory unit. Data transfer to and from peripherals is what
is meant by input-output communication.

Types of Input-Output communication techniques

1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct Memory Access (DMA)

Programmed I/O

In the programmed I/O method, the I/O device does not have direct memory access. A data
transfer from an I/O device to memory requires the CPU to execute a program or several
instructions, which is why this method is called programmed I/O.

In this method, the CPU stays in a program loop until the I/O unit indicates that it is
ready for data transfer. This is a time-consuming process for the CPU. The programmed
I/O method is particularly useful in small, low-speed computers. The CPU sends the
'Read' command to the I/O device and waits in the program loop until it receives a
response from the I/O device.
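The polling pattern can be sketched as below. Real programmed I/O would re-read a
memory-mapped status register; here a simulated device stands in so the sketch is
self-contained, and device_ready, device_data, and simulate_device are invented names:

#include <stdio.h>
#include <stdbool.h>

/* simulated device registers (in real hardware these would be memory-mapped
   I/O locations, typically read through volatile pointers) */
static bool device_ready = false;
static int  device_data  = 0;
static int  polls        = 0;

static void simulate_device(void)      /* device becomes ready after a while */
{
    if (++polls == 5) { device_data = 123; device_ready = true; }
}

int main(void)
{
    /* the CPU issues the 'Read' command, then busy-waits on the status */
    while (!device_ready)
        simulate_device();             /* in reality: re-read the status register */
    printf("read %d after %d polls\n", device_data, polls);
    return 0;
}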

Interrupt-initiated I/O

The problem with programmed I/O is that the CPU has to wait for the ready signal from
the I/O device. The alternative is interrupt-initiated I/O, or interrupt-driven I/O. In
this method, the CPU issues a read command to the I/O device and then goes on to do some
useful work. When the I/O device is ready, it sends an interrupt signal to the
processor.

When the CPU receives the interrupt signal from the I/O device, it checks the status. If
the status is ready, the CPU reads the word from the I/O device and writes the word into
main memory. If the operation was done successfully, the processor then goes on to the
next instruction.

Direct Memory Access (DMA)

Direct Memory Access (DMA) is a technique in which data is transferred between memory
and peripheral devices directly, without the involvement of the microprocessor (CPU).
The CPU first initializes the transfer by sending some control information to the DMA
controller.

Letting the DMA controller manage the memory buses directly improves the speed of data
transfer. A chip called the DMA controller (DMAC) manages this process. During the
transfer the CPU is idle and has no control of the memory buses; the DMA controller
takes over the buses to manage the transfer directly between the I/O device and memory.

STREAMS

A STREAM is a full-duplex communication channel between a user-level process and a
device in UNIX System V and beyond.

A STREAM consists of

1. A STREAM head that interfaces with the user process,
2. A driver end that interfaces with the device, and
3. Zero or more STREAM modules between them.

Each module contains a read queue and a write queue. Message passing is used to
communicate between queues.
