Operating System Course Notes
Every computer system must have at least one operating system to run other programs. Applications like browsers, MS Office, Notepad, games, etc., need an environment in which to run and perform their tasks. An operating system typically performs the following functions:
1. Memory Management
2. Processor Management
3. Device Management
4. File Management
5. Security
6. Control over system performance
7. Job accounting
8. Error detecting aids
9. Coordination between other software and users
Memory Management
• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are free.
• In multiprogramming, the OS decides which process will get memory, when, and how much.
• Allocates memory when a process requests it.
• De-allocates memory when a process no longer needs it or has been terminated.
Processor Management
In a multiprogramming environment, the OS decides which process gets the processor
when and for how much time. This function is called process scheduling. An Operating
System does the following activities for processor management −
• Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates processor when a process is no longer required.
Device Management
An Operating System manages device communication via their respective drivers. It does
the following activities for device management −
• Keeps track of all devices. The program responsible for this task is known as the I/O controller.
• Decides which process gets the device, when, and for how much time.
• Allocates devices in an efficient way.
• De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
Security
Preventing unauthorized access to programs and data, typically by means of passwords and similar techniques.
Control over system performance
Recording delays between a request for a service and the system's response.
Job accounting
Keeping track of the time and resources used by various jobs and users.
Error detecting aids
Production of dumps, traces, error messages, and other debugging and error-detecting aids.
In a Batch Operating System, similar jobs are grouped together into batches with the help of an operator, and these batches are executed one by one.
Advantages:
1. The overall time taken by the system to execute all the programs is reduced.
2. The Batch Operating System can be shared between multiple users.
Disadvantages:
1. Manual interventions are required between two batches.
2. The CPU utilization is low because the time taken in loading and unloading of
batches is very high as compared to execution time.
In a Multi-tasking Operating System, more than one process is executed at a time using the time-sharing concept. In a time-sharing environment, a fixed time slice called the time quantum is defined. When a process starts executing, it runs for only that amount of time, after which the other processes are each given the CPU for the same amount of time.
Advantages:
1. Since an equal time quantum is given to each process, every process gets an equal opportunity to execute.
2. The CPU is kept busy most of the time, which is desirable.
Disadvantages:
1. A process with higher priority does not get the chance to execute first, because every process is given an equal opportunity.
In a Distributed Operating System, there are multiple systems, each with its own CPU, main memory, secondary memory, and resources. These systems are connected to each other through a shared communication network. Each system can perform its tasks individually.
Advantages:
1. Since the systems are connected to each other, the failure of one system does not stop the execution of processes, because other systems can take over the work.
2. Resources are shared among the systems.
3. The load on the host computer gets distributed, which in turn increases efficiency.
Disadvantages:
1. Since data is shared among all the computers, extra effort is needed to keep the data secure and to restrict access to particular computers.
2. If there is a problem in the communication network, all communication breaks down.
An Embedded Operating System is designed to perform a single dedicated task on a particular device.
Advantages:
1. Since it is dedicated to a particular job, so it is fast.
2. Low cost.
3. These consume less memory and other resources.
Disadvantages:
1. Only one job can be performed.
2. It is difficult to upgrade and is barely scalable.
Real-time Operating Systems are used in situations where we deal with real-time data. As soon as the data arrives, the process must execute it, with no delay, i.e., no buffering delays. There are two
types of Real-time Operating System:
1. Hard Real-time: In this type, even a small delay can lead to drastic consequences. Hard real-time systems are used when the time constraint is critical.
2. Soft Real-time: Here, the time constraint is less strict, but we are still dealing with real-time data.
Advantages:
1. There is maximum utilization of devices and resources.
2. These systems are almost error-free.
Disadvantages:
1. The algorithms used in a Real-time Operating System are very complex.
2. Specific device drivers are used for responding to the interrupts as soon as
possible.
MAJOR OS COMPONENTS
An operating system is a large and complex system that can only be created by partitioning it into small pieces. Each piece should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions.
Components:
1. File Management
2. Process Management
3. I/O Device Management
4. Network Management
5. Main Memory management
6. Secondary-Storage Management
7. Security Management
8. Other Important Activities
File Management
A file is a collection of related information defined by its creator. It commonly
represents programs, both source and object forms, and data. Data files can be numeric,
alphabetic, or alphanumeric.
The operating system performs the following important activities in connection with file management:
• The creation and deletion of files and directories.
• The support of primitives for manipulating files and directories.
• The mapping of files onto secondary storage.
• The backup of files on stable (non-volatile) storage media.
Process Management
The process management component is a procedure for managing the many processes
that are running simultaneously on the operating system. Every software application program has one or more processes associated with it while it is running.
For example, when you use a browser like Google Chrome, there is a process running for that browser program. The OS also has many processes running, which perform various functions.
All these processes are managed by the process management component, which keeps the processes running efficiently, manages the memory allocated to them, and shuts them down when needed.
The execution of a process must be sequential, i.e., at any instant at most one instruction is executed on behalf of the process.
I/O Device Management
One of the important roles of an operating system is to hide the variations of specific hardware devices from the user.
Network Management
The computers in the network are connected through a communication network, which
can be configured in a number of different ways.
With the help of network management, the network can be fully or partially connected,
which helps users to design routing and connection strategies that overcome connection
and security issues.
• Distributed systems are built from computing resources that vary in size and function. They may involve microprocessors, minicomputers, and many general-purpose computer systems.
• A distributed system also offers the user access to the various resources the
network shares.
• Access to shared resources can speed up computation and improve data availability and reliability.
Main Memory Management
Main memory is a large array of storage words or bytes, each with its own address. Memory management is conducted through a sequence of reads or writes of specific memory addresses.
However, it is mainly based on the hardware design of the system. Each algorithm
requires corresponding hardware support. Main Memory offers fast storage that can be
accessed directly by the CPU. It is costly and hence has a lower storage capacity. However,
for a program to be executed, it must be in the main Memory.
Secondary-Storage Management
The most important task of a computer system is to execute programs. These programs, along with the data they access, must be in main memory during execution. Main memory is too small to store all data and programs permanently, so the computer system provides secondary storage to back up main memory.
Today modern computers use hard drives/SSD as the primary storage of both programs
and data. However, the secondary storage management also works with storage devices,
like a USB flash drive, and CD/DVD drives.
Programs such as assemblers and compilers are stored on the disk until loaded into memory, and the disk then serves as both the source and destination of their processing.
The operating system is responsible for the following activities in connection with secondary-storage management:
• Storage allocation
• Free space management
• Disk scheduling
Security Management
The various processes in an operating system need to be secured from each other's
activities.
For that purpose, various mechanisms can be used to ensure that processes that want to operate on files, memory, the CPU, and other hardware resources have proper authorization from the operating system.
MAJOR OPERATIONS OF THE OPERATING SYSTEM
An operating system is a construct that allows user application programs to interact with the system hardware. The operating system by itself does not perform useful work; it provides an environment in which different applications and programs can do useful work.
The major operations of the operating system are process management, memory
management, device management and file management. These are given in detail as
follows:
Process Management
The operating system is responsible for managing processes, i.e., assigning the processor to one process at a time. This is known as process scheduling. The different
algorithms used for process scheduling are FCFS (first come first served), SJF (shortest
job first), priority scheduling, round robin scheduling etc.
There are many scheduling queues that are used to handle processes in process
management. When the processes enter the system, they are put into the job queue. The
processes that are ready to execute in the main memory are kept in the ready queue. The
processes that are waiting for the I/O device are kept in the device queue.
Memory Management
Memory management plays an important part in operating system. It deals with memory
and the moving of processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are –
• The operating system assigns memory to the processes as required. This can be
done using best fit, first fit and worst fit algorithms.
• All memory is tracked by the operating system, i.e., it notes which parts of memory are in use by processes and which are free.
• The operating system de-allocates memory from processes as required. This may happen when a process has terminated or no longer needs the memory.
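Since the text above names the first fit, best fit, and worst fit strategies, the following is a minimal first-fit sketch in C over a table of free holes. The structure free_block and the function find_first_fit are illustrative names, not part of any real OS interface.

#include <stdio.h>

struct free_block { int start; int size; };   /* one free hole */

/* Scan the holes in order and return the index of the first one
   large enough for the request, or -1 if none fits. */
int find_first_fit(struct free_block *holes, int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request)
            return i;
    return -1;
}

int main(void) {
    struct free_block holes[] = { {0, 100}, {200, 500}, {800, 300} };
    int i = find_first_fit(holes, 3, 250);
    if (i >= 0)
        printf("request placed at address %d\n", holes[i].start);
    return 0;
}

Best fit would instead scan all holes and keep the smallest one that is large enough; worst fit would keep the largest.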
Device Management
There are many I/O devices handled by the operating system such as mouse, keyboard,
disk drive etc. There are different device drivers that can be connected to the operating
system to handle a specific device. The device controller is an interface between the
device and the device driver. The user applications can access all the I/O devices using
the device drivers, which are device specific codes.
File Management
Files are used to provide a uniform view of data storage by the operating system. All files are mapped onto physical devices that are usually non-volatile, so data is safe in the case of system failure.
The files can be accessed by the system in two ways i.e. sequential access and direct
access –
• Sequential Access
With sequential access, the information in a file is processed in order, one record after another. Most programs, such as editors and compilers, use sequential access.
• Direct Access
In direct access or relative access, the files can be accessed in random for read and write
operations. The direct access model is based on the disk model of a file, since it allows
random accesses.
OPERATING SYSTEM SERVICES
An operating system provides the following services to programs and users:
• User Interface
• Program Execution
• File system manipulation
• Input / Output Operations
• Communication
• Resource Allocation
• Error Detection
• Accounting
• Security and protection
User Interface
The user interface usually comes in three forms. The command-line interface (CLI) deals with text commands and a technique for entering those commands. In the batch interface (BI), commands and directives to control those commands are entered into files, and those files are executed. In the graphical user interface (GUI), the user interacts with the system through windows, menus, and a pointing device.
Program Execution
The operating system must have the capability to load a program into memory and
execute that program. Furthermore, the program must be able to end its execution, either
normally or abnormally / forcefully.
File System Manipulation
Programs need to read files and directories and to write and create them. The file-handling portion of the operating system also allows users to create and delete files by name and extension, search for a given file, and list file information. Some programs include permissions management for allowing or denying access to files or directories based on file ownership.
Input/Output Operations
A program that is currently executing may require I/O, which may involve a file or another I/O device. For efficiency and protection, users cannot directly control I/O devices, so the OS provides a means to perform I/O, i.e., read or write operations on any file or device.
Communication
A process may need to exchange information with another process. Processes executing on the same computer system or on different computer systems can communicate using operating system support. Communication between two processes can be done using shared memory or via message passing.
Resource Allocation
When multiple jobs run concurrently, resources must be allocated to each of them.
them. Resources can be CPU cycles, main memory storage, file storage and I/O devices.
CPU scheduling routines are used here to establish how best the CPU can be used.
Error detection
Errors may occur within CPU, memory hardware, I/O devices and in the user program.
For each type of error, the OS takes adequate action for ensuring correct and consistent
computing.
Accounting
This service of the operating system keeps track of which users use how much and what kinds of computer resources, whether for billing or simply to accumulate usage statistics.
Security and Protection
Protection involves ensuring that all access to system resources occurs in a controlled manner. To make a system secure, the user needs to authenticate himself or herself to the system before using it (usually via a login ID and password).
SYSTEM CALLS
A system call is a method for a computer program to request a service from the kernel of
the operating system on which it is running. A system call is a method of interacting with
the operating system via programs. A system call is a request from computer software to
an operating system's kernel.
A system call is a way for a user program to interface with the operating system. The program requests several services, and the OS responds by invoking a series of system calls to satisfy the request. A system call can be written in assembly language or in a high-level language. System calls fall into five major categories:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Process Control
Process control system calls are used to direct processes. Examples include creating a process, loading and executing a program, and aborting, ending, or terminating a process.
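As a hedged illustration of these process-control calls, here is a small C sketch using the POSIX calls fork(), execlp(), and wait(); the choice of "ls" as the loaded program is arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* create a new process */
    if (pid < 0) {
        perror("fork");                 /* creation failed */
        exit(1);
    } else if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL); /* load and execute a program */
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {
        wait(NULL);                     /* parent waits for the child to end */
        printf("child terminated\n");
    }
    return 0;
}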
File Management
File management system calls are used to handle files. Examples include creating and deleting files, and opening, closing, reading, and writing them.
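A minimal sketch of these calls using the POSIX interface; the file name example.txt is made up for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[64];
    int fd = open("example.txt", O_CREAT | O_RDWR, 0644); /* create/open a file */
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);                 /* write six bytes */
    lseek(fd, 0, SEEK_SET);                  /* seek back to the start */
    ssize_t n = read(fd, buf, sizeof buf);   /* read the bytes back */
    if (n > 0) write(STDOUT_FILENO, buf, n); /* echo them to stdout */
    close(fd);                               /* close the file */
    return 0;
}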
Device Management
Device management system calls are used to deal with devices. Examples include requesting and releasing a device, reading from and writing to a device, and getting device attributes.
Information Maintenance
Information maintenance system calls are used to maintain information. Examples include getting or setting system data and getting or setting the time or date.
Communication
Communication system calls are used for communication. Examples include creating and deleting communication connections and sending and receiving messages.
SYSTEM PROGRAMS
System programs provide an environment where programs can be developed and
executed. In the simplest sense, system programs also provide a bridge between the user
interface and system calls. In reality, they are much more complex. For example, a
compiler is a complex system program.
The system program serves as a part of the operating system. It traditionally lies between
the user interface and the system calls. The user view of the system is actually defined by
system programs and not system calls because that is what they interact with and system
programs are closer to the user interface.
System programs can be divided into several categories. These are given as follows:
Status Information
The status information system programs provide required data on the current or past
status of the system. This may include the system date, system time, available memory in
system, disk space, logged in users etc.
Communications
These system programs are needed for system communications such as web browsers.
Web browsers allow systems to communicate and access information from the network
as required.
File Manipulation
These system programs are used to manipulate system files. This can be done using
various commands like create, delete, copy, rename, print etc. These commands can
create files, delete files, copy the contents of one file into another, rename files, print them
etc.
Program Loading and Execution
The system programs that deal with program loading and execution make sure that programs can be loaded into memory and executed correctly. Loaders and linkers are prime examples of this type of system program.
File Modification
System programs that are used for file modification basically change the data in the file
or modify it in some other way. Text editors are a big example of file modification system
programs.
Application Programs
Application programs can perform a wide range of services as per the needs of the users.
These include programs for database systems, word processors, plotting tools,
spreadsheets, games, scientific applications etc.
OPERATING SYSTEM STRUCTURE
An operating system can be structured in the following ways:
1. Simple Structure
2. Monolithic Approach
3. Layered Approach
4. Microkernels
Simple Structure
• Operating systems such as MS-DOS and the original UNIX did not have well-
defined structures.
• There was no CPU Execution Mode (user and kernel), and so errors in
applications could cause the whole system to crash.
Monolithic Approach
• Functionality of the OS is invoked with simple function calls within the kernel,
which is one large program.
• Device drivers are loaded into the running kernel and become part of the kernel.
Layered Approach
• With the layered approach, the bottom layer is the hardware, while the highest layer is the user interface.
• This allows implementers to change the inner workings and increases modularity.
• As long as the external interface of the routines doesn't change, developers have more freedom to change the inner workings of the routines.
o The main advantage is simplicity of construction and debugging.
o The main difficulty is defining the various layers.
o The main disadvantage is that the OS tends to be less efficient than other
implementations.
Microkernels
This structures the operating system by removing all nonessential portions of the kernel
and implementing them as system and user level programs.
The main disadvantage is poor performance, due to the increased system overhead from message passing.
PROCESS
A process is a program in execution. A process is more than the program code: it includes the program counter, the process stack, the contents of the process registers, etc.
The purpose of the process stack is to store temporary data, such as subroutine
parameters, return address and temporary variables.
PROCESS SCHEDULING
The process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis of a
particular strategy.
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate
queue for each of the process states and PCBs of all processes in the same execution state
are placed in the same queue. When the state of a process is changed, its PCB is unlinked
from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues –
1. Job queue − This queue keeps all the processes in the system.
2. Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
3. Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types –
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them
into memory for execution. Process loads into the memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready.
Short-Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It carries out the change of a process from the ready state to the running state. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.
OPERATIONS ON PROCESSES
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows.
Process Creation
Processes need to be created in the system for different operations. This can be done by
the following events.
A process may be created by another process using fork(). The creating process is called
the parent process and the created process is the child process. A child process can have
only one parent but a parent process may have many children. Both the parent and child
processes have the same memory image, open files, and environment strings. However,
they have distinct address spaces.
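The following short C sketch illustrates the last point: after fork(), parent and child start from the same memory image but have distinct address spaces, so a change made by the child is not seen by the parent.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int x = 10;                   /* copied into the child's image */
    pid_t pid = fork();
    if (pid == 0) {
        x = 99;                   /* modifies only the child's copy */
        printf("child:  x = %d\n", x);
    } else {
        wait(NULL);
        printf("parent: x = %d\n", x);  /* still 10: distinct address spaces */
    }
    return 0;
}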
Process Preemption
A running process may be preempted when its time quantum expires or when a higher-priority process becomes ready. The preempted process is moved back to the ready state, and the scheduler selects the next process to run. Preemption ensures that all processes get some CPU time.
Process Blocking
A process is blocked when it is waiting for some event to occur, such as the completion of an I/O operation, which is carried out by the device and does not require the processor. After the event completes, the process goes back to the ready state.
Process Termination
After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer relevant.
The child process sends its status information to the parent process before it terminates.
Also, when a parent process is terminated, its child processes are terminated as well, since a child process cannot run once its parent has been terminated.
Semaphore
A semaphore is an integer variable, accessed only through two atomic operations (wait and signal), that is used to coordinate the access of multiple processes to shared resources.
Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a
time. This is useful for synchronization and also prevents race conditions.
Barrier
A barrier does not allow individual processes to proceed until all the processes reach it.
Many parallel languages and collective routines impose barriers.
Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known as busy waiting because the process
is not doing any useful operation even though it is active.
Pipes
A pipe is widely used for communication between two related processes. It is a half-duplex method, so one process communicates with the other in one direction only. To achieve full-duplex communication, a second pipe is needed.
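A minimal sketch of a half-duplex pipe between a parent and its child, using the POSIX pipe(), fork(), read(), and write() calls; the message text is arbitrary.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }
    if (fork() == 0) {                    /* child: the reading end */
        char buf[32];
        close(fd[1]);                     /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
    } else {                              /* parent: the writing end */
        close(fd[0]);                     /* close the unused read end */
        write(fd[1], "hello", 5);         /* data flows one way only */
        close(fd[1]);
    }
    return 0;
}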
Message Passing
In message passing, processes communicate with each other without using any shared memory, by exchanging messages through send and receive operations provided by the operating system.
Message Queues
A message queue is a linked list of messages stored within the kernel. It is identified by a
message queue identifier. This method offers communication between single or multiple
processes with full-duplex capacity.
Direct Communication
In this type of inter-process communication, processes must name each other explicitly.
In this method, a link is established between one pair of communicating processes, and
between each pair, only one link exists.
Indirect Communication
In indirect communication, a link is established only when processes share a common mailbox. Each pair of processes may share several communication links, and a link can be associated with many processes. The link may be bidirectional or unidirectional.
Shared Memory
Shared memory is a region of memory shared between two or more processes. Because all the processes can access the region, accesses must be synchronized so that the processes are protected from each other.
FIFO
A FIFO, or named pipe, is similar to a pipe but has a name in the file system, so it can be used for communication between unrelated processes.
THREADS
A process can contain more than one thread. Each thread of the same process has its own program counter and its own stack of activation records and control blocks. A thread is often referred to as a lightweight process.
Types of Threads
Kernel-level threads are recognized and managed by the operating system. There is a thread control block for each thread and a process control block for each process in the system. Kernel-level threads are implemented by the operating system.
The kernel knows about all the threads and manages them. The kernel offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads, and context-switch time is longer for kernel threads. However, if a kernel thread performs a blocking operation, another thread of the same process can continue execution.
Benefits of Threads
1. Enhanced throughput of the system: When the process is split into many
threads, and each thread is treated as a job, the number of jobs done in the unit
time increases. That is why the throughput of the system also increases.
2. Effective Utilization of Multiprocessor system: When you have more than one
thread in one process, you can schedule more than one thread in more than one
processor.
3. Faster context switch: The context switching period between threads is less than
the process context switching. The process context switch means more overhead
for the CPU.
4. Responsiveness: When a process is split into several threads, and one thread completes its execution, its output can be returned immediately while the other threads continue.
5. Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared between threads; each thread has its own stack and registers.
MULTITHREADING MODELS
Multithreading allows the application to divide its task into individual threads. In multi-
threads, the same process or task can be done by the number of threads, or we can say
that there is more than one thread to perform the task in multithreading. With the use of
multithreading, multitasking can be achieved.
The many-to-one model maps many user-level threads to one kernel thread. This type of relationship facilitates an effective context-switching environment and is easily implemented even on simple kernels with no thread support.
The one-to-one model maps a single user-level thread to a single kernel-level thread. This
type of relationship facilitates the running of multiple threads in parallel. However, this
benefit comes with its drawback.
The generation of every new user thread must include creating a corresponding kernel
thread causing an overhead, which can hinder the performance of the parent process.
Windows series and Linux operating systems try to tackle this problem by limiting the
growth of the thread count.
In the many-to-many model, there are several user-level threads and several kernel-level threads. The number of kernel threads created depends on the particular application. The developer can create any number of threads at both levels, but the two numbers need not be the same. The many-to-many model is a compromise between the other two models. In this model, if any thread makes a blocking system call, the kernel can schedule another thread for execution.
Also, with the introduction of multiple threads, complexity is not present as in the
previous models. Though this model allows the creation of multiple kernel threads, true
concurrency cannot be achieved by this model. This is because the kernel can schedule
only one process at a time.
THREAD LIBRARIES
A thread library provides a collection of functions for creating and controlling threads. Programmers can access these thread libraries using an application programming interface (API). A thread library can be implemented at the user level or at the kernel level.
If the thread library is implemented in user space, then the code and data of the thread library reside in user space. In this case, invoking any function from the thread library is a simple function call, not a system call.
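As a sketch of a user-visible thread API, here is a minimal Pthreads example (compile with -pthread); the worker function and the thread count are illustrative.

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[3];
    int ids[3] = {0, 1, 2};
    for (int i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]); /* create a thread */
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);                     /* wait for it to finish */
    return 0;
}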
THREADING ISSUES
The fork() and exec() System Calls
If one thread in a program calls fork(), is the complete process copied, or is the new process single-threaded? The answer depends on the system: if the new process calls exec() immediately, there is no need to copy all the other threads; if it does not, the whole process, including all its threads, should be copied.
Signal Handling
Whenever a multithreaded process receives a signal, to which thread should that signal be delivered? There are four main options for signal delivery:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
Thread Cancellation
Threads that are no longer required can be cancelled by another thread using one of two techniques:
1. Asynchronous cancellation
2. Deferred cancellation
Asynchronous Cancellation
This means cancelling the thread immediately. Reclaiming resources and handling inter-thread data transfer may be challenging with asynchronous cancellation.
Deferred Cancellation
In this method, a flag is set indicating that the thread should cancel itself when feasible. It is up to the cancelled thread to check this flag periodically and exit cleanly when it sees the flag set.
Thread-Specific Data
The benefit of using threads in the first place is that most data is shared among the threads; sometimes, however, threads also need thread-specific data. Major thread libraries such as pThreads, Win32, and Java provide support for thread-specific data, also called thread-local storage (TLS).
Scheduler Activation
Scheduler activations provide a communication mechanism between the kernel and the thread library: the kernel notifies the application (via upcalls) about events such as a thread blocking, so that the application can maintain the correct number of kernel threads.
SCHEDULING CRITERIA
There are several different criteria to consider when trying to select the "best" scheduling
algorithm for a particular situation and environment, including:
1. CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste zero CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
2. Throughput - Number of processes completed per unit time. May range from 10/second to 1/hour depending on the specific processes.
3. Turnaround time - Time required for a particular process to complete, from submission time to completion.
4. Waiting time - How much time processes spend in the ready queue waiting their
turn to get on the CPU.
5. Load average - The average number of processes sitting in the ready queue waiting their turn to get into the CPU. Reported in 1-minute, 5-minute, and 15-minute averages by "uptime" and "who".
6. Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.
In general one wants to optimize the average value of a criterion (maximize CPU utilization and throughput, and minimize all the others). However, sometimes one wants to do something different, such as minimize the maximum response time.
Sometimes it is more desirable to minimize the variance of a criterion than its actual value, i.e., users are more accepting of a consistent, predictable system than an inconsistent one, even if it is a little slower.
SCHEDULING ALGORITHMS
To decide which process to execute first and which last so as to achieve maximum CPU utilization, computer scientists have defined the following algorithms.
First Come First Served (FCFS)
FCFS stands for First Come, First Served. It is the simplest CPU scheduling algorithm. In this algorithm, the process that requests the CPU first is allocated the CPU first. This scheduling method can be managed with a FIFO queue.
As the process enters the ready queue, its PCB (Process Control Block) is linked with the
tail of the queue. So, when CPU becomes free, it should be assigned to the process at the
beginning of the queue.
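A small C sketch of FCFS behavior: with a FIFO queue, each process waits for the total burst time of all processes ahead of it. The burst times used here are illustrative.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};          /* CPU bursts in arrival order */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total += wait;
        wait += burst[i];              /* the next process waits longer */
    }
    printf("average waiting time = %.2f\n", (float)total / n);
    return 0;
}

Running the arrival order P1 (24), P2 (3), P3 (3) gives waits of 0, 24, and 27, an average of 17; scheduling the short jobs first would cut this sharply, which is the idea behind SJF below.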
Shortest Remaining Time (SRT)
The full form of SRT is Shortest Remaining Time. It is also known as preemptive SJF scheduling. In this method, the CPU is allocated to the job that is closest to its completion. This method prevents a newer ready-state process from holding up the completion of an older process.
• This method is mostly applied in batch environments where short jobs are
required to be given preference.
• This is not an ideal method to implement it in a shared system where the required
CPU time is unknown.
• Associate with each process the length of its next CPU burst; the operating system uses these lengths to schedule the process with the shortest next burst first.
Round-Robin Scheduling
Round robin is one of the oldest and simplest scheduling algorithms. The name comes from the round-robin principle, where each person gets an equal share of something in turn. It is mostly used for scheduling in multitasking systems. This algorithm provides starvation-free execution of processes.
Shortest Job First (SJF)
SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest execution time is selected for execution next. This scheduling method can be preemptive or non-preemptive. It significantly reduces the average waiting time for other processes awaiting execution.
Multilevel Queue Scheduling
This algorithm separates the ready queue into several separate queues. In this method,
processes are assigned to a queue based on a specific property of the process, like the
process priority, size of the memory, etc.
However, this is not an independent scheduling OS algorithm as it needs to use other
types of algorithms in order to schedule the jobs.
Multiple-Processor Scheduling
In multiple-processor scheduling, the CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices, so the system is said to be tightly coupled. Such systems are used to process bulk amounts of data and are mainly used in areas such as satellite systems and weather forecasting.
There are cases when the processors are identical, i.e., homogenous, in terms of their
functionality in multiple-processor scheduling. We can use any processor available to run
any process in the queue.
Real-Time Scheduling
Real-time systems are systems that carry out real-time tasks. These tasks need to be
performed immediately with a certain degree of urgency. In particular, these tasks are
related to control of certain events (or) reacting to them. Real-time tasks can be classified
as hard real-time tasks and soft real-time tasks.
A hard real-time task must be performed by a specified time; otherwise, huge losses could occur. In soft real-time tasks, a specified deadline can occasionally be missed, because the task can be rescheduled or completed after the specified time.
PROCESS SYNCHRONIZATION
On the basis of synchronization, processes are categorized as one of the following two
types:
Independent Process: Execution of one process does not affect the execution of other
processes.
Cooperative Process : Execution of one process affects the execution of other processes.
The process synchronization problem arises with cooperative processes because resources are shared among them.
A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables that need to be synchronized to maintain the consistency of data variables. In the entry section, a process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
1. Mutual Exclusion: If one process is executing in its critical section, then no other process is allowed to execute in its critical section at the same time.
2. Progress : If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will enter
in the critical section next, and the selection can not be postponed indefinitely.
3. Bounded Waiting : A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
SYNCHRONIZATION HARDWARE
1. Swap
2. Test() and Set()
3. Unlock and lock
Test and Set
In Test and Set, the shared variable is a lock that is initialized to false. The TestAndSet instruction returns the old value of the lock and sets the lock to true, as a single atomic step. Mutual exclusion is ensured: while the lock is true, other processes cannot enter and keep looping.
However, after one process finishes, any other process may enter, since no queue is maintained.
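C11 exposes exactly this primitive as atomic_flag_test_and_set, so a hedged sketch of the entry and exit sections can be written as follows.

#include <stdatomic.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* the shared lock, initially false */

void enter_critical(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                               /* busy-wait: old value was true */
}

void leave_critical(void) {
    atomic_flag_clear(&lock);           /* set the lock back to false */
}

int main(void) {
    enter_critical();
    printf("inside the critical section\n");
    leave_critical();
    return 0;
}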
Swap
In this algorithm, instead of directly setting the lock to true, the key is first set to true and
then swapped with the lock.
Similar to Test and Set, when no process is in the critical section, the lock becomes false, allowing another process to enter. Hence mutual exclusion and progress are ensured, but bounded waiting is not, for the same reason.
Unlock and Lock
In addition to Test and Set, this algorithm uses waiting[i] to check whether there are any
processes in the wait. The processes are set in the ready queue with respect to the critical
section.
Unlike the previous algorithms, it doesn’t set the lock to false and checks the ready queue
for any waiting processes. If there are no processes waiting in the ready queue, the lock
is then set to false and any process can enter.
MUTEX LOCKS
It is a special type of binary semaphore used for controlling access to the shared resource.
It includes a priority inheritance mechanism to avoid extended priority inversion
problems. It allows current higher priority tasks to be kept in the blocked state for the
shortest time possible. However, priority inheritance does not correct priority inversion
but only minimizes its effect.
Advantages of Mutex
1. A mutex is just a simple lock, obtained before entering the critical section and released afterwards.
2. Since only one thread is in its critical section at any given time, there are no race
conditions, and data always remain consistent.
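A minimal sketch of mutex usage with a plain POSIX mutex (which, unless configured with the PTHREAD_PRIO_INHERIT attribute, does not perform the priority inheritance described above); compile with -pthread.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;                        /* the shared resource */

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);          /* acquire before the critical section */
        counter++;                       /* critical section */
        pthread_mutex_unlock(&m);        /* release afterwards */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000: no lost updates */
    return 0;
}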
SEMAPHORE
A semaphore S is an integer variable that is accessed only through two atomic operations:
1. Wait: The wait operation decrements the value of its argument S if S is positive. If S is zero or negative, no decrement is performed and the process waits.
2. Signal for process synchronization: The signal operation increments the value of its argument S.
SEMAPHORE USAGES
In the case of a single buffer, we can separate a 4 KB buffer into four 1 KB buffers. A semaphore can be associated with these four buffers, allowing consumers and producers to work on different buffers at the same time.
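A hedged sketch of the four-buffer idea with a POSIX counting semaphore initialized to 4, on systems that support unnamed POSIX semaphores (e.g., Linux): each holder of the semaphore owns one of the four 1 KB buffers. The buffer bookkeeping itself is omitted.

#include <semaphore.h>
#include <stdio.h>

int main(void) {
    sem_t buffers;
    sem_init(&buffers, 0, 4);   /* four 1 KB buffers initially free */
    sem_wait(&buffers);         /* acquire a buffer: count 4 -> 3 */
    /* ... fill or drain the acquired buffer here ... */
    sem_post(&buffers);         /* release it: count 3 -> 4 */
    sem_destroy(&buffers);
    printf("buffer cycle complete\n");
    return 0;
}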
Advantages of Semaphore
• Semaphores strictly enforce mutual exclusion, allowing only one process into the critical section at a time.
• When implemented with a waiting queue, semaphores avoid busy waiting: a waiting process is blocked and does not consume processor time.
MONITORS
The monitor is one of the ways to achieve Process synchronization. The monitor is
supported by programming languages to achieve mutual exclusion between processes.
For example, Java synchronized methods. Java provides the wait() and notify() constructs.
The processes running outside the monitor can’t access the internal variable of the
monitor but can call procedures of the monitor.
Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less error prone
than using techniques such as semaphore.
There exist some algorithms to solve the Dining Philosophers Problem, but they may lead to a deadlock situation. Also, a deadlock-free solution is not necessarily starvation-free. Semaphores can result in deadlock due to programming errors. Monitors alone are not sufficient to solve this; we need monitors with condition variables.
To code this solution, we need to distinguish among three states in which we may find a philosopher. For this purpose, we introduce the following data structure:
enum { THINKING, HUNGRY, EATING } state[5];
Since a signaling process must wait until the resumed process either leaves or waits, an additional semaphore, next, is introduced, initialized to 0. The signaling processes can use next to suspend themselves. An integer variable next_count is also provided to count the number of processes suspended on next. Thus, each external function F is replaced by

wait(mutex);
    ...
    body of F
    ...
if (next_count > 0)
    signal(next);
else
    signal(mutex);
MEMORY MANAGEMENT
BACKGROUND
Memory management is the functionality of an operating system which handles or
manages primary memory and moves processes back and forth between main memory
and disk during execution. Memory management keeps track of each and every memory
location, regardless of whether it is allocated to some process or free.
It checks how much memory is to be allocated to processes. It decides which process will
get memory at what time. It tracks whenever some memory gets freed or unallocated and
correspondingly it updates the status.
SWAPPING
When a process is executed, it must reside in main memory. Swapping is the act of temporarily moving a process from main memory to secondary memory, which is slow compared to main memory. Swapping allows more processes to be run than can fit into memory at one time.
The major part of swapping time is transfer time, and the total transfer time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process.
After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues its execution.
CONTIGUOUS MEMORY ALLOCATION
The memory is usually divided into two partitions: one for the resident operating system
and one for the user processes. We normally need several user processes to reside in
memory simultaneously. Therefore, we need to consider how to allocate available
memory to the processes that are in the input queue waiting to be brought into memory.
In contiguous memory allocation, each process is contained in a single contiguous segment of memory.
PAGING
Paging is a memory-management scheme that allows the physical address space of a process to be non-contiguous. Main memory is divided into fixed-size blocks called frames, and logical memory is divided into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames.
SEGMENTATION
In segmentation, memory is divided into parts known as segments. The details concerning each segment are held in a table known as the segment table. The segment table contains two main pieces of information about a segment: the Base, which is the base address of the segment, and the Limit, which is the length of the segment.
In segmentation, the CPU generates a logical address that contains a segment number and a segment offset. If the segment offset is less than the limit, the address is valid; otherwise, a trap is raised because the address is invalid.
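A worked sketch of the limit check described above; the base/limit values in the table are illustrative.

#include <stdio.h>

struct segment { int base; int limit; };

/* Translate (segment, offset) to a physical address, or -1 on a trap. */
int translate(const struct segment *table, int seg, int offset) {
    if (offset < table[seg].limit)
        return table[seg].base + offset;   /* valid: base + offset */
    return -1;                             /* offset beyond segment limit */
}

int main(void) {
    struct segment table[] = { {1400, 1000}, {6300, 400} };
    printf("%d\n", translate(table, 1, 53));   /* 6353: valid */
    printf("%d\n", translate(table, 1, 500));  /* -1: invalid offset */
    return 0;
}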
VIRTUAL MEMORY
The addresses a program may use to reference memory are distinguished from the
addresses the memory system uses to identify physical storage sites, and program
generated addresses are translated automatically to the corresponding machine
addresses.
DEMAND PAGING
The process of loading the page into memory on demand (whenever page fault occurs)
is known as demand paging.
1. If the CPU tries to reference a page that is currently not available in main memory, it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to
proceed the OS must bring the required page into the memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from the logical address space into the physical address space. Page replacement algorithms are used to decide which page to replace in the physical address space.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will
place the process back into ready state.
Advantages:
1. More processes may be maintained in the main memory: Because we are going to
load only some of the pages of any particular process, there is room for more
processes. This leads to more efficient utilization of the processor because it is
more likely that at least one of the more numerous processes will be in the ready
state at any particular time.
2. Process may be larger than all of main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can
be executed because of demand paging. The OS itself loads pages of a process in
main memory as required.
COPY ON WRITE
Copy on Write, or simply COW, is a resource-management technique. One of its main uses is in the implementation of the fork system call, in which the parent and child processes initially share the same virtual memory pages.
In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process, called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both processes initially share the same pages in memory, and these shared pages are marked as copy-on-write. If either process tries to modify a shared page, only then is a copy of that page created, and the modification is made on the copy, leaving the other process unaffected.
Suppose a process P creates a new process Q, and process P then modifies page 3. Only page 3 is copied and modified by P; the remaining pages stay shared between the two processes.
PAGE REPLACEMENT
Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out and write to disk when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than required.
When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, which requires waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about accessing the pages
provided by hardware, and tries to select which pages should be replaced to minimize
the total number of page misses, while balancing it with the costs of primary storage and
processor time of the algorithm itself. There are many different page replacement
algorithms. We evaluate an algorithm by running it on a particular string of memory
reference and computing the number of page faults.
First In First Out (FIFO)
• The oldest page in main memory is the one selected for replacement.
• Easy to implement: keep a list, replace pages from the tail, and add new pages at the head.
Least Recently Used (LRU)
• The page that has not been used for the longest time in main memory is the one selected for replacement.
• Implemented by keeping a list and replacing pages by looking back into time.
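A small C sketch of FIFO replacement counting page faults over an illustrative reference string with three frames.

#include <stdio.h>

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};
    int n = 8, frames[3] = {-1, -1, -1};
    int next = 0, faults = 0;              /* next: index of the oldest frame */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (!hit) {
            frames[next] = refs[i];        /* evict the oldest page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);   /* 7 for this string */
    return 0;
}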
ALLOCATION OF FRAMES
1. The minimum number of frames per process is defined by the architecture: an instruction must be able to complete, so all the pages it references must be in memory at the same time.
2. If an instruction (and its operands) spans a page boundary, then multiple pages could be needed just for the instruction fetch.
The worst case involves indirect addressing, particularly where multiple levels of indirect
addressing are allowed. Left unchecked, a pointer to a pointer to a pointer to a pointer to
a . . . could theoretically touch every page in the virtual address space in a single machine
instruction, requiring every virtual page be loaded into physical memory simultaneously.
For this reason architectures place a limit (say, 16) on the number of levels of indirection
allowed in an instruction, which is enforced with a counter initialized to the limit and
decremented with every level of indirection in an instruction - If the counter reaches zero,
then an "excessive indirection" trap occurs. This example would still require a minimum
frame allocation of 17 per process.
THRASHING
Thrashing in computing is an issue caused when virtual memory is in use. It occurs when
the virtual memory of a computer is rapidly exchanging data for data on hard disk, to the
exclusion of most application-level processing. As the main memory gets filled, additional
pages need to be swapped in and out of virtual memory.
The swapping causes a very high rate of hard disk access. Thrashing can continue for a long duration until the underlying issue is addressed, and it can degrade system performance to the point where almost no useful work gets done.
I/O HARDWARE
One of the important jobs of an Operating System is to manage various I/O devices
including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-
mapped screen, LED, Analog-to-digital converter, On/off switch, network connections,
audio I/O, printers etc.
I/O devices can be divided into two categories:
Block devices − A block device is one with which the driver communicates by sending
entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc.
Character devices − A character device is one with which the driver communicates by
sending and receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards, etc.
Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a device driver. I/O
units (Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an
electronic component where electronic component is called the device controller.
Any device connected to the computer is connected by a plug and socket, and the socket
is connected to a device controller. Following is a model for connecting the CPU, memory,
controllers, and I/O devices where CPU and device controllers all use a common bus for
communication.
I/O Interface
An interface is needed whenever the CPU wants to communicate with I/O devices. The interface is used to interpret the address generated by the CPU. The hardware used to share information between the CPU and the I/O devices is therefore called the I/O interface.
One benefit of the I/O interface is that it allows a file to be opened without any device-specific information about where it resides. New devices can also be added to the computer system without disrupting the operating system.
The interface also abstracts the differences among I/O devices by identifying a few general kinds. Each general kind is accessed through a standardized set of functions, i.e., an interface. Devices vary along the following dimensions.
Character-stream or Block
Both character-stream and block devices transfer data in the form of bytes. The difference is that a character-stream device transfers bytes one by one in a linear fashion, whereas a block device transfers an entire block of bytes as a single unit.
Sequential or Random-Access
A sequential device transfers data in a fixed order determined by the device, whereas a random-access device allows the user to instruct the device to seek to any of its data storage locations.
Synchronous or Asynchronous
A synchronous device performs data transfers with predictable response times, whereas an asynchronous device exhibits irregular or unpredictable response times.
Sharable or Dedicated
A sharable device can be used concurrently by several processes or threads, while a dedicated device cannot.
Speed of Operation
Device speeds range from a few bytes per second to a few gigabytes per second.
Read-Write, Read Only, or Write Only
Different devices perform different operations: some support both input and output, while others support only one data-transfer direction, either input or output.
KERNEL I/O SUBSYSTEM
The Kernel I/O Subsystem is responsible for providing many services related to I/O. Following are some of the services provided.
Scheduling − Kernel schedules a set of I/O requests to determine a good order in which
to execute them. When an application issues a blocking I/O system call, the request is
placed on the queue for that device. The Kernel I/O scheduler rearranges the order of the
queue to improve the overall system efficiency and the average response time
experienced by the applications.
Buffering − The Kernel I/O Subsystem maintains a memory area known as a buffer that stores data while it is transferred between two devices or between a device and an application. Buffering is done to cope with a speed mismatch between the
producer and consumer of a data stream or to adapt between devices that have different
data transfer sizes.
Caching − The kernel maintains cache memory, a region of fast memory that holds
copies of data. Access to the cached copy is more efficient than access to the original.
Spooling and Device Reservation − A spool is a buffer that holds output for a device,
such as a printer, that cannot accept interleaved data streams. The spooling system copies
the queued spool files to the printer one at a time. In some operating systems, spooling is managed by a system daemon process; in others, it is handled by an in-kernel thread.
Error Handling − An operating system that uses protected memory can guard against
many kinds of hardware and application errors.
Data transfers between the CPU and I/O devices may be handled by a variety of techniques. Some techniques use the CPU as an intermediate path; others transfer the data directly to and from the memory unit. Data transfer to and from peripherals is called input-output communication. There are three modes:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct Memory Access (DMA)
Programmed I/O
In the programmed I/O method, the I/O device does not have direct memory access. Data transfer from an I/O device to memory requires the CPU to execute a program or several instructions, hence the name programmed I/O.
In this method, the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This is a time-consuming process for the CPU. The programmed I/O method is particularly useful in small, low-speed computers. The CPU sends the 'Read' command to the I/O device and waits in the program loop until it receives a response from the I/O device.
Interrupt-initiated I/O
The problem in programmed I/O is that the CPU has to wait for the ready signal from the
I/O device. The alternative method for this problem is Interrupt-initiated
I/O or interrupt-driven I/O. In this method, the CPU issues a read command to the I/O device and then goes on to do other useful work. When the I/O device is ready, it sends an interrupt signal to the processor.
When the CPU receives the interrupt signal from the I/O device, it checks the status. If the status is ready, the CPU reads the word from the I/O device and writes it into main memory. If the operation completed successfully, the processor goes on to the next instruction.
Direct Memory Access (DMA)
Direct Memory Access (DMA) is a process in which data transfers between memory and peripheral devices take place directly, without the involvement of the microprocessor (CPU). The CPU first initializes the transfer by sending the necessary information to the DMA controller.
Letting the DMA controller manage the memory buses directly improves the speed of data transfer. A chip called the DMA Controller (DMAC) manages this process. During the transfer, the CPU is idle and has no control of the memory buses; the DMA controller takes over the buses to manage the transfer directly between the I/O device and memory.
STREAMS
A STREAM consists of a STREAM head that interfaces with the user process, a driver end that controls the device, and zero or more STREAM modules between them.