Goals of The Operating System: Primary Goal
• Memory Management
• Processor Management
• Device Management
• File Management
• Network Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and users
GOALS
1. Convenience
An Operating System's first and primary goal is to provide a friendly and
convenient environment to the user. Using an operating system is optional, but
without one things become much harder: the user would have to perform all the
process scheduling and convert user commands into machine language so that the
system can perform tasks. So we use an operating system to act as a bridge between
us and the computer hardware. We only have to give commands to the system, and
the OS takes those instructions and does the rest of the work. Because of this, an
operating system should be convenient for the user to use and operate.
2. Efficiency
The second important goal of an operating system is efficiency. An operating
system should utilize all the resources efficiently. Resources and programs should
be managed so that no resource is kept idle and no memory is wasted.
3. Portability and Reliability
The operating system can work on different machines with different processors
and memory configurations, which makes it portable and more reliable.
The operating system can also protect itself and the user from accidental damage
caused by a user program.
4. Hardware Abstraction
The operating system conceals, and effectively controls, all the functions and
resources of the computer. The user can give commands and access any function
or resource of the computer without facing any difficulty. In this way, the
operating system acts as an intermediary between the user and the computer hardware.
5. Security
An operating system provides safety and security of data between the user and
the hardware. The OS enables multiple users to securely share a computer (system),
keeping their files, processes, memory, and devices separate.
OS classification: single user, multiuser
Real-time operating system
Advantages
• It works very fast.
• It is time saving, as it need not be loaded from memory.
• Since it is very small, it occupies less space in memory.
Single-User Operating System
A single-user operating system, also known as a single-tasking operating system, is
designed especially for home computers. A single user can access the computer at a
particular time. The single-user operating system grants access to the personal
computer to one user at a time, though it can sometimes support multiple profiles.
It can also be used in office work and other environments.
This operating system therefore does not require support for memory protection, file
protection, or a security system. Computers based on this operating system have a
single processor and execute only a single program at a time, providing all resources,
such as the CPU and I/O devices, to a single user at a time.
It is the operating system for computers that support only one user; one user cannot
interact with another working user. The core of a single-user operating system is a
single kernel image that runs at a time, i.e. there is no facility to run more than one
kernel image.
Disadvantages:
Parallel System vs. Distributed System
1. A parallel system works with the simultaneous use of multiple computer resources,
which can include a single computer with multiple processors. A distributed system
consists of a number of computers that are connected and managed so that they share
the job-processing load among the various computers distributed over the network.
2. In a parallel system, tasks are performed with a more speedy process; in a
distributed system, tasks are performed with a less speedy process.
Process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of
another process on the basis of a particular strategy.
Categories of Scheduling
1. Non-preemptive: Here a resource cannot be taken from a process until the
process completes execution. The switching of resources occurs only when the
running process terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates resources to a process for a fixed amount of
time; the running process can be interrupted and moved to the ready state so that
the resource can be given to another, typically higher-priority, process. (A small
non-preemptive FCFS sketch follows this list.)
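To make the non-preemptive case concrete, here is a minimal C sketch of FCFS (first come, first served) scheduling; the burst times are made-up example values and all processes are assumed to arrive at time 0.

/* Minimal sketch of non-preemptive FCFS scheduling: burst times are
 * hypothetical example values; all processes are assumed to arrive at time 0. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};               /* CPU burst time of each process */
    int n = sizeof(burst) / sizeof(burst[0]);
    int waiting = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i]; /* finish time of process i */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_turnaround += turnaround;
        waiting += burst[i];                 /* next process starts after this one */
    }
    printf("Average waiting time: %.2f\n", (double)total_wait / n);
    printf("Average turnaround time: %.2f\n", (double)total_turnaround / n);
    return 0;
}

With these example bursts the average waiting time is 17, which illustrates why the order of service matters even in the simplest non-preemptive scheme.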
Schedulers
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-Term vs. Short-Term vs. Medium-Term Scheduler
1. The long-term scheduler is a job scheduler, the short-term scheduler is a CPU
scheduler, and the medium-term scheduler is a process-swapping scheduler.
2. In speed, the short-term scheduler is the fastest of the three, the long-term
scheduler is slower than the short-term scheduler, and the medium-term scheduler
lies in between the two.
3. The long-term scheduler controls the degree of multiprogramming, the short-term
scheduler provides lesser control over the degree of multiprogramming, and the
medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in a time-sharing system,
the short-term scheduler is also minimal in a time-sharing system, and the
medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into
memory for execution, the short-term scheduler selects those processes which are
ready to execute, and the medium-term scheduler can re-introduce a process into
memory so that its execution can be continued.
The message size can be fixed or variable. If it is fixed, it is easy for the OS
designer but complicated for the programmer; if it is variable, it is easy for the
programmer but complicated for the OS designer. A standard message has two parts:
a header and a body.
The header is used for storing the message type, destination id, source id,
message length, and control information. The control information includes things
like what to do if the sender runs out of buffer space, a sequence number, and a
priority. Generally, messages are sent in FIFO order.
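As a rough illustration of the header/body split described above, here is a small C sketch of a fixed-size message; the field names and sizes are assumptions for illustration, not the format of any particular OS.

/* Illustrative fixed-size message layout with a header and a body
 * (field names and sizes are hypothetical, not taken from a specific OS). */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MSG_BODY_SIZE 64

struct msg_header {
    uint32_t type;        /* message type */
    uint32_t source_id;   /* sender process id */
    uint32_t dest_id;     /* receiver process id */
    uint32_t length;      /* number of valid bytes in the body */
    uint32_t seq_no;      /* sequence number (control information) */
    uint32_t priority;    /* priority (control information) */
};

struct message {
    struct msg_header header;
    char body[MSG_BODY_SIZE];
};

int main(void) {
    struct message m = { { .type = 1, .source_id = 10, .dest_id = 20,
                           .seq_no = 1, .priority = 0 } };
    const char *text = "hello";
    m.header.length = (uint32_t)strlen(text);
    memcpy(m.body, text, m.header.length);
    printf("msg type=%u from=%u to=%u len=%u\n",
           (unsigned)m.header.type, (unsigned)m.header.source_id,
           (unsigned)m.header.dest_id, (unsigned)m.header.length);
    return 0;
}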
What is a Critical Section in an OS?
A critical section is a segment of code in which a process accesses shared resources
(shared variables, files, and so on). To keep the shared data consistent, at most one
process should execute in its critical section at a time, and this is exactly what
synchronization schemes are designed to enforce.
Synchronization Problems
These problems are used for testing nearly every newly proposed synchronization
scheme. The following problems of synchronization are considered as classical
problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem
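As an illustration of the first of these classical problems, here is a minimal C sketch of the bounded-buffer (producer-consumer) problem using POSIX counting semaphores and a mutex; the buffer size and item count are arbitrary example values.

/* A small sketch of the bounded-buffer (producer-consumer) problem using
 * counting semaphores and a mutex; buffer size and item count are examples. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 5
#define N_ITEMS  10

static int buffer[BUF_SIZE];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots;     /* counting semaphores */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&empty_slots);           /* wait for a free slot */
        pthread_mutex_lock(&lock);
        buffer[in] = i;
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);            /* signal a filled slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full_slots);            /* wait for a filled slot */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);           /* signal a free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUF_SIZE);  /* BUF_SIZE empty slots initially */
    sem_init(&full_slots, 0, 0);          /* no filled slots initially */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

The two semaphores count free and filled slots so the producer blocks when the buffer is full and the consumer blocks when it is empty, while the mutex protects the buffer indices themselves.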
Sec-C
Deadlock in Operating System
A process in an operating system uses a resource in the following way:
1. Requests the resource
2. Uses the resource
3. Releases the resource
Prevention:
The idea is to never let the system enter a deadlock state. The system makes sure
that the four necessary conditions for deadlock (mutual exclusion, hold and wait,
no preemption, and circular wait) cannot all hold at once. These techniques are
very costly, so we use them in cases where our priority is keeping the system
deadlock-free.
Prevention is done by negating one of these necessary conditions, so it can be
done in four different ways:
1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Eliminate circular wait
Avoidance:
Avoidance is forward-looking. To use the avoidance strategy we have to make an
assumption: all information about the resources a process will need must be known
to us before the process executes. We use the Banker's algorithm (given by
Dijkstra) to avoid deadlock.
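To make the avoidance idea concrete, here is a small C sketch of the safety check at the heart of the Banker's algorithm; the Allocation, Max, and Available values below are example data, not taken from these notes.

/* A minimal sketch of the safety check used by the Banker's algorithm;
 * the allocation, maximum, and available matrices are example data. */
#include <stdio.h>
#include <stdbool.h>

#define P 5   /* number of processes */
#define R 3   /* number of resource types */

int main(void) {
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail[R]    = {3,3,2};
    int need[P][R];
    bool finished[P] = {false};
    int safe_seq[P], count = 0;

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */

    while (count < P) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = false; break; }
            if (can_run) {                           /* pretend Pi runs to completion */
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];         /* it releases its resources */
                finished[i] = true;
                safe_seq[count++] = i;
                progressed = true;
            }
        }
        if (!progressed) { printf("System is NOT in a safe state\n"); return 0; }
    }
    printf("Safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i]);
    printf("\n");
    return 0;
}

A request is granted only if pretending to grant it still leaves the system in a safe state, i.e. the loop above can still finish every process.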
Detection: In this approach, the OS does not apply any mechanism to avoid or
prevent deadlocks; it accepts that a deadlock may well occur. To get rid of
deadlocks, the OS periodically checks the system for a deadlock, and if it finds
one it recovers the system using some recovery technique.
The main task of the OS here is detecting deadlocks, which it can do with the help
of the resource allocation graph.
For single-instance resource types, if a cycle is formed in the graph then there is
definitely a deadlock. For multiple-instance resource types, detecting a cycle is
not enough; we have to apply the safety algorithm to the system by converting the
resource allocation graph into an allocation matrix and a request matrix.
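For the single-instance case, here is a small C sketch of deadlock detection by looking for a cycle in a wait-for graph; the graph below is an arbitrary example in which P0 -> P1 -> P2 -> P0 forms a cycle.

/* A sketch of deadlock detection for single-instance resources: the wait-for
 * graph is an example adjacency matrix, and any cycle means a deadlock. */
#include <stdio.h>
#include <stdbool.h>

#define N 4   /* number of processes */

/* wait_for[i][j] == 1 means process i is waiting for a resource held by j */
static int wait_for[N][N] = {
    {0,1,0,0},
    {0,0,1,0},
    {1,0,0,0},   /* P0 -> P1 -> P2 -> P0 forms a cycle (deadlock) */
    {0,0,0,0}
};

static bool dfs(int u, bool visited[], bool on_stack[]) {
    visited[u] = on_stack[u] = true;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v]) continue;
        if (on_stack[v]) return true;                 /* back edge: cycle found */
        if (!visited[v] && dfs(v, visited, on_stack)) return true;
    }
    on_stack[u] = false;
    return false;
}

int main(void) {
    bool visited[N] = {false}, on_stack[N] = {false};
    for (int i = 0; i < N; i++)
        if (!visited[i] && dfs(i, visited, on_stack)) {
            printf("Cycle found: the system is deadlocked\n");
            return 0;
        }
    printf("No cycle: no deadlock\n");
    return 0;
}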
Recovery
In order to recover the system from deadlock, the OS acts either on resources or
on processes.
For resources
Preempt the resource: We can take one of the resources away from its owning
process and give it to another process, with the expectation that that process will
complete its execution and release the resource sooner. Choosing which resource to
preempt can be difficult.
Rollback to a safe state: The system passes through various states before it gets
into the deadlock state. The operating system can roll the system back to a
previous safe state. For this purpose, the OS needs to implement checkpointing at
every state; the moment we get into deadlock, we roll back all the allocations to
return to the previous safe state.
For processes
Kill a process: Killing a process can solve the problem, but the bigger concern is
deciding which process to kill. Generally, the operating system kills the process
that has done the least amount of work so far.
Kill all processes: This is not an advisable approach, but it can be used if the
problem becomes very serious. Killing all processes leads to inefficiency, because
every process then has to execute again from the beginning.
When a file is created on a system, a file name and an Inode number are assigned to
it. Generally, a user accesses a file by its file name, but internally the file name
is first mapped to the corresponding Inode number stored in a table.
Inode Contents
o User ID of file
o Group ID of file
o Device ID
o File size
o Date of creation
o Permission
o Owner of the file
o File protection flag
o Link counter to determine number of hard links
Inode Table: The Inode table contains all the Inodes and is created when the file
system is created.
Inode Number: Each Inode has a unique number, which can be seen with the help of
the ls -li command.
Note: The Inode doesn't contain file content, instead it has a pointer to that data.
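As a small illustration, the POSIX stat() call can be used to read several of the Inode contents listed above for a given path; this is a generic sketch, not specific to any particular file system.

/* A small sketch using POSIX stat() to read some of the inode contents
 * listed above for a given file path (default: the current directory). */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[]) {
    const char *path = (argc > 1) ? argv[1] : ".";
    struct stat sb;
    if (stat(path, &sb) == -1) { perror("stat"); return 1; }
    printf("Inode number    : %lu\n", (unsigned long)sb.st_ino);
    printf("Owner user ID   : %lu\n", (unsigned long)sb.st_uid);
    printf("Group ID        : %lu\n", (unsigned long)sb.st_gid);
    printf("File size       : %lld bytes\n", (long long)sb.st_size);
    printf("Hard link count : %lu\n", (unsigned long)sb.st_nlink);
    printf("Permissions     : %o\n", (unsigned)(sb.st_mode & 0777));
    return 0;
}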
What is a Shell?
The shell provides you with an interface to the UNIX system. It gathers
input from you and executes programs based on that input. When a
program finishes executing, it displays that program's output.
Shell Types:
In UNIX there are two major types of shells:
1. The Bourne shell.- If you are using a Bourne-type shell, the default
prompt is the $ character.
2. The C shell.- If you are using a C-type shell, the default prompt is the
% character.
Trojan Horse
A trojan horse can secretly access the login details of a system. Then a
malicious user can use these to enter the system as a harmless being
and wreak havoc.
Trap Door
A trap door is a hole left in the software by its designer, known only to the
designer, which can later be used to bypass normal security checks.
Worm
A worm is a process that replicates itself and spreads from system to system,
consuming system resources and potentially bringing the system to a halt.
Denial of Service
Denial-of-service attacks do not steal information; they overload the system with
requests so that legitimate users are prevented from using it.
The different methods that may provide protection and security for different
computer systems are:
Authentication
This deals with identifying each user in the system and making sure they
are who they claim to be. The operating system makes sure that all the
users are authenticated before they access the system. The different
ways to make sure that the users are authentic are:
• Username/ Password
Each user has a distinct username and password combination and they need to
enter it correctly before they can access the system.
• User Key/ User Card
The users need to punch a card into the card slot or use their individual key on a
keypad to access the system.
• User Attribute Identification
Different user attribute identifications that can be used are fingerprints, retina
scans, etc. These are unique for each user and are compared with the existing
samples in the database. The user can access the system only if there is a match.
One Time Password
• Random Numbers
The system can ask for numbers that correspond to letters that are prearranged.
This combination can be changed each time a login is required.
• Secret Key
A hardware device can create a secret key related to the user id for login. This
key can change each time.
Sec-B
Input-Output Processor
• The Input-Output Processor (IOP) is a processor, similar to a CPU, that handles
the details of I/O operations. It is equipped with more facilities than a typical
DMA controller.
• The IOP can fetch and execute its own instructions, which are specifically
designed to describe I/O transfers.
• In addition to I/O-related tasks, it can perform other processing tasks such as
arithmetic, logic, branching, and code translation.
• It communicates with the processor by means of DMA.
• The Input-Output Processor is a specialized processor which loads and stores data
in memory along with the execution of I/O instructions.
• It acts as an interface between the system and the devices. It carries out a
sequence of events to execute I/O operations and then stores the results in memory.
I/O Requests in operating systems
I/O Requests are managed by Device Drivers in collaboration with some system
programs inside the I/O device. The requests are served by OS using three
simple segments :
1. I/O Traffic Controller: Keeps track of the status of all devices, control
units, and communication channels.
2. I/O scheduler: Executes the policies used by OS to allocate and access
the device, control units, and communication channels.
3. I/O device handler: Serves the device interrupts and handles the transfer
of data.
I/O Traffic Controller has 3 main tasks:
• The primary task is to check if there’s at least one path available.
• If there exists more than one path, it must decide which one to select.
• If all paths are occupied, its task is to analyze which path will be
available at the earliest.
I/O Scheduling
Scheduling in computing is the process of allocating resources to carry out tasks.
A process referred to as a scheduler is responsible for scheduling.
The I/O scheduler functions similarly to the process scheduler: it allocates the
devices, control units, and communication channels. However, under a heavy load of
I/O requests the scheduler must decide which request should be served first, and
for that we need multiple queues to be managed by the OS.
The major difference between the process scheduler and the I/O scheduler is that
I/O requests are not preempted: once a channel program has started, it is allowed
to continue to completion.
Some modern operating systems allow the I/O scheduler to serve higher-priority
requests first.
The I/O scheduler works in coordination with the I/O traffic controller to keep
track of which path is being served for the current I/O request.
I/O Device Handler manages the I/O interrupts (if any) and scheduling
algorithms.
A few I/O (disk) scheduling algorithms are:
1. FCFS (first come, first served)
2. SSTF (shortest seek time first)
3. SCAN
4. LOOK
5. N-Step SCAN
6. C-SCAN
7. C-LOOK
Every scheduling algorithm aims to minimize arm movement, mean response
time, and variance in response time.
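As a concrete illustration of the difference such algorithms make, here is a small C sketch comparing the total head movement of FCFS and SSTF on an example request queue; the queue and the initial head position are made-up values.

/* A sketch comparing total disk-head movement for FCFS and SSTF on an
 * example request queue; the queue and initial head position are examples. */
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

#define N 8

static int total_fcfs(const int req[], int head) {
    int moves = 0;
    for (int i = 0; i < N; i++) { moves += abs(req[i] - head); head = req[i]; }
    return moves;
}

static int total_sstf(const int req[], int head) {
    bool done[N] = {false};
    int moves = 0;
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)          /* pick the closest pending request */
            if (!done[i] && (best == -1 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        moves += abs(req[best] - head);
        head = req[best];
        done[best] = true;
    }
    return moves;
}

int main(void) {
    int requests[N] = {98, 183, 37, 122, 14, 124, 65, 67};
    int head = 53;
    printf("FCFS total head movement: %d\n", total_fcfs(requests, head));
    printf("SSTF total head movement: %d\n", total_sstf(requests, head));
    return 0;
}

On this example queue SSTF moves the head far less than FCFS, which is exactly the kind of reduction in arm movement these algorithms aim for.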
Memory Management
Memory management is the functionality of an operating system which
handles or manages primary memory and moves processes back and forth
between main memory and disk during execution.
• Memory management keeps track of each and every memory location, regardless of
whether it is allocated to some process or free.
• It checks how much memory is to be allocated to each process.
• It decides which process will get memory at what time.
• It tracks whenever some memory gets freed or unallocated and updates the status
correspondingly.
If you are writing a dynamically loaded program, your compiler will compile the
program and, for all the modules which you want to include dynamically, only
references will be provided; the rest of the work is done at execution time.
With static loading, the absolute program (and data) is loaded into memory at load
time so that execution can start.
With dynamic loading, dynamic routines of the library are stored on disk in
relocatable form and are loaded into memory only when they are needed by the
program.
Swapping
Swapping is a mechanism in which a process can be temporarily swapped (moved) out
of main memory to secondary storage (disk), making that memory available to other
processes. At some later time, the system swaps the process back from secondary
storage into main memory.
Fragmentation
As processes are loaded into and removed from memory, the free memory space gets
broken into little pieces. It sometimes happens that processes cannot be allocated
to memory blocks because the blocks are too small, and these memory blocks remain
unused. This problem is known as fragmentation.
1. External fragmentation: The total memory space is enough to satisfy a request or
to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: The memory block assigned to a process is bigger than
the amount requested. The leftover portion of the block is unused, as it cannot be
used by another process.
Paging
Paging is a memory management technique in which process address
space is broken into blocks of the same size called pages (size is power of
2, between 512 bytes and 8192 bytes). The size of the process is
measured in the number of pages.
Address Translation
A page address is called a logical address and is represented by a page number and
an offset.
When the system allocates a frame to a page, it translates this logical address
into a physical address and creates an entry in the page table, which is used
throughout the execution of the program.
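Here is a minimal C sketch of this translation; the page size, page-table contents, and logical address are example values.

/* A minimal sketch of logical-to-physical address translation with paging;
 * the page size, page table contents, and logical address are example values. */
#include <stdio.h>

#define PAGE_SIZE 4096          /* a power of two, as required for paging */
#define NUM_PAGES 4

int main(void) {
    /* page_table[p] = frame number that page p is loaded into (example data) */
    int page_table[NUM_PAGES] = {5, 2, 7, 0};

    unsigned int logical  = 5000;                    /* example logical address */
    unsigned int page     = logical / PAGE_SIZE;     /* page number */
    unsigned int offset   = logical % PAGE_SIZE;     /* offset within the page */
    unsigned int frame    = page_table[page];
    unsigned int physical = frame * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> frame %u -> physical %u\n",
           logical, page, offset, frame, physical);
    return 0;
}

Because the page size is a power of two, the page number and offset can also be obtained by simply splitting the bits of the logical address.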
Segmentation
If, for a period T, the locality of reference is just 3 pages, then the process can
be allocated 3 frames for time T and there will be no thrashing, because all
currently required pages will be in main memory. If the locality then increases to
5 pages, 2 more frames can be given to the process; if it decreases to 2 pages, 3
frames can be released for other processes. If the working-set size (WSS) is
greater than the number of available frames, the process is suspended until enough
frames become available.
Two-Level Directory
In this scheme a separate directory is maintained for each user.
• Path name: Because of the two levels, every file has a path name used to locate
it.
• Different users can now have files with the same name.
• Searching is efficient in this method.
Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and also there is
grouping capability. We have absolute or relative path name for a file.
File Allocation Methods
There are several types of file allocation methods. These are mentioned below.
• Contiguous Allocation
• Linked Allocation (Non-contiguous Allocation)
• Indexed Allocation
Contiguous Allocation
A single continuous set of blocks is allocated to a file at the time of file creation. Thus,
this is a pre-allocation strategy, using variable size portions. The file allocation table
needs just a single entry for each file, showing the starting block and the length of the
file. This method is best from the point of view of the individual sequential file. Multiple
blocks can be read in at a time to improve I/O performance for sequential processing. It
is also easy to retrieve a single block. For example, if a file starts at block b, and the ith
block of the file is wanted, its location on secondary storage is simply b+i-1.
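A tiny worked example of the b + i - 1 calculation in C (the block numbers are made up):

/* A tiny sketch of the b + i - 1 lookup used by contiguous allocation;
 * the start block and block index are example values. */
#include <stdio.h>

int main(void) {
    int start_block = 19;   /* block b where the file begins */
    int i = 5;              /* we want the i-th block of the file (1-based) */
    int location = start_block + i - 1;
    printf("Block %d of the file is at disk block %d\n", i, location);
    return 0;
}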
Disadvantages of Contiguous Allocation
• External fragmentation will occur, making it difficult to find contiguous blocks
of space of sufficient length. A compaction algorithm will be necessary to free
up additional space on the disk.
• Also, with pre-allocation, it is necessary to declare the size of the file at the
time of creation.
Linked Allocation (Non-contiguous Allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next
block in the chain. Again the file table needs just a single entry for each file, showing
the starting block and the length of the file. Although pre-allocation is possible, it is
more common simply to allocate blocks as needed. Any free block can be added to the
chain. The blocks need not be contiguous. An increase in file size is always possible
if a free disk block is available. There is no external fragmentation, because only
one block at a time is needed, but there can be internal fragmentation, and it exists
only in the last disk block of the file.
Disadvantages of Linked Allocation (Non-contiguous Allocation)
• Internal fragmentation exists in the last disk block of the file.
• There is an overhead of maintaining the pointer in every disk block.
• If the pointer of any disk block is lost, the file will be truncated.
• It supports only the sequential access of files.
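The pointer chasing described above can be sketched as follows in C; the "disk" is just an in-memory array and the block numbers in the chain are made-up example values.

/* A sketch of linked (non-contiguous) allocation: each disk block stores the
 * index of the next block, so reading the file means following the chain.
 * The "disk" here is just an in-memory array with made-up block numbers. */
#include <stdio.h>

#define NUM_BLOCKS 16
#define END_OF_FILE -1

/* next_block[b] = the block that follows block b in the file's chain */
static int next_block[NUM_BLOCKS];

int main(void) {
    /* Example chain: the file occupies blocks 9 -> 3 -> 12 -> 6 */
    for (int b = 0; b < NUM_BLOCKS; b++) next_block[b] = END_OF_FILE;
    next_block[9] = 3;
    next_block[3] = 12;
    next_block[12] = 6;

    int start = 9;                       /* starting block from the file table */
    printf("File blocks:");
    for (int b = start; b != END_OF_FILE; b = next_block[b])
        printf(" %d", b);                /* sequential access only: must follow the chain */
    printf("\n");
    return 0;
}

The loop makes the last disadvantage concrete: to reach any block you must walk the chain from the start, which is why linked allocation supports only sequential access efficiently.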
Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case,
the file allocation table contains a separate one-level index for each file: The index has
one entry for each block allocated to the file. The allocation may be on the basis of
fixed-size blocks or variable-size blocks. Allocation by fixed-size blocks eliminates
external fragmentation, whereas allocation by variable-size blocks improves locality. This
allocation technique supports both sequential and direct access to the file and thus is
the most popular form of file allocation.
Disk Free Space Management
Just as the space that is allocated to files must be managed, so the space that is not
currently allocated to any file must be managed. To perform any of the file allocation
techniques, it is necessary to know what blocks on the disk are available. Thus we
need a disk allocation table in addition to a file allocation table. The following are the
approaches used for free space management.
1. Bit Tables: This method uses a vector containing one bit for each block on the
disk. Each 0 entry corresponds to a free block and each 1 corresponds to a block
in use.
For example: 00011010111100110001
In this vector every bit corresponds to a particular block; 0 means that the
block is free and 1 means that the block is already occupied. A bit table has
the advantage that it is relatively easy to find one free block or a contiguous
group of free blocks, so it works well with any of the file allocation methods.
Another advantage is that it is as small as possible. (A small allocation sketch
follows this list.)
2. Free Block List: In this method, each block is assigned a number sequentially
and the list of the numbers of all free blocks is maintained in a reserved block
of the disk.
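As mentioned under the bit-table item above, here is a small C sketch of allocating a contiguous run of blocks from a bit table; it uses the example vector shown earlier, stored as a character string for simplicity.

/* A sketch of bit-table free-space management: '0' marks a free block and
 * '1' marks a used block, matching the example vector shown above. */
#include <stdio.h>
#include <string.h>

#define NUM_BLOCKS 20

static char bitmap[NUM_BLOCKS + 1] = "00011010111100110001";  /* example vector */

/* Find the first run of `count` contiguous free (0) blocks; return its start or -1. */
static int find_free_run(int count) {
    int run = 0;
    for (int b = 0; b < NUM_BLOCKS; b++) {
        run = (bitmap[b] == '0') ? run + 1 : 0;
        if (run == count) return b - count + 1;
    }
    return -1;
}

int main(void) {
    int start = find_free_run(3);
    if (start >= 0) {
        printf("Allocating 3 contiguous blocks starting at block %d\n", start);
        memset(bitmap + start, '1', 3);          /* mark them as in use */
        printf("Bit table is now: %s\n", bitmap);
    }
    return 0;
}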