Goals of The Operating System: Primary Goal


• An operating system is a program that acts as an interface between the software and the computer hardware.
• It is an integrated set of specialized programs used to manage the overall resources and operations of the computer.
• It is specialized software that controls and monitors the execution of all other programs that reside in the computer, including application programs and other system software.

Functions of an Operating System

• Memory Management
• Processor Management
• Device Management
• File Management
• Network Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and users

Goals of the Operating System


An Operating System has two kinds of goals: a primary goal and a secondary goal.

• Primary Goal: The primary goal of an Operating System is to provide a user-friendly and convenient environment. Using an Operating System is not compulsory, but things become much harder when the user has to perform all the process scheduling, and converting user code into machine code is also very difficult. So we use an Operating System as an intermediary between us and the hardware: we only need to give commands to the Operating System, and it does the rest. The Operating System should therefore be convenient to use.
• Secondary Goal: The secondary goal of an Operating System is efficiency. The Operating System should manage resources in such a way that they are fully utilised, and no resource is held idle while there is a pending request for it.
To achieve these primary and secondary goals, the Operating System performs a number of functions.

GOALS

1. Convenience
An Operating System's first and primary goal is to provide a friendly and convenient environment to the user. Using an Operating System is optional, but things become harder when the user has to perform all process scheduling and convert user commands into machine language so that the system can perform tasks. So we use an Operating System as a bridge between us and the computer hardware: we only have to give commands, and the OS takes the instructions and does the rest of the work. For this reason, the operating system should be convenient for the user to operate.
2. Efficiency
The second important goal of an Operating System is efficiency. An operating system should utilize all resources efficiently: resources and programs should be managed so that no resource is kept idle and no memory is wasted.
3. Portability and Reliability
The operating system can operate on different machines with different processors and memory configurations, which makes it portable. The operating system can also protect itself and the user from accidental damage caused by a user program, which makes it reliable.
4. Hardware Abstraction
The operating system conceals the details of the computer's functions and resources. The user can give commands and access any function or resource of the computer without facing the underlying complexity. In this way, the operating system mediates between the user and the computer hardware.
5. Security
An operating system provides safety and security of data between the user and the hardware. The OS enables multiple users to securely share a computer, keeping files, processes, memory, and devices separate.
OS classification: single user, multiuser
Real-time operating system

A real-time operating system is designed to run real-time applications. It can be both single- and multi-tasking. Examples include Abassi and AMX RTOS.

Advantages
• It works very fast.
• It is time saving, as it need not be loaded from memory.
• Since it is very small, it occupies less space in memory.
Single-User Operating System

A single-user operating system, also known as a single-tasking operating system, is designed especially for home computers: a single user can access the computer at a particular time. It grants one user at a time access to the personal computer, though it can sometimes support multiple profiles. It can also be used in office work and other environments.
This operating system therefore does not require support for memory protection, file protection, or a security system. Computers based on this operating system have a single processor that executes only a single program at all times, and the system provides all resources, such as the CPU and I/O devices, to a single user at a time.

Single-user operating system

This is the operating system for computers that support only one user at a time. In this operating system, one user cannot interact with another working user. The core of the single-user operating system is a single kernel image that runs at a time; there is no facility to run more than one kernel image.

Features of the Single-User Operating System:

• Interpreting user’s commands.
• File management.
• Memory management.
• Input/output management.
• Resource allocation.
• Managing processes.
Advantages:

• This OS occupies less space in memory.
• It is easy to maintain.
• There is less chance of damage.
• As a single-user interface, it allows only one user’s tasks to execute at a given time.
• Only one user works at a time, so there is no interruption from others.

Disadvantages:

• It can perform only a single task.
• The main drawback is that the OS remains idle most of the time and is not utilized to its maximum.
• Tasks take longer to complete.
• It has a high response time.

Types of Single-user Operating Systems:

1. Single User Single-Tasking (e.g. MS-DOS)
2. Single User Multi-Tasking (e.g. Windows, macOS)

Multi-User Operating System


In a multi-user operating system, multiple users can access different resources of a computer at the same time. Access is provided over a network consisting of various personal computers attached to a mainframe computer system. A multi-user operating system allows multiple users to access a single machine at a time. The personal computers can send information to and receive information from the mainframe computer system; thus, the mainframe acts as the server and the personal computers act as clients of that server.

Multi-user Operating system


Types of Multi-user Operating Systems:
1. Distributed system: a collection of multiple computers located at different sites. Examples: electronic banking, mobile apps.
2. Time-sliced system: a small time duration is allotted to every task. CPU time is divided into small time slices, and one slice is assigned to each task. Example: mainframes.
3. Multiprocessor system: it involves multiple processors at a time; if one processor fails, the others continue working. Examples: spreadsheets, music players.

Parallel System vs Distributed System

1. Parallel systems are systems that can process data simultaneously to increase the computational speed of a computer system. In distributed systems, applications run on multiple computers linked by communication lines.
2. Parallel systems work through the simultaneous use of multiple computer resources, which can include a single computer with multiple processors. A distributed system consists of a number of computers that are connected and managed so that they share the job-processing load among the various computers distributed over the network.
3. In parallel systems, tasks are performed more quickly. In distributed systems, tasks are performed more slowly.
4. Parallel systems are multiprocessor systems. In distributed systems, each processor has its own memory.
5. A parallel system is also known as a tightly coupled system. Distributed systems are also known as loosely coupled systems.
6. Parallel systems have close communication between more than one processor. Distributed systems communicate with one another through various communication lines, such as high-speed buses or telephone lines.
7. Parallel systems share a memory, clock, and peripheral devices. Distributed systems do not share memory or a clock.
8. Examples of parallel systems: High-Performance Computing clusters, Beowulf clusters. Examples of distributed systems: Hadoop, MapReduce, Apache Cassandra.
Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling
1. Non-preemptive: Here, a resource cannot be taken from a process until the process completes execution. Switching occurs only when the running process terminates or moves to a waiting state.

2. Preemptive: Here, the OS allocates resources to a process for a fixed amount of time. The process may switch from the running state to the ready state, or from the waiting state to the ready state, because the CPU can give priority to other processes and replace the running process with a higher-priority one.
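The preemptive case can be illustrated with a small round-robin simulation — a hypothetical sketch, not any particular OS's scheduler, assuming all processes arrive at time 0 (the process names, burst times, and quantum are illustrative):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate preemptive round-robin; return completion time per process.

    burst_times: dict mapping process name -> remaining CPU burst.
    All processes are assumed to arrive at time 0.
    """
    ready = deque(burst_times)          # ready queue, FIFO order
    remaining = dict(burst_times)
    clock = 0
    completion = {}
    while ready:
        p = ready.popleft()             # dispatch the next ready process
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = clock       # process finished
        else:
            ready.append(p)             # time slice expired: preempt, requeue
    return completion

# Example: three processes, quantum of 2 time units
print(round_robin({"P1": 4, "P2": 3, "P3": 1}, 2))
```

P1 runs for 2 units, is preempted, and finishes at time 7; P3, with the shortest burst, completes inside its first slice at time 5.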

Schedulers

Schedulers are special system software which handle process scheduling in


various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types:

• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Comparison of Long-Term, Short-Term, and Medium-Term Schedulers:

1. The long-term scheduler is a job scheduler. The short-term scheduler is a CPU scheduler. The medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler. The short-term scheduler is the fastest of the three. The medium-term scheduler's speed lies between the other two.
3. The long-term scheduler controls the degree of multiprogramming. The short-term scheduler provides less control over the degree of multiprogramming. The medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in a time-sharing system. The short-term scheduler is also minimal in a time-sharing system. The medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for execution. The short-term scheduler selects those processes which are ready to execute. The medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

Shared Memory Method


Ex: Producer-Consumer problem
There are two processes: a Producer and a Consumer. The Producer produces items and the Consumer consumes them. The two processes share a common space or memory location, known as a buffer, where the item produced by the Producer is stored and from which the Consumer takes items as needed. There are two versions of this problem: in the unbounded-buffer problem the Producer can keep producing items with no limit on the size of the buffer, while in the bounded-buffer problem the Producer can produce only up to a certain number of items before it starts waiting for the Consumer to consume them. We will discuss the bounded-buffer problem. First, the Producer and the Consumer share some common memory; then the Producer starts producing items. If the total number of produced items equals the size of the buffer, the Producer waits for the Consumer to consume some. Similarly, the Consumer first checks for the availability of an item: if no item is available, the Consumer waits for the Producer to produce one, and if items are available, the Consumer consumes them.
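The bounded-buffer behaviour described above can be sketched with Python threads, where `queue.Queue` supplies exactly the blocking put/get discipline the problem calls for (the buffer size and item count are illustrative):

```python
import threading
import queue

# A bounded buffer of size 3: put() blocks when the buffer is full,
# get() blocks when it is empty -- the two waits described above.
buffer = queue.Queue(maxsize=3)
consumed = []

def producer(n):
    for i in range(n):
        buffer.put(i)        # waits if the buffer already holds 3 items

def consumer(n):
    for _ in range(n):
        item = buffer.get()  # waits if no item is available
        consumed.append(item)

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)   # items arrive in production order
```

With a single producer and a single consumer sharing a FIFO buffer, the consumed list comes out in production order.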

Message Passing Method


In this method, processes communicate with each other without using any kind
of shared memory. If two processes p1 and p2 want to communicate with each
other, they proceed as follows:
• Establish a communication link (if a link already exists, no need to
establish it again.)
• Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be fixed or variable. A fixed size is easy for the OS designer but complicated for the programmer, whereas a variable size is easy for the programmer but complicated for the OS designer. A standard message has two parts: a header and a body.

The header is used to store the message type, destination id, source id, message length, and control information. The control information covers things like what to do if buffer space runs out, the sequence number, and the priority. Generally, messages are sent FIFO style.
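A toy model of the send/receive primitives over a FIFO link, with the header/body message layout described above (the field names and process ids are illustrative, not a real OS API):

```python
import queue

# A one-directional communication link that delivers messages FIFO.
link = queue.Queue()

def send(message, destination, source):
    # A standard message: header (type, ids, length) plus body.
    header = {"type": "data", "src": source, "dst": destination,
              "length": len(message)}
    link.put({"header": header, "body": message})

def receive():
    msg = link.get()              # blocks until a message arrives
    return msg["header"]["src"], msg["body"]

send("hello", destination="p2", source="p1")
send("world", destination="p2", source="p1")
print(receive())   # ('p1', 'hello') -- FIFO delivery
print(receive())   # ('p1', 'world')
```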
What is Critical Section in OS?

A critical section in an operating system denotes a specific segment of
code that deals with accessing shared resources like shared memory or
I/O devices. As multiple processes or threads can access these shared
resources concurrently, it becomes crucial to synchronize their access
to prevent race conditions and data inconsistencies. The purpose of a
critical section is to ensure that only one process or thread can access
the shared resource at any given time, effectively avoiding conflicts or
errors. Synchronization mechanisms, such as semaphores, monitors, or
critical section objects, are employed to regulate access to the critical
section in the operating system.

Problems Caused by Critical Section in OS:

• Deadlock: Deadlock occurs when two or more processes are


blocked, waiting for each other to release a shared resource. This
can lead to a situation where no process can proceed, causing the
entire system to hang.
• Starvation: Starvation occurs when a process is repeatedly denied
access to a shared resource even though it keeps requesting it. The
starved process is unable to make progress, even though the system
as a whole continues to run.
• Race conditions: Race conditions occur when multiple processes
access a shared resource simultaneously, leading to inconsistent or
incorrect data. For example, if two processes are trying to
increment a shared variable at the same time, the final value of the
variable may be incorrect.
• Priority inversion: Priority inversion occurs when a low-priority
process holds a resource that is needed by a high-priority process.
This can lead to a situation where the high-priority process is
blocked, waiting for the low-priority process to release the resource.

To avoid these problems, it is important to synchronize access to shared


resources using appropriate synchronization mechanisms such as
semaphores, monitors, and mutual exclusion algorithms like Peterson’s
algorithm.

Semaphores: A semaphore is a data structure used to control access to shared resources. It is typically implemented as a counter that is decremented when a process enters the critical section and incremented when it exits. When the counter reaches zero, no other process is allowed to enter the critical section.
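A binary semaphore guarding a critical section can be sketched with Python's `threading.Semaphore`; the thread count and increment count are illustrative:

```python
import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore guarding the critical section

def worker(increments):
    global counter
    for _ in range(increments):
        mutex.acquire()          # wait (P): decrement; block if already 0
        counter += 1             # critical section: only one thread at a time
        mutex.release()          # signal (V): increment; wake one waiter

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000 -- no increments are lost
```

Without the semaphore the read-modify-write on `counter` could interleave and lose updates; with it, the final value is always the full 40000.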
Peterson’s Algorithm: Peterson’s Algorithm is a solution to the critical-section problem for two processes. It uses only shared variables and ordinary loads and stores — two flags and a turn variable, with no special atomic instructions — to ensure that only one process can enter the critical section at a time.
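The two-process algorithm can be sketched directly. This demo relies on CPython serializing memory accesses; on real hardware with weaker memory ordering it would need memory barriers. The iteration count and switch interval are tuned only so the busy-waiting finishes quickly:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits stay short

flag = [False, False]         # flag[i]: process i wants to enter
turn = 0                      # whose turn it is to yield
count = 0                     # shared variable protected by the algorithm

def process(i, iterations=1000):
    global turn, count
    other = 1 - i
    for _ in range(iterations):
        # entry section
        flag[i] = True
        turn = other                        # give the other process priority
        while flag[other] and turn == other:
            pass                            # busy-wait until it is safe
        # critical section: only one process executes this at a time
        count += 1
        # exit section
        flag[i] = False

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)   # 2000: no increment was lost
```

At most one thread can be spinning at a time, because `turn` holds a single value; this gives both mutual exclusion and progress.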

Synchronization Problems
These problems are used for testing nearly every newly proposed synchronization
scheme. The following problems of synchronization are considered as classical
problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem

Sec-C
Deadlock in Operating System

A process in an operating system uses resources in the following way:
1. Requests the resource
2. Uses the resource
3. Releases the resource

A deadlock is a situation where a set of processes are blocked because each


process is holding a resource and waiting for another resource acquired by
some other process.
Consider an example in which two trains are coming toward each other on the same track, and there is only one track: once they are in front of each other, neither can move. A similar situation occurs in operating systems when two or more processes hold some resources and wait for resources held by others. For example, suppose Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for Resource 1.
Deadlock can arise if the following four conditions hold simultaneously
(Necessary Conditions)
Mutual Exclusion: Two or more resources are non-shareable (only one process can use a resource at a time).
Hold and Wait: A process is holding at least one resource and waiting for additional resources.
No Preemption: A resource cannot be taken from a process unless the process releases the resource.
Circular Wait: A set of processes are waiting for each other in a circular chain.

Methods for handling deadlock


There are three ways to handle deadlock
1) Deadlock prevention or avoidance:

Prevention:
The idea is to never let the system enter a deadlock state. The system makes sure that the four conditions mentioned above cannot all arise. These techniques are very costly, so we use them only when our priority is making a system deadlock-free.
Prevention is done by negating one of the above-mentioned necessary conditions for deadlock, in one of four ways:
1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Break circular wait
Avoidance:
Avoidance is forward-looking. To use the "avoidance" strategy we have to make an assumption: all information about the resources a process will need must be known to us before the process executes. We use the Banker's algorithm (in turn a gift from Dijkstra) to avoid deadlock.
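The heart of the Banker's algorithm is its safety check: can some order of execution let every process finish with the resources currently available? A minimal sketch, using a classic textbook instance (the matrices are illustrative data, not from this document):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if some execution order lets every
    process finish with the currently available resources.

    available:  list of free units per resource type
    allocation: allocation[i][j] = units of resource j held by process i
    need:       need[i][j] = max demand minus allocation for process i
    """
    work = list(available)
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend process i runs to completion, releasing everything.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
    return all(finish)

# Classic 5-process, 3-resource instance: this state is safe
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))   # True
```

With [3, 3, 2] available, the order P1, P3, P4, P0, P2 lets every process finish, so the state is safe and the corresponding request can be granted.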

2.Deadlock Detection and Recovery:

Detection: In this approach, the OS applies no mechanism to avoid or prevent deadlocks; the system accepts that deadlock may occur. To get rid of deadlocks, the OS periodically checks the system for any deadlock. If it finds one, the OS recovers the system using some recovery technique.

The main task of the OS is detecting deadlocks, which it can do with the help of the resource allocation graph.
For single-instance resource types, if a cycle forms in the graph then there is definitely a deadlock. For multiple-instance resource types, detecting a cycle is not enough: we have to apply the safety algorithm to the system by converting the resource allocation graph into an allocation matrix and a request matrix.
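For the single-instance case, detection reduces to finding a cycle in the wait-for graph. A minimal depth-first-search sketch (the graph and process names are illustrative):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph (process -> set of processes it
    waits on). For single-instance resources, a cycle means deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:        # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# The two-train example as a graph: P1 waits for P2, P2 waits for P1.
print(has_cycle({"P1": {"P2"}, "P2": {"P1"}}))   # True  (deadlock)
print(has_cycle({"P1": {"P2"}, "P2": set()}))    # False (P2 can finish)
```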

Recovery
In order to recover the system from deadlocks, the OS considers either resources or processes.
For Resources
Preempt the resource: We can take one of the resources away from its owner (a process) and give it to another process, with the expectation that the latter will complete execution and release the resource sooner. Choosing which resource to preempt can be difficult.

Roll back to a safe state: The system passes through various states before reaching the deadlock state. The operating system can roll the system back to a previous safe state; for this purpose, the OS needs to implement checkpointing at every state. The moment we detect deadlock, we roll back all the allocations to reach the previous safe state.
For Processes
Kill a process: Killing a process can solve the problem, but the bigger concern is deciding which process to kill. Generally, the operating system kills the process which has done the least amount of work so far.

Kill all processes: This is not an advisable approach, but it can be used if the problem becomes very serious. Killing all processes leads to inefficiency in the system, because all the processes must execute again from the start.

Unix File System



Unix File System is a logical method of organizing and storing large
amounts of information in a way that makes it easy to manage. A file is the
smallest unit in which the information is stored. All files are organized into
directories.
These directories are organized into a tree-like structure called the file
system. Files in Unix System are organized into multi-level hierarchy
structure known as a directory tree. At the very top of the file system is a
directory called “root” which is represented by a “/”. All other files are
“descendants” of root.
Inodes
An inode number is a unique number for each file in Linux and all Unix-type systems.

When a file is created on a system, a file name and an inode number are assigned to it.

Generally, a user accesses a file by its file name, but internally the file name is first mapped to the corresponding inode number stored in a table.

Inode Contents

An inode is a data structure containing metadata about a file.

The following contents are stored in the inode of a file:

o User ID of file
o Group ID of file
o Device ID
o File size
o Date of creation
o Permission
o Owner of the file
o File protection flag
o Link counter to determine number of hard links

Inode Table: The inode table contains all the inodes and is created when the file system is created.
Inode Number: Each inode has a unique number, which can be seen with the help of the ls -li command.

Note: The Inode doesn't contain file content, instead it has a pointer to that data.
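Most of the inode fields listed above can be read from Python with `os.stat`, which fetches the file's inode metadata rather than its contents (the file and its contents here are illustrative):

```python
import os
import stat
import tempfile

# Create a small file and inspect the inode metadata the kernel stores for it.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

info = os.stat(path)          # reads the file's inode, not its data blocks
print("inode number:", info.st_ino)
print("owner uid:   ", info.st_uid)
print("group gid:   ", info.st_gid)
print("size (bytes):", info.st_size)          # 5
print("hard links:  ", info.st_nlink)         # 1 for a fresh file
print("permissions: ", stat.filemode(info.st_mode))

os.remove(path)
```

The inode number printed here is the same number `ls -li` shows in its first column.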

What is a Shell?
The shell provides you with an interface to the UNIX system. It gathers
input from you and executes programs based on that input. When a
program finishes executing, it displays that program's output.

A shell is an environment in which we can run our commands, programs,


and shell scripts. There are different flavors of shells, just as there are
different flavors of operating systems. Each flavor of shell has its own set
of recognized commands and functions.

Shell Types:
In UNIX there are two major types of shells:

1. The Bourne shell.- If you are using a Bourne-type shell, the default
prompt is the $ character.
2. The C shell.- If you are using a C-type shell, the default prompt is the
% character.

Protection and Security in Operating System


Protection and security require that computer resources such as the CPU, software, and memory are protected. This extends to the operating system as well as to the data in the system. Protection can be provided by ensuring integrity, confidentiality, and availability in the operating system. The system must be protected against unauthorized access, viruses, worms, etc.

Threats to Protection and Security

A threat is a program that is malicious in nature and leads to harmful effects on the system. Some of the common threats that occur in a system are:
Virus

Viruses are generally small snippets of code embedded in a system.


They are very dangerous and can corrupt files, destroy data, crash
systems etc. They can also spread further by replicating themselves as
required.

Trojan Horse

A trojan horse can secretly access the login details of a system. Then a
malicious user can use these to enter the system as a harmless being
and wreak havoc.

Trap Door

A trap door is a security breach that may be present in a system without


the knowledge of the users. It can be exploited to harm the data or files
in a system by malicious people.

Worm

A worm can destroy a system by using its resources to extreme levels.


It can generate multiple copies which claim all the resources and don't
allow any other processes to access them. A worm can shut down a
whole network in this way.

Denial of Service

These types of attacks prevent legitimate users from accessing a system. The attack floods the system with requests until it is overwhelmed and cannot work properly for other users.

Protection and Security Methods

The different methods that may provide protection and security for different computer systems are:

Authentication

This deals with identifying each user in the system and making sure they
are who they claim to be. The operating system makes sure that all the
users are authenticated before they access the system. The different
ways to make sure that the users are authentic are:

• Username/Password
Each user has a distinct username and password combination, and they need to enter it correctly before they can access the system.
• User Key/User Card
The user needs to punch a card into the card slot or use their individual key on a keypad to access the system.
• User Attribute Identification
Different user attributes that can be used for identification include fingerprints, eye retinas, etc. These are unique to each user and are compared with existing samples in a database. The user can access the system only if there is a match.
One Time Password

These passwords provide a lot of security for authentication purposes. A


one time password can be generated exclusively for a login every time a
user wants to enter the system. It cannot be used more than once. The
various ways a one time password can be implemented are −

• Random Numbers
The system can ask for numbers that correspond to alphabets that are pre
arranged. This combination can be changed each time a login is required.
• Secret Key
A hardware device can create a secret key related to the user id for login. This
key can change each time.

Sec-B
Input-Output Processor
••

- The Input-Output Processor (IOP) is just like a CPU that handles the details of I/O operations. It is equipped with more facilities than a typical DMA controller.
- The IOP can fetch and execute its own instructions, which are specifically designed to characterize I/O transfers.
- In addition to I/O-related tasks, it can perform other processing tasks like arithmetic, logic, branching, and code translation.
- It communicates with the processor by means of DMA.
- The Input-Output Processor is a specialized processor which loads and stores data in memory along with the execution of I/O instructions.
- It acts as an interface between the system and devices. It carries out the sequence of events needed to execute I/O operations and then stores the results in memory.
I/O Requests in operating systems
I/O Requests are managed by Device Drivers in collaboration with some system
programs inside the I/O device. The requests are served by OS using three
simple segments :
1. I/O Traffic Controller: Keeps track of the status of all devices, control
units, and communication channels.
2. I/O scheduler: Executes the policies used by OS to allocate and access
the device, control units, and communication channels.
3. I/O device handler: Serves the device interrupts and heads the transfer
of data.
I/O Traffic Controller has 3 main tasks:
• The primary task is to check if there’s at least one path available.
• If there exists more than one path, it must decide which one to select.
• If all paths are occupied, its task is to analyze which path will be
available at the earliest.

I/O Scheduling
Scheduling in computing is the process of allocating resources to carry out tasks.
A process referred to as a scheduler is responsible for scheduling.
The I/O scheduler functions similarly to the process scheduler: it allocates the devices, control units, and communication channels. However, under a heavy load of I/O requests the scheduler must decide which request should be served first, and for that the OS maintains multiple queues.
The major difference between a process scheduler and an I/O scheduler is that I/O requests are not preempted: once a channel program has started, it is allowed to run to completion.
Some modern OSes allow the I/O scheduler to serve higher-priority requests first.
The I/O scheduler works in coordination with the I/O traffic controller to keep
track of which path is being served for the current I/O request.
I/O Device Handler manages the I/O interrupts (if any) and scheduling
algorithms.
A few I/O handling algorithms are :
1. FCFS [First come first served].
2. SSTF [Shortest seek time first].
3. SCAN
4. Look
• N-Step Scan
• C-SCAN
• C-LOOK
Every scheduling algorithm aims to minimize arm movement, mean response
time, and variance in response time.
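The arm-movement goal can be made concrete by comparing FCFS with SSTF on a request queue. A minimal sketch; the head position and cylinder numbers are a classic textbook example, not from this document:

```python
def fcfs_seek(start, requests):
    """Total arm movement when serving requests in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    """Shortest Seek Time First: always serve the nearest pending request."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

cylinders = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
print(fcfs_seek(53, cylinders))   # 640 cylinders of arm movement
print(sstf_seek(53, cylinders))   # 236 -- far less movement than FCFS
```

On this queue SSTF cuts total head movement from 640 to 236 cylinders, which is exactly the kind of gain the algorithms above are chasing (at the risk of starving far-away requests).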

Memory Management
Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution.
- Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free.
- It checks how much memory is to be allocated to each process.
- It decides which process will get memory at what time.
- It tracks whenever some memory gets freed or unallocated and updates the status correspondingly.

Static vs Dynamic Loading


The choice between static and dynamic loading is made when the computer program is developed. If the program is to be loaded statically, then at compilation time the complete program is compiled and linked, leaving no external program or module dependency. The linker combines the object program with the other necessary object modules into an absolute program, which also includes logical addresses.

If you are writing a Dynamically loaded program, then your compiler will
compile the program and for all the modules which you want to include
dynamically, only references will be provided and rest of the work will be
done at the time of execution.

At the time of loading, with static loading, the absolute program (and data)
is loaded into memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are stored
on a disk in relocatable form and are loaded into memory only when they
are needed by the program.

Swapping
Swapping is a mechanism in which a process can be swapped temporarily
out of main memory (or move) to secondary storage (disk) and make that
memory available to other processes. At some later time, the system
swaps back the process from the secondary storage to main memory.

Though performance is usually affected by the swapping process, swapping helps in running multiple big processes in parallel, and for that reason swapping is also known as a technique for memory compaction.

Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to memory blocks because the blocks are too small, and the memory blocks remain unused. This problem is known as fragmentation.

Fragmentation is of two types:

1. External fragmentation: Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

2. Internal fragmentation: The memory block assigned to a process is bigger than requested. Some portion of the block is left unused, as it cannot be used by another process.
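Internal fragmentation can be quantified directly: with fixed-size blocks, the waste is whatever is left in the last block. A small sketch with illustrative sizes:

```python
import math

def internal_fragmentation(process_size, block_size):
    """Wasted bytes when memory is handed out in fixed-size blocks."""
    blocks = math.ceil(process_size / block_size)   # blocks actually allocated
    return blocks * block_size - process_size       # unused tail of last block

# A 2300-byte process in 1024-byte blocks needs 3 blocks = 3072 bytes,
# wasting 772 bytes inside the last block.
print(internal_fragmentation(2300, 1024))   # 772
```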

Paging
Paging is a memory management technique in which process address
space is broken into blocks of the same size called pages (size is power of
2, between 512 bytes and 8192 bytes). The size of the process is
measured in the number of pages.

Similarly, main memory is divided into small fixed-sized blocks of


(physical) memory called frames.

Address Translation

A page address is called a logical address and is represented by a page
number and an offset:

Logical Address = (Page number, page offset)

A frame address is called a physical address and is represented by a frame
number and an offset:

Physical Address = (Frame number × frame size) + page offset

A data structure called the page map table is used to keep track of the
relation between a page of a process and a frame in physical memory.

When the system allocates a frame to a page, it translates the logical
address into a physical address and creates an entry in the page table, to
be used throughout the execution of the program.
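The translation above can be sketched in a few lines. This is an illustrative sketch, not a real MMU: the 4 KB page size and the dictionary standing in for the page map table are both assumptions chosen for the example.

```python
# Sketch of paging address translation. PAGE_SIZE and the page table
# contents are illustrative assumptions.
PAGE_SIZE = 4096  # must be a power of 2

# Hypothetical page map table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split a logical address into (page number, offset), then map the
    page to its frame and recombine: frame * PAGE_SIZE + offset."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]  # a miss here would be a page fault
    return frame_number * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4 -> frame 2, offset 4
print(translate(4100))  # 2*4096 + 4 = 8196
```

Because the page size is a power of 2, real hardware extracts the page number and offset with a shift and a mask rather than division, which is one reason power-of-2 page sizes are used.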

Advantages and Disadvantages of Paging

• Paging reduces external fragmentation, but still suffers from internal
fragmentation.
• Paging is simple to implement and is regarded as an efficient memory
management technique.
• Because pages and frames are of equal size, swapping becomes very
easy.
• The page table requires extra memory space, so paging may not be a
good fit for a system with a small RAM.

Segmentation

Segmentation is a memory management technique in which each job is
divided into several segments of different sizes, one for each module that
contains pieces that perform related functions.

Virtual Memory
A computer can address more memory than the amount physically installed
on the system. This extra memory is called virtual memory.

Working Set Model

The working set model states that a process can reside in RAM if and only
if all of the pages it is currently using can be in RAM. This prevents
thrashing and reduces page faults.

A working set window of some x pages is maintained (x is generally denoted
by delta: Δ). At every instant, this window examines the past Δ references
made by the process and determines the working set: the set of unique
pages among those past Δ references. The working-set size of a process i
is denoted WSSi and is used to determine the number of frames to allocate
to process i. The summation of all WSSi is the total number of frames
required by all processes, denoted by D. If the total number of frames in
main memory is less than D, thrashing is inevitable for some process, as
it will not have adequate frames.

If, for a period T, the locality of reference is just 3 pages, then the
process can be allocated 3 frames for time T and there will be no
thrashing, as all currently required pages will be in main memory. If the
locality then increases to 5 pages, 2 more frames can be given to the
process. If the locality decreases to 2 pages, 3 frames can be released
for other processes. If WSS is greater than the available frames, the
process is suspended until frames become available.
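The computation of WSSi and D described above can be sketched directly. The reference strings, Δ = 4, and the frame count below are made-up numbers for illustration only.

```python
def working_set(references, delta):
    """The working set is the set of distinct pages among the
    last `delta` references in the reference string."""
    return set(references[-delta:])

# Hypothetical page-reference strings for two processes
refs_p1 = [1, 2, 1, 3, 1, 2]   # last 4 refs: 2, 1, 3, 1 -> WSS1 = 3
refs_p2 = [7, 7, 8, 7]         # WSS2 = |{7, 8}| = 2

delta = 4
D = len(working_set(refs_p1, delta)) + len(working_set(refs_p2, delta))
total_frames = 4
print(D, D > total_frames)  # D = 5 > 4 frames -> thrashing is likely
```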


What is a File System?
A file system is a method an operating system uses to store, organize, and manage
files and directories on a storage device. Some common types of file systems include:
1. FAT (File Allocation Table): An older file system used by older versions of
Windows and other operating systems.
2. NTFS (New Technology File System): A modern file system used by
Windows. It supports features such as file and folder permissions,
compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and Unix-
based operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple for their
Macs and iOS devices.
A file is a collection of related information that is recorded on secondary storage;
equivalently, a file is a collection of logically related entities. From the user’s
perspective, a file is the smallest allotment of logical secondary storage.
The name of a file is divided into two parts, separated by a period:
• name
• extension
File Directories
The collection of files is a file directory. The directory contains information about the
files, including attributes, location, and ownership. Much of this information, especially
that concerned with storage, is managed by the operating system. The directory is
itself a file, accessible by various file management routines.
A device directory typically contains the following information:
• Name
• Type
• Address
• Current length
• Maximum length
• Date last accessed
• Date last updated
• Owner id
• Protection information
The operations performed on a directory are:
• Search for a file
• Create a file
• Delete a file
• List a directory
• Rename a file
• Traverse the file system
Advantages of Maintaining Directories
• Efficiency: A file can be located more quickly.
• Naming: It becomes convenient for users, as two users can have the same name
for different files, or different names for the same file.
• Grouping: Files can be grouped logically by properties, e.g. all Java
programs, all games, etc.
Single-Level Directory
In this, a single directory is maintained for all the users.
• Naming problem: Users cannot have the same name for two files.
• Grouping problem: Users cannot group files according to their needs.

Two-Level Directory
In this, a separate directory is maintained for each user.
• Path name: Due to the two levels, there is a path name for every file to locate
that file.
• Different users can now have files with the same name.
• Searching is efficient in this method.

Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and also there is
grouping capability. We have absolute or relative path name for a file.
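A tree-structured directory can be pictured as nested containers, and an absolute path is resolved by walking the tree one component at a time. The sketch below uses nested dicts as a stand-in for directories (a dict is a directory, None marks a file); the names are hypothetical.

```python
# Illustrative tree-structured directory: dicts are directories,
# None marks a file. Names are made up for the example.
root = {
    "home": {
        "alice": {"report.txt": None},
        "bob": {"report.txt": None},   # same file name, different user
    },
}

def lookup(tree, path):
    """Resolve an absolute path like /home/alice/report.txt by
    descending the tree one path component at a time."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]  # KeyError here means the path does not exist
    return node

print("report.txt" in lookup(root, "/home/alice"))  # True
```

Note how the two-level naming benefit carries over: both users can own a `report.txt`, distinguished by their path names.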
File Allocation Methods
There are several types of file allocation methods. These are mentioned below.
• Continuous Allocation
• Linked Allocation(Non-contiguous allocation)
• Indexed Allocation
Continuous Allocation
A single continuous set of blocks is allocated to a file at the time of file creation. Thus,
this is a pre-allocation strategy, using variable size portions. The file allocation table
needs just a single entry for each file, showing the starting block and the length of the
file. This method is best from the point of view of the individual sequential file. Multiple
blocks can be read in at a time to improve I/O performance for sequential processing. It
is also easy to retrieve a single block. For example, if a file starts at block b, and the ith
block of the file is wanted, its location on secondary storage is simply b+i-1.
Disadvantages of Continuous Allocation
• External fragmentation will occur, making it difficult to find contiguous blocks
of space of sufficient length. A compaction algorithm will be necessary to free
up additional space on the disk.
• Also, with pre-allocation, it is necessary to declare the size of the file at the
time of creation.
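The direct-access formula for continuous allocation (b + i − 1, from the paragraph above) amounts to one addition, which is why retrieving a single block is so cheap with this method. A minimal sketch, with blocks numbered from 1 as in the text:

```python
def ith_block(start_block, i):
    """Disk location of the i-th block of a contiguously allocated
    file that starts at block `start_block` (i counted from 1)."""
    return start_block + i - 1

# A file starting at block b = 10: its 3rd block is block 10 + 3 - 1 = 12.
print(ith_block(10, 3))  # 12
```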
Linked Allocation (Non-Contiguous Allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next
block in the chain. Again, the file table needs just a single entry for each file, showing
the starting block and the length of the file. Although pre-allocation is possible, it is
more common simply to allocate blocks as needed. Any free block can be added to the
chain, and the blocks need not be contiguous. An increase in file size is always possible
if a free disk block is available. There is no external fragmentation because only one
block at a time is needed; internal fragmentation can occur, but only in the last disk
block of the file.
Disadvantages of Linked Allocation (Non-Contiguous Allocation)
• Internal fragmentation exists in the last disk block of the file.
• There is an overhead of maintaining the pointer in every disk block.
• If the pointer of any disk block is lost, the file will be truncated.
• It supports only the sequential access of files.
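The sequential-only access limitation follows directly from the structure: to reach the i-th block you must follow i − 1 pointers from the start. A sketch of the chain traversal, using a dict as a hypothetical stand-in for per-block next pointers on disk (−1 marks end of file):

```python
# Illustrative linked allocation: each entry maps a disk block to the
# next block of the same file; -1 marks the end of the chain.
next_block = {4: 9, 9: 2, 2: -1}   # file occupies blocks 4 -> 9 -> 2

def file_blocks(start):
    """Follow the pointer chain from the starting block.
    Access is inherently sequential: no block can be skipped."""
    blocks = []
    b = start
    while b != -1:
        blocks.append(b)
        b = next_block[b]
    return blocks

print(file_blocks(4))  # [4, 9, 2]
```

A lost or corrupted pointer anywhere in `next_block` would cut the chain, which is exactly the truncation risk the list above mentions.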
Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case,
the file allocation table contains a separate one-level index for each file: The index has
one entry for each block allocated to the file. The allocation may be on the basis of
fixed-size blocks or variable-sized blocks. Allocation by blocks eliminates external
fragmentation, whereas allocation by variable-size blocks improves locality. This
allocation technique supports both sequential and direct access to the file and thus is
the most popular form of file allocation.
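The reason indexed allocation supports direct access is that the per-file index turns "find file block i" into a single table lookup, with no chain to follow. A sketch, with made-up block numbers:

```python
# Illustrative indexed allocation: the file's index block lists every
# data block in order, so the i-th entry is the disk block holding the
# i-th block of the file.
index_block = [19, 3, 25, 7]

def block_of(i):
    """Direct access: one lookup, regardless of how far into the file."""
    return index_block[i]

print(block_of(2))  # 25, with no chain traversal needed
```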
Disk Free Space Management
Just as the space that is allocated to files must be managed, so the space that is not
currently allocated to any file must be managed. To perform any of the file allocation
techniques, it is necessary to know what blocks on the disk are available. Thus we
need a disk allocation table in addition to a file allocation table. The following are the
approaches used for free space management.
1. Bit Tables: This method uses a vector containing one bit for each block on the
disk. A 0 entry corresponds to a free block and a 1 entry corresponds to
a block in use.
For example: 00011010111100110001
In this vector, every bit corresponds to a particular block; 0 means that
the block is free and 1 means that the block is already occupied. A
bit table has the advantage that it is relatively easy to find one free
block or a contiguous group of free blocks. Thus, a bit table works well
with any of the file allocation methods. Another advantage is that it is
as small as possible.
2. Free Block List: In this method, each block is assigned a number sequentially
and the list of the numbers of all free blocks is maintained in a reserved block
of the disk.
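The "easy to find a free block or a contiguous run of free blocks" property of a bit table can be seen in a short sketch. Representing the bitmap as a string of '0'/'1' characters is an illustrative simplification (a real implementation packs bits into words).

```python
# Bit-table free-space map: 0 = free block, 1 = in use,
# following the convention in the text. The vector is the example above.
bitmap = "00011010111100110001"

def first_free(bitmap):
    """Index of the first free block (first 0 bit), or -1 if none."""
    return bitmap.find("0")

def free_run(bitmap, n):
    """Start of the first run of n contiguous free blocks, or -1."""
    return bitmap.find("0" * n)

print(first_free(bitmap))   # 0
print(free_run(bitmap, 3))  # 0 (blocks 0-2 are free)
```

A contiguous-allocation request of n blocks maps straight onto `free_run`, which is why bit tables pair well with every allocation method above.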

Advantages of File System
• Organization: A file system allows files to be organized into directories and
subdirectories, making it easier to manage and locate files.
• Data protection: File systems often include features such as file and folder
permissions, backup and restore, and error detection and correction, to protect
data from loss or corruption.
• Improved performance: A well-designed file system can improve the
performance of reading and writing data by organizing it efficiently on disk.
Disadvantages of File System
• Compatibility issues: Different file systems may not be compatible with each
other, making it difficult to transfer data between different operating systems.
• Disk space overhead: File systems may use some disk space to store
metadata and other overhead information, reducing the amount of space
available for user data.
• Vulnerability: File systems can be vulnerable to data corruption, malware,
and other security threats, which can compromise the stability and security of
the system.
