Operating System
Overview
Definitions
An operating system is a program that acts as an interface between the user and
the computer hardware and controls the execution of all kinds of programs.
Definition:
Functions:
Examples:
2. OS Principles
Abstraction:
Resource Management:
Multitasking:
Security:
User Interface:
3. Concurrency
Concurrency in operating systems is the ability to handle multiple tasks or
processes at the same time. Operating systems that support concurrency can
execute multiple tasks simultaneously, leading to better resource utilization,
improved responsiveness, and an enhanced user experience. Concurrency is
essential in modern systems because of the growing demand for multitasking,
real-time processing, and parallel computing, and it underpins a wide range of
applications, including web servers, databases, scientific simulations, and
multimedia processing. However, concurrency also introduces new challenges,
such as race conditions, deadlocks, and priority inversion, which must be
managed carefully to keep the system stable and reliable.
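The race conditions mentioned above occur when two tasks update shared data at the same time without coordination. A minimal sketch using Python's standard `threading` module (the counter, thread count, and iteration count are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    """Increment the shared counter. The lock makes the read-modify-write
    sequence atomic with respect to the other thread; without it the two
    threads could interleave and lose updates."""
    global counter
    for _ in range(times):
        with lock:
            counter += 1

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock, always 200000
```

Removing the `with lock:` line reintroduces the race: the final count can then fall short of 200000 on some runs.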
Principles of Concurrency
Synchronization:
Deadlock:
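Deadlock arises when threads wait for each other's locks in a cycle. One standard remedy, sketched below with illustrative names, is to make every thread acquire the locks in the same global order, which removes the circular-wait condition:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def worker(name):
    # Every thread acquires the locks in the same global order (a, then b).
    # This breaks the circular-wait condition, so deadlock cannot occur.
    with lock_a:
        with lock_b:
            done.append(name)  # critical section using both resources

t1 = threading.Thread(target=worker, args=("T1",))
t2 = threading.Thread(target=worker, args=("T2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['T1', 'T2']: both threads finished, no deadlock
```

If one thread instead took lock_b first while the other held lock_a, both could block forever, each waiting for the lock the other holds.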
4. Process Scheduling
The scheduler handles processes and decides their execution order: it selects a
process from those that are ready, orders them, and determines which process
runs first.
Types of Schedulers
Long-term scheduling is performed when a new process is created. If too many
processes enter the ready queue, the operating system incurs overhead in
maintaining long lists, and the cost of switching and dispatching grows. The
long-term scheduler therefore admits only a limited number of processes into
the ready queue.
A process that has started executing may enter an input/output wait state and
later return to the ready state; the medium-term scheduler manages such
processes.
When several processes are in the ready state, the short-term scheduler decides
which of them runs first.
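The short-term scheduler's job can be sketched as picking the next process from the ready queue. A minimal first-come-first-served version (class and process names are illustrative, not from any real OS):

```python
from collections import deque

class ShortTermScheduler:
    """Picks the next process to run from the ready queue, in FCFS order."""

    def __init__(self):
        self.ready = deque()

    def admit(self, pid):
        # A process entering the ready state joins the back of the queue.
        self.ready.append(pid)

    def next_process(self):
        # FCFS: the process that has waited longest runs next.
        return self.ready.popleft() if self.ready else None

sched = ShortTermScheduler()
for pid in ["P1", "P2", "P3"]:
    sched.admit(pid)
print(sched.next_process())  # P1: it arrived first
```

Other policies (shortest job first, priority, round robin) differ only in how `next_process` chooses from the queue.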
Dispatcher
The dispatcher runs after the scheduler. It gives control of the CPU to the
process selected by the short-term scheduler: once the process is chosen, the
dispatcher performs the actual hand-over of the CPU.
Functions
Differences
CPU Scheduling:
Context Switching:
Definition: The process of saving the state of a currently running process
and loading the state of the next process.
Overhead: Involves saving and loading registers, program counters, and
other process-specific information.
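The save-and-load step of a context switch can be sketched with dictionaries standing in for the CPU state and per-process control blocks (the PCB structure here is a hypothetical simplification, not a real kernel layout):

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the running process's registers and program counter into its PCB,
    then load the next process's saved state onto the CPU."""
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["pc"] = cpu["pc"]
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["pc"]

cpu = {"registers": {"r0": 7}, "pc": 104}   # P1 is currently running
p1 = {"registers": {}, "pc": 0}             # P1's PCB, about to be filled
p2 = {"registers": {"r0": 0}, "pc": 200}    # P2 was previously suspended here

context_switch(cpu, p1, p2)
print(cpu["pc"])  # 200: P2 resumes exactly where it left off
print(p1["pc"])   # 104: P1's state is preserved for a later switch back
```

The overhead the text mentions is precisely this copying: every switch saves and restores all of this state, during which no useful user work is done.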
Dispatching:
5. Memory Management
This tutorial will teach you basic concepts related to Memory Management.
If you write a dynamically loaded program, the compiler compiles the program
and, for the modules you want to include dynamically, embeds only references;
the rest of the work is done at execution time.
At the time of loading, with static loading, the absolute program (and data) is
loaded into memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are stored on a
disk in relocatable form and are loaded into memory only when they are needed by
the program.
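Dynamic loading can be illustrated with Python's standard `importlib`, which resolves and loads a module only when asked to at run time (the choice of the stdlib `json` module is illustrative):

```python
import importlib

# The module name is resolved and the module brought into memory only when
# this line executes, mirroring dynamic loading: nothing is loaded until
# the program actually needs it.
module_name = "json"  # any installed module name would work here
mod = importlib.import_module(module_name)

print(mod.dumps({"loaded": True}))
```

The analogous mechanism in C on Unix is `dlopen`/`dlsym`, which loads a shared object and looks up symbols in it at run time.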
Static vs Dynamic Linking
As explained above, when static linking is used, the linker combines all other
modules needed by a program into a single executable program to avoid any
runtime dependency.
When dynamic linking is used, it is not required to link the actual module or library
with the program, rather a reference to the dynamic module is provided at the time
of compilation and linking. Dynamic Link Libraries (DLL) in Windows and Shared
Objects in Unix are good examples of dynamic libraries.
Swapping
Let us assume that the user process is 2048 KB in size and that the standard
hard disk where swapping takes place has a data transfer rate of about 1 MB
(1024 KB) per second. Transferring the 2048 KB process to or from memory then
takes 2048 KB / 1024 KB per second = 2 seconds, so a complete swap (out and
back in) takes about 4 seconds, ignoring seek and latency time.
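The swap-time arithmetic, using the figures above (2048 KB process, roughly 1 MB = 1024 KB per second; both numbers are the example's assumptions, not properties of any particular disk):

```python
process_size_kb = 2048      # size of the user process being swapped
transfer_rate_kb_s = 1024   # ~1 MB per second disk transfer rate

one_way_seconds = process_size_kb / transfer_rate_kb_s  # swap out OR swap in
total_seconds = 2 * one_way_seconds                     # full swap: out + in

print(one_way_seconds, total_seconds)  # 2.0 seconds each way, 4.0 total
```

Real swaps are slower still, since disk seek and rotational latency are ignored here.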
Memory Allocation
S.N.  Memory Allocation & Description

1. Single-partition allocation
In this type of allocation, the relocation-register scheme is used to protect
user processes from each other and from changes to operating-system code and
data. The relocation register contains the value of the smallest physical
address; the limit register contains the range of logical addresses. Each
logical address must be less than the limit register.

2. Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized
partitions, where each partition contains at most one process. When a partition
is free, a process is selected from the input queue and loaded into the free
partition. When the process terminates, the partition becomes available for
another process.
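Multiple-partition allocation can be sketched as a first-fit search over fixed-size partitions (partition sizes and process names are illustrative):

```python
# Three fixed-size partitions; each holds at most one process.
partitions = [{"size": 512, "pid": None} for _ in range(3)]

def load(pid, size):
    """Place a process into the first free partition large enough (first fit).
    Returns False when no partition fits: the process waits in the input queue."""
    for part in partitions:
        if part["pid"] is None and part["size"] >= size:
            part["pid"] = pid
            return True
    return False

def terminate(pid):
    """Free the partition held by a terminating process."""
    for part in partitions:
        if part["pid"] == pid:
            part["pid"] = None

load("P1", 300)
load("P2", 500)
terminate("P1")            # P1 finishes; its partition becomes free
ok = load("P3", 400)
print(ok)                  # True: P3 reuses the partition P1 released
```

Note the internal fragmentation this scheme causes: P1 occupied a 512 KB partition while needing only 300 KB, wasting the difference.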
Fragmentation
As processes are loaded into and removed from memory, the free memory space is
broken into little pieces. Over time, processes cannot be allocated to these
memory blocks because the blocks are too small, and the blocks remain unused.
This problem is known as fragmentation.
S.N.  Fragmentation & Description

1. External fragmentation
The total memory space is enough to satisfy a request or to hold a process,
but it is not contiguous, so it cannot be used.

2. Internal fragmentation
The memory block assigned to a process is bigger than the process requested.
Some portion of the block is left unused, and it cannot be used by another
process.
The following diagram shows how fragmentation can cause waste of memory and a
compaction technique can be used to create more free memory out of fragmented
memory −
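Compaction can be sketched as sliding all allocated blocks together so that the scattered holes merge into one usable free region (the block layout below is illustrative):

```python
def compact(blocks):
    """Slide all allocated blocks together so the free space that was
    scattered between them becomes one contiguous hole."""
    used = [b for b in blocks if b["pid"] is not None]
    free_total = sum(b["size"] for b in blocks if b["pid"] is None)
    return used + [{"pid": None, "size": free_total}]

memory = [
    {"pid": "P1", "size": 100},
    {"pid": None, "size": 50},   # hole: too small for a 100 KB request
    {"pid": "P2", "size": 200},
    {"pid": None, "size": 60},   # another hole, also too small alone
]

compacted = compact(memory)
print(compacted[-1]["size"])  # 110: one contiguous free block
```

Before compaction, a 100 KB request fails despite 110 KB being free (external fragmentation); afterwards it succeeds. The cost is that every moved process's addresses must be relocated, which is why compaction requires dynamic relocation.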
Virtual Memory:
Memory Allocation:
Device management
An essential part of an operating system is device management, which controls how
software applications interact with the hardware attached to the computer system.
It entails the process of locating, setting up, allocating, and managing access to
devices like printers, scanners, storage units, and network interfaces. The device
management system guarantees that the hardware devices are used effectively by
the system software and applications by providing a consistent and dependable
interface to the hardware devices. It also covers input/output control, error
handling, and interrupt management. The operating system may more effectively
utilize the resources at its disposal thanks to the device management system, which
also enhances the computer system's overall performance.
Device Drivers
Device drivers are software modules that allow the OS to interact with a
particular device. They abstract away hardware complexity so higher level
programs don’t need device specifics.
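The abstraction a driver provides can be sketched as a common interface that each device implements in its own way (the classes and behaviors below are illustrative stand-ins, not a real driver API):

```python
class DeviceDriver:
    """Common interface the OS uses; each driver hides its device's details."""

    def read(self):
        raise NotImplementedError

    def write(self, data):
        raise NotImplementedError

class KeyboardDriver(DeviceDriver):
    def read(self):
        return "key: A"  # stand-in for reading a hardware register

    def write(self, data):
        raise IOError("keyboard is input-only")

class PrinterDriver(DeviceDriver):
    def __init__(self):
        self.spool = []

    def read(self):
        raise IOError("printer is output-only")

    def write(self, data):
        self.spool.append(data)  # stand-in for sending bytes to the device

# OS-level code is identical regardless of which device sits behind the driver.
printer = PrinterDriver()
printer.write("report.txt")
print(len(printer.spool))  # 1: the job is queued for the device
```

Higher layers call only `read`/`write`; swapping in a different printer model means swapping the driver, not the callers.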
Configuration Repositories
Centralized repositories store device metadata, drivers, policies, and other
configuration details that components leverage.
Platform Stability
With unreliable devices, the entire system can suffer. Device management
maintains platform stability.
Reduced Support Costs
Automation and diagnostics resolve more device issues without user
intervention, reducing support costs.
For these reasons and more, every modern OS invests heavily in robust device
management.
Trojan Horse
A Trojan horse can secretly capture the login details of a system. A malicious
user can then use these credentials to enter the system posing as a legitimate
user and wreak havoc.
Trap Door
A trap door is a security breach that may be present in a system without the
knowledge of the users. It can be exploited to harm the data or files in a system by
malicious people.
Worm
A worm can destroy a system by using its resources to extreme levels. It can
generate multiple copies which claim all the resources and don't allow any other
processes to access them. A worm can shut down a whole network in this way.
Denial of Service
These attacks prevent legitimate users from accessing a system. The attacker
floods the system with requests until it is overwhelmed and cannot serve other
users properly.
Authentication
This deals with identifying each user in the system and making sure they are who
they claim to be. The operating system makes sure that all the users are
authenticated before they access the system. The different ways to make sure that
the users are authentic are:
Username/ Password
Each user has a distinct username and password combination and they need to
enter it correctly before they can access the system.
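In practice the system should never store the password itself; it stores a salted hash and compares hashes at login. A minimal sketch with Python's standard `hashlib` and `hmac` modules (the iteration count and example password are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash of the password; only this is stored on disk."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, stored_digest):
    """Re-derive the hash from the attempted password and compare."""
    _, digest = hash_password(password, salt)
    # compare_digest runs in constant time, resisting timing attacks.
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("s3cret")
print(verify("s3cret", salt, stored))  # True: correct password
print(verify("wrong", salt, stored))   # False: rejected
```

Because only the salt and digest are stored, an attacker who steals the credential file still cannot read the passwords directly.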
User Key/ User Card
The users need to punch a card into the card slot or use their individual key
on a keypad to access the system.
Secret Key
A hardware device can create a secret key related to the user id for login. This key
can change each time.
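A key that "changes each time" is typically derived from a shared secret and a moving counter, in the style of HOTP. A minimal sketch using only the standard `hmac` and `hashlib` modules (the secret, counter values, and 6-character code length are illustrative):

```python
import hashlib
import hmac

def one_time_code(secret: bytes, counter: int) -> str:
    """HMAC of a moving counter: each login uses a fresh code, so a
    captured code is useless for the next attempt."""
    msg = counter.to_bytes(8, "big")
    digest = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return digest[:6]

secret = b"device-shared-secret"
code1 = one_time_code(secret, 1)
code2 = one_time_code(secret, 2)
print(code1)
print(code2)  # differs from the first: the counter moved on
```

The hardware token and the server hold the same secret and counter, so the server can recompute and check the code without it ever being reused.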
Virtualization
Virtualization is the creation of a virtual version of an actual piece of technology,
such as an operating system (OS), a server, a storage device or a network resource.
Organizations that divide their hard drives into different partitions already engage in
virtualization. A partition is the logical division of a hard disk drive that, in effect,
creates two separate hard drives.
Xen hypervisor is an open source software program that manages the low-level
interactions that occur between virtual machines (VMs) and physical hardware. It
enables the simultaneous creation, execution and management of various VMs in
one physical environment. With the help of the hypervisor, the guest OS, which
normally interacts with true hardware, does so with a software emulation of that
hardware.
Although OSes running on true hardware often outperform those running on virtual
systems, most guest OSes and applications don't fully use the underlying hardware.
Virtualization removes dependency on a given hardware platform, creating greater
flexibility, control and isolation for environments. Plus, virtualization has spread
beyond servers to include applications, networks, data management and desktops.
Advantages of virtualization
The overall benefit of virtualization is that it helps organizations maximize output.
More specific advantages include the following:
Lower costs. Virtualization reduces the amount of hardware servers companies and
data centers require. This lowers the overall cost of buying and maintaining large
amounts of hardware.
Easier disaster recovery. DR is simple in a virtualized environment. Regular
snapshots provide up-to-date data, letting organizations easily back up and recover
VMs, avoiding unnecessary downtime. In an emergency, a virtual machine can
migrate to a new location within minutes.
Easier testing. Testing is less complicated in a virtual environment. Even in the
event of a large mistake, the test can continue without stopping and returning to
the beginning. The test simply returns to the previous snapshot and proceeds.
Faster backups. Virtualized environments take automatic snapshots throughout the
day to guarantee all data is up to date. VMs can easily migrate between host
machines and be efficiently redeployed.
Improved productivity. Virtualized environments require fewer physical resources,
which results in less time spent managing and maintaining servers. Tasks that take
days or weeks in physical environments are done in minutes. This lets staff
members spend their time on more productive tasks, such as raising revenue and
facilitating business initiatives.
Single-minded servers. Virtualization provides a cost-effective way to separate
email, database and web servers, creating a more comprehensive and dependable
system.
Optimize deployment and redeployment. When a physical server crashes, the
backup server might not always be ready or up to date. If this is the case, then the
redeployment process can be time-consuming and tedious. However, in a
virtualized data center, virtual backup tools expedite the process to minutes.
Reduced heat and improved energy savings. Companies that use a lot of hardware
servers risk overheating their physical computing resources. Virtualization
decreases the number of servers used for data management.
Environmental consciousness. Companies and data centers that use lots of
electricity for hardware have a large carbon footprint. Virtualization
significantly decreases the amount of cooling and power required, and with it
the overall carbon footprint.
Cloud migration. VMs can be deployed from the data center to build a cloud-based
IT infrastructure. The ability to embrace a cloud-based approach with virtualization
eases migration to the cloud.
Lack of vendor dependency. VMs are agnostic in terms of hardware configuration.
As a result, virtualizing hardware and software means that a company no longer
requires a single vendor for these physical resources.
Limitations of virtualization
Before converting to a virtualized environment, organizations should consider the
various limitations:
Costs. The investment required for virtualization software and hardware can be
expensive. If the existing infrastructure is more than five years old, organizations
should consider an initial renewal budget. Many businesses work with a managed
service provider to offset costs with monthly leasing and other purchase options.
Software licensing considerations. Vendors view software use within a virtualized
environment in different ways. It's important to understand how a specific vendor
does this.
Time and effort. Converting to virtualization takes time and has a learning curve
that requires IT staff to be trained in virtualization. Furthermore, some applications
don't adapt well to a virtual environment. IT staff must be prepared to face these
challenges and address them prior to converting.
Security. Virtualization comes with unique security risks. Data is a common target
for attacks, and the chance of a data breach increases with virtualization.
Complexity. In a virtual environment, users can lose control of what they can do
because several parts of the environment must collaborate to perform the same
task. If any part doesn't work, the entire operation can fail.