OS Notes
WEB TECHNOLOGY
SUBMITTED BY :
NAME: MANVITHA H
REG NO : 23MCAD19
UNIT 5: VIRTUALIZATION CONCEPT
• Cloud Computing Overview
• Virtualization
• Containers, Isolation
• Resource Management
• Security Issues
• Efficiency
• Storage
• Centralized vs. Decentralized
Cloud Computing Overview
• What is Cloud?
The term Cloud refers to a network or the Internet. In other words, the Cloud is something that is present at a remote location. The Cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN. Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.
• Deployment Models:
Deployment models define the type of access to the cloud, i.e., how the cloud is located. A cloud can have any of four types of access: Public, Private, Hybrid, and Community.
Public Cloud:
The public cloud allows systems and services to be easily accessible to the general public. A public cloud may be less secure because of its openness.
• Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is more secure because of its private nature.
• Community Cloud:
• The community cloud allows systems and services to be accessible by a group of organizations.
• Hybrid Cloud
• The hybrid cloud is a mixture of public and private cloud, in which the critical activities are
performed using private cloud while the non-critical activities are performed using public cloud.
Service Models
Cloud computing is based on service models. These are categorized into three basic service models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).
• Anything-as-a-Service (XaaS) is yet another service model, which includes Network-as-a-Service, Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service, and Strategy-as-a-Service.
• Infrastructure-as-a-Service (IaaS) is the most basic level of service. Each of the service models inherits the security and management mechanisms from the underlying model, as described below.
Infrastructure-as-a-Service (IaaS):
IaaS provides access to fundamental resources such as physical machines, virtual
machines, virtual storage, etc.
• Platform-as-a-Service (PaaS)
• PaaS provides the runtime environment for applications, development and deployment
tools, etc.
• Software-as-a-Service (SaaS)
• The SaaS model allows end-users to use software applications as a service.
• Benefits
• Cloud Computing has numerous advantages, such as lower infrastructure cost, on-demand scalability, and access to applications as utilities over the Internet.
Virtualization :
• Virtualization is a technique for separating a service from the underlying physical delivery of that service. It is the process of creating a virtual version of something, such as computer hardware. It was initially developed during the mainframe era. It involves using specialized software to create a virtual or software-created version of a computing resource rather than the actual version of the same resource. With the help of virtualization, multiple operating systems and applications can run on the same machine and hardware at the same time, increasing the utilization and flexibility of the hardware.
• Challenges of Cloud Computing:
• High Initial Investment: Clouds require a very high initial investment, although they also help reduce companies' costs over time.
• Learning New Infrastructure: As companies shift from servers to the cloud, they need highly skilled staff who can work with the cloud easily; for this, they have to hire new staff or train the current staff.
• Risk of Data: Hosting data on third-party resources can put the data at risk, with a chance of it being attacked by a hacker or cracker.
• Containers:
• Cloud containers are software code packages that contain an application’s code, its libraries,
and other dependencies that it needs to run in the cloud. Any software application code
requires additional files called libraries and dependencies before it can run. Traditionally,
software had to be packaged in multiple formats to run in different environments such as
Windows, Linux, Mac, and mobile. However, a container packages the software and all of its
dependencies into a single file that can run anywhere. Running the container in the cloud
provides additional flexibility and performance benefits at scale.
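As an illustration of how a containerized workload can be launched programmatically, the sketch below uses the Python Docker SDK (the docker package). It assumes the package is installed, a local Docker daemon is running, and that the image name used here is just an example.

```python
# A minimal sketch of running a container with the Python Docker SDK.
# Assumes `pip install docker` and a reachable Docker daemon; the image
# name below is only an example.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a short-lived container: the image bundles the interpreter and
# libraries, so the same command works on any host with a container runtime.
output = client.containers.run(
    "python:3.12-slim",                                    # example image
    ["python", "-c", "print('hello from a container')"],   # command to run
    remove=True,                                           # clean up afterwards
)
print(output.decode().strip())
```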
What are the benefits of cloud containers?
• Applications can consist of tens, hundreds, or even thousands of containers. With cloud
containers, you can distribute and manage these containers across many different cloud
servers or instances. The cloud containers function as if they were colocated. There are
many benefits to distributed cloud computing application architectures.
Portability:
• Containers are unique because you can use them to deploy software to almost any environment, without specifically bundling the software for the underlying architecture and operating systems. Before containerization became popular, applications had to be bundled with specific libraries to run on specific platforms. This meant that deploying a piece of software on multiple operating systems would result in multiple software versions. Cloud containers enable applications to run on any underlying architecture as long as the containerization platform runs over the top. Now, you need only one version of the production-grade container.
Flexibility:
• With cloud containerization, the underlying virtual machines (VM) are all cloud instances.
Cloud instances are available in various configurations, with fast spin-up, tear-down, and
on-demand cloud computing pricing. This reconfigurability means that you can swap
machines in and out as required, depending on the application’s demands. You can
optimize resource use by load-balancing container-based applications across various
cloud instances rather than individual servers.
• Resiliency:
• Cloud containers provide increased reliability and availability for applications. In a
distributed, containerized architecture, if a given machine fails, another can quickly spin
up the lost containers, strengthening the application’s resiliency. You can roll out a new version of a single container in the application with minimal disruption to the rest of the application. This results in longer uptimes.
• Scalability:
• In traditional application production environments, the application is limited by a single
server resource. Given the right application design and cloud containerization approach,
an application’s data processing and input/output are no longer throttled by single-server
limitations. They’re distributed among machines, so you can scale almost without limit and ensure consistent performance and user experience.
How can AWS support your cloud container requirements?
• Nearly 80 percent of all cloud containers run on Amazon Web Services (AWS) today.
AWS container services provide many system tools for managing your underlying
container infrastructure so that you can focus on innovation and your business needs.
• AWS Copilot is a command-line interface (CLI) for quickly launching and managing
containerized applications on AWS
• Amazon Elastic Container Service (Amazon ECS) is a fully managed container
orchestration service and system tool for efficiently deploying, managing, and scaling
containerized applications
• Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service
for running Kubernetes container orchestration in the AWS Cloud and on-premises
data centers.
• Amazon Elastic Container Registry (Amazon ECR) is a fully managed container
registry for easy management of and access to container images and artifacts
• AWS Fargate is a serverless compute engine for containers that you can use to focus
on developing applications instead of managing infrastructure
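As a hedged sketch of how these services are reached programmatically, the snippet below lists ECS clusters with boto3 (the AWS SDK for Python). It assumes boto3 is installed and that AWS credentials and a default region are already configured; it is only an illustration, not a prescribed workflow.

```python
# A minimal sketch of talking to Amazon ECS from Python with boto3.
# Assumes `pip install boto3` and that AWS credentials/region are configured
# (e.g. via environment variables or ~/.aws/credentials).
import boto3

ecs = boto3.client("ecs")

# List the ARNs of the ECS clusters visible to these credentials.
response = ecs.list_clusters()
for cluster_arn in response.get("clusterArns", []):
    print(cluster_arn)
```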
What Is Data Isolation?
• Along with ransomware, data theft and sabotage represent urgent risks for
organizations. Cybercriminals leverage stolen data for fraud, identity theft,
and extortion. These challenges create a mandate for vendors to introduce
and organizations to adopt strong security for data management. In practice,
that means a data management platform that can resist and actively defend
against data-centric threats. To support these requirements, new data isolation
technology and techniques have emerged as key capabilities to support cyber
resiliency, including isolated or air gapped backup data stored in the cloud or
at another location with temporary, but highly secure, connections.
• Data isolation is a security strategy based on the idea that disconnecting data
from the network and creating physical distance between it and the rest of the
organization’s IT environment can add an impenetrable barrier against
harmful events or people. With data isolation (via a cloud air gap, an air gapped backup, or an air gapped copy of data), it becomes extraordinarily difficult to access, steal, or corrupt data.
• In situations where valuable data has been destroyed or encrypted for
ransom, organizations that practice data isolation can remain resilient because
they always have a pristine copy of data that has been safely kept separate
from the compromised environment.
How Isolation Works:
• Organizations can implement varying degrees of data isolation. These range from
completely disconnecting systems (physically and virtually) to having transient
network connections coupled with layered access controls. The key is to balance the
isolation method with business continuity needs. Each isolation technique, boosted by air gap technology, must support the organization’s RTO/RPO objectives (recovery time and recovery point objectives).
• Innovative isolation solutions that leverage strong access controls and temporary
network connections have emerged because complete physical and electronic
isolation (i.e., the textbook air gap definition) does not support most needs for
today’s enterprises.
• Types of Resources:
• Physical Resources: Computers, disks, databases, networks, etc.
A Grid strictly manages the workload of its computing nodes. A local resource manager, such as Portable Batch System, Condor, or Sun Grid Engine, manages the compute resources for a Grid site and identifies the user permitted to run a job.
Data Model:
• Data is Stored at an Untrusted Host: Although it may not seem the best policy to store data where others could use it without permission, moving data off-premises increases the number of potential security risks.
• Data Replication over Large Areas: Making sure data is available and durable whenever demanded is of utmost importance for cloud storage providers. Data availability and durability are typically achieved through under-the-covers replication, i.e., data is automatically replicated without customer intervention or explicit requests.
• Problems with Data Management: Transactional data management is one of the biggest data management problems. It is hard to ensure that Atomicity, Consistency, Isolation, and Durability are maintained during data replication over large distances. It is also risky to store such sensitive data in untrusted storage.
• Security Issues in Cloud Computing:
In this section, we give an overview of cloud computing and its need, and mainly focus on the security issues in Cloud Computing. Let’s discuss them one by one.
Cloud Computing:
Cloud Computing is a type of technology that provides remote services over the Internet to manage, access, and store data rather than storing it on servers or local drives. It is sometimes loosely called serverless technology, since users do not manage the servers themselves. The data can be anything, such as images, audio, video, documents, or other files.
• Need of Cloud Computing:
Before cloud computing, most large as well as small IT companies used traditional methods, i.e., they stored data on servers and needed a separate server room for that. A server room requires a database server, a mail server, firewalls, routers, modems, high-speed network devices, and so on, and IT companies had to spend a lot of money on it. To reduce these problems and costs, cloud computing came into existence, and most companies have shifted to this technology.
• Data Loss –
Data Loss is one of the issues faced in Cloud Computing. It is also known as Data Leakage. Our sensitive data is in the hands of somebody else, and we do not have full control over our database. So, if the security of the cloud service is breached by hackers, they may gain access to our sensitive data or personal files.
• Interference of Hackers and Insecure API’s –
When we talk about the cloud and its services, we are talking about the Internet, and the easiest way to communicate with the cloud is through APIs. It is therefore important to protect the interfaces and APIs that are used by external users. In addition, a few cloud services are available in the public domain; these are a vulnerable part of Cloud Computing because they may be accessed by third parties, and hackers can use such services to harm or steal our data.
• User Account Hijacking –
Account Hijacking is the most serious security issue in Cloud Computing. If the account of a user or an organization is somehow hijacked by a hacker, the hacker gains full authority to perform unauthorized activities.
• Changing Service Provider –
Vendor lock-in is also an important security issue in Cloud Computing. Many organizations face problems when shifting from one vendor to another. For example, if an organization wants to move from AWS to Google Cloud, it must migrate all of its data, and because the two cloud services use different techniques and functions, further problems arise. The charges of AWS may also differ from those of Google Cloud.
• Lack of Skill –
Day-to-day work, shifting to another service provider, needing an extra feature, or figuring out how to use a feature are the main problems faced by IT companies that do not have skilled employees. Working with Cloud Computing therefore requires skilled staff.
• Denial of Service (DoS) attack –
This type of attack occurs when the system receives too much traffic. DoS attacks mostly target large organizations such as the banking sector, government sector, etc. When a DoS attack occurs, data may be lost, and recovering it requires a great amount of money as well as time.
• Shared Resources:
Cloud computing relies on a shared infrastructure. If one customer’s data or applications are
compromised, it may potentially affect other customers sharing the same resources, leading to a breach of
confidentiality or integrity.
• Compliance and Legal Issues:
Different industries and regions have specific regulatory requirements for data handling and storage.
Ensuring compliance with these regulations can be challenging when data is stored in a cloud environment
that may span multiple jurisdictions.
• Data Encryption:
While data in transit is often encrypted, data at rest can be susceptible to breaches. It’s crucial to ensure that
data stored in the cloud is properly encrypted to prevent unauthorized access.
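As one illustration of encrypting data on the client side before it is placed in cloud storage, the sketch below uses the cryptography package’s Fernet symmetric scheme. The package choice, the key handling, and the sample payload are assumptions made for the example, not a prescribed design.

```python
# A minimal sketch of client-side encryption before uploading data to the cloud.
# Assumes the `cryptography` package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, keep this in a key management service
cipher = Fernet(key)

plaintext = b"sensitive customer record"
token = cipher.encrypt(plaintext)        # this ciphertext is what gets stored in the cloud

# Only a holder of the key can recover the original data.
assert cipher.decrypt(token) == plaintext
```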
• Insider Threats:
Employees or service providers with access to cloud systems may misuse their privileges, intentionally or
unintentionally causing data breaches. Proper access controls and monitoring are essential to mitigate these
threats.
• Data Location and Sovereignty:
Knowing where your data physically resides is important for compliance and security. Some cloud providers
store data in multiple locations globally, and this may raise concerns about data sovereignty and who has
access to it.
• Loss of Control:
When using a cloud service, you are entrusting a third party with your data and applications. This loss of
direct control can lead to concerns about data ownership, access, and availability.
• Incident Response and Forensics:
Investigating security incidents in a cloud environment can be complex. Understanding what happened and
who is responsible can be challenging due to the distributed and shared nature of cloud services.
• Data Backup and Recovery:
Relying on cloud providers for data backup and recovery can be risky. It’s essential to have a robust backup
and recovery strategy in place to ensure data availability in case of outages or data loss.
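As a rough sketch of one piece of such a strategy (keeping an independent copy of important files in object storage), the snippet below uploads and re-downloads a file with boto3. The bucket name and file paths are hypothetical, and a real strategy would also cover scheduling, retention, and restore testing.

```python
# A minimal sketch of backing up a file to S3 and restoring it with boto3.
# The bucket name and paths are hypothetical; AWS credentials are assumed configured.
import boto3

s3 = boto3.client("s3")

# Back up: copy a local file into an S3 bucket under a dated key.
s3.upload_file("orders.db", "example-backup-bucket", "backups/2024-01-01/orders.db")

# Restore: fetch the same object back to a local path.
s3.download_file("example-backup-bucket", "backups/2024-01-01/orders.db", "orders-restored.db")
```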
• Vendor Security Practices:
The security practices of cloud service providers can vary. It’s essential to thoroughly assess the security
measures and certifications of a chosen provider to ensure they meet your organization’s requirements.
IoT Devices and Edge Computing:
The proliferation of IoT devices and edge computing can increase the attack surface. These devices often have
limited security controls and can be targeted to gain access to cloud resources.
Social Engineering and Phishing:
Attackers may use social engineering tactics to trick users or cloud service providers into revealing sensitive
information or granting unauthorized access.
Inadequate Security Monitoring:
Without proper monitoring and alerting systems in place, it’s challenging to detect and respond to security
incidents in a timely manner.
Efficiency:
Operating system efficiency is a crucial factor that affects the performance, reliability, and usability of any computer system. But how do you measure it? This section covers some common metrics and methods that can help you evaluate and compare different operating systems based on their efficiency.
• CPU utilization
One of the most basic and important metrics of operating system efficiency is
CPU utilization, which measures how well the operating system manages the processor
resources. CPU utilization is the percentage of time that the CPU is busy executing
processes, as opposed to being idle or waiting for input/output. A high CPU utilization
means that the operating system is making good use of the processor, while a low CPU
utilization means that the operating system is wasting processor cycles or facing
bottlenecks. CPU utilization can be measured using tools like top, htop, or Task Manager.
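For a programmatic reading of the same metric, the sketch below samples CPU utilization with the cross-platform psutil library; using psutil here is an assumption for the example, and the interactive tools named above report the same information.

```python
# A minimal sketch of sampling CPU utilization from Python.
# Assumes the `psutil` package is installed (pip install psutil).
import psutil

# Percentage of time the CPU was busy over a 1-second sampling window.
busy_percent = psutil.cpu_percent(interval=1)
print(f"CPU utilization: {busy_percent:.1f}%")

# Per-core figures make it easier to spot a single-threaded bottleneck.
print("Per core:", psutil.cpu_percent(interval=1, percpu=True))
```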
• Memory usage
Another key metric of operating system efficiency is memory usage, which
measures how well the operating system manages the memory resources. Memory
usage is the amount of physical or virtual memory that the operating system allocates to
processes, as well as the amount of memory that is free or available. A low memory
usage means that the operating system is optimizing the memory allocation and
avoiding memory leaks, while a high memory usage means that the operating system is
consuming too much memory or suffering from memory fragmentation. Memory usage
can be measured using tools like free, vmstat, or Performance Monitor.
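As a hedged companion to the tools above, the snippet below reads the same memory figures with psutil (again an assumed third-party package, used only for illustration).

```python
# A minimal sketch of reading memory usage from Python (assumes psutil is installed).
import psutil

mem = psutil.virtual_memory()
print(f"Total:     {mem.total / 2**30:.2f} GiB")
print(f"Available: {mem.available / 2**30:.2f} GiB")
print(f"In use:    {mem.percent:.1f}%")
```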
• Throughput and latency:
Another way to measure operating system efficiency is to look at the throughput and latency
of the system, which measure how fast and responsive the operating system is. Throughput is the
amount of work that the operating system can complete in a given time, such as the number of
processes executed, the number of requests served, or the amount of data transferred. Latency is
the delay or time that the operating system takes to respond to a request, such as the time to
start a process, the time to switch between processes, or the time to access a file. A high
throughput and a low latency mean that the operating system is efficient and agile, while a low
throughput and a high latency mean that the operating system is inefficient and sluggish.
Throughput and latency can be measured using tools like ping, traceroute, or benchmarking
software.
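The sketch below shows the basic arithmetic behind both metrics by timing a placeholder operation: latency is the time per request, and throughput is the number of requests completed per unit time. The operation being timed is purely illustrative.

```python
# A minimal sketch of measuring latency and throughput for a placeholder operation.
import time

def do_request():
    # Stand-in for real work (starting a process, serving a request, reading a file).
    sum(range(10_000))

N = 1_000
start = time.perf_counter()
for _ in range(N):
    do_request()
elapsed = time.perf_counter() - start

print(f"Average latency: {elapsed / N * 1e6:.1f} microseconds per request")
print(f"Throughput:      {N / elapsed:.0f} requests per second")
```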
Reliability and availability
Another aspect of operating system efficiency is reliability and availability, which
measure how well the operating system can handle errors and failures. Reliability is the
probability that the operating system will function correctly and without faults, such as
crashes, hangs, or data loss. Availability is the percentage of time that the operating
system is operational and accessible, as opposed to being down or unavailable. A high
reliability and availability mean that the operating system is robust and dependable, while
a low reliability and availability mean that the operating system is fragile and unreliable.
Reliability and availability can be measured using tools like uptime, dmesg, or event logs.
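Availability is usually expressed as the fraction of time the system is up. The short calculation below shows the arithmetic, using made-up uptime and downtime figures.

```python
# A minimal sketch of the availability calculation with hypothetical figures.
uptime_hours = 8_754.0    # hours the system was operational in a year (example value)
downtime_hours = 6.0      # hours of outages in the same year (example value)

availability = uptime_hours / (uptime_hours + downtime_hours)
print(f"Availability: {availability:.4%}")   # roughly 99.93% for these numbers
```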
• User satisfaction:
Finally, one of the most subjective but also important metrics of operating system
efficiency is user satisfaction, which measures how well the operating system meets the
needs and expectations of the users. User satisfaction is influenced by factors such as
the usability, functionality, security, and compatibility of the operating system, as well
as the user's preferences, habits, and feedback. A high user satisfaction means that the
operating system is satisfying and enjoyable, while a low user satisfaction means that
the operating system is frustrating and disappointing. User satisfaction can be
measured using tools like surveys, ratings, or reviews.
Storage:
• In the storage hierarchy, the storage systems above the Electronic disk are Volatile, whereas those below are Non-Volatile.
• An Electronic disk can be designed to be either Volatile or Non-Volatile. During normal
operation, the electronic disk stores data in a large DRAM array, which is Volatile. But many
electronic disk devices contain a hidden magnetic hard disk and a battery for backup power. If
external power is interrupted, the electronic disk controller copies the data from RAM to the
magnetic disk. When external power is restored, the controller copies the data back into the RAM.
• The design of a complete memory system must balance all the factors. It must use only as much
expensive memory as necessary while providing as much inexpensive, Non-Volatile memory as
possible. Caches can be installed to improve performance where a large access-time or transfer-
rate disparity exists between two components.
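To make this cost/performance trade-off concrete, the short calculation below estimates the effective access time when a small fast cache sits in front of a slower component; the hit ratio and access times are hypothetical numbers chosen only for illustration.

```python
# A minimal sketch of the effective-access-time calculation for a two-level hierarchy.
# All figures are hypothetical and only illustrate the trade-off.
hit_ratio = 0.95      # fraction of accesses served by the fast (expensive) level
fast_ns   = 10.0      # access time of the fast level, in nanoseconds
slow_ns   = 100.0     # access time of the slow (inexpensive) level, in nanoseconds

effective_ns = hit_ratio * fast_ns + (1 - hit_ratio) * slow_ns
print(f"Effective access time: {effective_ns:.1f} ns")   # 14.5 ns for these numbers
```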
Difference between Centralization and Decentralization
According to Henri Fayol,” Everything which goes to increase the importance of a subordinate’s role is
decentralization, everything that goes to reduce it is centralization”.
What is Centralization?
Centralization refers to the concentration of authority at the top level of the organisation. It is the
systematic and consistent reservation of authority at the central points within an organisation. In a centralized
organisation, managers at the lower level have a limited role in decision-making. They just have to execute the
orders and decisions of the top level.
What is Decentralization?
Decentralization means the dispersal of authority throughout the organisation. It refers to a systematic effort to delegate to the lowest levels all authority except that which must be exercised at central points. In a decentralized organisation, the authority for major decisions is vested in the top management, and the remaining authority is delegated to the middle and lower levels.
Centralization Decentralization