
MASTERS OF COMPUTER APPLICATION

WEB TECHNOLOGY

SUBMITTED BY :
NAME: MANVITHA H
REG NO : 23MCAD19
UNIT 5: VIRTUALIZATION CONCEPT
• Cloud Computing Overview
• Virtualization
• Containers, Isolation
• Resource Management
• Security Issues
• Efficiency
• Storage
• Centralized vs. Decentralized
Cloud Computing Overview
• What is Cloud?
The term Cloud refers to a network or the Internet. In other words, the Cloud is something that is present at a remote location. The Cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN. Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.

• What is Cloud Computing?

Cloud computing refers to manipulating, configuring, and accessing hardware and software resources remotely. It offers online data storage, infrastructure, and applications.
Cloud computing offers platform independence, as the software does not need to be installed locally on the PC. Hence, cloud computing makes our business applications mobile and collaborative.
Basic Concepts:
• There are certain services and models working behind the scenes that make cloud computing feasible and accessible to end users. The following are the working models for cloud computing:

• Deployment Models:
Deployment models define the type of access to the cloud, i.e., how the cloud is accessed. A cloud can have any of four types of access: Public, Private, Hybrid, and Community.
Public Cloud:
The public cloud allows systems and services to be easily accessible to the general public. Public cloud
may be less secure because of its openness.

• Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is
more secure because of its private nature.

• Community Cloud:
• The community cloud allows systems and services to be accessible by a group of organizations.

• Hybrid Cloud
• The hybrid cloud is a mixture of public and private cloud, in which the critical activities are
performed using private cloud while the non-critical activities are performed using public cloud.
Service Models
Cloud computing is based on service models. These are categorized into three basic service models:

• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)

Anything-as-a-Service (XaaS) is yet another service model, which includes Network-as-a-Service, Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service, and Strategy-as-a-Service.

• Infrastructure-as-a-Service (IaaS) is the most basic level of service. Each of the service models inherits the security and management mechanisms of the model beneath it.
Infrastructure-as-a-Service (IaaS):
IaaS provides access to fundamental resources such as physical machines, virtual
machines, virtual storage, etc.
• Platform-as-a-Service (PaaS)
• PaaS provides the runtime environment for applications, development and deployment
tools, etc.
• Software-as-a-Service (SaaS)
• The SaaS model allows end users to use software applications as a service.

• Benefits
• Cloud computing has numerous advantages, including platform independence, lower infrastructure cost, remote access, rapid scalability, and improved collaboration.
Virtualization:
• Virtualization is a technique for separating a service from the underlying physical delivery of that service. It is the process of creating a virtual version of something, such as computer hardware. It was initially developed during the mainframe era. It involves using specialized software to create a virtual, software-based version of a computing resource rather than using the actual resource itself. With the help of virtualization, multiple operating systems and applications can run on the same machine and the same hardware at the same time, increasing the utilization and flexibility of the hardware.

• In other words, one of the main cost-effective, hardware-reducing, and energy-saving techniques used by cloud providers is virtualization. Virtualization allows a single physical instance of a resource or an application to be shared among multiple customers and organizations at one time. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand. The term virtualization is often synonymous with hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing. Moreover, virtualization technologies provide a virtual environment not only for executing applications but also for storage, memory, and networking.
• Host Machine: The machine on which the virtual machine is built is known as the Host Machine.

• Guest Machine: The virtual machine itself is referred to as the Guest Machine.


How Virtualization Works in Cloud Computing
• Virtualization has a prominent impact on cloud computing. Users store data in the cloud, but with virtualization they gain the additional benefit of sharing the underlying infrastructure. Cloud vendors take care of the required physical resources, but they charge a significant amount for these services, which affects every user and organization. Virtualization helps users and organizations maintain the services they need through external (third-party) providers, which helps reduce costs for the company. This is how virtualization works in cloud computing.

• Benefits of Virtualization:

• More flexible and efficient allocation of resources.
• Enhanced development productivity.
• Lower cost of IT infrastructure.
• Remote access and rapid scalability.
• High availability and disaster recovery.
• Pay-per-use of the IT infrastructure, on demand.
• Enables running multiple operating systems.
• Drawbacks of Virtualization:

• High Initial Investment: Clouds require a very high initial investment, although over time they help reduce a company's costs.
• Learning New Infrastructure: As companies shift from on-premises servers to the cloud, they need staff skilled in working with the cloud, which means hiring new staff or training current staff.
• Risk of Data: Hosting data on third-party resources can put the data at risk, as it has a greater chance of being attacked by a hacker or cracker.

• Containers:
• Cloud containers are software code packages that contain an application’s code, its libraries,
and other dependencies that it needs to run in the cloud. Any software application code
requires additional files called libraries and dependencies before it can run. Traditionally,
software had to be packaged in multiple formats to run in different environments such as
Windows, Linux, Mac, and mobile. However, a container packages the software and all of its
dependencies into a single file that can run anywhere. Running the container in the cloud
provides additional flexibility and performance benefits at scale.
What are the benefits of cloud containers?
• Applications can consist of tens, hundreds, or even thousands of containers. With cloud
containers, you can distribute and manage these containers across many different cloud
servers or instances. The cloud containers function as if they were colocated. There are
many benefits to distributed cloud computing application architectures.

• Simplified application deployment

Containers are unique because you can use them to deploy software to almost any
environment—without specifically bundling the software for the underlying architecture and
operating systems. Before containerization became popular, applications had to be bundled
with specific libraries to run on specific platforms. This meant that deploying a piece of
software on multiple operating systems would result in multiple software versions. Cloud
containers enable applications to run on any underlying architecture as long as the
containerization platform runs over the top. Now, you need only one version of the
production-grade container.
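To make this concrete, here is a minimal sketch (using the third-party Docker SDK for Python, with an arbitrary echo command as the example workload) of running the same packaged image unchanged on any host that has a container runtime; it assumes Docker is installed and the daemon is running locally.

```python
# Minimal sketch: run a packaged container image unchanged on a Docker host.
# Assumes the Docker daemon is running and the `docker` Python package is installed.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The image bundles the application plus its libraries and dependencies,
# so no platform-specific packaging is needed on this host.
output = client.containers.run("alpine:3.19", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```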
Flexibility:
• With cloud containerization, the underlying virtual machines (VM) are all cloud instances.
Cloud instances are available in various configurations, with fast spin-up, tear-down, and
on-demand cloud computing pricing. This reconfigurability means that you can swap
machines in and out as required, depending on the application’s demands. You can
optimize resource use by load-balancing container-based applications across various
cloud instances rather than individual servers.
• Resiliency:
• Cloud containers provide increased reliability and availability for applications. In a
distributed, containerized architecture, if a given machine fails, another can quickly spin
up the lost containers, strengthening the application’s resiliency. You can update a new
version of a single container in the application with minimal disruption to the rest of the
application. This results in longer uptimes.
• Scalability:
• In traditional application production environments, the application is limited by a single
server resource. Given the right application design and cloud containerization approach,
an application’s data processing and input/output are no longer throttled by single-server
limitations. They’re distributed among machines, so you can scale unlimitedly and ensure
consistent performance and user experience.
How can AWS support your cloud container requirements?

• Nearly 80 percent of all cloud containers run on Amazon Web Services (AWS) today.
AWS container services provide many system tools for managing your underlying
container infrastructure so that you can focus on innovation and your business needs.
• AWS Copilot is a command-line interface (CLI) for quickly launching and managing containerized applications on AWS.
• Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service and system tool for efficiently deploying, managing, and scaling containerized applications.
• Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service for running Kubernetes container orchestration in the AWS Cloud and in on-premises data centers.
• Amazon Elastic Container Registry (Amazon ECR) is a fully managed container registry for easy management of and access to container images and artifacts.
• AWS Fargate is a serverless compute engine for containers that you can use to focus on developing applications instead of managing infrastructure.
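As a hedged illustration of working with these services programmatically, the sketch below uses boto3 (the AWS SDK for Python) to list Amazon ECS clusters and their running task counts; it assumes AWS credentials and a default region are already configured in the environment, and whatever clusters it finds are simply those in your account.

```python
# Sketch: inspect Amazon ECS clusters with boto3.
# Assumes AWS credentials and a default region are configured in the environment.
import boto3

ecs = boto3.client("ecs")

cluster_arns = ecs.list_clusters()["clusterArns"]
if cluster_arns:
    # Fetch details such as the number of running tasks per cluster.
    for cluster in ecs.describe_clusters(clusters=cluster_arns)["clusters"]:
        print(cluster["clusterName"], "running tasks:", cluster["runningTasksCount"])
else:
    print("No ECS clusters found in this account/region.")
```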
What Is Data Isolation?

• Data isolation is the physical, network, and operational separation of data to keep it safe from external cyberattacks and internal threats, and it can take many forms. While traditional air gaps isolate data physically and electronically, yielding strong security, they do not support the recovery time objectives (RTOs) or recovery point objectives (RPOs) of today's 24/7 organizations.

• The solution is a modern data isolation strategy with 'virtual air gap' technology that protects backups with temporary network connections and very strong access controls, while further isolating data with the cloud as needed. This method provides a tamper-resistant environment with the extra protection needed to ward off ransomware and insider threats.
Why Is Data Isolation Important?

• Along with ransomware, data theft and sabotage represent urgent risks for
organizations. Cybercriminals leverage stolen data for fraud, identity theft,
and extortion. These challenges create a mandate for vendors to introduce
and organizations to adopt strong security for data management. In practice,
that means a data management platform that can resist and actively defend
against data-centric threats. To support these requirements, new data isolation
technology and techniques have emerged as key capabilities to support cyber
resiliency, including isolated or air gapped backup data stored in the cloud or
at another location with temporary, but highly secure, connections.
• Data isolation is a security strategy based on the idea that disconnecting data from the network and creating physical distance between it and the rest of the organization's IT environment can add an impenetrable barrier against harmful events or people. With data isolation (via a cloud air gap, air-gapped backup, or an air-gapped copy of data) it becomes extraordinarily difficult to access, steal, or corrupt data.
• In situations where valuable data has been destroyed or encrypted for ransom, organizations that practice data isolation can remain resilient because they always have a pristine copy of data that has been safely kept separate from the compromised environment.
How Isolation Works:
• Organizations can implement varying degrees of data isolation. These range from
completely disconnecting systems (physically and virtually) to having transient
network connections coupled with layered access controls. The key is to balance the
isolation method with business continuity needs. Each isolation technique boosted by
air gap technology must support the organization’s RTO/RPO objectives.
• Innovative isolation solutions that leverage strong access controls and temporary
network connections have emerged because complete physical and electronic
isolation (i.e., the textbook air gap definition) does not support most needs for
today’s enterprises.

What Is Isolation in Cloud Computing or Cloud Air Gap?


• Cloud computing is becoming a popular method for enterprises to ensure data isolation. By trusting a public cloud provider to protect replicated data that can only be accessed through a secure connection brought up and torn down in the same instance, enterprises gain confidence. Should ransomware or disaster strike, their off-site data (or air-gapped data copy) would be available in near real time in a cloud air gap.
Resource Management Models in Cloud Computing
• The term resource management refers to the operations used to control how capabilities provided by Cloud
resources and services are made available to other entities, whether users, applications, or services.

• Types of Resources:
• Physical Resource: Computer, disk, database, network, etc.

• Logical Resource: Execution, monitoring, and communication between applications.

Resource Management in a Cloud Computing Environment:

From the Cloud Vendor's View:
• Provision resources on an on-demand basis.
• Maintain energy conservation and proper utilization in cloud data centers.
From the Cloud Service Provider's View:
• Make the best-performing resources available at the lowest cost.
• Deliver QoS (Quality of Service) to cloud users.
From the Cloud User's View:
• Rent resources at a low price without compromising performance.
• The cloud provider guarantees a minimum level of service to the user.
Resource Management Models:
Compute Model
A resource in the cloud is shared by all users at the same time. The model allows a user to reserve a VM's memory, ensuring that the memory size requested by the VM is always available so the VM can operate on the cloud with a good enough level of QoS (Quality of Service) delivered to the end user.

The Grid computing model, by contrast, strictly manages the workload of compute nodes. Local resource managers such as Portable Batch System, Condor, and Sun Grid Engine manage the compute resources for a Grid site and identify which user is allowed to run a job.
Data Model:

• It is related to plotting, separating, querying, transferring, caching, and replicating data.

• Data is Stored at an Un-Trusted Host: Although it may not seem the best policy to store data where others could use it without permission, moving data off-premises increases the number of potential security risks.
• Data Replication over Large Areas: Making sure data is available and durable whenever demanded is of utmost importance for cloud storage providers. Data availability and durability are typically achieved through under-the-covers replication, i.e., data is automatically replicated without customer interference or requests.
• Problems with Data Management: Transactional data management is one of the biggest data management problems. It is hard to ensure that Atomicity, Consistency, Isolation, and Durability are maintained during data replication over large distances. It is also risky to store such sensitive data in untrusted storage.
• Security Issues in Cloud Computing:
This section gives an overview of cloud computing, the need for it, and, mainly, the security issues in cloud computing, discussed one by one.
Cloud Computing:
Cloud computing is a type of technology that provides remote services over the internet to manage, access, and store data rather than storing it on servers or local drives. This technology is also sometimes loosely referred to as serverless technology. Here the data can be anything: images, audio, video, documents, files, etc.
• Need for Cloud Computing:
Before cloud computing, most large as well as small IT companies used traditional methods, i.e., they stored data on servers and needed a separate server room for that. A server room requires database servers, mail servers, firewalls, routers, modems, high-speed network devices, etc., and IT companies had to spend a lot of money on all of this. To reduce these costs and problems, cloud computing came into existence, and most companies have shifted to this technology.

• Security Issues in Cloud Computing:

There is no doubt that cloud computing provides various advantages, but there are also security issues, some of which are described below.

• Data Loss –
Data loss is one of the issues faced in cloud computing. It is also known as data leakage. Our sensitive data is in the hands of somebody else, and we do not have full control over our database. If the security of the cloud service is broken by hackers, they may get access to our sensitive data or personal files.
• Interference of Hackers and Insecure APIs –
When we talk about the cloud and its services, we are talking about the Internet, and the easiest way to communicate with the cloud is through APIs. It is therefore important to protect the interfaces and APIs used by external users. In addition, some cloud services are available in the public domain; these are a vulnerable part of cloud computing because they may be accessed by third parties, and hackers can use such services to attack or harm our data.

• User Account Hijacking –
Account hijacking is one of the most serious security issues in cloud computing. If the account of a user or an organization is hijacked by a hacker, the hacker has full authority to perform unauthorized activities.
• Changing Service Provider –
Vendor lock-in is also an important security issue in cloud computing. Many organizations face problems while shifting from one vendor to another. For example, if an organization wants to shift from AWS to Google Cloud, it faces problems such as moving all of its data, and because the two cloud services have different techniques and functions, it faces problems with those as well. The charges of AWS may also differ from those of Google Cloud, etc.
• Lack of Skill –
Day-to-day work with the cloud, shifting to another service provider, needing an extra feature, or understanding how to use a feature are common problems in IT companies that do not have skilled employees. Working with cloud computing therefore requires skilled personnel.
• Denial of Service (DoS) attack –
This type of attack occurs when the system receives too much traffic. DoS attacks mostly target large organizations such as the banking sector, government sector, etc. When a DoS attack occurs, data may be lost, and recovering from it requires a great amount of money as well as time.
• Shared Resources:
Cloud computing relies on a shared infrastructure. If one customer’s data or applications are
compromised, it may potentially affect other customers sharing the same resources, leading to a breach of
confidentiality or integrity.
• Compliance and Legal Issues:
Different industries and regions have specific regulatory requirements for data handling and storage.
Ensuring compliance with these regulations can be challenging when data is stored in a cloud environment
that may span multiple jurisdictions.
• Data Encryption:
While data in transit is often encrypted, data at rest can be susceptible to breaches. It is crucial to ensure that data stored in the cloud is properly encrypted to prevent unauthorized access (a small illustration of encrypting data at rest follows this list).
• Insider Threats:
Employees or service providers with access to cloud systems may misuse their privileges, intentionally or
unintentionally causing data breaches. Proper access controls and monitoring are essential to mitigate these
threats.
• Data Location and Sovereignty:
Knowing where your data physically resides is important for compliance and security. Some cloud providers
store data in multiple locations globally, and this may raise concerns about data sovereignty and who has
access to it.
• Loss of Control:
When using a cloud service, you are entrusting a third party with your data and applications. This loss of
direct control can lead to concerns about data ownership, access, and availability .
• Incident Response and Forensics:
Investigating security incidents in a cloud environment can be complex. Understanding what happened and
who is responsible can be challenging due to the distributed and shared nature of cloud services.
• Data Backup and Recovery:
Relying on cloud providers for data backup and recovery can be risky. It is essential to have a robust backup and recovery strategy in place to ensure data availability in case of outages or data loss.
• Vendor Security Practices:
The security practices of cloud service providers can vary. It’s essential to thoroughly assess the security
measures and certifications of a chosen provider to ensure they meet your organization’s requirements .
• IoT Devices and Edge Computing:
The proliferation of IoT devices and edge computing can increase the attack surface. These devices often have limited security controls and can be targeted to gain access to cloud resources.
• Social Engineering and Phishing:
Attackers may use social engineering tactics to trick users or cloud service providers into revealing sensitive information or granting unauthorized access.
• Inadequate Security Monitoring:
Without proper monitoring and alerting systems in place, it is challenging to detect and respond to security incidents in a timely manner.
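As promised under Data Encryption above, here is a minimal sketch of encrypting data before it is written to cloud storage, using the third-party `cryptography` package; the record contents and the in-script key are made-up examples, and a real deployment would fetch keys from a managed key service.

```python
# Sketch: encrypt data at rest before uploading it to cloud storage.
# Key handling is deliberately simplified; use a managed key service in practice.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # illustration only; normally fetched from a key manager
cipher = Fernet(key)

plaintext = b"customer record: account=1234"   # hypothetical sensitive data
ciphertext = cipher.encrypt(plaintext)         # only the ciphertext is stored in the cloud

# An authorized reader holding the key can later recover the original data.
assert cipher.decrypt(ciphertext) == plaintext
```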
Efficiency:
Operating system efficiency is a crucial factor that affects the performance, reliability, and usability of any computer system. But how do you measure it? This section covers some common metrics and methods that can help you evaluate and compare different operating systems based on their efficiency.
• CPU utilization
One of the most basic and important metrics of operating system efficiency is
CPU utilization, which measures how well the operating system manages the processor
resources. CPU utilization is the percentage of time that the CPU is busy executing
processes, as opposed to being idle or waiting for input/output. A high CPU utilization
means that the operating system is making good use of the processor, while a low CPU
utilization means that the operating system is wasting processor cycles or facing
bottlenecks. CPU utilization can be measured using tools like top, htop, or Task Manager.
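As a small sketch of the same measurement in code (using the third-party psutil package, which reports figures comparable to top or Task Manager):

```python
# Sample CPU utilization over a one-second window with psutil.
import psutil

busy_percent = psutil.cpu_percent(interval=1)  # % of time the CPU was busy
print(f"CPU utilization: {busy_percent:.1f}%")
```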
• Memory usage
Another key metric of operating system efficiency is memory usage, which
measures how well the operating system manages the memory resources. Memory
usage is the amount of physical or virtual memory that the operating system allocates to
processes, as well as the amount of memory that is free or available. A low memory
usage means that the operating system is optimizing the memory allocation and
avoiding memory leaks, while a high memory usage means that the operating system is
consuming too much memory or suffering from memory fragmentation. Memory usage
can be measured using tools like free, vmstat, or Performance Monitor.
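A comparable sketch for memory usage, again with psutil (figures similar to what free or vmstat report):

```python
# Read physical memory statistics with psutil.
import psutil

mem = psutil.virtual_memory()
print(f"Total:     {mem.total / 2**30:.2f} GiB")
print(f"Available: {mem.available / 2**30:.2f} GiB")
print(f"Used:      {mem.percent:.1f}%")
```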
• Throughput and latency:
Another way to measure operating system efficiency is to look at the throughput and latency
of the system, which measure how fast and responsive the operating system is. Throughput is the
amount of work that the operating system can complete in a given time, such as the number of
processes executed, the number of requests served, or the amount of data transferred. Latency is
the delay or time that the operating system takes to respond to a request, such as the time to
start a process, the time to switch between processes, or the time to access a file. A high
throughput and a low latency mean that the operating system is efficient and agile, while a low
throughput and a high latency mean that the operating system is inefficient and sluggish.
Throughput and latency can be measured using tools like ping, traceroute, or benchmarking
software.
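A rough, hedged sketch of both ideas follows: it times how long one process takes to start (a latency figure) and how many trivial processes can be completed per second (a throughput figure). The workload here is an arbitrary no-op Python process, not a standard benchmark.

```python
# Measure a simple latency and throughput figure for process creation.
import subprocess
import sys
import time

# Latency: time to start and finish one trivial process.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
print(f"Process start latency: {(time.perf_counter() - start) * 1000:.1f} ms")

# Throughput: trivial processes completed per second.
n = 20
start = time.perf_counter()
for _ in range(n):
    subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed = time.perf_counter() - start
print(f"Throughput: {n / elapsed:.1f} processes/second")
```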


• Reliability and availability:
Another aspect of operating system efficiency is reliability and availability, which
measure how well the operating system can handle errors and failures. Reliability is the
probability that the operating system will function correctly and without faults, such as
crashes, hangs, or data loss. Availability is the percentage of time that the operating
system is operational and accessible, as opposed to being down or unavailable. A high
reliability and availability mean that the operating system is robust and dependable, while
a low reliability and availability mean that the operating system is fragile and unreliable.
Reliability and availability can be measured using tools like uptime, dmesg, or event logs.
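Availability is usually expressed as a percentage of total time; the back-of-the-envelope calculation below uses made-up uptime figures purely for illustration.

```python
# Availability = uptime / (uptime + downtime), expressed as a percentage.
hours_in_month = 30 * 24      # 720 hours in a 30-day month
downtime_hours = 2.5          # hypothetical downtime

availability = (hours_in_month - downtime_hours) / hours_in_month * 100
print(f"Availability: {availability:.3f}%")   # about 99.653% for 2.5 hours of downtime
```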
• User satisfaction:
Finally, one of the most subjective but also important metrics of operating system
efficiency is user satisfaction, which measures how well the operating system meets the
needs and expectations of the users. User satisfaction is influenced by factors such as
the usability, functionality, security, and compatibility of the operating system, as well
as the user's preferences, habits, and feedback. A high user satisfaction means that the
operating system is satisfying and enjoyable, while a low user satisfaction means that
the operating system is frustrating and disappointing. User satisfaction can be
measured using tools like surveys, ratings, or reviews.

• Storage Structure in Operating Systems:
Basically, we want programs and data to reside in main memory permanently. This arrangement is usually not possible for the following two reasons:
Main memory is usually too small to store all needed programs and data permanently.
Main memory is a volatile storage device that loses its contents when power is turned off or otherwise lost.

There are two types of storage devices:

• Volatile Storage Device –
It loses its contents when the power to the device is removed.
• Non-Volatile Storage Device –
It does not lose its contents when the power is removed; it holds all of the data even without power.
• Secondary Storage is used as an extension of main memory. Secondary storage devices can hold the data
permanently.
• Storage devices include registers, cache, main memory, electronic disks, magnetic disks, optical disks, and magnetic tapes. Each storage system provides the basic facility of storing a datum and of holding the datum until it is retrieved at a later time. All storage devices differ in speed, cost, size, and volatility. The most common secondary-storage device is a magnetic disk, which provides storage for both programs and data.

• [Figure: the storage hierarchy, from registers and cache at the top down to magnetic tapes at the bottom.]
• In this hierarchy, all storage devices are arranged according to speed and cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time generally increases.

• The storage systems above the electronic disk are volatile, whereas those below it are non-volatile.
• An electronic disk can be designed to be either volatile or non-volatile. During normal operation, the electronic disk stores data in a large DRAM array, which is volatile. But many electronic-disk devices contain a hidden magnetic hard disk and a battery for backup power. If external power is interrupted, the electronic-disk controller copies the data from RAM to the magnetic disk. When external power is restored, the controller copies the data back into RAM.

• The design of a complete memory system must balance all the factors. It must use only as much
expensive memory as necessary while providing as much inexpensive, Non-Volatile memory as
possible. Caches can be installed to improve performance where a large access-time or transfer-
rate disparity exists between two components.
Difference between Centralization and Decentralization
According to Henri Fayol, "Everything which goes to increase the importance of a subordinate's role is decentralization; everything that goes to reduce it is centralization."

What is Centralization?
Centralization refers to the concentration of authority at the top level of the organisation. It is the
systematic and consistent reservation of authority at the central points within an organisation. In a centralized
organisation, managers at the lower level have a limited role in decision-making. They just have to execute the
orders and decisions of the top level.

What is Decentralization?
Decentralization means the dispersal of authority throughout the organisation. It refers to a systematic effort to delegate to the lowest levels all authority except that which must be exercised at central points. It is the distribution of authority throughout the organisation. In a decentralized organisation, the authority for major decisions is vested with the top management, and the remaining authority is delegated to the middle and lower levels.
Centralization vs. Decentralization

• Centralization: The concentration of authority at the top level is known as centralization.
  Decentralization: The even and systematic distribution of authority at all levels is known as decentralization.
• Centralization: There is no delegation of authority, as all the authority for taking decisions is vested in the hands of top-level management.
  Decentralization: There is a systematic delegation of authority at all levels.
• Centralization: It is suitable for small organisations.
  Decentralization: It is suitable for large organisations.
• Centralization: There is no freedom of decision-making at the middle and lower levels.
  Decentralization: There is freedom of decision-making at all levels of management.
• Centralization: There is a vertical flow of information.
  Decentralization: There is an open and free flow of information.
• Centralization: Employees are demotivated as compared to decentralization.
  Decentralization: Employees are motivated as compared to centralization.
• Centralization: There are the least chances of conflict in decisions, as only top-level management is involved.
  Decentralization: There are chances of conflict in decisions, as many people are involved.
THANK YOU…
