
CHAPTER THREE

CLOUD RESOURCE
MANAGEMENT
Contents…
Concept of Virtualization and Load Balancing
Key challenges in managing information
Virtualization

Virtualization can be defined as a process that enables the
creation of a virtual version of a desktop, operating system,
network resources, or server.

This ensures that the physical delivery of the resource or an
application is separated from the actual resource itself. It helps
reduce the space or cost involved with the resource.

This technique enables the end-user to run multiple desktop
operating systems and applications simultaneously on the same
physical hardware.
•Virtualization provides flexibility that physical hardware is unable to offer.
A single computer running virtualization software can emulate multiple
virtual machines simultaneously, each completely independent of the
others.
•For example, a Windows server can run a dozen VMs at once — some
Windows and some various distributions of open source platforms like
Linux.
•Users interacting with one VM are unaware of those on the other VMs.
What is a virtual machine?

•Special virtualization software is designed to create a virtual
computer known as a virtual machine that mimics the
operation of a real computer — operating system, applications
and all.
•A virtual machine is not a physical machine. It’s a file that
replicates the computing environment of a physical machine.
It’s similar to how virtual reality (VR) environments replicate the
real world. VR isn’t a physical space; it’s a virtual imitation.
What are VMs used for?
Many applications for this technology exist. The list below outlines five examples:
Cloud computing. Virtualization technology creates virtual resources from physical hardware.
Then, cloud computing distributes those virtualized resources via the internet.
Software testing. You can use virtual machines to create fully functional software development
environments. These environments are useful because they’re isolated from the surrounding
infrastructure. Isolation allows developers to test software without impacting the rest of the
system.
Malware investigations. VMs enable malware researchers to test malicious programs in
separate environments. Instead of spreading to the rest of the infrastructure, a VM contains the
malware for study.
Disaster management. You can use a virtual machine to replicate a system in a cloud
environment. This replication ensures that if the system is compromised, another version
exists to replace it.
For example, iPhone users regularly back up their data by syncing their devices with iCloud.
The iCloud stores a virtual version of the phone, allowing users to transport their existing data
onto a new device in the event of theft or damage.
Running programs with incompatible hardware. Suppose you have an old application on
your phone. It hasn’t released an update in a few years, but your phone has updated several
times since then. Since the application hasn’t been updated with your phone, it may no longer
be compatible with your phone’s current operating system (OS). You can use a virtual
machine to simulate the previous OS and run the old application there.
Advantages and disadvantages of virtual machines
Advantages of VMs:
•Portability. VMs allow users to move systems to other computing environments easily.
•Speed. Creating a VM is much faster than installing a new OS on a physical server. VMs can also be cloned, OS included.
•Security. VMs help provide an extra layer of security because they can be scanned for malware. They also enable users to take snapshots of their current states. If an issue arises, users can review those snapshots to trace it and restore the VM to a previous version.
Disadvantages of VMs:
•Infected VMs. It can be risky to create VMs from weak host hardware. An improperly structured host system may spread its OS bugs to VMs.
•Server sprawl. The ability to create virtual machines can quickly lead to a crowded network. It’s best to monitor the creation of VMs to preserve computational resources.
•Complexity. System failures can be challenging to pinpoint in infrastructure with multiple local area networks (LANs).
Virtualization as a Concept of
Cloud Computing

Cloud computing simply would not exist without virtualization.

In cloud computing, Virtualization facilitates the creation of virtual machines
and ensures the smooth functioning of multiple operating systems. It also helps
create a virtual ecosystem for server operating systems and multiple storage
devices, allowing multiple operating systems to run on shared infrastructure.

Cloud Computing is identified as an application or service that involves a
virtual ecosystem. Such an ecosystem could be of public or private nature. With
Virtualization, the need to have a physical infrastructure is reduced.

The terms Cloud Computing and Virtualization are now often used
interchangeably, and the two are converging quickly.

Virtualization and Cloud Computing work hand in hand to deliver
advanced and sophisticated levels of computing. Together they ensure that
applications can be shared across the networks of different
enterprises and active users.

Cloud Computing delivers scalability, efficiency, and economic value. It
offers streamlined workload management systems.

In simpler words, Cloud Computing in collaboration with Virtualization
ensures that the modern-day enterprise gets a more cost-efficient way to run
multiple operating systems using one dedicated resource.
Characteristics of Virtualization
Virtualization offers several features or characteristics as listed below:

Distribution of resources: Virtualization and Cloud Computing technology ensure
end-users develop a unique computing environment. It is achieved through the
creation of one host machine. Through this host machine, the end-user can restrict
the number of active users. By doing so, it facilitates ease of control.

Accessibility of server resources: Virtualization delivers several unique features
that remove the need for dedicated physical servers. Such features boost uptime,
improve fault tolerance, and increase the availability of resources.

Resource Isolation: Virtualization provides isolated virtual machines. Each
virtual machine can have many guest users, and guest users could be either
operating systems, devices, or applications.

The virtual machine provides such guest users with an isolated virtual
environment. This ensures that sensitive information remains protected
and, at the same time, guest users remain interconnected with one another.

Security and authenticity: Virtualization systems ensure continuous
uptime, perform automatic load balancing, and reduce disruption of services.

Aggregation: Aggregation in Virtualization is achieved through cluster
management software. This software ensures that the homogeneous sets of
computers or networks are connected and act as one unified resource.
The Benefits of Virtualization
•Cost savings: Virtual servers are far less expensive to operate than physical servers and other
hardware resources. In a cloud environment, operating a virtual server can be done on a pay-
as-you-go basis, while operating computer hardware or physical servers require a significant
outlay of upfront capital and ongoing maintenance expenses for electricity, cooling and
personnel.
•Better flexibility:
If you need to run a Linux app in your all-Windows shop, a virtualization app can give you
quick access to an alternative operating environment without purchasing a second machine.

•Improved uptime: While virtual servers do fail, they can be recreated almost instantaneously,
with no extra costs attached, resulting in less downtime. Most cloud service providers that sell
access to virtual servers guarantee uptime levels that no on-premises data center can match.
•Ease of migration:
Moving to different operating systems or from one physical server to
another is a daunting task. In the virtual space, resources can be
transferred from one user to another or migrated to another environment
with just a few keystrokes — even between completely different operating
environments.

•Secure sandbox operations: Virtualization lets an organization create a
secure sandbox in which experimental code can be tested safely; if things
go awry, the server can be instantly killed without worry of damaging the
rest of the network.
Types of Virtualization

Application Virtualization

Network Virtualization

Desktop Virtualization

Storage Virtualization

Server Virtualization

Data Virtualization
Application Virtualization

This can be defined as the type of Virtualization that enables the end-user of an
application to have remote access.

This is achieved through a server. This server has all personal information and
other applicable characteristics required to use the application.

The server is accessed over the internet, and the application behaves as if it
were running on the local workstation. With application virtualization, an
end-user can run two different versions of the same software or the same application.

Application virtualization is offered through packaged software or a hosted
application.
Network Virtualization

This kind of virtualization can execute many virtual networks, each
with a separate control and data plane. These networks coexist on top of one
physical network and can be run by parties who are not aware of one another.

Network virtualization creates virtual networks and also maintains the
provisioning of virtual networks.

Through network virtualization, logical switches, firewalls, routers, load
balancers, and workload security management systems can be created.
Desktop Virtualization

This can be defined as the type of Virtualization that enables the operating system of end-
users to be remotely stored on a server or data center.

It enables the users to access their desktops remotely and do so by sitting in any
geographical location. They can also use different machines to virtually access their
desktops.

With desktop virtualization, an end-user can work on more than one operating
system, based on the business needs of that individual.

If the individual wants to work on an operating system other than the Windows
operating system, he can use desktop virtualization. This provides the individual
with an opportunity to work on two different operating systems.

Therefore, desktop virtualization delivers a host of benefits: portability, user
mobility, and easy software management with patches and updates.
Storage Virtualization

This type of Virtualization provides virtual storage systems that
facilitate storage management.

It facilitates the effective management of storage from multiple
sources, accessed through a single repository.

Storage virtualization ensures consistent and smooth performance.

It also offers continuous updates and patches on advanced functions. It
also helps cope with the changes that come up in the underlying
storage equipment.
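
As a rough, vendor-neutral illustration of this idea, the Python sketch below presents two hypothetical storage backends to the application as a single repository; the class name and placement policy are made up purely for illustration.

class VirtualStoragePool:
    """Toy storage-virtualization layer over several backends."""

    def __init__(self, backends):
        # backends: simple dict-like stores standing in for disks or arrays.
        self.backends = backends

    def read(self, key):
        # The caller neither knows nor cares which backend holds the data.
        for backend in self.backends:
            if key in backend:
                return backend[key]
        raise KeyError(key)

    def write(self, key, value):
        # A trivial placement policy: put new data on the emptiest backend.
        target = min(self.backends, key=len)
        target[key] = value

pool = VirtualStoragePool([{}, {}])
pool.write("report.pdf", b"...")
print(pool.read("report.pdf"))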
Server Virtualization

This kind of Virtualization ensures masking of servers.

The main or the intended server is divided into many
virtual servers.

Such servers keep changing their identity numbers and
processors to facilitate the masking process.

This ensures that each server can run its own operating
systems in complete isolation.
Data Virtualization

This can be defined as the type of Virtualization wherein data are sourced and
collected from several sources and managed from a single location. Users need no
technical knowledge of where such data is sourced, how it is collected or stored,
or how it is formatted.

The data is arranged logically, and the interested parties and stakeholders then access
the virtual view of such data. These reports are also accessed by end-users
remotely.

Applications of data virtualization range from data integration to business
integration. It is also used for service-oriented architecture data services and
helps locate organizational data.
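
To make the idea concrete, here is a minimal Python sketch with made-up sources and field names: a CSV source and a JSON source are exposed as one logical view, and the consumer never sees where or how each record is stored.

import csv
import io
import json

# Two differently formatted sources (contents invented for illustration).
csv_source = io.StringIO("customer,region\nAcme,EU\nGlobex,US\n")
json_source = '[{"customer": "Initech", "region": "APAC"}]'

def unified_view():
    # Normalise both sources into the same logical record shape.
    for row in csv.DictReader(csv_source):
        yield {"customer": row["customer"], "region": row["region"]}
    for row in json.loads(json_source):
        yield {"customer": row["customer"], "region": row["region"]}

for record in unified_view():
    print(record)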
Architecture of Virtualization

The architecture in Virtualization is defined as a model that describes
Virtualization conceptually.

Virtualization application in Cloud Computing is critical.

In Cloud Computing, end-users share data through applications
termed clouds. With Virtualization, however, end users can share the entire IT
infrastructure itself.

The virtual application services help in application management, and the
virtual infrastructure services can help in infrastructure management.

Both services are embedded into a virtual data center or an
operating system. The virtual services can be used on any platform
and in any programming environment.

Virtualization is generally achieved through the hypervisor.

A hypervisor enables the separation of operating systems from the
underlying hardware. It enables the host machine to run many
virtual machines simultaneously and share the same physical
computer resources.
There are two methods through which virtualization
architecture is achieved, described below:
Type one: bare-metal hypervisor.

They run directly on top of the hardware of the host system. They deliver effective resource
management and ensure high availability of resources. A bare-metal hypervisor also delivers direct
access to the hardware, ensuring better scalability, performance, and stability.

Type two: hosted hypervisor.

This is installed on the host operating system, and the virtual operating system runs directly above
the hypervisor. It is the kind of system that eases and simplifies system configuration.

It additionally simplifies management tasks. However, the presence of the host operating system at
times limits the performance of the virtualization-enabled system and may even introduce security
flaws or risks.
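
As a small illustration of how a hypervisor is driven programmatically, the sketch below uses the libvirt Python bindings to list the virtual machines a local KVM/QEMU hypervisor is running. The connection URI and the availability of libvirt on the host are assumptions made for illustration only.

import libvirt

# Connect to the local hypervisor (URI assumed for illustration).
conn = libvirt.open("qemu:///system")

# List every virtual machine the hypervisor knows about and report
# whether it is currently running.
for domain in conn.listAllDomains(0):
    state = "running" if domain.isActive() else "shut off"
    print(f"{domain.name()}: {state}")

conn.close()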
Important Terminologies of
Virtualization
There are a few essential terminologies in Virtualization, defined as follows:

Virtual machine: A virtual machine can be defined as a virtual computer that runs on top of a
hypervisor.

Hypervisor: This can be defined as the operating-system-like layer that runs on the actual hardware
and executes or emulates the virtual machines. Its privileged management instance is defined as Domain 0 or Dom0.

Container: These can be defined as lightweight virtual machines that are a subset of the same operating
system instance or hypervisor. They are a collection of processes that execute with their corresponding
namespaces or process identifiers.

Virtual network: This is defined as a logically separated network that exists inside the servers. Such
networks can be expanded across multiple servers.

Virtualization software: This type of software helps deploy Virtualization on the computer device.

Virtualization helps create virtual versions of desktops, servers,
operating systems, and applications.

Virtualization comprises the host machine and virtual machine.

Each virtualization system comprises a hypervisor, containers,
and a virtual network.

Virtualization offers scalability and efficiency and helps in effective
resource management.
Assignment 1
1) Amazon Web Services (AWS)
2) Microsoft Azure
3) Google Cloud Platform (GCP)
4) Alibaba Cloud
5) Oracle Cloud
6) IBM Cloud
7) Tencent Cloud
8) OVHcloud
9) DigitalOcean
10) Linode (Akamai)
Load Balancing in
cloud computing
Load Balancing
Load balancing is the method of distributing a set of tasks over a set
of resources with the aim of making the overall processing more
efficient.
Modern applications must process millions of users simultaneously
and return the correct text, videos, images and other data to each user
in a fast and reliable manner.
A load balancer is a device that sits between the user and the server
group and acts as an invisible facilitator, ensuring that all resource
servers are used equally.
A load balancer is a device or process in a network that analyzes
incoming requests and diverts them to the relevant servers.
Load balancers can be physical devices in the network, virtualized
instances running on specialized hardware (virtual load balancers), or
even software processes.
Load balancers improve application availability and responsiveness
and prevent server overload.
Each load balancer sits between client devices and backend servers,
receiving and then distributing incoming requests to any available
server capable of filling them.
Understanding the functions of load balancers
A Load Balancer acts as a ‘traffic controller’ for your servers and directs each
request to an available one capable of fulfilling it efficiently. This
ensures that requests are responded to quickly and no server is over-stressed to
the point of degraded performance.
The load balancer also manages the flow of information between the
server and the endpoint device, helping servers move data efficiently. It also
assesses the request-handling health of each server and, if necessary,
removes an unhealthy server until it is restored.
As the servers can also be physical or virtual, a load balancer can also be a
hardware appliance or a software-based virtual one. When a server goes
down, the requests are directed to the remaining servers, and when a new
server gets added, the requests automatically start getting transferred to it.
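
The Python sketch below illustrates the ‘traffic controller’ role described above: backends that fail a health probe are skipped until they recover, and requests are spread across the servers that remain. The backend addresses and the /health endpoint are assumptions made for illustration, not part of any particular product.

import urllib.request

# Hypothetical backend pool; each server is assumed to expose /health.
backends = ["http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080"]

def healthy(url: str) -> bool:
    # A server that is unreachable or returns an error is treated as
    # unhealthy and temporarily removed from the rotation.
    try:
        with urllib.request.urlopen(url + "/health", timeout=1) as response:
            return response.status == 200
    except OSError:
        return False

def route_request(request_count: int) -> str:
    # Keep only the servers that pass the health check, then spread
    # requests across them so no single server is over-stressed.
    live = [b for b in backends if healthy(b)]
    if not live:
        raise RuntimeError("no healthy backends available")
    return live[request_count % len(live)]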
Types of load balancer
Load Balancers are also classified as
1. Hardware Load Balancers:
As the name suggests, this is physical, on-premise hardware equipment that
distributes traffic across various servers. Though capable of handling a
huge volume of traffic, they are limited in terms of flexibility and are also
fairly expensive.
2. Software Load Balancers:
They are computer applications that are installed on the system and
function similarly to hardware load balancers. They come in two kinds,
commercial and open source, and are a cost-effective alternative to their
hardware counterparts.
3. Virtual Load Balancers:
This load balancer is different from both software and hardware load balancers, as
it is the program of a hardware load balancer running on a virtual machine.
Through virtualization, this kind of load balancer imitates a software-driven
infrastructure.
The program of the hardware equipment is executed on a virtual machine to
redirect the traffic accordingly. But such load balancers face challenges similar
to the physical on-premise balancers, namely a lack of central management, lower
scalability, and much more limited automation.
Load Balancing Methods
All kinds of Load Balancers receive the balancing requests, which are processed in
accordance with a pre-configured algorithm.
The most common load balancing methodologies include:
Round Robin Algorithm
Weighted Round Robin Algorithm
Least Connections Algorithm
Least Response Time Algorithm
IP Hash Algorithm
Round Robin Algorithm
Round robin load balancing is a simple way to distribute client requests
across a group of servers. A client request is forwarded to each server in turn.
The algorithm instructs the load balancer to go back to the top of the list and
repeat.
The biggest drawback of using the round robin algorithm in load balancing
is that the algorithm assumes that servers are similar enough to handle
equivalent loads. If certain servers have more CPU, RAM, or other
specifications, the algorithm has no way to distribute more requests to these
servers. As a result, servers with less capacity may overload and fail more
quickly while capacity on other servers lies idle.
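
The following minimal Python sketch (server names are hypothetical) shows the rotation described above: each request is forwarded to the next server in the list, wrapping back to the top.

from itertools import cycle

# Hypothetical backend pool, used only for illustration.
servers = ["app-server-1", "app-server-2", "app-server-3"]

# cycle() walks the list in order and wraps back to the top,
# which is exactly the round robin behaviour described above.
rotation = cycle(servers)

def pick_server():
    """Return the next server in the rotation."""
    return next(rotation)

# Ten incoming requests are spread evenly: 1, 2, 3, 1, 2, 3, ...
for request_id in range(10):
    print(f"request {request_id} -> {pick_server()}")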
Weighted Round Robin Algorithm
This algorithm is deployed to balance loads of different servers with
different characteristics.
The weighted round robin algorithm maintains a weighted list of servers
and forwards new connections in proportion to the weight, or preference,
of each server.
This algorithm uses more computation time than the round robin
algorithm. However, the additional computation results in distributing the
traffic more efficiently to the server that is most capable of handling the
request.
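
A minimal Python sketch of the weighted variant, assuming hypothetical weights that reflect each server's relative capacity. For simplicity it repeats each server in the rotation in proportion to its weight; production balancers typically interleave the choices more smoothly.

from itertools import cycle

# Hypothetical weights: app-server-1 is assumed to have three times
# the capacity of app-server-3, so it receives three times the requests.
weighted_servers = {"app-server-1": 3, "app-server-2": 2, "app-server-3": 1}

# Build the rotation by repeating each server in proportion to its weight.
rotation = cycle([name
                  for name, weight in weighted_servers.items()
                  for _ in range(weight)])

def pick_server():
    return next(rotation)

# Twelve requests are split roughly 3:2:1 across the three servers.
for request_id in range(12):
    print(f"request {request_id} -> {pick_server()}")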
Least Connections Algorithm
In this algorithm, traffic is directed to the server having the least traffic. This
helps maintain the optimized performance, especially at peak hours by
maintaining a uniform load at all the servers.
Least connections load balancing is a dynamic load balancing algorithm
where client requests are distributed to the application server with the least
number of active connections when the client request is received.
In cases where application servers have similar specifications, one server
may get overloaded due to longer-lived connections. This algorithm considers
the dynamic connection load and doesn't send requests to servers that cannot
handle them.
A use case for least connections load balancing is when incoming
requests have varying connection times and a set of relatively similar
servers in processing power and resources are available.
If clients can maintain connections for an extended period, there is a
possibility that a single server will end up with all its capacity used by
multiple connections like this. Using the least connections algorithm will
mitigate this risk.
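
A minimal Python sketch of the least connections idea; the connection counts are hypothetical and would be updated by a real balancer as connections open and close.

# Hypothetical count of active connections per server.
active_connections = {"app-server-1": 12, "app-server-2": 4, "app-server-3": 9}

def pick_server():
    # Choose the server currently holding the fewest active connections.
    return min(active_connections, key=active_connections.get)

def handle_request():
    server = pick_server()
    active_connections[server] += 1   # a new connection is opened
    return server

def finish_request(server):
    active_connections[server] -= 1   # the connection is closed

server = handle_request()
print(f"request routed to {server}")   # app-server-2 in this example
finish_request(server)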
Least Response Time Algorithm
A more sophisticated version of the least connections method, the
least response time method relies on the time taken by a server to
respond to a health monitoring request.
The response time indicates how loaded the server is and how well
the users receive your site or service. Some load balancers can also
consider the number of active connections on each server.
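
A minimal Python sketch of least response time selection, using made-up health-probe latencies and connection counts; ties on response time fall back to the connection count, mirroring balancers that consider both signals.

# Hypothetical measurements gathered from periodic health checks:
# response_time is the latest probe latency in milliseconds,
# connections is the current number of active connections.
servers = {
    "app-server-1": {"response_time": 120, "connections": 8},
    "app-server-2": {"response_time": 45, "connections": 10},
    "app-server-3": {"response_time": 60, "connections": 3},
}

def pick_server():
    # Prefer the fastest server; break ties on the connection count.
    return min(servers,
               key=lambda s: (servers[s]["response_time"],
                              servers[s]["connections"]))

print(pick_server())   # app-server-2, the fastest responder in this example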
IP Hash Algorithm
Methods in this category make decisions based on various data from the incoming
packet. This includes connection or header information, such as source/destination IP
address, port number, URL, or domain name.
The source IP hash load balancing algorithm uses the client's source and destination
IP addresses to generate a unique hash key to tie the client to a particular server. As
the key can be regenerated if the session disconnects, this allows reconnection
requests to get redirected to the same server used previously. This is called server
affinity.
This load balancing method is most appropriate when a client must always return to
the same server on each successive connection, like in shopping cart scenarios where
items placed in a cart on one server should be there when a user connects later.
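
A minimal Python sketch of source IP hashing; the server names and addresses are hypothetical. Because the hash can simply be recomputed, the same client is mapped to the same backend on every connection, which is the server affinity described above.

import hashlib

# Hypothetical backend pool.
servers = ["app-server-1", "app-server-2", "app-server-3"]

def pick_server(source_ip: str, destination_ip: str) -> str:
    # Hash the source/destination pair and map it onto the server list,
    # tying each client to one particular backend.
    key = f"{source_ip}-{destination_ip}".encode()
    digest = hashlib.md5(key).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client lands on the same server even after a disconnect.
print(pick_server("203.0.113.7", "198.51.100.10"))
print(pick_server("203.0.113.7", "198.51.100.10"))   # same result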
Benefits of Load Balancers
Benefits to the website/application:
1. Enhanced Performance:
Load Balancers reduce the additional load on a particular server and ensure
seamless operations and responses, giving the clients a better experience.
2. Resilience:
Failed and under-performing components can be substituted immediately, with
information provided about which equipment needs service, resulting in nil or
negligible downtime.
3. Security:
Without requiring any changes on your side, a Load Balancer gives an additional
layer of security to your website and applications.
Benefits to the organizations
1. Scalability:
Without disrupting the services, Load Balancers make it easy to
change the server infrastructure anytime.
2. Predictive Analysis:
Software load balancers can predict traffic bottlenecks before they
happen in reality.
3. Big Data:
Actionable insights out of the big data generated by global users can
be analyzed to drive better and informed business decisions.
