UNIT-II_CC.ppt


UNIT-2

• Understanding Services and Application by Type


• Understanding Abstraction and Virtualization
• Capacity Planning
PART A
Understanding Services and Application by Type

Defining IaaS:
With Infrastructure as a Service (IaaS), we rent IT infrastructure such as servers, virtual machines (VMs), storage, networks, and operating systems from a cloud service vendor. We can create a VM running Windows or Linux and install anything we want on it. With IaaS we do not need to worry about the hardware or the virtualization software, but we do have to manage everything above that layer. IaaS gives us maximum flexibility, but it also requires more maintenance effort.
Kubernetes:

Google first developed Kubernetes (K8s) to address the growing need for cloud-native
application frameworks. Google later donated it to the Cloud Native Computing
Foundation, where it became one of the most prolific open source projects in history.

Kubernetes is an open-source platform that automates container operations, eliminating the manual processes involved in deploying and scaling containerized applications. It allows developers to organize containers hosting microservices into clusters, handles scaling, and automates failover procedures.
Deployment & scaling of computing applications
Features of Kubernetes:
• Automated deployments and rollbacks
• Automated discovery and load balancing
• Automated resource management
• Storage Orchestration
Kubernetes Cluster:
A Kubernetes (K8s) cluster is a group of computing nodes that run containerized
applications. Containerization is a process that bundles an application's code
with the files and libraries it needs to run on any infrastructure.
Kubernetes clusters allow for applications to be more easily developed, moved, and
managed.
They can be created on either a physical or a virtual machine and allow engineers to
orchestrate and monitor containers across multiple physical, virtual, and cloud servers.

Some key components of a Kubernetes cluster include:

Control plane: Ensures that cluster configurations are automatically implemented and manages the cluster's desired state, such as which applications are running and which container images they use.
Workloads: The applications that Kubernetes runs.
Pods: One or more containers that share storage and network resources.
Master nodes: Handle administration and management.
Worker nodes: Run the applications.
Kubernetes cluster
When a developer deploys Kubernetes, it results in the creation of a cluster. A Kubernetes
cluster contains a set of machines called nodes that host containerized applications. These
nodes host the Pods; the smallest deployable computing units created and managed in
Kubernetes. The control plane oversees the nodes and the Pods in the cluster.

Nodes- In Kubernetes, a node is a virtual or physical machine that runs workloads. Each
node contains the services necessary to run pods:
Container runtime– The software responsible for running containers.
Kubelet– An agent that runs on each node in a cluster. It ensures that the containers running
in a pod are in a healthy state.
Kube-proxy– A network proxy service that runs on each node in a given cluster. It maintains
network rules on nodes that allow network communications to pods from within or outside
the cluster.
Pods- A pod consists of containers and supplies shared storage, network resources and
specifies how the containers will run.
Control Plane- The Control Plane is responsible for maintaining the desired end state of the
Kubernetes cluster as defined by the developer.
Kubernetes Architecture
Containers
Containers are self-contained packages that contain everything needed to run an
application, including code, files, and libraries. They are independent of the
underlying host infrastructure, making them easier to deploy in different cloud or
OS environments.

Kubernetes works by coordinating connected clusters that can work together as a single entity. Kubernetes allows you to deploy containerized apps in clusters without having to assign them to specific machines. For this to work, applications must be architected and packaged without being coupled with individual hosts.

Kubernetes automates operational tasks of container management, such as: deploying applications, rolling out changes to applications, scaling applications up and down, and monitoring applications.

Kubernetes works by orchestrating containers across a cluster of machines, providing high availability and efficient resource utilization. The smallest unit of execution in Kubernetes is called a pod, which consists of one or more containers, each with one or more applications and their binaries.
Pods:

In Kubernetes, a pod is a computing unit that can contain one or more Linux
containers that work together. Pods are the smallest deployable units in Kubernetes.
Pods can be made up of a single container, which is more common, or multiple
tightly coupled containers, which is more advanced. Each pod has its own
namespace, storage volumes, and configuration data, and provides a layer of
abstraction between the containers and the underlying infrastructure. Pods are
responsible for hosting and managing the execution of an application's individual
components
There are two primary ways to use pods in Kubernetes:

Pods run a single container: In the most common use case, the pod acts as a
wrapper around a single container allowing Kubernetes to manage each pod rather
than individual containers.
Pod runs multiple containers dependent on one another: In this case, the pod
encapsulates an application composed of multiple containers that need to share
resources.
Workloads:

A Kubernetes workload is a set of containers and pods that run an application or service on a Kubernetes cluster.
Workloads are essential for managing, scaling, and deploying applications on
Kubernetes. They can be made up of multiple or single components that work
together.
Workload objects set deployment rules for pods, which Kubernetes uses to perform
deployments and update the workload with the application's current state.
Workloads also allow users to define rules for scaling, application scheduling, and
upgrades.
Kubernetes offers different types of controllers for different workloads, including:

Deployments
Best for stateless applications, where the workload's state doesn't need to be maintained
ReplicaSets
Automatically replace failed or deleted pods to ensure a specified number of pods are
running at any given time
Jobs
Run pods until a task completes; failed pods may be restarted, while pods that terminate normally are marked as completed
CronJobs
Manage Jobs on a schedule (a CronJob creates Jobs on a repeating schedule and is meant for regular scheduled actions such as backups, report generation, and so on)

Each controller has a specific function and features that suit different use cases and
scenarios, depending on the application’s characteristics and requirements.
Properly managing and configuring Kubernetes workloads is important for maintaining
application performance, security, and resilience.
Types of Workload Resources

Kubernetes provides several built-in controllers that manage the pods and
containers of a workload.
The following are some of the most common types of workload resources in
Kubernetes:

ReplicaSet ensures that a specified number of pod replicas are running at any
given time.
Deployment is a higher-level controller that manages a ReplicaSet and provides
declarative updates for pods and containers.
StatefulSet manages pods that need to maintain a stable identity and persistent
state. It ensures that each pod has a unique name and a stable network address and
that the pods are created and updated in a predictable order.
DaemonSet ensures that a pod runs on every node or a subset of nodes in the
cluster. It is useful for running daemon processes or agents that provide node-level
services, such as monitoring, logging, or networking.
Job creates one or more pods and ensures they complete a task.
CronJob creates a Job based on a schedule. It helps run periodic or recurring tasks
like backups, reports, or notifications.
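As an illustration of how a Deployment ties these pieces together, here is a minimal Deployment manifest modeled as a Python dictionary. The field names follow the Kubernetes apps/v1 Deployment schema, but the application name and container image are invented placeholders:

```python
import json

# A minimal Deployment manifest, modeled as a plain Python dict.
# Field names follow the Kubernetes apps/v1 Deployment schema;
# the app name "web" and image "nginx:1.25" are illustrative placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # the ReplicaSet behind this Deployment keeps 3 pods running
        "selector": {"matchLabels": {"app": "web"}},
        "template": {  # pod template: what each replica pod looks like
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "nginx:1.25",
                     "ports": [{"containerPort": 80}]}
                ]
            },
        },
    },
}

# Serialize to JSON (kubectl accepts JSON as well as YAML manifests).
manifest = json.dumps(deployment, indent=2)
print(len(manifest) > 0)
```

The Deployment's selector must match the pod template's labels; Kubernetes uses that label match to know which running pods belong to this workload.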
Key Components of Kubernetes Workload Management

The following are some of the vital elements of Kubernetes workload management:
Deployments
A deployment is a Kubernetes object that defines how to create and update a set of pods. It can be triggered by various events, such as a code change, a configuration change, a scaling request, or a manual action. A deployment can also be automated using tools and pipelines that integrate with the Kubernetes API.
Services
A service is a Kubernetes object that defines how to access a group of pods. It
enables communication and interaction between workloads and external clients or
systems.
Services expose workloads for internal and external access and provide load
balancing and service discovery features.
Managing Workloads Across Environments:

Kubernetes workloads can be deployed and managed across different environments, such as on-premises, cloud, multi-cloud, or hybrid cloud.
On-Premises and Cloud Deployments: On-premises deployments offer more control and security but less scalability and availability. Cloud deployments involve less overhead and maintenance while offering more scalability and availability.
Multi-Cloud and Hybrid Cloud Deployments: Multi-cloud and hybrid
cloud deployments offer more flexibility and choice, improved performance, and
reduced dependency and risk of vendor lock-in, outage, or failure, but they come
with more complexity and inconsistency.
Aggregation:

In cloud computing, aggregation is a service model used by Cloud Services Brokers (CSBs) to combine multiple cloud services into a single package for customers.
This can offer several benefits, including:

Cost savings: Customers pay one bill to the broker instead of paying for each service separately.

Time savings: The broker can help customers understand how to use the services.

Tailored services: The broker can offer a complete package that's more cost-effective than purchasing each cloud service separately.
Benefits of Cloud Aggregators

Cloud aggregators offer a single dashboard for partners or customers to find, activate and manage multiple cloud applications.
The combination of multiple services into one is also often more
cost-effective for customers.
A cloud aggregator handles all the research and planning across your
organization, focusing on the individual needs of each department
and how they roll up to the broader needs of the organization.
Experts work with the business leaders at your company to hand-select the right cloud solution for each need, keeping your current on-premises resources in mind while ensuring optimal levels of security and control.
Once these solutions are selected, they are purchased and deployed
together as a single package.
The goal of engaging a cloud aggregator is to make a cloud
deployment plan secure, manageable, and scalable as your business
needs evolve.
Silo:
In cloud computing, a silo is when different teams or departments within an
organization use cloud resources and services independently without following
shared standards, practices, or visibility into each other's activities.
Silos can be a result of a hierarchical organizational structure where different
departments operate independently with limited communication and
collaboration. Each department may have its own goals, processes, and data
management practices, leading to the creation of isolated data silos.
Silos can make it difficult for IT teams to respond to changing business
demands. Siloed data and applications can hamper efforts to build end-to-end
processes, while siloed IT teams can slow down efforts to provide services and
solutions that require cooperation across functions.
Breaking down silos and gathering everyone under one larger data venue can make
for better integration, improved collaboration, and more informed decision
making.
The specific ways that data silos can harm an organization include the
following:

Incomplete data sets.
Inconsistent data.
Duplicate data platforms and processes.
Less collaboration between end users.
A silo mentality in departments.
Data security and regulatory compliance issues.
Steps to break down data silos:

Breaking down data silos lets an organization manage and use data more effectively. It
often also helps lower technology and data management costs. The following
approaches can be used separately or in tandem to remove silos and connect data assets
to better support business operations:
Data integration: Integrating data with other systems is the most straightforward
method for breaking down silos. The most popular form of data integration is extract,
transform and load (ETL), which extracts data from source systems, consolidates it and
loads it into a target system or application.
Data warehouses and data lakes: The most common target system in data integration
jobs is a data warehouse, which stores structured transaction data for BI, analytics and
reporting applications. Increasingly, organizations also build data lakes to hold sets of
big data, which can include large volumes of structured, unstructured and
semi-structured data used in data science applications.
Culture change: To really put a stop to data silos, it might be necessary to change an
organization's culture. Efforts to do so can be part of the data strategy development
process or a data governance initiative.
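The ETL approach described above can be sketched as a toy pipeline. The department records, field names, and target store below are all invented for illustration:

```python
# Toy ETL pipeline: extract records from two departmental "silos",
# transform them to a shared schema, and load them into one target store.
# All records and field names here are invented for illustration.

sales_silo = [{"cust": "Acme", "amount": 120.0}]
support_silo = [{"customer_name": "Acme", "open_tickets": 2}]

def extract():
    """Pull raw records from each source system."""
    return sales_silo, support_silo

def transform(sales, support):
    """Consolidate both sources under one customer key."""
    combined = {}
    for rec in sales:
        combined.setdefault(rec["cust"], {})["revenue"] = rec["amount"]
    for rec in support:
        combined.setdefault(rec["customer_name"], {})["tickets"] = rec["open_tickets"]
    return combined

def load(target, combined):
    """Write the consolidated view into the target system (here, a dict)."""
    target.update(combined)

warehouse = {}
load(warehouse, transform(*extract()))
print(warehouse["Acme"])  # {'revenue': 120.0, 'tickets': 2}
```

In a real integration the target would be a data warehouse or data lake rather than an in-memory dict, but the extract/transform/load shape is the same.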
PaaS:
This service provides an on-demand environment for developing, testing, delivering, and managing software applications. The developer is responsible for the application, and the PaaS vendor provides the ability to deploy and run it. With PaaS, flexibility is reduced, but management of the environment is taken care of by the cloud vendor.
PaaS includes infrastructure (servers, storage, and networking) and platform
(middleware, development tools, database management systems, business intelligence,
and more) to support the web application life cycle.

Examples: Google App Engine, Force.com, Joyent, Azure.


Services Provided by PaaS are:
Programming Languages: A variety of programming languages are supported by PaaS providers, allowing developers to choose their preferred language to create apps. Languages including Java, Python, Ruby, .NET, PHP, and Node.js are frequently supported.

Application Frameworks: Pre-configured application frameworks are offered by PaaS platforms. These frameworks include features like libraries, APIs, and tools for rapid development, laying the groundwork for creating scalable and reliable applications. Popular application frameworks include Laravel, Django, Ruby on Rails, and the Spring Framework.

Databases: Managed database services are provided by PaaS providers, making it simple for developers to store and retrieve data. These services support relational databases (like MySQL, PostgreSQL, and Microsoft SQL Server) and NoSQL databases (like MongoDB, Cassandra, and Redis).
Additional Tools and Services: PaaS providers provide a range of extra tools
and services to aid in the lifecycle of application development and deployment.
These may consist of the following:
Development Tools: to speed up the development process, these include
integrated development environments (IDEs), version control systems, build and
deployment tools, and debugging tools.

Collaboration and Communication: PaaS platforms frequently come with capabilities for team collaboration, including chat services, shared repositories, and project management software.

Analytics and Monitoring: PaaS providers may give tools for tracking
application performance, examining user behavior data, and producing insights
to improve application behavior and address problems.

Security and Identity Management: PaaS systems come with built-in security
features like access control, encryption, and mechanisms for authentication and
authorization to protect the privacy of applications and data.

Scalability and load balancing: PaaS services frequently offer automatic scaling capabilities that let applications allocate more resources as needed to manage a spike in traffic or demand.
Advantages of PaaS
There are the following advantages of PaaS -

1) Simplified Development
PaaS allows developers to focus on development and innovation without worrying about
infrastructure management.

2) Lower risk
No need for up-front investment in hardware and software. Developers only need a PC and an
internet connection to start building applications.

3) Prebuilt business functionality
Some PaaS vendors also provide predefined business functionality, so users can avoid building everything from scratch and can start their projects directly.

4) Instant community:
PaaS vendors frequently provide online communities where the developer can get ideas, share
experiences, and seek advice from others.

5) Scalability
Applications deployed can scale from one to thousands of users without any changes to the
applications.
Disadvantages of PaaS

1) Vendor lock-in
Applications must be written for the platform provided by the PaaS vendor, so migrating an application to another PaaS vendor can be a problem.

2) Data Privacy
Corporate data, whether critical or not, should remain private; if it is not located within the walls of the company, there can be a risk to data privacy.

3) Integration with the rest of the systems and applications:
Some applications may be local while others are in the cloud, so there is a chance of increased complexity when we want to use cloud data together with local data.

4) Limited Customization and Control: The degree of customization and control over the underlying infrastructure is constrained by PaaS platforms.
IaaS ( Infrastructure as a Service):
In IaaS, we rent IT infrastructure such as servers, virtual machines (VMs), storage, networks, and operating systems from a cloud service vendor. We can create a VM running Windows or Linux and install anything we want on it. With IaaS we do not need to worry about the hardware or the virtualization software, but we do have to manage everything above that layer. IaaS gives us maximum flexibility, but it also requires more maintenance effort.

With the help of the IaaS cloud computing platform layer, clients can
dynamically scale the configuration to meet changing requirements and are
billed only for the services actually used.
IaaS is offered in three models: public, private, and hybrid cloud. The private cloud implies that the infrastructure resides at the customer's premises. In the case of the public cloud, it is located at the cloud vendor's data center, and the hybrid cloud is a combination of the two, in which the customer selects the best of both public and private cloud.

Characteristics of IaaS:
Scalability: IaaS enables users to adjust computing capacity according to their
demands without requiring long lead times or up-front hardware purchases.
Virtualization: IaaS uses virtualization technology to generate virtualized
instances that can be managed and delivered on-demand by abstracting
physical computer resources.
Resource Pooling: This feature enables users to share computer resources,
such as networking and storage, among a number of users, maximizing
resource utilization and cutting costs.
Elasticity: IaaS allows users to dynamically modify their computing resources
in response to shifting demand, ensuring optimum performance and financial
viability.
Self-Service: IaaS offers consumers "self-service" portals that let them
independently deploy, administer, and monitor their computing resources
without the assistance of IT employees.
Availability: To ensure the high availability and reliability of services, IaaS
providers often run redundant and geographically dispersed data centers.
Security: To safeguard their infrastructure and client data, IaaS companies
adopt security measures, including data encryption, firewalls, access
controls, and threat detection.
Customization: IaaS enables users to alter the operating systems, application
stacks, and security settings of their virtualized instances to suit their unique
requirements.

IaaS, or infrastructure as a service, is a cloud computing model that offers users virtualized computer resources on a pay-per-use basis.

Users can scale their resources up or down in accordance with their demands
while taking advantage of high availability, security, and customization
possibilities.
Advantages of IaaS
There are the following advantages of the IaaS computing layer -

1.Shared infrastructure:
IaaS allows multiple users to share the same physical infrastructure.

2. Web access to the resources
IaaS allows IT users to access resources over the internet.

3. Pay-as-per-use model
IaaS providers provide services based on a pay-as-per-use basis. The users are
required to pay for what they have used.
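The pay-as-per-use model can be sketched as metered usage multiplied by a unit rate. The rate card and usage figures below are invented for illustration:

```python
# Sketch of pay-as-per-use billing: the provider meters each resource
# and charges usage * unit rate. All rates and usage figures are invented.
RATES = {
    "vm_hours": 0.05,     # $ per VM-hour
    "storage_gb": 0.02,   # $ per GB-month of storage
    "egress_gb": 0.09,    # $ per GB transferred out
}

def monthly_bill(usage):
    """Sum metered usage against the rate card."""
    return round(sum(usage[k] * RATES[k] for k in usage), 2)

# One VM running all month (720 h), 100 GB stored, 50 GB egress:
usage = {"vm_hours": 720, "storage_gb": 100, "egress_gb": 50}
print(monthly_bill(usage))  # 720*0.05 + 100*0.02 + 50*0.09 = 42.5
```

If the user shuts the VM down for half the month, only 360 VM-hours are metered, and the bill drops accordingly; nothing is paid for idle, unprovisioned capacity.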

4. Focus on the core business
IaaS allows organizations to focus on their core business rather than on IT infrastructure.

5. On-demand scalability
On-demand scalability is one of the biggest advantages of IaaS. Using IaaS, users
do not worry about upgrading software and troubleshooting issues related to
hardware components.
Disadvantages of IaaS
Security: In the IaaS context, security is still a major problem. Although IaaS
companies have security safeguards in place, it is difficult to achieve 100% protection.
To safeguard their data and applications, customers must verify that the necessary
security configurations and controls are in place.

Maintenance and Upgrade: The underlying infrastructure is maintained by IaaS service providers, but they are not required to automatically upgrade the operating systems or software used by client applications. Compatibility problems could result, making it harder for customers to maintain their current software.

Performance Variability: Due to shared resources and multi-tenancy, the performance of VMs in an IaaS system can vary.

Dependency on Internet Connectivity: IaaS is largely dependent on internet access. Any interruptions or connectivity problems could hinder access to cloud infrastructure and services, which would have an impact on productivity and business operations.

Cost Management: IaaS provides scalability and flexibility, but it can also make cost control difficult. To prevent unforeseen charges, customers must monitor and manage their resource utilization.
IDaaS-
Employees in a company need to log in to systems to perform various tasks. These systems may be based on a local server or cloud based. The following are problems that an employee might face:

Remembering different username and password combinations for accessing multiple servers.
If an employee leaves the company, each account of that user must be disabled, which increases the workload on IT staff.

To solve the above problems, a new technique emerged, known as Identity-as-a-Service (IDaaS).
IDaaS offers management of identity information as a digital entity. This identity can be used during electronic transactions.
Identity
Identity refers to a set of attributes associated with something that makes it recognizable. Two objects may have the same attributes, but their identities cannot be the same. A unique identity is assigned through a unique identification attribute.

There are several identity services deployed to validate entities such as web sites, transactions, transaction participants, clients, etc.
Identity-as-a-Service may include the following:

Directory services
Federated services
Registration
Authentication services
Risk and event monitoring
Single sign-on services
Identity and profile management
Single Sign-On (SSO)
To solve the problem of using different username and password combinations for different servers, companies now employ Single Sign-On software, which allows the user to log in only once and manages access to the other systems.

SSO has a single authentication server managing multiple accesses to other systems, as shown in the following diagram:
SSO Working:
There are several implementations of SSO. The following steps explain the working of a common Single Sign-On implementation:

1. The user logs into the authentication server using a username and password.
2. The authentication server returns the user's ticket.
3. The user sends the ticket to the intranet server.
4. The intranet server sends the ticket to the authentication server.
5. The authentication server sends the user's security credentials for that server back to the intranet server.

If an employee leaves the company, then disabling the user account at the authentication
server prohibits the user's access to all the systems.
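The steps above can be sketched as a toy ticket flow. The account data and token scheme here are simplified inventions; real SSO systems use cryptographically signed tickets, as in Kerberos or SAML:

```python
import secrets

# Toy single sign-on flow: the user authenticates once with the
# authentication server, receives a ticket, and presents that ticket
# to other servers, which validate it with the authentication server.
# Accounts and the token scheme are simplified for illustration.

class AuthServer:
    def __init__(self, accounts):
        self.accounts = accounts          # username -> password
        self.tickets = {}                 # ticket -> username

    def login(self, user, password):      # steps 1-2: authenticate, issue ticket
        if self.accounts.get(user) == password:
            ticket = secrets.token_hex(8)
            self.tickets[ticket] = user
            return ticket
        return None

    def validate(self, ticket):           # steps 4-5: check a presented ticket
        return self.tickets.get(ticket)

    def disable(self, user):              # leaving employee: one account to disable
        self.accounts.pop(user, None)
        self.tickets = {t: u for t, u in self.tickets.items() if u != user}

class IntranetServer:
    def __init__(self, auth):
        self.auth = auth

    def access(self, ticket):             # step 3: user presents the ticket
        user = self.auth.validate(ticket)
        return f"welcome {user}" if user else "access denied"

auth = AuthServer({"alice": "s3cret"})
intranet = IntranetServer(auth)
ticket = auth.login("alice", "s3cret")
print(intranet.access(ticket))            # welcome alice
auth.disable("alice")                     # one disable cuts off every system
print(intranet.access(ticket))            # access denied
```

Note how disabling the account at the single authentication server immediately revokes access everywhere, which is exactly the administrative benefit described above.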
Federated Identity Management (FIDM)
FIDM describes the technologies and protocols that enable a user to package security credentials across security domains. It uses the Security Assertion Markup Language (SAML) to package a user's security credentials, as shown in the following diagram:

OpenID
It allows users to log in to multiple websites with a single account. Google, Yahoo!, Flickr, MySpace, and WordPress.com are some of the companies that support OpenID.
IDaas interoperability:

Identity as a Service (IDaaS) interoperability allows organizations to integrate identity services into applications with minimal development effort. IDaaS is a cloud-based service model that provides identity and access management (IAM) services to organizations. It helps organizations manage user authentication and authorization for their cloud applications and services.

IDaaS interoperability includes services such as:

User-centric authentication: Usually in the form of information cards, supported by the OpenID and CardSpace specifications.
XACML policy language: A general-purpose authorization policy language that enables a distributed ID system to write and enforce custom policy expressions.
APIs: Assist in interoperability with other security software tools.
PART B
Understanding Abstraction and
Virtualization
Using Virtualization Technologies
Load Balancing and Virtualization
Understanding Hypervisors
Understanding Machine Imaging
Abstraction and Virtualization

Abstraction makes it possible to encapsulate the physical implementation so that the technical details may be concealed from the customers.

Virtualization makes it possible to create a virtual representation of anything, which may include computer resources, a virtual computer hardware platform, or storage devices.
Virtualization
Virtualization is a technology that allows creating an abstraction (a
virtual version) of computer resources, such as hardware
architecture, operating system, storage, network, etc. With this
abstraction, for example, a single machine can act like many
machines working independently.

The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.

Virtualization is not a new concept or technology in computer science. The virtual machine concept has existed since the 1960s, when it was first developed by IBM to provide concurrent, interactive access to a mainframe computer.
VMM(Virtual Machine Monitor)
• VMM is the primary software behind virtualization
environments and implementations. When installed over a
host machine, VMM facilitates the creation of VMs, each with
separate operating systems (OS) and applications. VMM
manages the backend operation of these VMs by allocating
the necessary computing, memory, storage and other
input/output (I/O) resources.
• VMM also provides a centralized interface for managing the
entire operation, status and availability of VMs that are
installed over a single host or spread across different and
interconnected hosts.
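The VMM's allocation role can be sketched as simple bookkeeping against the host's capacity. The host size and VM sizes below are invented for illustration:

```python
# Toy VMM bookkeeping: the monitor creates VMs only while the host
# still has enough CPU and memory, and tracks what each VM was given.
# Host capacity and VM sizes are invented for illustration.

class VMM:
    def __init__(self, cpus, mem_gb):
        self.free = {"cpus": cpus, "mem_gb": mem_gb}
        self.vms = {}

    def create_vm(self, name, cpus, mem_gb):
        """Allocate resources to a new VM if the host can supply them."""
        if cpus <= self.free["cpus"] and mem_gb <= self.free["mem_gb"]:
            self.free["cpus"] -= cpus
            self.free["mem_gb"] -= mem_gb
            self.vms[name] = {"cpus": cpus, "mem_gb": mem_gb}
            return True
        return False  # host is out of capacity

    def destroy_vm(self, name):
        """Return a VM's resources to the host pool."""
        vm = self.vms.pop(name)
        self.free["cpus"] += vm["cpus"]
        self.free["mem_gb"] += vm["mem_gb"]

vmm = VMM(cpus=8, mem_gb=32)
print(vmm.create_vm("linux-vm", cpus=4, mem_gb=16))    # True
print(vmm.create_vm("windows-vm", cpus=4, mem_gb=16))  # True
print(vmm.create_vm("extra-vm", cpus=2, mem_gb=8))     # False: host exhausted
```

A real VMM schedules I/O and enforces isolation as well; this sketch only shows the capacity-accounting side of its job.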
About API,ABI, ISA
Application Binary Interface (ABI):
•Application Binary Interface works as an interface between the
operating system and application programs in the context of
object/binary code.
ABI handles the following:
•Calling conventions
•Data types
•How function arguments are passed
•How function return values are retrieved
•Program libraries
•The binary format of object files
•Exception propagation
•Register use
Application Program Interface(API):
Application Program Interface works as an interface
between the operating system and application
programs in the context of source code.
Instruction Set Architecture (ISA):
•An Instruction Set Architecture (ISA) is part of the abstract model of a computer that defines how the CPU is controlled by the software.
•ISA works as an intermediate interface between computer software and computer hardware.
VIRTUALIZATION SCENARIOS
a) Server Consolidation: To consolidate workloads of multiple under-utilized
machines to fewer machines to save on hardware, management, and
administration of the infrastructure.

b) Application consolidation: A legacy application might require newer hardware and/or operating systems. The needs of such legacy applications can be served well by virtualizing the newer hardware and providing access to it.

c) Sandboxing: Virtual machines are useful for providing secure, isolated environments (sandboxes: a sandbox is an isolated testing environment that enables users to run programs or open files without affecting the application, system or platform on which they run) for running foreign or less-trusted applications. Virtualization technology can thus help build secure computing platforms.

d) Multiple execution environments: Virtualization can be used to create multiple execution environments (in all possible ways) and can increase QoS by guaranteeing a specified amount of resources.
e) Virtual hardware: It can provide the hardware one never had, e.g. Virtual
SCSI (small computer system interface) drives, Virtual ethernet adapters,
virtual ethernet switches and hubs, and so on.

f) Multiple simultaneous OS: It can provide the facility of having multiple simultaneous operating systems that can run many different kinds of applications.

g) Debugging: It can help debug complicated software such as an operating system or a device driver by letting the user execute it on an emulated PC with full software control.

h) Software Migration: It eases the migration of software and thus helps mobility.
MORE VIRTUALIZATION TECHNIQUES
Virtualization techniques can be applied at different layers in a
computer stack: hardware layer (including resources such as the
computer architecture, storage, network, etc.), the operating
system layer and application layer. Examples of virtualization
types are:
1)Emulation (EM)
2)Native Virtualization (NV) or Full Virtualization
3)Para virtualization (PV)
4)Operating System Level Virtualization (OSLV)
5)Resource Virtualization (RV)
6)Application Virtualization (AV)
EMULATION (EM)

1)A typical computer consists of processors, memory chips, buses, hard drives,
disk controllers, timers, multiple I/O devices, and so on.
2)An emulator tries to execute instructions issued by the guest machine (the
machine that is being emulated) by translating them to a set of native
instructions and then executing them on the available hardware.
3)A program can be run on different platforms, regardless of the processor
architecture or operating system (OS). EM provides flexibility in that the guest
OS may not have to be modified to run on what would otherwise be an
incompatible architecture.
4)The performance penalty involved in EM is significant because each
instruction on the guest system must be translated to the host system.
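The translate-then-execute loop, and its per-instruction cost, can be sketched with an invented guest instruction set:

```python
# Toy emulator: each instruction of an invented guest ISA is translated
# to a native Python operation before execution, illustrating why
# emulation pays a translation cost on every guest instruction.

def run_guest(program):
    regs = {"r0": 0, "r1": 0}
    # translation table: guest opcode -> native operation
    native = {
        "LOAD": lambda r, v: regs.__setitem__(r, v),
        "ADD":  lambda r, v: regs.__setitem__(r, regs[r] + v),
        "SHL":  lambda r, v: regs.__setitem__(r, regs[r] << v),
    }
    for opcode, reg, operand in program:
        native[opcode](reg, operand)   # translate, then execute natively
    return regs

guest_program = [
    ("LOAD", "r0", 5),   # r0 = 5
    ("ADD",  "r0", 3),   # r0 = 8
    ("SHL",  "r0", 2),   # r0 = 32
]
print(run_guest(guest_program)["r0"])  # 32
```

Real emulators amortize this overhead with techniques such as dynamic binary translation, which caches translated blocks instead of re-translating every instruction.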
Native Virtualization (NV) or Full Virtualization:
1)In NV, a virtual machine is used to simulate a complete hardware
environment in order to allow the operation of an unmodified
operating system for the same type of CPU to execute in
complete isolation within the Virtual Machine Monitor (VMM or
Hypervisor).
2)An important issue with this approach is that some CPU
instructions require additional privileges and may not be executed
in user space thus requiring the VMM to analyze executed code
and make it safe on-the-fly.
3)NV can be seen as a middle ground between full emulation and
para virtualization, and requires no modification of the guest
OS to enhance virtualization capabilities.
PARAVIRTUALIZATION (PV)
1)In this technique a modified guest OS is able to speak directly to
the VMM.
2)A successful para virtualized platform may allow the VMM to be
simpler (by relocating execution of critical tasks from the virtual
domain to the host domain), and/or reduce the overall
performance degradation of machine execution inside the
virtual-guest.
3)Para virtualization requires the guest operating system to be
explicitly ported for the para virtualization-API.
4)A conventional OS distribution which is not para
virtualization-aware cannot be run on top of a para virtualizing
VMM.
OPERATING SYSTEM LEVEL VIRTUALIZATION (OSLV)

1)A server virtualization method where the kernel of an operating
system allows for multiple isolated user-space instances, instead
of just one.
2)It does provide the ability for user-space applications (that would
be able to run normally on the host OS) to run in isolation from
other software.
3)Most implementations of this method can define resource
management for the isolated instances.
RESOURCE VIRTUALIZATION (RV)
A method in which specific resources of a host system are used
by the Guest OS. These may be software based resources such as
domain names, certificates, etc., or hardware based for example
storage and network virtualization.
1)Storage Virtualization (SV): SV provides a single logical disk from
many different systems that could be connected by a network.
This virtual disk can then be made available to Host or Guest
OS's. Storage systems can provide either block accessed storage,
or file accessed storage.
2)Network Virtualization (NV): It is the process of combining
hardware and software network resources and network
functionality into a single, software-based administrative entity,
a virtual network
APPLICATION VIRTUALIZATION (AV)

1)Refers to software technologies that improve portability,
manageability and compatibility of applications by encapsulating
them from the underlying operating system on which they are
executed.
2)The Java Virtual Machine (JVM), Microsoft .NET CLR are
examples of this type of virtualization.
Key Enablers of virtualization
Virtualization in the Cloud is the key enabler of the first four of
five key attributes of cloud computing:
1) Service-based: A service-based architecture is where clients
are abstracted from service providers through service interfaces.
2) Scalable and elastic: Services can be altered to affect capacity
and performance on demand.
3) Shared services: Resources are pooled in order to create
greater efficiencies.
4)Metered usage: Services are billed on a usage basis.
5)Internet delivery: The services provided by cloud computing
are based on Internet protocols and formats
Hypervisor
• A hypervisor is software that you can use to run multiple
virtual machines on a single physical machine. Every virtual
machine has its own operating system and applications. The
hypervisor allocates the underlying physical computing
resources such as CPU and memory to individual virtual
machines as required.
• Hypervisors are the underlying technology behind
virtualization or the decoupling of hardware from software. IT
administrators can create multiple virtual machines on a
single host machine. Each virtual machine has its own
operating system and hardware resources such as a CPU, a
graphics accelerator, and storage. You can install software
applications on a virtual machine, just like you do on a
physical computer.
Benefits of a hypervisor

Hardware independence
A hypervisor abstracts the host's hardware from the operating software environment.
IT administrators can configure, deploy, and manage software applications without
being constrained to a specific hardware setup. For example, you can run macOS
in a virtual machine instead of on a Mac computer.
Efficiency
Hypervisors make setting up a server operating system more efficient. Manually
installing the operating system and related software components is a time-consuming
process. Instead, you can configure the hypervisor to immediately create your virtual
environment.
Scalability
Organizations use hypervisors to maximize resource usage on physical computers.
Instead of using separate machines for different workloads, hypervisors create
multiple virtual computers to run several workloads on a single machine.
Type 1 Hypervisor
The hypervisor runs directly on the underlying host system. It is also known as a “Native
Hypervisor” or “Bare metal hypervisor”. It does not require any base server operating
system. It has direct access to hardware resources. It replaces the host operating system, and
the hypervisor schedules VM services directly to the hardware.
The Type 1 hypervisor is most commonly used in enterprise data centers and other
server-based environments.
Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, and Microsoft
Hyper-V hypervisor.

Type 2 Hypervisor
A host operating system runs on the underlying host system. It is also known as a "Hosted
Hypervisor". Such hypervisors do not run directly on the underlying hardware;
instead, they run as an application on a host operating system (physical machine). Basically,
the software is installed on an operating system, and the hypervisor asks the operating
system to make hardware calls. Examples of Type 2 hypervisors include VMware Player and
Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs. The Type 2
hypervisor is very useful for engineers and security analysts (for checking malware,
malicious source code, and newly developed applications).
Individual users who wish to operate multiple operating systems on a personal computer
should use a Type 2 hypervisor.
Choosing the right hypervisor :
Understand your needs: Your company and you can have different needs for a
virtualization hypervisor. They are as follows:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support

The cost of a hypervisor: While a number of entry-level solutions are free, or
practically free, the prices at the opposite end of the market can be staggering.
Licensing frameworks also vary, so it’s important to be aware of exactly what you’re
getting for your money.

Virtual machine performance: Virtual systems should meet or exceed the
performance of their physical counterparts, at least in relation to the applications
within each server. Everything beyond meeting this benchmark is profit.
Ecosystem: We need to check whether the hypervisor’s ecosystem
(the availability of documentation, support, training,
third-party developers and consultancies, and so on) is
cost-effective in the long term.

Test for yourself: You can gain basic experience from your
existing desktop or laptop. You can run both VMware vSphere
and Microsoft Hyper-V in either VMware Workstation or
VMware Fusion to create a nice virtual learning and testing
environment.
Differences between Type 1 Hypervisor and Type 2 Hypervisor
Load Balancing
Load Balancing means the ability to distribute the workload
across multiple computing resources for an overall performance
increase.
It represents the ability to transfer any portion of the processing
for a system request to another independent system that will
handle it concurrently. E.g. Web/Database Server.
Cloud computing provides services with the help of the internet. No
matter where you access a service from, you are directed to the
available resources.
The technology used to distribute service requests to resources is
referred to as load balancing.
Load balancing techniques can be implemented in hardware or in
software. With load balancing, reliability is increased by using
multiple components instead of a single component.
Load Balancing and Virtualization
1) Optimization Technique
2) Increase Resource utilization
3) Lower Latency
4) Reduce response time
5) Avoid System Overload
6) Maximize throughput
7) Increased Reliability
The different network resources that can be load balanced are as
follows:
1.Storage resources
2.Connections through intelligent switches
3.Processing through computer system assignment
4.Access to application instances
5.Network interfaces and services such as DNS, FTP, and HTTP
In load balancing, scheduling algorithms are used to assign
resources. The scheduling algorithms in use include round robin
and weighted round robin, fastest response time, least connections
and weighted least connections, and custom assignments.
It is the responsibility of the load balancer to listen for
service requests.
When a service request arrives, the load balancer uses a
scheduling algorithm to assign resources to that particular
request.
A load balancer acts as a workload manager.
The load balancer generates a session ticket for a particular
client so that subsequent requests from the same client can be
routed to the same resource.
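The two responsibilities above, a scheduling algorithm plus session tickets for client stickiness, can be sketched as follows. The server names and client IDs are hypothetical, and plain round robin stands in for the other algorithms listed:

```python
import itertools

class LoadBalancer:
    """Minimal round-robin load balancer with sticky sessions (illustrative
    sketch; a production balancer also health-checks its backends)."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)   # round-robin scheduler
        self._sessions = {}                      # session ticket -> server

    def assign(self, client_id):
        # A client with a known session ticket is routed back to the same
        # resource; a new client gets the next server in the rotation.
        if client_id not in self._sessions:
            self._sessions[client_id] = next(self._cycle)
        return self._sessions[client_id]

lb = LoadBalancer(["web-1", "web-2", "web-3"])
print(lb.assign("alice"))  # web-1
print(lb.assign("bob"))    # web-2
print(lb.assign("alice"))  # web-1 again (sticky session)
```

Weighted round robin would differ only in how the rotation is built, e.g. repeating a server in the cycle in proportion to its weight.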
Understanding Machine Imaging
A machine image is a Compute Engine resource that stores all the
configuration, metadata, permissions, and data from multiple
disks of a virtual machine (VM) instance. You can use a machine
image in many system maintenance, backup and recovery, and
instance cloning scenarios.
Machine imaging is a process that is used to provide system
portability, and provision and deploy systems in the cloud
through capturing the state of systems using a system image.
A system image makes a copy or a clone of the entire computer
system inside a single file. The image is made by using a
system imaging program and can be used later to restore the
system.
For example, the Amazon Machine Image (AMI) is a system image
used in cloud computing.
The Amazon Web Services uses AMI to store copies of a virtual
machine. An AMI is a file system image that contains an
operating system, all device drivers, and any applications and
state information that the working virtual machine would have.

The AMI files are encrypted and compressed for security purposes
and stored in Amazon S3 (Simple Storage Service) buckets as a
set of 10MB chunks. Machine imaging is mostly run on
virtualization platforms; because of this, a machine image is also
called a virtual appliance, and running virtual machines are called instances.
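The chunked storage described above is simple arithmetic: an image is split into fixed 10 MB pieces, with the last piece possibly partial. A minimal sketch (the 8 GB image size is a made-up example):

```python
# Chunk arithmetic for an image stored in S3 as a series of 10 MB chunks.

CHUNK_MB = 10

def chunk_count(image_size_mb):
    """Number of 10 MB chunks needed for an image (last chunk may be partial)."""
    return -(-image_size_mb // CHUNK_MB)  # ceiling division

print(chunk_count(8192))  # an 8 GB (8192 MB) image -> 820 chunks
```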
Part C
Capacity Planning
Topics:
▪Capacity Planning
▪Defining Baseline and metrics
▪Network capacity
Capacity Planning
Capacity planning seeks to match demand to the available resources.
It determines whether the systems are working properly, is used to
measure their performance, determines usage patterns,
and predicts future demand for cloud capacity.
It also adds expert planning for improvement
and optimized performance.
The goal of capacity planning is to accommodate the workload,
not to improve efficiency. Performance tuning
and work optimization are not the major targets of capacity
planners.
It measures the maximum amount of work a system can perform.
Capacity planning for cloud technology offers systems
more enhanced capabilities, along with some new challenges over a
purely physical system.
Goals of capacity planners
• Capacity planners try to find the solution to meet future
demands on a system by providing additional capacity to fulfill
those demands.
• Capacity planning & system optimization are two
different concepts, and you mustn't mix them as one.
Performance & capacity are two different attributes of a
system.
• Cloud 'capacity' measures and concerns how much
workload a system can hold, whereas 'performance' deals with
the rate at which tasks get performed.
Capacity planning steps
1) Determine the distinctiveness of the present system.
2) Determine the working load for different resources in the
system such as CPU, RAM, network, etc.
3) Load the system until it is overloaded, and state what is
required to uphold acceptable performance.
4) Predict the future based on older statistical reports & other
factors.
5) Deploy resources to meet the predictions & calculations.
6) Repeat steps (1) through (5) as a loop.
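Step 4 above (predict the future based on older statistical reports) can be sketched as a simple linear trend fit over historical peak-load samples. The monthly figures are invented for illustration; real planners would also weigh seasonality and business factors:

```python
# Linear trend forecast over equally spaced historical samples
# (least-squares fit; sample data is hypothetical).

def forecast(history, periods_ahead):
    """Fit y = intercept + slope*x over the samples and extrapolate."""
    n = len(history)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# Monthly peak requests/sec for six months; project three months out,
# then deploy resources (step 5) sized for the projected demand.
peaks = [100, 110, 121, 130, 142, 150]
print(round(forecast(peaks, 3)))
```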
Defining Baseline and Metrics
• In business, the current system capacity or workload should
be determined as a measurable quantity over time.
• Many developers create cloud-based applications and Web
sites based on a LAMP solution stack
Linux: the operating system
Apache HTTP Server: the web server
MySQL: the database server
PHP (Hypertext Preprocessor): the scripting language
All four technologies are open-source software.
Baseline Measurements
There are two important overall workload metrics
in this LAMP system:
•Page views or hits on the Web site, as measured
in hits per second.
•Transactions completed on the database server,
as measured by transactions per second or
perhaps by queries per second
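Both baseline metrics above are per-second rates derived from raw counters sampled over a measurement window. A minimal sketch, with made-up counter values:

```python
# Converting raw counters into the two baseline workload metrics:
# hits/sec for the web server and transactions/sec for the database.

def rate_per_second(count, window_seconds):
    """Convert an event count over a window into a per-second rate."""
    return count / window_seconds

page_hits = 90_000          # hits served during the window (hypothetical)
db_transactions = 27_000    # transactions committed during the window
window = 3600               # one-hour measurement window, in seconds

print(rate_per_second(page_hits, window))        # 25.0 hits/sec
print(rate_per_second(db_transactions, window))  # 7.5 transactions/sec
```

Tracking these rates over time gives the baseline against which future demand and load-test results are compared.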
Load Testing
• The server administrator checks servers under load for system
metrics to give capacity planners enough information to do
meaningful capacity planning. Capacity planners should know
about increases in load on the system. Load testing needs
to answer the following questions:
• What is the optimum load that the system can support?
• What actually blocks the current system & limits the system's
performance?
• Can the configuration be altered in the server to make better use of capacity?
• How will the server react concerning performance with other
servers having different characteristics?
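The first question above (the optimum load the system can support) can be sketched as a ramp test against a modeled server. The capacity figure is a made-up assumption; a real test would drive live servers with one of the tools listed below:

```python
# Ramp-test sketch: step up the offered load on a modeled server and flag
# the point where it saturates (utilization > 100%).

CAPACITY_RPS = 200  # assumed maximum requests/sec the server can sustain

def ramp_test(loads_rps):
    """Return (load, utilization, overloaded) for each offered load level."""
    results = []
    for load in loads_rps:
        utilization = load / CAPACITY_RPS
        results.append((load, utilization, utilization > 1.0))
    return results

for load, util, overloaded in ramp_test([50, 100, 200, 250]):
    print(f"{load:4d} rps  utilization={util:.0%}  overloaded={overloaded}")
```

The last load level at which the system is not overloaded approximates the optimum load; the resource that hits 100% first is the bottleneck that limits performance.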
Load Testing Solutions
• LoadView
• Flood IO
• Loader
• BlazeMeter
• LoadFocus
• k6
• LoadNinja
• Gatling
Network Capacity Planning
• Network capacity planning is the process of
planning a network for utilization, bandwidth,
operations, availability and other network
capacity constraints.
• It is a type of network or IT management
process that assists network administrators in
planning for network infrastructure and
operations in line with current and future
operations.
Network capacity planning is generally done to identify
shortcomings or parameters that can affect the
network’s performance or availability within a
predictable future time, usually in years.
Typically, network capacity planning requires
information about:
•Current network traffic volumes
•Network utilization
•Type of traffic
•Capacity of current infrastructure
This analysis helps network administrators understand
the maximum capability of current resources and the
amount of new resources needed to cater to future
requirements.
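The analysis described above boils down to comparing measured traffic with link capacity and projecting growth over the planning horizon. A minimal sketch; all figures, the growth rate, and the 70% utilization target are assumptions, not standards:

```python
# Network capacity planning sketch: current utilization, plus the capacity
# needed so projected traffic stays below a target utilization.

def utilization(traffic_mbps, capacity_mbps):
    """Fraction of link capacity consumed by current traffic."""
    return traffic_mbps / capacity_mbps

def capacity_needed(traffic_mbps, annual_growth, years, max_utilization=0.7):
    """Capacity required for projected traffic at a target utilization
    (70% is an assumed planning threshold, not a fixed rule)."""
    projected = traffic_mbps * (1 + annual_growth) ** years
    return projected / max_utilization

# Hypothetical: 600 Mbps of traffic on a 1 Gbps link, growing 25% per year.
print(f"{utilization(600, 1000):.0%}")       # current utilization
print(round(capacity_needed(600, 0.25, 3)))  # Mbps needed in three years
```

Comparing the second figure with the current 1000 Mbps link shows the shortfall that new resources must cover within the planning window.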
