UNIT 1-Cloud Computing


SARANATHAN COLLEGE OF ENGINEERING

DEPARTMENT OF CSE
CS8791 CLOUD COMPUTING
IV YEAR – VII SEMESTER
UNIT I INTRODUCTION
Introduction to Cloud Computing – Definition of Cloud – Evolution of Cloud Computing – Underlying
Principles of Parallel and Distributed Computing – Cloud Characteristics – Elasticity in Cloud – On-
demand Provisioning.

UNIT II CLOUD-ENABLING TECHNOLOGIES


Service Oriented Architecture – REST and Systems of Systems – Web Services – Publish-Subscribe
Model – Basics of Virtualization – Types of Virtualization – Implementation Levels of Virtualization –
Virtualization Structures – Tools and Mechanisms – Virtualization of CPU – Memory – I/O Devices –
Virtualization Support and Disaster Recovery.

UNIT III CLOUD ARCHITECTURE, SERVICES, AND STORAGE


Layered Cloud Architecture Design – NIST Cloud Computing Reference Architecture – Public, Private,
and Hybrid Clouds – IaaS – PaaS – SaaS – Architectural Design Challenges – Cloud Storage – Storage-as-
a-Service – Advantages of Cloud Storage – Cloud Storage Providers – S3.

UNIT IV RESOURCE MANAGEMENT AND SECURITY IN CLOUD


Intercloud Resource Management – Resource Provisioning and Resource Provisioning Methods –
Global Exchange of Cloud Resources – Security Overview – Cloud Security Challenges – Software-as-a-
Service Security – Security Governance – Virtual Machine Security – IAM – Security Standards.

UNIT V CLOUD TECHNOLOGIES AND ADVANCEMENTS


Hadoop – MapReduce – Virtual Box – Google App Engine – Programming Environment for Google App
Engine – Open Stack – Federation in the Cloud – Four Levels of Federation – Federated Services and
Applications – Future of Federation.
UNIT I – INTRODUCTION

Introduction to Cloud Computing – Definition of Cloud – Evolution of Cloud Computing –


Underlying Principles of Parallel and Distributed Computing – Cloud Characteristics –
Elasticity in Cloud – On-demand Provisioning.

INTRODUCTION TO CLOUD COMPUTING:


 A data centre or data center is a large group of networked computer servers typically
used by organizations for the remote storage, processing, or distribution of large amounts
of data.
 An on-premises data centre is a group of servers that is privately owned and controlled
by an individual or an organization. Managing an on-premises data centre is not an easy
task: purchasing and installing the hardware, virtualization, installing the operating
system and other required applications, configuring the network and firewall, and setting
up the storage for the data all have to be handled in-house, which constitutes the main
disadvantage of such data centres.
 To overcome these disadvantages of data centres, cloud computing is adopted.
 Difference between classical computing and cloud computing:

COST MODEL:
VARIOUS ASPECTS OF CLOUD COMPUTING:

DEFINITION OF CLOUD:
Cloud:
 "The cloud" refers to servers that can be accessed over the Internet, and the software and
databases that run on those servers.
 Cloud servers are located in data centres all over the world. And the cloud servers that
provide cloud services are called cloud vendors.
 By using cloud computing, users and companies do not have to manage physical servers
themselves or run software applications on their own machines instead the wide variety
of services provided can be rented based to the needs and the user will be charged
accordingly.
Important concepts in Cloud:
1. Abstraction:

 Cloud Computing abstracts the details of the system implementation from users
and developers.
 Applications run on physical systems that aren’t specified.
 Data is stored in locations that are unknown.
 The administration of the system is outsourced to others.

2. Virtualization:

 Cloud Computing virtualizes systems by pooling and sharing resources.


 Systems and storage can be provisioned as needed from a centralized
infrastructure.

Cloud Computing:
 Definition – Cloud computing is a model for enabling ubiquitous, convenient, on-
demand network access to a shared pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services) that can be rapidly provisioned and
released with minimal management effort or service provider interaction.
 Cloud Computing is composed of
1) 5 essential characteristics
2) 4 deployment models
3) 3 service models
TYPES OF CLOUD:
1) Public cloud
2) Private cloud
3) Hybrid cloud
4) Community cloud
Public Cloud:
 Public clouds are managed by third parties which provide cloud services over the
internet to the public; these services are offered on a pay-as-you-go billing model.
 The fundamental characteristic of public clouds is multitenancy.
 A public cloud is meant to serve multiple users, not a single customer. Each user requires a
virtual computing environment that is separated, and most likely isolated, from other
users.

Private Cloud:

 Private clouds are distributed systems that work on private infrastructure and provide
the users with dynamic provisioning of computing resources.
 Instead of a pay-as-you-go model, private clouds may use other schemes that meter usage
and proportionally bill the different departments or sections of an enterprise. Private
cloud providers include HP Data Centers, Ubuntu, Elastic-Private cloud, Microsoft, etc.
Hybrid Cloud:
 A hybrid cloud is a heterogeneous distributed system formed by combining the facilities of
a public cloud and a private cloud. For this reason, hybrid clouds are also called
heterogeneous clouds.
 A major drawback of private deployments is the inability to scale on demand and
efficiently address peak loads; this is where public clouds are needed. Hence, a hybrid cloud
takes advantage of both public and private clouds.

Community Cloud:
 Community clouds are distributed systems created by integrating the services of
different clouds to address the specific needs of an industry, a community, or a business
sector. But sharing responsibilities among the organizations is difficult.
 In the community cloud, the infrastructure is shared between organizations that have
shared concerns or tasks. The cloud may be managed by an organization or a third
party.
TYPES OF CLOUD SERVICES:

1) Infrastructure as a Service (IaaS)


2) Platform as a Service (PaaS)
3) Software as a Service (SaaS)

Infrastructure as a Service (IaaS):


 This model puts together the infrastructure demanded by users, namely servers, storage,
networks, and the data center fabric.
 The user can deploy and run specific applications on multiple VMs running guest
operating systems.
 The user does not manage or control the underlying cloud infrastructure, but can
specify when to request and release the needed resources.

Platform as a Service (PaaS):

 This model enables the user to deploy user-built applications onto a virtualized
cloud platform. PaaS includes middleware, databases, development tools, and
some runtime support such as Web 2.0 and Java.
 The platform includes both hardware and software integrated with specific
programming interfaces.
 The provider supplies the API and software tools (e.g., Java, Python, Web 2.0,
.NET). The user is freed from managing the cloud infrastructure.
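For illustration only, the sketch below shows the kind of user-built application code a PaaS runs; Flask is an assumed choice here, and the platform (for example Google App Engine's Python runtime) supplies the servers, runtime, and scaling around it, so the developer ships only code like this plus a small platform-specific configuration file.

# A minimal sketch of a "user-built application" deployed to a PaaS.
# The platform, not the developer, manages the servers that execute it.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Which physical or virtual server runs this is decided by the platform.
    return "Hello from a PaaS-hosted application!"

if __name__ == "__main__":
    # Local testing only; in production the platform starts the app itself.
    app.run(host="127.0.0.1", port=8080)
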
Software as a Service (SaaS):
 This refers to browser-initiated application software delivered to thousands of paying
cloud customers. The SaaS model applies to business processes, industry applications,
customer relationship management (CRM), enterprise resource planning (ERP),
human resources (HR), and collaborative applications.
 On the customer side, there is no upfront investment in servers or software
licensing. On the provider side, costs are rather low compared with conventional
hosting of user applications.

EXAMPLE:
OBJECTIVES OF CLOUD DESIGN :
1) Shifting computing from desktops to data centers
2) Service provisioning and cloud economics
3) Scalability in performance
4) Data privacy protection
5) High-quality cloud services
6) New standards and interfaces
ADVANTAGES OF CLOUD COMPUTING:

1. Back-up and restore data:

Once data is stored in the cloud, it is easier to back it up and restore it using the cloud.
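As a hedged illustration, the sketch below backs up and restores a file with a cloud object store (Amazon S3 via the boto3 SDK); the file and bucket names are made-up placeholders, and the bucket and credentials are assumed to already exist.

# A minimal sketch of cloud back-up and restore with Amazon S3 via boto3.
# The bucket name is a placeholder; it must already exist and credentials
# must be configured for this to run.
import boto3

s3 = boto3.client("s3")

# Back up a document to the cloud...
s3.upload_file("report.docx", "my-backup-bucket", "backups/report.docx")

# ...and restore it later, from any machine with an internet connection.
s3.download_file("my-backup-bucket", "backups/report.docx", "restored_report.docx")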

2. Improved Collaboration:

Cloud applications improve collaboration by allowing groups of people to quickly and easily
share information in the cloud via shared storage.

3. Excellent Accessibility:

Cloud allows us to quickly and easily access stored information anywhere and at any time,
using an internet connection. An internet cloud infrastructure increases organizational
productivity and efficiency by ensuring that our data is always accessible.

4. Low Maintenance Cost:

Cloud computing reduces both hardware and software maintenance costs for organizations.

5. Mobility:
Cloud computing allows us to easily access all cloud data via mobile.

6. Services in the pay-per-use model:

Cloud computing offers Application Programming Interfaces (APIs) that let users access
services on the cloud and pay charges according to the usage of each service.

7. Unlimited Storage capacity:

Cloud offers us a huge amount of storage capacity for storing our important data such as
documents, images, audio, video, etc. in one place.

8. Data Security:

Data security is one of the biggest advantages of cloud computing. Cloud offers many
advanced features related to security and ensures that data is securely stored and handled.

DISADVANTAGES OF CLOUD COMPUTING:


1. Internet Connectivity:

As you know, in cloud computing every piece of data (image, audio, video, etc.) is stored on
the cloud, and we access this data over an internet connection. If you do not have good
internet connectivity, you cannot access this data, and there is no other way to reach data
kept in the cloud.

2. Vendor Lock-In:

Vendor lock-in is the biggest disadvantage of cloud computing. Organizations may face
problems when transferring their services from one vendor to another. Because different
vendors provide different platforms, moving from one cloud to another can be difficult.

3. Limited Control:

As we know, cloud infrastructure is completely owned, managed, and monitored by the


service provider, so the cloud users have less control over the function and execution of
services within a cloud infrastructure.
4. Security:

Although cloud service providers implement the best security standards to store
important information, before adopting cloud technology you should be aware that you
will be sending all of your organization's sensitive information to a third party, i.e., a
cloud computing service provider. While the data is being sent to and stored in the cloud,
there is a chance that your organization's information could be hacked.

EVOLUTION OF CLOUD COMPUTING:


1. Distributed Systems:

 It is a composition of multiple independent systems but all of them are depicted as a


single entity to the users. The purpose of distributed systems is to share resources and
also use them effectively and efficiently.

 Distributed systems possess characteristics such as scalability, concurrency, continuous


availability, heterogeneity, and independence in failures. But the main problem with
this system was that all the systems were required to be present at the same
geographical location.

 Thus, to solve this problem, distributed computing led to three more types of computing
and they were-Mainframe computing, cluster computing, and grid computing.

2. Mainframe Computing:

 Mainframes that first came into existence in 1951 are highly powerful and reliable
computing machines. These are responsible for handling large data such as massive
input-output operations. Even today these are used for bulk processing tasks such as
online transactions etc.
 These systems have almost no downtime with high fault tolerance. After distributed
computing, these increased the processing capabilities of the system. But these were
very expensive. To reduce this cost, cluster computing came as an alternative to
mainframe technology.

3. Cluster Computing:

 In the 1980s, cluster computing came as an alternative to mainframe computing. Each
machine in the cluster was connected to the others by a high-bandwidth network.
Clusters were far cheaper than mainframe systems and were equally capable of
high computations.

 Also, new nodes could easily be added to the cluster if it was required.

 Thus, the problem of cost was solved to some extent, but the problem of geographical
restrictions still persisted. To solve this, the concept of grid computing was
introduced.
4. Grid Computing:

 In the 1990s, the concept of grid computing was introduced. It is a type of computing in
which different systems, placed at entirely different geographical locations, are all
connected via the internet.
 These systems belonged to different organizations, so the grid consisted of
heterogeneous nodes. Although it solved some problems, new problems emerged as
the distance between the nodes increased.
 The main problem encountered was the low availability of high-bandwidth
connectivity, along with other network-related issues. Thus, cloud computing is often
referred to as the "successor of grid computing".

5. Virtualization:

 It was introduced nearly 40 years ago. It refers to the process of creating a virtual layer
over the hardware which allows the user to run multiple instances simultaneously on the
same hardware.
 It is a key technology used in cloud computing. It is the base on which major cloud
computing services such as Amazon EC2, VMware vCloud, etc. work. Hardware
virtualization is still one of the most common types of virtualization.
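As an optional illustration, the sketch below queries a local hypervisor through the libvirt Python bindings to list the virtual machine instances sharing one physical host; it assumes libvirt and a KVM/QEMU hypervisor are installed, which is an assumption beyond these notes.

# A minimal sketch of talking to a hypervisor programmatically with the
# libvirt Python bindings (assumes libvirt and a local KVM/QEMU hypervisor).
# It lists the VM instances that share a single physical machine.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"VM {dom.name()}: {state}")
finally:
    conn.close()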

6. Web 2.0:

 It is the interface through which the cloud computing services interact with the clients.
It is because of Web 2.0 that we have interactive and dynamic web pages.
 It also increases flexibility among web pages. Popular examples of web 2.0 include
Google Maps, Facebook, Twitter, etc. Needless to say, social media is possible because
of this technology only. It gained major popularity in 2004.

7. Service Orientation:

 It acts as a reference model for cloud computing. It supports low-cost, flexible, and
evolvable applications.
 Two important concepts were introduced in this computing model. These were Quality
of Service (QoS) which also includes the SLA (Service Level Agreement) and Software
as a Service (SaaS).
8. Utility Computing:

 It is a computing model that defines service provisioning techniques for compute
services, along with other major services such as storage and infrastructure, all of
which are provisioned on a pay-per-use basis.

UNDERLYING PRINCIPLES OF PARALLEL AND DISTRIBUTED COMPUTING :


Instead of using a centralized computer to solve computational problems, a parallel
and distributed computing system uses multiple computers to solve large-scale problems over
the Internet. Thus, distributed computing becomes data-intensive and network-centric.

The Age of Internet Computing:


Raw speed on high-performance computing (HPC) applications is no longer the
optimal measure of system performance. The emergence of cloud computing instead
demands high-throughput computing (HTC) systems built with parallel and distributed
computing technologies. Data centres have to be upgraded using fast servers,
storage systems, and high-bandwidth networks.

The Platform Evolution:

 1950 – 1970: Mainframes like IBM 360 and CDC 6400 were used widely.
 1960 – 1980: Lower-cost minicomputers such as the DEC PDP-11 and VAX
series.
 1970 – 1990: Widespread use of personal computers built with VLSI
microprocessor.
 1980 – 2000: Massive numbers of portable computers and pervasive devices
appeared in both wired and wireless applications
 Since 1990: The use of both HPC and HTC systems hidden in clusters, grids, or
Internet clouds has proliferated.
 On the HPC side, supercomputers (massively parallel processors or MPPs) are
gradually being replaced by clusters of cooperative computers, driven by the desire
to share computing resources. A cluster is often a collection of homogeneous
compute nodes that are physically connected in close range to one another.
 On the HTC side, peer-to-peer (P2P) networks are formed for distributed file
sharing and content delivery applications. A P2P system is built over many
client machines. Peer machines
are globally distributed in nature. P2P, cloud computing, and web service
platforms are more focused on HTC applications than on HPC applications.
Clustering and P2P technologies lead to the development of computational grids
or data grids.

 For many years, HPC systems have emphasized raw speed performance. The speed
of HPC systems increased from Gflops in the early 1990s to Pflops by 2010.
 The development of market-oriented high-end computing systems is undergoing
a strategic change from an HPC paradigm to an HTC paradigm. This HTC
paradigm pays more attention to high-flux computing. The main application for
high-flux computing is in Internet searches and web services by millions or more
users simultaneously. The performance goal thus shifts to measure high
throughput or the number of tasks completed per unit of time. HTC technology
needs to not only improve in terms of batch processing speed, but also address
the acute problems of cost, energy savings, security, and reliability at many data
and enterprise computing centers.
 Advances in virtualization make it possible to see the growth of Internet clouds
as a new computing paradigm. The maturity of radio-frequency identification
(RFID), Global Positioning System (GPS), and sensor technologies has triggered
the development of the Internet of Things (IoT). These new paradigms are only
briefly introduced here.
 The high-technology community has argued for many years about the precise
definitions of centralized computing, parallel computing, distributed computing,
and cloud computing. In general, distributed computing is the opposite of
centralized computing. The field of parallel computing overlaps with distributed
computing to a great extent, and cloud computing overlaps with distributed,
centralized, and parallel computing.

CHARACTERISTICS OF CLOUD:
1. On-demand self-service:

 A consumer can unilaterally provision computing capabilities, such as server


time and network storage, as needed automatically without requiring human
interaction with each service provider.
2. Broad network Access:

 Capabilities are available over the network and accessed through standard mechanisms
that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones,
tablets, laptops, and workstations).

3. Resource pooling:

 The provider’s computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand.
 There is a sense of location independence in that the customer generally has no control or
knowledge over the exact location of the provided resources but may be able to specify
location at a higher level of abstraction (e.g., country, state, or data center). Examples of
resources include storage, processing, memory, and network bandwidth.
4. Rapid Elasticity:

 Capabilities can be elastically provisioned and released, in some cases


automatically, to scale rapidly outward and inward commensurate with demand.
To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be appropriated in any quantity at any time.

5. Measured Service:

 Cloud systems automatically control and optimize resource use by leveraging a metering
capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts).
 Resource usage can be monitored, controlled, and reported, providing transparency for
both the provider and consumer of the utilized service.
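A small sketch of the metering idea follows, using made-up usage figures and rates rather than any provider's real pricing: the bill is derived directly from metered resource use.

# Measured service in miniature: metered usage per resource type multiplied
# by a rate per unit gives the charge. All numbers are invented examples.
usage = {"storage_gb_month": 120, "compute_hours": 350, "gb_transferred": 80}
rates = {"storage_gb_month": 0.02, "compute_hours": 0.05, "gb_transferred": 0.09}

bill = sum(usage[item] * rates[item] for item in usage)
for item in usage:
    print(f"{item}: {usage[item]} x {rates[item]} = {usage[item] * rates[item]:.2f}")
print(f"Total charge: {bill:.2f}")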

ELASTICITY IN THE CLOUD:


 Elasticity refers to the ability of a cloud to automatically expand (add) or
compress (remove) infrastructural resources (such as CPU cores, memory,
VM and container instances) when the requirement suddenly goes up or down, so
that the workload can be managed efficiently. This elasticity helps to minimize
infrastructural costs. It is a dynamic property.

 Elasticity is the degree to which a system is able to adapt to workload changes by


provisioning and deprovisioning resources in an autonomic manner, such that at
each point in time the available resources match the current demand as closely as
possible.

 Elasticity in the cloud is not applicable to all kinds of environments; it is helpful only in
scenarios where the resource requirements fluctuate up and down suddenly for a
specific time interval. It is not practical where a persistent resource infrastructure is
required to handle a heavy workload.

 It is most commonly used in pay-per-use, public cloud services, where IT managers are
willing to pay only for the duration for which they consumed the resources.
 Elasticity is built on top of scalability. It can be considered an automation of the
concept of scalability; it aims to optimize, as well and as quickly as possible, the
resources in use at a given time.

 Another term associated with elasticity is efficiency, which characterizes
how efficiently cloud resources are utilized as the system scales up or down. It is
measured by the amount of resources consumed to process a given amount of work:
the lower this amount, the higher the efficiency of the system.

 Elasticity also introduces another important factor: speed. Rapid
provisioning and deprovisioning are key to maintaining acceptable
performance in the context of cloud computing. Quality of service is subject to
a service level agreement.

 EXAMPLE:
Consider an online shopping site whose transaction workload increases during
the festive season. For this specific period of time, the resources need to be
scaled up. To handle this kind of situation, we can use Cloud Elasticity
rather than Cloud Scalability. As soon as the season is over, the deployed
resources can be released again.
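A toy sketch of this example follows, with invented demand figures and an assumed capacity of 100 requests per instance; it compares elastic provisioning against statically provisioning for the peak, which also illustrates the efficiency point above.

# Elastic capacity follows demand instead of being fixed at the peak.
# Demand numbers and per-instance capacity are invented for illustration.
demand = [200, 250, 300, 900, 1100, 1000, 400, 250]   # requests per hour
CAPACITY_PER_INSTANCE = 100

elastic_hours = 0
for load in demand:
    instances = -(-load // CAPACITY_PER_INSTANCE)     # ceiling division
    elastic_hours += instances
    print(f"load={load:4d} -> {instances} instance(s)")

# Static provisioning keeps enough instances for the peak the whole time.
static_hours = len(demand) * (-(-max(demand) // CAPACITY_PER_INSTANCE))
print(f"Elastic instance-hours: {elastic_hours}, static (peak-sized): {static_hours}")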

Classification of Elasticity in Cloud:

Elasticity solutions can be arranged in different classes based on

1. Scope
2. Policy
3. Purpose
4. Method
Scope
 Elasticity can be implemented on any of the cloud layers.

 Most commonly, elasticity is achieved on the IaaS level, where the


resources to be provisioned are virtual machine instances. Other
infrastructure services can also be scaled.

 On the PaaS level, elasticity consists in scaling containers or databases, for instance.
Finally, both PaaS and IaaS elasticity can be used to implement
elastic applications, be it for private use or in order to be provided as SaaS.

 The elasticity actions can be applied either at the infrastructure or


application/platform level. The elasticity actions perform the decisions
made by the elasticity strategy or management system to scale the
resources.

 Google App Engine and Azure elastic pool are examples of elastic Platform
as a Service (PaaS).

 Elasticity actions can be performed at the infrastructure level, where an
elasticity controller monitors the system and takes decisions. Cloud
infrastructures are based on virtualization technology, using either
VMs or containers. With embedded elasticity, elastic applications are able
to adjust their own resources according to runtime requirements or to
changes in the execution flow; this requires knowledge of the source code
of the applications.

 Application Map: The elasticity controller must have a complete map of the
application components and instances.

 Code embedded: The elasticity controller is embedded in the application source code.

 The elasticity actions are performed by the application itself. Moving the
elasticity controller into the application source code eliminates the need for external
monitoring systems, but a specialized controller is required for each application.

Policy

 Elastic solutions can be either manual or automatic. A manual elastic solution
provides its users with tools to monitor their systems and add or remove resources,
but leaves the scaling decision to them.

 Automatic mode: All the actions are done automatically, and this could
be classified into reactive and proactive modes. Elastic solutions can be
either reactive or predictive.

 Reactive mode: The elasticity actions are triggered based on certain


thresholds or rules, the system reacts to the load (workload or resource
utilization) and triggers actions to adapt changes accordingly.
 Proactive mode: This approach implements forecasting techniques,
anticipates the future needs and triggers actions based on this anticipation.

 An elastic solution is reactive when it scales a posteriori, based on a


monitored change in the system. These are generally implemented by a set of
Event-Condition-Action rules. A predictive or proactive elasticity solution
uses its knowledge of either recent history or load patterns inferred from
longer periods of time in order to predict the upcoming load of the system
and scale according to it.
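The sketch below contrasts the two automatic policies; the 80%/30% thresholds and the three-sample moving-average forecast are arbitrary illustrative choices, not values from any real autoscaler.

# Reactive vs. proactive scaling decisions (illustrative thresholds only).
def reactive_decision(cpu_utilization: float) -> str:
    """Event-Condition-Action style rule reacting to the current load."""
    if cpu_utilization > 0.80:
        return "scale_out"
    if cpu_utilization < 0.30:
        return "scale_in"
    return "no_action"

def proactive_decision(recent_loads: list) -> str:
    """Forecast the next load from recent history and act in advance."""
    window = recent_loads[-3:]
    forecast = sum(window) / len(window)      # simple moving average
    if forecast > 0.80:
        return "scale_out"
    if forecast < 0.30:
        return "scale_in"
    return "no_action"

print(reactive_decision(0.92))                    # scale_out
print(proactive_decision([0.70, 0.85, 0.95]))     # forecast 0.83 -> scale_out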

Purpose

 An elastic solution can have many purposes. The first one to come to mind is
naturally performance, in which case the focus should be put on their speed.
Another purpose for elasticity can also be energy efficiency, where using
the minimum amount of resources is the dominating factor. Other
solutions intend to reduce the cost by multiplexing either resource
providers or elasticity methods.

 Elasticity has different purposes such as improving performance,


increasing resource capacity, saving energy, reducing cost and ensuring
availability. Once we look to the elasticity objectives, there are different
perspectives. Cloud IaaS providers try to maximize the profit by
minimizing the resources while offering a good Quality of Service (QoS).
PaaS providers seek to minimize the cost they pay to the cloud.

 The customers (end-users) search to increase their Quality of Experience


(QoE) and to minimize their payments. QoE is the degree of delight or
annoyance of the user of an application or service.

Method

 Vertical elasticity changes the amount of resources linked to existing instances on the
fly. This can be done in two ways.

 The first method consists in explicitly re-dimensioning a virtual machine instance,
i.e., changing the quota of physical resources allocated to it. This is, however, poorly
supported by common operating systems, as they fail to take into account changes in
CPU or memory without rebooting, thus resulting in service interruption.
 The second vertical scaling method involves VM migration: moving a virtual machine
instance to another physical machine with a different overall load changes its
available resources. Vertical scaling is the process of modifying resources (CPU,
memory, storage or both) size for an instance at run time. It gives more flexibility for
the cloud systems to cope with the varying workloads.

 Horizontal scaling is the process of adding/removing instances, which may be


located at different locations. Load balancers are used to distribute the load among the
different instances.
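A minimal sketch of horizontal scaling behind a load balancer follows; a simple round-robin balancer is assumed here purely for illustration.

# Horizontal scaling: add/remove instances, and spread requests across them.
import itertools

class RoundRobinBalancer:
    def __init__(self, instances):
        self.instances = list(instances)
        self._cycle = itertools.cycle(self.instances)

    def route(self, request):
        # Hand each incoming request to the next instance in turn.
        target = next(self._cycle)
        return f"request {request!r} -> {target}"

    def scale_out(self, instance):
        # Horizontal scaling: a new instance simply joins the pool.
        self.instances.append(instance)
        self._cycle = itertools.cycle(self.instances)

lb = RoundRobinBalancer(["vm-1", "vm-2"])
for r in range(3):
    print(lb.route(r))
lb.scale_out("vm-3")
for r in range(3, 6):
    print(lb.route(r))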

ON-DEMAND PROVISIONING:
 Resource Provisioning means the selection, deployment, and run-time
management of software (e.g., database server management systems, load
balancers) and hardware resources (e.g., CPU, storage, and network) for
ensuring guaranteed performance for applications. It is an important and
challenging problem in the large-scale distributed systems such as Cloud
computing environments.

 There are many resource provisioning techniques, both static and dynamic, each
having its own advantages and also some challenges. The resource
provisioning technique used must meet Quality of Service (QoS)
parameters like availability, throughput, response time, security, reliability,
etc., thereby avoiding Service Level Agreement (SLA) violations.

 Over-provisioning and under-provisioning of resources must be avoided.
Another important constraint is power consumption. The ultimate goal of the
cloud user is to minimize cost by renting the resources, and, from the cloud
service provider's perspective, to maximize profit by efficiently allocating the
resources.

 In order to achieve this goal, the cloud user has to request the cloud service
provider to provision the resources either statically or dynamically, so that
the cloud service provider knows how many instances of which resources
are required for a particular application. By provisioning the resources,
the QoS parameters like availability, throughput, security, response time,
reliability, performance, etc. must be achieved without violating the SLA.

 There are two types:

1. Static provisioning:
 For applications that have predictable and generally unchanging
demands/workloads, it is possible to use “static provisioning"
effectively.

 With advance provisioning, the customer contracts with the provider for
services. The provider prepares the appropriate resources in advance of the
start of service. The customer is charged a flat fee or is billed on a monthly
basis.

2. Dynamic Provisioning:
 In cases where demand by applications may change or vary,
“dynamic provisioning" techniques have been suggested whereby
VMs may be migrated on-the-fly to new compute nodes within the
cloud.

 The provider allocates more resources as they are needed and


removes them when they are not. The customer is billed on a pay-
per-use basis. When dynamic provisioning is used to create a
hybrid cloud, it is sometimes referred to as cloud bursting.
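A small sketch contrasting the two billing styles follows: static (advance) provisioning with a flat fee for peak capacity versus dynamic provisioning billed per instance-hour actually used. All prices and usage figures are invented examples.

# Static flat-fee billing vs. dynamic pay-per-use billing (invented numbers).
FLAT_MONTHLY_FEE = 300.0          # static provisioning: pay for peak capacity
HOURLY_RATE = 0.10                # dynamic provisioning: pay per instance-hour

# Instance-hours actually consumed each week of the month (variable workload).
weekly_instance_hours = [200, 180, 900, 220]   # week 3 is a demand spike

dynamic_cost = HOURLY_RATE * sum(weekly_instance_hours)
print(f"Static  (flat fee):    {FLAT_MONTHLY_FEE:.2f}")
print(f"Dynamic (pay-per-use): {dynamic_cost:.2f}")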

Parameters for Resource provisioning:

1. Response time: The resource provisioning algorithm designed must take


minimal time to respond when executing the task.
2. Minimize Cost: From the Cloud user point of view cost should be minimized.
3. Revenue Maximization: This is to be achieved from the Cloud Service
Provider’s view.
4. Fault tolerant: The algorithm should continue to provide service in spite
of failure of nodes.
5. Reduced SLA Violation: The algorithm designed must be able to reduce
SLA violation.
6. Reduced Power Consumption: VM placement & migration techniques must
lower power consumption.

Dynamic provisioning Types:


1. Local On-demand Resource Provisioning
2. Remote On-demand Resource Provisioning
Local On-demand Resource Provisioning
 The Engine for the Virtual Infrastructure - The OpenNebula Virtual Infrastructure
Engine
 OpenNebula creates a distributed virtualization layer
• Extend the benefits of VM Monitors from one to multiple resources
• Decouple the VM (service) from the physical location
 Transform a distributed physical infrastructure into a flexible and elastic
virtual infrastructure, which adapts to the changing demands of the VM (service)
workloads.

Virtualization of Cluster and HPC Systems


Separation of Resource Provisioning from Job Management
• A new virtualization layer between the service and the infrastructure layers
• Seamless integration with the existing middleware stacks
• Completely transparent to the computing service and to end users

BENEFITS:
Benefits for Existing Grid Infrastructures
 The virtualization of the local infrastructure supports a virtualized
alternative to contribute resources to a Grid infrastructure
• Simpler deployment and operation of new middleware distributions
• Lower operational costs
• Easy provision of resources to more than one infrastructure
• Easy support for VO-specific worker nodes
• Performance partitioning between local and grid clusters

Other Tools for VM Management


 VMware DRS, Platform Orchestrator, IBM Director, Novell ZENworks,
Enomalism, Xenoserver
Advantages:
 Open-source (Apache license v2.0)
 Open and flexible architecture to integrate new virtualization technologies
 Support for the definition of any scheduling policy (consolidation, workload balance,
affinity, SLA)
 LRM-like CLI and API for the integration of third-party tools

Remote on-Demand Resource Provisioning
On-demand Access to Cloud Resources
 Supplement local resources with cloud resources to satisfy peak or fluctuating demands

CHALLENGES IN CLOUD COMPUTING:

 Data Protection:
Data protection is a crucial element of security that warrants scrutiny. In the cloud, data is stored in
remote data centers and managed by third-party vendors, so there is a fear of losing confidential data.
Therefore, various cryptographic techniques have to be implemented to protect the confidential data (a
client-side encryption sketch follows this list).
 Data Recovery and Availability:
In the cloud, the user's data is scattered across multiple data centers, so recovering such data is very
difficult: the user never knows the exact location of the data and does not know how to recover it. The
availability of cloud services is closely tied to the downtime of the services, which is specified in the
agreement called the Service Level Agreement (SLA). Therefore, any compromise in the SLA may lead to
increased downtime, lower availability, and harm to your business productivity.
 Regulatory and Compliance Restrictions:
Many countries have compliance restrictions and regulations on the usage of cloud services. Government
regulations in such countries do not allow providers to share customers' personal information and
other sensitive information outside the state or country. In order to meet such requirements, cloud providers
need to set up a data center or a storage site exclusively within that country to comply with regulations.
 Management Capabilities:
The involvement of multiple cloud providers for in-house services may lead to difficulty in management.
 Interoperability and Compatibility Issues:
The services hosted by an organization should have the freedom to migrate in or out of the cloud,
which is very difficult in public clouds. Compatibility issues arise when an organization wants to change
its service provider. Most public clouds provide vendor-dependent APIs for access, and they may have their
own proprietary solutions which are not compatible with other providers.
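One such cryptographic technique for the data-protection challenge above is client-side encryption: data is encrypted before it ever reaches the provider's data center. The sketch below assumes the third-party Python package cryptography is available and deliberately omits key management.

# Encrypt confidential data on the client before uploading it to the cloud,
# using symmetric Fernet encryption from the `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be stored safely by the data owner
cipher = Fernet(key)

confidential = b"customer list and payment records"
ciphertext = cipher.encrypt(confidential)      # this is what gets uploaded
print("Stored in the cloud:", ciphertext[:40], b"...")

restored = cipher.decrypt(ciphertext)          # only the key holder can read it
assert restored == confidential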
