
UNIT 1

INTRODUCTION

1.1 Introduction to Cloud Computing

Cloud computing is the delivery of computing services, including servers, storage, databases, networking, software, analytics and intelligence, over the internet ("the cloud") to offer faster innovation, flexible resources, and economies of scale. Users typically pay only for the cloud services they use, which helps lower operating costs, run infrastructure more efficiently, and scale as business needs change.

Fig 1.1 Cloud Computing Environment

1.1.1 The NIST Definition of Cloud Computing:


Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access
to a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction. This cloud model is composed of five
essential characteristics, three service models (SaaS, PaaS, IaaS), and four deployment
models (Public, Private, Hybrid and Community Cloud). The various essential characteristics
have been given below.
1.1.2 Essential Characteristics:
 On-demand self-service: A consumer can unilaterally provision computing
capabilities, such as server time and network storage, as needed automatically
without requiring human interaction with each service provider.
 Broad network access: Capabilities are available over the network and accessed
through standard mechanisms that promote use by heterogeneous thin or thick client
platforms (e.g., mobile phones, tablets, laptops, and workstations).
 Resource pooling: The provider's computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. There is a
sense of location independence in that the customer generally has no control or
knowledge over the exact location of the provided resources but may be able to
specify location at a higher level of abstraction (e.g., country, state, or datacenter).
Examples of resources include storage, processing, memory, and network bandwidth.
 Rapid elasticity: Capabilities can be elastically provisioned and released, in some
cases automatically, to scale rapidly outward and inward commensurate with demand.
To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be appropriated in any quantity at any time.
 Measured service: Cloud systems automatically control and optimize resource use
by leveraging a metering capability at some level of abstraction appropriate to the
type of service (e.g., storage, processing, bandwidth, and active user accounts).
Resource usage can be monitored, controlled, and reported, providing transparency
for both the provider and consumer of the utilized service.
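
As an illustration of measured service, the short Python sketch below computes a pay-per-use bill from metered usage figures. The resource names and unit prices are invented for the example and are not taken from any real provider's price list.

```python
# Illustrative pay-per-use billing from metered resource usage.
# The resources and unit prices below are hypothetical examples.

usage = {
    "storage_gb_hours": 1200.0,   # metered storage consumption
    "compute_hours": 300.0,       # metered processing time
    "bandwidth_gb": 75.0,         # metered network transfer
}

unit_prices = {
    "storage_gb_hours": 0.0002,   # assumed price per GB-hour
    "compute_hours": 0.05,        # assumed price per compute hour
    "bandwidth_gb": 0.08,         # assumed price per GB transferred
}

# The provider meters each resource and bills only for what was used,
# giving both provider and consumer a transparent view of consumption.
bill = sum(usage[resource] * unit_prices[resource] for resource in usage)
print(f"Total charge for the billing period: ${bill:.2f}")
```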

1.1.3 Examples of Cloud Computing services:

 Dropbox
 Gmail
 Facebook
 Google Drive
 Google App Engine
 Amazon Web Services (AWS) etc.

1.2 Definition of “Cloud”:

 The word "cloud" often refers to the Internet; more precisely, it means a data
center full of servers connected to the Internet performing a service.
 A cloud can be a wide area network (WAN) like the public Internet or a private network
of any size, local or global.

Fig 1.2 The “Cloud”


1.3 Evolution of Cloud Computing:

Fig 1.3 Evolution of Cloud Computing

 Distributed Systems:
A distributed system is a composition of multiple independent systems that are presented to users as a single entity. The purpose of distributed systems is to share resources and use them effectively and efficiently. Distributed systems possess characteristics such as scalability, concurrency, continuous availability, heterogeneity, and independent failures.

Problems: The main problem with this approach was that all the systems were required to be present at the same geographical location.

Solution: To solve this problem, distributed computing led to three further types of computing: mainframe computing, cluster computing, and grid computing.
 Mainframe computing:
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They handle large volumes of data and massive input/output operations. Even today they are used for bulk processing tasks such as online transaction processing.

Problems: These systems offer high fault tolerance and almost no downtime, and after distributed computing they increased the processing capability of the system, but they were very expensive.

Solution: To reduce this cost, cluster computing came as an alternative to mainframe technology.

 Cluster computing:
In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each machine in the cluster was connected to the others by a high-bandwidth network, and clusters were far cheaper than mainframe systems.

Problems: Clusters were equally capable of heavy computation, and new nodes could easily be added when required. The cost problem was therefore solved to some extent, but the problem of geographical restrictions remained.

Solution: To solve this, the concept of grid computing was introduced.

 Grid computing:
In the 1990s, the concept of grid computing was introduced. Different systems, placed at entirely different geographical locations, were connected via the Internet. These systems belonged to different organizations, so the grid consisted of heterogeneous nodes.

Problems: Although it solved some problems, new problems emerged as the distance between the nodes increased. The main problem encountered was the low availability of high-bandwidth connectivity, along with other network-related issues.

Solution: Cloud computing addressed these issues, which is why it is often referred to as the "successor of grid computing".

 Virtualization:
It was introduced nearly 40 years ago. It refers to the process of creating a virtual layer over the hardware that allows the user to run multiple instances simultaneously on the same hardware. It is a key technology used in cloud computing and is the base on which major cloud computing services such as Amazon EC2, VMware vCloud, etc., work. Hardware virtualization is still one of the most common types of virtualization.

 Web 2.0:
It is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have interactive and dynamic web pages. It also increases flexibility among web pages. Popular examples of Web 2.0 include Google Maps, Facebook, Twitter, etc. Needless to say, social media is possible only because of this technology. It gained major popularity in 2004.

 Service orientation:
It acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable applications. Two important concepts were introduced in this computing model: Quality of Service (QoS), which also includes the Service Level Agreement (SLA), and Software as a Service (SaaS).
 Utility computing:
It is a computing model that defines service provisioning techniques for compute services along with other major services such as storage and infrastructure, all of which are provisioned on a pay-per-use basis.
Thus, the above technologies contributed to the making of cloud computing.

1.4 Underlying Principles of Parallel and Distributed Computing

1.4.1 Introduction to Serial Computing:

Traditionally, software has been written for serial computation:

 To be run on a single computer having a single Central Processing Unit.


 A problem is broken into a discrete series of instructions.
 Instructions are executed one after another.
 Only one instruction may execute at any moment in time.

Fig 1.4 Serial Computing
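
As a concrete illustration of serial computation, the minimal Python sketch below processes its work items one after another on a single processor. The process() function is a made-up stand-in for some CPU-bound work.

```python
import time

def process(item):
    """A stand-in for some CPU-bound work on one item (illustrative only)."""
    return sum(i * i for i in range(item))

items = [200_000, 300_000, 400_000, 500_000]

start = time.perf_counter()
# Serial execution: the instructions run one after another on a single
# CPU, so the items are processed strictly in order.
results = [process(item) for item in items]
print(f"Serial run took {time.perf_counter() - start:.2f} s")
```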

1.4.2 Parallel Computing:

In the simplest sense, Parallel Computing is the simultaneous use of multiple compute
resources to solve a computational problem:

• A problem is broken into discrete parts that can be solved concurrently


• Each part is further broken down to a series of instructions

• Instructions from each part execute simultaneously on different processors

• An overall control/coordination mechanism is employed

Fig 1.5 Parallel Computing
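
A parallel version of the same idea breaks the problem into discrete parts and executes them simultaneously on different processors. The sketch below uses Python's standard multiprocessing.Pool as the overall coordination mechanism and reuses the illustrative process() function from the serial example above.

```python
from multiprocessing import Pool
import time

def process(item):
    """The same illustrative CPU-bound work as in the serial example."""
    return sum(i * i for i in range(item))

if __name__ == "__main__":
    items = [200_000, 300_000, 400_000, 500_000]

    start = time.perf_counter()
    # The problem is broken into discrete parts (one per item); the parts
    # execute simultaneously on different processor cores, and the Pool
    # acts as the control/coordination mechanism that gathers the results.
    with Pool(processes=4) as pool:
        results = pool.map(process, items)
    print(f"Parallel run took {time.perf_counter() - start:.2f} s")
```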

1.4.3 Why we use Parallel Computing:

 Save time and/or money
 Solve larger / more complex problems
 Provide concurrency
 Take advantage of non-local resources
 Make better use of underlying parallel hardware

1.4.4 Terminologies used in Parallel Computing:


Some of the more commonly used terms associated with parallel computing are listed below.

a) Supercomputing / High Performance Computing (HPC):


Using the world's fastest and largest computers to solve large problems.
b) Node:

A standalone "computer in a box", usually comprised of multiple CPUs/processors/cores, memory, network interfaces, etc. Nodes are networked together to comprise a supercomputer.

c) CPU / Socket / Processor / Core:


 This varies, depending upon the task.
 In the past, a CPU (Central Processing Unit) was a singular execution component for a
computer.
 Then, multiple CPUs were incorporated into a node.
 Then, individual CPUs were subdivided into multiple "cores", each being a unique
execution unit.

d) Task:

 A logically discrete section of computational work.


 A task is typically a program or program-like set of instructions that is executed by a
processor.
 A parallel program consists of multiple tasks running on multiple processors.

e) Pipelining:

Breaking a task into steps performed by different processor units, with inputs streaming through, much like an assembly line.
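
The idea can be sketched with Python generators, where each stage of the assembly line consumes the stream produced by the previous stage. The stages themselves are made-up examples.

```python
# A minimal pipelining sketch: each stage is a generator that consumes the
# stream produced by the previous stage, much like an assembly line.

def read_values(n):
    for i in range(n):
        yield i                    # stage 1: produce raw inputs

def square(stream):
    for x in stream:
        yield x * x                # stage 2: transform each item

def accumulate(stream):
    total = 0
    for x in stream:
        total += x                 # stage 3: combine the results
        yield total

# Items flow through the three stages one after another; on pipelined
# hardware each stage would run on a different processor unit concurrently.
for running_total in accumulate(square(read_values(5))):
    print(running_total)
```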

f) Shared Memory:

 From a strictly hardware point of view, describes a computer architecture where all
processors have direct (usually bus based) access to common physical memory.
 In a programming sense, it describes a model where parallel tasks all have the same
"picture" of memory and can directly address and access the same logical memory
locations regardless of where the physical memory actually exists.
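
As a programming-model illustration, the sketch below uses Python's multiprocessing.Value so that several worker processes address the same logical memory location. It is a minimal example of the shared memory model, not a description of any particular hardware.

```python
from multiprocessing import Process, Value

def worker(counter, increments):
    """Every task addresses the same logical memory location (the counter)."""
    for _ in range(increments):
        with counter.get_lock():   # serialize access to the shared location
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)        # an integer placed in shared memory
    procs = [Process(target=worker, args=(counter, 10_000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)           # 40000: all tasks updated the same memory
```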

g) Symmetric Multi-Processor (SMP):

Shared memory hardware architecture where multiple processors share a single address
space and have equal access to all resources.

h) Distributed Memory:

In hardware, refers to network-based memory access for physical memory that is not
common. As a programming model, tasks can only logically "see" local machine memory and
must use communications to access memory on other machines where other tasks are
executing.

i) Communications:

Parallel tasks typically need to exchange data. There are several ways this can be
accomplished, such as through a shared memory bus or over a network; however, the actual
event of data exchange is commonly referred to as communications regardless of the method
employed.
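
In the distributed memory model, such communications are usually expressed as message passing. The sketch below uses Python's multiprocessing.Queue so that one task sends its results to another instead of sharing memory; it is a minimal illustration rather than a full message-passing library such as MPI.

```python
from multiprocessing import Process, Queue

def producer(queue):
    """This task computes partial results and communicates them explicitly."""
    for part in range(5):
        queue.put(part * part)     # send data to the other task
    queue.put(None)                # sentinel: no more data will follow

def consumer(queue):
    """This task cannot see the producer's memory; it must receive messages."""
    total = 0
    while True:
        item = queue.get()
        if item is None:
            break
        total += item
    print(f"Sum received via communications: {total}")

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```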

j) Synchronization:

 The coordination of parallel tasks in real time, very often associated with
communications.
 Often implemented by establishing a synchronization point within an application where
a task may not proceed further until another task(s) reaches the same or logically
equivalent point.
 Synchronization usually involves waiting by at least one task and can therefore cause
a parallel application's wall clock execution time to increase.
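
A synchronization point of this kind can be expressed with a barrier. The Python sketch below uses threading.Barrier so that no task proceeds past the marked point until all tasks have reached it; the sleep times simply simulate tasks finishing their work at different moments.

```python
import random
import threading
import time

NUM_TASKS = 4
barrier = threading.Barrier(NUM_TASKS)    # the synchronization point

def task(task_id):
    # Each task takes a different amount of time to reach the barrier.
    time.sleep(random.uniform(0.1, 0.5))
    print(f"task {task_id} reached the synchronization point")
    barrier.wait()                         # wait until every task arrives
    print(f"task {task_id} proceeding past the barrier")

threads = [threading.Thread(target=task, args=(i,)) for i in range(NUM_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note that tasks arriving early sit idle at the barrier, which is exactly the waiting that can increase a parallel application's wall clock execution time.
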
k) Parallel Overhead:

The amount of time required to coordinate parallel tasks, as opposed to doing useful work.
Parallel overhead can include factors such as:

 Task start-up time


 Synchronizations
 Data communications
 Software overhead imposed by parallel languages, libraries, operating system, etc.
 Task termination time

1.5 Cloud Characteristics

Cost: Cloud computing eliminates the capital expense of buying hardware and software and of setting up and running on-site datacenters: the racks of servers, the round-the-clock electricity for power and cooling, and the IT experts for managing the infrastructure. It adds up fast.

Speed: Most cloud computing services are provided self-service and on demand, so even
vast amounts of computing resources can be provisioned in minutes, typically with just a few
mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity
planning.

Global scale: The benefits of cloud computing services include the ability to scale elastically.
In cloud speak, that means delivering the right amount of IT resources—for example,
computing power, storage, bandwidth—right when it is needed and from the right geographic
location.

Productivity: On-site data centres typically require a lot of "racking and stacking": hardware
setup, software patching, and other time-consuming IT management chores. Cloud
computing removes the need for many of these tasks, so IT teams can spend time on
achieving more important business goals.
Performance: The biggest cloud computing services run on a worldwide network of secure
data centers, which are regularly upgraded to the latest generation of fast and efficient
computing hardware. This offers several benefits over a single corporate datacenter,
including reduced network latency for applications and greater economies of scale.

Reliability: Cloud computing makes data backup, disaster recovery and business continuity
easier and less expensive because data can be mirrored at multiple redundant sites on the
cloud provider's network.

Security: Many cloud providers offer a broad set of policies, technologies and controls that
strengthen your security posture overall, helping protect your data, apps and infrastructure
from potential threats.

1.6 Elasticity in Cloud:


 In cloud computing, elasticity is defined as "the degree to which a system is able to adapt
to workload changes by provisioning and de-provisioning resources in an autonomic
manner, such that at each point in time the available resources match the current
demand as closely as possible".

 The purpose of elasticity is to match the resources allocated with the actual amount of
resources needed at any given point in time.
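
The toy autoscaling loop below illustrates the idea in Python: it provisions and de-provisions instances so that the allocated capacity tracks the observed demand. The capacity per instance, the limits and the demand figures are invented for the example and do not correspond to any particular cloud provider.

```python
# A toy elasticity loop: provision or de-provision capacity so that the
# allocated resources track current demand. All numbers are illustrative.

REQUESTS_PER_INSTANCE = 100          # assumed capacity of a single instance
MIN_INSTANCES, MAX_INSTANCES = 1, 20

def desired_instances(current_demand):
    """How many instances are needed for the current workload."""
    needed = -(-current_demand // REQUESTS_PER_INSTANCE)   # ceiling division
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

instances = 1
# A hypothetical stream of observed demand (requests per second).
for demand in [80, 250, 900, 1500, 600, 120]:
    target = desired_instances(demand)
    if target > instances:
        print(f"demand={demand}: scaling out from {instances} to {target} instances")
    elif target < instances:
        print(f"demand={demand}: scaling in from {instances} to {target} instances")
    instances = target
```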

1.7 On-Demand Provisioning:

1.7.1 Cloud Provisioning:
 In general, provisioning means "providing" or making something available.

 Cloud provisioning refers to the “Processes for the deployment and integration of
cloud computing services within an enterprise IT infrastructure.”
 Cloud provisioning primarily defines how, what and when an organization will provision
(provide) cloud services. These services can be internal, public or hybrid cloud
products and solutions.

 For example, the creation of virtual machines, the allocation of storage capacity and/or
granting access to cloud software.

1.7.2 Types of Provisioning:

Provisioning can be categorized as follows:

1. Over-Provisioning:

Over-provisioning means allocating more cloud resources than are actually used, which represents a zero-ROI (return on investment) expense.

Over-provisioning of cloud resources has, in the absence of other choices, become widespread. The result is that many organizations are investing in cloud resources they simply do not use, and unused resources produce, of course, a return on investment (ROI) of exactly zero.

2. Under Provisioning:

Under-provisioning, i.e., allocating fewer resources than required, must be avoided; otherwise the service cannot serve its users well. An under-provisioned website, for example, may seem slow or unreachable.

1.7.3 On-Demand Provisioning:

 On-demand provisioning is a delivery model in which computing resources are made available
to the user as needed. The resources may be maintained within the user's enterprise or made
available by a cloud service provider.
 The customer or requesting application is provided with resources on run time.
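
As a concrete sketch of on-demand provisioning, the Python snippet below requests a virtual machine at run time using the AWS boto3 SDK. The region, image ID and instance type are placeholder values chosen for illustration; a real account would need valid credentials and its own parameters, and other providers expose equivalent APIs.

```python
import boto3

# Request a virtual machine on demand. The region, image ID and instance
# type below are placeholders for illustration, not recommendations.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",           # placeholder instance type
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned instance {instance_id} on demand")
```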
