UNIT 1
INTRODUCTION
Familiar examples of cloud computing services include:
Dropbox
Gmail
Facebook
Google Drive
Google App Engine
Amazon Web Services (AWS), etc.
The word "cloud" often refers to the Internet; more precisely, it means a data
center full of servers connected to the Internet and performing a service.
A cloud can be a wide area network (WAN) like the public Internet or a private network
of any size, local or global.
Distributed Systems:
A distributed system is a composition of multiple independent systems that are presented
to users as a single entity. The purpose of distributed systems is to share resources and
to use them effectively and efficiently. Distributed systems possess characteristics such as
scalability, concurrency, continuous availability, heterogeneity, and independence of failures.
Problems: The main problem with this approach was that all the systems had to be
present at the same geographical location.
Solution: To solve this problem, distributed computing gave rise to three further types of
computing: mainframe computing, cluster computing, and grid computing.
Mainframe computing:
Mainframes, which first came into existence in 1951, are highly powerful and reliable
computing machines. They are responsible for handling large workloads with massive
input/output operations, and even today they are used for bulk-processing tasks such as
online transactions.
Problems: These systems have almost no downtime and high fault tolerance, and they
increased processing capability well beyond earlier systems; however, they were very
expensive.
Cluster computing:
In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each
machine in the cluster was connected to the others by a high-bandwidth network, and
clusters were far cheaper than mainframe systems while remaining capable of heavy
computation. New nodes could also be added to the cluster easily when required.
Problems: Although the cost problem was solved to some extent, the problem of
geographical restriction still remained.
Grid computing:
In the 1990s, the concept of grid computing was introduced: systems placed at entirely
different geographical locations were connected via the Internet. These systems belonged
to different organizations, so the grid consisted of heterogeneous nodes.
Problems: Although grid computing solved some problems, new ones emerged as the
distance between the nodes increased. The main problem encountered was the low
availability of high-bandwidth connectivity, along with other network-related issues.
Virtualization:
Virtualization was introduced nearly 40 years ago. It refers to the process of creating a
virtual layer over the hardware that allows the user to run multiple instances
simultaneously on the same hardware. It is a key technology in cloud computing and the
base on which major cloud computing services such as Amazon EC2 and VMware vCloud
run. Hardware virtualization is still one of the most common types of virtualization.
Web 2.0:
Web 2.0 is the interface through which cloud computing services interact with clients. It is
because of Web 2.0 that we have interactive and dynamic web pages, and it increases
flexibility among web pages. Popular examples of Web 2.0 include Google Maps, Facebook,
and Twitter. Needless to say, social media is possible only because of this technology. It
gained major popularity in 2004.
Service orientation:
Service orientation acts as a reference model for cloud computing. It supports low-cost,
flexible, and evolvable applications. Two important concepts were introduced in this
computing model: Quality of Service (QoS), which includes the Service Level Agreement
(SLA), and Software as a Service (SaaS).
Utility computing:
Utility computing is a computing model that defines service-provisioning techniques for
services such as compute, storage, and infrastructure, which are provisioned on a
pay-per-use basis.
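The pay-per-use idea can be sketched as a simple metered-billing calculation. The rates and usage figures below are illustrative assumptions, not real provider prices:

```python
# Toy pay-per-use bill calculator for utility computing.
# All rates are made-up example values, not any real provider's pricing.

def utility_bill(cpu_hours, gb_stored, gb_transferred,
                 cpu_rate=0.05, storage_rate=0.02, transfer_rate=0.01):
    """Charge only for what was actually consumed, as in utility computing."""
    return round(cpu_hours * cpu_rate
                 + gb_stored * storage_rate
                 + gb_transferred * transfer_rate, 2)

# A customer who used 100 CPU-hours, stored 50 GB, and transferred 20 GB:
print(utility_bill(100, 50, 20))  # 100*0.05 + 50*0.02 + 20*0.01 = 6.2
```

The key design point is that the bill is a function of measured consumption alone; an idle customer pays nothing.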
Thus, the above technologies contributed to the making of cloud computing.
In the simplest sense, Parallel Computing is the simultaneous use of multiple compute
resources to solve a computational problem:
d) Task: A logically discrete section of computational work; typically a program or
program-like set of instructions that is executed by a processor.
e) Pipelining:
Breaking a task into steps performed by different processor units, with inputs streaming
through, much like an assembly line;
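The assembly-line idea above can be sketched with two worker threads connected by queues; each stage processes an item and passes it to the next. The stage functions (double, then increment) are arbitrary examples:

```python
# Sketch of a two-stage pipeline: inputs stream through the stages
# like parts on an assembly line.
import threading
import queue

SENTINEL = object()  # marks the end of the input stream

def stage(in_q, out_q, fn):
    """Each stage repeatedly takes an item, processes it, and passes it on."""
    while True:
        item = in_q.get()
        if item is SENTINEL:
            out_q.put(SENTINEL)  # propagate shutdown to the next stage
            return
        out_q.put(fn(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(q1, q2, lambda x: x * 2)),  # stage 1: double
    threading.Thread(target=stage, args=(q2, q3, lambda x: x + 1)),  # stage 2: increment
]
for t in threads:
    t.start()

for x in [1, 2, 3]:  # inputs stream through one after another
    q1.put(x)
q1.put(SENTINEL)

results = []
while (item := q3.get()) is not SENTINEL:
    results.append(item)
for t in threads:
    t.join()
print(results)  # [3, 5, 7]
```

Because the stages run concurrently, stage 1 can already be doubling the second input while stage 2 is still incrementing the first.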
f) Shared Memory:
From a strictly hardware point of view, describes a computer architecture where all
processors have direct (usually bus based) access to common physical memory.
In a programming sense, it describes a model where parallel tasks all have the same
"picture" of memory and can directly address and access the same logical memory
locations regardless of where the physical memory actually exists.
In the simplest shared-memory hardware architecture, multiple processors share a single
address space and have equal access to all resources.
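In the programming sense described above, shared memory means parallel tasks can address the same logical memory locations directly. A minimal sketch using Python threads (which share one address space within a process):

```python
# Sketch of the shared-memory programming model: every thread sees and
# writes the same logical memory location (the list below) directly.
import threading

shared = []                # one data structure, visible to all threads
lock = threading.Lock()

def worker(n):
    for i in range(n):
        with lock:         # serialize access to the common memory location
            shared.append(i)

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 400 -- every thread wrote directly into the same structure
```

No messages were sent; the threads simply addressed the same memory, which is exactly what distinguishes this model from distributed memory.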
h) Distributed Memory:
In hardware, refers to network-based memory access for physical memory that is not
common. As a programming model, tasks can only logically "see" local machine memory and
must use communications to access memory on other machines where other tasks are
executing.
i) Communications:
Parallel tasks typically need to exchange data. There are several ways this can be
accomplished, such as through a shared-memory bus or over a network; however, the actual
event of data exchange is commonly referred to as communications, regardless of the
method employed.
j) Synchronization:
The coordination of parallel tasks in real time, very often associated with
communications.
It is often implemented by establishing a synchronization point within an application,
where a task may not proceed further until another task (or tasks) reaches the same or a
logically equivalent point.
Synchronization usually involves waiting by at least one task and can therefore cause
a parallel application's wall-clock execution time to increase.
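The synchronization-point idea can be sketched with a barrier: no task proceeds past it until every task has arrived, so faster tasks wait (which is exactly the wall-clock cost mentioned above). The task names and sleep times are illustrative:

```python
# Sketch of a synchronization point: a barrier holds back every thread
# until all of them have reached it.
import threading
import time

barrier = threading.Barrier(3)
order = []  # records events; list.append is atomic under CPython's GIL

def task(name, work_time):
    time.sleep(work_time)            # simulate unequal amounts of work
    order.append(name + " arrived")
    barrier.wait()                   # faster tasks wait here for the slowest
    order.append(name + " passed")

threads = [threading.Thread(target=task, args=("t%d" % i, i * 0.05))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every "arrived" event precedes every "passed" event:
print(order)
```

The waiting time of the two faster tasks is pure synchronization overhead; it does no useful work but is required for correctness.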
k) Parallel Overhead:
The amount of time required to coordinate parallel tasks, as opposed to doing useful work.
Parallel overhead can include factors such as task start-up time, synchronizations, data
communications, software overhead imposed by parallel languages, libraries, and the
operating system, and task termination time.
Benefits of Cloud Computing:
Cost: Cloud computing eliminates the capital expense of buying hardware and software and
of setting up and running on-site datacenters: the racks of servers, the round-the-clock
electricity for power and cooling, and the IT experts for managing the infrastructure. It
adds up fast.
Speed: Most cloud computing services are provided self service and on demand, so even
vast amounts of computing resources can be provisioned in minutes, typically with just a few
mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity
planning.
Global scale: The benefits of cloud computing services include the ability to scale elastically.
In cloud speak, that means delivering the right amount of IT resources—for example,
computing power, storage, bandwidth—right when it is needed and from the right geographic
location.
Productivity: On-site datacenters typically require a lot of "racking and stacking": hardware
setup, software patching, and other time-consuming IT management chores. Cloud
computing removes the need for many of these tasks, so IT teams can spend time on
achieving more important business goals.
Performance: The biggest cloud computing services run on a worldwide network of secure
data centers, which are regularly upgraded to the latest generation of fast and efficient
computing hardware. This offers several benefits over a single corporate datacenter,
including reduced network latency for applications and greater economies of scale.
Reliability: Cloud computing makes data backup, disaster recovery and business continuity
easier and less expensive because data can be mirrored at multiple redundant sites on the
cloud provider's network.
Security: Many cloud providers offer a broad set of policies, technologies and controls that
strengthen your security posture overall, helping protect your data, apps and infrastructure
from potential threats.
The purpose of elasticity is to match the resources allocated with the actual amount of
resources needed at any given point in time.
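A toy autoscaler shows this matching in action: as measured demand rises and falls, the number of allocated servers follows it. The per-server capacity of 100 requests/s is an illustrative assumption:

```python
# Sketch of elasticity: allocate just enough servers for the current load.
# The capacity figure is a made-up assumption for illustration.
import math

def servers_needed(current_load, capacity_per_server=100):
    """Match allocation to demand, never dropping below one server."""
    return max(1, math.ceil(current_load / capacity_per_server))

for load in [50, 400, 90]:  # demand rises, then falls again
    print(load, "requests/s ->", servers_needed(load), "server(s)")
```

Scaling out at 400 requests/s and back in at 90 is the point: resources track demand instead of being fixed at peak capacity.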
1.7.1 Cloud Provisioning:
In general, provisioning means "providing" or making something available.
Cloud provisioning refers to the “Processes for the deployment and integration of
cloud computing services within an enterprise IT infrastructure.”
Cloud provisioning primarily defines how, what and when an organization will provision
(provide) cloud services. These services can be internal, public or hybrid cloud
products and solutions.
For example, the creation of virtual machines, the allocation of storage capacity and/or
granting access to cloud software.
Types of Provisioning:
1. Over-Provisioning: Allocating more resources than the workload actually needs;
capacity is guaranteed, but idle resources waste money.
2. Under-Provisioning: Allocating fewer resources than the workload needs; costs are
lower, but performance can degrade or services can fail when demand peaks.
On-demand provisioning is a delivery model in which computing resources are made available
to the user as needed. The resources may be maintained within the user's enterprise or made
available by a cloud service provider.
The customer or requesting application is provided with resources at run time.
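On-demand provisioning can be sketched as a provisioner that creates a resource only at the moment it is requested, rather than pre-allocating a fixed pool. The `VirtualMachine` class and its numeric IDs are stand-ins for illustration, not a real cloud API:

```python
# Sketch of on-demand provisioning: a VM exists only once someone asks for it.
# VirtualMachine and the provisioner interface are hypothetical stand-ins.

class VirtualMachine:
    def __init__(self, vm_id):
        self.vm_id = vm_id

class OnDemandProvisioner:
    def __init__(self):
        self._vms = {}       # nothing is pre-allocated
        self._next_id = 0

    def request(self, user):
        """Provision a VM at run time, the moment it is asked for."""
        if user not in self._vms:
            self._vms[user] = VirtualMachine(self._next_id)
            self._next_id += 1
        return self._vms[user]

prov = OnDemandProvisioner()
vm = prov.request("alice")           # created now, on demand
print(vm.vm_id)                      # 0
print(prov.request("alice").vm_id)   # 0 -- same VM reused, not re-provisioned
print(prov.request("bob").vm_id)     # 1 -- a second VM, created when requested
```

Before the first `request` call, the provisioner holds no resources at all, which is the defining property of the on-demand model.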