
CLOUD COMPUTING

UNIT – I
Cloud Computing Foundation : Introduction to Cloud Computing – Move to Cloud Computing –
Types of Cloud – Working of Cloud Computing. Cloud Computing Architecture : Cloud Computing
Technology – Cloud Architecture – Cloud Modeling and Design.

INTRODUCTION TO CLOUD COMPUTING

CLOUD COMPUTING BASICS



Cloud computing is Internet-based computing.

It evolved from grid computing, utility computing, parallel computing, distributed computing and
virtualization.

It offers a powerful computing infrastructure drawing on a pool of thousands of computers and servers.

It provides computational resources such as servers, storage, software, memory and networks as
on-demand services.

It helps reduce the computing infrastructure investment and maintenance costs of the IT required by
Small and Medium-scale Enterprises (SMEs).

It provides Everything (X) as a Service (XaaS) where ‘X’ denotes software, OS, server, hardware,
storage, etc.

Cloud services scale up and down based on users’ demand.

The cloud has multiple data centres placed in different geographical locations around the world to provide
reliable services to users.

It provides unlimited service provisioning without any human intervention.

Cloud automates service provisioning by running a number of Application Programming Interfaces
(APIs) in the cloud storage environment.

Definition
Cloud computing is a model for enabling convenient, on-demand network access to a shared
pool of configurable computing resources such as networks, servers, storage, applications and other
services.

History of Cloud Computing


Cloud computing comprises various phases, which include grid and utility computing,
application service providers (ASP) and Software as a Service (SaaS). The concept of
delivering computing resources through a universal network dates back to the 1960s, when computer
scientist John McCarthy predicted that computation would one day be available as a public utility.
Since then, cloud computing has evolved along a number of lines, Web 2.0 being the most recent
development. Salesforce.com, which arrived in 1999, was the first notable cloud computing company;
it pioneered the idea of delivering enterprise applications through a simple website.
The concepts behind cloud computing are not new, but they are essential to current
computing trends.
The beginning of what is recognized as the concept of cloud computing can be traced back to
the mainframe days of the 1960s, when the idea of ‘utility computing’ was propounded by MIT
computer scientist and Turing Award winner John McCarthy. Utility computing ended up being
pursued mainly by large businesses such as IBM. The concept was simple: computing power could
be broken down and sold as a metered service, similar to how telephone companies bill
their consumers.

The other major constraint on utility computing, which limited the growth of personal computer
usage under this model, was the technical restriction on bandwidth and disk space.

Essentially, Amazon is far more than a company that specializes in retail. Its contribution to cloud
computing will be discussed shortly in a profile of companies using cloud technology, but it is clear
to any IT expert that Amazon was one of the first companies to build on technical innovation,
particularly after the dot-com bubble.

Online Computing business needs the following


Dynamism
It is quite simple, similar to how you use your mobile phone connection. If you want to talk
more, you buy a top-up card; if you are a post-paid customer, you change your plan to meet
your requirement. Your needs are dynamic, so your infrastructure should support your
changing needs.
Abstraction
From an end user’s point of view, they do not have to worry about the OS, the plug-ins, web
security or the software platform: everything is in its place.
The business/consumer should focus more on its core competency rather than
worrying over secondary resources such as the OS or the software.
Resource Sharing
The whole architecture should be implemented such that it provides a flexible
environment in which applications and other network resources can be shared.
This provides a need-based elastic architecture, where resources can grow
without any major configuration modifications.

The Solution
There is one model of computing which satisfies the three business requirements mentioned above
and is becoming the technology trend of the future: it is known as cloud computing. Have
you ever used cloud computing? Most of you will answer in the negative.
In fact, you are already on the cloud. E-mail services such as Gmail, Yahoo and Hotmail are cloud-based
examples of SaaS (Software as a Service). SaaS is one piece of cloud computing.
Cloud is an acronym of the phrase: Common, Location-independent, Online Utility that is
available on Demand.
IT professionals recognize eight basic components that are very important in
enabling the cloud computing concept (Figure 1.1), whether the cloud operates in the public or private
sector. They are as follows:
1. Worldwide connectivity: Users should have near-ubiquitous access to the Internet.
2. Open access: Users should have fair, non-discriminatory access to the Internet.
3. Reliability: The cloud’s performance should be equal to or better than that of current standalone
systems.
4. Interoperability and user choice: Users must be able to move among different clouds.
5. Security: It should ensure that users’ data are safe.

Figure 1.1 Basic Components of Cloud Computing


6. Privacy: Users’ rights must be clearly defined, and access must be granted based on those rights.
7. Economic value: The cloud must provide substantial savings and benefits.
8. Sustainability: The cloud must increase power efficiency and reduce environmental
impact.

Characteristics of Cloud Computing


o The server is the most important element in cloud computing.
o It plays a vital role, since it is the brain behind the entire processing environment.
o In cloud computing, the server environment need not be high-end hardware.
The cloud has five essential characteristics which distinguish it from other forms of
computing.
1. On-Demand Self-Service: It enables users to use cloud computing resources without human
interaction between the users and the Cloud Service Providers (CSPs). Instant usage of
resources and the elimination of human intervention provide efficiencies and cost savings to both the
users and the CSPs.
2. Broad Network Access: Cloud computing is an efficient and effective replacement for in-house
data centres. High bandwidth communication links must be available to connect to the cloud
services. High-bandwidth network communication provides access to a large pool of computing
resources.
3. Location-Independence and Resource Pooling: Computing resources are pooled to serve
multiple users using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to users’ demand. Applications require
resources, but these resources can be physically located anywhere and assigned as virtual
components whenever they are needed. There is a sense of location independence in that the
users generally have no control over or knowledge of the exact location of the provided
resources. At the same time, location may be specified at a higher level of abstraction
(e.g., country, state or data centre).
4. Scalability: It enables new nodes, such as physical servers, to be added to or dropped from the
network with limited modifications to the infrastructure set-up and software. Cloud architecture
can scale horizontally or vertically, according to users’ demand.
5. Measured Service: The usage of cloud resources is monitored by APIs in the
cloud, and users are billed automatically based on their usage. Resource usage can
be monitored, controlled and reported, providing transparency for both the CSPs and the
users of the service.
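As a rough illustration of this metering idea, here is a minimal sketch in Python that tracks per-user consumption and derives a bill from it. The resource names and rates are invented for this example; real providers publish their own billing units and prices.

# Hypothetical sketch of the 'measured service' characteristic: resource
# usage is metered per user and billed automatically. Rates are invented.
RATES = {"cpu_hours": 0.05, "storage_gb_hours": 0.001, "bandwidth_gb": 0.09}

class UsageMeter:
    def __init__(self):
        self.usage = {}  # user -> {resource: amount consumed}

    def record(self, user, resource, amount):
        # Called by the monitoring API each time a resource is consumed.
        self.usage.setdefault(user, {res: 0.0 for res in RATES})
        self.usage[user][resource] += amount

    def bill(self, user):
        # Compute the charge transparently from the metered usage.
        return sum(RATES[res] * amt for res, amt in self.usage[user].items())

meter = UsageMeter()
meter.record("alice", "cpu_hours", 120)              # 120 CPU-hours
meter.record("alice", "storage_gb_hours", 50 * 720)  # 50 GB held for 720 hours
meter.record("alice", "bandwidth_gb", 15)            # 15 GB transferred
print(f"alice owes ${meter.bill('alice'):.2f}")      # alice owes $43.35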

What Cloud Computing Really Is


In simple terms, what cloud computing represents is huge: it enables small organizations
to compete with much larger ones, and it helps save a lot of money and use energy more efficiently
in operations.
Cloud computing as it relates to Internet technology is all around us, for example, in accessing e-mail
and searching for information on the world wide web.
In these examples, processing power that exists in distant locations, unknown to the users, is
being used. Indeed, a network connection these days is essential for performing even
basic applications.
One of the biggest benefits is storage: large numbers of servers provide massive amounts
of storage, and one major benefit of this is data loss prevention.
Low data loss in transactions is a main feature of cloud computing that attracts
potential clients.
What Cloud Computing Really Isn’t?
 Not a data centre, although data centres can certainly play a part. Users can deploy an environment
that supports cloud computing in a data centre, but cloud computing is not about data centres.
 Not simply client/server computing, although cloud computing incorporates basic client/server
concepts. With cloud computing, we have a generalized resource against which one can
initiate work, and that resource could form part of a client/server application. Cloud computing
has more autonomy.
 Not grid computing, but again users can use cloud computing environments to support grid
computing.
 Not a comeback of mainframes, mini systems or centralized computing: cloud computing
generally involves multiple computers rather than one and adjusts computing power
according to user needs.

MOVE TO CLOUD COMPUTING
PROS AND CONS OF CLOUD COMPUTING
Cloud computing can enable a constant flow of information between service providers and end
users.

Advantages of Cloud Computing


Cost reduction: Cloud computing reduces paperwork and business transaction costs and minimizes the
financial investment in hardware. Moving your enterprise to ‘the cloud’ also reduces the need
for staff.
Scalability: Cloud computing services allow enterprises to pay only for what they use, as with
electricity and water. As the business grows, users can keep pace by adding more server space.
Easier collaboration: Because cloud computing services can be accessed at any time from any computer,
it is easy to work together with employees in remote locations.
Affordable: With cloud computing, it is possible to reduce operational costs and capital
expenditure on hardware, software licences and implementation services.
Scalable and flexible: Cloud computing lets you maximize resources for better efficiency
and reduce unused capacity. It can also scale up or down to meet the varying demands of the
business.
Efficiency: Cloud computing offers the benefits of shared hardware and automated, familiar
technologies. Employees can access the database from anywhere using any PC,
mobile device or browser. It also reduces overall energy usage and physical presence.
Interconnectivity: The interconnectivity of computer servers is one of the main advantages of cloud
computing. This characteristic allows an organization to carry out a variety of tasks in different
locations. Cloud computing can therefore facilitate proper management of information technology
resources within the organization.
Outsourcing: Cloud computing allows outsourcing of a key function of the company’s work portfolio.
Just like the call centres that have relocated to cheaper environments, implementing a cloud computing
project can significantly reduce your IT budget. It is costlier in terms of staff supervision,
but overall you will be able to save in the long run.
Figure 2.1 Advantages of Cloud Computing
Figure 2.1 shows the merits of cloud computing and states the importance of migrating to cloud
computing, as it supports/provides the following benefits:
 Online storage
 Accessible in different platforms
 Using online resources
 Online collaboration and
 Easy outsourcing processes
Initially, it is important to recognize that cloud computing is expected to bring in a higher level of
automation than the ordinary systems of communication between the various sections of an
organization.

Figure 2.2 Key Elements of Cloud Computing

A good cloud computing package must only charge you for the services that you use. Figure 2.2
shows some key elements of cloud computing, without which computing cannot be established.
Elements are divided into four layers.
Layer 1 contains the physical machines, where the required software and operating systems are
installed.
Layer 2 forms the virtual machines. Layer 3 contains the service-level agreements (SLA) and the resource
allocator for the virtual machines (VMs); this layer also accounts for each job, prices it and dispatches
it to a VM. Layer 4 contains the users or brokers using the cloud.
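To make the Layer-3 role concrete, here is a minimal Python sketch of a resource allocator that checks a job against an SLA limit, prices it and dispatches it to the least-loaded virtual machine. The class names, SLA limit and rate are illustrative assumptions, not part of any real cloud stack.

from dataclasses import dataclass, field

@dataclass
class VirtualMachine:            # Layer 2: a VM carved from a physical machine
    name: str
    jobs: list = field(default_factory=list)

@dataclass
class Job:                       # submitted by a Layer-4 user or broker
    owner: str
    cpu_hours: float

class ResourceAllocator:         # Layer 3: SLA check, pricing, dispatch
    def __init__(self, vms, rate_per_cpu_hour=0.05, sla_max_cpu_hours=1000):
        self.vms = vms
        self.rate = rate_per_cpu_hour
        self.sla_max = sla_max_cpu_hours

    def submit(self, job):
        if job.cpu_hours > self.sla_max:
            raise ValueError("job exceeds the SLA limit")
        price = job.cpu_hours * self.rate              # account for and price the job
        vm = min(self.vms, key=lambda v: len(v.jobs))  # pick the least-loaded VM
        vm.jobs.append(job)                            # dispatch the job
        return vm.name, price

allocator = ResourceAllocator([VirtualMachine("vm-1"), VirtualMachine("vm-2")])
print(allocator.submit(Job("broker-7", cpu_hours=12)))  # chosen VM and price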
Upgraded Software: Software provided online is upgraded and maintained by the provider, so that
small business owners do not have to purchase the newest version of a software program or download
fixes and patches. Not having to buy a program outright but entering into a monthly or annual contract is
also attractive, as is the fact that several applications are offered for free.

Disadvantages of Cloud Computing


The main troubles with cloud computing relate to the loss of control to another party.
This can lead to management problems and inconsistency within information technology
departments. Industries that transact sensitive data will be anxious about security when it comes to
cloud computing.
Security concerns: The main concern with cloud computing is having your data easily reachable via
the web. Although security is tight and getting even more advanced as technology providers
perfect the framework, it is still an anxiety.
Risk of losing Internet connection: Without an Internet connection, accessing the database is very
difficult.
Limited resources for customization: You may require in-depth customization and integration with
your current systems for your daily business functions; cloud computing may not accommodate
these needs.
Availability: What if the cloud service goes down unexpectedly, leaving you without important
information for hours or more? How to guarantee reliable retrieval of data is yet
another challenge.
Data mobility and ownership: In a cloud environment, is it possible to get your data back safely
when the cloud service is stopped? How can you be assured that the service provider will wipe out
your data once you have cancelled the service?
Privacy: How much data are the cloud service companies collecting, and how are they using the
information?

NATURE OF THE CLOUD


Cloud computing has spread widely because it is not mere hype; it builds on existing computing
technologies.
Delivery of information technology services (including infrastructure, platform and applications)
from the cloud has both capital expense and operational expense advantages.
Cloud computing provides enterprises with a two-fold solution:
1. From the organizational perspective, the cloud delivers services for client and enterprise needs in a
simplified way, with economies of scale and a high quality of service that drive the capability
for expansion and innovation.
2. From the user’s perspective, it enables computing services in a simpler, more responsive model
without complete knowledge of the underlying technology. It is an effective service
acquisition and delivery model for IT resources if properly implemented within an all-
inclusive technology strategy. Cloud computing can help to improve overall business
performance while controlling the costs of distributing IT resources to the organization.

TECHNOLOGIES IN CLOUD COMPUTING


 Cloud computing is based on advanced distributed technologies.
 The name ‘cloud computing’ derives from the existence of data and applications on a
‘cloud’ of web servers.
 Cloud computing can be defined as getting work done by sharing and using the resources
and applications of a network environment without worrying about who owns and manages
those resources and applications.
 With this technology, the resources and data required to do a job are no longer
restricted to one’s personal computer. Resources hosted elsewhere are accessible at any time
and from any location, and this benefit lifts the constraints of time and place on the
work to be completed.

Other Cloud-related Technologies


Grid computing: An extension of distributed and parallel computing in which a super
and virtual computer consists of a number of networked, loosely coupled computers that act
together to perform enormous tasks.
Utility computing: The packaging of computing resources as a metered service similar to
electricity, a traditional public utility.
Autonomic computing: Computing systems that are capable of self-management.

Working of Cloud
 Cloud computing delivers information technology as a service over the network.
 Cloud computing consists of Infrastructure as a Service (IaaS), Platform as a Service (PaaS),
Hardware as a Service (HaaS) and Software as a Service (SaaS).
 Cloud computing ultimately enables the user to rent a virtual server loaded with software, turn it
on and off according to need, and even clone it to meet an immediate workload demand (see the
sketch after this list).
 Cloud computing also stores large amounts of data that can be accessed by certified users
with authorized applications.
 A cloud is used as a storage medium which handles applications, business and personal data.
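As an illustration of renting, switching and cloning a virtual server, the sketch below uses a hypothetical CloudClient API invented purely for this example; real providers expose analogous but differently named operations.

import uuid

class CloudClient:               # hypothetical API, invented for illustration
    def __init__(self):
        self.servers = {}        # server_id -> state

    def rent_server(self, image="ubuntu-22.04", software=("nginx",)):
        sid = uuid.uuid4().hex[:8]
        self.servers[sid] = {"image": image, "software": list(software),
                             "running": False}
        return sid

    def power(self, sid, on):
        self.servers[sid]["running"] = on    # turn on/off according to need

    def clone(self, sid):
        # Duplicate a loaded server to absorb an immediate workload spike.
        src = self.servers[sid]
        new_id = self.rent_server(src["image"], src["software"])
        self.power(new_id, on=True)
        return new_id

cloud = CloudClient()
web = cloud.rent_server(software=("nginx", "app-v1"))
cloud.power(web, on=True)
replica = cloud.clone(web)                   # meet a sudden demand spike
print(cloud.servers[replica])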

Key Characteristic of Cloud and its Role


The two key enabling technologies, based upon established practices in the areas of service
provisioning and solution design, that play a very significant role in the revolutionary phase of
cloud computing are: (i) virtualization and (ii) SOA.
Virtualization Technique
Virtualization manages how images of the OS, middleware and applications are created and
assigned to a physical system or a partition of the server stack.
These technologies also help in reusing licences for the OS, middleware or applications
after the customer releases their service from the cloud computing platform.
Service-oriented Architecture (SOA)
Cloud computing is basically a collection of services which communicate with each other.
Figure 2.3 displays the basic architecture of SOA and its components: (i) the service provider,
(ii) the service requestor and (iii) the contract details, in other words the service-level agreement.
Several big companies such as Google, Microsoft, Sun and even Amazon have the
capability to provide services instead of directly selling the software to the user. Companies
that desire cost cuts by choosing to rent rather than purchase most definitely need
these characteristics. However, issues like security, cost, availability and integration of applications
will play a vital role in the adoption of these architectures.

Figure 2.3 Service-oriented Architecture
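A minimal Python sketch of the SOA interaction in Figure 2.3: a provider publishes a service together with its contract (the service-level agreement), and a requestor discovers and invokes it. The registry class and contract fields are simplified assumptions for illustration.

class ServiceRegistry:
    def __init__(self):
        self._services = {}      # service name -> (callable, contract)

    def publish(self, name, fn, contract):   # service provider side
        self._services[name] = (fn, contract)

    def discover(self, name):                # service requestor side
        return self._services[name]

registry = ServiceRegistry()
registry.publish("currency-convert",
                 fn=lambda amount, rate: amount * rate,
                 contract={"availability": "99.9%", "max_latency_ms": 200})

service, sla = registry.discover("currency-convert")
print(service(100, 82.5), "under SLA", sla)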


MIGRATING INTO THE CLOUD

Cloud Computing Migration Issues: What You Need to Know?


When you migrate from a client to the cloud, the issues you will face fall into the following
overall categories.
Security
Security is an obvious threshold question: if the cloud is not secure, enterprises will not
consider migrating to it, fearing their sensitive data will be tampered with. Users must ensure that they
understand the underlying infrastructure of the cloud to which they migrate from their clients, and
must also advise clients to include security in their cloud SLAs and terms of service.
Vendor Management
When a user migrates to an outsourced provider, the service-level
agreements and their terms must be thoroughly checked. While the whole idea behind cloud computing is to
offer a standardized, multi-tenant infrastructure, cloud vendors may not offer the level of
custom SLAs that IT managers are used to.
Technical Integration
The technical issues are also complex. Most firms that now migrate to the cloud do so
in a hybrid model, keeping certain key elements of their infrastructure in-house and under their
direct control, while outsourcing the less sensitive, non-core components.
Process and Culture
There is also the ever-present political and cultural landmine: when anyone with a credit card
can surf the web site of a public cloud vendor and dial up teraflops of cloud capacity, how does IT
maintain control of its application architecture?
We have seen that cloud services can be bought with a credit card. When IT power becomes
economical and easily accessed, IT’s control over its internal customers can be diluted.
The Business View
Enterprises around the world have invested trillions of dollars in technology, hoping to
improve their execution potential, drive productivity, improve profitability and attain continued
competitive advantage.

Migrating to the Cloud: Deployment Considerations


The cloud migration has started. Cloud computing directly addresses a massive set of
challenges. How many of the following complaints have you heard about your company?
 The IT environment is too large and complex.
 Sluggishness of the existing systems that do not meet user expectations.
 Inability to consistently and effectively scale to support rapid growing requirements.
 Complex rules for obtaining performance metrics.
 Widening gap between available functionality and the tangible features actually used.
 We can’t find the talent to support new technology.
 High operating costs, high investment costs.
The cloud does suggest solutions to the problems listed above, but its actual effectiveness for an
enterprise lies in how it addresses the following key questions.
 What are the underlying drivers for the move into the cloud? Is it for new
functionality/applications, or for moving from an existing solution? How clearly are these defined
and communicated to the project team?
 What business needs and solutions does the cloud serve?
 Will the cloud-based solution work in isolation or with other systems?
 Is the planned solution part of a recognized cloud platform?
 How many customers will access the cloud? What training and support levels are
necessary?
 What is the total lifecycle cost of the solution, and why?
 Does the ‘pay-per-use’ model improve our cash flow?
The Process: Key Stages in Migrating to the Cloud
A well-planned and executed cloud computing solution can not only provide needed functionality
but can also offer an opportunity to improve processes that were supported directly or indirectly
by legacy systems. IT and business stakeholders must work together and have to:
 Clearly state business objectives for the cloud migration.
 Define project scope of the cloud migration.
 Provide a set of guiding principles for all to follow.
Cloud migration process can be divided into three areas:
1. Plan
   1. Determine key business drivers
   2. Define business objectives
   3. Get executive sponsorship
   4. Set project guiding principles
   5. Form a project team made up of IT and business representatives
   Develop a project plan that includes the following:
   6. Define business requirements
   7. Set key success metrics
   8. Set the timeline
   9. Identify decision-making authorities
2. Execute
   1. Execute the plan
   2. Stay away from ‘scope creep’: stay focused on the original project scope; this
      becomes a challenge particularly where a major legacy application with a
      large user base is being replaced
   3. Remember to follow the guiding principles at all times
   4. Communicate to all stakeholders regularly (no surprises!)
   5. Train users
3. Monitor
   1. Monitor adoption
   2. Track success metrics
   3. Stay away from scope creep (this one may well decide the success or failure of
      the project)
   4. Follow guiding principles
   5. Only implement changes based on quantifiable business needs
SEVEN-STEP MODEL
In the interest of pursuing a more rational point of view, there are seven areas to consider in
evaluating and transitioning to cloud-based solutions:
1. Know that there are many different variants of cloud services
2. Approach the cloud as a tool or an additional option for supplying IT functionality
3. Recognize which components of your environment may be ‘cloud compatible’
4. Understand your current costs, so you can better compute the advantage of cloud services
5. Prepare the organization to ‘manage’ rather than ‘operate’
6. Simplify and de-risk your migration
7. Ask questions to gain more knowledge

TYPES OF CLOUD
Types of Cloud Computing (deployment models)
Cloud computing can be classified into four types based on the location of the cloud. Several cloud
deployment models are available—private, public, hybrid and community—as shown in
Figure 3.0. A private cloud is an on-premises or internal cloud set-up, whereas a public cloud is an off-
premises or external one.
Both private and public cloud set-ups may provide three different services, that is, SaaS, PaaS and
IaaS. NIST (National Institute of Standards and Technology) provides a standard definition for cloud
computing and its models.
The public cloud is the most widely used model, where the infrastructure comprising hardware systems,
networks, storage and applications is owned by the provider.

Figure 3.0 Cloud Deployment Models

1. Public cloud: This computing infrastructure is hosted at the vendor’s premises. The end
user cannot view the infrastructure, which is shared between
companies.

Figure Public Cloud


2. Private cloud: Here the computing infrastructure is dedicated to the customer and is not
shared with any other companies. Private clouds are more costly and more secure than public
clouds. They may be hosted externally or on the organization’s own premises.

Figure Private Cloud

3. Hybrid cloud: Organizations can deploy less critical applications in the public cloud and high-
value applications in the private cloud; the combination is known as a hybrid cloud. Cloud
bursting describes a set-up where the organization uses its own infrastructure for
normal usage and the cloud for peak times (a minimal sketch follows this list).

Figure Hybrid Cloud

4. Community cloud: The cloud infrastructure is shared between companies of the same
community. For example, all the government organizations in a city can share the same
cloud, but non-governmental organizations cannot.

Figure Community Cloud
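Below is a minimal Python sketch of the cloud-bursting pattern mentioned under the hybrid cloud: requests are served on the organization’s own infrastructure up to its normal capacity, and only the overflow ‘bursts’ to the public cloud. The capacity figure is an illustrative assumption.

PRIVATE_CAPACITY = 100    # requests/s the in-house infrastructure can handle

def route(requests_per_second):
    # Split load between the private cloud and a public-cloud burst.
    private = min(requests_per_second, PRIVATE_CAPACITY)
    public = max(0, requests_per_second - PRIVATE_CAPACITY)
    return {"private": private, "public_burst": public}

print(route(60))    # normal usage -> {'private': 60, 'public_burst': 0}
print(route(250))   # peak time    -> {'private': 100, 'public_burst': 150}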

Six different types of cloud computing services and their offerings to businesses are listed as follows:
1. WWW-based cloud computing services exploit certain web service functionalities
rather than deploying full applications. For example, an application can use the Google
Maps API.
2. Software as a service is a model where an application is used by multiple tenants
through the browser. For example, SaaS solutions are used in sales, ERP and HR.
3. Platform as a service is a variant of SaaS where one can run one’s own applications, but
executed on the cloud provider’s infrastructure.
4. Utility cloud computing services offer virtual storage and server options which
companies can access on demand. This allows easy creation of a virtual data centre.
5. Managed services are the oldest cloud computing solutions. Here, the cloud computing
provider runs an application for the end customers; examples are anti-spam
services and application monitoring.
6. Service commerce is a mix of SaaS and managed services. It provides a hub of services
with which the end user interacts. Examples are expense tracking, virtual assistant services and
travel bookings.

CLOUD INFRASTRUCTURE
Cloud Computing Infrastructure
Cloud computing infrastructure functions like an electricity grid. When you need light in a
room, you turn the switch on, the signal travels through the electricity grid, power is transmitted
to your switch and you have light.
A cloud computing infrastructure works similarly: whenever you need resources such as
information or software, they are drawn from a network called a cloud.
Figure 3.1 shows the basic infrastructure for a cloud, comprising client and server
machines.
Application, platform and infrastructure services are used by both kinds of machines: servers deploy
the services and act as providers, whereas clients use them and act as requestors.

Figure 3.1 Cloud Computing Infrastructure


One can set up cloud computing infrastructure for a business in the following five steps:
1. Choose on-demand technology which will be the foundation for your infrastructure.
2. Determine how your employees can access information from the infrastructure.
3. Prepare the infrastructure with the necessary software and hardware.
4. Set up each computer to access the infrastructure.
5. Integrate all aspects of the infrastructure so that all employees can participate in resource
sharing.
CLOUD APPLICATION ARCHITECTURE
The latest technology in resource sharing is cloud computing. It maintains large numbers
of servers and can be billed on-demand and pay-per-cycle. End users have no idea
about the location of the servers in the cloud network.
Cloud computing is fully enabled by virtualization (hypervisors). A virtualized application is
an application that is bundled with all the components it needs to execute, along with an operating
system. This flexibility is advantageous to cloud computing and distinguishes it from other models
such as grid computing, utility computing and SaaS. Launching new instances of an application is
easy, and it provides the following:
 Scale up and down rapidly
 Increased fault tolerance
 Bring up development or test instances
 Speedier versions to the customer base
 Load and test an application

WORKING OF CLOUD COMPUTING
TRENDS IN COMPUTING
Information technology (IT) is evolving rapidly, and it becomes outdated as fast as it evolves.
Cloud computing technology has changed its focus from industry concerns to real-world problems. The major
trends that have emerged in cloud computing technology are:
 Small, medium business and micro-business
 Supply chain management, media and digital content, and legacy systems
 On-the-fly access
 Hybrid cloud model
 Growth in stack-as-a-service
Technology Trends
Virtualization
Virtualization spans infrastructure, applications, servers, desktops, storage, networks and
hardware. It can supply extra power on demand and is compatible with today’s
environmental measures. For small and medium businesses (SMBs), virtualization affords incredibly
easy migration.
Data Growth
According to Gartner, enterprise data growth is expected to accelerate over the next five
years, and 80% of data will remain unstructured. With this trend, IT complexity will also
increase, despite continued budget constraints. More access will lead to more data, resulting in
increased compliance, backup, audit and security demands.
Energy and Green IT
In Green IT, performance and power effectiveness will play a vital role. Corporate social
responsibility will become a primary concern as the power issue moves up the food chain.
Complex Resource Tracking
Complex resource tracking monitors the energy consumption of resources and
automatically optimizes it by moving workloads dynamically. Organizations will have to manage
new KPIs (key performance indicators) based on power, and there will be a growing demand for
new vendors and skills.
Consumerization and Social Software
Social collaboration (wikis, blogs, Facebook, Twitter), social media (content sharing and
aggregation) and social validation (social ratings, rankings and commentary) will continue to be a
major force in shaping consumerization and the software, compelling organizations to focus on early
pattern detection and ‘collectiveness’.

CLOUD SERVICE MODELS


Figure 4.2 shows the various cloud service models such as software, platform and
infrastructure. Service models are types of services that are required by customers.

Figure 4.2 Cloud Service Models


Service Models
 SaaS (Software as a Service)
 PaaS (Platform as a Service)
 IaaS (Infrastructure as a Service)
SaaS: Software as a Service
The provider of SaaS has full administrative rights over its application and is responsible for activities
such as deployment, maintenance and updates. This type is suitable for customers who want fewer
management hassles and no worries regarding the installation and updating of applications and software.
Figure 4.3 shows the levels of rights between the subscriber and the provider, i.e., the SaaS
component stack and scope of control. SaaS subscribers can be individual users, users from
organizations or users from enterprises. If the focus is on improving the business, SaaS is the
best option.

Figure 4.3 SaaS Component Stack and Scope of Control


By opting for SaaS, replacing old hardware and maintaining infrastructure can be avoided,
thus saving time and the cost of hiring technical staff. Applications which support productivity
and collaboration, for example Google Apps, are the best options.
PaaS: Platform as a Service
PaaS is a service where an application/software can be built, tested and deployed as a single unit.
PaaS is useful for application builders, developers, deployers and testers.
Figure 4.4 depicts the rights of control between the subscriber and the provider, i.e., the PaaS component
stack and scope of control. From the figure, we can understand that the cloud provider has total
control over the hardware and operating system, admin control over the middleware and no control
over the application. Cloud users do not have control over the OS or the hardware.
PaaS consists of an environment for developing applications, languages for writing programs,
compilers and tools for testing and deployment.

Figure 4.4 PaaS Component Stack and Scope of Control

Figure 4.5 IaaS Component Stack and Scope of Control

IaaS: Infrastructure as a Service
When the customer requires an end-to-end infrastructure such as compute resources, storage
and networks, he/she can opt for IaaS. The usage fee is billed per CPU-hour, per GB of data accessed
or stored per hour, per unit of bandwidth consumed, etc.
Figure 4.5 depicts the rights of control between a subscriber and a provider, that is, the IaaS
component stack and scope of control. From the figure, it is clear that the cloud provider has total control
only over the hardware and has admin rights over the virtualization layer, that is, the hypervisor. He/she has no
control over the application, middleware or guest operating system.
IaaS providers include Amazon, Rackspace, Joyent, GoGrid, Verizon Terremark and
RightScale; IaaS providers in India include NetMagic Solutions and InstaCompute (from Tata Communications).
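As a worked example of the IaaS billing units mentioned above (CPU-hours, GB of data stored per hour and bandwidth consumed), the Python sketch below totals a monthly bill. The unit prices are assumptions for illustration; real providers publish their own rate cards.

def iaas_bill(cpu_hours, storage_gb, hours_stored, bandwidth_gb,
              cpu_rate=0.04, storage_rate=0.0001, bandwidth_rate=0.08):
    # Sum the three metered components of a typical IaaS bill.
    cpu_cost = cpu_hours * cpu_rate
    storage_cost = storage_gb * hours_stored * storage_rate
    bandwidth_cost = bandwidth_gb * bandwidth_rate
    return round(cpu_cost + storage_cost + bandwidth_cost, 2)

# One month: 2 vCPUs running continuously (2 x 720 hours), 100 GB stored,
# 40 GB of bandwidth consumed.
print(iaas_bill(cpu_hours=2 * 720, storage_gb=100,
                hours_stored=720, bandwidth_gb=40))   # 68.0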

Cloud Service Models


Figure 4.6 depicts the levels of service provided by each service model. Service models are
categorized into five types:
1. Business as a service
2. Software as a service
3. Platform as a service
4. Infrastructure as a service
5. Management as a service
Figure 4.6 Cloud Service Models
Comparison
Other aspects of cloud service models are as follows:
 It provides management as a part of the service. Managing multiple services, service
models and on-premise applications and systems is a management function of large
organizations.
 Your applications and services can be built on top of an infrastructure service from a cloud
service provider; if a development platform is required, you can build this on
the infrastructure service as well.
 A platform service includes the required infrastructure service to support the platform.
 An application (software) service includes the overall infrastructure and platform services
to support the application.
 Business process systems facilitate the development of business processes including
business process inventory, definition, development, deployment, management and
measurement.

SUMMARY OF UNIT I

 Cloud computing is the use of computing assets (hardware and software) that are delivered as a service
over a network (i.e., the Internet).
 Cloud computing is a technology that uses the Internet and remote servers to maintain data and
applications.
 Cloud computing permits consumers and enterprises to use applications without installation and to access
their personal files from any computer with Internet access.
 Cloud computing technology permits much more effective computing by centralizing data storage,
processing and bandwidth.
 Proponents assert that cloud computing permits businesses to bypass upfront infrastructure charges, and
focus on tasks that differentiate their enterprises rather than on infrastructure.
 Cloud computing relies on the sharing of assets to achieve coherence and economies of scale, similar to a
utility (e.g., the electrical power grid), over a network.
 The base of cloud computing is the broader notion of converged infrastructure and distributed services.
 Cloud computing is the result of the evolution and adoption of existing technologies and paradigms.
 The aim of cloud computing is to permit users to take advantage of all of these technologies, without
requiring in-depth knowledge of or expertise with each one of them.
 The major support technologies for cloud computing are virtualization and autonomic computing.
 Cloud computing has an advantage over earlier distributed computing techniques in terms of
QoS and reliability.
 Tim Berners-Lee suggested the concept of distributing information across multiple servers to be made
accessible to the world via client computers; thus, the world wide web was born.
 Enterprise computing is proceeding from a server-centric to an application-centric operations model.
 The cloud will assist IT in rapidly establishing new capabilities (applications, services, accessibility) that
will enable enterprises to function more effectively and efficiently.
 The salient characteristics of cloud computing defined by the National Institute of Standards and
Technology (NIST) are (i) on-demand self-service, (ii) broad network access, (iii) resource pooling, (iv)
rapid elasticity and (v) measured service.
 Cloud computing permits enterprises to boost IT capacity (or add capabilities) on the go and in less time,
without buying new infrastructure, staff or programs, as a pay-per-use service.
 The renowned cloud delivery models are (i) Cloud Software as a Service (SaaS), (ii) Cloud Platform as a
Service (PaaS) and (iii) Cloud Infrastructure as a Service (IaaS).
 The well-known cloud deployment models are (i) private cloud, (ii) community cloud, (iii) public cloud
and (iv) hybrid cloud.
 Cloud computing is often far more secure than customary computing, because organizations like
Google and Amazon have highly skilled employees who keep up to date on cyber security.
 Cloud computing actually is not (i) a data centre, (ii) client/server computing, (iii) grid computing or
(iv) a centralized computing system.
 Cloud computing can enable an unfailing flow of information between the service provider and the end
user.
 The pay-as-you-go model of cloud computing adds ample savings to the company’s portfolio.
 Cloud computing adopts Internet-based services to support enterprise processes.
 It is important to know about cloud computing before making the decision to move the enterprise into
‘the cloud’.
 The interconnectivity of computer servers is the first constituent that identifies cloud computing.
 Cloud computing can facilitate the proper management of information technology resources within the
organization.
 A key characteristic of cloud computing is that it sanctions the outsourcing of the company’s work
portfolio.
 Cloud computing is expected to bring in a higher level of automation than the ordinary procedures of
communication between the various sections of an organization.
 A good cloud computing package must only charge you for the services that you use.
 Software provided online is upgraded and maintained by the provider, so there is no need to pay for or
download fixes and patches.
 There are chances for anonymous computer hackers to gain entry to enterprise data in the
cloud.
 The key to employing cloud hosting carefully is finding a conscientious provider that offers backup
programs.
 Advantages of cloud computing are (i) Cost reduction, (ii) scalability, (iii) levels the playing field, (iv)
easier collaboration, (v) scalable and flexible and (vi) efficiency.
 Disadvantages of cloud computing are (i) Security concerns, (ii) risk of losing Internet connection, (iii)
limited resources for customizations, (iv) availability, (v) data mobility and ownership, and (vi) privacy.
 The growth of cloud computing could drastically change the way companies manage their technical assets
and computing needs.
 The cloud model can yield enhanced outcomes for the client of an IT service.
 Cloud computing developed from existing advanced distributed technologies.
 Other cloud-related technologies are (i) Grid computing, (ii) utility computing and (iii) autonomic
computing.
 Cloud computing uses IT as a service over the network.
 Cloud computing contains Infrastructure as a service (IaaS), Platform as a service (PaaS), Hardware as a
Service (HaaS) and Software as a service (SaaS).
 A cloud is employed as a storage medium which handles applications, enterprise and private data.
 The cloud migration process can be divided into three phases: (i) plan, (ii) execute and (iii) monitor.
 Five things to be known while migrating to cloud are (i) Start small, (ii) trust cloud vendors to protect
data, (iii) consider importance of security aspects, (iv) be an identity provider and (v) plan for latency and
outages.
 There are seven regions to consider in evaluating and transitioning to cloud-based solutions.
 Understanding an organization’s existing environment, articulating its requirements and then planning
and executing the transition are the approaches to successfully delineating and undertaking a move to the cloud.
 Information technology is continuously evolving.
 It becomes outdated as fast as it evolves.
 Technologies are based on two parameters: (i) the current investment rate and (ii) the current adoption
rate.
 Cloud computing technology changed its focus from industry to real-world problems.
 Technology trends to watch for are: (i) virtualization, (ii) data growth, (iii) energy and Green IT, (iv)
complex resource tracking and (v) consumerization and social software.
 A cloud service can be replaced with any one of the following as Cloud * as a Service, where ‘*’ can be
replaced as, ‘Desktop, data, platform, IT, infrastructure, testing, computing, security, software, storage,
hardware, database, etc.’
 Cloud service models are (i) SaaS (Software as a Service), (ii) PaaS (Platform as a Service) and (iii) IaaS
(Infrastructure as a Service).
 Private and public clouds are defined based on their relationship to the Internet, as subsets of it.
 Many types of cloud deployment models are available: private, public, hybrid and community.
 Cloud computing has many benefits and risks.
 Storage services based on cloud computing cut costs, but the resulting increase in data transfer
(bandwidth) is the main concern.
CLOUD COMPUTING TECHNOLOGY

CLOUD LIFECYCLE MODEL


The lifecycle management of cloud is so efficient that the IT sector can easily achieve the primary
goals of a cloud environment such as agility, cost savings and optimal use of resources.
Cloud lifecycle management provides:
 Ease in administrating cloud and service portal
 Manageable service
 Established multi-tenancy
 Include performance and capacity management
 Support heterogeneity
The Cloud Development Life Cycle (CDLC) is an iterative lifecycle model for the development, deployment and
delivery of cloud. The cloud is organized in a linear manner and every phase is processed individually,
making it the simplest and most flexible process model. The outcome of one phase of
CDLC becomes the input to the next phase. In this model, cloud environment development begins with
the requirement and analysis phase.
Phases of CDLC
Figure 5.1 shows the different activities and feedback paths of the CDLC in achieving the desired
cloud. In this lifecycle model, feedback is used: a phase gives the necessary information back to
an earlier phase. Among these phases, the second, fourth, fifth and sixth phases give
feedback to the first phase.
Requirement and Analysis
The requirement and analysis phase is used to evaluate and understand the requirements of the end user.

Figure 5.1 The Cloud Development Lifecycle


Figure 5.2 Cloud Architecture
Architect
The structure and behaviour of the cloud architecture provide the solution for the cloud system, which
comprises on-premise resources, cloud resources, cloud services, cloud middleware, software
components, data server locations and the externally visible properties of the data server locations.
Figure 5.2 shows that the components of cloud architecture are the reference architecture, the
technical architecture, and the deployment and operational architecture.
Implementation and Integration
The third phase of the CDLC is the actual formation and enablement of private, public,
community, hybrid, inter- and hosted-cloud solutions to a computing problem.
Implementation
Two components of cloud computing are implemented in this phase: first the file system,
and second the map-reduce system. This phase also performs the task of integrating the different
cloud solutions into one cloud environment.
Integration
Integration sits between the source and target systems, extracting data,
mediating it and publishing it. Five recommendations for integrating into the cloud
effectively are as follows:
1. Plan and set realistic goals
2. Learn from others’ experience
3. Recruit an IT specialist team
4. Address security concerns
5. Maximize connectivity options
Quality Assurance and Verification
In this phase, cloud auditing is done to ensure the quality of the cloud network. It also confirms the
performance, reliability, availability, elasticity and safety of the cloud network at the service level.
Deploy, Testing and Improvement
In this phase, different platform service providers drastically reduce the deployment cost of the application
by pre-building and pre-configuring a stack of application infrastructure.
Monitor, Migrate and Audit
This phase is marked by periodically monitoring the cloud environment and measuring the
performance of the system.

Cloud Management Lifecycle


The enterprise manager (Figure 5.3) manages the lifecycle phases: planning, setting up, building,
testing and deploying, monitoring, managing, metering, charging and optimizing.
 Planning: The enterprise manager helps in creating a cloud set-up with brand-new hardware,
new software and even a new data centre.
 Set-up: The enterprise manager adopts the IaaS, PaaS and DBaaS cloud models and the various
services offered by these models.
 Building: Packaging and publishing of applications are done with the help of the available
cloud computing services.

Figure 5.3 The Cloud Lifecycle


 Testing and deploying: After building an application, it has to be tested. The testing
portfolio available in the enterprise manager does this job. The changes resulting from
testing are stored in the database. Testing also estimates the load capacity after
deployment.
 Monitoring and managing: The enterprise manager monitors the settings, standards and policies, and
organizes them for better management.
 Metering, charging and optimization: Usage of resources such as CPU, storage (GB) and
memory is metered and charged accordingly.

ROLE OF CLOUD MODELLING AND ARCHITECTURE


Cloud Computing Model
The cloud computing model supports convenient, on-demand software use over the Internet. The
computing resources used are released after usage without any manual intervention.
The model for cloud computing comprises five essential characteristics, four deployment models
and three service models.
Necessary Characteristics
 On-demand self-service: Any customer can unilaterally use computing capabilities such as
network storage and server time as desired, without human interaction with each service
provider.
 Broad network access: Services are networked and can be accessed over standard
mechanisms which promote use on mixed thick or thin client platforms (e.g., handheld
devices such as mobile phones, laptops and PDAs).
 Resource pooling: The provider’s resources are pooled to serve multiple users by means of
a multi-tenant structure, with different virtual and physical resources assigned
dynamically.
 Rapid elasticity: Services can be provisioned elastically and rapidly to speed up scale-out and fast
release. To the customer, the services available often appear to be
unlimited and can be bought in any amount at any point of time.
 Measured service: In a cloud system, control and optimization of resources happen
automatically, through control and metering at some level of abstraction
appropriate to the kind of service, for example bandwidth, processing, storage and
active user accounts.
Service Models
 Cloud software as a service: The capability provided to the customer is to use the provider’s
applications running on the infrastructure provided by the service provider. The applications
can be accessed from any device that supports the web.
In this case, the customer does not control or manage the network, servers, operating systems,
storage, memory or even individual applications, with the possible exception of user-specific
application settings and configuration.
 Cloud platform as a service: The service includes deployment onto the cloud system
infrastructure of applications created by the user or acquired, which may be written in
some programming language using tools that are supported and/or provided by the service
provider.
The end user does not control or manage the infrastructure of the cloud computing system, which
comprises servers, networks, storage or operating systems.
 Cloud infrastructure as a service: Here the same capabilities and resources are provided, but
the consumer can deploy and run arbitrary software. The user does not control the infrastructure.
Deployment Models
 Private cloud: This cloud functions within the organization and behind its firewall.
 Community cloud: This cloud infrastructure is common to several organizations.
 Public cloud: This cloud infrastructure is available to the public or to large industry groups.
 Hybrid cloud: It is a composite of two or more clouds.

REFERENCE MODEL FOR CLOUD COMPUTING


A reference architecture (RA) provides a blueprint and/or architecture that can be reused by others with
some changes.
A reference model (RM) explains what the reference architecture comprises and its various
relationships. The RA and RM help cloud computing by enabling quick formation of a framework. The
detailed and generalized version of the reference framework is shown in Figure 5.4.

Figure 5.4 Generalized Reference Framework


A reference framework consists of a reference model, a reference architecture, a process and
an organization.
The reference model takes care of laying foundations in principle and designs models such as the meta
model, maturity model and capability model. The reference architecture is divided into two parts:
1. Views, in terms of business, implementation, deployment and technology
2. Practice, in terms of standards, patterns, deliverables and models
The process decomposes the given job and sequences it. The organization specifies the roles and
responsibilities of the in-house staff according to their skills.
The advantage of this framework is that its elements can be mapped in different ways, that is,
to different problem scenarios and solutions, within a single framework.
Reference Architecture, Frameworks and Models for Cloud Computing
There are many frameworks and models for cloud computing. Reference models are of two
types: role based and layer based.
In a role-based model, the cloud provider and consumer are considered as roles; examples are the
DMTF, IBM and NIST cloud models.
In a layer-based model, applications and resources are considered, and layers and their
capabilities are mapped. Both types contain roles, activities and a layered architecture.

CLOUD ARCHITECTURE
CLOUD COMPUTING LOGICAL ARCHITECTURE
Cloud Computing Architecture
Cloud computing is an Internet-based technique using shared resources available remotely.
A cloud computing system can be divided into two parts: the front end and the back end, interconnected
via the Internet. The front end is used by the customers, and the back end refers to the
service providers.
The front end contains the customer’s devices, comprising computers, a network and the
applications for accessing the back end system, that is, the cloud systems.
The front end is the interface through which a customer can make use of the services rendered
by the cloud computing system.
The back end contains the physical devices or peripherals, as well as various computing resources
such as CPUs and data storage systems.
A combination of these resources is termed the cloud computing system. A dedicated server is used
for administration; it monitors consumer demands, traffic, etc.

Cloud Computing Service Architecture


Google Apps is a computation service of Google’s business solution. Other big names in cloud
computing services are Microsoft, IBM, Amazon, HP and DELL.
The following three types of services are available with a cloud service provider (Figure 6.1).
1. Infrastructure as a service: The service provider takes care of the cost for the resources
such as servers, equipments for networking, backups and storage.
2. Platform as a service: The provider only renders the platform or solutions for the
consumers.
3. Software as a service: The provider will provide the consumers with software applications
available in his/her premises.
Understanding Cloud Computing Architecture Models
The major challenge in cloud computing is that there is no defined standard or architecture.
Cloud architectures can be viewed as a collection of different functionalities and capabilities. A
cloud computing system has various IT resources, deployed in remote places, designed to run
applications dynamically.
Figure 6.1 Cloud Computing Stack

CLOUD SYSTEM ARCHITECTURE


Figure 6.2 depicts examples of cloud-based architectures. These architectures can be
deployed using the existing templates available in the MultiCloud Marketplace.
Points to Consider
Factors to be considered while designing cloud-based architectures are highlighted as follows:
 Cost: Clearly understand the pricing details for the various cloud models.
 Complexity: Analyse the complexity before customizing the cloud solution, and check the
requirements thoroughly before deployment.
 Speed: Check the speed of the cloud model: speed in terms of advanced CPU architecture,
high memory, low latency and network infrastructure.
 Cloud portability: Check the portability. This allows the consumer to move from one
vendor to another without making many changes to the architecture.
 Security: Check the security measures provided by the vendor.

Figure 6.2 Single Cloud Site Architecture


Example Reference Diagram
Figure 6.2 shows the progress from simple to complex reference architectures.
Single Cloud Site Architectures
Figure 6.2 shows that, in the single cloud site architecture, the load balancer, application logic, databases
and storage are located in the cloud, that is, on the load-balancing server, application server and database
server.
Redundant 3-tier Architecture
In Figure 6.3 we can see that redundant servers are available at the load-balancer, application and
database tiers. System downtime is reduced when adopting a redundant architecture.
The figure also demonstrates the use of a striped volume set at the database tier in a redundant 3-tier
architecture, for when the database is huge and needs faster backups for data storage.
Multi-datacentre Architecture
If the cloud infrastructure has many datacentres, it is recommended to distribute the system
architecture across the datacentres for redundancy and protection.
Figure 6.4 shows the multiple datacentres in the reference architecture.
It shows a multiple-datacentre architecture with two datacentres, each having its own load-balancing
applications, volumes and master database.
When one datacentre goes down, the other datacentre takes over automatically. A toy failover sketch follows.
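The sketch below illustrates only the principle of this failover; the health-check URLs are hypothetical, and real deployments would normally rely on DNS- or load-balancer-level failover rather than application code.

    # Toy failover sketch for a two-datacentre deployment.
    # The health-check endpoints below are hypothetical.
    import urllib.request

    DATACENTRES = [
        "https://dc1.example.com/health",   # primary datacentre
        "https://dc2.example.com/health",   # secondary datacentre
    ]

    def pick_datacentre():
        """Return the first datacentre that answers its health check."""
        for url in DATACENTRES:
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    if resp.status == 200:
                        return url
            except OSError:
                continue   # this datacentre is down; try the next one
        raise RuntimeError("no datacentre available")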

Figure 6.3 Redundant 3-Tier Architecture


Figure 6.4 Multi-Datacentre Architecture
Types of Cloud Deployment Model
There are three main types of cloud deployment model; however, there is yet another type,
known as the community cloud, which is used in some instances. Table
6.1 lists the various cloud deployment models and highlights their characteristics.
Table 6.1 Cloud Deployment Models

Public Cloud
 Provider owned and managed
 Access by subscription
 Economic benefits
 Reduced IT service delivery cost
 Reduced hardware, systems, software, management and application costs
Key patterns:
 Users initiate the amount of resource usage
 Scalability of compute resources is automated
 Pay-per-use metering and billing

Private Cloud
 Client dedicated
 Access defined by client
 Data governance rules/regulations
 More secure
 Economic benefits: reduced capex and opex
 Service-level discipline
Key patterns:
 Resource-driven provisioning of development, test and production systems, managing the end-to-end (E2E) lifecycle
 Ease of deploying applications

Hybrid Cloud
 Consume more resources in peak hours
 Economic benefits
 Scale the private cloud for business as usual (BAU)
 Maintain service levels by scaling externally
 Share cost vertically with charge-back options
Key patterns:
 SLAs exist and policies are driven based on the SLA
 Consumption of resources (storage, compute) is automated

CLOUD MODELLING AND DESIGN

CLOUD COMPUTING: BASIC PRINCIPLES
Cloud computing is a means to reduce the cost and complexity of IT, and it helps to optimize the
workload in hand.
Cloud computing uses infrastructure designed especially to deliver vast computing
services from limited resources.
The rapid growth of cloud computing offers cost-saving efficiency for businesses and individual
users.
The main advantages of the cloud are the ability to scale data storage and dynamic computing power,
which saves cost. These benefits will:
 Improve government services and citizens’ access.
 Transform businesses.
 Provide new innovations to consumers.
 Create energy savings.

Key Principles of Cloud Computing


Three key principles of cloud computing are abstraction, automation and elasticity.
Abstraction
IT providers need to standardize their IT operations, so that optimizing those operations
becomes easy. Cloud computing gives some basic but well-defined services.
Managing the software services is passed on to the developer or user. A well-defined abstraction
layer acts as a lubricant between clouds and developers or users, helping each work efficiently and
independently of the other. The three abstraction layers in clouds are:
1. Application as a Service (AaaS)
2. Platform as a Service (PaaS)
3. Infrastructure as a Service (IaaS)
Automation
Automation in the cloud means that developers or users have complete control over their
resources with no human interaction required, even on the developer's or user's side.
This automatic process reduces cost and complexity, and it puts the developer or user in
control.
Elasticity
In the dot-com era, people started scaling horizontally, which allowed them to add capacity
according to their needs.
Using elasticity, people can easily scale up and down according to their daily usage.
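A minimal sketch of this scale-up/scale-down behaviour follows, under the assumption of hypothetical provider calls add_instance() and remove_instance() and a stand-in monitoring metric load_average.

    # Elasticity sketch: follow demand by adding or removing instances.
    # add_instance/remove_instance stand in for real provider API calls.
    def autoscale(instances, load_average, add_instance, remove_instance,
                  high=0.8, low=0.3, min_instances=1):
        per_instance_load = load_average / len(instances)
        if per_instance_load > high:
            instances.append(add_instance())        # scale up under heavy load
        elif per_instance_load < low and len(instances) > min_instances:
            remove_instance(instances.pop())        # scale down when idle
        return instances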
MODEL FOR FEDERATED CLOUD COMPUTING
Cloud Federation
Cloud federation is the interconnection of the cloud computing environments of two or more
service providers, to balance the traffic load and to handle surges in demand.
Cloud federation offers two benefits to cloud providers. First, it generates revenue from
providers' idle computing resources. Second, it enables providers to expand across borders.
What is Cloud Federation?
Federation means different cloud flavours are interconnected and so are their internal
resources. IT organizations can select their flavours based on their needs such as computing and
workload.
Federation acts as a bridge between two cloud environments.

CLOUD ECOSYSTEM MODEL


Cloud Ecosystem
Cloud ecosystem is a term that describes the complexity of cloud systems in terms of the
interdependent components that work together to enable cloud services.
Cloud Broker/Cloud Agent
 A cloud broker is a third-party person or business that acts as an intermediary between the
consumer and the sellers.
 A cloud broker is a software application that facilitates the distribution of work between
various cloud service providers. Another name for a cloud broker is cloud agent.
Cloud Outlook
We can expect phenomenal growth in cloud adoption and implementation in the coming
years; Figure 7.1 shows the same. Areas such as the big data cloud, business cloud,
mobile cloud and gamification cloud are the key trends, highlighted as follows.

Figure 7.1 Cloud Outlook


Big Data Cloud
The amount of data created and replicated in 2012 surpassed 1.9 ZB.
IDC estimates that the total size of data in the universe will reach 9 ZB within four
years and that nearly 21% of this information will be touched by the cloud.
The big data cloud enables an economical way to extract value from very large volumes of
data by high-velocity capture, discovery, transformation and analysis.
Business Cloud
Business cloud is not like SaaS, PaaS, IaaS and BPaaS. It is more than that.
Mobile Cloud
Mobile applications will continue to develop under social pressure, which will accelerate
the progress of cloud computing in empowering users and increasing consumerization: anybody,
anywhere, anytime, on any device. The mobile cloud will push several organizations to reorganize
their business models.
Gamification Cloud
The gamification cloud will turn technology into edutainment: it will guide participants along a path to
mastery and autonomy, encourage users to engage in desired behaviours and make use of the human
psychological predisposition to gaming.

UNIT – II
Virtualization: Foundation – Grid, Cloud and Virtualization – Virtualization and Cloud Computing.
Data Storage and Cloud Computing: Data Storage – Cloud Storage – Cloud Storage from LANs to
WANs
FOUNDATIONS
DEFINITION OF VIRTUALIZATION

‘Virtualization is a methodology for dividing the resources of a computer into more than one
execution environment, by applying concepts such as partitioning, time-sharing, machine
simulation and emulation.’

Virtualization reduces users' workload burden by centralizing administrative
tasks while improving scalability and workload handling.

It contains three layers: layer 1 comprises the network, layer 2 the virtual
infrastructures and layer 3 the virtual machines, on which different operating systems and
applications are deployed.

A single virtual infrastructure can support more than one virtual machine, that is, more than
one OS and application can be deployed.

The physical resources of the multiple machines of the entire infrastructure are shared in the virtual
environment.

Virtualization is a method in which multiple independent operating systems run on one physical
computer. It maximizes the usage of the available physical resources.

Diagrammatic Representation of Virtualization

Following are some reasons for using virtualization (a small host-inspection sketch follows the list):

 Virtual machines (VMs) consolidate the workloads of under-utilized servers. Because of
this, one can save on hardware, environmental costs and management.
 VMs are used to run legacy applications.
 VMs provide a secure sandbox for running untrusted applications.
 VMs help in building secure computing platforms.
 VMs provide an illusion of hardware.
 VMs simulate networks of independent computers.
 VMs support running distinct OSs with different versions.
 VMs are used for performance monitoring: operating systems can be checked without
disturbing productivity.
 VMs provide fault and error containment.
 VM tools are good for research and academic experiments.
 VMs can encapsulate the entire state of a system by saving, examining, modifying and
reloading it.
 VMs enable memory sharing in multiprocessor architectures.
 VMs make the job easier for administrative staff in migration, backup and recovery.
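As one small illustration of the administrative angle, the sketch below lists the virtual machines defined on a host. It assumes a Linux host running KVM/QEMU with the libvirt-python bindings installed, which is only one of many possible virtualization stacks.

    # List the VMs on a local KVM/QEMU host via libvirt
    # (pip install libvirt-python). Assumes qemu:///system is reachable.
    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(dom.name(), state)
    conn.close()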

TYPES OF VIRTUALIZATION
Today virtualization is applied in many places: server virtualization, client/desktop/application
virtualization, storage virtualization and service/application infrastructure virtualization.
The diagram depicts the various types of virtualization. A broad mixture of virtualization technology
has been conceptualized, developed and enhanced, giving the consumer flexibility, greater
efficiency and cost-effectiveness. The various virtualization types shown in Figure 8.2 are as follows:
 Server virtualization is a kind of virtualization used for masking server resources,
including the number of physical servers, processors and operating systems.
 Network virtualization is a method in which network resources are combined based on available
bandwidth.
 Storage virtualization is a type of virtualization in which a pool of physical storage from
different networked storage devices appears as a single storage device.

Types of Virtualization

 Desktop virtualization supports utility and dynamic computing, testing,
development and security.
 Application virtualization allows server consolidation, application and desktop
deployment, and business continuity. Apart from this, disaster recovery, lower TCO
with higher ROI, dynamic computing, and testing and development are possible.
 Management virtualization provides a variety of features, as follows: server
consolidation, centralized policy-based management, business continuity and disaster
recovery, lower TCO with higher ROI, utility and dynamic computing, testing and
development, and security.

VIRTUALIZATION APPLICATION
Application virtualization describes a software technology that improves the
portability, compatibility and manageability of applications
by encapsulating them from the base OS on which they are executed.
A virtualized application is not installed as other software/applications are, but it
executes as if it were installed.
Technology Types Under Application Virtualization
 Application streaming
 Desktop virtualization/virtual desktop infrastructure (VDI)
Benefits of Application Virtualization
 Non-native applications can be executed (e.g., Windows applications on Linux)
 Protection for the operating system
 Fewer resources are used
 Applications with bugs can be run (e.g., applications that try to store user data in a
read-only, system-owned location)
 Incompatible applications can be executed with less regression testing
 Migration between operating systems is simplified
 Faster application deployment and on-demand application streaming
 Security is improved, as applications are isolated from the operating system
 Enterprises can easily track license usage for applications
 No need to install the applications, as they can be run from portable media on client
computers
Limits for Application Virtualization
 Not all software can be virtualized. Examples include device drivers and 16-bit applications.
 Anti-virus packages that require direct OS integration cannot be virtualized.
 For legacy applications, file- and registry-level compatibility issues can be resolved using
virtualization on newer operating systems. For example, applications that do not manage
the heap correctly will not run on Windows Vista; for such cases, application
compatibility fixes are needed.

GRID, CLOUD AND VIRTUALIZATION


 Virtualization is often coupled with virtual machines and corresponding CPU abstractions.
 The majority of current applications are in the area of CPU, storage and network virtualizations.
 Usage of virtualization can be grouped into two types: platform virtualization and resource
virtualization.
 From grid computing's side, virtualization gets more and more attention, but not in terms of
service consolidation and growing server utilization.

VIRTUALIZATION IN GRID
Grid Computing
The main focal point of grid computing is secure resource sharing, that is, access to computers,
software and data, in a dynamic environment. The sharing of those resources has to be fine-tuned and
handled in a highly controlled manner.
Grid Computing and Virtualization
 Virtualization by itself is not a complete solution for enterprises to manage their resources, although
it provides richer capabilities for managing and moving operating systems across different hardware.
 It helps to run multiple workloads in a single machine with clear distinction between them.
 Virtualization can do suspending, resuming and migrating images in run-time.

VIRTUALIZATION IN CLOUD
 Virtualization is a tool for system administrators that has many technical uses beyond the cloud.
 Virtualization allows IT organizations to perform multiple operations using a single piece of physical
hardware. Multiple OS instances running on a single device are more cost-effective than a
separate server for each task.
 Virtualization and cloud computing can go hand in hand. Virtualizing everything began
when processing power, software and servers were virtualized. The following table
compares cloud computing and virtualization.

Comparison of Cloud Computing and Virtualization

                                 Cloud Computing                         Virtualization

Location of virtual machine      On any host                             On a specific host

VM/instance storage              Short-lived                             Persistent

VM resources (CPU, RAM, etc.)    Standard                                Customizable

Resource changes                 Spin up a new instance                  Resize the VM itself

Recovery from failures           Discard the instance, spin up a new one Attempt to recover the failed VM

SUMMARY OF UNIT II

 Cloud computing is capable of transforming the IT sector and enhance responsiveness towards
business requirements.
 IT organizations are approaching cloud computing for implementing virtualization in their data
centres.
 The traditional virtualized environment can be extended to cloud lifecycle management that can
deliver an operational model for cloud services and deployment.
 Cloud lifecycle management provides five services.
 Cloud computing is an effective model, which enables convenient on-demand network access for the
shared resources.
 There are five possibilities and recommendations for integrating into cloud effectively.
 Organizations require an effective solution to support the thousands of parallel applications used for
their business demands. Organizations can share the various resources such as storage, server and
workloads using cloud computing models such as IaaS, PaaS and DaaS.
 The advantages of cloud computing are (i) increased QoS, (ii) rapid application development and
deployment, (iii) elasticity and (iv) speed.
 Cloud computing model supports convenient, on-demand software using the Internet.
 Cloud computing supports availability and comprises five characteristics, four deployment models and
three service models.
 A reference architecture (RA) provides a blueprint and/or architecture, reused by others with slight
modifications.
 A reference model (RM) explains what the reference architecture comprises and its various
relationships.
 In cloud computing, RAs and RMs help in forming frameworks speedily.
 APIs are used for storing and transferring data in cloud computing.
 A well-established document if published would help the IT industry in defining the cloud computing
process better.
 Cloud computing is a type of computing environment, where IT businesses outsource their computing
needs which includes software application services when they are in need of computing power or other
resources like storage, database, e-mails, etc., which are accessed via WWW.
 Cloud computing system can be divided into two parts: front end and back end.
 Good examples are Gmail and Yahoo; both use cloud computing technology.
 The operating cost of the cloud computing is comparatively low considering personal infrastructure.
 The only concern in the cloud computing technology is security and privacy.
 Cloud computing environment can be broadly classified based on the infrastructure: (i) public cloud, (ii)
private cloud and (iii) hybrid cloud.
 The main advantage in cloud computing is that consumers need not pay for the infrastructure and its cost
for maintenance.
 Mainly three types of services are available from a cloud service provider: (i) Infrastructure as a service,
(ii) Platform as a service and (iii) Software as a service
 Cloud architectures can be viewed as a collection of different functionalities and capabilities.
 Simplest cloud computing model can be viewed as a collection of servers, which are virtualized using a
tool.
 The cloud computing reference model (CC-RM) facilitates the process of modelling the cloud,
architecture and planning the deployment activities.
 The cloud reference model consists of four elements/models: (i) cloud enablement model, (ii) cloud
deployment model, (iii) cloud governance and operations model and (iv) cloud ecosystem model.
 The CC-RM has four sub-models: (i) cloud virtualization tier, (ii) cloud operating system tier, (iii) cloud
platform tier and (iv) cloud business tier.
 Cloud-based solution architectures are (i) single cloud site architecture, (ii) redundant 3-tier architecture
and (iii) multi-datacentre architecture.
 Cloud computing is a means to reduce the IT cost and complexity, and helps to optimize the workload in
hand.
 Factors needed for promoting the cloud computing are: (i) assurance regarding security risk, (ii) no illicit
activities, (iii) data portability and (iv) SLA regarding authentication of consumer data and (v) way to go
beyond boundaries.
 Three key principles of cloud computing are (i) abstraction, (ii) automation and (iii) elasticity.
 Cloud federation is the interconnection of the cloud computing environments of two or more service
providers, to balance traffic and handle surges in demand.
 Cloud ecosystem characterizes the complexity of the schemes in terms of its interdependent constituents
that work simultaneously to endow cloud services.
 The concept of ‘governance’ means different things to different people.
 Governance in the cloud means that service levels should be given importance. Developers must know about
their providers' SLAs.
 Virtualization is a methodology for dividing the computer resources to more than one execution
environments by applying concepts such as partitioning, time-sharing, machine simulation and emulation.
 Virtualization eases the work of users by centralizing the administrative tasks and improving the
scalability and workloads.
 Virtualization is a very powerful tool that drives significant benefits for cost, agility and the
environment.
 Virtualization provides multiple environments for execution termed as virtual machines; examples are (i)
Wine, (ii) FreeBSD, (iii) Hive, (iv) Microsoft Virtual Server, (v) Nemesis and (vi) SimOS.
 Examples of virtual machines programming languages are (i) UCSD P-System, (ii) JVM.
 The needs for server virtualization are (i) consolidation, (ii) redundancy, (iii) legacy systems and (iv)
migration.
 There are three ways to create virtual servers (i) full virtualization, (ii) paravirtualization and (iii) OS-
level virtualization.
 OS virtualization differs somewhat from server virtualization. In it, the host runs a single OS kernel and
exports different operating system functionalities to each guest.
 Storage systems use virtualization concepts for better functionality and have more features within the
storage system.
 Common network virtualization scenarios and examples are (i) external network virtualization and (ii)
internal network virtualization.
 Some pitfalls of virtualization adoption and strategy are (i) religious battles, (ii) procurement
and business changes, (iii) myopic virtualization strategy, (iv) physical cost recovery models, (v) physical
asset-based security, (vi) still support programs and (vii) over-virtualization.
 These potential pitfalls can be overcome by avoiding (i) poor preparation, (ii) insufficient server
capacity, (iii) mismatched servers, (iv) slow network communications, (v) slow mechanical disks, (vi)
uneven workload distribution and (vii) security risks.
 Virtualization is a core technique for many application environments in computing systems.
 Virtualization is often coupled with virtual machines and corresponding CPU abstraction.
 In grid computing, virtualization gets more attention, but not in terms of service consolidation and
growing server utilization.
 Virtualization by itself is not a solution for enterprises to manage their resources, although it provides richer
capabilities for managing and moving operating systems across different hardware.
 Virtualization helps to run multiple workloads in a single machine with huge separation between those
workloads.
 Virtual machines can check the execution of applications and also they are a useful tool for grid system
administrators.
 Virtualization is a tool for system administrators that has many technical uses beyond the cloud.
 Virtualization allows IT organizations to perform multiple operations using a single physical
hardware.
 Cloud computing and virtualization modernizes IT organizations. By combining them, companies can run
their applications without the need of running updates and backups, as they all will be done by the
provider.
 Virtualization has enabled consumers to consolidate the servers and do more with fewer
infrastructures.
 Cloud computing is changing itself to meet the demands of customers in terms of software and
hardware.
 Amazon, Microsoft and Google are the players using cloud computing technology.
 Cloud computing environments separate the computing environment from the developers and let
them focus on improving their applications.
 Cloud services bundle the language runtime dynamically for efficient interpretation across many
application instances.
 Companies such as Aptana, CohesiveFT, RightScale are some examples of cloud hosting providers.
 Virtualization abstracts services and physical resources. It simplifies the job of managing the resources
and offers a great flexibility in resource usage.
 CPU virtualization is not multi-tasking or multi-threading.
 Network virtualization provides a way to run multiple networks, multiple consumers over a shared
substrate.
 A storage system is also called a storage array or disk array.

DATA STORAGE
Storage is a resource allocated to organizations to add more value. Data storage
management includes a set of tools to configure storage, back it up and assign it to users according to
defined policies.
INTRODUCTION TO ENTERPRISE DATA STORAGE
Understanding storage subsystems is an important step in building an effective storage system. The
various types of storage subsystems are:
 Direct Attached Storage (DAS)
 Storage Area Network (SAN)
 Network Attached Storage (NAS)
DAS is the most basic storage subsystem and is employed in building SAN and NAS, either directly or
indirectly. NAS is the topmost layer, having SAN and DAS as its base; SAN lies between DAS
and NAS.
DAS: Direct Attached Storage
DAS is the basic storage system providing block-level storage and used for building SAN
and NAS. The performance of SAN and NAS depends on DAS.
Performance of DAS will always be high, because it is directly connected to the system.
SAN: Storage Area Network
A SAN is used when multiple hosts need to connect to a single storage device. A SAN
provides block-level storage; simultaneous access to the same volume is not permitted, which makes it
suitable for clustering environments.
NAS: Network Attached Storage
 For file-level storage, NAS is used.
 SAN and DAS act as the base systems for NAS. NAS is also called a ‘file server’.
 The main advantage of NAS is that multiple hosts can share a single volume at the same time,
whereas with SAN or DAS only one client can access a volume at a time.
DATA STORAGE MANAGEMENT
Data storage is expensive; therefore, storage administrators are trying to use tiered storage.
Today IT organizations are implementing tiered storage as a mix of storage technologies that meet
the performance needs and are cost effective.
Data Storage Management Tools
 Maintaining storage devices is a tedious job for storage administrators. They adopt some utilities
to monitor and manage storage devices.
 Management level tasks are configuration, migration, provisioning, archiving and storage
monitoring/reporting.
 Storage Resource Management (SRM) tools include configuration tools, provisioning tools and
measurement tools.
 Configuration tools handle the set-up of storage resources. These tools help to organize
and manage RAID devices by assigning groups, defining levels or assigning spare drives.
 Provisioning tools define and control access to storage resources for preventing a network
user from being able to use any other user’s storage.
 Measurement tools analyse performance based on behavioural information about a storage
device. An administrator can use that information for future capacity and upgrade
planning.

Cloud File System


In cloud file systems, the considerations are:
 It must sustain basic file system functionality.
 It should be open source.
 It should be mature enough that users will at least think about trusting their data to it.
 It should be shared, i.e., available over a network.
 It should be scalable in parallel.
 It should provide honest data protection, even on commodity hardware with only internal
storage.
A cloud file system should be scalable enough to accommodate large organizations' file systems under
different workloads while meeting their performance requirements. Cloud file systems should have higher
throughput than local file systems and minimal operation latency.
Following are some of the cloud file systems (a brief usage sketch for the Hadoop File System follows the list).
Ghost File System
Gluster File System
Hadoop File System
XtreemFS: A Distributed and Replicated File System
Kosmos File System
CloudFS
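As a brief usage sketch, the Hadoop File System from the list above is typically driven through its command-line client; the snippet below assumes a reachable HDFS cluster and the hadoop binaries on the PATH, and simply copies a local file into the distributed file system.

    # Basic HDFS operations via the command-line client, driven from Python.
    import subprocess

    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", "/user/demo"], check=True)
    subprocess.run(["hdfs", "dfs", "-put", "report.csv", "/user/demo/"], check=True)
    subprocess.run(["hdfs", "dfs", "-ls", "/user/demo"], check=True)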

CLOUD DATA STORES


A data store is a data repository where data are stored as objects. The term covers data
repositories as well as flat files that can store data. Data stores can be of different types:
 Relational databases (Examples: MySQL, PostgreSQL, Microsoft SQL Server, Oracle
Database)
 Object-oriented databases
 Operational data stores
 Schema-less data stores, e.g. Apache Cassandra or Dynamo
 Paper files
 Data files (spreadsheets, flat files, etc.)
Types of Data Stores
Established IT organizations have started using advanced technologies for managing large-scale
data coming from social computing and data analysis applications.
BigTable
BigTable is a compressed, high-performance, proprietary data storage system built on
Google File System, Chubby Lock Service, SSTable and a small number of other Google
technologies.
Other similar software systems are as follows:
 Apache Accumulo: Built on top of Hadoop, ZooKeeper and Thrift, with a server-side
programming mechanism; deployed in a Java environment.
 Apache Cassandra: Dynamo's distributed design and BigTable's data model
come together in Apache Cassandra, which is written in Java.
 HBase: Modelled on BigTable and supports the Java programming language.
 Hypertable: Designed for clusters of servers, especially for storage and processing.
 KDI: Kosmix's attempt to make a BigTable clone; written in C++.
Dynamo: A Distributed Storage System
Dynamo is a highly available, proprietary, key-value structured storage system, or distributed
data store. It can act as a database and also as a distributed hash table (DHT). It is used by parts of
Amazon Web Services such as Amazon S3. A small key-value usage sketch follows.
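The key-value access pattern described above can be sketched with Amazon DynamoDB, the AWS service descended from the Dynamo design, via boto3. The table name is a placeholder and is assumed to already exist with 'username' as its primary key, with AWS credentials configured.

    # Dynamo-style key-value access: every read goes through the key.
    import boto3

    table = boto3.resource("dynamodb").Table("user-profiles")  # placeholder name

    table.put_item(Item={"username": "alice", "plan": "premium"})
    item = table.get_item(Key={"username": "alice"})["Item"]
    print(item["plan"])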

CLOUD STORAGE

WHAT IS CLOUD STORAGE?


Most organizations, in an effort to cut costs, are switching to cloud computing and
cloud storage solutions. Cloud computing is a model that wraps around current technologies, for
example server virtualization, to make use of resources more efficiently.
The best practices adopted to solve common storage problems are listed as follows:
 Unpredictable storage growth: IT organizations should constantly monitor storage
consumption to track whether the actual growth rates are in line with initial projections.
 Cost and complexity of conventional storage: Enterprises should consider Storage-as-a-Service
solutions for remote and branch offices where possible.
 Security: As employees move between offices all over the world and take their data with
them, the enterprise should ensure that in-house and customer data is always protected and
safe.
IT organizations with few staff members cannot depute staff to their remote offices. Such IT
organizations can end up with a series of problems, in the form of structures that operate differently and
inefficiently.
To avoid these problems, the following solutions can be applied:
 IT organizations should aim to centralize data storage and protection.
 IT organizations should eliminate the need for personnel on-site and establish a single
point-of-control.
 A clear service-level agreement is needed between the remote organization and the central
organization.

OVERVIEW OF CLOUD STORAGE


Cloud storage is a subset of cloud computing. Standards and services pertaining to cloud
storage have to be understood before its implementation.
Resources that are exposed to clients are called functional interfaces, that is, data paths.
Resources maintained by the service providers are called management interfaces, that is, control
paths.
Cloud storage came under the limelight because of the following attributes available in cloud
computing: pay-as-you-use, elasticity and simplicity (management).
It is important that any provider providing storage as a service should also provide these
attributes to the consumer.
Following are some additional cloud storage attributes:
 Resource pooling and multi-tenancy: Multiple consumers can use shared single storage
device. Storage resources are pooled and consumers can be assigned and unassigned
resources according to their needs.
 Scalable and elastic: Virtualized storage can be easily expanded on need basis.
 Accessible through standard protocols, including HTTP, FTP, XML, SOAP and REST.
 Service-based: Consumers need not invest upfront, that is, no capital expenditure (CAPEX);
they pay only for usage, that is, operational expenditure (OPEX).
 Pricing based on usage
 Shared and collaborative
 On-demand self-service

DATA MANAGEMENT FOR CLOUD STORAGE


To support enterprise applications, quality of service has to be increased and extra services
deployed. Cloud storage will lose its abstraction, and benefits such as simplicity, heterogeneity and
good performance, if complex management services are added. Cloud storage should incorporate
new services as times change.
For cloud storage, a standard is laid out by the SNIA Storage Industry Resource
Domain Model (SIRDM). It states the importance of simplicity for cloud storage. Figure 12.1 shows
the SIRDM model, which uses the CDMI standards. The SIRDM model adopts three types of metadata:
storage system metadata, data metadata and user metadata.

Cloud Storage Usage of SIRDM Model

Storage system metadata is used by the cloud to offer basic storage functions such as assignment,
modification and access control.

Cloud Data Management Interface (CDMI)


To create, retrieve, update and delete objects in a cloud, the Cloud Data Management Interface
(CDMI) is used. The functions in CDMI are:
 Discovery of cloud storage offerings by clients
 Management of containers and the data within them
 Synchronization of metadata with containers and objects
CDMI is also used to manage containers, domains, security access and billing information. The CDMI
standard is also used as a protocol for accessing storage.
CDMI defines how to manage data as well as ways of storing and retrieving it. ‘Data path’ means how
data is stored and retrieved. ‘Control path’ means how data is managed. The CDMI standard supports
both data path and control path interfaces. A rough request sketch follows.
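The sketch below exercises the data path, assuming a CDMI-compliant server at a hypothetical address and the Python requests library; the header names and JSON envelope follow the SNIA CDMI conventions, but treat this as a sketch rather than a verified client.

    # Create and read a data object through a CDMI endpoint (sketch).
    import requests

    headers = {
        "X-CDMI-Specification-Version": "1.0.2",
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
    }
    url = "https://cloud.example.com/mycontainer/hello.txt"  # hypothetical server

    # Data path: create (or update) the object.
    requests.put(url, json={"mimetype": "text/plain", "value": "hello cloud"},
                 headers=headers).raise_for_status()

    # Data path: retrieve it; the value comes back in the CDMI JSON envelope.
    obj = requests.get(url, headers=headers).json()
    print(obj["value"])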

Cloud Storage Requirements


Multi-tenancy
In a multi-tenancy model, the resources provided are pooled so that they may be shared by multiple
customers based on their needs. Due to the elasticity property of cloud computing, the shared pool of
storage makes the model cost-effective for the provider and makes billing easy.
Security
Secure cloud storage requires a secure transmission channel and secure methods. Securing data can be
done using encryption, authentication and authorization (a small encryption sketch follows this list).
 Encryption is the process of scrambling data in such a manner as to make it unreadable
without special information, called a key, that makes it readable again.
 Authentication is the process of determining a user's identity. Authentication can employ
passwords, biometrics, identifying tokens and other means.
 Authorization determines access rights on the data and the levels of authorization. To
provide secure cloud storage, access must be restricted for the communication channel, the
data source and the cloud storage sites.
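As a small sketch of the encryption step, the snippet below encrypts data before it is handed to a cloud store, using the symmetric Fernet scheme from the Python cryptography package, which is one reasonable choice among many. Key management is out of scope: the key must live somewhere safer than the cloud that holds the data.

    # Encrypt before upload, decrypt after download (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this secret, outside the cloud
    f = Fernet(key)

    ciphertext = f.encrypt(b"customer record 42")   # what the provider stores
    plaintext = f.decrypt(ciphertext)               # only the key holder can do this
    assert plaintext == b"customer record 42"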
Secure Transmission Channel
The four primary methods used to secure network communications are as follows:
1. Transport Layer Security (TLS) and Secure Sockets Layer (SSL)
2. Hypertext Transfer Protocol Secure (HTTPS)
3. Private Networks
4. Virtual Private Networks (VPNs)
Performance
Cloud storage performance can be categorized in two ways: speed and latency. Factors that affect
cloud storage performance are the available network bandwidth, the types of systems at the provider's
end and the methods adopted for compression and caching.
Quality of Service (QoS)
Quality of service (QoS) refers to the levels of performance and efficiency that a system can
provide.
Data Protection and Availability
To ensure that data is protected from loss and theft, providers must take some precautionary
measures:
 Physical site security
 Protection against power loss
 Protection against loss of network access
 Data redundancy
 Server redundancy and server fail-over
 Redundant data sites
 Levels of redundancy
 Versioning and data retention
 Accessibility of cloud storage as live data
 Backup to tape or other media
 Data availability during contract disputes
Metering and Billing
Metering and billing in cloud storage are based on the data uploaded, data downloaded and
data stored, and on the number and types of requests, as the toy calculation below illustrates.
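A toy calculation with hypothetical per-unit rates shows how these inputs combine into a monthly charge:

    # Usage-based metering sketch; all rates are hypothetical.
    RATES = {
        "gb_stored": 0.023,      # per GB-month
        "gb_downloaded": 0.09,   # per GB
        "request": 0.0000004,    # per request
    }

    def monthly_bill(gb_stored, gb_downloaded, requests):
        return (gb_stored * RATES["gb_stored"]
                + gb_downloaded * RATES["gb_downloaded"]
                + requests * RATES["request"])

    # 500 GB stored, 120 GB downloaded, 2 million requests -> $23.10
    print(f"${monthly_bill(500, 120, 2_000_000):.2f}")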

CLOUD STORAGE FROM LANS TO WANS


Data management applications are promising candidates for deployment in the cloud.
For many businesses, the pay-as-you-go form of cloud computing is very attractive.
Thus, cloud computing builds on the Application Service Provider (ASP) and Database-
as-a-Service (DaaS) paradigms.
CLOUD CHARACTERISTIC
There are three characteristics of a cloud computing environment that are most pertinent
to consider before choosing storage in the cloud.
1. Compute power is elastic when it can perform parallel operations. In general,
applications designed to run on top of a shared-nothing architecture are well
matched to such an environment.
2. Data is retained at an unknown host server. The data is physically stored
in a specific host country and is subject to local laws and regulations.
3. Data is duplicated, often over distant locations. Data accessibility and durability are
paramount for cloud storage providers, as data loss or tampering can be damaging to both the
business and the organization's reputation. Data accessibility and durability are normally
accomplished through replication that is hidden from the user.
DISTRIBUTED DATA STORAGE
Distributed storage systems are evolving from existing data storage practices to serve the new
generation of WWW applications, through organizations like Google, Amazon and Yahoo.
Emerging solutions are Amazon Dynamo, CouchDB and ThruDB.
Amazon Dynamo
Amazon Dynamo is a widely used key-value store. It is one of the main components of
Amazon.com, one of the biggest e-commerce stores in the world. It has a primary-key-only interface:
data is retained as key-value pairs, and the only way to access the data is by
specifying the key.
CouchDB
CouchDB is a document-oriented database server, accessible through REST APIs. Couch is an
acronym for ‘Cluster Of Unreliable Commodity Hardware’, emphasizing the distributed nature
of the database. CouchDB is designed for document-oriented applications, for example forums, bug
tracking, wikis, notes, etc. A brief REST sketch follows.
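Because every database and document is a plain HTTP resource, CouchDB can be driven with nothing more than an HTTP client. The sketch below assumes a CouchDB server on localhost:5984 that accepts unauthenticated writes, which is true only of a permissive local setup.

    # Document-oriented storage over REST (sketch).
    import requests

    base = "http://localhost:5984"

    requests.put(f"{base}/forum")                        # create a database
    requests.put(f"{base}/forum/post-1",                 # create a document
                 json={"author": "alice", "text": "Hello, CouchDB"})

    doc = requests.get(f"{base}/forum/post-1").json()
    print(doc["text"])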
ThruDB
ThruDB aspires to simplify the administration of the modern WWW
data layer (indexing, caching, replication, backup) by supplying a consistent set of services:
 Thrucene for indexing
 Throxy for partitioning and load balancing
 Thrudoc for document storage
ThruDB builds on top of several open source projects: Thrift, Lucene (indexing), Spread (message
bus), Memcached (caching) and Brackup (backup to disk/S3); it also uses Amazon S3.
There are some more systems out in the wild, as well as emerging systems. Prominent among
them are:
 Amazon Simple Storage Service (S3) is a straightforward data storage scheme with a hash-table-like
API (see the sketch after this list). It is a hosted service whose internal architecture details are not
published. Its stated design goals are to be scalable, reliable, fast, inexpensive and
simple.
 Amazon SimpleDB is a hosted web service for running queries on structured data in real
time. It has the prime functionality of a database: real-time lookup and simple
querying of structured data.
 MemcacheDB is a distributed key-value storage system designed for persistence. It
conforms to the memcache protocol. MemcacheDB uses Berkeley DB as its storage
backend, so lots of features, including transactions and replication, are
supported.
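The hash-table-like API mentioned for Amazon S3 above can be sketched with boto3; the bucket name is a placeholder and AWS credentials are assumed to be configured.

    # S3 as a hash table: store and fetch an object by bucket + key.
    import boto3

    s3 = boto3.client("s3")

    s3.put_object(Bucket="my-demo-bucket", Key="notes/today.txt",
                  Body=b"stored in the cloud")
    body = s3.get_object(Bucket="my-demo-bucket",
                         Key="notes/today.txt")["Body"].read()
    print(body)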
Distributed storage has numerous concerns: scalability, hardware requirements, query model,
failure management, data consistency, durability, reliability, efficiency, etc.
Future directions in data storage can likewise be seen in, for example, CouchDB's integration with
Abdera, an Atom store.

CLOUD COMPUTING AT WORK

CLOUD SERVICE DEVELOPMENT TOOLS
Servers and software programs are very expensive. Buying assets for developing software
that will be used for only a few months is impractical and a total waste of resources.
There are abundant choices to help developers get a jump start in the cloud at no cost at
all, with no need to buy or install any programs or hardware.
With the myriad of free development tools and services in the cloud, one can start developing
much more quickly and reduce time-to-deployment, with no upfront payment or hardware-management hassles.
Some freely available cloud resources are discussed as follows.
Application Development Using IDE
Integrated Development Environments (IDEs) comprise a source code editor, build automation tools and
a debugger.
Cloud IDEs, for example koding.com, Cloud9 or eXo, are all free of subscription; being in
the cloud, they allow developers to conveniently edit and code anytime, anywhere.
In addition to cloud IDEs, there are also Platform-as-a-Service (PaaS) solutions such as Heroku
or Engine Yard for Ruby, CloudBees for Java developers, or AppFog for PHP.
Each PaaS encourages us to use its services, which are free, both to code in the cloud and
to effortlessly deploy the application when it is ready.

SUMMARY OF UNIT III

 Storage is a resource to be allocated to the organizations to provide more benefits.


 Understanding storage system is an important point in building effective storage system.
 The various types of storage subsystems are (i) Direct Attached Storage (DAS), (ii) Storage Area
Network (SAN) and (iii) Network Attached Storage (NAS).
 Data storage is expensive, therefore, storage administrators are trying to use tiered storage.
 Managing traditional storage devices is a complicated task because of high operational cost, performance
and scalability issues.
 A file system is a structure used in a computer to store data on a hard disk. Three file systems are in
use in Windows: NTFS, FAT32 and the rarely used FAT.
 A cloud file system should be scalable enough to accommodate large organizations' file systems under
different workloads while meeting their performance requirements.
 Cloud file systems should have higher throughput than local file systems and minimal
operation latency.
 Examples of cloud file systems are: (i) Ghost File Systems, (ii) Gluster File System, (iii) Hadoop File
System, (iv) XtreemFS: A distributed and replicated file system, (v) Kosmos File System and (vi)
CloudFS.
 A data store is a data repository where data are stored as objects. Data store includes data repositories, flat
files that can store data.
 A distributed data store is like a distributed database where users store information on multiple nodes.
 Grid computing established its stand as an understood architecture, as it provides users and applications to
use shared pool of resources.
 Storage for grid computing requires a common file system to present as a single storage space to all
workloads.
 GOS is a successor of Network-Attached Storage (NAS) products in the grid computing era. GOS
accelerates all kinds of applications in terms of performance and transparency.
 Cloud computing is a model which wraps around current technologies, for example, server virtualization,
to use resources optimally.
 The benefits of cloud storage are scalability and elasticity, along with management.
 Cloud storage is nothing but virtualized storage on demand called as Data storage as a Service (DaaS) and
also a subset of cloud computing.
 Cloud storage attributes are (i) resource pooling and multi-tenancy, (ii) scalable and elastic, (iii)
accessible standard protocols, (iv) service-based, (v) pricing based on usage, (vi) shared and collaborative
and (vii) on-demand self-service.
 CDMI defines how to manage data as well as means to store and retrieve it. ‘Data path’ means how
data is stored and retrieved. ‘Control path’ means how data is managed. The CDMI standard supports both data
path and control path interfaces.
 CDMI is capable of the following: (i) discovery of cloud storage offerings by clients, (ii) management
of containers and the data within them and (iii) synchronization of metadata with containers and objects.
 Cloud storage requirements are (i) multi-tenancy, (ii) security, (iii) secure transmission channel, (iv)
performance, Quality of Service (QoS), (v) data protection and availability and (vi) metering and billing
 Characteristics of the cloud are as follows: (i) compute power is elastic, (ii) data is retained at a remote
host and (iii) data is duplicated.
 Distributed storage systems are evolving into the de facto method of data storage for WWW
applications, through organizations like Google, Amazon and Yahoo.
 Amazon Dynamo is a widely used key-value store. It is one of the main components of Amazon.com, the
biggest e-commerce store in the world. It has a primary key-only interface.
 CouchDB is a document-oriented database server, accessible through REST APIs. Couch is an acronym for
‘Cluster Of Unreliable Commodity Hardware’, emphasizing the distributed nature of the database.
 ThruDB aspires to be an entire package that simplifies the administration of the modern WWW data layer
(indexing, caching, replication, backup) by supplying a consistent set of services: Thrucene for indexing,
Throxy for partitioning and load balancing, and Thrudoc for document storage.
 Distributed storage has numerous concerns: scalability, hardware requirements, query model, failure
management, data consistency, durability, reliability, efficiency, etc.
 Applications utilizing cloud storage are (i) DropBox, (ii) Box.net, (iii) Live Mesh, (iv) Oosah and (v)
JungleDisk.
 Examples of cloud storage companies are (i) Box Cloud Storage, (ii) Amazon Cloud, (iii) SugarSync
Online Backup, (iv) Hubic online Storage, (v) Google Cloud Drive.
 Social bookmarking is a method in which clients bookmark and manage the pages they wish to
recall or share with their friends.
 Cloud computing is a term that describes a broad range of services and software applications that share
one common facet: they all run on the Internet and not on a user's PC.
 Example Online Photo Editors are (i) Photoshop Express Editor, (ii) Picnik, (iii) Splashup, (iv)
FotoFlexer and (v) Pixer.us.
 Virtualization is a technology that has been around for decades but has gained popularity only recently
because of its advantages.
 With cloud computing, there is no point-to-point contact between the client and the computing
infrastructure.
 Advantages of cloud computing are (i) resilience, (ii) scalability, (iii) flexibility and effectiveness and (iv)
outsourcing.
 Infrastructure as a Service (IaaS): hardware-related services are supplied and utilized here. These
encompass disk storage and virtual servers.
 Examples of IaaS vendors are Amazon EC2, Amazon S3, Rackspace Cloud Servers.
 Platform as a Service (PaaS) is the delivery of a computing platform and solution stack as a service.
PaaS helps developers deploy their applications with minimal cost and complexity.
 Examples of PaaS vendors are Google App Engine, Microsoft Azure and Salesforce.com.
 Software as a Service (SaaS) is the most widespread form of cloud computing in force; for
example, Salesforce.com's CRM, Google Apps, Gmail and Hotmail.
 SaaS is an application hosted on a remote server and accessed through the Internet.
 The two main categories of SaaS are line-of-business services and customer-oriented services.
 Platform as a Service (PaaS) is a computing platform which promises deployment of applications without
the complexity of buying and managing inherent hardware and software.
 IaaS developed from virtual private server offerings. IaaS fundamentally constitutes computer
infrastructure, normally a virtualized environment, delivered as a service.
 Storage as a Service is a model in which large business houses lease their available storage space to
small businesses or individuals.
 Purchasing, establishing and organizing database systems is getting costlier and more complicated. DBaaS
delivers the database software and its components as a service.
 INaaS permits users to (i) distribute data over the enterprise as a distributed service, (ii) create a single
source of truth, (iii) reduce operational hurdles and (iv) simplify and streamline data.
 INaaS reduces cost and time and promotes components that convey back-end data.
 IaaS is a boon to business enterprises and the IT sector because of its agility and effectiveness.
 Servers and software programs are very expensive.
 Buying assets is impractical for developers who may use it only for a couple of months to access the
software.
 Integrated Development Environments (IDEs) comprise a source code editor, build automation tools and a
debugger.
 Cloud IDEs, for example, koding.com, Cloud9 or eXo are all free. As they are in cloud it permits
developers to conveniently edit and code anytime, anywhere.
 CloudBees boasts a PaaS solution called RUN@cloud, which presents all the services required to deploy in
the cloud, free of charge.
 There are two routes that a vendor can take to develop a platform for cloud computing: cloud-first or tool-
first.
 The cloud-first development method has distinct disadvantages. First, it places the vendor in the data-centre
operations business while the cloud environment is being developed.
 A tool-first method is much more straightforward: start by creating a hosted development studio and
manage building and testing on the benchmark hardware.
 Event management is a process-intensive effort. There is so much involved in organizing even the smallest
event that the power of cloud computing can be taken advantage of.
 Cloud computing also takes event management from the agency to the event site.
 Some noteworthy applications of collaborating event management are (i) Event Planning and
Workflow
Management, (ii) Advance Registration, (iii) Payment Processing, (iv) Contact Management, (v) Budget
Management, (vi) Post-Event Reporting and Analysis, (vii) 123 Signup, (viii) Acteva,
(ix) Conference.com, (x) Event Wax, (xi) Tendenci.
 Contact management is the act of saving data about friends, family and business associates for retrieval at a
future date.
 CRM software not only shares customer contact data; it also shares and analyses all data pertaining
to a specific customer and then uses that data to help you plan how best to communicate with that
customer.
 Projects act as catalysts that play a crucial role in constructing ‘a better tomorrow.’
 Mindjet Connect is a free collaboration platform that enables work groups to visually capture concepts and
data, share documents and collaborate with others in real time or anytime.

UNIT – III
Cloud Computing and Security: Risks in Cloud Computing – Data Security in Cloud – Cloud
Security Services – Cloud Computing Tools: Tools and Technologies for Cloud – Cloud Mashups –
Apache Hadoop – Cloud Tools
RISKS IN CLOUD COMPUTING

Cloud computing has been recognized as the most widely used computing paradigm for the last few
years.
The most significant risks presented by cloud computing are: SLA violations, the difficulty of
adequately assessing the risks of a cloud provider, the responsibility to protect sensitive data,
virtualization-related risks, loss of direct control of assets and software, compliance risks and
decreased reliability, since service providers may go out of business.
The levels, from bottom to top, are: infrastructure, storage, platform, application, services and
client.
Infrastructure: At the base is the infrastructure of the service, or platform virtualization. Users get
the server environment they want. This is the most rudimentary offering; clients still need to handle the
server and all software programs installed on it, and maintain them on their own.
Storage: With the storage level, one can get a database or something similar and pay per gigabyte per
month. A storage level is nothing new or exceptional, except as part of a full stack of services. There are
several possibilities for storage. Examples are relational databases, Google's BigTable and Amazon's
SimpleDB.
Platform: The platform level offers solution stacks, for example Ruby on Rails, LAMP or Python
Django. A start-up organization need not deal with setting up server programs or upgrading their
versions, because that comes with the service. It can focus on developing and selling its
application.
Application: The application level comprises applications that are offered as services. The most
well-known examples are Salesforce.com and Google Docs, but there are hundreds and
thousands of genuine applications that can be bought as services.
Services: The services level comprises interoperable machine-to-machine operations over the
network. The most common example of this level is web services. Other examples encompass
payment schemes, for example PayPal, and mapping services such as Google Maps and Yahoo
Maps.
Client: At the peak of the stack is the client level, which comprises the users of the cloud
systems. Clients are, for example, desktop users and mobile users (Symbian, Android, iPhone). There
are opportunities for vendors to introduce and adapt new services, and for clients to find new services and
applications to solve their problems.
However, there are some risks that clients need to understand. There are some points to address
before taking up cloud-based services.
Make sure that there is a straightforward means to get your data out of the service.
If something goes wrong with the service provider, for example if servers break down, the
client cannot manage anything. For issues like this, it is better to select a service provider that
operates multiple similar sites.
Although cloud computing can offer small enterprises important cost-saving advantages, namely
pay-as-you-go access to sophisticated programs and powerful hardware, the service does come with
certain security risks. While evaluating prospective cloud-based service providers, one should
keep these top five security concerns in mind.
1. Secure data transfer
2. Secure programs interfaces
3. Secure retained data
4. User access to control
5. Data separation
Cloud Computing Risks
 Risk #1—The solution may not meet its economic objectives: Work out the short-run and long-run
ROI. The key components to address when considering cloud ROI risk
are utilization, speed, scale and quality.
 Risk #2—The solution may not work in the context of the client enterprise's organization and
culture: Mitigation should encompass the establishment of a clear roadmap for the procurement or
implementation of cloud services and of the applications that use them, and coordination of
stakeholders and competing projects to get agreement on storage, computing, network and
applications, so as to avoid islands of demand and usage.
 Risk #3—The solution may be tough to develop due to the difficulty of integrating the cloud
services involved: The service integration risk can be assessed by considering interface
alteration cost, the ability to change the existing system and the available skills.
 Risk #4—A catastrophe may occur from which the solution will not recover: As part of a risk
analysis, one should identify the unplanned events that could cause damage and assess
their probabilities and impacts. One may also wish to make general provision for unforeseen
events that disturb the cloud services in use or impair the data.
 Risk #5—System quality may be insufficient, such that it does not meet the users' needs: The
quality of an external service can be assessed using the same components as for the
quality of the solution. In addition, look at the track records of suppliers very carefully.
 Risk #6—There may be an existing need for service orientation: Not having a full-fledged SOA
is not necessarily problematic in itself when opting for the cloud. But the inability to migrate
processes from existing interfaces and underlying applications to more agile cloud services could
actually mess things up, finally making the cloud more costly than leaving things as they are.
RISK MANAGEMENT
Risk management is a significant part of business planning. The process of risk management
is intended to reduce or eliminate the risk of certain kinds of events happening or having an impact on the
business.
Risk management is a method for identifying, assessing and prioritizing risks of distinct kinds.
Once the risks are identified, the risk manager will create a plan to minimize or eliminate the
impact of adverse events.
There are several risk management standards, including those developed by the Project
Management Institute, the International Organization for Standardization (ISO), the National
Institute of Standards and Technology and professional societies.
There are numerous distinct kinds of risk that risk management plans can mitigate.
Risk Management in Cloud Computing
Google, Microsoft, IBM and other cloud providers, renowned or otherwise, offer an
array of major cost-saving alternatives to the customary data centre and IT department.
45% of IT professionals believe the risks far outweigh the advantages, and only 10% of those
surveyed said they would consider moving mission-critical applications to the cloud.
Cloud computing is somewhat new in its present form; given that, it is best directed at low- to
intermediate-risk business areas.
Do not hesitate to ask questions and, if need be, enlist an independent consulting firm to guide you through
the process.

CLOUD IMPACT
Cloud’s Impact on IT Operations
Cloud computing has provided opportunities for organizations of all types to reduce the risks
associated with IT acquisition (software and hardware), in line with enterprise needs and total costs.
Some have even developed their internal IT department from a reactive data centre into a more
proactive service delivery centre.
As cloud computing starts to mature and hybrid clouds start to prove their business worth,
organizations will focus more on adopting both public and private cloud environments and having
them work seamlessly together.

ENTERPRISE WIDE RISK MANAGEMENT


Risk can be characterized as ‘the possibility of loss or injury, a dangerous element or
factor, or an exposure to hazard or danger’.
It is very hard to think of any enterprise function, process or undertaking that would not
benefit from systematically considering the risks that can adversely affect an
enterprise's competitiveness and profitability.
Effectively managing or controlling the sources of risk can translate into market leadership:
robust growth, premium pricing and investor confidence.
What is Risk Management?
Risk Management is the practice of averting as many losses as possible and devising
financing procedures for the rest.
Risk management takes a systematic approach to the pure risks faced by users and
businesses.
A risk manager is generally a highly trained person who makes risk management his/her
full-time job, or the responsibilities may be distributed within a risk management department.
Risk management is not just buying insurance for a company. It also considers both insurable and
uninsurable risks and the alternative methods of handling them.
The focus of risk management is not getting the most insurance for the money spent, but
reducing the cost of handling risk by the most suitable means.
The Risk Management Process
Figure 18.1 shows the steps in risk management. The process comprises six steps which
either a professional or a non-professional risk manager can map to an organization's business
decisions and goals.
 Step 1: Determination of the objectives of the risk management program: deciding
precisely what the organization expects its risk management program to do. One prime
objective of the risk management effort is to preserve the operating effectiveness of the
organization. The second objective is the humanitarian aim of protecting employees from
accidents that might result in death or serious injury.

Figure 18.1 Six-step Risk Management Process


 Step 2: The identification of the risks requires someone to be aware of them. The
following tools and techniques supply that awareness:
 Risk analysis questionnaires
 Exposure checklists
 Insurance policy checklists
 Flowcharts
 Analysis of financial statements
 Other internal records
 Inspections
 Interviews
 Step 3: Once the risks are identified, the risk manager must evaluate them. Evaluation
entails estimating the potential size of the loss and the probability that it is
likely to occur. The evaluation requires grading of priorities as
critical, significant or insignificant risks (a small scoring sketch follows the last step below).
 Step 4: Consideration of alternatives and selection of the risk treatment technique: this step
examines the diverse approaches used to deal with risks and selects the method that should be
applied to each one.
 Step 5: Implementation of the decision. Risk financing approaches include risk retention and
risk transfer (shifting). In deciding which treatment to apply to a
given risk, the risk manager weighs the potential size of the loss, its
probability and the resources that would be available to meet the loss should it occur.
The last step, evaluation and review, is absolutely crucial to the program for two reasons.
Within the risk management process the business environment changes, new risks arise and
old ones disappear. Moreover, techniques appropriate last year may have become obsolete this year, so
constant attention to risk is required.
Enterprise Risk Management (ERM) encompasses the procedures and methods
used by organizations to manage risks and seize opportunities related to the accomplishment of
their objectives.
TYPES OF RISKS IN CLOUD COMPUTING
Threat #1—Misuse and illicit use of cloud computing: Criminals may take
advantage of the simple registration procedures and the relatively anonymous access to
cloud services to launch diverse attacks. Examples of such attacks include: password and key
cracking, DDoS, malicious data hosting, building dynamic attack points, botnet
command and control and CAPTCHA-solving farms. Targets are IaaS and PaaS.

Threat #2—Insecure interfaces and APIs: Customers manage and interact with cloud
services through interfaces or APIs. Providers must ensure that security is integrated into
their service models, while users should be aware of the security risks in the use, implementation,
administration and monitoring of such services.
Threat #3—Malicious insiders: Malicious insiders represent a greater risk in a cloud computing
environment, since clients do not have a clear view of the provider's policies and procedures.
Malicious insiders can gain unauthorized access to organizations and their assets.
Threat #4—Issues related to technology sharing: IaaS is based on shared infrastructure,
which is often not designed to accommodate a multi-tenant architecture. Overlooked flaws have
allowed tenants to gain unauthorized rights to, or leverage over, the platform.
Threat #5—Data loss or leakage: Compromised data may involve (i) data deleted or changed
without a backup, (ii) unlinking a record from its larger context, (iii) loss of an encryption key and (iv)
unauthorized access to sensitive data. The likelihood of data compromise rises considerably in
cloud computing, owing to its architecture and operations. Factors contributing to data loss/leakage
include: (i) insufficient authentication, authorization and audit (AAA) controls, (ii) inconsistent use of
encryption, (iii) inconsistent key management, (iv) operational failures, (v) disposal challenges, (vi) risk
of association, (vii) jurisdictional and political issues, (viii) persistence and remanence and (ix) data
centre reliability and disaster recovery.
Threat #6—Hijacking (account/service): Account or service hijacking is generally carried
out with stolen credentials. Such attacks include phishing, fraud and exploitation of
software vulnerabilities.
Threat #7—Unknown risk profile: Cloud services mean that organizations are less involved
with hardware and software ownership and maintenance, which can leave them with an unclear
picture of the risk exposure they have taken on.

Internal Security Risk


Cloud computing provides flexibility by outsourcing services, but it also adds the inherent
risks of malicious insiders and abuse of login access by unauthorized persons.

External Security Risk


Cloud computing technologies can be used as a platform for launching attacks, hosting
spam/malware, publishing software exploits and numerous other unethical purposes. Cloud
computing service platforms, particularly PaaS with its broad service portfolio and self-service
freedom, permit any individual to propagate malicious intent.

Data Protection Risk


Public cloud infrastructure components are normally not designed for compartmentalization
and are prone to vulnerabilities that can be exploited. There may be scenarios where an attacker
tries to gain unauthorized access to, or excessively use, the resources, which can affect the performance of
other clients residing on the same hardware.
In cloud computing, it is very hard to gather forensic evidence in the case of a breach, because the data might
be dispersed over numerous distinct hosts and data centres, and probably resides in a multi-tenant
environment.

Data Loss
Cloud computing architecture presents greater challenges in controlling and mitigating risks because
of its distinctive structure and operational attributes. Data in the cloud is prone to numerous risks, for
example, deletion of records, loss of an encryption key, weak encryption and corruption of data.
The geographical location of data is a major open question in the cloud
computing environment.
The data can be stored on numerous servers, in distinct locations, probably in different cities, even
in different countries or continents.

DATA SECURITY IN CLOUD

 Data security risks are aggravated by the open environment of cloud computing.
 Accessibility of data is the basic concern in cloud-based systems.
 If a system offers improved accessibility by opening the platform up to multi-node access,
then the client should take into account the risks associated with this advancement.
 One way to do this is by adding an element of control, in the form of access control, to
provide risk mitigation for the platform.
 Information-centric access can help balance improved accessibility with risk, by associating
access rules with the distinct data residing inside an open and accessible platform, without
losing the inherent usability of that platform.

Security Issues and Challenges


IaaS, PaaS and SaaS are the three general forms of cloud computing. Each of these forms has a
different influence on application security. In the normal situation where an application is
deployed in a cloud, two broad security questions arise; they are:
 How is the data protected?
 How is the code protected?
A cloud computing environment is usually presumed to be economical as well as a provider of higher
service quality. Security, availability and reliability are the foremost concerns of cloud service users.
Security Advantages in Cloud Environments
Established cloud service providers operate very large systems. They have sophisticated processes
and professional staff for maintaining their systems, resources which small enterprises may not be
able to afford.
As a result, there are numerous direct and indirect security benefits for cloud users.
Some of the key security benefits of a cloud computing environment are as follows:
 Data centralization: In a cloud environment, the service provider takes responsibility for
storage, so small organizations need not spend money on dedicated storage devices.
Cloud-based storage also provides a way to centralize the data much faster and
probably at lower cost.
 Incident response: IaaS providers can offer a dedicated forensic server which can be used on
demand. Whenever there is a violation of the security policy, the server can be brought online
over the network.
 When there is an investigation, a backup of the environment can be made effortlessly and
brought up in the cloud without affecting the usual course of business.
 Forensic image verification time: Some cloud storage implementations expose a
cryptographic checksum or hash. For example, an MD5 hash is generated
automatically by Amazon S3 during object storage. Therefore, in principle, the time needed to
generate MD5 checksums using external tools is eliminated (a verification sketch follows this list).
 Logging: In the usual computing paradigm, logging is an afterthought. In
general, insufficient disk space is allocated, which makes logging either non-existent
or minimal. In a cloud, however, the storage requirement for standard logs is met
automatically.
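
To make the forensic-image point concrete: for objects uploaded in a single part, S3 reports the MD5 of the stored object in its ETag, so a client can recompute the digest locally and compare. The sketch below shows only the local half of that comparison, using the standard java.security.MessageDigest API; the file name is a hypothetical placeholder.

    import java.io.InputStream;
    import java.nio.file.*;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    public class Md5Check {
        public static void main(String[] args) throws Exception {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            // Stream the file through the digest so large images need not fit in memory.
            try (InputStream in = Files.newInputStream(Path.of("disk-image.img"))) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) md5.update(buf, 0, n);
            }
            String local = HexFormat.of().formatHex(md5.digest());
            System.out.println("local MD5 = " + local);
            // Compare 'local' against the checksum reported by the storage service.
        }
    }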
Security Disadvantages in Cloud Environments
In spite of these security advantages, cloud computing introduces some key security issues. Some of these
key security challenges are summarized as follows:
 Investigation: Investigating an illegal activity may be impracticable in cloud
environments. Cloud services are particularly hard to investigate, because data for multiple
clients may be co-located and may also be dispersed over multiple data centres.
 Data segregation: Data in the cloud is normally stored alongside data
from other customers. Encryption cannot be presumed to be the single solution for data
segregation issues. Some clients may not want to encrypt their data, because there may be
cases where a mishandled encryption step can destroy the data.
 Long-term viability: Service providers should ensure data security in changing
business situations, such as mergers and acquisitions. Customers should ensure data
accessibility in these situations. The service provider should furthermore guarantee data security
in adverse situations such as an extended outage.
 Compromised servers: In a cloud computing environment, users do not even have the
alternative of using a personal forensic acquisition toolkit. In a situation where a server is
compromised, they need to shut their servers down until they get a backup of the data.
This further creates resource accessibility concerns.
 Regulatory compliance: Traditional service providers are subjected to external audits
and security certifications. If a cloud service provider does not submit to these security
audits, the result is a conspicuous decline in customer trust.
 Recovery: Cloud service providers should ensure data security in natural and
man-made disasters. Generally, data is replicated across multiple sites. In the event
of such a disaster, the provider should perform a complete and fast restoration.

CONTENT LEVEL SECURITY (CLS)


Content-level application of data security allows you to ensure that all four levels
can be addressed by a single architecture, rather than by multiple modes of operation, which can
cause interoperability problems and can add extra scope for human error, leading to reduced
security.
CLS was developed to meet market demand, propelled by the needs of customer
institutions.
Content-level security enables organizations to manage data and content at the organizational
level, rather than at the institutional level.
CLS provides the ability to view, edit and delete data based on user roles and
permissions, for both application-level security and content-level security.
The new functionality presents users with content that is relevant to them, reducing the need
for applications to run on multiple servers and permitting applications to serve different
organizations inside the institution.
The CLS solution can be rolled out over an unlimited number of distinct partitions and agencies,
with each organization maintaining a consolidated view over all of its relevant functions.

CLOUD SECURITY SERVICES


Cloud computing has become a foremost development in IT. Enterprises should adapt to the
changes it brings in order to maximize the return on investment.
To assist organizations worldwide in getting the most from the cloud, the Information Systems Audit
and Control Association (ISACA) issued a new guideline explaining how to apply effective
controls and governance to cloud computing.
According to ISACA, when an enterprise decides to use cloud computing for IT services, its
operational methods are affected and governance becomes critical to:
 Effectively manage risk
 Ensure the continuity of critical organizational processes that extend beyond the data centre
 Communicate enterprise objectives clearly, both internally and to third parties
 Adapt competently
 Facilitate the continuity of IT knowledge, which is absolutely crucial to sustain and grow
the enterprise
 Handle the myriad regulations
The guideline states that enterprises should examine the following key questions to get cloud
governance right:
 What is the enterprise's expected availability?
 How are identity and access managed in the cloud?
 Where will the enterprise's data be located?
 What are the cloud service provider's disaster recovery capabilities?
 How is the security of the enterprise's data managed?
 How is the entire system protected from Internet threats?
 How are activities monitored and audited?
 What kind of certification or assurances can the enterprise expect from the provider?

CONFIDENTIALITY, INTEGRITY AND AVAILABILITY


We may have heard data security experts mention 'CIA'. CIA is a broadly
used standard for the evaluation of information systems security, focusing on the three core goals of
confidentiality, integrity and availability of information.
Data Confidentiality
Confidentiality refers to limiting data access only to authorized users, and preventing access by
unauthorized ones. Confidentiality is related to the broader notion of data privacy: limiting access
to individuals' personal information.
Confidentiality ensures that data is accessible only to authorized persons, regardless of
where the data is stored or how it is accessed. The following are some confidentiality topics that
help ensure an acceptable level of information is imparted to employees of the organization.
 Access control: Access control is the means of controlling which assets a client
can access and the operations that can be performed on the accessed resources.
 Passwords: Passwords are a basic component of network security. An intruder in the
organization's confidential area may check under keyboards and in drawers to find
passwords that have been written down, and then use them to gain access to personal
information.
 Biometrics: Biometric technology can identify persons based on individual
characteristics of the human body.
 Encryption: Encryption is any procedure that converts readable data (plain text) into secret
cipher (cipher text) to prevent unauthorized access to information; it is used in Internet
transactions, e-mail and wireless networking. An encryption algorithm is a mathematical
technique which renders the data unreadable to unauthorized parties (a minimal sketch
follows this list).
 Privacy: Privacy is the protection of confidential or individual data from being viewed by
unauthorized parties, and control over its collection, usage and distribution.
 Ethics: Employees should be given clear direction by policy on what the organization
considers acceptable behaviour, and should furthermore be acquainted with the procedures in
place for the clarification of ethical concerns and for the disclosure of unethical activities.
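
The following sketch illustrates the encryption bullet above with the standard javax.crypto API and AES in GCM mode. Key handling here is deliberately simplified (the key exists only in memory); a real deployment would obtain keys from a proper key-management service.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;

    public class AesDemo {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey(); // demo-only key handling

            byte[] iv = new byte[12];                 // 96-bit nonce, as recommended for GCM
            new SecureRandom().nextBytes(iv);

            // Encrypt: plain text becomes cipher text, unreadable without the key.
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] cipherText = cipher.doFinal("confidential record".getBytes(StandardCharsets.UTF_8));

            // Decrypt with the same key and nonce to recover the plain text.
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            System.out.println(new String(cipher.doFinal(cipherText), StandardCharsets.UTF_8));
        }
    }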
Data Integrity
Data integrity is defined as safeguarding the accuracy and completeness of data and
processing procedures from intentional, unauthorized or accidental changes.
Maintaining data integrity is absolutely crucial to the privacy, security and reliability of
enterprise data.
The integrity of data can be compromised by malicious users, hackers, software errors,
computer viruses, hardware component failures and by human error while transferring data.
Mitigating data integrity risk also enables fast recovery of data.
Employees can mitigate the risk through regular data backups, off-site secure storage of those backups,
integrity-monitoring tools and encryption. Integrity means trustworthiness of data resources.
Data Availability
Availability means the availability of data resources.
A data system that is not accessible when required is of little use.
How critical availability is can be measured by how dependent the institution has become on a
functioning computer and communications infrastructure.
Almost all premier organizations are highly dependent on functioning data systems.
Availability, like the other facets of security, may be affected by purely technical matters,
natural phenomena or human causes.
Availability means ensuring that authorized users have access to data and associated assets
when required.
This can be achieved by using data backups, disaster recovery and business
continuity/recovery plans.
Data Backup Plan
Data backups are an absolutely crucial part of data security, and an organization should be
able to restore data in the event of data corruption or hardware failure.
Backups should be taken on a regular basis, and the frequency depends upon how much
data an organization is willing to lose in the event of a failure.
Restores should also be tested occasionally, to ensure that the process functions
correctly within the required time limit, before the need for
the backup actually arises.
Disaster Recovery Plan (DRP)
A DRP is a plan used to recover quickly after a disaster with a minimum of
impact to the organization.
DR planning should be part of the primary stage of deploying IT systems.
DR plans are developed in response to risk assessments and designed to mitigate those
risks. Risk assessments work out the frequency and extent of potential disasters.

SECURITY AUTHORIZATION CHALLENGES IN THE CLOUD


Authorization entails ensuring that only authorized persons are able to gain access
to resources within a system.
To carry out authorization, the first step is to authenticate the individual, the
second step is to obtain information about the individual, and the last step is to permit or deny access to
the individual based on the applicable policies for that resource.
An authorization service is responsible for evaluating an authorization query, assembling the essential
data about the individual and the asset, and evaluating a policy to decide whether access should be
granted or denied (a minimal sketch follows).
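
The three steps just described can be pictured with a toy in-memory policy store. The users, roles, resources and rules below are hypothetical, standing in for whatever directory and policy engine a real authorization service would consult.

    import java.util.*;

    public class AuthzSketch {
        // Step 2 stand-in: attributes about the (already authenticated) individual.
        static final Map<String, Set<String>> ROLES = Map.of(
            "alice", Set.of("auditor"),
            "bob",   Set.of("developer"));

        // Policy: which roles may perform which action on which resource.
        static final Map<String, Set<String>> POLICY = Map.of(
            "read:billing-records",  Set.of("auditor"),
            "write:billing-records", Set.of());

        // Step 3: permit or deny based on the applicable policy for the resource.
        static boolean isAuthorized(String user, String action, String resource) {
            Set<String> allowed = POLICY.getOrDefault(action + ":" + resource, Set.of());
            return ROLES.getOrDefault(user, Set.of()).stream().anyMatch(allowed::contains);
        }

        public static void main(String[] args) {
            System.out.println(isAuthorized("alice", "read",  "billing-records")); // true
            System.out.println(isAuthorized("bob",   "write", "billing-records")); // false
        }
    }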
Cloud computing is not a single capability, but a collection of essential characteristics
that are manifested through diverse kinds of technology deployment and service models.
Auditing
The use of cloud computing is catching on all over the world at an astonishing pace
because of its substantial advantages in reducing the cost of IT services by delivering them over the
Internet. The possible benefits are rather obvious:
 Ability to reduce capital expenditure.
 Shared services offering apparently unlimited scalability.
 The ability to dial usage up or down, paying as you use, when needed.
 Reduced IT-associated costs, thereby enhancing competitive advantage along the bottom line.
In a typical cloud service model, an External Service Provider (ESP) offers diverse IT services to the
enterprise, depending on the SLAs and the selection of services.

TOOLS AND TECHNOLOGIES FOR CLOUD

PARALLEL COMPUTING
It is obvious that silicon-based processor chips are reaching their physical limits in processing
speed.
A viable answer to overcome this limitation is to connect multiple processors working in
coordination with each other to solve grand challenge problems.
Hence, high-performance computing requires the use of Massively Parallel Processing (MPP)
systems containing thousands of powerful CPUs.
A representative computing system built using an MPP arrangement is C-DAC's PARAM
supercomputer.
By the end of this century, every high-performance system will be a parallel computer system.
High-end computers will be massively parallel processing systems containing thousands of
processors that are interconnected.
To perform well, these parallel systems require an operating system fundamentally different from
present ones.
Most researchers in the field of operating systems have found that these new operating
systems will have to be much smaller than traditional ones to achieve the efficiency and
flexibility needed.

ERAS OF COMPUTING
The two most prominent eras of computing are the sequential and parallel eras. In the last 10
years, parallel machines have developed into a significant challenger to vector machines in the chase
for high-performance computing.
A 100-year broad outlook on the development of the computing eras is shown in Figure 24.1. Each
computing era begins with a development in hardware architectures, followed by system software and
applications, and reaches its saturation point as environments grow in
complexity.
Every element of computing undergoes three phases: R&D, commercialization and
commodity.
Figure 24.1 Two Eras of Computing
Cloud computing refers to both the applications delivered as services over the Internet and
the hardware and system software in the data centres that supply those services.
When a cloud is made accessible in a pay-as-you-go manner to the general public, it is
called a public cloud, and the service being sold is utility computing. Cloud computing is the
sum of SaaS and utility computing.
Infrastructure services (Infrastructure-as-a-Service) supplied by cloud vendors permit any
client to use a large number of compute instances effortlessly, utilizing virtual resources to perform
data- and compute-intensive workloads.

HIGH PERFORMANCE PARALLEL COMPUTING WITH CLOUD AND CLOUD TECHNOLOGIES
The introduction of commercial cloud infrastructure services, for example, Amazon
EC2/S3, GoGrid and ElasticHosts, permits users to provision clusters effortlessly and rapidly, paying a
monetary price only for the duration used. The provisioning of resources happens in minutes.
The availability of open source cloud infrastructure software, for example, Nimbus and
Eucalyptus, and of open source virtualization software, for example, the Xen hypervisor, permits
organizations to build private clouds to improve the utilization of the available
computing facilities.
Cloud technologies for HPC, for example, Hadoop, Dryad and CGL-MapReduce, have been applied to
diverse technical applications, for example:
 Cap3 data analysis
 High Energy Physics (HEP) data analysis
 Word histogramming
 Distributed grep
 K-means clustering
Cloud technologies like Google MapReduce, Google File System (GFS), Hadoop Distributed File
System (HDFS), Microsoft Dryad and CGL-MapReduce take a more data-centred approach in their
parallel runtimes.

CLOUD COMPUTING APPLICATION PLATFORM


Cloud computing is a foremost change in the IT industry. One of the most significant
components of that change is the advent of cloud platforms.
In general, when we get into cloud services, we need to understand cloud platforms. Services in
the cloud can be grouped into three very broad categories.
1. Software as a service (SaaS): A SaaS application runs solely in the cloud. The on-
premises client is normally a browser or some other simple client.
2. Attached services: Every on-premises application provides useful functions on its own. An
application can occasionally enhance these by accessing application-specific services
supplied in the cloud.
3. Cloud platforms: A cloud platform provides cloud-based services for creating applications.
Rather than building their own custom foundation, the creators of a new
SaaS application would instead build on a cloud platform.
Understanding cloud platforms requires some agreement on what the word 'platform' means in
this context. One very broad way to think of it is to view a platform as software that provides
developer-accessible services for creating applications.

CLOUD COMPUTING PLATFORM


Abicloud Cloud Computing Platform
Abicloud is a cloud computing platform developed by Abiquo, a company established in
Barcelona, Spain, focused on the development of cloud platforms.
It can be used to build, integrate and manage public as well as private clouds in
homogeneous environments.
Using Abicloud, users can easily and automatically provision and manage servers,
storage systems, networks, virtual appliances, applications and so on. The major distinction between
Abicloud and other cloud computing platforms is its powerful web-based management
function and its core encapsulation approach.
Using Abicloud, users can finish deploying a new service by just dragging a virtual
appliance with the mouse.
This is much simpler and more flexible than the command-line deployment used by other cloud
computing platforms.
Eucalyptus Cloud Platform
Eucalyptus is an elastic computing framework that can be used to connect users' programs
to useful systems. It is an open-source infrastructure for the implementation of elastic, utility and
cloud computing using clusters or workstations, and a popular computing standard
based on a service-level protocol that allows users to lease network capacity for computing.
Currently, Eucalyptus is compatible with EC2 from Amazon, and supports more types of clients
with minimal modification and extension.
Nimbus Cloud Computing Platform
Nimbus is an open tool set, and also a cloud computing solution, providing IaaS.
Grounded in early scientific research on the platform, Nimbus has since supported numerous
applications in non-scientific research domains as well.
It allows users to lease isolated resources and to build the required computing environment
through the deployment of virtual machines.
The Nimbus cloud computing platform encompasses numerous distinct components:
clients, agents, resource managers and so on.
In general, all these functional elements are classified into three types. The first type is the client-
supporting modules, which are used to support all kinds of cloud clients.
The context client module, cloud client module, reference client module and EC2 client module all
belong to this kind of component. The second kind of component comprises the service-supporting
modules of the cloud platform, supplying all types of cloud services.

OpenNebula Cloud Computing Platform


Within the European Union's work on virtualization infrastructure and cloud computing, OpenNebula is one
of the main technologies of the RESERVOIR project and its flagship research activity.
Like Nimbus, OpenNebula is also an open source cloud service framework.
It permits users to deploy and manage virtual machines on physical resources, and it can turn
a data centre or cluster into a flexible virtual infrastructure that automatically adapts to the
change of the service load.
The major distinction between OpenNebula and Nimbus is that Nimbus provides a remote interface
based on EC2 or WSRF (Web Services Resource Framework) through which clients can handle
all security-associated matters, while OpenNebula does not.
OpenNebula is also an open and flexible virtual infrastructure management tool,
which can be used to synchronize storage, networks and virtual machines, and lets users dynamically
deploy services on the distributed infrastructure according to the allocation policies at the data centre
and the remote cloud resources.
OpenNebula is mostly used to manage the data centre of a private cloud and the infrastructure of a
cluster, and it also supports hybrid clouds that connect local and public infrastructure.

TOOLS FOR BUILDING CLOUD


Today, two development methodologies are driving many cloud-based developments:
distributed and agile. These notions are pushing the envelope of existing development applications,
requiring a new set of tools that can accommodate new development, testing and deployment
methods.
Distributed computing is a by-product of the Internet. Distributed development is global development,
which brings its own challenges in collaboration and code management. There are solid applications
currently out there for distributed code management; git and subversion are two such tools and are
already broadly used in distributed environments.
Open Source Tools for Construction and Organizing Cloud
Open source technology is going to deeply influence the cloud computing world, and there are two
major reasons why: open source software is essentially free, and it is not usually encumbered by the
software licences of proprietary software.
A number of open source tools have already had a large influence on cloud computing, Linux
and Xen, for example. But there are other significant open source offerings that can benefit cloud
users. These include KVM, Deltacloud, Eucalyptus and Cloud.com's CloudStack.
There are eight key components to address when constructing an internal or external compute
cloud. They are:
1. Shared infrastructure
2. Self-service automated portal
3. Scalable
4. Rich application container
5. Programmatic control
6. 100% virtual hardware abstraction
7. Strong multi-tenancy
8. Chargeback
UNIT V

PROGRAMMING IN CLOUD
Cloud computing comprises two aspects: on the one hand, the basic platform
amenities and, on the other hand, the applications constructed on this platform.
First, a cloud computing–based platform for programming is built. There are numerous programs
which can be used to construct a basis for cloud computing platform programming.
MapReduce Distributed Programming
MapReduce is a powerful distributed programming technique, and also a functional
language model, used to process huge data sets, in which only two functions are provided:
Map and Reduce.
The Map function applies a dedicated procedure to each item in a data set and returns a new data
set as the result of that processing. The Reduce function then combines the items of the
intermediate data sets into a final result (a single-machine sketch of the two constructs follows).
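
Before the distributed machinery, the two constructs themselves can be shown with ordinary Java streams. This is only the single-machine analogue of the Map and Reduce functions described above, not the distributed implementation.

    import java.util.List;

    public class MapReduceIdea {
        public static void main(String[] args) {
            List<String> records = List.of("3", "14", "15", "9");

            int total = records.stream()
                               .map(Integer::parseInt)    // Map: transform every item
                               .reduce(0, Integer::sum);  // Reduce: fold the items into one result

            System.out.println("total = " + total);       // total = 41
        }
    }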
Chubby
Chubby is a highly available, distributed lock service. When any machine fails,
Chubby ensures the consistency of the replicas using the Paxos algorithm.
Each unit in Chubby's small distributed file system can be used to provide lock
services.
Currently, Chubby is mostly used within Google's cloud computing
platform.
Pig, a language constructed on top of the Hadoop project, is a kind of open-source
counterpart for this style of programming model.
Dryad and DryadLINQ
Dryad and DryadLINQ, created by Microsoft Research Silicon Valley, were created to supply a
distributed computing platform. In recent years, this platform has been broadly used internally at
Microsoft, and specifically on Microsoft's own cloud computing platform, Azure.
Dryad is designed to scale across computing platforms of all dimensions, from single-core
computers, to small clusters composed of multiple computers, to data centres
comprising thousands of computers. The aim of DryadLINQ is to supply a high-level language
interface, so that programmers can easily express large-scale distributed computations.
Dryad can run distributed operations on Dryad engines and is responsible for the automatic
parallel processing of the jobs and for the sequencing of functions as data is delivered.
Programming of Dynamics Languages
Since a cloud computing platform has high expansion flexibility and platform
abstraction, computing resources can scale dynamically according to the dimensions of the task.
As an outcome, programs can run without being affected by changes in the
infrastructure.
A fast and well-motivated way to build enterprise applications is to use the programming
style of dynamic languages.
Therefore, not only can code created in a dynamic language be deployed to implement
enterprise applications on the cloud's client side, but cloud projects implemented in dynamic languages
can also be found in parts of the cloud infrastructure.

MASHUPS
'Mashup' is a word with a different context and different meanings in different places.
Mashups use software APIs (application programming interfaces) to combine elements from one or
more websites.
A cloud mashup is simply an instance of a web-based mashup whose application content
resides in the cloud. Cloud mashups can be understood in terms of their differing scopes and
their real purpose.
Examples are given as follows:
 In terms of music, a mashup is a composition or a song developed by blending more than one
song.
 In terms of cinematography, a video mashup is a combination of multiple video sources.
 In terms of digital production, a digital mashup is a media file containing text, audio,
graphics and video taken from existing sources to develop a new work.
Mashups stand on the basic concept of data and service integration. To function in this way,
combination, aggregation and visualization are the three main primitives (a sketch of the first
follows this list):
1. Combination collects data from heterogeneous sources and uses it within the same application.
2. Aggregation operates on the collected data, computing measures and building new information
from the obtained data.
3. Visualization integrates data into a visual presentation, using maps or other
multimedia objects.
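
A minimal sketch of the combination primitive, assuming two hypothetical JSON endpoints (the example.org URLs do not exist). It fetches both sources with the standard java.net.http client and joins the raw payloads, leaving parsing, aggregation and visualization to the reader.

    import java.net.URI;
    import java.net.http.*;

    public class MashupSketch {
        static String fetch(HttpClient client, String url) throws Exception {
            HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
            return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
        }

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical endpoints standing in for two heterogeneous sources.
            String weather = fetch(client, "https://example.org/api/weather?city=Chennai");
            String events  = fetch(client, "https://example.org/api/events?city=Chennai");
            // Combination: use data from both sources within the same application.
            System.out.println("{ \"weather\": " + weather + ", \"events\": " + events + " }");
        }
    }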

APACHE HADOOP

INTRODUCTION
Hadoop is creating value for enterprises, organizations and individuals. With its ability to
unlock value from data, Hadoop is quickly being adopted by enterprises in effectively all sectors and
industries.
Hadoop is a quickly developing ecosystem of components for implementing the Google MapReduce
algorithms on scalable hardware.
Hadoop enables users to store and process large volumes of data and to analyse it in ways not possible
with less scalable solutions or standard SQL-based approaches.
Hadoop is a highly scalable compute and storage platform. Hadoop is an open-source
implementation of Google MapReduce, including a distributed file system.

WHAT IS HADOOP?
Hadoop is a sub-project of Lucene, under the Apache Software Foundation.
Hadoop parallelizes data processing over numerous nodes (computers) in a compute cluster, speeding
up large computations and hiding I/O latency through increased concurrency.
Hadoop is particularly well-suited to large data processing jobs (like searching and indexing). It can
also leverage its distributed file system, cheaply and reliably, to replicate chunks of data to
nodes in the cluster, making the data available locally on the machine that is processing it.
To the application programmer, Hadoop presents the abstraction of map and reduce. Map and
reduce are available in numerous languages, for example, Lisp and Python.
Map and Reduce
The MapReduce paradigm takes its idea from the map and reduce programming constructs
widespread in abundant programming languages.

CHALLENGES IN HADOOP
Deployment of the servers and software is an important concern in all large environments.
Best practices are applied through a set of tools that automate the configuration of the hardware,
the installation of the OS and the installation of the Hadoop software stack from Cloudera.
As with numerous other kinds of information technology (IT) solutions, change management and system
monitoring are a prime concern inside Hadoop.
The IT procedures need to ensure that the right tools are in place, apply changes and notify
employees when unforeseen events take place inside the Hadoop environment.
Hadoop is a steadily growing, complex ecosystem of software and provides no guidance as to
the best platform for it to run on.
A Hadoop environment will also change over time as job structures change, data layouts
develop and data volume increases.

Hadoop Nodes
Hadoop has three kinds of nodes inside each Hadoop cluster. They are DataNodes, NameNodes and
EdgeNodes. The names of these nodes can change from site to site, but the functionality is
common across the sites. Hadoop's architecture is modular, permitting individual components to be
scaled up and down as the needs of the environment change. The base nodes for a Hadoop cluster
are as follows:
 NameNode: The NameNode is the central location for information about the file system
deployed in a Hadoop environment.
 DataNode: DataNodes make up the majority of the servers in a Hadoop
environment. The DataNode serves two functions: it stores a portion of the data in the
Hadoop Distributed File System (HDFS) and it acts as a compute platform for running
jobs, some of which will use the local data inside the HDFS.
 EdgeNode: The EdgeNode is the access point for the external applications, tools and users
that need to use the Hadoop environment. The EdgeNode sits between the
Hadoop cluster and the corporate network to provide access control, policy
enforcement, logging and gateway services to the Hadoop environment.
Hadoop was initially developed as an open implementation of Google MapReduce and the Google
File System.

HADOOP AND ITS ARCHITECTURE


The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on
commodity hardware. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.
HDFS provides high-throughput access to application data and is apt for applications that have
large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to the file
system data.
HDFS was initially built as infrastructure for the Apache Nutch web search project.
HDFS is now an Apache Hadoop subproject.
HDFS: Goals and Assumptions
Hardware failure: Hardware failure is the norm rather than the exception. An HDFS instance
may comprise hundreds or thousands of server machines, each storing part of the file system's
data. With this large number of components, each having a non-trivial probability of
failure, some components of HDFS are always non-functional. Therefore, detection of
faults and fast, automatic recovery from them is a core architectural aim of HDFS.
Streaming data access: Applications that run on HDFS need streaming access to their data sets. They are not
the common applications that generally run on common file systems. HDFS is designed more for batch
processing than for interactive use by users. The focus is on high throughput of data access rather
than low latency of data access.
Large data sets: Applications that run on HDFS have huge data sets. A typical file in HDFS is
gigabytes to terabytes in size. Therefore, HDFS is tuned to support huge files. It should supply
high aggregate data bandwidth, scale to hundreds of nodes in a single cluster and support
tens of millions of files in a single instance.
Simple coherency model: HDFS applications require a write-once-read-many access model for files. A file,
once created, written and closed, need not be changed. This assumption simplifies data
coherency matters and enables high-throughput data access. A MapReduce application or a web
crawler application aligns flawlessly with this model. There is a plan to
support appending writes to files in the near future.
Moving computation is cheaper than moving data: A computation requested by an application is
much more efficient if it is performed beside the data it operates on. This is particularly true
when the size of the data set is huge. Doing so minimizes network congestion and raises the overall
throughput of the system.
Portability over heterogeneous hardware and software platforms: HDFS has been designed to be
easily portable from one platform to another. This helps widespread adoption of HDFS as the platform of
choice for a large set of applications.
HDFS Architecture
HDFS has a master/slave architecture. An HDFS cluster comprises a single
NameNode, a master server that manages the file system namespace and regulates access to files by clients.
In addition, there are several DataNodes, generally one per node in the cluster, which manage the
storage attached to the nodes that they run on.
HDFS exposes a file system namespace and permits user data to be stored in files. Internally, a
file is divided into one or more blocks, and these blocks are stored on a set of DataNodes.
Opening, closing and renaming files and directories are file system namespace operations
executed by the NameNode. The NameNode furthermore determines the mapping of blocks to DataNodes.
The DataNodes are responsible for serving read and write requests from the file system's
clients. The DataNodes also perform block creation, deletion and replication upon instruction from
the NameNode.
To share large files over a cluster of machines, the reliable file system HDFS stores
each file as a sequence of blocks.
All blocks in a file except the last block are identical in size. The blocks of a file are replicated
for fault tolerance.
The block size and replication factor are configurable per file. An application can specify
the number of replicas of a file.
The replication factor can be specified at file creation time and altered afterwards.
Files in HDFS are write-once and have strictly one writer at any time (a minimal client sketch follows).
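
The write-once model can be illustrated with the Hadoop FileSystem client API (org.apache.hadoop). The NameNode address and the file path below are placeholders, and the sketch assumes the Hadoop client libraries are on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import java.nio.charset.StandardCharsets;

    public class HdfsWriteOnce {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.org:8020"); // placeholder address

            try (FileSystem fs = FileSystem.get(conf)) {
                Path file = new Path("/data/demo/records.txt");
                // Create, write and close exactly once; an HDFS file has one writer at a time.
                try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
                    out.write("one block of data\n".getBytes(StandardCharsets.UTF_8));
                }
                fs.setReplication(file, (short) 3); // per-file replication factor
            }
        }
    }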

MAPREDUCE
The Internet provides a resource for amassing tremendous amounts of data, often beyond the
capacity of individual computer disks and too large for processing with a single CPU.
Google's MapReduce, built on top of the distributed Google File System, provides a
parallelization framework which has garnered substantial acclaim for its ease of use, scalability and
fault tolerance.
The success at Google prompted the development of the Hadoop project, an open-source attempt
to replicate Google's implementation, hosted as a sub-project of the Apache Software Foundation's
Lucene search engine library. Hadoop is still in the early phases of development.
MapReduce Programming Model
MapReduce is a programming model and an associated implementation for processing and
generating large data sets.
A Map function is written by users; it processes a key/value pair and builds an intermediate
set of key/value pairs. A function called Reduce merges all intermediate values associated with the
same intermediate key. Many real-world jobs are expressible in this model.
Programs written in this functional style are automatically parallelized and executed on a
large cluster of machines.
The run-time system takes care of the details of partitioning the input data, scheduling the
program's execution over a set of machines, handling machine failures and managing the needed
inter-machine communication.
This permits programmers without any experience of parallel and distributed systems to
effortlessly utilize the resources of a large distributed system.
The computation takes a set of input key/value pairs and produces a set of output key/value pairs.
The client of the MapReduce library expresses the computation as two functions:

Map and Reduce


Map function: The Map function, written by the client, takes an input pair and produces a set of
intermediate key/value pairs. The MapReduce library groups together all intermediate values associated
with the same intermediate key 'I' and passes them to the Reduce function.
Reduce function: The Reduce function, also written by the client, accepts an intermediate key 'I'
and a set of values for that key. It merges these values to form a possibly smaller set of values. Typically, just
zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the
user's reduce function via an iterator (the classic word-count example follows).
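
The sketch below follows the shape of the canonical word-count example shipped with Hadoop's MapReduce tutorial, trimmed to its core: the Map emits (word, 1) pairs, and the Reduce sums the counts per word.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      // Map: emit (word, 1) for every word in the input line.
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce: sum the counts gathered for each word.
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) sum += val.get();
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }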

CLOUD TOOLS
VMWARE
VMware, Inc. is a company providing virtualization software, founded in 1998. The company was
acquired by EMC Corporation in 2004 and functions as a distinct software subsidiary.
VMware's desktop software runs on Microsoft Windows, Linux and Mac OS X, while
VMware's enterprise software hypervisors for servers, VMware ESX and VMware ESXi, are
bare-metal embedded hypervisors that run directly on server hardware without needing an additional
underlying operating system.
VMware software presents a completely virtualized set of hardware to the guest operating
system. It virtualizes the hardware for a video adapter, a network adapter and hard disk
adapters.
The host provides pass-through drivers for guest USB, serial and parallel devices. In this way, the
VMware virtual machine becomes highly portable between computers, because every host looks
almost identical to the guest.
VMware supports:
 Desktop software consisting of:
 VMware Workstation
 VMware Fusion
 VMware Player
 In the server software:
 VMware markets two virtualization products for servers: VMware ESX and
VMware ESXi.
 VMware Server is also supplied as freeware for non-commercial use,
like VMware Player, and it is possible to create virtual machines with it. It is a
'hosted' application, which runs inside an existing Linux or Windows OS.
 Cloud management software consisting of:
 VMware vCloud
 VMware Go
EUCALYPTUS
Eucalyptus (Elastic Utility Computing Architecture For Linking Your Programs To Useful
Systems) is GPL-licensed software which provides tooling to create and manage a private
cloud that can even be accessed as a public cloud.
It is a platform compatible with Amazon EC2 and S3 storage. It makes its services accessible
through EC2/S3-compatible APIs (a client sketch follows below). Its features are:
 Interface compatibility with EC2
 Simple installation and deployment using Rocks
 Simple set of extensible cloud allocation policies
 Overlay functionality requiring no modification of the target Linux environment
 Basic administrative tools for system administration and client accounting
 The ability to configure multiple clusters, each with private network addresses, into a single cloud
 Portability
Eucalyptus was initially developed to supply an inexpensive, extensible and straightforward
platform for establishing open source cloud infrastructure for the world of academia. It was developed by
computer researchers and scientists requiring elastic compute resources.
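
Because the API is EC2-compatible, a standard EC2 client pointed at the Eucalyptus front end is enough to launch instances. The sketch below assumes the AWS SDK for Java (v1) on the classpath; the endpoint URL, credentials and image id (emi-...) are placeholders, not real values.

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.RunInstancesRequest;

    public class EucalyptusClientSketch {
        public static void main(String[] args) {
            // Placeholder endpoint of a Eucalyptus front end exposing the EC2 API.
            AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard()
                .withEndpointConfiguration(
                    new EndpointConfiguration("https://cloud.example.org:8773/services/compute",
                                              "eucalyptus"))
                .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
                .build();

            // Launch one instance from a Eucalyptus machine image (emi-...).
            ec2.runInstances(new RunInstancesRequest()
                .withImageId("emi-12345678")
                .withInstanceType("m1.small")
                .withMinCount(1)
                .withMaxCount(1));
        }
    }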
Components of Eucalyptus
Eucalyptus has three foremost components:
1. Cloud controller (CLC): Comprises the front-end services and the Walrus storage system.
2. Cluster controller (CC): Provides support for the virtual network overlay.
3. Node controller (NC): Interacts with the virtualization technology to manage individual VMs.
Two further components are used for storage management:
1. Storage controller (SC): Provides persistent block storage for the instances.
2. Walrus storage controller (WSC): Provides a persistent and straightforward storage service.
Node Controller (NC)
The NC is responsible for executing tasks on the physical resources that host VM instances, such as
launch, check, shutdown and clean-up. A Eucalyptus cloud may comprise several node controllers.
An NC is a virtualization-enabled server capable of running the Kernel-based Virtual Machine (KVM) as
the hypervisor. The VM instances run on the hypervisor under the NC's control.
The node controller interacts with the operating system and the hypervisor running on the node
on one side, while on the other side it interacts with the cluster controller (CC).
The NC queries the OS on the node to find out the node's resources: the number of cores, the
memory and the disk space. The NC also monitors the state of the VM instances running on the node
and propagates this data to the CC.
Cluster Controller (CC)
The CC is responsible for managing a collection of NCs (a cluster) that work together. The
CC has access to both the private and public networks and is generally deployed on the cluster's
head node or front-end server.
The CC monitors the state information of all instances in the pool of NCs and coordinates the
flow of incoming requests.
Walrus Storage Controller (WS3)
WS3 is a persistent and straightforward storage service. WS3 uses REST and SOAP APIs, which
are compatible with the S3 API.
Its features are:
 Store machine images
 Store snapshots
 Store and serve files using the S3 API
It is best considered as a straightforward file storage system.
Storage Controller (SC)
The SC provides persistent block storage for the instances. It resembles the Elastic Block Storage
(EBS) service from Amazon.
 It creates and manages persistent block storage devices.
 It creates snapshots of volumes.
Cloud Controller (CLC)
Incoming requests from external clients or administrators are processed by the CLC.
The CLC is responsible for handling these requests. Each Eucalyptus cloud has a single CLC. It is
the user-visible entry point and the decision-making component: it makes high-level VM instance
scheduling decisions, processes authentication and maintains persistent system and client metadata.
The CLC is the front end to the whole cloud infrastructure.
It presents an EC2/S3-compliant web services interface to the client tools and interacts with the other
components of the Eucalyptus infrastructure. Its features are:
 Monitoring the resources of the cloud infrastructure
 Resource arbitration
 Monitoring running instances
The CLC has comprehensive knowledge of the state of the cloud, primarily with respect to the availability
and usage of resources.

CLOUDSIM
Cloud computing is the technology which delivers dependable, secure, fault-tolerant, sustainable
and scalable computational services.
These services may be offered in private data centres (private clouds), offered commercially
to clients (public clouds), or blended as both public and private clouds
(hybrid clouds).
The very high demand for energy-efficient IT technologies, and for controllable methodologies for
the evaluation of algorithms, applications and policies, makes the development of cloud
products hard.
An alternative is the use of simulation tools, which open the possibility of evaluating a
hypothesis prior to the software's development, in an environment where tests can be reproduced.
Specifically in the case of cloud computing, where access to the infrastructure incurs payment
in real currency, simulation-based approaches offer important benefits. They permit cloud customers to test
their services free of cost, in a repeatable and controllable environment.
The prime objective of the CloudSim project is to supply a generalized and extensible simulation
framework that enables seamless modelling, simulation and experimentation of cloud computing
infrastructures and application services.
By utilizing CloudSim, researchers and industry-based developers can focus on the specific system
design issues that they want to investigate, without getting concerned about the low-level details
of cloud-based infrastructures and services.
CloudSim's functionalities are as follows:
 Support for modelling and simulation of large-scale cloud computing data centres.
 Support for modelling and simulation of virtualized server hosts, with customizable
policies for provisioning host resources to virtual machines.
 Support for modelling and simulation of energy-aware computational resources.
 Support for modelling and simulation of data centre network topologies and message-passing
applications.
 Support for modelling and simulation of federated clouds.
 Support for dynamic insertion of simulation elements, and stopping and resuming of a simulation.
 Support for user-defined policies for the allocation of hosts to virtual machines, and policies
for the allocation of host resources to virtual machines (a minimal sketch follows this list).
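
A minimal sketch written against the CloudSim 3.x API (org.cloudbus.cloudsim): one data centre with one host, one VM and one cloudlet. The class and constructor names should be checked against the CloudSim release in use, and the numeric capacities are arbitrary; a real experiment would vary the provisioning policies listed above.

    import java.util.*;
    import org.cloudbus.cloudsim.*;
    import org.cloudbus.cloudsim.core.CloudSim;
    import org.cloudbus.cloudsim.provisioners.*;

    public class CloudSimSketch {
        public static void main(String[] args) throws Exception {
            CloudSim.init(1, Calendar.getInstance(), false);  // one cloud user, no trace events

            // One host: a single 1000-MIPS core, 2 GB RAM, simple provisioners.
            List<Pe> pes = List.of(new Pe(0, new PeProvisionerSimple(1000)));
            List<Host> hosts = List.of(new Host(0, new RamProvisionerSimple(2048),
                    new BwProvisionerSimple(10000), 1_000_000, pes,
                    new VmSchedulerTimeShared(pes)));

            DatacenterCharacteristics ch = new DatacenterCharacteristics(
                    "x86", "Linux", "Xen", hosts, 10.0, 3.0, 0.05, 0.001, 0.0);
            new Datacenter("Datacenter_0", ch, new VmAllocationPolicySimple(hosts),
                    new LinkedList<Storage>(), 0);

            DatacenterBroker broker = new DatacenterBroker("Broker");
            Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000, "Xen",
                    new CloudletSchedulerTimeShared());
            Cloudlet job = new Cloudlet(0, 400_000, 1, 300, 300,
                    new UtilizationModelFull(), new UtilizationModelFull(),
                    new UtilizationModelFull());
            job.setUserId(broker.getId());

            broker.submitVmList(List.of(vm));
            broker.submitCloudletList(List.of(job));
            CloudSim.startSimulation();
            CloudSim.stopSimulation();

            for (Cloudlet c : broker.getCloudletReceivedList())
                System.out.println("cloudlet " + c.getCloudletId()
                        + " finished at " + c.getFinishTime());
        }
    }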

OPENNEBULA
OpenNebula is among the premier and most sophisticated frameworks for cloud computing. It is
exceedingly straightforward to set up.
Furthermore, it is flexible and extensible, with very good performance and scalability for managing
tens of thousands of VMs, and supports private clouds with Xen, KVM and VMware.
Cloud computing arrives, and pays off, when there is a requirement to boost capacity or add
capabilities on the fly without buying new infrastructure, training new staff or licensing new
software.
Cloud computing supports subscription-based or pay-per-use services that, over time and over the
Internet, extend IT's existing capabilities.
A cloud service has three distinct characteristics that distinguish it from conventional hosting.
It is sold on demand, normally by the minute or the hour; it is elastic, that is, a client can have as
much or as little of the service as they want at any given time; and the service is completely managed
by the provider.
Significant innovations in virtualization and distributed computing, together with improved access to
high-speed Internet, have accelerated interest in cloud computing.
OpenNebula is a completely open-source toolkit to build IaaS private, public and hybrid
clouds.
An OpenNebula private cloud provides infrastructure with an elastic platform for the fast
delivery and scalability of services, to meet the dynamic demands of service end users.
OpenNebula does the following:
 Management of the network, computing and storage capacity
 Management of VM life-cycle
 Management of workload placement
 Management of virtual networks
 Management of VM images
 Management of information and accounting
 Management of security
 Management of remote cloud capacity
 Management of public cloud servers

NIMBUS
Nimbus is an open-source toolkit concentrated on supplying infrastructure as a service (IaaS). It
provides capabilities to the scientific community. To accomplish it focuses on three goals:
 Enables asset providers to construct personal and community IaaS cloud.
 Enables users to use IaaS clouds.
 Enables developers to continue, trial and customize
IaaS. Major features are as follows:
 Open source IaaS: Nimbus provides a 100% freely available and open-source
infrastructure as a service (IaaS) system. Every feature that the community develops is
freely available and there are no add-on or enhancement costs.
 Storage cloud service: Cumulus is a storage cloud service that is compatible with the S3
REST API. It can be used with numerous existing clients (boto, s3cmd, jets3t,
etc.) to provide data storage and transfer services (a boto-based sketch follows this list).
 EC2-based clients can be used with Nimbus installations: both the EC2 SOAP API and the
EC2 REST API have been implemented in Nimbus. S3 REST API clients can likewise be used for
managing VM storage with the Nimbus system.
 Easy-to-use cloud client: The workspace cloud client permits authorized users to
access numerous workspace service features in a user-friendly way. It is
designed to get users up and running within minutes, even from laptops, behind NATs,
etc. The workspace cloud client supports storing data in the cloud and also acts as an IaaS
client. Even the uninitiated find this fully integrated tool straightforward to use.
 Per-user storage quota: Cumulus (the VM image repository manager for Nimbus) can
be configured to enforce per-user storage usage limits.
 Easy client management: New in Nimbus 2.5 is a set of user administration tools that
make administering a Nimbus cloud considerably easier. The tools are both straightforward
to use and scriptable.
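Because Cumulus speaks the S3 REST API, a standard S3 client library can be pointed at a Nimbus installation. Below is a minimal sketch using the classic boto library; the host name, port, bucket name and credentials are hypothetical placeholders for a site-local Cumulus endpoint.

    # Minimal sketch: storing an object in Cumulus via its S3-compatible API.
    # Host, port, credentials and names below are hypothetical placeholders.
    from boto.s3.connection import S3Connection, OrdinaryCallingFormat
    from boto.s3.key import Key

    conn = S3Connection(
        aws_access_key_id="CumulusPublicId",
        aws_secret_access_key="CumulusSecretKey",
        host="cumulus.example.org",              # the Cumulus service endpoint
        port=8888,
        is_secure=False,
        calling_format=OrdinaryCallingFormat(),  # path-style bucket addressing
    )

    bucket = conn.create_bucket("experiment-data")
    key = Key(bucket)
    key.key = "run-001/results.txt"
    key.set_contents_from_string("sample payload")   # upload
    print(key.get_contents_as_string())              # download it back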

SUMMARY OF UNIT IV

 Cloud computing has been the most widely used computing paradigm for the last few years. It is
currently a true game-changer for Information and Communications Technology.
 There are mainly two kinds of cloud providers: Cloud Service Providers (CSP) and Cloud Infrastructure
Providers (CIP).
 Levels in cloud computing are: infrastructure, storage, platform, application, services and client.
 When evaluating cloud service providers, these top five security concerns should be checked: (i)
secure data transfer, (ii) secure software interfaces, (iii) secure stored data, (iv) user access control
and (v) data separation.
 The aim of risk management is to reduce or eliminate the risk of certain types of events happening or
having an impact on the business.
 Cloud computing has made it possible for organizations of all types to reduce the risks associated with IT
acquisition (software and hardware), grow in sync with enterprise needs and contain costs.
 Risk can be defined as 'the possibility of loss or injury, a dangerous element or factor, or an
exposure to hazard or danger'.
 The risk management process includes (i) determination of objectives, (ii) identification of the risks, (iii)
evaluation of the risks, (iv) consideration of alternatives and selection of risk treatment, (v) implementation
of the decision and (vi) evaluation and review.
 Enterprise Risk Management (ERM) encompasses the procedures and methods used by organizations
to manage risks and seize opportunities in pursuit of their objectives.
 Types of risks in cloud computing are (i) misuse and illicit use of cloud computing, (ii) insecure
interfaces and APIs, (iii) malicious insiders, (iv) issues related to technology sharing, (v) data loss or leakage,
(vi) hijacking (account/service) and (vii) unknown risk profile.
 Cloud computing is flexible because it outsources services. This property adds risk, because
malicious actors can enable unauthorized persons to log in to the system.
 Cloud computing technologies can be used as a platform for launching attacks, hosting
spam/malware, distributing software exploits, and for numerous other unethical purposes.
 Cloud computing architecture presents greater challenges in controlling and mitigating risks due to its
unique structure and operational attributes.
 Cloud computing is a development intended to permit more open accessibility and simpler, improved
data sharing.
 Data is uploaded to a cloud and stored in a data centre, for access by users from that data centre, or in
a fully cloud-based model.
 Access becomes a much more fundamental concern in cloud-based systems because of the accessibility of the
data.
 Information-centric access can help balance improved accessibility with risk, by associating access
rules with the data itself wherever it resides, without losing the inherent usability of the platform.
 In contrast to traditional computing paradigms, in a cloud computing environment data and applications are
controlled by the service provider.
 IaaS, PaaS and SaaS are three general forms of cloud computing. Each of these forms has a distinct
influence on application security.
 A cloud computing environment is usually presumed to be a cost-effective solution as well as a provider of higher
service quality.
 Security, availability and reliability are the foremost quality concerns of cloud service users.
 Key security benefits of a cloud computing environment are (i) data centralization, (ii) incident response,
(iii) forensic image verification time and (iv) logging.
 Key security issues are (i) investigation, (ii) data segregation, (iii) long-term viability, (iv) compromised
servers, (v) regulatory compliance, (vi) recovery.
 Cloud computing offers individuals and organizations a much more fluid and open way of sharing
information.
 Cloud computing is a platform for creating the digital equivalent of this fluid, human-to-human information
flow, something that internal computing systems have not yet achieved.
 In the context of computing, the terms security, privacy and trust may seem one and the same but have
distinct meanings.
 Content Level Security (CLS) evolved to meet marketplace demands, propelled by the needs of customer institutions.
 Content level security enables organizations to manage data and content at the organizational level,
rather than at the institutional level.
 Cloud computing has become a foremost technology development in IT. Enterprises have started adopting it
because of the changes it brings in maximizing the return on investment.
 Confidentiality refers to limiting data access only to authorized users, and stopping access to unauthorized
ones.
 Confidentiality ensures that the data is accessible only to those authorized to have access, regardless
of where the data is stored or how it is accessed.
 Maintaining data integrity is absolutely crucial to the privacy, security and reliability of enterprise data.
 Integrity of data can be compromised by malicious users, hackers, software errors, computer viruses,
hardware component failures and by human error while moving data.
 Availability means ensuring that authorized users have access to data.
 Data backups are an absolutely crucial part of data security, and an organization should be able to
restore data when there is data corruption or hardware failure.
 Virtualization and cloud computing provide greater flexibility and efficiency by giving the ability
to move servers and add or remove resources as required, to maximize the use of systems and reduce
expenses.
 Testing all the levels from the application to the cloud service provider means that the tester will have to
become proficient in software testing.
 Cloud-based testing is a means for organizations to explore the cloud and lower the cost of testing at
the same time.
 Cloud Tools is a set of tools for deploying, managing and testing Java EE applications on Amazon's
Elastic Compute Cloud.
 PushToTest TestMaker is a distributed testing environment that can run tests on test equipment or in a cloud
computing environment.
 Cloud computing is the newest large-scale trend to hit IT companies, and it is beginning to make
waves on the software testing services front.
 Software testing companies no longer have to build large infrastructure costs into their yearly
budgets: cloud computing shifts provisioning and upkeep to the cloud vendor.
 High performance computing needs the use of Massively Parallel Processing (MPP) systems
containing thousands of powerful CPUs.
 The two most famous eras of computing are the (i) sequential and (ii) parallel eras.
 Cloud computing refers to both the applications delivered as services over the Internet and the hardware
and systems software in the data centres that provide those services.
 Cloud technologies for HPC are Hadoop, Dryad and CGL-MapReduce.
 Cloud technologies like Google MapReduce, Google File System, Hadoop and the Hadoop Distributed File
System, Microsoft Dryad and CGL-MapReduce take a more data-centred approach to parallel
runtimes.
 Services in the cloud can be grouped into three categories: (i) Software as a Service (SaaS), (ii) attached
services and (iii) cloud platforms.
 Development tools are another significant part in platforms. Modern tools assist developers in
constructing applications utilizing the components of an application platform.
 On-premises platform is split into two very broad categories: (i) packaged applications and (ii) custom
applications.
 Cloud computing platforms are (i) Abicloud Cloud Computing Platform, (ii) Eucalyptus Cloud Platform,
(iii) Nimbus Cloud Computing Platform and (iv) OpenNebula Cloud Computing Platform.
 Distributed computing is a by-product of the Internet. Distributed development is global development, which
adds its own challenges with collaboration and code management.
 Git and Subversion are two tools broadly used in distributed environments.
 There are eight key components to address when constructing an internal or external compute cloud: (i)
shared infrastructure, (ii) self-service automated portal, (iii) scalable, (iv) rich application container, (v)
programmatic control, (vi) 100% virtual hardware abstraction, (vii) strong multi-tenancy and (viii)
chargeback.
 Hadoop is an open-source framework that enables distributed processing of large data sets across inexpensive
servers.
 Hadoop is creating value for enterprises, organizations and individuals.
 The MapReduce paradigm takes its idea from the map and reduce programming constructs common in
many programming languages.
 The Hadoop environment will change over time as job structures alter, data layouts develop
and data volumes increase.
 Hadoop has nodes inside each Hadoop cluster. They are DataNodes, NameNodes and EdgeNodes.
 The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware.
 Hadoop is furthermore designed to efficiently distribute large amounts of work over a set of machines.
 Hadoop provides no security model, nor safeguards against maliciously injected data.
 Hadoop is designed to be an effective method for processing large volumes of data by connecting numerous
computers that work in parallel.
 VMware's desktop software runs on Microsoft Windows, Linux and Mac OS X.
 VMware software presents a completely virtualized set of hardware to the guest operating system.
 Eucalyptus was initially developed to supply an inexpensive, extensible and straightforward platform for
deploying an open-source cloud infrastructure for the world of academia.
 Components of Eucalyptus are (i) cloud controller (CLC), (ii) cluster controller (CC), (iii) node controller
(NC), (iv) storage controller (SC) and (v) Walrus storage controller (WSC).
 The NC is responsible for executing actions on the local resources that host VM instances, such as
launch, checking, shutdown and clean-up.
 The CC is responsible for managing a collection of NCs (a cluster) that work together.
 WSC provides a persistent and straightforward storage service. WSC uses REST and SOAP APIs, which
are compatible with the S3 API.
 SC provides persistent block storage for the instances. It creates and manages persistent block storage
devices and snapshots of volumes.
 Incoming requests from external clients or administrators are processed by the CLC, which is responsible for
handling these requests.
 Cloud computing is the technology which delivers dependable, secure, fault-tolerant, sustainable and
scalable computational services.
 The objective of the CloudSim project is to supply a generalized and extensible simulation framework that
enables seamless modelling, simulation and experimentation of cloud computing infrastructures and
application services.
 OpenNebula is one of the leading and most advanced frameworks for cloud computing.
 OpenNebula is a completely open-source toolkit for building IaaS private, public and hybrid clouds.
 Nimbus is an open-source toolkit focused on providing infrastructure as a service (IaaS).
UNIT – V
Cloud Applications – Moving Applications to the Cloud – Microsoft Cloud Services – Google Cloud
Applications – Amazon Cloud Services – Cloud Applications
MOVING APPLICATIONS TO THE CLOUD
CLOUD OPPORTUNITIES
Cloud computing presents an opportunity for business innovation and supplies a platform to turn IT
into a more productive and responsive business service.
By ensuring on-demand access to pools of trusted infrastructure and services, cloud promises to
de-couple business plans from the IT capabilities driven by them. For IT, it entails some basic
restructuring and re-skilling.
For the enterprise, it entails potential transformation in the speed, flexibility, effectiveness,
competitiveness and innovativeness of organizations.
Some of the cloud opportunities are listed in the following text.
Cloud for cost reduction: Under pressure to decrease the cost of operations, organizations of all
sizes expect their IT to deliver more value for less expense.
By eliminating up-front spend on IT and supplying IT capacity on a pay-per-use basis, cloud
promises to restructure the IT budget, moving key applications and services to multi-tenancy
architectures.
Cloud for enterprise growth: Cloud permits organizations to quickly and effortlessly scale up their
operations to support enterprise goals, in terms of:
 Expanding into new markets
 Attracting and retaining new clients
 Executing merger and acquisition strategies or speeding up time-to-market for new
goods and services
Cloud for fast innovation: Cloud promises a dramatic change in enterprises by enabling fast
innovation. Cloud removes obstacles to greater collaboration while lowering the risk and cost of
both entering new markets and experimenting with and testing new goods and services.
Cloud for enterprise agility: Cloud computing, with its flexible infrastructures and on-demand
charging, is beginning to reset expectations for the IT business. It presents the opportunity for IT to be
re-cast as an enabler of enterprise agility rather than an inhibitor of enterprise change.
Cloud opportunities come in three forms. Vendors hoping to make sales should aim at three
categories:
1. Working out an organization's cloud strategy
2. Enabling an organization's readiness to move to the cloud
3. Implementing a cloud-based solution.

APPLICATIONS IN THE CLOUD


In cloud computing, the applications are virtually limitless. Possibly everything, from generic
word processing software to customized programs designed for a specific business, could
work on a cloud computing system.
Here are a couple of reasons why one might want to rely on another computer system to
run programs and store data:
1. Clients would be able to access their applications and data from any location at any
time. They could access the cloud computing system using any computer
connected to the Internet.
2. It could bring hardware costs down. The cloud computing system would decrease the
requirement for sophisticated hardware on the client side.
Corporations that depend on computers have to ensure they have the right software in place to
accomplish their goals.
Cloud computing systems offer these organizations company-wide access to computer
applications. The businesses do not have to purchase a set of software or licenses for every
employee. Instead, the business pays a metered fee to a cloud computing company.
Applications Shifted to the Cloud
There are application vendors whose application categories are well established
in terms of reliability, security and fairness.
It is a good time to take a look at the cloud again. Here are some applications that can be
shifted to the cloud.
E-mail: E-mail is the lifeblood of numerous organizations, and as a result many businesses
are not eager to let go of it.
E-mail architecture has become rather standardized, and there is really no value-add to
keeping it inside the firewall other than mitigating regulatory concerns.
Conferencing software: Setting up and maintaining conferencing software is not fun. To make
matters worse, when it is down, it needs to be back up in a hurry.
Like e-mail, there is no advantage to keeping this inside the firewall; furthermore, the
setup and configuration can be convoluted enough to need an expert.
CRM: The decision to outsource CRM can be scary. After all, like e-mail, CRM is where many of
the company's crown jewels are stored. But there are no technical advantages or benefits in having
CRM in-house, and the licensing of many CRM systems can be a hassle. Moving to a hosted
CRM system frees us to spend more time on more significant issues.
Web hosting: Many vendors have moved to a virtualized hosting environment, and this has
dramatically increased uptime, decreased security risks and permitted them to supply much more
open and direct access to the servers.

Batch processing applications: One kind of application that shines in the cloud is the batch
processing application, for example, a data warehouse. As long as the required data can be made
accessible in the cloud without disturbing operations, the ability to quickly scale capacity in the cloud
can result in tremendous savings.

MANAGING DESKTOP AND DEVICES IN CLOUD


Desktop and device asset management helps us to choose, purchase, use and maintain desktop
hardware and software.
From a management viewpoint, one should realize that cloud computing desktop virtualization
does not eliminate the requirement for management at the desktop. Here is a list of essential
tasks for organizing desktops and mobile devices thoroughly:
 Establish a comprehensive hardware asset register
 Establish a software register
 Control software licenses
 Manage device costs

CLOUD DESKTOP
Access anywhere, everywhere, anytime: Cloud Desktops provides fully functional,
personalizable and persistent desktops without the cost and complexity associated with buying
hardware, configuring an OS or building Virtual Desktop Infrastructures (VDI). Cloud Desktops
provides secure and dependable access to desktops in the cloud from any client device.
Personalized and persistent: Cloud Desktops is neither shared nor temporary.
Personalize the desktops as required and add the applications needed. The desktop, data and
personalization remain until we delete them.
Inexpensive and hassle-free: Cloud Desktops is available for $20 a month. Pay no up-front
charges; you are not locked into any long-term contracts.
Secure and reliable: Cloud Desktops is built on Amazon EC2, which commits to
99.95% availability and provides ways of protecting hosted desktops. In addition, it simplifies and
protects the cloud desktop login using an encrypted, single-use token to authenticate users into
their desktops.
Easy to manage: The Cloud Desktops web interface provides easy creation, imaging, deletion
and tracking of desktop usage in the cloud environment. One can manage multiple users, each with
their own individual desktops.
SCIENTIFIC APPLICATIONS IN THE CLOUD
Scientific computing involves the building of mathematical models and numerical solution
methods to solve technical, scientific and engineering problems.
These models often need a huge number of computing resources to perform large-scale experiments or
to cut down the computational complexity within a reasonable time frame.
These needs have been primarily addressed with dedicated high-performance computing (HPC)
infrastructures, for example, clusters, or with a pool of networked machines in the same
department, managed by some 'CPU cycle scavenger' software like Condor.
With the advent of Grid computing, new possibilities became available to researchers, who could
run large experiments on demand.
Computing Grids introduced new capabilities like the dynamic discovery of services and
finding the best set of machines meeting the requirements of applications. The use of Grids for
scientific computing has become so successful that numerous international projects led to the
establishment of worldwide infrastructures available for computational science.
Cloud computing can address many of the aforementioned problems. By using virtualization
technologies, cloud computing offers end users a variety of services covering the whole
computing stack, from hardware to application level. Another significant feature is that
researchers can take advantage of scaling the computing infrastructure up and down
according to the application requirements and the users' budget.
By utilizing cloud-based technologies, researchers can have straightforward access to large
distributed infrastructures and can customize their execution environment. Moreover, by leasing
the infrastructure on a pay-per-use basis, they can have immediate access to needed resources and are free
to release them when no longer needed.
Aneka is a Cloud platform for developing applications that can be scaled by harnessing the
CPUs of virtual resources, desktop PCs and clusters. Its support for multiple programming models
presents researchers with distinct choices for expressing the logic of their applications: bag of
tasks, distributed threads, dataflow or MapReduce.
The service-oriented architecture presents users with a highly customizable infrastructure
that can meet the desired quality of service for applications.
Clouds are therefore emerging as a significant class of distributed computational resources, for both
data-intensive and compute-intensive applications.

MICROSOFT CLOUD SERVICES


Mega technology trends are altering how people work today. Cloud, mobile,
social and big data are all affecting how enterprises engage with their clients, partners and
workers in order to compete.
Today, IT is experiencing a move from the traditional client/server model to the cloud. Going forward,
enterprises and other organizations will look to consume and deliver IT as a service.
Small enterprises have many of the same basic IT needs as bigger
organizations, like connectivity, security, reliability, storage and desktop management. However,
small enterprises have fewer resources, so they have limited ability to make major IT capital
investments.
Microsoft offers the advantages of the cloud with the familiarity of Microsoft applications
that users, developers, IT professionals and leaders already understand and trust.
Microsoft Cloud Services include Office Web Apps, Microsoft Lync Online, Microsoft
Exchange Online, Microsoft Dynamics CRM Online, Windows Live ID, Windows Server Active
Directory, SQL Azure, Windows Azure Platform Appliance, Windows Azure, Windows Azure
Platform Appliance, SharePoint Online, Office Live Meeting and Windows Intune.
Partners can accelerate the growth of their enterprise by deploying or selling cloud solutions based
on Microsoft cloud technologies. The following opportunities help to supply the technical and business
enablers required to drive new levels of profitability and competitive advantage.

Table 29.1 Business Model and Cloud Solutions

Business Model: Sell
 Recurring revenue
 Packaged solutions
 Expanded services
 New markets and customer segments

Business Model: Build
 Repeatable IP
 Faster deployment
 Migrate solutions to the cloud
 Scale users
 Faster, less costly testing
 Extended and customized cloud offerings

Business Model: Host
 Extended product offerings
 Broader marketplace
 Increased wallet share
WINDOWS AZURE PLATFORM


Microsoft Cloud Services
Microsoft has its own cloud hosting service, Azure, but there are still other scenarios where
Microsoft software can be deployed in the cloud, and these offer a fertile product development
area for web hosting providers.
These 'Microsoft Cloud Services' (MCS) offer the perfect way to move into more of a
Microsoft Service Partner (MSP) mode, supplying a fuller variety of IT outsourcing services and
increased recurring revenues. Most organizations already have applications like SharePoint and Exchange
deployed internally, so hosted versions do not offer any pain-solving solutions. In contrast,
new cloud-based services that add value to these existing installations are very well aimed at niche
opportunities.
The platform comprises diverse on-demand services hosted in Microsoft data centres and
delivered through three product brands:
1. Windows Azure (an operating system providing scalable compute and storage facilities).
2. SQL Azure (a cloud-based, scale-out version of SQL Server).
3. Windows Azure AppFabric (a collection of services supporting applications both in the
cloud and on premise).
Windows Azure: It provides a Microsoft Windows Server–based computing environment for
applications and persistent storage for both structured and unstructured data, as well as asynchronous
messaging.
Windows Azure AppFabric: It provides a variety of services that help customers connect users and
on-premise applications to cloud-hosted applications, manage authentication and implement data
management and related features, like caching.
SQL Azure: It is essentially SQL Server supplied as a service in the cloud. The platform also
encompasses a variety of management services that permit users to control all these
resources, either through a web-based portal or programmatically. In most situations, there is a
REST-based API that can be used to define how the services will work.
Windows Azure is a platform for running Windows applications and storing data in the cloud.
Here are some example applications that might be built on Windows Azure:
 An independent software vendor (ISV) could create an application that targets
enterprise users, an approach that is often referred to as Software as a Service (SaaS). ISVs
can use Windows Azure as a base for business-oriented SaaS applications.
 An ISV might create a SaaS application that targets consumers. Windows Azure is
designed to support very scalable software, so a firm that plans to target a large
consumer market might select it as the base for a new application.
 Enterprises can use Windows Azure to build and run applications for their own employees. While
this scenario will likely not need the tremendous scale of a consumer-facing application,
the reliability and manageability that Windows Azure offers could still make it an
appealing choice.
Windows Azure is the application service in the cloud which allows Microsoft data centres to host and
run applications. All Azure services, and applications created using them, run on top of Windows
Azure.
Windows Azure has three core components:
1. Compute, which provides a computation environment with Web Role, Worker Role and VM
Role.
2. Storage, which focuses on providing scalable storage (Blobs, non-relational Tables and
Queues) for large-scale needs.
3. Fabric, which uses high-speed connections and switches to interconnect nodes comprising
several servers. Fabric resources, and the applications and services running on them, are managed by the
Windows Azure Fabric Controller service.
The Windows Azure Platform provides an API built on REST, HTTP and XML that
allows a developer to interact with the services provided by Windows Azure.
Microsoft also provides a client-side managed class library which encapsulates the functions of
interacting with the services. It also integrates with Microsoft Visual Studio, so that Visual Studio can be
used as the IDE to develop and publish Azure-hosted applications.
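To give a feel for the REST style of these services, the following minimal sketch lists the blobs in a Windows Azure storage container over plain HTTP. The account and container names are hypothetical, and a publicly readable container is assumed so that no signed Authorization header is needed; authenticated calls additionally require a signed header.

    # Minimal sketch: calling the Windows Azure Blob storage REST API.
    # Account and container names are hypothetical; a public container
    # is assumed so that no Authorization header has to be signed.
    import urllib.request

    account = "exampleaccount"
    container = "publicdocs"
    url = ("https://%s.blob.core.windows.net/%s"
           "?restype=container&comp=list" % (account, container))

    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode("utf-8"))   # XML listing of the blobs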

GOOGLE CLOUD APPLICATIONS

Google’s domain is constructed on the World Wide Web advertising. In 2010, 96% of its $29
billion income came from online ads.
Google deals subscriptions to enterprises, applying its web know-how to market
conventionally controlled by a very distinct kind of Software Company.
In September 2009, Gmail was offline for 1 hour 40 minutes. Users over the globe were
unable to access the service after the company made a mistake when updating the demand to routers
that direct queries to Gmail’s web servers.
The occurrence pursued a sequence of other, lesser Gmail outages, but Google habitually
contended that, in evaluation to client-server e-mail systems, the service was far more reliable.
Nearly a year and a half on, the contention retains up. Like Google’s search engine, Google
Apps is constructed atop a highly distributed infrastructure that disperses both data and code over
myriad servers and data centres.

GOOGLE APPLICATIONS UTILIZING CLOUD

Gmail
Gmail makes managing the e-mail system straightforward and efficient. Gmail offers 25 GB
of storage per user, powerful spam filtering, BlackBerry and Outlook interoperability and a 99.9%
uptime SLA (Service Level Agreement).
 E-mail, IM, voice and video chat: Each user gets 25 GB of e-mail and IM storage.
 Anytime, anywhere access to your e-mail: Gmail is securely powered by the
web, so you can be productive from your desk, on the road, at home and on your
mobile phone, even when you are offline.
 Sync with Android, iPhone and BlackBerry: Get the advantages of Apps on the leading
mobile platforms.
 Search and find e-mails instantly: Spend less time managing e-mail and locate messages
quickly with Google-powered search for your inbox.
 Get less spam: Gmail's powerful spam filtering helps you concentrate on the
important messages.
Google Calendar
Organizing a schedule should not be a burden. With Google Calendar, it is straightforward to
keep track of life's significant events all in one place.
 Easily schedule appointments: Overlay multiple calendars to see when people are
available. Google Calendar sends invitations and manages replies.
 Integrate with the e-mail system: Google Calendar is integrated into Gmail and
interoperable with popular calendar applications.
 Share project calendars: Calendars can be shared company-wide or with selected
co-workers. A variety of sharing permission controls help to maintain security and privacy.
 Access with your mobile device: View and edit event details, add new events
and invite guests on mobile devices like the BlackBerry and iPhone. Even receive
calendar notifications by SMS.
 Publish calendars: Publicize external business events by publishing a calendar to
make it searchable in the Google Calendar gallery. Easily embed calendars into web pages.
Google Docs
Google Docs is an easy-to-use online word processor, spreadsheet and presentation editor that
enables users to create, store and share documents instantly and securely and to collaborate online in less
time. Users can create new documents from scratch or upload existing documents, spreadsheets and
presentations. There is no software to download; all work is stored securely online and can be
accessed from any computer.
 Works across operating systems: Google Docs works in the browser on PC, Mac and
Linux computers and supports popular formats, for example, .doc, .xls, .ppt and .pdf.
 Easily upload and share files: Files stored on Google Docs are always accessible and
backed up online.
 Secure access controls: Administrators can manage document sharing permissions
system-wide, and document owners can share and revoke document access at any time.
Google Sites
Google Sites is the easiest way to make information accessible to people who need
quick, up-to-date access. People can work together on a site to add file attachments,
information from other Google applications (like Google Docs, Google Calendar, YouTube
and Picasa) and new free-form content. Google Sites is accessible from any internet-connected
computer.
 Organize information in a central place: Use Google Sites to centralize documents,
spreadsheets, presentations, videos, slideshows and more to help keep teams
organized.
 Anytime, anywhere access: Google Sites is securely powered by the web, so you can
access pages from your office desk, on the move, at home and on your mobile phone.
 Works across operating systems: Google Sites works in the browser on
PC, Mac and Linux computers.
 System and site-level security controls: Administrators can manage site sharing
permissions across the enterprise, and authors can share and revoke file access at any
time.
Google Groups
The Google Groups service creates a Google Group, which is a user-owned group. Google
Groups not only allows us to manage and archive a mailing list, but also provides a means
for effective communication and collaboration with group members.
Unlike other free mailing list services, Google Groups offers generous storage limits, customizable
pages and unique management options.
Google Groups is all about helping users connect with people, access information and communicate
effectively over e-mail and on the web.
 Fast setup: Create and manage groups without burdening IT.
 Sharing with a group: Employees can share docs, calendars, sites, shared folders and
videos with a group instead of with individuals.
 Searchable archives: Group members can access and search archives of messages
posted to their e-mail lists to quickly find topics of interest.
Google Video
The Google Video index is the most comprehensive on the web, comprising
millions of videos indexed and available for viewing. Using Google Video, one can search and
watch an ever-growing collection of video presentations, movie clips, music videos,
documentaries, personal productions and more from all over the web.
 A video channel for your business: Video sharing makes valued communications like
internal training and company announcements more engaging and effective.
 Keep videos secure and private: Employees can securely share videos with co-workers
without exposing confidential information.
 Anytime, anywhere access: Google Video is securely powered by the web, so you can
view videos from your office desk, on the road and at home.
 Works across operating systems: Google Video works in the browser on PC, Mac
and Linux computers.
GOOGLE APP ENGINE
Google App Engine is Google’s stimulating application development and hosting platform in
the cloud. With it, the client can construct and establish web applications on Google’s scalable high-
traffic infrastructure.
App Engine carries apps written in Python or Java and they will execute on servers that use
the identical expertise that forces Google’s websites for pace and reliability.
App Engine applications are so straightforward to construct and scale as the traffic and data
grows. To maintain App Engine, there are no servers available. It helps the user to upload the
application.
Google App Engine devotes you to get access to the identical construction blocks that Google
values for its own applications.
It makes it simpler to construct an application that sprints reliably, even under a hefty load
and with a large amount of data. The development environment encompasses the following features:
 Dynamic web serving, with full support for common web technologies.
 Persistent storage with queries, sorting and transactions, powered by Bigtable and
GFS.
 Automatic scaling and load balancing.
 Google APIs for authenticating users and sending e-mail.
 A fully featured local development environment.
Google App Engine bundles these building blocks and takes care of the infrastructure stack,
leaving you more time to focus on writing code and improving your application.
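To give a flavour of the Python runtime, here is a minimal sketch of a request handler written against the webapp framework bundled with the early App Engine Python SDK; the handler name and route are illustrative.

    # Minimal sketch of a Google App Engine (Python) request handler,
    # using the webapp framework bundled with the early App Engine SDK.
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            # Respond to GET / with a plain-text greeting.
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write('Hello from App Engine!')

    application = webapp.WSGIApplication([('/', MainPage)], debug=True)

    def main():
        run_wsgi_app(application)   # App Engine invokes this entry point

    if __name__ == '__main__':
        main()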
Google Apps for Business
Google Apps for Business offers powerful cloud-based messaging and collaboration tools to
organizations from tiny to huge.
Google Apps is 100% hosted by Google, which decreases IT costs, minimizes maintenance and
administration and simplifies initial setup. With Google Apps for Business, the client gets:
 Customized e-mail addresses
 Mobile e-mail, calendar and IM access
 No added hardware or software
 Industry-leading spam filtering
 24/7 e-mail and telephone support
 99.9% uptime assurance
 Dramatic cost savings
Choosing Google Apps not only saves money, but also saves an unbelievable amount of time, as
the entire IT group can focus on processes and forward thinking that can really advance the way the
enterprise operates.
Google Apps for Education
Google Apps for Education offers value that is yet to be matched in the world of cloud-based
messaging and collaboration. For $0/user/year, staff, employees and students of educational
organizations at all levels can leverage this huge set of customizable communication and collaboration
tools.
Tools like Google Sites and Google Groups are ready-made for the world of learning,
enabling information exchange and instruction at entirely new levels.
Google Apps incorporates the newest technologies and establishes best practices for data-centre
administration, network application security and data integrity. Eight ways in which Google Apps
benefits the campus are listed herewith:
 Students will love you for it
 Free up your IT
 Easy to deploy
 Save money
 Google defends your privacy
 Security as powerful as Google
 Innovation in real-time
 Collaborate globally
Google Apps for Government
Google Apps for Government presents all of the same advantages that Google Apps for
Business does, but with an added level of security that stands up to even the highest levels of
government standards.
With Google Apps, government departments benefit from the scale and redundancy of
distributed data centres around the globe.

AMAZON CLOUD SERVICES


Amazon commenced its Web Services in 2006, and they are widely used and accepted by the
industry as a general cloud initiative. Amazon Web Services are supplied by Amazon, of which
Amazon EC2 and Amazon S3 are the most popular services.
The Amazon Elastic Compute Cloud, or EC2, permits users to pay for what they use. Servers are
scaled up on demand and scaled down when usage ceases.
These instances are simply virtual machines built on top of the Xen hypervisor that are managed by
Amazon's internal resource management facility.
The VMs are better known as elastic compute units (ECUs), which are pre-packaged and can be
'ordered': one can easily pick and choose what one wants and pay for it when done.
Pricing is based on the kind of ECU and the size of data moved or the time used.
UNDERSTANDING AMAZON WEB COMPONENTS AND SERVICES
Cloud computing means scalable computing resources supplied as a service from outside one's own
environment on a pay-per-use basis. One can access any of the resources that reside in the
'cloud' at any time and from any location over the Internet. The user does not have to care about how
things are maintained behind the scenes in the cloud.
Cloud computing takes its name from the widespread portrayal of the Internet, or IP accessibility,
as a cloud in technology architecture diagrams.
Cloud computing is also called utility computing or grid computing. Cloud computing is a
paradigm shift in how we architect and deliver scalable applications.

Amazon Web Services


Amazon Web Services give programmatic access to Amazon's ready-to-use computing
infrastructure.
The robust computing platform that was built and refined by Amazon is now available
to anyone who has access to the internet.
Amazon provides several web services, building blocks that fulfil some of the core needs
of most systems: storage, computing, messaging and datasets. Amazon Web Services can help us
architect scalable systems by providing:
 Reliability: The services run in Amazon's battle-tested, highly available data centres that
run Amazon's own business.
 Security: Basic security and authentication methods are available out of the box,
and customers can enhance them as needed by layering application-specific
security on top of the services.
 Cost benefits: No fixed charges or support costs.
 Ease of development: Simple APIs let us harness the full power of this virtual
infrastructure, with libraries available in the most widely used programming
languages.
 Elasticity: Scale the computing resources based on demand.
 Cohesiveness: The four core building blocks from which services are created
(storage, computing, messaging and datasets) work well together and give
a complete solution across a large variety of application domains.
 Community: Tap into the vibrant and dynamic customer community that is propelling the
extensive adoption of these web services and is bringing forward unique applications
built on this infrastructure.
Amazon provides standards-based SOAP and REST interfaces for interacting with each of the
services. Developer libraries, either from Amazon or third parties, are available in multiple
languages, including Ruby, Python, Java, Erlang and PHP, for communicating with these services.
Amazon S3 (Storage)
Amazon Simple Storage Service (S3) provides a web service interface for the storage and retrieval
of data.
The data can be of any kind and can be stored and accessed from any location over the internet.
Users can store an unlimited number of objects in S3, and the size of each stored object
can vary from 1 byte to 5 GB.
The data is stored securely using the same data storage infrastructure Amazon uses to
power its worldwide network of e-commerce websites.
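A minimal sketch of storing and retrieving an object with the boto library (one of the clients mentioned earlier) follows; the bucket and key names are hypothetical, and AWS credentials are assumed to be available in boto's configuration or the environment.

    # Minimal sketch: storing and retrieving an S3 object with classic boto.
    # The bucket name is hypothetical; AWS credentials are assumed to be
    # available in the environment or boto configuration.
    import boto
    from boto.s3.key import Key

    conn = boto.connect_s3()                     # uses configured credentials
    bucket = conn.create_bucket("example-unit5-bucket")

    key = Key(bucket)
    key.key = "notes/hello.txt"
    key.set_contents_from_string("Hello, S3!")   # upload (1 byte to 5 GB)

    print(key.get_contents_as_string())          # retrieve the object again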
Amazon EC2 (Elastic Computing)
Amazon EC2 is a web service that permits us to provision virtual machines within minutes and
effortlessly scale the capacity up or down based on demand.
These instances are based on Linux and can run any application or software. The EC2
environment itself is built on top of the open-source Xen hypervisor.
Amazon permits us to create Amazon Machine Images (AMIs) that act as templates for the
instances. Access to these can be controlled by specifying the permissions.
Amazon EC2 presents true web-scale computing, which makes it straightforward to scale
computing resources up and down. Amazon offers five kinds of servers, varying from
commodity single-core x86 servers to eight-core x86_64 servers.
Amazon SQS (Simple Queue Service)
Amazon Simple Queue Service (SQS) provides access to the dependable messaging
infrastructure used by Amazon.
Users can send and receive messages from any location using straightforward REST-based
HTTP requests.
Messages are stored by Amazon across multiple servers and data centres to provide the redundancy
and reliability required of a messaging system. Each message can contain up to 8 KB of text
data.
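The following minimal sketch sends and receives a message with the boto library; the region, queue name and message body are hypothetical, and credentials are assumed to be configured in the environment.

    # Minimal sketch: sending and receiving an SQS message with classic boto.
    # The queue name is hypothetical; credentials come from the environment.
    import boto.sqs
    from boto.sqs.message import Message

    conn = boto.sqs.connect_to_region("us-east-1")
    queue = conn.create_queue("example-work-queue")

    msg = Message()
    msg.set_body("process order #42")   # up to 8 KB of text
    queue.write(msg)                    # enqueue

    received = queue.read(visibility_timeout=30)
    if received is not None:
        print(received.get_body())
        queue.delete_message(received)  # acknowledge by deleting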
Amazon SimpleDB
Amazon SimpleDB (SDB) is a web service for storing, processing and querying structured
datasets. It is not a relational database in the traditional sense, but a highly available,
schema-less, less-structured data store in the cloud.
SDB is straightforward to use and provides most of the functions of a relational database. Its
upkeep is much easier than that of a usual database.
Administrative tasks are handled by Amazon. The data is automatically indexed by
Amazon and is accessible anytime from anywhere.
A key benefit of not being bound to schemas is the ability to insert data on the fly and
add new columns or keys dynamically.
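A minimal sketch of this schema-less style using the boto library follows; the domain, item and attribute names are hypothetical, and credentials are assumed to be configured in the environment.

    # Minimal sketch: writing and querying SimpleDB items with classic boto.
    # Domain, item and attribute names are hypothetical illustrations.
    import boto

    conn = boto.connect_sdb()               # uses configured credentials
    domain = conn.create_domain("inventory")

    # Attributes can be added on the fly; no fixed schema is required.
    domain.put_attributes("item-001", {"colour": "red", "stock": "12"})
    domain.put_attributes("item-002", {"colour": "blue", "size": "XL"})

    # SimpleDB supports a SQL-like select syntax over a domain.
    for item in domain.select("select * from `inventory` where colour = 'red'"):
        print(item.name, dict(item))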

ELASTIC COMPUTE CLOUD (EC2)


Amazon Elastic Compute Cloud is the most important component of Amazon.com's cloud
computing platform, Amazon Web Services.
EC2 permits scalable deployment of applications by supplying a web service. Clients can use an
Amazon Machine Image to create a virtual machine containing any software
desired.
A client can create, launch and terminate server instances as required, paying by the hour for active
servers, hence the term 'elastic'. EC2 provides users with control over the geographical
location of instances, which allows for latency optimization and high levels of redundancy.
Amazon's features are:
 A service level agreement for EC2
 Microsoft Windows in beta form on EC2
 Microsoft SQL Server in beta form on EC2
 Plans for an AWS (Amazon Web Services) management console
 Plans for load balancing, auto-scaling and cloud monitoring services
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides
resizable computing capacity that is used to build and host software systems.
The major components of EC2 are briefly described in the following.
Amazon Machine Images and Instances
An Amazon Machine Image (AMI) is a template that contains a software configuration
(e.g., operating system, application server and applications).
From an AMI, a client can launch instances, which are running copies of the AMI. A client
can also launch multiple instances of an AMI, as shown in Figure 31.1.

Figure 31.1 Amazon Machine Images and Instances


Amazon publishes many AMIs that contain widespread software configurations for public
use. In addition, members of the AWS developer community have released their own custom
AMIs.
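A minimal sketch of launching an instance from an AMI with the boto library follows; the region, AMI ID and key-pair name are hypothetical placeholders.

    # Minimal sketch: launching and terminating an EC2 instance with classic
    # boto. The AMI ID and key-pair name are hypothetical placeholders.
    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")

    reservation = conn.run_instances(
        "ami-12345678",            # hypothetical AMI ID
        instance_type="m1.small",
        key_name="example-keypair",
    )
    instance = reservation.instances[0]
    print("Launched", instance.id)

    # ...use the instance, then release it so billing stops:
    conn.terminate_instances(instance_ids=[instance.id])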
Storage
When using EC2, data has to be stored. The two most commonly used storage types are:
1. Amazon Simple Storage Service (Amazon S3)
2. Amazon Elastic Block Store (Amazon EBS) volumes

CLOUD APPLICATIONS
Major companies including Amazon, Google, IBM, Sun, Cisco, Dell, HP, Intel, Novell
and Oracle have invested in cloud computing and offer individuals and enterprises a variety of cloud-
based solutions.
CLOUD-BASED SOLUTIONS
Social Networking
Perhaps the best-known use of cloud computing, which does not strike people as 'cloud
computing' at first glance, is social networking websites, including Facebook,
LinkedIn, MySpace, Twitter and many others. The main idea of social networking is to
find people you already know, or people you would like to know, and share your information
with them.
E-mail
Some of the largest cloud computing services are web-based e-mail. Using a cloud computing
e-mail solution offloads the mechanics of hosting an e-mail server and eases maintaining it.
Document/Spreadsheet/Other Hosting Services
Just like Google Docs, several internet services like Zoho Office permit us to hold and
edit documents online.
By doing so, the documents are accessible from any location, and one can share the documents
and collaborate on them. Multiple persons can work on the same document simultaneously. A new
online project management tool, Onit, is for 'anyone who manages projects: big, small,
enterprise, legal'.
Yahoo's Flickr and Google's Picasa offer hosting for photos that can be shared with
friends, family and the world. People can comment on the photos, much like they can on
Facebook, but these dedicated photo-hosting services offer some perks for photographers.

CLOUD COMPUTING SERVICES


Google Apps
Reliable, secure web-based office tools for an enterprise of any size. Powerful,
intuitive applications like Gmail, Google Calendar and Google Docs can help to decrease IT
costs and help workers to collaborate more efficiently, all for just $50 per user per year.
PanTerra Networks
PanTerra Networks is a premier provider of cloud-based unified Software-as-a-Service
(SaaS) communication solutions for small and medium-sized enterprises.
The company's WorldSmart solution is delivered from the cloud through a 100% browser-
based client, eliminating any premise-deployed hardware or software.
WorldSmart provides unified communication services for unlimited digital voice, video and fax,
instant messaging, mobile text and e-mail, all with presence, through a single client and
administrative interface.
Cisco WebEx Mail
Cisco WebEx Mail reduces the burden of e-mail administration so IT can focus on
strategic tasks rather than routine ones. Yet administrators stay completely in control through a web-
based console, permitting them to adapt to ever-changing organizational needs.
Cisco WebEx Mail includes sophisticated migration tools that simplify the migration
process. The solution interoperates with an existing e-mail infrastructure as well as archiving and
security solutions.
This minimizes disruptions throughout the transition to a hosted e-mail solution.
Yahoo Zimbra
Zimbra is a next-generation collaboration server that gives organizations greater overall
flexibility and simplicity with integrated e-mail, contacts, calendaring, sharing and document
management, in addition to mobility and desktop synchronization for users on any computer.
The Zimbra Collaboration Suite's sophisticated web application and server are
built on open standards and technologies to deliver unparalleled per-user scalability and a
lower overall Total Cost of Ownership (TCO).
IBM LotusLive iNotes
LotusLive iNotes e-mail is a business-class messaging solution for everyone. Remote workers,
retail employees and anyone who does not work behind a desk will appreciate the straightforward
access to business e-mail.
With web-based e-mail, all workers have real-time e-mail access from any web
browser and Internet connection. In addition to the web-based interface, all e-mail accounts are
provided with POP, authenticated SMTP and IMAP capabilities for use with e-mail clients, for
example, Lotus Notes or Microsoft Outlook.
ElasticEmail
ElasticEmail makes e-mail sending simpler for both the developer and
the enterprise manager of a cloud application.
Several cloud platforms, for example, Windows Azure and Amazon EC2, do not
supply an e-mail delivery service and may even set limits on e-mail sending.
ElasticEmail provides direct e-mail sending through a straightforward REST API.
Hence, rather than having to set up and configure an SMTP mail server or service, one can
start sending e-mail immediately.
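To illustrate the general shape of such a REST call, here is a minimal sketch that posts a message over HTTP from Python; the endpoint, parameter names and API key are hypothetical placeholders, not ElasticEmail's documented interface.

    # Minimal sketch of sending mail through a hypothetical REST e-mail API.
    # Endpoint, parameter names and the API key are illustrative placeholders.
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({
        "apikey": "YOUR-API-KEY",            # hypothetical credential
        "from": "app@example.org",
        "to": "user@example.org",
        "subject": "Welcome",
        "body": "Hello from the cloud!",
    }).encode("utf-8")

    req = urllib.request.Request("https://api.example-mailer.com/send",
                                 data=params)
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode("utf-8"))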
Microsoft Exchange Online
Microsoft Exchange Online is a web-based version of the ubiquitous on-premise e-mail
system. Features include the ability to log on to the account and wipe a mobile phone of
sensitive data if it is lost or stolen. Drawback: the program works best on Internet
Explorer.
Salesforce.com
Salesforce.com is a vendor of customer relationship management (CRM) solutions, which it
delivers to enterprises over the Internet using the software as a service model. It is used to
keep track of and strengthen a company's relationships with its existing and prospective clients.

SUMMARY OF UNIT V

 For IT organizations, cloud computing entails potential transformation in terms of speed,
flexibility, effectiveness and competitiveness.
 Some of the cloud possibilities are (i) cloud for cost reduction, (ii) cloud for enterprise growth, (iii)
cloud for fast innovation and (iv) cloud for enterprise agility.
 Cloud adoption is accompanied by business and operational challenges surrounding expertise, security,
total cost of ownership (TCO) and the intersection with business strategy and operations.
 The cloud has three elements: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and
Infrastructure-as-a-Service (IaaS) and a fourth element: IT services on the cloud.
 Some applications that can be shifted to the cloud are (i) e-mail, (ii) conferencing software, (iii)
CRM, (iv) web hosting and (v) batch processing applications.
 Cloud Desktops provides fully functional, personalized and persistent desktops without the
cost and complexity in terms of hardware and OS.
 Cloud Desktops Web interface provides easy design, imaging, deleting and tracking desktop usage
in the cloud environment.
 Scientific computing involves the building of mathematical models and numerical solution methods
to solve technical, scientific and engineering problems.
 Computing Grids introduced new capabilities like the dynamic discovery of services and
finding the best set of machines meeting the requirements of applications.
 Different solutions are available to move on from traditional research Grids and adopt the Cloud
computing paradigm.
 Google AppEngine and Microsoft Azure concentrate more on application level virtualization.
 Aneka is a Cloud platform for developing applications that can be scaled by harnessing the CPUs
of virtual resources, desktop PCs and clusters.
 Clouds are appearing as a significant class of distributed computational assets for both data-
intensive and compute-intensive applications.
 Cloud promises to make IT not just cheaper, but also much faster, easier, more flexible
and more effective.
 Microsoft offers the advantages of the cloud with the familiarity of Microsoft applications that users,
developers, IT professionals and leaders already understand and trust.
 Cloud-based services that add value to these existing installations are very well aimed at niche
opportunities.
 The Microsoft cloud computing platform uses Windows Azure to build and scale web
applications through its data centres.
 Windows Azure provides a Microsoft Windows Server–based computing environment for
applications and persistent storage for both structured and unstructured data, as well as
asynchronous messaging.
 The Windows Azure AppFabric provides a variety of services that help customers connect users
and on-premise applications to cloud-hosted applications, and that handle authentication and data management.
 SQL Azure is essentially SQL Server supplied as a service in the cloud.
 Windows Azure is the application service in cloud which allows Microsoft data centres to host and
run applications.
 Windows Azure has three centre components (i) compute, (ii) storage and (iii) fabric.
 The Windows Azure Compute Service can run numerous distinct types of applications.
 Google’s domain is constructed on web advertising.
 Google deals subscriptions to enterprises, applying its web know-how to the market controlled in a
very distinct kind.
 Gmail makes managing the e-mail system straightforward and efficient.
 With Google Calendar, it is straightforward to keep track of life's significant events all in one
place.
 Google Docs is an easy-to-use online word processor, spreadsheet and presentation editor that
enables users to create, store and share documents instantly and securely and to collaborate online in less time.
 Google Sites is the easiest way to make information accessible to people who need quick,
up-to-date access.
 Google applications utilizing cloud are (i) Gmail, (ii) Google Calendar, (iii) Google Docs, (iv)
Google Sites, (v) Google Groups and (vi) Google Video.
 Choosing Google Apps not only saves money, but also saves an unbelievable amount of time.
 Google Apps for Education offers value that is yet to be matched in the world of cloud-based
messaging and collaboration.
 Amazon Web Services are supplied by Amazon, of which Amazon EC2 and Amazon S3 are
the most popular services.
 Cloud computing means scalable computing resources supplied as a service from outside one's own
environment on a pay-per-use basis.
 Cloud computing takes its name from the widespread portrayal of the internet, or IP accessibility,
as a cloud in technology architecture diagrams.
 Cloud computing is a paradigm shift in how we architect and deliver scalable applications.
 Amazon Web Services can help us architect scalable systems by providing (i) reliability, (ii)
security, (iii) cost benefits, (iv) ease of development, (v) elasticity, (vi) cohesiveness and (vii)
community.
 Amazon Simple Storage Service (S3) provides a web services interface for the storage and retrieval
of data.
 Amazon EC2 is a web service that permits us to provision virtual machines within minutes and
effortlessly scale the capacity up or down based on demand.
 Amazon Simple Queue Service (SQS) provides access to the dependable messaging
infrastructure used by Amazon.
 Amazon SimpleDB (SDB) is a web service for storing, processing and querying structured datasets.
 Amazon S3 suits those who have the following needs: (i) running out of bandwidth, (ii)
better scalability, (iii) storing documents online and (iv) easier document retrieval and sharing.
 Cloud computing is used for social networking websites, such as Facebook, LinkedIn,
MySpace and Twitter.
 The goal of social networking is to find people you already know and share your information
with them.
 Some of the large-scale cloud computing services are web-based e-mail.
 Using a cloud computing e-mail solution offloads the hosting of an e-mail server and
eases maintaining it.
 Some cloud computing services are (i) Google Apps, (ii) PanTerra Networks, (iii) Cisco WebEx
Mail, (iv) Yahoo Zimbra, (v) IBM LotusLive iNotes, (vi) ElasticEmail and (vii) Microsoft
Exchange Online.
 Cloud computing can enable banks to reuse IT assets more effectively.
 Cloud computing gives banks opportunities to build a more flexible, nimble and customer-
centric business model.
 In general, several factors determine how much a bank can save by utilizing the cloud.
 Cloud computing is a most convincing use case for banks, and many innovative services can be
created.
 Process clouds and collaboration clouds permit professionals to connect to any branch location and
become virtual advisors.
 The cloud-based solution provider should have powerful capabilities in 14 critical areas.
