CC Unit - 3
A utility model is a statutory exclusive right granted for a limited period of time (the so-called "term")
in exchange for an inventor providing sufficient teaching of his or her invention to permit a person of
ordinary skill in the relevant art to perform the invention. The rights conferred by utility model laws
are similar to those granted by patent laws, but are more suited to what may be considered as
"incremental inventions". Specifically, a utility model is a "right to prevent others, for a limited period
of time, from commercially using a protected invention without the authorization of the right holder(s)
Some of the economies of scale and cost savings of cloud computing are also akin to those in
electricity generation. Through statistical multiplexing, centralized infrastructure can run at higher
utilization than many forms of distributed server deployment. One system administrator, for example,
can tend over 1,000 servers in a very large data center, while his or her equivalent in a medium-sized
data center typically manages only a fraction of that number.
By moving data centers closer to energy production, cloud computing creates additional cost savings.
It is far cheaper to move photons over the fiber optic backbone of the Internet than it is to transmit
electrons over our power grid. These savings are captured when data centers are located near low-cost
power sources like the hydroelectric dams of the northwest U.S.
Along with its strengths, however, the electric utility analogy also has three technical weaknesses and
three business model weaknesses.
Technical Weaknesses of the Utility Model
The Pace of Innovation: The pace of innovation in electricity generation and distribution happens on
the scale of decades or centuries. In contrast, Moore's Law is measured in months. In 1976, the basic
computational power of a $200 iPod would have cost one billion dollars, while the full set of
capabilities would have been impossible to replicate at any price, much less in a shirt pocket.
Unlike managing stable technologies, managing innovative and rapidly changing systems requires the
attention of skilled, creative people, even when the innovations are created by others.
The Limits of Scale: The rapid availability of additional server instances is a central benefit of cloud
computing, but it has its limits. In the first place, parallel problems are only a subset of difficult
computing tasks: some problems and processes must be attacked with other architectures of
processing, memory, and storage, so simply renting more nodes will not help. Secondly, many
business applications rely on the consistent transactions provided by relational database management
systems (RDBMSs). The CAP theorem states that a distributed data store cannot simultaneously
guarantee consistency, availability, and partition tolerance, so strong consistency is hard to combine
with massive scale-out. The lack of scalable cloud data storage with an API as rich as SQL makes it
difficult for high-volume, mission-critical transaction systems to run in cloud environments.
Evolving IT Infrastructure:
A major shift is under way in enterprise computing that will change the IT infrastructure forever. It is now clear
how a product like VMware changed server computing from physical to virtual and overhauled the
makeup and footprint of the server room. This revolutionary change in server computing, though epic,
was focused in scope and only affected a single segment of the IT infrastructure. Leveraging this
momentum, the next wave of change in IT infrastructure will trigger a transition from physical to
virtual, hardware-driven to software-driven and in-house to the Cloud.
Although software can only run on hardware, the role of hardware will diminish as the intelligence is
removed from hardware devices and centralized in software layers separated from the hardware they
control. Just as server virtualization allowed a single physical server to be carved into multiple,
compartmentalized 'virtual servers', all running on and sharing the resources of a single physical host,
other virtualization technologies are gaining traction, and all of them are driven by software.
Software Defined Networks move the control plane, the decision-making software, from the network
switch to a common management platform that controls the entire network. Cloud providers will jump
start enterprise adoption as these companies build their hybrid clouds with integrations into both
public and private cloud services.
Another trend that will affect the IT infrastructure is storage virtualization, or virtual SANs, which
have been around for a while now. These virtual SANs do not face the same throughput limitations as
the existing controller-based SANs. Again, this technology is driven by cloud providers looking to
build massively scalable storage solutions that cost-effectively meet the needs of their customers. Once
it is commonplace in the cloud, enterprises will feel comfortable deploying it on their internal hybrid
cloud platforms and will never return to the traditional controller-based SANs currently deployed.
Software to the Cloud
Another movement gaining traction is the Software as a Service (SaaS) Cloud delivery model. The days
of IT deploying, managing and maintaining all of its software applications in-house are dwindling. Many
enterprise applications that have traditionally been housed internally are already moving to the Cloud.
This greatly reduces the time IT needs to devote to deploying, upgrading and maintaining these
applications. SaaS is also gaining momentum with other mission-critical applications such as SAP and
CRM, which traditionally required expensive hardware and specially trained administrators to run and
support. Moving forward, the breadth of applications available via SaaS will be fueled by companies
such as Microsoft, IBM and others who are committed to making their entire application portfolio
available via a SaaS delivery model.
Why would anyone want to rely on another computer system to run programs and store data? Here are
just a few reasons:
Clients would be able to access their applications and data from anywhere at any time. They
could access the cloud computing system using any computer linked to the Internet. Data
wouldn't be confined to a hard drive on one user's computer or even a corporation's internal
network.
It could bring hardware costs down. Cloud computing systems would reduce the need for
advanced hardware on the client side. You wouldn't need to buy the fastest computer with the
most memory, because the cloud system would take care of those needs for you. Instead, you
could buy an inexpensive computer terminal. The terminal could include a monitor, input
devices like a keyboard and mouse and just enough processing power to run the middleware
necessary to connect to the cloud system. You wouldn't need a large hard drive because you'd
store all your information on a remote computer.
Corporations that rely on computers have to make sure they have the right software in place to
achieve their goals. Cloud computing systems give these organizations company-wide access to
computer applications. The companies don't have to buy a set of software or software licenses
for every employee. Instead, the company could pay a metered fee to a cloud computing
company.
Servers and digital storage devices take up space. Some companies rent physical space to store
servers and databases because they don't have it available on site. Cloud computing gives these
companies the option of storing data on someone else's hardware, removing the need for
physical space on the front end.
Corporations might save money on IT support. Streamlined hardware would, in theory, have
fewer problems than a network of heterogeneous machines and operating systems.
If the cloud computing system's back end is a grid computing system, then the client could take
advantage of the entire network's processing power. Often, scientists and researchers work with
calculations so complex that it would take years for individual computers to complete them. On
a grid computing system, the client could send the calculation to the cloud for processing. The
cloud system would tap into the processing power of all available computers on the back end,
significantly speeding up the calculation.
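As a toy illustration, the Python sketch below splits one large calculation across a pool of local worker processes; the pool stands in for the many back-end machines of a grid, and the function and work sizes are purely illustrative.

```python
# Toy sketch (not a real grid or cloud API): split one large calculation across a
# pool of worker processes, standing in for the many back-end machines of a grid.
from multiprocessing import Pool

def heavy_calculation(n: int) -> int:
    """Placeholder for an expensive, independent piece of work."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    work_items = [2_000_000] * 8          # eight independent chunks of work
    with Pool() as pool:                  # one worker per available CPU by default
        partial_results = pool.map(heavy_calculation, work_items)
    print(sum(partial_results))           # combine the partial results
```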
(or)
Cloud computing is a process that entails accessing services, including storage, applications and
servers, through the Internet, making use of another company's remote services for a fee. This enables a
company to store and access data or programs virtually, i.e. in a cloud, rather than on local hard drives
or servers.
The benefits include the cost advantage of the commoditization of hardware (such as on-demand,
utility computing, cloud computing, software-oriented infrastructure), software (the software-as-a-
service model, service-oriented architecture), and even business processes.
The other catalysts were grid computing, which allowed major problems to be addressed via parallel
computing; utility computing, which enabled computing resources to be offered as a metered service;
and SaaS, which allowed network-based subscriptions to applications. Cloud computing, therefore,
owes its emergence to all these factors.
The three prominent types of cloud computing for businesses are Software-as-a-Service (SaaS), which
requires a company to subscribe to it and access services over the Internet; Infrastructure-as-a-Service
(IaaS), a solution in which large cloud computing companies deliver virtual infrastructure; and
Platform-as-a-Service (PaaS), which gives the company the freedom to build its own custom
applications for use by its entire workforce.
Clouds are of four types: public, private, community, and hybrid. Through a public cloud, a
provider can offer services, including storage and applications, to anybody via the Internet.
They can be provided free or charged on a pay-per-usage basis.
Public cloud services are easier to install and less expensive, as costs for application, hardware
and bandwidth are borne by the provider. They are scalable, and users avail themselves only of
those services that they use.
A private cloud is also referred to as an internal cloud or corporate cloud; it is called so because it
offers a proprietary computing architecture through which hosted services can be provided to a
restricted number of users protected by a firewall. A private cloud is used by businesses that
want to wield more control over their data.
A community cloud consists of resources shared by more than one
organization whose cloud needs are similar.
A combination of two or more clouds is a hybrid cloud. Here, the clouds used are a
combination of private, public, or community.
Cloud computing is now being adopted by mobile phone users too, although there are
limitations, such as storage capacity, battery life and restricted processing power.
Some of the most popular cloud applications globally are Amazon Web Services (AWS), Google
Compute Engine, Rackspace, Salesforce.com, IBM Cloud Managed Services, among others. Cloud
services have made it possible for small and medium businesses (SMBs) to be on par with large
companies.
Mobile cloud computing is being harnessed through a new infrastructure that brings together mobile
devices and cloud computing. This infrastructure allows the cloud to execute massive tasks and store
huge volumes of data, as data processing and storage take place not within the mobile devices but
beyond them. Mobile computing is getting a fillip as customers want to use their companies'
applications and websites wherever they are.
The emergence of 4G, Worldwide Interoperability for Microwave Access (WiMAX), and other
technologies is also scaling up the connectivity of mobile devices. In addition, new mobile technologies
such as CSS3, Hypertext Markup Language (HTML5), hypervisors for mobile devices and Web 4.0 will
only power the adoption of mobile cloud computing.
The main benefits for companies using cloud computing are that they need not buy any
infrastructure, thus lowering their maintenance costs. They can discontinue the services once their
business demands have been met. It also gives firms the comfort of having huge resources at their
beck and call if they suddenly acquire a major project.
On the other hand, transferring their data to the cloud makes businesses share responsibility for data
security with the provider of cloud services. This means that the consumer of cloud services places a
great deal of trust in the provider of those services. Cloud consumers also have less control over the
services they use than over on-premise IT resources.
Continuum of Utilities:
There is an important link between fog computing and cloud computing. Fog computing is often called
an extension of the cloud to where connected IoT 'things' are or, in its broader scope, of "the
Cloud-to-Thing continuum" where data-producing sources are. Fog computing has been evolving since
its early days. As you'll read and see below, fog computing is seen as a necessity not only for IoT but
also for 5G, embedded artificial intelligence (AI) and 'advanced distributed and connected systems'.
Fog computing is designed to deal with the challenges of traditional cloud-based IoT systems in
managing IoT data and data generated by sources along this cloud-to-thing continuum. It does so by
decentralizing data analytics, applications and management into the network with its distributed and
federated compute model; in other words: in IoT at the edge.
According to IDC, 43 percent of all IoT data will be processed at the edge before being sent to a data
center by 2019, further boosting fog computing and edge computing. And when looking at the impact
of IoT on IT infrastructure, 451 Research sees that most organizations today process IoT workloads at
the edge to enhance security, process real-time operational action triggers, and reduce IoT data storage
and transport requirements. This is expected to change over time as big data and AI drive more heavy
data processing at the edge.
In other words: in fog computing, the fog IoT application decides the best place for data analysis,
depending on the data, and then sends the data to that place.
If the data is highly time-sensitive (typically below or even very far below a second) it is sent to the
fog node which is closest to the data source for analysis. If it is less time-sensitive (typically seconds to
minutes) it goes to a fog aggregation node, and if it can essentially wait, it goes to the cloud for, among
other things, big data analytics.
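A minimal sketch of this routing rule follows; the latency thresholds and destination names are illustrative and are not taken from any fog computing standard.

```python
# Hedged sketch: route a reading to the analysis tier that matches its time sensitivity.
def choose_analysis_tier(max_latency_seconds: float) -> str:
    if max_latency_seconds < 1.0:         # highly time-sensitive: nearest fog node
        return "nearest fog node"
    elif max_latency_seconds < 60.0:      # seconds to minutes: fog aggregation node
        return "fog aggregation node"
    else:                                 # can wait: cloud, e.g. big data analytics
        return "cloud"

for tolerated_delay in (0.05, 30.0, 3600.0):
    print(f"tolerates {tolerated_delay:>7.2f}s -> {choose_analysis_tier(tolerated_delay)}")
```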
Cloud Management Working Group (CMWG) - Models the management of cloud services and the
operations and attributes of the cloud service lifecycle through its work on the Cloud Infrastructure
Management Interface (CIMI).
Cloud Auditing Data Federation Working Group (CADF) - Defines the CADF standard, a full
event model anyone can use to fill in the essential data needed to certify, self-manage and
self-audit application security in cloud environments.
Software Entitlement Working Group (SEWG) - Focuses on the interoperability with which
software inventory and product usage are expressed, allowing the industry to better manage
licensed software products and product usage.
Open Virtualization Working Group (OVF) - Produces the OVF standard, which provides the
industry with a standard packaging format for software solutions based on virtual systems.
Resources:
Virtualization Management (VMAN) - DMTF's VMAN is a set of specifications that address the
management lifecycle of a virtual environment.
Cloud Standards Wiki - The Cloud Standards Wiki is a resource documenting the activities of the
various Standards Development Organizations (SDOs) working on cloud standards.
Standards Bodies and Working Groups:
Cloud computing standardization was started by industrial organizations, which develop what are
called forum standards. Since late 2009, de jure standards bodies, such as ITU-T and ISO/IEC JTC1,
and ICT-oriented standards bodies, such as IEEE (Institute of Electrical and Electronic Engineers) and
IETF (Internet Engineering Task Force), have also begun to study it. In the USA and Europe,
government-affiliated organizations are also discussing it. The activities of major forum standards
bodies, ICT-oriented standards bodies, de jure standards bodies, and government-affiliated bodies are
described below.
(5) OASIS (Organization for the Advancement of Structured Information Standards)
OASIS established the Identity in the Cloud Technical Committee (ID Cloud TC) in May 2010,
surveyed existing ID management standards, and developed use cases of cloud ID management and
guidelines on reducing vulnerability. It also developed basic security standards, such as SAML
(Security Assertion Markup Language) and maintains liaison with CSA and ITU-T. The main
members are IBM, Microsoft, and others.
(6) OCC (Open Cloud Consortium)
OCC is a nonprofit organization formed in January 2009 under the leadership of the University of
Illinois at Chicago. It aims to develop benchmarks using a cloud testbed and achieve interoperability
between cloud systems. Its Working Groups include Open Cloud Testbed, Project Matsu, which is a
collaboration with NASA, and Open Science Data Cloud, which covers the scientific field. The main
members include NASA, Yahoo, Cisco, and Citrix.
(7) Open Cloud Manifesto
Open Cloud Manifesto is a nonprofit organization established in March 2009 to promote the
development of cloud environments that incorporate the user's perspective under the principle of open
cloud computing. It published cloud use cases and requirements for standards as a white paper in
August 2009. The latest version of the white paper is version 4.0 (V4) [3], which included for the first
time the viewpoint of the service level agreement (SLA). The participants include IBM, VMware,
Rackspace, AT&T, and TM Forum. A Japanese translation of the white paper is available [4].
(8) CSA (Cloud Security Alliance)
CSA is a nonprofit organization established in March 2009 to study best practices in ensuring cloud
security and promote their use. It released guidelines on cloud security in April 2009. The current
version is version 2.1 [5], which proposes best practices in thirteen fields, such as governance and
compliance. The main members are PGP, ISACA, ENISA, IPA, IBM, and Microsoft. A distinctive
feature of the membership is that it includes front runners in cloud computing, such as Google and
Salesforce. A Japan Chapter of CSA (NCSA) was inaugurated in June 2010.
(9) CCF (Cloud Computing Forum)
CCF is a Korean organization established in December 2009 to develop cloud standards and
promote their application to public organizations. Its membership consists of 32 corporate members
and more than 60 experts. CCF comprises six Working Groups, including Media Cloud, Storage
Cloud, and Mobile Cloud.
(10) GICTF (Global Inter-Cloud Technology Forum)
GICTF is a Japanese organization studying inter-cloud standard interfaces, etc. in order to enhance
the reliability of clouds. As of March 2011, it has a membership of 74 corporate members and four
organizations from industry, government, and academia. In June 2010, it released a white paper on use
cases of inter-cloud federation and functional requirements. The main members include NTT, KDDI,
NEC, Hitachi, Toshiba Solutions, IBM, and Oracle.
ICT-oriented standards bodies
Major standards bodies in the ICT field have also, one after another, established study groups on
cloud computing. These study groups are holding lively discussions.
(1) IETF
IETF had been informally discussing cloud computing in a bar BOF (discussions over drinks in a
bar; BOF: birds of a feather) before November 2010 when, at IETF79, it agreed to establish the Cloud
OPS WG (WG on cloud computing and maintenance), which is discussing cloud resource
management and monitoring, and Cloud-APS BOF (BOF on cloud computing applications), which is
mainly discussing matters related to applications. Since around the end of 2010, it has been receiving
drafts for surveys of the cloud industries and standards bodies, reference frameworks, logging, etc.
(2) IEEE
IEEE formed the Cloud Computing Standards Study Group (CCSSG) in March 2010. It announced
the launch of two new standards development projects in April 2011: P2301, Guide for Cloud
Portability and Interoperability Profiles (CPIP), and P2302, Standard for Intercloud Interoperability
and Federation (SIIF).
(3) TM Forum
In December 2009, TM Forum established the Enterprise Cloud Buyers Council (ECBC) to resolve
issues (on standardization, security, performance, etc.) faced by enterprises when they host private
clouds and thereby to promote the use of cloud computing. In May 2010, it started the Cloud Services
Initiative, which aims to encourage cloud service market growth. The main members of this initiative
are Microsoft, IBM, and AT&T.
February 2010. In addition, SC27 is studying requirements for Information Security Management
Systems (ISMSs).
(3) ETSI (European Telecommunications Standards Institute)
ETSI has established a Technical Committee on grids and clouds. The TC Cloud has released a
Technical Report (TR) on standards required in providing cloud services.
Government-affiliated bodies
Government-affiliated bodies in the USA and Europe are active in cloud-related standardization.
Government systems constitute a large potential cloud market. It is highly likely that the specifications
used by governmental organizations for procurement will be adopted as de facto standards.
(1) NIST
NIST is a technical department belonging to the U.S. Department of Commerce. "The NIST
Definition of Cloud Computing", which was published in October 2009, is referred to on various
occasions. NIST undertakes cloud standardization with five WGs. One of them, Standards
Acceleration to Jumpstart Adoption of Cloud Computing (SAJACC), is intended to promote the
development of cloud standards based on actual examples and use cases. It discloses a number of
different specifications and actual implementation examples on its portal. It also discloses test results
for the developed standard specifications.
(2) ENISA (European Network and Information Security Agency)
In November 2009, ENISA, an EU agency, released two documents: "Cloud Computing: Benefits,
Risks and Recommendations for Information Security", which deals with cloud security, risk, and
assessment, and "Cloud Computing Information Assurance Framework", which is a framework for
ensuring security in cloud computing.
When service components exist and are accessible on the internet protocol (IP) wide area network
(WAN), they can be reassembled more rapidly to solve new problems.
Comparing Cloud Computing and SOA
Cloud computing and SOA have important overlapping concerns and common considerations, as shown
in Figure 4. The most important overlap occurs near the top of the cloud computing stack, in the area of
Cloud Services, which are network-accessible application components and software services, such as
contemporary Web Services. (See the notional cloud stack in Figure 1.)
Both cloud computing and SOA share concepts of service orientation. Services of many types are
available on a common network for use by consumers. Cloud computing focuses on turning aspects of
the IT computing stack into commodities that can be purchased incrementally from the cloud based
providers and can be considered a type of outsourcing in many cases. For example, large-scale online
storage can be procured and automatically allocated in terabyte units from the cloud. Similarly, a
platform to operate web-based applications can be rented from redundant data centers in the cloud.
However, cloud computing is currently a broader term than SOA and covers the entire stack from
hardware through the presentation layer software systems. SOA, though not restricted conceptually to
software, is often implemented in practice as components or software services, as exemplified by the
Web Service standards used in many implementations. These components can be tied together and
executed on many platforms across the network to provide a business function.
Network dependence: Both cloud computing and SOA count on a robust network to connect
consumers and producers and in that sense, both have the same foundational structural weakness when
the network is not performing or is unavailable. John Naughton elaborates on this concern when he
writes that "with gigabit ethernet connections in local area networks, and increasingly fast broadband,
network performance has improved to the point where cloud computing looks like a feasible
proposition .... If we are betting our futures on the network being the computer, we ought to be sure
that it can stand the strain."
Forms of outsourcing: Both concepts require forms of contractual relationships and trust between
service providers and service consumers. Reuse of an SOA service by a group of other systems is in
effect an "outsourcing" of that capability to another organization. With cloud computing, the
outsourcing is more overt and often has a fully commercial flavor. Storage, platforms, and servers are
rented from commercial providers who have economies of scale in providing those commodities to a
very large audience. Cloud computing allows the consumer organization to leave the detailed IT
administration issues to the service providers.
Standards: Both cloud computing and SOA provide an organization with an opportunity to select
common standards for network accessible capabilities. SOA has a fairly mature set of standards with
which to implement software services, such as Representational State Transfer (REST), SOAP, and
Web Services Description Language (WSDL), among many others. Cloud computing is not as mature,
and many of the interfaces offered are unique to a particular vendor, thus raising the risk of vendor
lock-in. Simon Wardley writes, "The ability to switch between providers overcomes the largest
concerns of using such service providers, the lack of second sourcing options and the fear of vendor
lock-in (and the subsequent weaknesses in strategic control and lack of pricing competition)." This is
likely to change over time as offerings at each layer in the stack become more homogenous. Wardley
continues, "The computing stack, from the applications we write, to the platforms we build upon, to
the operating systems we use are now moving from a product- to a service-based economy. The shift
towards services will also lead to standardization of lower orders of the computing stack to internet
provided components."
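The practical value of such common interfaces can be sketched in Python: if two providers expose the same REST-style API, only the endpoint changes, not the client code. Both base URLs below are hypothetical placeholders, not real provider endpoints.

```python
# Hedged sketch: identical client code against two providers that follow the same
# (hypothetical) RESTful convention; switching providers is a configuration change.
import requests

def list_buckets(base_url: str, token: str) -> list:
    resp = requests.get(
        f"{base_url}/v1/buckets",                      # same path on every provider
        headers={"Authorization": f"Bearer {token}"},  # same auth convention
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

for provider in ("https://storage.provider-a.example",
                 "https://storage.provider-b.example"):
    print(provider, "->", list_buckets(provider, token="demo-token"))
```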
The first objective of SOA is to structure procedures or software components as services. These
services are designed to be loosely coupled to applications, so they are only used when needed.
They are also designed to be easily utilized by software developers, who have to create
applications in a consistent way.
The second objective is to provide a mechanism for publishing available services, which
includes their functionality and input/output (I/O) requirements. Services are published in a
way that allows developers to easily incorporate them into applications.
The third objective of SOA is to control the use of these services to avoid security and
governance problems. Security in SOA revolves heavily around the security of the individual
components within the architecture, identity and authentication procedures related to those
components, and securing the actual connections between the components of the architecture.
Business Process Execution Language (BPEL) is an Organization for the Advancement of Structured
Information Standards (OASIS) executable language for exporting and importing business information
using only the interfaces available through Web services.
BPEL is concerned with the abstract process of "programming in the large", which involves the high-
level state transition interactions of processes. The language includes such information as when to
send messages, when to wait for messages and when to compensate for unsuccessful transactions. In
contrast, "programming in the small" deals with short-lived programmable behavior such as a single
transaction involving the logical manipulation of resources.
BPEL was developed to address the differences between programming in the large and programming
in the small. This term is also known as Web Services Business Process Execution Language (WS-
BPEL), and is sometimes written as business process execution language for Web Services.
Microsoft and IBM each developed their own programming-in-the-large languages, XLANG and
WSFL respectively, which are very similar. In view of the popularity of a third
language, BPML, Microsoft and IBM decided to combine their two languages into another
called BPEL4WS. After submitting the new language to OASIS for standardization, it
emerged from a technical committee in 2004 as WS-BPEL 2.0.
Both models serve a descriptive role and have more than one possible use case. BPEL
can be used both between businesses and within a given business. The BPEL4People
language and WS-Human Task specifications were published in 2007 and describe how
people can interact with BPEL processes.
Though this definition corresponds to the meaning of the term portability—the ability to move a system from
one platform to another—the community refers to this property as interoperability, and I will use this term in
this report. In general, the cloud-computing community sees the lack of cloud interoperability as a barrier to
cloud-computing adoption because organizations fear “vendor lock-in.” Vendor lock-in refers to a situation in
which, once an organization has selected a cloud provider, either it cannot move to another provider or it can
change providers but only at great cost [Armbrust 2009, Hinchcliffe 2009, Linthicum 2009, Ahronovitz 2010,
Harding 2010, Badger 2011, and Kundra 2011]. Risks of vendor lock-in include reduced negotiation power in
reaction to price increases and service discontinuation because the provider goes out of business. A common
tactic for enabling interoperability is the use of open standards [ITU 2005]. A representative of the military, for
example, recently urged industry to take a more open-standards approach to cloud computing to increase
adoption [Perera 2011]. The Open Cloud Manifesto published a set of principles that its members suggest that
the industry follow, including using open standards and “playing nice with others” [Open Cloud 2009]. Cerf
emphasizes the need for “inter-cloud standards” to improve asset management in the cloud [Krill 2010].
However, other groups state that using standards is just “one piece of the cloud interoperability puzzle” [Lewis
2008, Hemsoth 2010, Linthicum 2010b, Considine 2011]. Achieving interoperability may also require sound
architecture principles and dynamic negotiation between cloud providers and users. This report explores the
role of standards in cloud-computing interoperability. The goal of the report is to provide greater insight into
areas of cloud computing in which standards would be useful for interoperability and areas in which standards
would not help or would need to mature to provide any value.
Use cases in the context of cloud computing refer to typical ways in which cloud consumers and
providers interact. NIST, OMG, DMTF, and others—as part of their efforts related to standards
for data portability, cloud interoperability, security, and management—have developed use cases.
NIST defines 21 use cases classified into three groups: cloud management, cloud interoperability,
and cloud security [Badger 2010]. A partial list of these use cases follows [Badger 2010]:
• Cloud Management Use Cases
− Open an Account
− Close an Account
− Terminate an Account
• Cloud Interoperability Use Cases
• Cloud Security Use Cases
− eDiscovery
− Security Monitoring
OMG presents a more abstract set of use cases as part of the Open Cloud Manifesto [Ahronovitz
2010]. These are much more generic than those published by NIST and relate more to deployment
than to usage. The use cases "Changing Cloud Vendors" and "Hybrid Cloud" are the ones of interest
from a standards perspective because they are the main drivers for standards in cloud computing
environments. "Changing Cloud Vendors" particularly motivates organizations that do not want to be
in a vendor lock-in situation. The full list is presented below [Ahronovitz 2010]:
• End User to Cloud: applications running in the public cloud and accessed by end users
• Enterprise to Cloud to End User: applications running in the public cloud and accessed by employees and customers
• Enterprise to Cloud: applications running in the public cloud integrated with internal IT capabilities
• Enterprise to Cloud to Enterprise: applications running in the public cloud and interoperating with
partner applications (supply chain)
• Changing Cloud Vendors: an organization using cloud services decides to switch cloud providers or
work with additional providers
• Hybrid Cloud: multiple clouds work together, coordinated by a cloud broker that federates data,
applications, user identity, security, and other details
DMTF produced a list of 14 use cases specifically related to cloud management [DMTF 2010]:
• Establish Relationship
• Administer Relationship
• Contract Reporting
• Contract Billing
• Provision Resources
Across the complete set of use cases proposed by NIST, OMG, and DMTF, four types of use cases
concern consumer–provider interactions that would benefit from the existence of standards.
These interactions relate to interoperability and can be mapped to the following four basic cloud use cases:
1. User Authentication: A user who has established an identity with a cloud provider can use the same
identity with another cloud provider.
2. Workload Migration: A workload that executes in one cloud provider can be uploaded to another
cloud provider.
3. Data Migration: Data that resides in one cloud provider can be moved to another cloud provider.
4. Workload Management: Custom tools developed for cloud workload management can be used to
manage multiple cloud resources from different vendors.
The remainder of this section describes existing standards and specifications that support these four
main types of use cases.
User Authentication
The use case for user authentication corresponds to a user or program that needs to be identified in the
cloud environment. It is important to differentiate between two types of users of cloud environments:
end users and cloud-resource users.
End users are users of applications deployed on cloud resources. Because these users register and
identify with the application and not with the infrastructure resources, they are usually not aware that
the application is running on cloud resources.
Cloud-resource users are typically administrators of the cloud resources. These users can also set
permissions for the resources based on roles, access lists, IP addresses, domains, and so forth. This
second type of user is of greater interest from an interoperability perspective.
Some of the standardization efforts, as well as technologies that are becoming de facto standards, that
support this use case are
• Amazon Web Services Identity Access Management (AWS IAM): Amazon uses this mechanism for
user authentication and management, and it is becoming a de facto standard [Amazon 2012d]. It
supports the creation and the permissions management for multiple users within an AWS account.
Each user has unique security credentials with which to access the services associated with an account.
Eucalyptus also uses AWS IAM for user authentication and management.
• OAuth: OAuth is an open protocol by the Internet Engineering Task Force (IETF) [OAuth 2010]. It
provides a method for clients to access server resources on behalf of the resource owner. It also
provides a process for end users to authorize third-party access to their server resources without
sharing their credentials. The current version is 1.0, and IETF's work continues for Version 2.0.
Similarly to WS-Security, OAuth Version 2.0 will support user identification information in Simple
Object Access Protocol (SOAP) messages. Cloud platforms that support OAuth include Force.com,
Google App Engine, and Microsoft Azure.
• OpenID: OpenID is an open standard that enables users to be authenticated in a decentralized manner
[OpenID 2012]. Users create accounts with an OpenID identity provider and then use those accounts
(or identities) to authenticate with any web resource that accepts OpenID authentication. Cloud
platforms that support OpenID include Google App Engine and Microsoft Azure. OpenStack has an
ongoing project to support OpenID.
• WS-Security: WS-Security is an OASIS security standard specification [OASIS 2006]. The current
release is Version 1.1. WS-Security describes how to secure SOAP messages using Extensible Markup
Language (XML) Signature and XML Encryption and attach security tokens to SOAP messages.
Cloud platforms that support WS-Security for message authentication include Amazon EC2 and
Microsoft Azure.
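As a rough illustration of the token-based approach that OAuth enables, the sketch below obtains a bearer token with the client-credentials grant and uses it to call a protected API. The endpoint URLs and credentials are hypothetical placeholders; real providers differ in the details.

```python
# Hedged sketch of an OAuth 2.0 client-credentials flow; URLs and credentials are
# placeholders, not any real provider's values.
import requests

TOKEN_URL = "https://auth.example-cloud.com/oauth2/token"   # hypothetical
API_URL = "https://api.example-cloud.com/v1/instances"      # hypothetical

def get_access_token(client_id: str, client_secret: str) -> str:
    """Exchange client credentials for a short-lived bearer token."""
    resp = requests.post(TOKEN_URL,
                         data={"grant_type": "client_credentials"},
                         auth=(client_id, client_secret),
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_instances(token: str) -> list:
    """Call a protected resource using the bearer token."""
    resp = requests.get(API_URL,
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()

token = get_access_token("my-client-id", "my-client-secret")
print(list_instances(token))
```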
Workload Migration:
The use case for workload migration corresponds to the migration of a workload, typically represented
as a virtual-machine image, from one cloud provider to a different cloud provider. The migration of a
workload requires (1) the extraction of the workload from one cloud environment and (2) the upload
of the workload to another cloud environment. Some of the standards that support this use case are
• Amazon Machine Image (AMI): An AMI is a special type of virtual machine that can be deployed
within Amazon EC2 and is also becoming a de facto standard [Amazon 2012b]. Eucalyptus and
OpenStack support AMI as well.
• Open Virtualization Format (OVF): OVF is a virtual-machine packaging standard developed and
supported by DMTF [DMTF 2012]. Cloud platforms that support OVF include Amazon EC2,
Eucalyptus, and OpenStack.
• Virtual Hard Disk (VHD): VHD is a virtual-machine file format supported by Microsoft [Microsoft
2006]. Cloud platforms that support VHD include Amazon EC2 and Microsoft Azure.
Data Migration:
There are two types of cloud storage. Typed-data storage works similarly to an SQL-compatible
database and enables CRUD operations on user-defined tables. Object storage enables CRUD
operations of generic objects that range from data items (similar to a row of a table), to files, to virtual-
machine images.
Some of the standards that support this use case, especially for object storage, are
• Cloud Data Management Interface (CDMI): CDMI is a standard supported by the Storage
Networking Industry Association (SNIA) [SNIA 2011]. CDMI defines an API to CRUD data elements
from a cloud-storage environment. It also defines an API for discovery of cloud storage capabilities
and management of data containers.
• SOAP: Even though SOAP is not a data-specific standard, multiple cloud-storage providers support
data- and storage-management interfaces that use SOAP as a protocol. SOAP is a W3C specification
that defines a framework to construct XML-based messages in a decentralized, networked
environment [W3C 2007]. The current version is 1.2, and HTTP is the primary transport mechanism.
Amazon S3 provides a SOAP-based interface that other cloud-storage environments, including
Eucalyptus and OpenStack, also support.
• Representational State Transfer (REST): REST is not a data-specific standard either, but multiple
cloud-storage providers support RESTful interfaces. REST is considered architecture and not a
protocol [IBM 2008]. In a REST implementation, every entity that can be identified,
named, addressed, or handled is considered a resource. Each resource is addressable via
its universal resource identifier and provides the same interface, as defined by HTTP: GET, POST,
PUT, DELETE. Amazon S3 provides a RESTful interface that Eucalyptus and OpenStack also
support. Other providers with RESTful interfaces for data management include Salesforce.com's
Force.com, Microsoft Windows Azure (Windows Azure Storage), OpenStack (Object Storage), and
Rackspace (Cloud Files). The API defined by CDMI is a RESTful interface.
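A minimal sketch of object-storage CRUD over such a RESTful interface is given below, using PUT, GET, and DELETE on a resource URI; the endpoint, bucket name, and credentials are hypothetical placeholders rather than any specific provider's API.

```python
# Hedged sketch: CRUD on a generic object store through a REST-style interface.
import requests

BASE = "https://objects.example-cloud.com/v1/my-bucket"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer demo-token"}

# Create (or update) an object at its resource URI.
requests.put(f"{BASE}/reports/2012.csv",
             data=b"id,value\n1,42\n", headers=HEADERS, timeout=10)

# Read the object back.
resp = requests.get(f"{BASE}/reports/2012.csv", headers=HEADERS, timeout=10)
print(resp.status_code, resp.content)

# Delete the object.
requests.delete(f"{BASE}/reports/2012.csv", headers=HEADERS, timeout=10)
```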
Workload Management:
The use case for workload management corresponds to the management of a workload deployed in the
cloud environment, such as starting, stopping, changing, or querying the state of a virtual instance. As
with the data-management use case, in an interoperability context an organization can ideally use any
workload-management program with any provider. Even though most environments provide a form of
management console or command-line tools, they also provide APIs based on REST or SOAP.
Providers that offer SOAP-based or RESTful APIs for workload management include Amazon EC2,
Eucalyptus, GoGrid Cloud Servers, Google App Engine, Microsoft Windows Azure, and OpenStack
(Image Service).
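In the same hedged spirit, the sketch below shows how a workload-management client might start, query, and stop an instance through a provider's REST API; the URL scheme, payloads, and instance identifier are hypothetical, since each provider defines its own.

```python
# Hedged sketch: workload management (start, query state, stop) over a hypothetical
# REST management API.
import requests

BASE = "https://compute.example-cloud.com/v1/instances"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer demo-token"}

def instance_action(instance_id: str, action: str) -> None:
    """POST an action (e.g. 'start' or 'stop') to the instance's action resource."""
    resp = requests.post(f"{BASE}/{instance_id}/actions",
                         json={"action": action}, headers=HEADERS, timeout=10)
    resp.raise_for_status()

def instance_state(instance_id: str) -> str:
    """Query the current state of the instance."""
    resp = requests.get(f"{BASE}/{instance_id}", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["state"]

instance_action("i-0123", "start")
print(instance_state("i-0123"))
instance_action("i-0123", "stop")
```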
(or)
Data models describe the structure of data and are used by applications. They enable the
applications to interpret and process the data. They apply to data that is held in store and accessed by
applications, and also to data in messages passed between applications.
While there are some standard data models, such as that defined by the ITU-T X.500 standards for
directories, most data models are application-specific. There is, however, value in standardizing how
these specific data models are described. This is important for data portability and for interoperability
between applications and software services.
Relational database schemas are the most commonly-encountered data models. They are based on the
relational database table paradigm. A schema typically exists in human-readable and machine-readable
form. The machine-readable form is used by the Database Management System (DBMS) that is the
application that directly accesses the data. Applications that use the DBMS to access the data
indirectly do not often use the machine-readable form of the schema; they work because their
programmers read the human-readable form. The Structured Query Language (SQL) standard [SQL]
applies to relational databases.
The semantic web standards can be used to define data and data models in machine-readable form.
They are based on yet another paradigm, in which data and metadata exist as subject-verb-object
triples. They include the Resource Description Framework (RDF) [RDF] and the Web Ontology
Language (OWL) [OWL]. With these standards, all applications use the machine-readable form, and
there is less reliance on programmers' understanding of the human-readable form.
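A small sketch of the triple-based paradigm follows; it assumes the third-party rdflib package (an assumption, not a requirement of the standards themselves) and builds a few RDF subject-verb-object triples before serializing them in machine-readable Turtle syntax.

```python
# Hedged sketch: data and metadata expressed as subject-verb-object (RDF) triples,
# assuming the third-party rdflib package is installed.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")   # hypothetical vocabulary for the example
g = Graph()

# Each statement is one subject-verb-object triple.
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.worksFor, EX.ExampleCorp))
g.add((EX.alice, EX.name, Literal("Alice")))

# Any application can consume the same machine-readable form, here Turtle syntax.
print(g.serialize(format="turtle"))
```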
Application-Application Interfaces
These are interfaces between applications. Increasingly, applications are web-based and
intercommunicate using web service APIs. Other means of communication, such as message queuing
or shared data, are and will continue to be widely used, for performance and other reasons, particularly
between distributed components of a single application. However, APIs to loosely-coupled services
form the best approach for interoperability.
Some cloud service providers make client libraries available to make it easier to write programs that
use their APIs. These client libraries may be available in one or more of the currently popular
programming languages.
A service provider may go further, and make available complete applications that run on client devices
and use its service ("client apps"). This is a growing phenomenon for mobile client devices such as
tablets and smartphones. For example, many airlines supply "apps" that passengers can use to manage
their bookings and check in to flights.
If a service provider publishes and guarantees to support an API then it can be regarded as an
interoperability interface. Use of a library or client app can enable the service provider to change the
underlying HTTP or SOAP interface without disrupting use of the service. In such a case a stable
library interface may still enable interoperability. A service that is available only through client apps is
unlikely to be interoperable.
Applications are concerned with the highest layer, which is the message content layer. This provides
transfer of information about the state of the client and the state of the service, including the values of
data elements maintained by the service.
An application-application API specification defines the message content, the syntax in which it is
expressed, and the envelopes in which it is transported.
The platforms supporting the applications handle the Internet, HTTP, and message envelope layers of
a web service interface, and enable a service to send and receive arbitrary message contents. These
layers are discussed under Interfaces below.
The amount of application-specific processing required can be reduced by using standards for
message syntax and semantics.
The Cloud Data Management Interface (CDMI) [CDMI] defined by the Storage Networking Industry
Association (SNIA) is a standard application-specific interface for a generic data storage and retrieval
application. (It is a direct HTTP interface, follows REST principles, and uses JSON to encode data
elements.) It also provides some management capabilities.
Message contents are essentially data. Standards for describing data models can be applied to service
functional interface message contents and improve service interoperability.
These are web service APIs, like Application-Application Interfaces, but are presented by applications
to expose management capabilities rather than functional capabilities.
Standardization of some message content is appropriate for these and other management interfaces.
This is an active area of cloud standardization, and there are a number of emerging standards. Some
are generic, while others are specific to applications, platform, or infrastructure management. None,
however, appear to be widely adopted yet.
There are two generic standards, TOSCA and OCCI, which apply to application management.
The OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) [TOSCA] is
an XML standard language for descriptions of service-based applications and their operation and
management regimes. It can apply to complex services implemented on multiple interacting servers.
The Open Cloud Computing Interface (OCCI) [OCCI] of the Open Grid Forum is a standard interface
for all kinds of cloud management tasks. OCCI was originally initiated to create a remote management
API for IaaS model-based services, allowing for the development of interoperable tools for common
tasks including deployment, autonomic scaling, and monitoring. The current release is suitable to
serve many other models in addition to IaaS, including PaaS and SaaS.
The Cloud Application Management for Platforms (CAMP) [CAMP] is a PaaS management
specification that is to be submitted to OASIS for development as an industry standard. It defines an
API using REST and JSON for packaging and controlling PaaS workloads.
There are some standard frameworks that make it possible to write generic management systems that
interoperate with vendor-specific products.
The IETF Simple Network Management Protocol (SNMP) [SNMP] is the basis for such a
framework. It is designed for Internet devices.
The Common Management Information Service (CMIS) [CMIS] and the Common Management
Information Protocol (CMIP) [CMIP] are the basis for another such framework. They are designed
for telecommunication devices.
The Distributed Management Task Force (DMTF) [DMTF] has defined a Common Information
Model (CIM) [CIM] that provides a common definition of management information for systems of
all kinds.
The word utility is used to make an analogy to other services, such as electrical power, that seek to
meet fluctuating customer needs, and charge for the resources based on usage rather than on a flat-rate
basis. This approach, sometimes known as pay-per-use or metered services, is becoming increasingly
common in enterprise computing and is sometimes used for the consumer market as well, for Internet
service, Web site access, file sharing, and other applications.
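The arithmetic behind metered, pay-per-use charging is simple: measured usage multiplied by a unit rate, summed over resources, as in the sketch below. The rates and usage figures are purely illustrative, not any provider's actual prices.

```python
# Hedged sketch of metered (pay-per-use) billing with illustrative rates and usage.
METER_RATES = {
    "compute_hours": 0.12,      # currency units per instance-hour
    "storage_gb_month": 0.02,   # per GB stored for a month
    "egress_gb": 0.09,          # per GB transferred out
}

usage = {"compute_hours": 730, "storage_gb_month": 250, "egress_gb": 40}

bill = sum(quantity * METER_RATES[item] for item, quantity in usage.items())
print(f"Metered charge for the period: {bill:.2f}")   # 87.60 + 5.00 + 3.60 = 96.20
```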
Another version of utility computing is carried out within an enterprise. In a shared pool utility model,
an enterprise centralizes its computing resources to serve a larger number of users without unnecessary
redundancy.
This model is based on that used by conventional utilities such as telephone services, electricity and
gas. The principle behind utility computing is simple. The consumer has access to a virtually unlimited
supply of computing solutions over the Internet or a virtual private network, which can be sourced and
used whenever it's required. Management and delivery of the back-end infrastructure and computing
resources are governed by the provider.
Utility computing solutions can include virtual servers, virtual storage, virtual software, backup and
most IT solutions.
Cloud computing, grid computing and managed IT services are based on the concept of utility
computing.
Virtualization:
Virtualization is the creation of virtual servers, infrastructures, devices and computing
resources. A great example of how it works in your daily life is the separation of your hard drive into
different parts. While you may have only one hard drive, your system sees it as two, three or more
different and separate segments. This technology has been used for a long time. It started as
the ability to run multiple operating systems on one hardware set and is now a vital part of testing and
cloud-based computing.
A technology called the Virtual Machine Monitor (also called the virtual manager) encapsulates the very
basics of virtualization in cloud computing. It is used to separate the physical hardware from its
emulated parts. This often includes the CPU's memory, I/O and network traffic. A secondary operating
system that would usually interact with the hardware now runs against a software emulation of that
hardware, and often the guest operating system has no idea it's on virtualized hardware. Despite the fact
that the performance of the virtual system is not equal to that of the "true hardware" operating system,
the technology still works because most secondary OSs and applications don't need the full use of the
underlying hardware. This allows for greater flexibility, control and isolation by removing the
dependency on a given hardware platform.
The layer of software that enables this abstraction is called the "hypervisor". A study in the International
Journal of Scientific & Technology Research defines it as "a software layer that can monitor and
virtualize the resources of a host machine conferring to the user requirements." The most common
hypervisor is referred to as Type 1. By talking to the hardware directly, it virtualizes the hardware
platform and makes it available to be used by virtual machines. There's also a Type 2 hypervisor,
which requires an operating system. Most often, you can find it being used in software testing and
laboratory research.
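As a hedged illustration, the sketch below queries a hypervisor through libvirt; it assumes the third-party libvirt-python bindings are installed and that a local QEMU/KVM hypervisor is reachable at the qemu:///system URI, neither of which is implied by the text above.

```python
# Hedged sketch: listing virtual machines managed by a hypervisor via libvirt,
# assuming libvirt-python is installed and qemu:///system is reachable.
import libvirt

def list_virtual_machines(uri: str = "qemu:///system") -> None:
    conn = libvirt.open(uri)              # connect to the hypervisor
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name():20s} {state}")
    finally:
        conn.close()                      # always release the hypervisor connection

list_virtual_machines()
```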
Network Virtualization
Network virtualization in cloud computing is a method of combining the available resources in a
network by splitting up the available bandwidth into different channels, each being separate and
distinguished. They can be either assigned to a particular server or device or stay unassigned
completely — all in real time. The idea is that the technology disguises the true complexity of the
network by separating it into parts that are easy to manage, much like your segmented hard drive
makes it easier for you to manage files.
Storage Virtualization
Using this technique gives the user an ability to pool the hardware storage space from several
interconnected storage devices into a simulated single storage device that is managed from one single
command console. This storage technique is often used in storage area networks. Storage manipulation
in the cloud is mostly used for backup, archiving, and recovering of data by hiding the real and
physical complex storage architecture. Administrators can implement it with software applications or
by employing hardware and software hybrid appliances.
Server Virtualization
This technique is the masking of server resources. It simulates physical servers by changing their
identity, numbers, processors and operating systems. This spares the user from continuously managing
complex server resources. It also makes a lot of resources available for sharing and utilizing, while
maintaining the capacity to expand them when needed.
Data Virtualization
This kind of cloud computing virtualization technique abstracts the technical details usually involved
in data management, such as location, performance or format, in favor of broader access and greater
resiliency tied directly to business needs.
Desktop Virtualization
As compared to other types of virtualization in cloud computing, this model enables you to emulate a
workstation load, rather than a server. This allows the user to access the desktop remotely. Since the
workstation is essentially running in a data center server, access to it can be both more secure and
portable.
Application Virtualization
Software virtualization in cloud computing abstracts the application layer, separating it from the
operating system. This way the application can run in an encapsulated form without being dependent
on the operating system underneath. In addition to providing a level of isolation, an application
created for one OS can run on a completely different operating system.
Hyper-Threading:
Hyper-Threading is a technology used by some Intel microprocessors that allows a single
microprocessor to act like two separate processors to the operating system and the application
programs that use it. It is a feature of Intel's IA-32 processor architecture.
A superscalar CPU architecture executes multiple instructions in parallel, a capability known as
instruction-level parallelism (ILP). A CPU with multithreading capability can simultaneously execute
different parts of a program, known as threads.
HT allows a multithreaded application to execute threads in parallel on a single processor core, which
would otherwise execute threads sequentially. HT's main advantage is that it allows for the
simultaneous execution of multiple threads, which improves response and reaction time while
enhancing system capabilities and support.
An HT processor contains two sets of registers: control registers and basic registers. A control
register is a processing register that controls or changes the CPU's overall behavior by switching
address mode, interrupt control or coprocessor control. A basic register is a storage location and part of
the CPU. Both logical processors share the same bus, cache and execution units. During execution,
each register set handles its thread individually.
Older models with similar techniques were built with dual-processing software threads that divided
instructions into several streams and more than one processor executed commands. PCs that
multithread simultaneously have hardware support and the ability to execute more than one
information thread in parallel form.
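One simple way to observe this from the operating system is to compare logical and physical core counts, as in the sketch below; it assumes the third-party psutil package is available, and on a CPU with HT enabled the logical count is typically twice the physical count.

```python
# Hedged sketch: comparing logical and physical core counts, assuming psutil is installed.
import os
import psutil

logical = os.cpu_count()                     # logical processors seen by the OS
physical = psutil.cpu_count(logical=False)   # physical cores reported by psutil

print(f"Logical processors : {logical}")
print(f"Physical cores     : {physical}")
if logical and physical and logical > physical:
    print("Simultaneous multithreading (e.g. Hyper-Threading) appears to be enabled.")
```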
HT was developed by Digital Equipment Corporation, but was brought to market in 2002, when Intel
introduced the MP-based Foster Xeon and released the Northwood-based Pentium 4 with 3.06 GHz.
Other HT processors entered the marketplace, including the Pentium 4 HT, Pentium 4 Extreme Edition
and Pentium Extreme Edition.
Blade Servers:
A blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as
server blades. Each blade is a server in its own right, often dedicated to a single application. The blades
are literally servers on a card, containing processors, memory, integrated network controllers, an
optional Fibre Channel host bus adapter (HBA) and other input/output (I/O) ports.
Blade servers are designed to overcome the space and energy restrictions of a typical data
center environment. The blade enclosure, also known as chassis, caters to the power,
cooling, network connectivity and management needs of each blade. Each blade server in an
enclosure may be dedicated to a single application. A blade server can be used for tasks
such as:
File sharing
Database and application hosting
SSL encryption of Web communication
Hosting virtual server platforms
Streaming audio and video content
The components of a blade may vary depending on the manufacturer. Blade servers offer
increased resiliency, efficiency, dynamic load handling and scalability. A blade enclosure
pools, shares and optimizes power and cooling requirements across all the blade servers,
allowing many blades to fit in a typical rack space.
Some of the benefits of blade servers include:
Reduced energy costs
Reduced power and cooling expenses
Space savings
Reduced cabling
Redundancy
Increased storage capacity
Reduced data center footprint
Minimum administration
Low total cost of ownership
Automated Provisioning:
Automated provisioning is a type of policy-based management, and provisioning rights can be granted
on either a permissions basis or a role basis. Once automated provisioning has been implemented,
it is up to the service provider to ensure that operational processes are being followed and governance
policies are not being circumvented.
Policy-based management of a multi-user workstation typically includes setting individual policies for
such things as access to files or applications, various levels of access (such as "read-only" permission,
or permission to update or delete files), the appearance and makeup of individual users' desktops, and so
on. A number of software packages are available to automate elements of policy-based
management. In general, they work as follows: business policies are input to the product,
and the software tells the network hardware and systems how to enforce those policies.
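As an illustrative sketch only (the role names and policy format are hypothetical, not taken from any specific product), role-based provisioning can be reduced to checking a requested action against the permissions granted to a user's role before carrying it out:

```python
# Hypothetical role-based provisioning check: policies map roles to the
# actions they may perform on a resource type. Names are illustrative only.
POLICIES = {
    "developer": {"vm": {"create", "read"}, "file_share": {"read"}},
    "admin":     {"vm": {"create", "read", "update", "delete"},
                  "file_share": {"read", "update", "delete"}},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Return True if the role's policy grants the action on the resource."""
    return action in POLICIES.get(role, {}).get(resource, set())

# A provisioning request is executed only if policy permits it.
if is_allowed("developer", "vm", "create"):
    print("Provisioning VM for developer...")   # proceed with provisioning
else:
    print("Request denied by governance policy.")
```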
Application Management:
Cloud application management for platforms (CAMP) is a specification developed for the management
of applications specifically in Platform as a Service (PaaS) based cloud environments.
The CAMP specification provides a framework that enables application developers to manage their
applications through open, REST-based (representational state transfer) APIs.
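The sketch below is only meant to convey the flavor of managing a PaaS application over a REST API; the base URL, resource path and fields are hypothetical placeholders, not the normative CAMP resource model.

```python
# Hypothetical REST interaction with a PaaS management endpoint.
# The URL, path and JSON fields below are illustrative placeholders.
import json
import urllib.request

BASE_URL = "https://paas.example.com/camp/v1"   # hypothetical endpoint

def get_assembly(assembly_id: str) -> dict:
    """Fetch the description of a deployed application over plain HTTP GET."""
    with urllib.request.urlopen(f"{BASE_URL}/assemblies/{assembly_id}") as resp:
        return json.load(resp)

# Example usage (requires a real endpoint to run end-to-end):
# info = get_assembly("my-app")
# print(info.get("status"))
```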
Management Components:
Patterns of this category describe how management functionality can be integrated with components
providing application functionality.
Management Processes:
Patterns of this category describe how distributed and componentized cloud applications may address
runtime challenges, such as elasticity and failure handling in an automated fashion.
Elasticity Management Process
Feature Flag Management Process
Update Transition Process
Standby Pooling Process
Provider Adapter:
The Provider Adapter encapsulates all provider-specific implementations required for authentication,
data formatting etc. in an abstract interface. The Provider Adapter, thus, ensures separation of
concerns between application components accessing provider functionality and application
components providing application functionality. It may also offer synchronous provider-interfaces to
be accessed asynchronously via messages and vice versa.
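A minimal sketch of the idea follows, with hypothetical class and method names: application components talk only to an abstract interface, while provider-specific details such as authentication and data formatting live in concrete adapters.

```python
# Minimal Provider Adapter sketch. The provider name and methods are
# hypothetical; the point is that application code depends only on the
# abstract interface, not on any provider-specific API.
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Abstract interface used by application components."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class ExampleCloudStorageAdapter(StorageAdapter):
    """Encapsulates authentication and data formatting for one provider."""

    def __init__(self, api_token: str):
        self._token = api_token          # provider-specific authentication

    def put(self, key: str, data: bytes) -> None:
        print(f"[example-provider] storing {len(data)} bytes under {key!r}")

    def get(self, key: str) -> bytes:
        print(f"[example-provider] fetching {key!r}")
        return b""

def backup_report(storage: StorageAdapter) -> None:
    """Application functionality: unaware of which provider is behind it."""
    storage.put("reports/latest", b"report contents")

backup_report(ExampleCloudStorageAdapter(api_token="dummy"))
```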
Managed Configuration:
Application components of a Distributed Application often have configuration parameters. Storing
configuration information together with the application component implementation can be impractical,
as it results in more overhead whenever the configuration changes: each running instance of the
application component must be updated separately, and component images stored in Elastic Infrastructures
or Elastic Platforms also have to be updated upon a configuration change.
Commonly, configuration information is therefore kept in a Relational Database, Key-Value Storage, or
Blob Storage, from where it is accessed by all running component instances, either by polling the
storage periodically or by sending messages to the components.
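A rough sketch of the polling variant follows; the shared key-value store is stood in for by a plain dictionary, and the key names and refresh interval are assumptions. Each component instance re-reads its configuration periodically instead of baking it into the component image.

```python
# Managed Configuration, polling variant (sketch). 'kv_store' stands in for
# any shared key-value storage; here it is just a dict for illustration.
import threading
import time

kv_store = {"worker/log_level": "INFO", "worker/batch_size": "100"}

class ManagedConfig:
    """Periodically refreshes configuration from shared storage."""

    def __init__(self, prefix: str, interval_s: float = 30.0):
        self.prefix = prefix
        self.values = {}
        self._refresh()
        poller = threading.Thread(target=self._poll, args=(interval_s,), daemon=True)
        poller.start()

    def _refresh(self):
        self.values = {k: v for k, v in kv_store.items()
                       if k.startswith(self.prefix)}

    def _poll(self, interval_s):
        while True:
            time.sleep(interval_s)
            self._refresh()      # pick up changes without redeploying

config = ManagedConfig("worker/")
print(config.values["worker/batch_size"])
```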
Elasticity Manager:
Application components of a Distributed Application hosted on an Elastic Infrastructure or Elastic
Platform shall be scaled out. Instances of application components shall therefore be provisioned and
decommissioned automatically, based on the current workload experienced by the application.
The utilization of the cloud resources on which application component instances are deployed is
monitored. This could be, for example, the CPU load of a virtual server. This information is used to
determine the number of required instances.
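A minimal sketch of the scaling decision follows; the thresholds and the source of the utilization figure are assumptions, and a real manager would also apply cool-down periods to avoid oscillation.

```python
# Elasticity Manager decision rule (sketch). Thresholds are assumed values.
def desired_instances(current: int, avg_cpu: float,
                      low: float = 0.3, high: float = 0.7,
                      min_n: int = 1, max_n: int = 20) -> int:
    """Scale out above 'high' average utilization, scale in below 'low'."""
    if avg_cpu > high:
        current += 1
    elif avg_cpu < low and current > min_n:
        current -= 1
    return max(min_n, min(max_n, current))

print(desired_instances(current=4, avg_cpu=0.85))  # -> 5 (scale out)
print(desired_instances(current=4, avg_cpu=0.15))  # -> 3 (scale in)
```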
Elastic Queue:
A Distributed Application is composed of multiple application components that are accessed
asynchronously and deployed on an Elastic Infrastructure or an Elastic Platform. The provisioning and
decommissioning operations required to scale this application should be performed in an
automated fashion.
Queues that distribute asynchronous requests among multiple application component
instances are monitored. Based on the number of enqueued messages, the Elastic Queue adjusts the
number of application component instances handling these requests.
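A sketch of the queue-based rule follows, under the assumption that each worker instance can comfortably handle a fixed number of pending messages; that capacity figure is made up for illustration.

```python
# Elastic Queue scaling rule (sketch): size the worker pool from the queue
# depth. 'messages_per_worker' is an assumed capacity figure.
import math

def workers_for_queue(queue_length: int, messages_per_worker: int = 50,
                      min_n: int = 1, max_n: int = 50) -> int:
    """Number of component instances needed to drain the current backlog."""
    needed = math.ceil(queue_length / messages_per_worker)
    return max(min_n, min(max_n, needed))

print(workers_for_queue(0))     # -> 1 (keep a minimum pool)
print(workers_for_queue(730))   # -> 15
```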
Watchdog:
Applications cope with failures automatically by monitoring and replacing application component
instances if the provider-assured availability is insufficient.
If a Distributed Application is composed of many application components, it depends on the
availability of all component instances. To enable high availability under such conditions, applications
have to rely on redundant application component instances, and the failure of these instances has to be
detected and handled automatically.
Individual application components rely on external state information by implementing the Stateless
Component pattern. Components are scaled out, and multiple instances of them are deployed to
redundant resources. The component instances are monitored by a separate Watchdog component and
replaced in case of failure.
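The sketch below shows the monitoring loop in its simplest form; the health check and the provisioning call are hypothetical stand-ins for provider APIs, and one failure is simulated so the replacement path runs.

```python
# Watchdog loop (sketch): detect failed component instances and replace them.
# 'is_healthy' and 'provision_replacement' are hypothetical stand-ins.
import time

instances = ["worker-1", "worker-2", "worker-3"]

def is_healthy(instance_id: str) -> bool:
    """Placeholder health check (e.g., an HTTP heartbeat in practice).
    Here 'worker-2' is treated as failed purely for demonstration."""
    return instance_id != "worker-2"

def provision_replacement() -> str:
    """Placeholder for the provider call that starts a new instance."""
    new_id = f"worker-{int(time.time())}"
    print(f"provisioning replacement {new_id}")
    return new_id

def watchdog_pass() -> None:
    """One monitoring pass: replace every instance that fails its check."""
    for i, instance in enumerate(list(instances)):
        if not is_healthy(instance):
            instances[i] = provision_replacement()

watchdog_pass()   # in production this would run periodically
print(instances)
```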
One group of authors has proposed a QoS ranking prediction framework for Cloud services that considers
historical service usage data of the consumers; this framework avoids the need for expensive real-world
service invocations. Another work proposes a generic QoS framework consisting of four components
for Cloud workflow systems. However, that framework is not suitable for solving complex problems
such as multi-QoS-based service selection, monitoring and violation handling.
Evaluation techniques
One of the popular decision-making techniques utilized for cloud service evaluation and ranking is the
Analytic Hierarchy Process (AHP). One proposed framework utilizes an AHP-based ranking
mechanism that can evaluate Cloud services for different applications depending on their QoS
requirements. Various other works reviewed here have utilized the AHP technique for ranking cloud
resources: in one, various SaaS products are evaluated based on AHP and an expert scoring system; in
another, AHP is utilized to evaluate IaaS products. Though other techniques have also been employed to
evaluate and rank cloud services, they are limited in considering both qualitative and quantitative criteria
and in computational effectiveness. Although AHP is an effective decision-making tool, disadvantages
such as complex pairwise comparisons and subjectivity make it a complex tool. Also, the comparisons
become computationally intensive and unmanageable when the criteria and alternatives are large in
number. Thus, this paper utilizes an effective mechanism by employing the Technique for Order
Preference by Similarity to Ideal Solution (TOPSIS), which can consider a large number of criteria
(qualitative and quantitative) and alternatives to evaluate and rank cloud services. The technique is
computationally effective and easily manageable.
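Since TOPSIS is named as the ranking technique, a compact sketch of its steps may help; the services, criteria values and weights below are made-up illustrations, not measurements produced by the framework.

```python
# TOPSIS sketch (illustrative weights and scores, not real benchmark data).
# Rows are candidate cloud services; columns are QoS criteria.
import numpy as np

# Decision matrix: [cost, response time (ms), availability (%), throughput]
X = np.array([
    [0.30, 120.0, 99.90, 800.0],   # service A
    [0.25, 200.0, 99.50, 650.0],   # service B
    [0.40,  90.0, 99.99, 900.0],   # service C
], dtype=float)

weights = np.array([0.3, 0.2, 0.3, 0.2])          # assumed criterion importance
benefit = np.array([False, False, True, True])    # cost & latency: lower is better

# 1. Vector-normalize each criterion, then apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal solutions per criterion.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. Distances to both, and relative closeness to the ideal solution.
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

# 4. Rank services by closeness (higher is better).
for rank, idx in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: service {'ABC'[idx]} (closeness {closeness[idx]:.3f})")
```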
Metrics for measuring the scalability, cost, peak-load handling and fault tolerance of Cloud
environments have been proposed. One proposed framework is used only for quantifiable QoS attributes
such as Accountability, Agility, Assurance of Service, Cost, Performance, Security, Privacy and
Usability. It is not suitable for non-quantifiable QoS attributes such as Service Response Time,
Sustainability, Suitability, Accuracy, Transparency, Interoperability, Availability, Reliability and
Stability. Garg et al. proposed metrics for Data Center Performance per Energy, Data Center
Infrastructure Efficiency, Power Usage Efficiency, Suitability, Interoperability, Availability,
Reliability, Stability, Accuracy, Cost, Adaptability, Elasticity, Usability, Throughput and Efficiency,
and Scalability. A few of these metrics, such as suitability, are not computationally effective.
The common criteria mentioned and covered in many studies fall under the following: cost,
performance, availability, reliability, security, usability, agility and reputation.
In this section, a QoS evaluation and ranking framework for cloud services is proposed. Fig. 1 illustrates
the QoS evaluation and ranking framework, in which a plurality of cloud services can be evaluated and
ranked.
The framework includes components such as the cloud administrator, cloud data discovery, cloud service
discovery and the cloud services themselves. The cloud administrator is composed of a cloud service
measurement component and a cloud manager component. The cloud administrator component communicates with
the cloud data discovery component to obtain the required service parameter data. The cloud data discovery
component is composed of a cloud monitor component and a history manager component. In evaluating
and ranking a cloud service, the proposed framework does not necessarily select the most cost-effective, i.e.
least expensive, cloud service provider, because the service measurement depends on multiple
other parameters which directly or indirectly affect the cost of the service. The cloud administrator
component is responsible for computing the QoS of a cloud service by generating cloud service rankings
in the form of indices. The cloud service measurement component receives the customer's request for
cloud service evaluation; it collects all of the requirements and performs the discovery and ranking
of suitable services using the other components. The cloud manager component keeps track of customers'
SLAs with Cloud providers and their fulfillment history. The cloud service measurement
component uses one or more QoS parameters to generate a service index indicating which cloud service
best fits the user's service request requirements. The cloud manager component manages the
smooth gathering of information from the cloud administrator component and delegates it to the cloud
service measurement component. The cloud data discovery component deals with gathering the requisite
service-level data to compute the QoS-based ranking of cloud services. The cloud monitor component first
discovers the Cloud services which can satisfy the user's essential QoS requirements; then it monitors
the performance of those Cloud services. The history manager component stores the history of services
provided by the cloud provider. The cloud monitor component gathers data such as VM speed,
memory, scaling latency, storage performance, network latency and available bandwidth. It also keeps
track of how the SLA requirements of previous customers are being satisfied by the Cloud provider. The
history manager component stores past customer feedback, interaction and service experience
information about the cloud service for each cloud vendor.
Cloud IDE:
An IDE is a programming environment that has been packaged as an application, typically consisting
of a code editor, a compiler, a debugger, and a graphical user interface (GUI) builder. Frequently,
cloud IDEs are not only cloud-based but also designed for the creation of cloud apps. However, some
cloud IDEs are optimized for the creation of native apps for smartphones, tablets and other mobile
devices.
A virtual testing and development environment lets you test OSes and applications before deployment.
You can even build a virtual test lab at home.
The entire development workspace moves into the cloud. The developer's environment is a combination of
the IDE, the local build system, the local runtime (to test and debug the locally edited code), the
connections between these components, and their dependencies on tools such as Continuous
Integration or central services such as Web Services, specialized data stores, legacy applications or
partner-provided services.
The cloud-based workspace is centralized, making it easy to share. Developers can invite others into
their workspace to co-edit, co-build, or co-debug. Developers can communicate with one another in
the workspace itself – changing the entire nature of pair programming, code reviews and classroom
teaching. The cloud can offer improvements in system efficiency and density, giving each individual
workspace a configurable slice of the available memory and compute resources.
The benefits of cloud IDEs include accessibility from anywhere in the world, from any compatible
device; minimal-to-nonexistent download and installation; and ease of collaboration among
geographically dispersed developers.
The emergence of HTML 5 is often cited as a key enabler of cloud IDEs because that standard
supports browser-based development. Other key factors include the increasing trends toward mobility,
cloud computing and open source software.
Data is stored in remote data centers as part of the cloud services provided by cloud service providers,
so storing and processing data in a remote data center should be done with utmost care.
Cloud computing security is a major concern to be addressed nowadays. If security measures are not
provided properly for data operations and transmissions, then data is at high risk. Since cloud
computing provides a facility for a group of users to access the stored data, there is a
possibility of high data risk. The strongest security measures are to be implemented by
identifying the security challenges and the solutions to handle them. From Fig. 1 it is clear that
Data Security and Privacy are the most important and critical factors to be considered.
1. Confidentiality: Top vulnerabilities are to be checked to ensure that data is protected from
any attacks. Security testing therefore has to be done to protect data from malicious users,
covering attacks such as cross-site scripting and weaknesses in access control mechanisms.
2. Integrity: To provide security for client data, thin clients are used where only a few
resources are available. Users should not store personal data such as passwords on them, so
that integrity can be assured.
3. Availability: Availability is one of the most important issues, since several organizations face
downtime as a major problem. It depends on the agreement between the vendor and the client.
Locality:
In cloud computing, data is distributed over a number of regions, and finding the location of the data
is difficult. When data is moved to different geographic locations, the laws governing that data
can also change, so there is an issue of compliance with data privacy laws in cloud computing.
Customers should know the location of their data, and it should be communicated by the service provider.
Integrity:
The system should maintain security such that data can only be modified by authorized persons. In a
cloud-based environment, data integrity must be maintained correctly to avoid data loss. In general,
every transaction in cloud computing should follow the ACID properties to preserve data integrity. Many
web services face problems with transaction management because they use HTTP, which does not
support transactions or guarantee delivery. This can be handled by implementing transaction
management in the API itself.
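As a small illustration of the ACID point (using a local SQLite database purely as a stand-in for a cloud data store), a transaction either commits completely or is rolled back on failure:

```python
# Illustrative transaction handling with SQLite as a stand-in data store:
# both updates commit together, or neither does (atomicity/consistency).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(amount: int) -> None:
    """Move money between accounts inside a single transaction."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = 'alice'", (amount,))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = 'bob'", (amount,))
    except sqlite3.Error:
        print("transfer failed; changes rolled back")

transfer(30)
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```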
Access:
Keys are distributed only to authorized parties using various key distribution mechanisms. To
secure data from unauthorized users, the data security policies must be strictly followed. Since
access is given to all cloud users over the Internet, it is necessary to manage privileged user access.
Users can employ data encryption and protection mechanisms to reduce security risk.
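For illustration of the encryption point, a minimal client-side encryption sketch using symmetric encryption is shown below; it assumes the third-party cryptography package, and key management is deliberately left out.

```python
# Minimal client-side encryption sketch before uploading data to the cloud.
# Assumes the third-party 'cryptography' package; key storage and rotation
# are out of scope here and need careful handling in practice.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be kept secret and backed up
cipher = Fernet(key)

plaintext = b"customer record: account=42, balance=100"
ciphertext = cipher.encrypt(plaintext)      # safe to store with a provider

# Later, after downloading the ciphertext again:
assert cipher.decrypt(ciphertext) == plaintext
print("round-trip encryption/decryption succeeded")
```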
Confidentiality:
Data is stored on remote servers by cloud users, and content such as documents and videos can be stored
with a single cloud provider or with multiple providers. When data is stored on a remote server, data
confidentiality is one of the most important requirements. Maintaining confidentiality requires
understanding and classifying the data: users should be aware of which data is stored in the cloud and
who can access it.
Breaches:
Data breaches are another important security issue to be considered in the cloud. Since large amounts
of data from various users are stored in the cloud, there is a possibility of a malicious user entering the
cloud, leaving the entire cloud environment prone to a high-value attack. A breach can occur due to
various accidental transmission issues or due to an insider attack.
Segregation:
One of the major characteristics of cloud computing is multi-tenancy. Since multi-tenancy allows data
from multiple users to be stored on the same cloud servers, there is a possibility of data intrusion: by
injecting client code or by using an application, data can be intruded upon. So there is a necessity to
store each customer's data separately from the remaining customers' data.
Storage:
Data stored in virtual machines raises several issues; one such issue is the reliability of data
storage. Virtual machines need to be stored on physical infrastructure, which may cause security
risks and adds to the issues of data storage and data access. In case of disaster, the cloud providers are
responsible for the loss of data.
RSA-based data integrity checks can be provided by combining identity-based cryptography and RSA
signatures. SaaS must ensure that there are clear boundaries, both at the physical level and at the
application level, to segregate data from different users. A distributed access control architecture can be
used for access management in cloud computing. To identify unauthorized users, credential-based or
attribute-based policies are preferable. Permission-as-a-service can be used to tell the user which parts
of the data can be accessed. Fine-grained access control mechanisms enable the owner to delegate most
computation-intensive tasks to cloud servers without disclosing the data contents. A data-driven
framework can be designed for secure data processing and sharing between cloud users.
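To make the RSA-signature idea concrete, a minimal sign-and-verify sketch is shown below (it uses the third-party cryptography package; the identity-based part mentioned above is not shown).

```python
# RSA signature sketch for a data integrity check: sign the data before
# uploading, verify the signature after downloading. Assumes the third-party
# 'cryptography' package; identity-based key derivation is not shown.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

data = b"file contents stored in the cloud"
signature = private_key.sign(
    data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# On retrieval: verify() raises InvalidSignature if the data was tampered with.
public_key.verify(
    signature,
    data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("integrity check passed")
```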
New technologies are forcing data center providers to adopt new methods to increase efficiency,
scalability and redundancy. Let's face facts: there are numerous big trends that have driven the
increased use of data center facilities. These trends include:
More users
More devices
More cloud
More workloads
A lot more data
As infrastructure improves, more companies have looked towards the data center provider to offload a
big part of their IT infrastructure. With better cost structures and even better incentives in moving
towards a data center environment, organizations of all sizes are looking at colocation as an option for
their IT environment.
With that, data center administrators are teaming with networking, infrastructure and cloud architects
to create an even more efficient environment. This means creating intelligent systems from the
hardware to the software layer. This growth in data center dependency has resulted in direct growth
around automation and orchestration technologies.
Now, organizations can granularly control resources, both internally and in the cloud. This type of
automation can be seen at the software layer as well as the hardware layer. Vendors and products such as
BMC, ServiceNow, and Microsoft SCCM/SCOM are working towards unifying massive systems under one
management engine to provide a single pane of glass into the data center workload environment.