Cloud Computing Unit 2
REST
REST, or REpresentational State Transfer, is an architectural style that provides
standards for communication between computer systems on the web, making it easier
for systems to talk to each other. REST-compliant systems, often called RESTful
systems, are characterized by being stateless and by separating the concerns of
client and server.
As long as each side knows what format of messages to send to the other, client and
server can be kept modular and separate. By separating user interface concerns from
data storage concerns, we improve the flexibility of the interface across platforms and
improve scalability by simplifying the server components. The separation also allows
each component to evolve independently.
By using a REST interface, different clients hit the same REST endpoints, perform
the same actions, and receive the same responses.
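For illustration, here is a minimal sketch of a RESTful interaction in Python using the requests library; the base URL and the /items resource are hypothetical placeholders, not a real API.

```python
# Minimal sketch of a RESTful interaction using Python's requests library.
# The endpoint URL and resource name are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.com"   # hypothetical REST API

# Any client (browser, mobile app, script) hitting the same endpoint
# with the same verb receives the same kind of response.
response = requests.get(f"{BASE_URL}/items/42",
                        headers={"Accept": "application/json"})

if response.status_code == 200:
    item = response.json()             # representation of the resource
    print(item)
elif response.status_code == 404:
    print("Resource not found")
```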
Statelessness
RESTful systems are stateless: each request from the client contains all the information the server needs to handle it, and the server stores no client session state between requests. Any session state lives entirely on the client side.
System of Systems
System of systems is a type of architecture that uses a single interface to allow
multiple systems to be used as one. A classic example is the Internet. Internet
protocols such as HTTP allow the use of information and services across millions of
physical machines using a single interface such as a web browser. Cloud computing
is also a system of systems approach to computing that provides a single platform to
access the computing power of many physical machines.
Web Services
A web service is a set of open protocols and standards that allow data to be
exchanged between different applications or systems. Web services can be used by
software programs written in a variety of programming languages and running on a
variety of platforms to exchange data via computer networks such as the Internet in
a similar way to inter-process communication on a single computer.
Any software, application, or cloud technology that uses standardized web protocols
(HTTP or HTTPS) to connect, interoperate, and exchange data messages –
commonly XML (Extensible Markup Language) – across the internet is considered a
web service. Web services have the advantage of allowing programs developed in
different languages to communicate with one another by exchanging data between
clients and servers over a web service. A client invokes a web service by submitting an
XML request, to which the service responds with an XML response.
If a web service cannot be found, it cannot be used. First, the client invoking the web
service must know where the service is located. Second, the client application
must understand what the web service does in order to invoke the correct
operation. This is accomplished with WSDL, the Web Services Description Language.
The WSDL file is another XML-based document that explains to the client application
what the web service does. Using the WSDL document, the client application can work
out where the web service is located and how to use it.
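As a hedged sketch of how a client can consume a service from its WSDL, the snippet below uses the third-party Python library zeep; the WSDL URL and the GetPrice operation are assumptions for illustration only.

```python
# Sketch: consuming a web service described by a WSDL document using the
# zeep SOAP client. The WSDL URL and the GetPrice operation are hypothetical.
from zeep import Client

# zeep downloads and parses the WSDL, learning where the service lives
# and which operations (with which message formats) it exposes.
client = Client("https://example.com/pricing?wsdl")

# Call an operation exactly as the WSDL describes it.
price = client.service.GetPrice(itemId="12345")
print(price)
```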
These requests are made using Remote Procedure Calls (RPC): calls to methods
hosted by the relevant web service.
Example: Flipkart offers a web service that displays prices for items sold on
Flipkart.com. The front end or presentation layer can be written in .NET or Java, and
either language can consume the web service.
The most important part of a web service design is the data exchanged between the
client and the server, which is XML. XML (Extensible Markup Language) is a simple
intermediate language understood by a wide range of programming languages; it is a
counterpart to HTML. When programs communicate with one another, they do so using
XML, which creates a common platform for applications written in different
programming languages to talk to each other.
For transmitting XML data between applications, web services employ SOAP
(Simple Object Access Protocol), and the data is sent over standard HTTP. A SOAP
message is the data exchanged between the web service and the application, and it
consists entirely of an XML document. Because the content is written in XML, the
client application that calls the web service can be written in any programming
language.
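The following sketch shows what a raw SOAP call can look like from Python: an XML envelope posted over plain HTTP. The service URL, namespace, and GetPrice operation are hypothetical.

```python
# Hedged sketch of a raw SOAP call: an XML envelope sent over plain HTTP.
# The service URL, namespace, and GetPrice operation are hypothetical.
import requests

soap_envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/pricing">
      <ItemId>12345</ItemId>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/pricing-service",
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
print(response.text)   # the reply is itself an XML (SOAP) document
```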
(a) XML Based: A web service uses XML at its data representation and data transport
layers. Using XML removes any networking, operating system, or platform binding, so
web service-based applications are highly interoperable at their core.
(b) Loosely Coupled: A consumer of a web service is not tied directly to that service.
The web service interface can change over time without compromising the client's
ability to interact with the service. A tightly coupled system, by contrast, means the
client and server logic are closely intertwined, so if one interface changes the other
must be updated as well. A loosely coupled architecture makes software systems more
manageable and allows easier integration between different systems.
(c) Capability to be Synchronous or Asynchronous: Synchronicity refers to how the
client is bound to the execution of the service. In synchronous invocations, the client is
blocked and must wait for the service to complete its operation before continuing.
Asynchronous operations allow a client to invoke a service and then continue with
other work; the client collects the result later, whereas a synchronous client receives its
result as soon as the service completes. Asynchronous capability is a key factor in
enabling loosely coupled systems (see the sketch after this list).
(d) Coarse-Grained: Object-oriented technologies such as Java expose their services
through individual methods. At the enterprise level, an individual method is far too
fine-grained an operation to be useful. Building a Java application from the ground up
requires developing several fine-grained methods, which are then composed into a
coarse-grained service that is consumed by either a client or another service.
Businesses, and the interfaces they expose, should be coarse-grained. Web service
technology provides a natural way to define coarse-grained services that expose the
right amount of business logic.
(e) Supports Remote Procedure Calls: Consumers can use an XML-based protocol to
call procedures, functions, and methods on remote objects via web services. A web
service must support the input and output framework exposed by the remote system.
Over the last few years, Enterprise JavaBeans (EJBs) and .NET components have
become increasingly prevalent in architectural and enterprise deployments, and both
technologies are distributed and accessed through a variety of RPC mechanisms. A
web service can support RPC by providing services of its own, equivalent to those of a
traditional component, or by translating incoming invocations into an invocation of an
EJB or .NET component.
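To make point (c) above concrete, the sketch below contrasts synchronous and asynchronous invocation using Python's asyncio; call_service_sync and call_service_async are stand-ins for a remote service, not a real API.

```python
# Sketch contrasting synchronous and asynchronous invocation of a
# (simulated) remote service.
import asyncio
import time

def call_service_sync(n):
    time.sleep(1)           # client is blocked while the service works
    return n * 2

async def call_service_async(n):
    await asyncio.sleep(1)  # client can do other work while waiting
    return n * 2

# Synchronous: the caller waits for each result before continuing (~3 s).
sync_results = [call_service_sync(i) for i in range(3)]

# Asynchronous: all calls are issued, results are collected later (~1 s).
async def main():
    tasks = [asyncio.create_task(call_service_async(i)) for i in range(3)]
    return await asyncio.gather(*tasks)

print(sync_results, asyncio.run(main()))
```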
Pub/Sub model
In the publish/subscribe (pub/sub) model, message producers (publishers) send messages to named topics rather than directly to specific receivers. Consumers (subscribers) register interest in topics and receive every message published to them. Publishers and subscribers are decoupled: neither needs to know the other's identity, location, or availability, which makes the model a natural fit for loosely coupled, event-driven cloud systems.
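A minimal in-memory sketch of the idea follows; a real deployment would use a message broker or a managed cloud pub/sub service, and the Broker class here is purely illustrative.

```python
# Minimal in-memory sketch of the pub/sub pattern; real systems would use
# a message broker or a managed cloud pub/sub service instead.
from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        # The publisher never learns who (if anyone) receives the message.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
broker.subscribe("orders", lambda msg: print("billing saw:", msg))
broker.subscribe("orders", lambda msg: print("shipping saw:", msg))
broker.publish("orders", "order #42 created")
```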
Virtualization
Virtualization means creating a virtual counterpart of an existing system such as a
desktop, server, network resource, or operating system. Broadly speaking, it is a
technique that allows multiple users or organizations to share a single physical
resource or application among themselves.
Types of Virtualization
Operating System Virtualization
Hardware Virtualization
Server Virtualization
Storage Virtualization
Implementation Levels of Virtualization
1) Instruction Set Architecture (ISA) Level
ISA virtualization works through ISA emulation. It is used to run legacy code written
for a different hardware configuration; such code can run on any virtual machine that
exposes the required ISA. With this, binary code that originally needed additional
layers to run becomes capable of running on x86 machines, and it can also be tweaked
to run on x64 machines. With ISA emulation, the virtual machine can be made
hardware agnostic.
For basic emulation, an interpreter is needed, which interprets the source instructions
and converts them into a form the host hardware can execute. This is one of the five
implementation levels of virtualization in cloud computing.
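As a toy illustration of ISA emulation, the sketch below interprets a made-up guest instruction set one instruction at a time; the opcodes and registers are invented purely for the example.

```python
# Toy sketch of ISA emulation: an interpreter fetches, decodes, and executes
# instructions of a made-up "guest" instruction set one at a time.
def emulate(program):
    regs = {"r0": 0, "r1": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":          # LOAD reg, constant
            regs[args[0]] = args[1]
        elif op == "ADD":         # ADD dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        pc += 1
    return regs

# Guest "binary" that the host CPU could not run natively.
print(emulate([("LOAD", "r0", 2), ("LOAD", "r1", 3),
               ("ADD", "r0", "r1"), ("HALT",)]))   # {'r0': 5, 'r1': 3}
```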
2) Hardware Abstraction Level (HAL)
True to its name, HAL performs virtualization at the level of the hardware. It makes use
of a hypervisor for its functioning. The virtual machine is formed at this level and
manages the hardware through the virtualization process. It allows virtualization of
each hardware component, such as the input-output devices, the memory, and the
processor.
This way, multiple users can share the same hardware and run multiple virtualization
instances at the very same time. It is mostly used in cloud-based infrastructure.
4) Library Level
Working directly with the operating system can be cumbersome, so applications often
use the APIs exported by user-level libraries instead. Because these APIs are well
documented, virtualization at the library level is preferred in such scenarios. It is made
possible by API hooks, which control the communication link from the application to
the rest of the system.
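As a rough analogy for API hooking, the sketch below wraps Python's built-in open() so that every call passes through an interception layer first; the logging behavior is just a placeholder for what a real library-level virtualization layer might do (remap paths, enforce policy, and so on).

```python
# Rough sketch of the API-hook idea behind library-level virtualization:
# intercept a library call and augment or redirect it before it reaches
# the real implementation.
import builtins

_real_open = builtins.open

def hooked_open(path, *args, **kwargs):
    # A virtualization layer could remap paths, enforce policy, or log here.
    print(f"[hook] open() called for {path}")
    return _real_open(path, *args, **kwargs)

builtins.open = hooked_open   # the application keeps calling open() as usual
```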
5) Application Level
Application-level virtualization is used when only a single application needs to be
virtualized rather than the entire platform environment; it is the last of the
implementation levels of virtualization in cloud computing. It is generally used for
applications written in high-level languages: the application sits on top of a
virtualization layer (such as a language runtime), which itself runs as an ordinary
application program on the host.
It lets programs written in high-level languages and compiled for the application-level
virtual machine run seamlessly.
Virtualization Structures
1) Hypervisor and Xen Architecture
A micro-kernel hypervisor includes only the basic and unchanging functions (such as
physical memory management and processor scheduling); the device drivers and
other changeable components sit outside the hypervisor. A monolithic hypervisor, in
contrast, implements all of these functions, including the device drivers, so its code
base is much larger.
The core components of a Xen system are the hypervisor, kernel, and applications,
and the organization of these three components is important. Like other virtualization
systems, many guest OSes can run on top of the hypervisor. However, not all guest
OSes are created equal, and one in particular controls the others. The guest OS that
has this control ability is called Domain 0, and the others are called Domain U.
Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots without
any file system drivers being available. Domain 0 is designed to access hardware
directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to
allocate and map hardware resources for the guest domains (the Domain U
domains).
Full Virtualization
With full virtualization, noncritical instructions run directly on the hardware, while
critical instructions are trapped and emulated in software. This approach was
implemented by VMware and many other software companies. VMware puts the VMM
at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and
identifies the privileged, control- and behavior-sensitive instructions. When these
instructions are identified, they are trapped into the VMM, which emulates their
behavior. The method used in this emulation is called binary translation. Full
virtualization therefore combines binary translation and direct execution. The guest OS
is completely decoupled from the underlying hardware and is consequently unaware
that it is being virtualized.
The performance of full virtualization may not be ideal, because it involves binary
translation, which is rather time-consuming. In particular, full virtualization of I/O-
intensive applications is a big challenge. Binary translation employs a code cache to
store translated hot instructions to improve performance, but this increases the cost of
memory usage. At the time of this writing, the performance of full virtualization on the
x86 architecture is typically 80 to 97 percent that of the host machine.
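The toy model below illustrates the trap-and-emulate idea behind full virtualization (it is not real VMM code): unprivileged instructions "run directly", while instructions in an assumed PRIVILEGED set trap into a VMM routine that updates only the guest's virtual state.

```python
# Conceptual sketch (not real VMM code): unprivileged guest instructions run
# "directly", while privileged/sensitive ones trap into the VMM, which
# emulates their effect on the guest's virtual state.
PRIVILEGED = {"CLI", "STI", "WRITE_CR3"}

def vmm_emulate(instr, guest_state):
    # The VMM updates the guest's *virtual* hardware, never the real one.
    if instr == "CLI":
        guest_state["interrupts_enabled"] = False
    elif instr == "STI":
        guest_state["interrupts_enabled"] = True
    elif instr == "WRITE_CR3":
        guest_state["page_table_base"] = "guest-provided value"

def run_guest(instructions, guest_state):
    for instr in instructions:
        if instr in PRIVILEGED:
            vmm_emulate(instr, guest_state)         # trap into the VMM
        else:
            guest_state["executed"].append(instr)   # direct execution

state = {"executed": [], "interrupts_enabled": True, "page_table_base": None}
run_guest(["MOV", "ADD", "CLI", "MOV", "STI"], state)
print(state)
```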
Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS.
This host OS is still responsible for managing the hardware. The guest OSes are
installed and run on top of the virtualization layer. Dedicated applications may run on
the VMs. Certainly, some other applications
can also run with the host OS directly. This host-based architecture has some
distinct advantages, as enumerated next. First, the user can install this VM
architecture without modifying the host OS. The virtualizing software can rely on the
host OS to provide device drivers and other low-level services. This will simplify the
VM design and ease its deployment.
Para-Virtualization
In para-virtualization, the guest operating systems are modified: an intelligent compiler
assists in replacing the nonvirtualizable OS instructions with hypercalls. The traditional
x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3. The lower
the ring number, the higher the privilege of the instructions being executed. The OS is
responsible for managing the hardware, and its privileged instructions execute at Ring
0, while user-level applications run at Ring 3.
CPU Virtualization
A CPU architecture is virtualizable if it supports running the VM's privileged and
unprivileged instructions in the CPU's user mode while the VMM runs in supervisor
mode. When the privileged instructions, including control- and behavior-sensitive
instructions, of a VM are executed, they are trapped into the VMM. In this case, the
VMM acts as a unified mediator for hardware access from the different VMs,
guaranteeing the correctness and stability of the whole system. However, not all CPU
architectures are virtualizable. RISC CPU architectures can be naturally virtualized
because all their control- and behavior-sensitive instructions are privileged instructions.
x86 CPU architectures, on the contrary, were not primarily designed to support
virtualization: about 10 sensitive instructions (such as SGDT and SMSW) are not
privileged, so they cannot be trapped by the VMM when executed in user mode.
Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by
modern operating systems. In a traditional execution environment, the operating
system maintains mappings of virtual memory to machine memory using page tables,
a one-stage mapping from virtual memory to machine memory. All modern x86 CPUs
include a memory management unit (MMU) and a translation lookaside buffer (TLB) to
optimize virtual memory performance. In a virtual execution environment, however,
memory virtualization involves sharing the physical system memory (RAM) and
dynamically allocating it to the physical memory of the VMs, so a two-stage mapping is
needed: the guest OS maps guest virtual memory to guest physical memory, and the
VMM maps guest physical memory to the actual machine memory.
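The following toy sketch illustrates the resulting two-stage address translation; the page numbers are arbitrary, and the dictionaries merely stand in for the guest's page tables and the VMM's mapping.

```python
# Toy illustration of two-stage mapping in memory virtualization:
# guest virtual page -> guest "physical" page (guest OS page table), then
# guest physical page -> machine page (maintained by the VMM).
guest_page_table = {0: 7, 1: 3}    # guest virtual -> guest physical
vmm_page_table   = {7: 42, 3: 19}  # guest physical -> machine (host) page

def translate(guest_virtual_page):
    guest_physical = guest_page_table[guest_virtual_page]
    machine_page = vmm_page_table[guest_physical]
    return machine_page

print(translate(0))   # 42: two lookups instead of the native one-stage lookup
```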
I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual
devices and the shared physical hardware. At the time of this writing, there are three
ways to implement I/O virtualization: full device emulation, para-virtualization, and
direct I/O. Full device emulation is the first approach for I/O virtualization. Generally,
this approach emulates well-known, real-world devices.
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key
idea of SV-IO is to harness the rich resources of a multicore processor. All tasks
associated with virtualizing an I/O device are encapsulated in SV-IO. It provides virtual
devices and an associated access API to the VMs and a management API to the
VMM. SV-IO defines one virtual interface (VIF) for every kind of virtualized I/O device,
such as virtual network interfaces, virtual block devices (disks), virtual camera devices,
and others. The guest OS interacts with the VIFs via VIF device drivers. Each VIF
consists of two message queues: one for outgoing messages to the device and one for
incoming messages from the device. In addition, each VIF has a unique ID that
identifies it in SV-IO.
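A minimal sketch of the VIF abstraction described above might look like this in Python; the class and its fields are assumptions for illustration, not SV-IO's actual interface.

```python
# Sketch of the VIF abstraction: each virtual interface has a unique ID plus
# one queue for outgoing and one for incoming messages.
from collections import deque
import itertools

_vif_ids = itertools.count()

class VIF:
    def __init__(self, kind):
        self.id = next(_vif_ids)   # unique ID identifying the VIF in SV-IO
        self.kind = kind           # e.g. "network", "block", "camera"
        self.outgoing = deque()    # messages from the guest to the device
        self.incoming = deque()    # messages from the device to the guest

    def send(self, msg):
        self.outgoing.append(msg)

    def receive(self):
        return self.incoming.popleft() if self.incoming else None

nic = VIF("network")
nic.send({"op": "tx", "payload": b"\x01\x02"})
```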
Many cloud computing providers offer virtualization and disaster recovery services
as part of their core offering, which can make it easier for organizations to implement
and manage these technologies. This can be particularly useful for small and
medium-sized businesses that may not have the in-house expertise or resources to
manage these complex technologies on their own.
Disaster recovery in cloud computing can take two forms: physical disaster recovery
and virtual disaster recovery.
Physical disaster recovery involves using physical hardware and infrastructure to
back up and restore data and systems in the event of a disaster. This typically means
maintaining duplicate hardware and infrastructure at a secondary physical site that can
take over if the primary site fails.