Final CC QB With Ans


Q1. Define Cloud and explain the historical development of cloud.

ANS:-

Definition:-

The cloud is a large group of interconnected computers. These computers can be personal
computers or network servers. Cloud computing is a technology that uses the Internet and
central remote servers to maintain data and applications.

 HISTORICAL DEVELOPMENTS
 Client/Server Computing: Centralized Applications and Storage
o In the client/server model, all the software applications, data, and control resided on
huge mainframe computers, known as servers.
o If a user wanted to access specific data or run a program, he had to connect to the
mainframe and then do his business. Users connected to the server via a computer
terminal, called a workstation or client.
 Drawbacks in client /server Model
o Processing power is limited.
o Access was not immediate, nor could two users access the same data at the same
time.
o When multiple people share a single computer, you have to wait for your turn.
o There isn't always immediate access in a client/server environment.
 Peer-to-Peer Computing: Sharing Resources
o P2P computing defines a network architecture in which each computer has
equivalent capabilities and responsibilities.
o In the P2P environment, every computer is a client and a server; there are no masters
and slaves.
o P2P enables direct exchange of resources and services.
o There is no need for a central server
o P2P was a decentralizing concept. Control is decentralized, with all computers
functioning as equals. Content is also dispersed among the various peer computers
 Distributed Computing:
o Providing More Computing Power
o One of the subsets of the P2P model.
o Distributed computing, where idle PCs across a network or Internet are tapped to
provide computing power for large, processor-intensive projects.
 Collaborative Computing: Working as a Group
o Enabling multiple users to work simultaneously on the same computer-based project
is called collaborative computing.
o The goal was to enable multiple users to collaborate on group projects online, in real
time. To collaborate on any project, users must first be able to talk to one another.
o Most collaboration systems offer the complete range of audio/video options, for full-
featured multiple-user video conferencing.
 Cloud Computing: The Next Step in Collaboration
o With the growth of the Internet, there was no need to limit group collaboration to a
single enterprise's network environment. Users from multiple locations within a
corporation, and from multiple organizations, desired to collaborate on projects that
crossed company and geographic boundaries.
Q.2. Explain in detail Characteristics and Benefits of cloud.
ANS:-
 CHARACTERISTICS OF CLOUD COMPUTING
1. Agility
The cloud works in a distributed computing environment. It shares resources among
users and works very fast.
2. High availability and reliability
The availability of servers is high and more reliable because the chances of
infrastructure failure are minimum.
3. High Scalability
Cloud offers "on-demand" provisioning of resources on a large scale, without having
to engineer for peak loads.
4. Multi-Sharing
With the help of cloud computing, multiple users and applications can work more
efficiently with cost reductions by sharing common infrastructure.
5. Device and Location Independence
Cloud computing enables the users to access systems using a web browser regardless
of their location or what device they use e.g. PC, mobile phone, etc. As infrastructure is
off-site (typically provided by a third-party) and accessed via the Internet, users can connect
from anywhere.

6. Maintenance
Maintenance of cloud computing applications is easier, since they do not need to be
installed on each user's computer and can be accessed from different places. This also
reduces costs.
7. Low Cost
By using cloud computing, costs are reduced because, to take the services of
cloud computing, an IT company need not set up its own infrastructure and pays only
as per its usage of resources.
8. Services in the pay-per-use mode
Application Programming Interfaces (APIs) are provided to the users so that they
can access services on the cloud by using these APIs and pay the charges as per the usage of
services.
 BENEFITS OF CLOUD COMPUTING:--
1. Lower costs:
Because cloud networks operate at higher efficiencies and with greater utilization,
significant cost reductions are often encountered.
2. Ease of utilization:
Depending upon the type of service being offered, you may find that you do not
require hardware or software licenses to implement your service.
3. Quality of Service:
The Quality of Service (QoS) is something that you can obtain under contract from
your vendor.
4. Reliability:
The scale of cloud computing networks and their ability to provide load balancing
and failover makes them highly reliable, often much more reliable than what you can
achieve in a single organization.
5. Outsourced IT management:
A cloud computing deployment lets someone else manage your computing
infrastructure while you manage your business. In most instances, you achieve
considerable reductions in IT staffing costs.
6. Simplified maintenance and upgrade:
Because the system is centralized, you can easily apply patches and upgrades. This
means your users always have access to the latest software versions.
7. Low Barrier to Entry:
In particular, upfront capital expenditures are dramatically reduced. In cloud
computing, anyone can be a giant at any time.
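Benefit 4 above, reliability through load balancing and failover, can be sketched in a few lines. The class below is a toy illustration, not any real cloud provider's API:

```python
class LoadBalancer:
    """Toy round-robin load balancer with failover (illustrative only)."""

    def __init__(self, servers):
        self.servers = list(servers)   # rotation order
        self.healthy = set(servers)    # servers currently up

    def mark_down(self, server):
        self.healthy.discard(server)

    def route(self, request):
        # Try each server once, rotating the pool; a request succeeds
        # as long as at least one replica is still healthy (failover).
        for _ in range(len(self.servers)):
            server = self.servers.pop(0)
            self.servers.append(server)
            if server in self.healthy:
                return f"{server} handled {request}"
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer(["node-1", "node-2", "node-3"])
lb.mark_down("node-1")                 # simulate a failure
print(lb.route("GET /index"))          # node-2 handled GET /index
```

Because requests simply skip failed replicas, the aggregate service stays available even though individual machines fail, which is what makes large cloud networks more reliable than a single organization's servers.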
Q3.Explain the concept of Cloud Virtualization.
ANS:-
 VIRTUALIZATION
 Virtualization is a technique which allows sharing a single physical instance of an
application or resource among multiple organizations or tenants (customers).
 Creating a virtual machine over an existing operating system and hardware is referred
to as Hardware Virtualization.
 Virtual Machines provide an environment that is logically separated from the
underlying hardware.
 The machine on which the virtual machine is created is known as the host machine,
and the virtual machine is referred to as the guest machine.
 This virtual machine is managed by a software or firmware, which is known as
hypervisor.

Fig: Traditional vs. Virtual (User → VM → Hypervisor → OS → Hardware)

 Hypervisor
The hypervisor is a firmware or low-level program that acts as a Virtual Machine
Manager.
There are two types of hypervisor:
o Type 1 is a hypervisor that is installed directly on the hardware and is also called a
"bare-metal" hypervisor.
o Type 2 is a hypervisor that is installed on top of an operating system and is also
called a "hosted" hypervisor.
In fig a, both types of hypervisors are shown.

Fig a Bare metal and hosted hypervisor
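The two hypervisor types can be contrasted with a toy model. The class and layer names below are illustrative assumptions, not a real hypervisor interface:

```python
class Hypervisor:
    """Minimal sketch of a Virtual Machine Manager."""

    def __init__(self, hypervisor_type, runs_on):
        self.type = hypervisor_type
        self.runs_on = runs_on   # the layer directly beneath the hypervisor
        self.guests = []

    def start_vm(self, name):
        self.guests.append(name)
        return f"{name} on a Type {self.type} hypervisor (above {self.runs_on})"

# Type 1 ("bare-metal"): installed directly on the hardware
bare_metal = Hypervisor(1, runs_on="hardware")

# Type 2 ("hosted"): installed on top of a host operating system
hosted = Hypervisor(2, runs_on="host OS")

print(bare_metal.start_vm("guest-1"))  # guest-1 on a Type 1 hypervisor (above hardware)
print(hosted.start_vm("guest-2"))      # guest-2 on a Type 2 hypervisor (above host OS)
```

The only difference captured here is what sits beneath the hypervisor, which is exactly the bare-metal vs. hosted distinction.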

 CHARACTERISTICS OF VIRTUALIZATION
 Partitioning:
In virtualization, many applications and operating systems are supported in a
single physical system by partitioning (separating) the available resources.

 Isolation:
Each virtual machine is isolated from its host physical system and other
virtualized machines. Because of this isolation, if one virtual instance crashes, it
doesn't affect the other virtual machines. Also, data isn't shared between one virtual
container and another.

 Encapsulation:
A virtual machine can be represented (and even stored) as a single file, so
you can identify it easily based on the service it provides. This encapsulated virtual
machine can be presented to an application as a complete entity.
Therefore, encapsulation can protect each application so that it doesn't interfere with
another application.
Q4 What are the pros and Cons of Virtualization.
ANS:-
 Pros of Virtualization
There are various pros of virtualization which are as follows −
 Cheaper –
Virtualization doesn't need actual hardware components to be purchased or
installed, and hence IT infrastructures find it to be a low-cost system to run. There is
no need to dedicate huge areas of space and huge monetary investments to
generate an on-site resource.
 Efficiency –
Virtualization also enables automatic updates to the hardware and software,
since these are installed and maintained by the third-party provider. Thus, individuals
and corporations don't have to spend money on them. Further, virtualization decreases
the load of resource management, supporting adaptability in the virtual environment.
 Portability –
We can simply move a virtual machine from a defective host server to a
new host server with a very high success rate.

 Flexibility –
Virtualization gives users the flexibility to use their resources as needed.
Whatever operations the cloud software performs to supply resources to the user can
simply be managed or completed through various steps.

 Cons of Virtualization
The cons of virtualization are as follows −
 Security –
Data is an important element of each organization. Data security is a concern
in a virtualized environment because the server is handled by third-party providers.
Thus, it is essential to select the virtualization solution carefully so that it can
provide adequate protection.
 Limitations –
Virtualization does have some limitations. Not every server and software out
there is adaptable to virtualization. Therefore, various IT infrastructures of
organizations will not support virtualized solutions. Further, some vendors
have stopped supporting them. To mitigate this, individuals and organizations
are required to use a hybrid system.
 Availability –
Availability is an essential element of an organization. Data is required to
remain accessible for extended periods; if not, the organization will lose out to the
competition in the market. Availability problems can arise from the virtualization
servers: if the virtualization servers go offline, the websites hosted on them will also
go down. This is entirely controlled by the third-party providers, and there is nothing
the client can do about it.
Q5. Explain the taxonomy of virtualization techniques.
ANS:-

 Server virtualization
There are three different approaches to server Virtualization:
 Full virtualization,
 Para-virtualization and
 OS partitioning
With full virtualization, a hypervisor serves as the hardware abstraction layer
and can host multiple virtual machines. The virtual machines are isolated from each
other.
With para-virtualization, specially modified operating system(s) are installed on
top of the hypervisor to host multiple guest operating systems.

 Application virtualization
With application virtualization, an application is packaged in a single executable or in a
set of files that can be distributed independently from the operating system.
There are different types of application virtualization of which two common types are :
 sandbox application
 application streaming.
Sandbox applications are completely isolated in what is called a "bubble", where the
application is encapsulated from the underlying OS.
No installation or additional driver installation is required;
all the operating system features required for application execution are already embedded
in the executable file.
Application streaming is a form of application virtualization where an application is
divided into multiple packages.
With application streaming, the application is stored on a central server and streamed
towards the user location.
Only the application data that is required will be streamed to the user.
For example, when a user wants to use an office program such as Word, the server
will not stream the whole Office application. Only the application package with the
Word application will be streamed to the user.
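The Word example above can be sketched as follows. The catalog contents and the client class are hypothetical, meant only to show that packages are streamed on demand rather than as a whole suite:

```python
# Hypothetical package catalog held on the central server. The whole
# Office suite lives server-side, but nothing is sent until requested.
SERVER_CATALOG = {
    "Word": ["word-core.pkg", "spellcheck.pkg"],
    "Excel": ["excel-core.pkg", "charting.pkg"],
}

class StreamingClient:
    def __init__(self):
        self.cached = set()   # packages already streamed to this endpoint

    def launch(self, app):
        # Stream only the packages for the requested application,
        # and only those not already present locally.
        needed = [p for p in SERVER_CATALOG[app] if p not in self.cached]
        self.cached.update(needed)
        return needed

client = StreamingClient()
print(client.launch("Word"))   # only Word's packages cross the network
print(client.launch("Word"))   # [] -- a second launch streams nothing new
```

Note that Excel's packages stay on the server untouched until a user actually asks for Excel.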
 Desktop virtualization
Desktop virtualization is the separation of a desktop, consisting of an operating system,
applications and user data, from the underlying endpoint.
The endpoint is the computer device which is used to access the desktop.
Desktop virtualization can be subdivided into two types:
 Client side
 Server side
With server side desktop virtualization, the end-user applications are executed remotely,
on a central server, and streamed towards the endpoint via a Remote Display Protocol or
other presentation and access virtualization technology.
With client side desktop virtualization, the applications are executed at the endpoint,
which is the user location, and presented locally on the user's computer.
The different types of desktop virtualization are shown in fig a below.
 Server side
Stateless desktops refer to virtual desktops that remain 'clean' or 'stateless'. All desktop-
related modifications, for example changes to applications by a user, are removed when
the user logs off. However, user-specific settings that are recorded in the user profile can
be stored and re-used.
Stateful desktops refer to virtual desktops where users have the freedom to install
software and to make changes to their desktop. This is also called user state or
profile virtualization.
 Storage virtualization
Storage virtualization technologies can be divided into two types:
 Block virtualization
 File virtualization
Block virtualization focuses on creating virtual disks so that distributed storage networks
appear as one storage system.
File virtualization creates a virtual file system of the storage devices in the network.

 Network virtualization
Network virtualization is where multiple networks can be combined into a single
network, or a single network can be logically separated into multiple parts.
The currently known network virtualizations are
 Virtual LAN (VLAN),
 Virtual IP (VIP)
 Virtual Private Network (VPN)
VLAN is a safe method of creating independent or isolated logical networks within a
shared network.
Devices in one isolated segment cannot communicate with devices of other segments
even if they are connected to the same physical network.
A VLAN is a common feature in all modern Ethernet switches, allowing the creation of
multiple virtual networks, which isolates each segment from the others.
Virtual IP (VIP) is an IP address that is not associated with a specific computer or network
interface card (NIC), but is normally assigned to a network device that is in the path of the
network traffic.
Virtual Private Network (VPN) is a private communication network that uses a public
network, such as the Internet. The purpose of a VPN is to guarantee confidentiality on an
unsecured network channel, from one geographical location to another.

Q6. Explain in brief VMware and Microsoft Hyper-V.

ANS:-
Hyper-V vs. VMware:
 Operating systems: Hyper-V supports Windows, Linux and FreeBSD; VMware
supports Windows, Linux, Unix and macOS.
 Pricing: Hyper-V's pricing depends on the number of cores on the host and may be
preferred by smaller companies; VMware charges per processor, and its pricing
structure might appeal to larger organizations.
 Storage and clustering: Hyper-V's Cluster Shared Volume is somewhat more complex
and more difficult to use than VMware's storage deployment system; VMware's
Virtual Machine File System (VMFS) holds a slight edge when it comes to clustering.
 Memory management: Hyper-V uses a single memory technique called "Dynamic
Memory." Using the dynamic memory settings, Hyper-V virtual machine memory can
be added or released from the virtual machine back to the Hyper-V host. VMware
implements a variety of techniques, such as memory compression and transparent
page sharing, to ensure that RAM use in the VM is optimized; it is a more complex
system than Hyper-V's memory technique.
Q7. Explain Cloud Reference Model.
ANS:-
 Diagram:-

Fig. The Cloud reference model

The cloud computing reference architecture defines five major actors:


1. Cloud consumer
2. Cloud provider
3. Cloud carrier
4. Cloud auditor and
5. Cloud broker.

 Explanation:-

 Cloud Consumer:-
A person or organization that maintains a business relationship with, and uses service
from Cloud Providers.

 Cloud Provider:-
A person, organization, or entity responsible for making a service available to
interested parties.

 Cloud Auditor:-
A party that can conduct independent assessment of cloud services, information system
operations, performance and security of the cloud implementation.

 Cloud broker:-
An entity that manages the use, performance and delivery of cloud services, and
negotiates relationships between cloud providers and cloud consumers.
 Cloud Carrier:-
An intermediary that provides connectivity and transport of cloud services from cloud
providers to cloud consumers.

Q 8. Explain the architecture of cloud.


 Diagram:

 Explanation:-

Cloud Computing Architecture is divided into 2 parts:

1. Frontend

2. Backend

1. Frontend :
The frontend of the cloud architecture refers to the client side of the cloud computing
system. It contains all the user interfaces and applications which are used by the client to
access the cloud computing services/resources. For example, use of a web browser to
access the cloud platform.
 Client Infrastructure – Client Infrastructure is a part of the frontend component. It
contains the applications and user interfaces which are required to access the cloud
platform.
 In other words, it provides a GUI (Graphical User Interface) to interact with the
cloud.
2. Backend :

Backend refers to the cloud itself which is used by the service provider. It contains
the resources as well as manages the resources and provides security mechanisms. Along
with this, it includes huge storage, virtual applications, virtual machines, traffic control
mechanisms, deployment models, etc.

 Application –

An application in the backend refers to the software or platform that the client
accesses. That is, it provides the service in the backend as per the client's requirement.

 Service –

A service in the backend refers to the three major types of cloud-based services: SaaS,
PaaS and IaaS. It also manages which type of service the user accesses.

 Runtime Cloud –

The runtime cloud in the backend provides the execution and runtime
platform/environment to the virtual machines.

 Storage –

Storage in backend provides flexible and scalable storage service and management
of stored data.

 Infrastructure –

Cloud infrastructure in the backend refers to the hardware and software components of
the cloud; it includes servers, storage, network devices, virtualization software, etc.

 Management –

Management in the backend refers to the management of backend components like
the application, service, runtime cloud, storage, infrastructure, and other security
mechanisms.

 Security –

Security in the backend refers to the implementation of different security mechanisms
in the backend to secure cloud resources, systems, files, and infrastructure for end users.

 Internet –

Internet connection acts as the medium or a bridge between frontend and backend
and establishes the interaction and communication between frontend and backend.
 Benefits of Cloud Computing Architecture :
 Makes overall cloud computing system simpler.
 Improves data processing requirements.
 Helps in providing high security.
 Makes it more modularized.
 Results in better disaster recovery.
 Gives good user accessibility.
 Reduces IT operating costs.

Q9.Explain In detail IaaS?


ANS:-

 Definition:-

IaaS is the basic layer in the cloud computing model. Infrastructure-as-a-Service
provides access to fundamental resources such as physical machines, virtual
machines, virtual storage, etc. IaaS delivers customizable infrastructure on demand. Apart
from these resources, IaaS also offers virtual machine disk storage, virtual local area
networks (VLANs), load balancers, IP addresses, and software bundles. All of the above
resources are made available to the end user via server virtualization.
IaaS examples can be categorized in two categories
1. Management layer
2. Physical infrastructure
Some service providers provide both of the above categories and some provide only the
management layer. Applications are installed and deployed on virtual machines; one
example of a virtual machine is Oracle VM. Hardware virtualization includes workload
partitioning, application isolation, sandboxing, and hardware tuning. Instead of purchasing
hardware, users can access this virtual hardware on a pay-per-use basis.
Users can take advantage of the full customization offered by virtualization to deploy their
infrastructure in the cloud. Some virtual machines come with pre-installed operating
systems and other software. On other virtual machines, operating systems and other
software can be installed as needed.
 Diagram:-

 Examples :

 Amazon Web Services (AWS),

 Microsoft Azure,

 Google Compute Engine (GCE)

 Benefits :
 Full control of the computing resources through administrative access to
VMs.
 Flexible and efficient renting of computer hardware
 Portability, interoperability with legacy applications
 Issues:
 Compatibility with legacy security vulnerabilities
 Virtual Machine sprawl
 Robustness of VM-level isolation
 Data erase practices
 Characteristics:
 Virtual machines with pre-installed software.
 Virtual machines with pre-installed operating systems such as Windows,
Linux, and Solaris.
 On-demand availability of resources.
 Allows to store copies of particular data at different locations.
 The computing resources can be easily scaled up and down.
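The last two characteristics, on-demand availability and easy scaling, can be sketched with a toy resource pool. The class and VM naming are illustrative, not a real IaaS API:

```python
class IaasPool:
    """Toy model of on-demand VM provisioning and release."""

    def __init__(self):
        self.vms = []

    def scale_to(self, target):
        # Scale up: provision new VMs on demand.
        while len(self.vms) < target:
            self.vms.append(f"vm-{len(self.vms) + 1}")
        # Scale down: release VMs that are no longer needed.
        while len(self.vms) > target:
            self.vms.pop()
        return list(self.vms)

pool = IaasPool()
print(pool.scale_to(3))   # peak load: ['vm-1', 'vm-2', 'vm-3']
print(pool.scale_to(1))   # quiet period: ['vm-1'], pay only for what runs
```

The point is that capacity follows demand in both directions, which is what distinguishes rented virtual hardware from purchased physical hardware.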
Q10.Explain In detail PaaS ?

 Definition:-

PaaS provides a computing platform with a programming language execution
environment. It provides a development and deployment platform for running applications in
the cloud. Application management is the core functionality of the middleware. PaaS
provides runtime environments for the applications.

 PaaS provides
 Applications deployment
 Configuring application components
 Provisioning and configuring supporting technologies
 PaaS classification:
 PaaS-I: Runtime environment with Web-hosted application development
platform. Rapid application prototyping.
 PaaS-II: Runtime environment for scaling Web applications. The runtime
could be enhanced by additional components that provide scaling
capabilities.
 PaaS-III: Middleware and programming model for developing distributed
applications in the cloud.
 Examples:
 Google App Engine
 Force.com
 Benefits:-
 Lower administrative overhead
 Lower total cost of ownership
 Scalable solutions
 More current system software
 Characteristics of PaaS:

1. The runtime framework executes end-user code according to the policies set
by the user and the provider.
2. Provide services for creation, delivery, monitoring, management, reporting of
applications.
3. PaaS provides built-in security, scalability, and web service interfaces.
4. PaaS provides built-in tools for defining workflow, approval processes, and
business rules.
5. It is easy to integrate PaaS with other applications on the same platform.

 Issues:
 Lack of portability between PaaS clouds
 Event based processor scheduling
 Security engineering of PaaS applications
Q11.Explain In detail SaaS.
ANS:-
 Definition:-
It is the service with which end users interact directly. It provides a means to free
users from complex hardware and software management.
There are several SaaS applications listed below:

 Customer Relationship Management (CRM)


 Human Resource (HR) solutions
 Billing and invoicing system
In SaaS, users simply access the application website, enter their credentials and billing
details, and can instantly use the application. Customers can customize their software. The
application is available to the customer on demand. SaaS can be considered a "one-to-many"
software delivery model. In SaaS, applications are built as per user needs.
From the examples mentioned below we can see why SaaS is considered a one-to-many
model.

 Examples:
 Gmail
 Google drive
 Dropbox
 WhatsApp

 Benefits

 Efficient use of software licenses


 Multitenant solutions

 Modest software tools

 Centralized management and data


 Platform responsibilities managed by provider

 Characteristics of SaaS:

1. They are available on demand.


2. The service delivered is one-to-many.
3. The service delivered is an integrated solution delivered on the contract.
4. The software applications are maintained by the vendor.
5. The license to the software may be subscription based or usage based.
6. The service is cost-effective since it does not require any maintenance at the
end-user side.
7. They can be scaled up or down on demand.
8. The application is centrally managed.

 Issues
 Network dependence
 Browser based risks
 Lack of portability between SaaS clouds

Q 12. Explain in detail Public Cloud.


ANS:-
In the public cloud, the services offered are made available to anyone, from anywhere,
and at any time through the Internet. In public clouds, one or more datacenters are
connected together, and services are implemented on these datacenters. Small enterprises
prefer public clouds due to their lower cost. Public clouds offer renting of infrastructure or
subscribing to application services. The public cloud keeps monitoring the services used by
users to provide billing as per usage.
 Diagram:-
 Benefits:
Some of the benefits of the public cloud are as follows:

 Cost Effective
Since the public cloud shares the same resources with a large number of customers, it
turns out to be inexpensive.

 Reliability
The public cloud employs a large number of resources from different locations. If any of
the resources fails, the public cloud can employ another one.

 Flexibility
The public cloud can smoothly integrate with private cloud, which gives customers a
flexible approach.

 Location Independence
Public cloud services are delivered through the Internet, ensuring location independence.

 Utility Style Costing


Public cloud is also based on pay-per-use model and resources are accessible whenever
customer needs them.

 High Scalability
They can be scaled up or down according to the requirement.
 Disadvantages
Here are some disadvantages of the public cloud model:

 It does not ensure a high level of security.
 Less Customizable
 It is comparatively less customizable than the private cloud.

Q13.Explain in detail Private Cloud.


ANS:-
The cloud is implemented within the private premises of an institution and generally
made accessible to the members of the institution or a subset of them. When customer
privacy is important, private clouds are preferable over public clouds. Instead of the
pay-as-you-go model used in the public cloud, other pricing schemes may be used in
private clouds. In a private cloud, sensitive information is kept in house.
A private cloud allows systems and services to be accessible only within an
organization. The private cloud is operated only within a single organization. However, it
may be managed internally by the organization itself or by a third party. The private cloud
model is shown in the diagram below.

 Diagram:-
 Benefits
 High Security and Privacy
 More Control
 The private cloud offers more control than the public cloud because it is accessed
only within an organization.
 Cost and Energy Efficiency
 Private clouds are not as cost-effective as public clouds, but they offer more
efficiency than public cloud resources.
 Disadvantages
 Restricted Area of Operation
 High Priced
 Limited Scalability
 Additional skilled expertise required to maintain

Q14.Explain in detail Hybrid Cloud.


ANS:-
Hybrid clouds are combinations of private clouds and public clouds. When the
advantages of public clouds and private clouds are taken together, that is known as a hybrid
cloud. A hybrid cloud allows services to be taken from public clouds when needed while
keeping sensitive information within private clouds. Dynamic provisioning refers to the
ability to acquire on-demand virtual machines in order to increase the capability of the
resulting distributed system and then release them.
 Diagram:-
 Benefits
 Scalability
 Flexibility
 Cost Efficiency
 Security
 Disadvantages
 Networking Issues that becomes complex due to presence of private and public
cloud.

Q 15. Explain in detail Community Cloud.


ANS:-
Community clouds are distributed systems created by integrating the services of
different clouds to address the specific needs of an industry, a community, or a business
sector.
 Sectors for community clouds are as follows:
 Media industry
 Healthcare industry
 Energy and other core industries
 Public sector
 Scientific research
Community clouds can provide a shared environment where services can facilitate
business-to-business collaboration.

 Diagram:-
 Benefits
 Cost Effective
 Sharing Among Organizations
 Security

Q 16. Write a short note:


a. Economics of the cloud.
b. Open Challenges.
c. Cloud Interoperability and standards
d. Scalability and Fault Tolerance.
A. Economics of the cloud:-
The economics of cloud computing is based on the pay-as-you-go method. It is
beneficial for the users. It eliminates some indirect costs, such as software licenses
and their support; users can use software applications on a subscription basis without any
upfront cost.

 Cloud Computing Allows:


 Reduces the capital costs of infrastructure.
 Removes the maintenance cost.
 Removes the administrative cost.
There are three different Pricing Strategies that are introduced by Cloud
Computing: Tiered Pricing, Per-unit Pricing, and Subscription-based Pricing.

 These are explained as following below.


 Tiered Pricing: Cloud services are offered in various tiers. Each tier offers a
fixed service agreement at a specific cost.
 Per-unit Pricing: The model is based upon the unit-specific service concept.
 Subscription-based Pricing: In this model, users are paying periodic subscription
fees for the usage of the software.
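The three strategies can be contrasted with a small calculation. All rates and tier boundaries below are made-up illustration figures, not any vendor's prices:

```python
def tiered_price(hours, tiers):
    """Tiered: pay the flat fee of the tier the usage falls into."""
    for limit, fee in tiers:
        if hours <= limit:
            return fee
    return tiers[-1][1]

def per_unit_price(units, rate):
    """Per-unit: pay strictly in proportion to units consumed."""
    return units * rate

def subscription_price(months, monthly_fee):
    """Subscription: a periodic fee, independent of actual usage."""
    return months * monthly_fee

# Illustrative figures only:
tiers = [(100, 50), (500, 150), (float("inf"), 400)]
print(tiered_price(150, tiers))     # 150 (usage falls into the second tier)
print(per_unit_price(720, 0.05))    # 36.0 (720 hours at $0.05/hour)
print(subscription_price(12, 20))   # 240 ($20/month for one year)
```

The same 720 hours of usage would cost different amounts under each scheme, which is why providers mix these models for different services.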
B. Open Challenges:-
 Challenges in Cloud Computing :
1. Portability
2. Interoperability
3. Reliability and Availability
4. Service quality
5. Computing performance
6. Security and Privacy

Q.17 Explain in detail Storage as a service.


Ans:

Storage as a service is a cloud business model in which a company leases or rents its storage
infrastructure to another company or individuals to store data.

Storage as a Service is cloud storage that you rent from a Cloud Service Provider (CSP) and
that provides basic ways to access that storage. Enterprises, small and medium businesses,
home offices, and individuals can use the cloud for multimedia storage, data repositories,
data backup and recovery, and disaster recovery.

There are also higher-tier managed services that build on top of this, such as Database as a
Service, in which you can write data into tables that are hosted through CSP resources.

Fig: Storage as a service

The key benefit to storage as a service is that you are offloading the cost and effort to
manage data storage infrastructure and technology to a third-party CSP. This makes it much
more effective to scale up storage resources without investing in new hardware or taking on
configuration costs.
Storage as a service is fast becoming the method of choice for small and medium scale
businesses. This is because storing files remotely rather than locally boasts an array of
advantages for professional users.

Storage as a service can be used for a variety of purposes, from long-term archival storage
to short-term transfers of large amounts of data. Since Storage as a service is a type of
software defined storage, the storage capacity available to the customer can vary easily, and
can be expanded at short notice without the capital outlay required to purchase extra servers.

In a typical situation, a company will decide that instead of conducting maintenance on a
huge tape library of back-ups and archival data, they will subcontract management of these
data to a Storage as a service provider.

Advantages of STaaS

Key advantages to STaaS in the enterprise include the following:

 Storage costs. Personnel, hardware and physical storage space expenses are
reduced.

 Disaster recovery. Having multiple copies of data stored in different locations
can better enable disaster recovery measures.

 Scalability. With most public cloud services, users only pay for the resources
that they use.

 Syncing. Files can be automatically synced across multiple devices.

 Security. Security can be both an advantage and a disadvantage, as security
methods may change per vendor. Data tends to be encrypted during transmission
and while at rest.

Examples of STaaS vendors include Dell EMC, Hewlett Packard Enterprise (HPE), NetApp
and IBM. Dell EMC provides Isilon NAS storage, EMC Unity hybrid-flash storage and
other storage options. HPE has an equally large, if not larger, presence in storage systems
compared to Dell EMC.
Q.18 Explain in detail Database as a Service.

Database as a Service (DBaaS) provides enterprises with a database solution that is
simple to use and easy to update. As the database takes a more central role within
data-heavy, application-focused IT departments, DBaaS fills a very important need
in this space.

DBaaS is a cost-efficient solution for organizations looking to set up and scale
databases, especially when operating large-scale, complex, and distributed app
components.

Database as a service is just one more "as a service" offering that can bring agility,
flexibility, and scaling to any business, no matter your size or industry.

Fig: Database as a Service (DBaas)

Database as a Service is a cloud-based software service used to set up and manage
databases. A database, remember, is a storage location that houses structured data. The
administrative capabilities offered by the service include scaling, securing, monitoring,
tuning and upgrading of the database and the underlying technologies, which are managed
by the cloud vendor.

Database as a service (DBaaS) is one of the fastest growing cloud services; it is projected
to reach $320 billion by 2025. The service allows organizations to take advantage of
database solutions without having to manage and maintain the underlying technologies.
The use of DBaaS is growing as more organizations shift from on-premises systems to cloud
databases. DBaaS vendors include cloud platform providers that sell database software and
other database makers that host their software on one or more of the cloud platforms.

The benefits of DBaaS set it apart from other Cloud services as it delivers database
functionality on the same scale as a relational database management system.

 Faster deployment. Free your resources from administrative tasks and engage
your employees on tasks that lead directly to innovation and business growth—
instead of merely keeping the systems running.
 Resource elasticity. The technology resources dedicated for database systems
can be changed in response to changing usage requirements. This is especially
suitable in business use cases where the demand for database workloads is
dynamic and not entirely predictable.
 Rapid provisioning. Self-service capabilities allow users to provision new
database instances as required, often with a few simple clicks. This removes the
governance hurdles and administrative responsibilities from IT.
 Business agility. Organizations can take advantage of rapid provisioning and
deployment to address changing business requirements. In DevOps
organizations, this is particularly useful as Devs and Ops both take on collective
responsibilities of operations tasks.
 Security. The technologies support encryption and multiple layers of security to
protect sensitive data at rest, in transit and during processing.
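The rapid-provisioning and elasticity benefits above can be sketched as a toy control plane. The `DBaaS` class and its method names are invented for illustration; real vendors expose equivalent operations through web consoles and APIs:

```python
import uuid

class DBaaS:
    """Toy DBaaS control plane: self-service provisioning and elastic
    scaling, with no hardware setup on the user's side (illustrative only)."""

    def __init__(self):
        self.instances = {}  # instance id -> configuration

    def provision(self, engine="postgres", storage_gb=10):
        # "a few simple clicks": a new instance is ready immediately
        instance_id = str(uuid.uuid4())
        self.instances[instance_id] = {"engine": engine, "storage_gb": storage_gb}
        return instance_id

    def scale(self, instance_id, storage_gb):
        # resource elasticity: adjust resources as usage requirements change
        self.instances[instance_id]["storage_gb"] = storage_gb

cloud = DBaaS()
db = cloud.provision(engine="mysql", storage_gb=20)
cloud.scale(db, storage_gb=100)
print(cloud.instances[db]["storage_gb"])  # 100
```

The sketch captures the two operations the section emphasizes: provisioning without governance hurdles, and changing resources without touching hardware.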
Q.19 Explain in detail Process as a Service.
Answer:

Business Process as a Service (BPaaS) is any type of horizontal or vertical business process
that is delivered based on the cloud service model.
It builds on the other cloud services, which include Software as a Service (SaaS), Platform
as a Service (PaaS), and Infrastructure as a Service (IaaS).
These business processes can really be any service that can be automated, including
managing email, shipping a package, or managing customer credit.

BPaaS keeps companies in lockstep with industry best practices and technology
advancements. Companies can also easily increase service levels during peak periods and
bring new products and services to market faster with BPaaS's unique operating flexibility
and agility.

Fig. Business process as a service

Characteristics:

1. A BPaaS is configurable based on the process being designed.

2. A BPaaS service must have well-defined APIs so it can be easily connected to related
services.

3. A BPaaS must be able to support multiple languages and multiple development
environments.

4. A BPaaS environment must be able to handle massive scaling.

BPaaS Offers Many Business Benefits, Including:

1) Product/Service Deliverability:

From managing inventory to organizing email and customer records, BPaaS helps
companies facilitate the delivery of products and services in an automated, streamlined
way with help of cloud technologies. BPaaS is standardized for use across industries and
organizations, so it's flexible and repeatable, resulting in higher efficiency and, ultimately,
better service and experience for customers.

2) Cutting Edge at Reduced Cost:

BPaaS provides a business with the latest digital tools, technologies, processes and talent
to improve its efficiency, service and the customer experience, without the large capital
investment traditionally required. By implementing BPaaS, companies can shift to a pay-
per-use (OPEX) consumption model and reduce total cost of ownership.

3) Accommodates Fluctuating Business Needs:

BPaaS utility can scale on demand when a company experiences a peak workload. Due to
its innate configurability across multiple business areas, and its interaction with other
foundational cloud services such as SaaS, the service can make use of its cloud foundation
to scale to accommodate large fluctuations in business process needs.

Any business process (for example, payroll, printing, ecommerce) delivered as a service
over the Internet and accessible via one or more web-enabled interfaces (PCs, smart devices
and phones) can be considered a BPaaS.
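The characteristics above, particularly the requirement for well-defined APIs, can be sketched as a tiny process registry. The class, the process names and the payloads are all hypothetical:

```python
class BPaaS:
    """Toy BPaaS registry: every business process is reached through the
    same well-defined entry point, so related services connect uniformly."""

    def __init__(self):
        self._processes = {}  # process name -> callable handler

    def register(self, name, handler):
        # any automatable process (payroll, shipping, email, ...) plugs in
        self._processes[name] = handler

    def invoke(self, name, payload):
        # one uniform API call, regardless of which process runs behind it
        return self._processes[name](payload)

bpaas = BPaaS()
bpaas.register("payroll", lambda p: {"paid": p["employees"] * p["salary"]})
result = bpaas.invoke("payroll", {"employees": 3, "salary": 1000})
print(result)  # {'paid': 3000}
```

Because every process sits behind the same `invoke` interface, new processes can be added or scaled without changing how callers connect, which is the flexibility the section describes.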
Q.20 Explain in detail Information as a Service.
Answer:

Information as a Service is an emerging cloud business model in which a company shares
or sells relevant information to another company or to individuals to support their business.

Information as a Service (IaaS) is a proven approach to productive service delivery.
Treating data and information as separate from the procedures that use them is one key
part.

This relatively low-effort approach can enable departments to maximize service quality and
provides cost savings. It is also an important advancement if cloud computing is being
targeted and considered.

Information as a Service is a service which supplies (serves)
knowledge/data/information in one way or another.

Information as a Service is a developing cloud business model in which a corporation
shares or offers relevant information to another company or to individuals to carry out
their business models, including technology start-ups. Its clients either do not want to, or
do not have the resources to, process and analyze data.

Information as a Service centers on providing insights based on the analysis of processed
data. In this case the client's job to be done is more about reaching their own decisions or
even "selling" an idea based on certain information. Such clients will exchange value for
analysis from trusted parties. The Information-as-a-Service business model is all about
transforming data into information for clients who need something, and will pay for
something, increasingly tailored.

Following are general examples of Information as a Service:

1. Zip code or address validation and lookup

2. Payment processing

These are services that validate or complete data.
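The first example above, ZIP code validation, can be sketched as a minimal service function. This assumes the US five-digit ZIP (optionally ZIP+4) format:

```python
import re

# US ZIP: five digits, optionally followed by a hyphen and four digits (ZIP+4)
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def validate_zip(code):
    """Information-as-a-service style call: raw data in, validated answer out."""
    return bool(ZIP_RE.match(code))

print(validate_zip("90210"))       # True
print(validate_zip("90210-1234"))  # True
print(validate_zip("9021"))        # False
```

In a real offering this check would sit behind a paid API endpoint; the value sold is the validated answer, not the raw data.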

Characteristics:

1) Accuracy

2) Completeness

3) Cost Effective
4) Relevance

5) Easily Understood

Fig. Information as a service

Advantages and Importance:

Through a lifecycle approach, IaaS can help a business capture, organize, integrate,
transform, analyze, and use information inside an SOA domain. Information as a service
encompasses a range of software, services and solutions to address the appropriate starting
point for the business:

1) Customer control for consent

Users control their identity and must agree to the use of their information. For example, a
global free forum of information security leaders has supplemented its contribution on how
to collaborate on Information as a Service safely within the cloud to facilitate the needs of
customers.

2) Minimal disclosure

The minimal amount of information should be disclosed for a planned use. An
information-as-a-service strategy is about making data or information easier to consume
throughout the organization.
3) Justifiable access

Only entities that have a supported use of the information contained in a digital identity,
and that have a trusted relationship with the owner of the information, may be offered
access to that information.

Q.21 Explain in detail intigration as a service ?

Integration as a Service (IaaS)

Integration as a Service (IaaS) is a cloud-based delivery model that strives to connect on-
premise data with data located in cloud-based applications. This paradigm facilitates real-
time exchange of data and programs among enterprise-wide systems and trading partners.

In business-to-business (B2B) integration, IaaS allows partners to develop, maintain and
manage custom integrations for diverse systems and applications in the cloud. In this way,
the enterprise can more effectively pursue process innovations without the need to
constantly modify and maintain diverse and often incompatible application programs.

IaaS vendors will typically provide infrastructure, such as servers, along with middleware.
Vendors will also commonly supply tools for customers to build, test, deploy and manage
cloud applications. Payment is typically available in the form of a 'pay as you go' model, so
users can readily scale their environments up or down. Most IaaS vendors will also share a
multi-tenant setup.

Customers of an IaaS service will typically interact with their data via a web-based
interface, which interconnects backend data, systems and files with other data, applications
and systems in other locations. IaaS also removes system and data interdependencies
through this process.
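The interconnection of backend on-premise data with cloud data described above can be sketched as a simple merge keyed by record id. The field names (`erp_balance`, `crm_stage`) are made up for illustration:

```python
def integrate(on_premise, cloud):
    """Toy integration step: merge on-premise records with cloud records
    that share the same customer id, yielding one unified view."""
    merged = {}
    for rec in on_premise + cloud:
        # records from either side contribute fields to the same entry
        merged.setdefault(rec["id"], {}).update(rec)
    return merged

on_prem = [{"id": 1, "name": "Acme", "erp_balance": 250}]
cloud = [{"id": 1, "crm_stage": "renewal"}, {"id": 2, "crm_stage": "lead"}]
print(integrate(on_prem, cloud)[1])
# {'id': 1, 'name': 'Acme', 'erp_balance': 250, 'crm_stage': 'renewal'}
```

A real integration platform would do this continuously and bidirectionally between live systems; the sketch only shows the unification of two record sources into one view.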

Uses of IaaS

IaaS is commonly used in small and medium-sized businesses since it facilitates low-cost,
efficient and reliable B2B integration. IaaS allows enterprises of modest size to spend more
of their valuable resources on the products and services that directly benefit customers. In
addition, IaaS can streamline infrastructure management (IM) by minimizing the amount of
unnecessary or redundant time and energy spent on it.
Organizations can utilize IaaS to:

Store data, set up backups and perform recovery.

Run and host websites and web applications.

Develop and test applications through the tools some providers may offer.

Facilitate data analytics, which aids organizations in managing large datasets.

As an example of IaaS in use, the New York Times archived much of their historical data in
less than two days using an IaaS system developed by Amazon called Elastic Compute
Cloud (EC2). Without the assistance of EC2 or a similar IaaS platform, the same process
would probably have taken weeks.

Benefits of IaaS

An organization that hosts an IaaS platform can benefit through:

A consistent architecture that is created through connecting applications and resources, both
in cloud and on-premise, in one interface.

Reduced cost by allowing an organization to avoid management of an on-premise data
center.

The data center infrastructure is handled by the service provider for the organization.

The organization does not have to worry about software or hardware upgrades since the
service provider handles both.

Startups do not have to pay the initial cost of buying, building and managing an extensive
infrastructure.

Users pay for what they use.

Services are scalable.

Some IaaS providers will support 24/7/365 monitoring.


Q.22 Explain in detail testing as a service ?

 Testing as a Service (TaaS)

TaaS, meaning Testing as a Service, is an outsourcing model in which software testing is
carried out by a third-party service provider rather than employees of the organization. In
TaaS, testing is done by a service provider that specializes in simulating real-world testing
environments and finding bugs in the software product.

 TaaS is used when

 A company lacks the skills or resources to carry out testing internally
 It does not want the in-house developers to influence the results of the testing
process (which they could if done internally)
 It wants to save on cost
 It wants to increase the speed of test execution and reduce software development time.

 Types of TaaS
 Functional Testing as a Service: TaaS functional testing may include UI/GUI
testing, regression, integration and automated User Acceptance Testing (UAT),
though these are not all required parts of functional testing
 Performance Testing as a Service: Multiple users access the application at
the same time. TaaS mimics a real-world user environment by creating virtual
users and performing load and stress tests
 Security Testing as a Service: TaaS scans the applications and websites for any
vulnerability

 Key TaaS Features


 Software Testing as a Service over Cloud

Once user scenarios are created, and the test is designed, these service providers deliver
servers to generate virtual traffic across the globe.

 In the Cloud, software testing occurs in the following steps

1. Develop user scenarios
2. Design test cases
3. Select a cloud service provider
4. Set up infrastructure
5. Leverage cloud services
6. Start testing
7. Monitor goals
8. Deliver
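The steps above can be condensed into a minimal TaaS-style runner: externally supplied test cases are executed against the customer's target and a pass/fail report is returned. The case names and the toy target are invented for illustration:

```python
def run_taas(test_cases, target):
    """Execute externally supplied test cases against a target and report
    results, the way a TaaS provider tests on the customer's behalf."""
    results = []
    for name, args, expected in test_cases:
        try:
            ok = target(*args) == expected
        except Exception:
            ok = False  # a crash counts as a failed case, not a crashed run
        results.append((name, "pass" if ok else "fail"))
    return results

# the "system under test" and its cases stand in for a real application
cases = [("adds", (2, 3), 5), ("handles zero", (0, 0), 0), ("wrong", (1, 1), 3)]
print(run_taas(cases, lambda a, b: a + b))
# [('adds', 'pass'), ('handles zero', 'pass'), ('wrong', 'fail')]
```

Because the test cases, the runner and the report are all outside the application, in-house developers cannot influence the results, which is one of the motivations for TaaS listed earlier.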
 When to use TaaS

TaaS is useful when

 Testing of applications that require extensive automation and with short test
execution cycle.
 Performing a testing task that doesn‘t ask for in-depth knowledge of the design or
the system
 For ad-hoc or irregular testing activities that require extensive resources.
 Benefits of Cloud Testing
 Flexible Test Execution and Test Assets
 Some users claim 40-60% savings in the cloud testing vs. the traditional testing
model
 Achieve a fast return on investment by eliminating the investments made in
hardware procurement, management and maintenance, software licensing, etc.
 Deliver product in quicker time through rapid procurement, project set-up, and
execution
 Ensure data integrity and anytime anywhere accessibility
 Reduce operational costs, maintenance costs, and investments
 Pay as you use.
Q.23 State the concept of scaling of cloud infrastructure .

Cloud scalability in cloud computing refers to the ability to increase or
decrease IT resources as needed to meet changing demand. Scalability is one
of the hallmarks of the cloud and the primary driver of its exploding popularity
with businesses.

Data storage capacity, processing power and networking can all be scaled
using existing cloud computing infrastructure. Better yet, scaling can be done
quickly and easily, typically with little to no disruption or down time. Third-
party cloud providers have all the infrastructure already in place; in the past,
when scaling with on-premises physical infrastructure, the process could take
weeks or months and require tremendous expense.

A system's scalability, as described above, refers to its ability to increase
workload with existing hardware resources. This is one of the most popular
and beneficial features of cloud computing, as businesses can scale up or
down to meet demand depending on the season, projects, development, etc.

By implementing cloud scalability, you enable your resources to grow as
your traffic or organization grows, and vice versa. There are a few main
ways to scale in the cloud: vertical scaling (adding more power to an
existing machine), horizontal scaling (adding more machines), and diagonal
scaling (a combination of the two).

If your business needs more data storage capacity or processing power,
you'll want a system that scales easily and quickly.
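Scaling resources up or down to track demand can be sketched as a simple proportional rule of the kind many autoscalers use. The target load, limits and instance counts here are illustrative, not any provider's defaults:

```python
def desired_instances(current, load_per_instance, target_load=0.6,
                      minimum=1, maximum=10):
    """Return how many instances to run so that average load per
    instance approaches the target (a classic proportional rule)."""
    wanted = round(current * load_per_instance / target_load)
    # never scale below the floor or above the configured ceiling
    return max(minimum, min(maximum, wanted))

print(desired_instances(current=4, load_per_instance=0.9))  # 6: scale out
print(desired_instances(current=4, load_per_instance=0.3))  # 2: scale in
```

The same rule handles both directions: overloaded instances produce a larger desired count, underused instances a smaller one, which is the "grow and vice versa" behavior described above.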

Benefits of cloud scalability

The major cloud scalability benefits are driving cloud adoption for businesses
large and small:

 Convenience: Often with just a few clicks, IT administrators can
easily add more VMs that are available without delay, customized to
the exact needs of an organization. That saves precious time for IT
staff: instead of spending hours and days setting up physical
hardware, teams can focus on other tasks.
 Flexibility and speed: As business needs change and grow—
including unexpected spikes in demand—cloud scalability allows IT
to respond quickly. Today, even smaller businesses have access to
high-powered resources that used to be cost prohibitive. No longer
are companies tied down by obsolete equipment—they can update
systems and increase power and storage with ease.
 Cost savings: Thanks to cloud scalability, businesses can avoid the
upfront costs of purchasing expensive equipment that could become
outdated in a few years. Through cloud providers, they pay for only
what they use and minimize waste.
 Disaster recovery: With scalable cloud computing, you can
reduce disaster recovery costs by eliminating the need for building
and maintaining secondary data centers.

Q.24 State the concept of disaster recovery in terms of disaster recovery
planning, disasters in the cloud, and disaster recovery management.

Definition - What does Cloud Disaster Recovery mean?


Cloud disaster recovery is a service that enables the backup and recovery of remote
machines on a cloud-based platform. Cloud disaster recovery is primarily an infrastructure
as a service (IaaS) solution that backs up designated system data on a remote offsite cloud
server. It provides updated recovery point objective (RPO) and recovery time objective
(RTO) in case of a disaster or system restore
.
Disaster recovery planning
Disaster recovery is generally a planning process, and it produces a document which helps
businesses to handle critical events that affect their activities. Such events can be natural
disasters (earthquakes, floods, etc.), cyber-attacks or hardware failures like servers or
routers.
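The recovery point objective (RPO) from the definition above can be checked with a small helper: the age of the newest backup at the moment disaster strikes must not exceed the tolerable data loss. The timestamps below are illustrative:

```python
from datetime import datetime, timedelta

def rpo_met(last_backup, disaster_time, rpo):
    """RPO check: data loss equals the age of the newest backup at the
    moment of the disaster; it must not exceed the stated objective."""
    return disaster_time - last_backup <= rpo

backup = datetime(2024, 1, 1, 2, 0)     # nightly backup taken at 02:00
disaster = datetime(2024, 1, 1, 9, 30)  # failure occurs at 09:30
print(rpo_met(backup, disaster, timedelta(hours=8)))  # True: 7.5 h lost
print(rpo_met(backup, disaster, timedelta(hours=4)))  # False
```

The same reasoning applies to RTO, except the quantity measured is the time from the disaster until service is restored rather than the age of the last backup.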

Requirements to Have a Disaster Recovery Plan:

Disaster recovery starts with an inventory of all assets like computers, network equipment,
servers, etc., and it is recommended to register them by serial numbers too. We should make
an inventory of all the software and prioritize it according to business importance.
An example is shown in the following table:

System | Down Time | Disaster Type | Prevention | Solution Strategy | Recover Fully
Payroll Server | 8 hours | Server damaged | We take backups daily in the Backup Server | Restore the backup data | Fix the primary server and restore up-to-date data
You should prepare a list of all contacts of your partners and service providers, like ISP
contact and data, licenses that you have purchased and where they were purchased.
Document all your network, which should include IP schemas, usernames and passwords
of servers.
Preventive steps to be taken for Disaster Recovery

 The server room should have an authorized access level. For example, only IT
personnel should enter at any given point of time.

 In the server room there should be a fire alarm, humidity sensor, flood sensor and a
temperature sensor.

 At the server level, RAID systems should always be used and there should always be
a spare Hard Disk in the server room.

 You should have backups in place, this is generally recommended for local and off-
site backup, so a NAS should be in your server room.

 Backup should be done periodically.

 Connectivity to the internet is another issue, and it is recommended that the
headquarters have one or more internet lines: one primary and one secondary,
with a device that offers redundancy.

 If you are an enterprise, you should have a disaster recovery site which generally is
located out of the city of the main site. The main purpose is to be as a stand-by as
in any case of a disaster, it replicates and backs up the data.

 Disasters in the cloud


Disasters can be the result of three broad categories of threats and hazards.
The first category is natural hazards that include acts of nature such as floods, hurricanes,
tornadoes, earthquakes, and epidemics.

The second category is technological hazards that include accidents or the failures of
systems and structures such as pipeline explosions, transportation accidents, utility
disruptions, dam failures, and accidental hazardous material releases.

The third category is human-caused threats that include intentional acts such as active
assailant attacks, chemical or biological attacks, cyber-attacks against data or
infrastructure, and sabotage.
DISASTER RECOVERY MANAGEMENT

Disaster Recovery Management consists of some important components such as:


Disaster Recovery Plan
Business Continuity Plan
Business Impact Analysis
Recovery Time Objective
Recovery Point Objective

Q.25 State the framework overview of Aneka.

Aneka is a platform and a framework for developing distributed applications on the Cloud.
It harnesses the spare CPU cycles of a heterogeneous network of desktop PCs and servers or
datacenters on demand. Aneka provides developers with a rich set of APIs for transparently
exploiting such resources and expressing the business logic of applications by using the
preferred programming abstractions. System administrators can leverage on a collection of
tools to monitor and control the deployed infrastructure. This can be a public cloud available
to anyone through the Internet, or a private cloud constituted by a set of nodes with
restricted access.

The Aneka based computing cloud is a collection of physical and virtualized resources
connected through a network, which are either the Internet or a private intranet. Each of
these resources hosts an instance of the Aneka Container representing the runtime
environment where the distributed applications are executed. The container provides the
basic management features of the single node and leverages all the other operations on the
services that it is hosting. The services are broken up into fabric, foundation, and execution
services. Fabric services directly interact with the node through the Platform Abstraction
Layer (PAL) and perform hardware profiling and dynamic resource provisioning.
Foundation services identify the core system of the Aneka middleware, providing a set of
basic features to enable Aneka containers to perform specialized and specific sets of tasks.
Execution services directly deal with the scheduling and execution of applications in the
Cloud.
One of the key features of Aneka is the ability of providing different ways for expressing
distributed applications by offering different programming models; execution services are
mostly concerned with providing the middleware with an implementation for these models.
Additional services such as persistence and security are transversal to the entire stack of
services that are hosted by the Container. At the application level, a set of different
components and tools are provided to: 1) simplify the development of applications (SDK);
2) porting existing applications to the Cloud; and 3) monitoring and managing the Aneka
Cloud.

A common deployment of Aneka is presented at the side. An Aneka-based Cloud is
constituted by a set of interconnected resources that are dynamically modified according to
the user needs by using resource virtualization or by harnessing the spare CPU cycles of
desktop machines. If the deployment identifies a private Cloud, all the resources are in
house, for example within the enterprise. This deployment is extended by adding publicly
available resources on demand or by interacting with other Aneka public clouds providing
computing resources connected over the Internet.
Q.26 Explain in detail the Platform Abstraction Layer of Aneka
The core infrastructure of the system is based on the .NET technology and allows the Aneka
container to be portable over different platforms and operating systems. Any platform
featuring an ECMA-334 and ECMA-335 compatible environment can host and run an
instance of the Aneka container.
The Common Language Infrastructure (CLI), which is the specification introduced in the
ECMA-335 standard, defines a common runtime environment and application model for
executing programs but does not provide any interface to access the hardware or to collect
performance data from the hosting operating system. Moreover, each operating system has a
different file-system organization and stores that information differently. The Platform
Abstraction Layer (PAL) addresses this heterogeneity and provides the container with a
uniform interface for accessing the relevant hardware and operating system information,
thus allowing the rest of the container to run unmodified on any supported platform.
The PAL is responsible for detecting the supported hosting environment and providing the
corresponding implementation to interact with it to support the activity of the container. The
PAL provides the following features:
 Uniform and platform-independent implementation interface for accessing the
hosting platform
 Uniform access to extended and additional properties of the hosting platform
 Uniform and platform-independent access to remote nodes
 Uniform and platform-independent management interfaces
The PAL is a small layer of software that comprises a detection engine, which automatically
configures the container at boot time, with the platform-specific component to access the
above information and an implementation of the abstraction layer for the Windows, Linux,
and Mac OS X operating systems.
The collectible data that are exposed by the PAL are the following:
 Number of cores, frequency, and CPU usage
 Memory size and usage
 Aggregate available disk space
 Network addresses and devices attached to the node
Moreover, additional custom information can be retrieved by querying the properties of the
hardware. The PAL interface provides means for custom implementations to pull additional
information by using name-value pairs that can host any kind of information about the
hosting platform. For example, these properties can contain additional information about the
processor, such as the model and family, or additional data about the process running the
container.
Fig. Platform abstraction layer of Aneka
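The PAL's uniform, name-value view of the hosting platform can be sketched in Python; the property names below are invented for illustration and are not Aneka's actual interface:

```python
import os
import platform

def pal_properties():
    """Uniform name-value snapshot of the hosting platform, hiding the
    per-OS details, in the spirit of Aneka's PAL (names are made up)."""
    return {
        "os.name": platform.system(),   # e.g. Windows, Linux, Darwin
        "cpu.cores": os.cpu_count(),    # number of logical cores
        "machine": platform.machine(),  # hardware architecture string
    }

props = pal_properties()
print(sorted(props))  # ['cpu.cores', 'machine', 'os.name']
```

The calling code never branches on the operating system; the platform-specific probing is hidden behind one interface, which is exactly the role the PAL plays for the Aneka container.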
Q.27 Explain in detail fabric services of Aneka?
Fabric Services:
Fabric Services define the lowest level of the software stack representing the Aneka Container. They
provide access to the resource-provisioning subsystem and to the monitoring facilities
implemented in Aneka. Resource-provisioning services are in charge of dynamically
providing new nodes on demand by relying on virtualization technologies, while
monitoring services allow for hardware profiling and implement a basic monitoring
infrastructure that can be used by all the services installed in the container.
Profiling and monitoring:
Profiling and monitoring services are mostly exposed through the Heartbeat, Monitoring,
and Reporting Services. The first makes available the information that is collected through
the PAL; the other two implement a generic infrastructure for monitoring the activity of any
service in the Aneka Cloud.
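The heartbeat idea behind these monitoring services can be sketched as follows; the timeout value and node names are illustrative, and times are plain numbers rather than real clocks:

```python
class HeartbeatMonitor:
    """Toy version of the heartbeat idea: nodes report in periodically,
    and nodes silent for longer than the timeout are considered down."""

    def __init__(self, timeout=30):
        self.timeout = timeout
        self.last_seen = {}  # node name -> time of last heartbeat

    def beat(self, node, now):
        # each container-hosted service would send this periodically
        self.last_seen[node] = now

    def alive(self, now):
        # a node is alive if its last heartbeat is within the timeout window
        return [n for n, t in self.last_seen.items() if now - t <= self.timeout]

mon = HeartbeatMonitor(timeout=30)
mon.beat("node-a", now=0)
mon.beat("node-b", now=20)
print(mon.alive(now=40))  # ['node-b']  (node-a missed its window)
```

In Aneka the information carried by each heartbeat comes from the PAL (CPU, memory, disk), so the same mechanism also feeds hardware profiling, not just liveness detection.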
Resource management:
Resource management is another fundamental feature of Aneka Clouds. It comprises
several tasks: resource membership, resource reservation, and resource provisioning. Aneka
provides a collection of services that are in charge of managing resources. These are the
Index Service (or Membership Catalogue), the Reservation Service, and the Resource
Provisioning Service. Aneka includes an extensible set of APIs associated with
programming models like MapReduce. These APIs support different cloud models, such as
private, public and hybrid clouds. Manjrasoft focuses on creating innovative software
technologies to simplify the development and deployment of private or public cloud
applications. Its product plays the role of an application platform as a service for multiple
cloud computing environments.

Multiple Structures:
Aneka is a software platform for developing cloud computing applications; in Aneka, cloud
applications are executed in containers. Fabric Services define the lowest level of the
software stack that represents the Aneka Container. They provide access to the resource-
provisioning subsystems and to the monitoring features implemented in Aneka. Fabric
Services are the core services of the Aneka Cloud and define the infrastructure management
features of the system. Foundation services are concerned with the logical management of a
distributed system built on top of the infrastructure and provide ancillary services for
delivering applications.
Application services manage the execution of applications and constitute a layer that varies
according to the specific programming model used to develop distributed applications in
Aneka. There are mainly two major component technologies in Aneka: the SDK (Software
Development Kit), which includes the Application Programming Interface (API) and tools
needed for the rapid development of applications (the Aneka API supports three popular
cloud programming models: Tasks, Threads and MapReduce), and a runtime engine and
platform for managing the deployment and execution of applications on a private or public
cloud. One of the notable features of the Aneka PaaS is its support for provisioning private
cloud resources from desktops, clusters, or a virtual data center using VMware or Citrix
XenServer, and public cloud resources such as Windows Azure, Amazon EC2, and the
GoGrid cloud service.
Aneka's potential as a Platform as a Service has been successfully harnessed by its users
and customers in several areas, including engineering, life sciences, education, and business
intelligence. An Aneka-based computing cloud is a collection of physical and virtualized
resources connected via a network, either the Internet or a private intranet. Each resource
hosts an instance of the Aneka Container that represents the runtime environment where
distributed applications are executed. The container provides the basic management
features of a single node and builds all the other functions on the services it hosts.
Services are divided into fabric, foundation, and execution services.
Foundation services identify the core system of the Aneka middleware, providing a set of
basic features that enable Aneka containers to perform specialized and specific sets of
tasks. Fabric services interact directly with nodes through the Platform Abstraction Layer
(PAL) and perform hardware profiling and dynamic resource provisioning. Execution
services deal directly with scheduling and executing applications in the Cloud. One of the
key features of Aneka is its ability to provide a variety of ways to express distributed
applications by offering different programming models; execution services are mostly
concerned with providing the middleware with an implementation of these models.
Additional services such as persistence and security are transversal to the whole stack of
services hosted by the container.

Q.28 Explain in detail Foundation Services of Aneka?

Fabric Services are fundamental services of the Aneka Cloud
and define the basic infrastructure management features of the system. Foundation Services
are related to the logical management of the distributed system built on top of the
infrastructure and provide supporting services for the execution of distributed applications.
All the supported programming models can integrate with and leverage these services to
provide advanced and comprehensive application management. These services cover:
• Storage management for applications
• Accounting, billing, and resource pricing
• Resource reservation
Foundation Services provide a uniform approach to managing distributed applications
and allow developers to concentrate only on the logic that distinguishes a specific
programming model from the others. Together with the Fabric Services, Foundation
Services constitute the core of the Aneka middleware. These services are mostly consumed
by the execution services and Management Consoles. External applications can leverage the
exposed capabilities for providing advanced application management.
1.Storage management:
Data management is an important aspect of any distributed system, even in computing
clouds. Applications operate on data, which are mostly persisted and moved in the format
of files. Hence, any infrastructure that supports the execution of distributed applications
needs to provide facilities for file/data transfer management and persistent storage. Aneka
offers two different facilities for storage management: a centralized file storage, which is
mostly used for the execution of compute-intensive applications, and a distributed file
system, which is more suitable for the execution of data-intensive applications.
The requirements for the two types of applications are rather different. Compute-intensive
applications mostly require powerful processors and do not have high demands in terms of
storage, which in many cases is used to store small files that are easily transferred from one
node to another. In this scenario, a centralized storage node, or a pool of storage nodes, can
constitute an appropriate solution. In contrast, data-intensive applications are characterized
by large data files (gigabytes or terabytes), and the processing power required by tasks does
not constitute a performance bottleneck. In this scenario, a distributed file system
harnessing the storage space of all the nodes belonging to the cloud might be a better and
more scalable solution.
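As a rough illustration of this trade-off, the storage choice can be sketched as a small decision function (the function name and the 1 GB threshold are hypothetical, not part of the Aneka API):

```python
def choose_storage(avg_file_size_gb: float, compute_bound: bool) -> str:
    """Toy policy mirroring the text: compute-intensive jobs with small,
    easily transferred files use a centralized storage node; data-intensive
    jobs with large files (GB/TB scale) favor a distributed file system."""
    if compute_bound and avg_file_size_gb < 1.0:
        return "centralized"
    return "distributed"

print(choose_storage(0.01, compute_bound=True))    # small files, CPU-bound
print(choose_storage(500.0, compute_bound=False))  # terabyte-scale data
```
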
2. Accounting, billing, and resource pricing:
Accounting Services keep track of the status of applications in the Aneka Cloud. The
collected information provides a detailed breakdown of the distributed infrastructure usage
and is vital for the proper management of resources. The information collected for
accounting is primarily related to infrastructure usage and application execution. A
complete history of application execution and storage, as well as other resource
utilization parameters, is captured and maintained by the Accounting Services. This
information constitutes the foundation on which users are charged in Aneka. Billing is
another important feature of accounting. Aneka is a multitenant cloud programming
platform in which the execution of applications can involve provisioning additional
resources from commercial IaaS providers. Aneka Billing Service provides detailed
information about each user's usage of resources, with the associated costs.
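The accounting-then-billing flow described above can be sketched in a few lines (a toy ledger with an assumed flat per-CPU-hour rate; the real Aneka Accounting and Billing Services are far richer):

```python
from dataclasses import dataclass, field

@dataclass
class AccountingService:
    """Toy multitenant accounting/billing ledger: record per-user resource
    usage, then charge according to consumption (illustrative only)."""
    rate_per_cpu_hour: float
    usage: dict = field(default_factory=dict)  # user -> accumulated CPU-hours

    def record(self, user: str, cpu_hours: float) -> None:
        # accounting: keep track of infrastructure usage per user
        self.usage[user] = self.usage.get(user, 0.0) + cpu_hours

    def bill(self, user: str) -> float:
        # billing: users are charged based on their recorded consumption
        return self.usage.get(user, 0.0) * self.rate_per_cpu_hour

acct = AccountingService(rate_per_cpu_hour=0.5)
acct.record("alice", 12.0)
acct.record("alice", 8.0)
print(acct.bill("alice"))  # 20 CPU-hours at 0.5 -> 10.0
```
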
3.Resource reservation:
Aneka‘s Resource Reservation supports the execution of distributed applications and
allows for reserving resources for exclusive use by specific applications. Resource
reservation is built out of two different kinds of services: Resource Reservation and the
Allocation Service. Resource Reservation keeps track of all the reserved time slots in the
Aneka Cloud and provides a unified view of the system. The Allocation Service is installed
on each node that features execution services and manages the database of information
regarding the allocated slots on the local node. Applications that need to complete within a
given deadline can make a reservation request for a specific number of nodes in a given
timeframe.
If it is possible to satisfy the request, the Reservation Service will return a reservation
identifier as proof of the resource booking. During application execution, such an identifier
is used to select the nodes that have been reserved, and they will be used to execute the
application. On each reserved node, the execution services will check with the Allocation
Service that each job has valid permissions to occupy the execution timeline by verifying
the reservation identifier. Even though this is the general reference model for the
reservation infrastructure, Aneka allows for different implementations of the service, which
mostly vary in the protocol that is used to reserve resources or the parameters that can be
specified while making a reservation request.
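The reservation flow (request a number of nodes, receive an identifier as proof of booking, later verify it) can be sketched as follows; the class and method names are hypothetical stand-ins for the Reservation and Allocation Services:

```python
import uuid

class ReservationService:
    """Minimal sketch of the reservation flow: a request for n nodes in a
    timeframe returns a reservation identifier, which execution services
    later verify before occupying the execution timeline."""
    def __init__(self, total_nodes: int):
        self.free_nodes = total_nodes
        self.reservations = {}

    def reserve(self, nodes: int, timeframe: tuple):
        if nodes > self.free_nodes:
            return None  # the request cannot be satisfied
        rid = str(uuid.uuid4())  # proof of the resource booking
        self.free_nodes -= nodes
        self.reservations[rid] = (nodes, timeframe)
        return rid

    def is_valid(self, rid: str) -> bool:
        # Allocation Service check: does the job hold a valid booking?
        return rid in self.reservations

svc = ReservationService(total_nodes=4)
rid = svc.reserve(2, ("09:00", "11:00"))
print(svc.is_valid(rid))                    # True: booking exists
print(svc.reserve(10, ("09:00", "10:00")))  # None: not enough nodes
```
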

Q.29 Explain in detail Application services of Aneka?

 Application services manage the execution of applications and constitute a layer
that differentiates according to the specific programming model used for developing
distributed applications on top of Aneka.
 The types and the number of services that compose this layer for each of the
programming models may vary according to the specific needs or features of the
selected model. It is possible to identify two major types of activities that are
common across all the supported models: Scheduling and Execution.

1. Scheduling
 Scheduling services are in charge of planning the execution of
distributed applications on top of Aneka and governing the allocation of
jobs composing an application to nodes. They also constitute the
integration point with several other Foundation and Fabric Services,
such as the Resource Provisioning Service, the Reservation Service, the
Accounting Service, and the Reporting Service.
 Common tasks that are performed by the scheduling component are the
following :
 Job to node mapping
 Rescheduling of failed jobs
 Jobs status monitoring
 Application status monitoring

 Aneka does not provide a centralized scheduling engine; each programming
model features its own scheduling service that needs to work in synergy with the
existing services of the middleware.
 The possibility of having different scheduling engines for different models gives
great freedom in implementing scheduling and resource-allocation strategies but, at
the same time, requires a careful design of the use of shared resources. In this
scenario, common situations that have to be appropriately managed are the
following: multiple jobs sent to the same node at the same time; jobs without
reservations sent to reserved nodes; jobs sent to nodes where the required
services are not installed. Aneka's Foundation Services provide sufficient
information to avoid these cases, but the runtime infrastructure does not feature
specific policies to detect these conditions and provide corrective actions.
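Two of the scheduling tasks listed above, job-to-node mapping and rescheduling of failed jobs, can be illustrated with a minimal sketch (round-robin mapping is an assumption made here; Aneka's actual schedulers are model-specific):

```python
class Scheduler:
    """Toy scheduler covering two common scheduling tasks from the text:
    job-to-node mapping and rescheduling of failed jobs (illustrative;
    each Aneka programming model ships its own scheduling service)."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.assignments = {}  # job -> node (job status tracking)

    def map_job(self, job):
        # job-to-node mapping: simple round-robin over available nodes
        node = self.nodes[len(self.assignments) % len(self.nodes)]
        self.assignments[job] = node
        return node

    def reschedule(self, job):
        # rescheduling of a failed job: move it to a different node
        failed_node = self.assignments[job]
        others = [n for n in self.nodes if n != failed_node]
        self.assignments[job] = others[0]
        return others[0]

s = Scheduler(["node-1", "node-2"])
print(s.map_job("job-A"))     # node-1
print(s.reschedule("job-A"))  # node-2 after a failure on node-1
```
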

2. Execution
 Execution services control the execution of the single jobs that compose
applications. They are in charge of setting up the runtime environment
hosting the execution of jobs. As happens for the scheduling services,
each programming model has its own requirements, but it is possible to
identify some common operations that apply across all the range of
supported models:
 Unpacking the jobs received from the scheduler
 Retrieval of input files required for job execution
 Sandboxed execution of jobs
 Submission of output files at the end of the execution
 Execution failure management (i.e., capturing sufficient
contextual information useful to identify the nature of the failure)
 Performance monitoring
 Packing jobs and sending them back to the scheduler
 Execution services constitute a more self-contained unit with respect to
the corresponding scheduling services. They handle less information and
are required to integrate themselves only with the Storage Service and the
local Allocation and Monitoring Services.
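The common execution-service steps above can be sketched as a single pipeline function (the callback names are hypothetical; "failure management" here simply captures the exception context, as the text describes):

```python
def execute_job(job, fetch_input, run_sandboxed, upload_output):
    """Sketch of the common execution-service steps: retrieve inputs,
    run in a sandbox, submit outputs, and capture failure context."""
    try:
        inputs = fetch_input(job)            # retrieval of input files
        result = run_sandboxed(job, inputs)  # sandboxed execution of the job
        upload_output(job, result)           # submission of output files
        return {"job": job, "status": "completed", "result": result}
    except Exception as exc:
        # failure management: keep enough context to identify the failure
        return {"job": job, "status": "failed", "reason": str(exc)}

def missing_input(job):
    # simulates a storage failure while fetching input files
    raise IOError("no input")

ok = execute_job("j1", lambda j: [1, 2], lambda j, i: sum(i), lambda j, r: None)
bad = execute_job("j2", missing_input, lambda j, i: i, lambda j, r: None)
print(ok["status"], bad["status"])  # completed failed
```
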
 Application services constitute the runtime support of the programming
model in the Aneka Cloud. Currently there are several supported
models:
 Task Model: This model provides support for independent
"bag of tasks" applications and many-task computing. In this
model, an application is modelled as a collection of tasks that
are independent from each other and whose execution can be
sequenced in any order.
 Thread Model: This model provides an extension to the classical
multithreaded programming to a distributed infrastructure and
uses the abstraction of Thread to wrap a method that is executed
remotely.
 MapReduce Model: This is an implementation of MapReduce as
proposed by Google on top of Aneka.
 Parameter Sweep Model: This model is a specialization of the Task Model
for applications that can be described by a template task whose instances are
created by generating different combinations of parameters, which identify a
specific point in the domain of interest.
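The Parameter Sweep Model's core idea, instantiating a template task over all parameter combinations, can be sketched as follows (function name and signature are illustrative, not Aneka's API):

```python
from itertools import product

def parameter_sweep(template_task, **param_ranges):
    """Sketch of the Parameter Sweep Model: instantiate a template task
    for every combination of parameter values; each instance identifies
    one point in the domain of interest and runs independently."""
    names = list(param_ranges)
    tasks = []
    for combo in product(*(param_ranges[n] for n in names)):
        tasks.append(template_task(**dict(zip(names, combo))))
    return tasks

# template task: each instance is an independent computation
tasks = parameter_sweep(lambda x, y: x * y, x=[1, 2, 3], y=[10, 20])
print(len(tasks))  # 3 x 2 = 6 independent task results
print(tasks)       # [10, 20, 20, 40, 30, 60]
```
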

Q.30 Explain the concept of infrastructural organization of Aneka.


Ans :- Aneka is primarily a platform for developing distributed applications for
clouds. As a software platform it requires infrastructure on which to be deployed;
this infrastructure needs to be managed.
Infrastructure management tools are specifically designed for this task, and building
clouds is one of the primary tasks of administrators. Aneka supports various
deployment models for public, private and hybrid clouds.

 Infrastructure Organization :
Below fig. provides an overview of an Aneka cloud from an infrastructure point of
view.

The scenario is a reference model for all the different deployments Aneka supports.

A central role is played by the administrative console, which relies on repositories
storing all the libraries required to lay out and install the basic Aneka platform.

These libraries constitute the software image for the node manager and the
container programs. Repositories can make libraries available through a variety of
communication channels, such as HTTP, FTP, common file sharing, and so on.

The system includes four key components, including Aneka Master, Aneka Worker,
Aneka Management Console, and Aneka Client Libraries. The Aneka Master and
Aneka Worker are both Aneka Containers, which represent the basic deployment
unit of Aneka-based Clouds.
The management console can manage multiple repositories and select the
one that best suits the specific deployment. The infrastructure is deployed by
harnessing a collection of nodes and installing on them the Aneka daemon.

The daemon constitutes the remote management service used to deploy and
control container instances. The collection of resulting containers identifies the
Aneka cloud.

From an infrastructure point of view, the management of physical and virtual nodes
is performed uniformly as long as it is possible to have an Internet connection and
remote administrative access to the node.

A different scenario is constituted by the dynamic provisioning of virtual instances;


these are generally created by pre-packaged images already containing an
installation of Aneka, which only need to be configured to join a specific Aneka
Cloud.

Q.31 Explain the concept of logical organization of Aneka.


The logical organization of Aneka clouds can be diverse, since it strongly depends on the
configuration selected for each of the container instances belonging to the cloud. The most
common scenario is to use a master-worker configuration with separate nodes for
storage. The master node features all the services that are most likely to be present in one
single copy and that provide the intelligence of the Aneka cloud.

The master node also provides connections to an RDBMS facility where the state of
several services is maintained. The worker nodes constitute the workforce of the Aneka
cloud and are generally configured for the execution of applications.
Q.32 Explain the concept private cloud deployment mode of Aneka.
A private deployment mode is mostly constituted by local physical resources and
infrastructure management software providing access to a local pool of nodes, which
might be virtualized.

Aneka includes an extensible set of APIs associated with programming models like
MapReduce.

It works as your virtual computing environment with a choice of deployment model,
what to store, and who has access to the infrastructure.

A private cloud refers to a cloud deployment model operated exclusively for a single
organization, whether it is physically located at the company’s onsite data center, or is
managed and hosted by a third-party provider.

This deployment is acceptable for a scenario in which the workload of the system is
predictable and local virtual machine manager can easily address excess capacity demand.
Most of the Aneka nodes are constituted of physical nodes with a long lifetime and a
static configuration and generally do not need to be reconfigured often.
Workstation clusters might have some specific legacy software that is required for
supporting the execution of applications and should be preferred for the execution of

applications with special requirements.

Fig. Private cloud deployment

Benefits of Private Cloud

 Data privacy – It is ideal for storing corporate data where only authorized personnel
gets access.
 Security – Segmentation of resources within the same infrastructure can help with
better access and higher levels of security.
 Supports Legacy Systems – This model supports legacy systems that can't access the
public cloud.
Limitations of Private Cloud

 Higher cost – With the benefits you get, the investment will also be larger than
with the public cloud. Here, you will pay for software, hardware, and resources for
staff and training.
 Fixed scalability – The hardware you choose will accordingly help you scale in a
certain direction.
Q.33 Explain the concept public cloud deployment mode of Aneka
Public Cloud deployment mode

The installation of Aneka master and worker nodes over a completely virtualized
infrastructure that is hosted on the infrastructure of one or more resource providers such
as Amazon EC2 or GoGrid. In this case it is possible to have a static deployment where the
nodes are provisioned beforehand and used as though they were real machines.

This deployment merely replicates a classic Aneka installation on a physical infrastructure


without any dynamic provisioning capability. More interesting is the use of the elastic
features of IaaS providers and the creation of a Cloud that is completely dynamic. Below
fig. provides an overview of this scenario.

The deployment is generally contained within the infrastructure boundaries of a single


IaaS provider. The reasons for this are to minimize the data transfer between different
providers, which is generally priced at a higher cost, and to have better network
performance. In this scenario it is possible to deploy an Aneka Cloud composed of only one
node and to completely leverage dynamic provisioning to elastically scale the
infrastructure on demand. A fundamental role is played by the Resource Provisioning
Service, which can be configured with different images and templates to instantiate. Other
important services that have to be included in the master node are the Accounting and
Reporting Services. These provide details about resource utilization by users and
applications and are fundamental in a multitenant Cloud where users are billed according
to their consumption of Cloud capabilities.

Dynamic instances provisioned on demand will mostly be configured as worker nodes,


and, in the specific case of Amazon EC2, different images featuring a different hardware
setup can be made available to instantiate worker containers. Applications with specific
requirements for computing capacity or memory can provide additional information to the
scheduler that will trigger the appropriate provisioning request. Application execution is
not the only use of dynamic instances; any service requiring elastic scaling can leverage
dynamic provisioning. Another example is the Storage Service. In multitenant Clouds,
multiple applications can leverage the support of storage; in this scenario it is then possible
to introduce bottlenecks or simply reach the quota limits allocated for storage on the node.

Dynamic provisioning can easily solve this issue as it does for increasing the computing
capability of an Aneka Cloud. Deployments using different providers are unlikely to
happen because of the data transfer costs among providers, but they might be a possible
scenario for federated Aneka Clouds . In this scenario resources can be shared or leased
among providers under specific agreements and more convenient prices. In this case the
specific policies installed in the Resource Provisioning Service can discriminate among
different resource providers, mapping different IaaS providers to provide the best solution
to a provisioning request.

Fig : public cloud deployment mode of Aneka


Q.34 Explain shortly cloud programming and Aneka SDK.

Cloud programming and management: Aneka's primary purpose is to provide a


scalable middleware product in which to execute distributed applications. Application
development and management constitute the two major features that are exposed to
developers and system administrators. To simplify these activities, Aneka provides
developers with a comprehensive and extensible set of APIs and administrators with
powerful and intuitive management tools. The APIs for development are mostly
concentrated in the Aneka SDK; management tools are exposed through the Management
Console.

Aneka SDK

Aneka provides APIs for developing applications on top of existing programming models,
implementing new programming models, and developing new services to integrate into
the Aneka Cloud. The development of applications mostly focuses on the use of existing
features and leveraging the services of the middleware, while the implementation of new
programming models or new services enriches the features of Aneka. The SDK provides
support for both programming models and services by means of the Application Model and
the Service Model. The former covers the development of applications and new
programming models; the latter defines the general infrastructure for service
development.

(1)Application model:

Aneka provides support for distributed execution in the Cloud with the abstraction of
programming models. A programming model identifies both the abstraction used by the
developers and the runtime support for the execution of programs on top of Aneka. The
Application Model represents the minimum set of APIs that is common to all the
programming models for representing and programming distributed applications on top
of Aneka. This model is further specialized according to the needs and the particular
features of each of the programming models.

An overview of the components that define the Aneka Application Model is shown in
Figure 5.8. Each distributed application running on top of Aneka is an instance of the
ApplicationBase&lt;M&gt; class, where M identifies the specific type of application manager
used to control the application. Application classes constitute the developers' view of a
distributed application on Aneka Clouds, whereas application managers are internal
components that interact with Aneka Clouds in order to monitor and control the
execution of the application.
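A rough Python analogue of the ApplicationBase&lt;M&gt; idea can clarify the relationship between the developer-facing application class and its manager (the real API is part of the .NET-based Aneka SDK; the manager class here is a made-up stand-in):

```python
from typing import Generic, TypeVar

M = TypeVar("M")  # the specific type of application manager

class ApplicationBase(Generic[M]):
    """Sketch of ApplicationBase<M>: the developer-facing application
    class is parameterized by the manager M, an internal component that
    monitors and controls the execution of the application."""
    def __init__(self, manager: M):
        self.manager = manager

    def submit(self):
        # the developer's view delegates to the manager, which interacts
        # with the Aneka Cloud on the application's behalf
        return self.manager.start_execution()

class TaskManager:
    """Hypothetical manager for a Task Model application."""
    def start_execution(self):
        return "tasks scheduled"

app = ApplicationBase[TaskManager](TaskManager())
print(app.submit())  # tasks scheduled
```
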
(2)Service model

The Aneka Service Model defines the basic requirements to implement a service that
can be hosted in an Aneka Cloud. The container defines the runtime environment in
which services are hosted. Each service that is hosted in the container must be compliant
with the IService interface, which exposes the following methods and properties:
• Name and status

• Control operations such as Start, Stop, Pause, and Continue methods

• Message handling by means of the HandleMessage method

Specific services can also provide clients if they are meant to directly interact with end
users. Examples of such services might be Resource Provisioning and Resource
Reservation Services, which ship their own clients for allowing resource provisioning and
reservation. Apart from control operations, which are used by the container to set up and
shut down the service during the container life cycle, the core logic of a service resides in
its message-processing functionalities that are contained in the HandleMessage method.
Each operation that is requested to a service is triggered by a specific message, and
results are communicated back to the caller by means of messages.

Figure 5.9 describes the reference life cycle of each service instance in the Aneka
container. The shaded balloons indicate transient states; the white balloons indicate
steady states. A service instance can initially be in the Unknown or Initialized state, a
condition that refers to the creation of the service instance by invoking its constructor
during the configuration of the container. Once the container is started, it will iteratively
call the Start method on each service method. As a result the service instance is expected
to be in a Starting state until the startup process is completed, after which it will exhibit
the Running state. In particular, the guidelines define a ServiceBase class that can be
further extended to provide a proper implementation. This class is the base class of
several services in the framework and provides some built-in features:

• Implementation of the basic properties exposed by IService

• Implementation of the control operations with logging capabilities and state control

• Built-in infrastructure for delivering a service specific client

• Support for service monitoring
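The IService contract and the Starting-to-Running transition described above can be mimicked in a short sketch (a Python stand-in for the actual .NET interface; method and state names follow the text):

```python
from abc import ABC, abstractmethod

class IService(ABC):
    """Sketch mirroring the IService contract: name and status, control
    operations, and message handling as the core service logic."""
    def __init__(self, name):
        self.name, self.status = name, "Initialized"

    def start(self):
        self.status = "Starting"   # transient state during startup
        self.on_start()            # service-specific startup work
        self.status = "Running"    # steady state once startup completes

    def stop(self):
        self.status = "Stopped"

    @abstractmethod
    def on_start(self): ...

    @abstractmethod
    def handle_message(self, message): ...

class EchoService(IService):
    def on_start(self):
        pass  # nothing to set up in this toy service

    def handle_message(self, message):
        # core logic lives in message processing; results go back as messages
        return {"reply_to": message["sender"], "body": message["body"]}

echo = EchoService("echo")
echo.start()
print(echo.status)                                                  # Running
print(echo.handle_message({"sender": "client", "body": "ping"})["body"])  # ping
```
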


Q.35 Explain in detail Management tools of aneka cloud.
Aneka is a pure PaaS implementation and requires virtual or physical hardware to be
deployed.

Hence, infrastructure management, together with facilities for installing logical clouds on
such infrastructure, is a fundamental feature of Aneka's management layer.

This layer also includes capabilities for managing services and applications running in the
Aneka Cloud.

Management tools of the Aneka cloud are as follows:

1. Infrastructure management

2. Platform management

3. Application management

1 Infrastructure management:-

Aneka leverages virtual and physical hardware in order to deploy Aneka Clouds.

Virtual hardware is generally managed by means of the Resource Provisioning Service,


which acquires resources on demand according to the need of applications, while physical
hardware is directly managed by the Administrative Console by leveraging the Aneka
management API of the PAL.

The management features are mostly concerned with the provisioning of physical hardware
and the remote installation of Aneka on the hardware.

2 Platform management :-

Infrastructure management provides the basic layer on top of which Aneka Clouds are
deployed.

The creation of Clouds is orchestrated by deploying a collection of services on the physical


infrastructure that allows the installation and the management of containers.

A collection of connected containers defines the platform on top of which applications are
executed.

The features available for platform management are mostly concerned with the logical
organization and structure of Aneka Clouds.

It is possible to partition the available hardware into several Clouds variably configured for
different purposes.
Services implement the core features of Aneka Clouds and the management layer exposes
operations for some of them, such as Cloud monitoring, resource provisioning and
reservation, user management, and application profiling.

3 Application management:-

Applications identify the user contribution to the Cloud.

The management APIs provide administrators with monitoring and profiling features that
help them track the usage of resources and relate them to users and applications.

Aneka exposes capabilities for giving summary and detailed information about application
execution and resource utilization.

Q. 36 Explain in detail healthcare cloud computing Application .


Cloud computing in healthcare is gaining more and more popularity, especially after the
COVID-19 pandemic.

The importance of cloud computing in healthcare is evident from the fact that the global
cloud computing market for the healthcare industry is expected to reach around
$25.25 billion by 2024.

Cloud computing is now a must-have technology for the healthcare industry to provide an
optimal patient-centred experience.

Cloud-based healthcare is the process of integrating cloud technology into healthcare
services for cost saving, easy data sharing, personalized medicine, telehealth apps, and
more benefits.

Many healthcare providers also use cloud-based systems for safe data storage, digital
backup and digital records retrieval, and better analysis and monitoring of data related to
diagnosis and treatment of different diseases.

Massive storage resources are available for large datasets such as EHRs (electronic health
records) and radiology images.

Cloud also provides on-demand access to computing resources, sharing of EHRs among
physicians and doctors of hospitals in different locations around the world, and enhanced
tracking of patients' healthcare data.

Applications of cloud computing in health care:

1) E-health and telemedicine :- Cloud computing is used for e-health, which refers to
providing healthcare services electronically through the Internet.

2) Drug discovery :- Drug discovery requires a large number of computing resources for
discovering compounds from billions of chemical structures.

3) Healthcare information systems :- Cloud-based managed information systems provide
improved patient care, manage human resources, and offer better querying services and
billing and financing.

4) Personal health records :- It is managing access to Personal Health Records (PHR) and
Electronic Health Records (EHR).

5) Clinical Decision Support System (CDSS) :-

CDSS uses the knowledge and behaviour of a medical professional to provide advice based
on analysis of the patient's records.

Q.37 Explain in detail Geo science and Biology cloud application.

GeoScience : Satellite Image Processing :

Geoscience applications collect, produce, and analyses massive amounts of geospatial and
non-spatial data. As the technology progresses and our planet becomes more
instrumented (i.e., through the deployment of sensors and satellites for monitoring), the
volume of data that need to be processed increases significantly. In particular, the
geographic information system (GIS) is a major element of geoscience applications. GIS
applications capture, store, manipulate, analyze, manage, and present all types of
geographically referenced data. This type of information is now becoming increasingly
relevant to a wide variety of application domains: from advanced
farming to civil security and also natural resources management. As a result, a considerable
amount of geo-referenced data is ingested into computer systems for further processing
and analysis. Cloud computing is an attractive option for executing these demanding tasks
and extracting meaningful information for supporting decision makers. Satellite remote
sensing generates hundreds of gigabytes of raw images that need to be further processed
to become the basis of several different GIS products. This process requires both I/O and
compute intensive tasks.

Biology : Protein Structure Prediction

Protein structure prediction is a computationally intensive task fundamental for different


types of research in the life sciences. Among these is the design of new drugs for the
treatment of diseases. The geometrical structure of a protein cannot be directly inferred
from the sequence of genes that compose its structure, but it is the result of complex
computations aimed at identifying the structure that minimizes the required energy. This
task requires the investigation of a space with a massive number of states, and
consequently creating a large number of computations for each of these states. The
computational power required for protein structure prediction can now be acquired on
demand, without owning a cluster or doing all the bureaucracy for getting access to
parallel and distributed computing facilities. Cloud computing grants the access to such
capacity on a pay per use basis.

Fig. : Jeeva Portal task graph on Aneka (initial phase, classification phase, final phase).
Legend: A: BLAST; B: Create Data Vector; C: HH Classifier; D: SS Classifier;
E: TT Classifier; F: HS Classifier; G: ST Classifier.
Q.38 Explain in detail Business & Consumer Applications CRM, ERP of Cloud.
Business and Consumer Applications

The business and consumer sector is the one that probably benefits the most from Cloud
computing technologies. On the one hand the opportunity of transforming capital cost into
operational costs makes Clouds an attractive option for all enterprises that are IT centric.
On the other hand, the sense of ubiquity that Cloud offers for accessing data and services
makes it interesting for end users as well. Moreover, the elastic nature of Cloud
technologies does not require huge upfront investments, thus allowing new ideas to be
quickly translated into products and services that can comfortably grow with the demand.
The combination of all these elements has made Cloud computing the preferred
technology for a wide range of applications: from CRM and ERP systems to productivity
and social networking applications.

CRM and ERP

Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP)


applications are market segments that are flourishing in the Cloud, with CRM applications
being more mature than ERP implementations.

Cloud CRM applications constitute a great opportunity for small


enterprises and start-ups to have a fully functional CRM software without large upfront
costs and by paying subscriptions. Moreover, customer relationship management is not an
activity that requires specific needs and it can be easily moved to the Cloud. Such a
characteristic, together with the possibility of having access to your business and customer
data from everywhere and any device, has fostered the spread of Cloud CRM
applications. CRM handles the sales, marketing, and customer service information. It
handles activities such as recording customer interactions, sales tracking, pipeline
management, prospecting, and creating/evaluating marketing campaigns.

ERP solutions on the Cloud are less mature and have to compete
with well-established in-house solutions. ERP systems integrate several aspects of an
enterprise like finance and accounting, human resources, manufacturing, supply chain
management, project management, and customer relationship management. ERP handles
the back-end processes and internal information. It takes care of tasks like order
placement, tracking, billing, shipping, accounting, and supply chain details.

Benefits of CRM and ERP

1. Centralization of Accounts and Contacts


Both CRM and ERP software systems store detailed information on customers
(including contact information, order history, and billing/shipping details). By
integrating customer relationship management with ERP, you can see all the details for one
account in one location, rather than having to look up the same account in two
different solutions.

This will also save time on data entry. Instead of updating accounts/contacts in both
systems, you will only have to do so in one centralized location.

2. Reduction of Data Duplication


If you’ve ever used an automated solution such as ERP or CRM before, you know that
duplicate data can be a major headache for software users. While enterprise resource
planning software is focused on billing and shipping data, customer relationship
management software holds customer-centric information and sales metrics. With an
integrated approach, duplicate data entry is a thing of the past, since customer
relationship management in ERP works through the same rules and structure.
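As a rough illustration of this point, the sketch below merges hypothetical CRM and ERP records on a shared customer ID so that each account exists exactly once. The field names (`contact`, `billing`, and so on) are made up for the example and do not come from any particular product.

```python
# Minimal sketch: merge CRM and ERP customer records on a shared ID
# so each customer exists once. All record fields are hypothetical.

def merge_records(crm_records, erp_records):
    """Combine per-system records into one record per customer ID."""
    merged = {}
    for rec in crm_records + erp_records:
        cust = merged.setdefault(rec["id"], {})
        cust.update({k: v for k, v in rec.items() if k != "id"})
    return merged

crm = [{"id": "C1", "contact": "a@example.com", "pipeline": "prospect"}]
erp = [{"id": "C1", "billing": "NET30"}, {"id": "C2", "billing": "NET15"}]

customers = merge_records(crm, erp)
# "C1" now carries contact, pipeline, and billing fields in one place.
```

In an integrated suite this merge is unnecessary in the first place, because both departments write to the same record; the sketch only shows what the shared structure buys you.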

3. Stronger Visibility and Forecasting


In a major B2B enterprise, sales reps need to be able to access data within the enterprise
resource planning solution: they often need to check the status of an order, make changes,
and track the progress of orders and accounts at any time. In terms of forecasting, an
integrated ERP and CRM system will naturally provide better data, since it is real-time
and as accurate as possible.

4. Cross-Departmental Collaboration
A major benefit of both enterprise resource planning and customer relationship
management software is the ability to work cross-departmentally, without department
silos. (In business, organizational silos refer to business divisions that operate
independently and avoid sharing information. It also refers to businesses
whose departments run siloed applications, in which information cannot be shared
because of system limitations.) In a siloed business approach, departments are completely
distinct from one another, discouraging collaboration, making data accessibility a
challenge, and making data duplication common. A cross-departmental approach ensures real-
time data is always being utilized and departments are working together to accomplish the
same goals.

5. Easier Quoting and Ordering


A sales rep will be able to take a proposal from CRM and turn it directly into an order in the
ERP system without having to change systems and re-enter the data in multiple locations.
This will save time and improve company efficiency. Additionally, the sales reps would also
have improved visibility into the status/progress of an order for customer updates as well
as easy access to make changes if needed.

6. Correct Quoting and Inventory


A sales rep, with the integration, can look at the ERP to view the company’s inventory and
current costs to get the most accurate quote. They can factor in things like promotional
and discount pricing from the CRM. From there, they use this information to make a far
more accurate quote and ultimately, a faster timeline from quote to the finished product.
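The quoting flow in points 5 and 6 can be sketched as a small function that checks inventory on the ERP side and applies a CRM-side promotional discount. Every name and number here is an illustrative assumption, not a real product's API.

```python
# Hedged sketch of quote generation combining ERP data (inventory)
# with CRM data (promotional discount). Values are illustrative.

def make_quote(qty, unit_price, stock, discount_pct):
    """Return a quote dict, refusing quantities the ERP says are out of stock."""
    if qty > stock:
        return {"ok": False, "reason": "insufficient inventory"}
    total = qty * unit_price * (1 - discount_pct / 100)
    return {"ok": True, "total": round(total, 2)}

quote = make_quote(qty=10, unit_price=25.0, stock=40, discount_pct=10)
# A quote for 50 units would be refused, since only 40 are in stock.
```

The point of the integration is that the inventory check and the discount lookup draw on one shared data store instead of two systems that must be reconciled by hand.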

Salesforce

Salesforce is a customer relationship management (CRM) platform. It helps your marketing,
sales, commerce, service, and IT teams work as one from anywhere. Salesforce.com is
probably the most popular and mature CRM solution available today. The application
provides customizable CRM solutions that can be integrated with additional features
developed by third parties. Salesforce.com is based on the Force.com Cloud development
platform, a scalable and high-performance middleware executing all the
operations of all Salesforce.com applications.

Q.39 Explain in detail social networking cloud application.


Social Networking

Social networking applications have grown considerably, and in order to sustain their traffic
and serve millions of users seamlessly, services like Twitter or Facebook have leveraged
Cloud computing technologies. The possibility of continuously adding capacity while
systems are running is the most attractive feature for social networks, which constantly
increase their user base.

Facebook is probably the most evident and interesting environment
in social networking. It became one of the largest websites in the world, with more than 800
million users. To sustain this incredible growth it has been fundamental to be
capable of continuously adding capacity and developing new scalable technologies and
software systems, while keeping high performance for a smooth user experience.
Currently, the social network is backed by two data centers that have been built and
optimized to reduce costs and environmental impact. On top of this highly efficient
infrastructure, built and designed out of inexpensive hardware, a completely customized
stack of open-source technologies, opportunely modified and refined, constitutes the back end
of the largest social network.

Taken together, these technologies constitute a powerful
platform for developing Cloud applications. This platform primarily supports Facebook
itself and offers APIs to integrate third-party applications with Facebook's core
infrastructure to deliver additional services, such as social games and quizzes created by
others. The reference stack serving Facebook is based on LAMP (Linux, Apache, MySQL,
and PHP). This collection of technologies is accompanied by a collection of other services
developed in-house. These services are developed in a variety of languages and implement
specific functionalities such as search, news feeds, and notifications.

While serving page requests, the social graph of the user is composed. The social graph
identifies the collection of interlinked information that is of relevance for a given user.
Most of the user data is served by querying a distributed cluster of MySQL instances,
which mostly contain key-value pairs. This data is then cached for faster retrieval.
The rest of the relevant information is then composed by using the services
mentioned before. These services are located closer to the data and are developed in
languages that provide better performance than PHP. The development of services is
facilitated by a set of internally developed tools. One of the core elements is Thrift, a
collection of abstractions (and language bindings) that allow cross-language development.
Thrift allows services developed in different languages to communicate and exchange data;
bindings for Thrift in different languages take care of data serialization and
deserialization, communication, and client and server boilerplate code. This simplifies the
work of developers, who can quickly prototype services and leverage existing ones. Other
relevant services and tools are Scribe, which aggregates streaming log feeds, and
applications for alerting and monitoring.
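The "query MySQL, then cache for faster retrieval" flow described above is the classic cache-aside pattern. The sketch below illustrates it with plain dictionaries standing in for memcache and the MySQL cluster; this is a conceptual model, not Facebook's actual code.

```python
# Illustrative cache-aside lookup: check the cache first, fall back
# to the backing store on a miss, then populate the cache. The
# in-memory dicts stand in for memcache and the MySQL key-value rows.

cache = {}
database = {"user:42:name": "alice"}   # pretend MySQL key-value data
db_reads = 0

def get(key):
    global db_reads
    if key in cache:                   # cache hit: no database round trip
        return cache[key]
    db_reads += 1                      # cache miss: query the backing store
    value = database.get(key)
    if value is not None:
        cache[key] = value             # populate the cache for next time
    return value

first = get("user:42:name")            # miss, reads the database
second = get("user:42:name")           # hit, served from the cache
```

At Facebook's scale the payoff is that the vast majority of reads never reach MySQL at all, which is what makes serving the social graph at page-load time feasible.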

Q.40 Explain in detail Media Applications in cloud.


Media Applications

Media applications are a niche that has taken considerable advantage of Cloud
computing technologies. In particular, video-processing operations, such as encoding,
transcoding, composition, and rendering, are good candidates for a Cloud-based
environment. Video conferencing apps provide a simple and instantly connected experience,
allowing us to communicate with business partners, friends, and relatives using cloud-
based video conferencing. The benefits of video conferencing are that it reduces cost,
increases efficiency, and removes interoperability problems.

A cloud video streaming solution involves streaming and storing videos in the cloud
using a network of video streaming servers in the cloud. Some of the key features of the best
cloud service for video streaming and best cloud storage for streaming video include:

1. Efficient video hosting: allows video streaming service providers to deliver content
anytime
2. Ability to live stream video to cloud storage: cloud service for video streaming
allows you to record live streams and stream video from cloud storage anytime.
3. A cloud video encoder or cloud video encoding service: essential for cloud-based
live video streaming. Cloud media file encoding or video encoding in the cloud
refers to converting a video file from one format into another.
4. A cloud video transcoding service: allows you to prepare your videos to be delivered
on the web. Transcoding means creating different versions of the same video, each
version with a different size and quality.
5. Video analytics support.
6. A cloud-based media player: such as an HTML5 video player.
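The transcoding idea in item 4, creating several versions of the same video at different sizes and qualities, can be sketched as a small "rendition ladder". The resolutions and bitrates below are illustrative values, not any provider's real presets.

```python
# Sketch of transcoding planning: one source video becomes several
# renditions, each at a different resolution and bitrate. The ladder
# values are made up for illustration.

LADDER = [(1080, 5000), (720, 2800), (480, 1400)]  # (height, kbps)

def plan_renditions(source_height):
    """Only downscale: skip ladder rungs taller than the source."""
    return [(h, kbps) for h, kbps in LADDER if h <= source_height]

renditions = plan_renditions(source_height=720)
# A 720p source yields 720p and 480p renditions, but no upscaled 1080p.
```

A cloud transcoding service runs each rendition as an independent job, which is exactly the kind of embarrassingly parallel workload that elastic compute handles well.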
Animoto

Animoto is perhaps the most popular example of media applications on the Cloud. The
website provides users with a very straightforward interface for quickly creating videos out
of images, music, and video fragments submitted by users. Users select a specific theme for
the video, upload the photos and videos and order them in the sequence they want them to
appear, select the song for the music, and render the video. The process is executed in the
background, and the user is notified via e-mail once the video is rendered. The core value of
Animoto is the ability to quickly create videos with stunning effects without user
intervention: a proprietary AI engine that selects the animation and transition effects
according to the pictures and music drives the rendering operation. Users only have to define
the storyboard by organizing pictures and videos into the desired sequence. If the user is
not satisfied with the result, the video can be rendered again and the engine will select a
different composition, producing a different outcome every time. The service allows creating
30-second videos for free; by paying a monthly or yearly subscription it is possible to
produce videos of any length and to choose among a wider range of templates.

Fig: Animoto reference architecture.


Q.41 Explain in detail multiplayer online gaming cloud
application.

Massively multiplayer online games (MMOGs) are client/server applications, in which the
game servers simulate a persistent world within a game session, receive and process
commands from players distributed across the Internet (shooting, collecting items, chat),
and interoperate with a billing and accounting system. Game servers are typically hosted
by specialized companies called Hosters, which rent computational and network capacity
to game operators for running game servers with guaranteed Quality of Service.

To support millions of active concurrent players and many other entities


simultaneously, Hosters install and operate a large static infrastructure, with
hundreds to thousands of computers onto which the load of each game session is
distributed. However, the demand of a MMOG is highly dynamic and depends on
various factors such as game popularity, content updates, or weekend and public
holiday effects. To sustain such highly variable loads, game operators over-
provision a large static infrastructure capable of sustaining the game peak load,
even though a large portion of the resources is unused most of the time. This
inefficient resource utilization has negative economic impacts by preventing any but
the largest hosting centres from joining the market, and dramatically increases
prices.
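The cost of static over-provisioning versus elastic provisioning can be illustrated with a back-of-envelope calculation. The player counts and per-server capacity below are invented numbers chosen only to show the shape of the saving.

```python
# Rough illustration of why static over-provisioning is wasteful:
# compare server-hours for a fleet sized to the daily peak against an
# elastic fleet sized per hour. All load numbers are made up.

hourly_players = [200, 150, 100, 100, 400, 900, 1200, 600]  # sample day
PLAYERS_PER_SERVER = 100

def servers_needed(players):
    return -(-players // PLAYERS_PER_SERVER)  # ceiling division

static_hours = max(servers_needed(p) for p in hourly_players) * len(hourly_players)
elastic_hours = sum(servers_needed(p) for p in hourly_players)
# The static fleet pays for peak capacity in every hour; the elastic
# fleet pays only for what each hour actually needs.
```

Even in this toy example the elastic fleet uses well under half the server-hours, which is the economic argument for moving game hosting onto Cloud infrastructure.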

This solution provides an overview of common components and design patterns used to host
game infrastructure on cloud platforms.

Video games have evolved over the last several decades into a thriving entertainment
business. With broadband internet becoming widespread, one of the key factors in the
growth of games has been online play.

Online play comes in several forms, such as session-based multiplayer matches, massively
multiplayer virtual worlds, and intertwined single-player experiences.
Q.42 Explain in detail Amazon Web Services cloud platform?

Amazon has many services for cloud applications. Let us list down a few key services of the
AWS ecosystem and a brief description of how developers use them in their business.
Amazon has a list of services:

 Compute service

 Storage

 Database

 Networking and delivery of content

 Security tools

 Developer tools

 Management tools

Compute Service

These services help developers build, deploy, and scale an application in the cloud
platform.

AWS EC2

 It is a web service that allows developers to rent virtual machines and automatically
scale compute capacity when required.

 It offers various instance types to developers so that they can choose required
resources such as CPU, memory, storage, and networking capacity based on their
application requirements.
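A minimal sketch of the automatic-scaling idea is a policy that adds or removes instances based on average CPU utilization. The thresholds and limits below are illustrative assumptions, not EC2 Auto Scaling defaults.

```python
# Hedged sketch of an auto-scaling policy: scale out when average CPU
# is high, scale in when it is low, and stay within fleet bounds.
# Thresholds are illustrative, not real EC2 Auto Scaling values.

def desired_instances(current, avg_cpu, high=70, low=30, max_n=10, min_n=1):
    if avg_cpu > high:
        return min(current + 1, max_n)   # scale out under load
    if avg_cpu < low:
        return max(current - 1, min_n)   # scale in when idle
    return current                       # within band: no change

after_spike = desired_instances(current=3, avg_cpu=85)  # adds a server
after_lull = desired_instances(current=3, avg_cpu=10)   # removes one
```

The real service evaluates metrics like this on a schedule and launches or terminates instances accordingly; the developer only declares the policy.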

AWS Lambda

 AWS Lambda is a serverless compute service. It is also responsible for executing code for
applications.

 It helps you execute a program without the hassle of managing servers.


Storage

AWS provides web-based data storage services for archiving data. A primary advantage is
disaster recovery with high durability.

Amazon S3

 It is an object-based cloud storage service that is utilized for online data backup.

 Amazon S3 provides storage through a web services interface and is designed for
developers where web-scale computing can be easier for them.

Amazon EBS

 It provides high-availability storage volumes for persistent data. It is mainly used
with Amazon EC2 instances.

 EBS volumes are used explicitly for primary storage such as file storage, databases
storage, and block-level storage.


Database

AWS database domain service offers cost-efficient, highly secure, and scalable database
instances in the cloud.

DynamoDB

 It is a flexible NoSQL database service that offers fast and reliable performance with no
scalability issues.

 It is a multi-region and durable database with instant built-in security, backup and
restores features.
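Much of that scalability comes from partitioning items by a hash of the key, so load and storage spread across many machines. The sketch below shows the idea in a simplified form; DynamoDB's real partitioning scheme is internal and considerably more involved.

```python
# Simplified illustration of hash-based partitioning: the partition
# key determines which storage partition holds an item, so items
# (and traffic) spread evenly across partitions.

import hashlib

N_PARTITIONS = 4  # illustrative; real systems repartition dynamically

def partition_for(key):
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % N_PARTITIONS

placements = {k: partition_for(k) for k in ["user#1", "user#2", "order#9"]}
# Each key maps deterministically to one of the partitions.
```

Because the mapping is deterministic, any node can compute where a key lives without a central directory, which is what lets throughput grow by adding partitions.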

RDS

 It is a managed distributed relational database cloud service that helps developers to

operate and scale a database in a simple manner.
 It was launched to simplify the setup, operation, and scaling process for developers
accessing a relational database.

Q.43 Explain in detail google app engine cloud platform.

What is Google App Engine?

App Engine is a fully managed, serverless platform for developing and hosting web
applications at scale. You can choose from several popular languages, libraries, and
frameworks to develop your apps, and then let App Engine take care of provisioning servers
and scaling your app instances based on demand.

Google App Engine (GAE) is a platform-as-a-service product that provides web app
developers and enterprises with access to Google's scalable hosting and Tier 1
internet service. GAE requires that applications be written in Java or Python, store
data in Google Bigtable, and use the Google query language. Noncompliant
applications require modification to use GAE.

Google provides GAE free up to a certain amount of use for the following
resources:

 processor (CPU)

 storage

 application programming interface (API) calls

 concurrent requests

How is GAE used?

GAE is a fully managed, serverless platform that is used to host, build, and deploy
web applications. Users can create a GAE account, set up a software development
kit, and write application source code. They can then use GAE to test and deploy
the code in the cloud. One way to use GAE is building scalable mobile
application back ends that adapt to workloads as needed. Application testing is
another way to use GAE: users can route traffic to different application versions
to A/B test them and see which version performs better under various workloads.
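The A/B traffic routing just described can be sketched as deterministic, weighted assignment of users to versions: hashing the user ID keeps each user on the same version across requests. The 20% weight below is an arbitrary example, not a GAE default.

```python
# Sketch of weighted traffic splitting for A/B tests. Hashing the
# user ID makes the assignment sticky per user; the weight is the
# fraction of users routed to version B.

import hashlib

def version_for(user_id, weight_b=0.2):
    """Send roughly weight_b of users to version B, the rest to A."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "B" if h < weight_b * 100 else "A"

split = [version_for(f"user-{i}") for i in range(1000)]
share_b = split.count("B") / len(split)   # close to 0.2 over many users
```

GAE exposes this as a platform feature (splitting by IP or cookie), so the application itself does not need to implement the routing; the sketch only shows the underlying mechanism.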

What are GAE's key features?

Key features of GAE include the following:

API selection. GAE has several built-in APIs, including the following five:

 Blobstore for serving large data objects;

 GAE Cloud Storage for storing data objects;

 Page Speed Service for automatically speeding up webpage load times;

 URL Fetch Service to issue HTTP requests and receive responses for
efficiency and scaling; and

 Memcache for a fully managed in-memory data store.

Benefits of GAE

 Ease of setup and use. GAE is fully managed, so users can write code
without considering IT operations and back-end infrastructure. The built-
in APIs enable users to build different types of applications. Access to
application logs also facilitates debugging and monitoring in production.

 Pay-per-use pricing. GAE's billing scheme only charges users daily for
the resources they use. Users can monitor their resource usage and bills
on a dashboard.

 Scalability. Google App Engine automatically scales as workloads


fluctuate, adding and removing application instances or application
resources as needed.
 Security. GAE supports the ability to specify a range of
acceptable Internet Protocol (IP) addresses. Users can allowlist specific
networks and services and blocklist specific IP addresses.

The Architecture of Google App Engine (GAE):

Fig: GAE architecture (the original diagram, not reproduced here, shows a multitenant-aware
runtime layered over metadata, shared application data, and full-text search indexes).
Q.44 Explain in detail Microsoft Azure cloud Platform.
Microsoft Azure
Windows Azure is provided by Microsoft. Azure can be described as the managed data
centers that are used to build, deploy, and manage applications and provide services through
a global network. The services provided by Microsoft Azure are PaaS and IaaS. Many
programming languages and frameworks are supported by it.

Azure as PaaS (Platform as a Service)

As the name suggests, a platform is provided to clients to develop and deploy software.
The clients can focus on the application development rather than having to worry about
hardware and infrastructure. It also takes care of most of the operating systems, servers and
networking issues.

Pros
 The overall cost is low as the resources are allocated on demand and servers are
automatically updated.
 It is less vulnerable, as servers are automatically updated and checked for all
known security issues. The whole process is not visible to the developer and thus does
not pose a risk of a data breach.
 Since new versions of development tools are tested by the Azure team, it becomes
easy for developers to move on to new tools. This also helps the developers meet
customers' demands by quickly adapting to new versions.

Cons
 There are portability issues with using PaaS. There can be a different environment at
Azure, thus the application might have to be adapted accordingly.

Azure as IaaS (Infrastructure as a Service)

It is a managed compute service that gives complete control of the operating system and
the application platform stack to application developers. It lets users access,
manage, and monitor the data centers themselves.

Pros
 This is ideal for the application where complete control is required. The virtual
machine can be completely adapted to the requirements of the organization or
business.
 IaaS facilitates very efficient design time portability. This means application can be
migrated to Windows Azure without rework. All the application dependencies such
as database can also be migrated to Azure.
 IaaS allows quick transition of services to clouds, which helps the vendors to offer
services to their clients easily. This also helps the vendors to expand their business
by selling the existing software or services in new markets.

Cons
 Since users are given complete control, they are tempted to stick to a particular
version of their applications' dependencies. It might become difficult for them to
migrate the application to future versions.
 There are many factors which increase the cost of its operation, for example,
higher server maintenance for patching and upgrading software.
 There are lots of security risks from unpatched servers. Some companies have well-
defined processes for testing and updating on-premise servers for security
vulnerabilities. These processes need to be extended to the cloud-hosted IaaS VMs
to mitigate hacking risks.
 Unpatched servers pose a great security risk. Unlike PaaS, there is no provision
for automatic server patching in IaaS. An unpatched server with sensitive
information can be very vulnerable, affecting the entire business of an organization.
 It is difficult to maintain legacy apps in IaaS. They can be stuck with older versions
of operating systems and application stacks, resulting in applications that
are difficult to maintain and extend with new functionality over time.

Q. 45. Explain in detail SQL Azure platform.

Ans:

Azure SQL is a family of managed, secure, and intelligent products that use the SQL
Server database engine in the Azure cloud.

 Azure SQL Database: Support modern cloud applications on an intelligent, managed


database service that includes serverless compute.
 Azure SQL Managed Instance: Modernize your existing SQL Server applications at
scale with an intelligent fully managed instance as a service, with almost 100% feature
parity with the SQL Server database engine. Best for most migrations to the cloud.
 SQL Server on Azure VMs: Lift-and-shift your SQL Server workloads with ease and
maintain 100% SQL Server compatibility and operating system-level access.
Azure SQL is built upon the familiar SQL Server engine, so you can migrate
applications with ease and continue to use the tools, languages, and resources you're
familiar with. Your skills and experience transfer to the cloud, so you can do even more
with what you already have.

Each product fits into Microsoft's Azure SQL data platform to match different business
requirements. Whether you prioritize cost savings or minimal administration, the
comparison below can help you decide which approach delivers against the business
requirements you care about most.

Fig: Service comparison.

As seen in the diagram, each service offering can be characterized by the level of
administration you have over the infrastructure, and by the degree of cost efficiency.

In Azure, you can have your SQL Server workloads running as a hosted service (PaaS), or a
hosted infrastructure (IaaS) supporting the software layer, such as Software-as-a-Service
(SaaS) or an application. Within PaaS, you have multiple product options, and service tiers
within each option. The key question that you need to ask when deciding between PaaS or
IaaS is do you want to manage your database, apply patches, and take backups, or do you
want to delegate these operations to Azure?
Azure SQL Database

Azure SQL Database is a relational database-as-a-service (DBaaS) hosted in Azure that falls
into the industry category of Platform-as-a-Service (PaaS).

 Best for modern cloud applications that want to use the latest stable SQL Server
features and have time constraints in development and marketing.
 A fully managed SQL Server database engine, based on the latest stable Enterprise
Edition of SQL Server. SQL Database has two deployment options built on
standardized hardware and software that is owned, hosted, and maintained by
Microsoft.

With SQL Database, you can use built-in features and functionality that would otherwise
require extensive configuration (either on-premises or in an Azure virtual machine). When
using SQL Database, you pay-as-you-go with options to scale up or out for greater power
with no interruption. SQL Database has some additional features that are not available in
SQL Server, such as built-in high availability, intelligence, and management.

Azure SQL Database offers the following deployment options:

 As a single database with its own set of resources managed via a logical SQL server.
A single database is similar to a contained database in SQL Server. This option is
optimized for modern application development of new cloud-born
applications. Hyperscale and serverless options are available.
 An elastic pool, which is a collection of databases with a shared set of resources
managed via a logical server. Single databases can be moved into and out of an elastic
pool. This option is optimized for modern application development of new cloud-born
applications using the multi-tenant SaaS application pattern. Elastic pools provide a
cost-effective solution for managing the performance of multiple databases that have
variable usage patterns.
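The economics of elastic pools can be illustrated with a toy calculation: provisioning each database for its own peak versus provisioning one pool for the peak of the combined load. The load figures below are invented, chosen only to show why databases with variable, non-overlapping peaks make pooling cheaper.

```python
# Back-of-envelope sketch: capacity needed per-database (sum of each
# database's peak) versus pooled (peak of the summed load). The
# DTU-like load numbers are illustrative.

loads = {  # hourly load samples for three tenant databases
    "db1": [10, 80, 10, 10],
    "db2": [10, 10, 80, 10],
    "db3": [10, 10, 10, 80],
}

# Separate provisioning must cover every database's individual peak.
per_db_capacity = sum(max(samples) for samples in loads.values())

# A shared pool only needs to cover the peak of the combined load.
pool_capacity = max(sum(hour) for hour in zip(*loads.values()))
```

Because the three peaks never coincide, the pool needs far less capacity than the sum of the individual peaks, which is the multi-tenant SaaS pattern the text describes.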

Azure SQL Managed Instance

Azure SQL Managed Instance falls into the industry category of Platform-as-a-Service
(PaaS), and is best for most migrations to the cloud. SQL Managed Instance is a collection
of system and user databases with a shared set of resources that is lift-and-shift ready.
 Best for new applications or existing on-premises applications that want to use the
latest stable SQL Server features and that are migrated to the cloud with minimal
changes. An instance of SQL Managed Instance is similar to an instance of
the Microsoft SQL Server database engine offering shared resources for databases and
additional instance-scoped features.
 SQL Managed Instance supports database migration from on-premises with minimal
to no database change. This option provides all of the PaaS benefits of Azure SQL
Database but adds capabilities that were previously only available in SQL Server
VMs. This includes a native virtual network and near 100% compatibility with on-
premises SQL Server. Instances of SQL Managed Instance provide full SQL Server
access and feature compatibility for migrating SQL Servers to Azure.

SQL Server on Azure VM

SQL Server on Azure VM falls into the industry category of Infrastructure-as-a-Service
(IaaS) and allows you to run SQL Server inside a fully managed virtual machine (VM) in
Azure.

 SQL Server installed and hosted in the cloud runs on Windows Server or Linux virtual
machines running on Azure, also known as an infrastructure as a service (IaaS). SQL
virtual machines are a good option for migrating on-premises SQL Server databases
and applications without any database change. All recent versions and editions of SQL
Server are available for installation in an IaaS virtual machine.
 Best for migrations and applications requiring OS-level access. SQL virtual machines
in Azure are lift-and-shift ready for existing applications that require fast migration to
the cloud with minimal changes or no changes. SQL virtual machines offer full
administrative control over the SQL Server instance and underlying OS for migration
to Azure.
 The most significant difference from SQL Database and SQL Managed Instance is
that SQL Server on Azure Virtual Machines allows full control over the database
engine. You can choose when to start maintenance/patching, change the recovery
model to simple or bulk-logged, pause or start the service when needed, and you can
fully customize the SQL Server database engine. With this additional control comes
the added responsibility to manage the virtual machine.
 Rapid development and test scenarios when you do not want to buy on-premises non-
production SQL Server hardware. SQL virtual machines also run on standardized
hardware that is owned, hosted, and maintained by Microsoft. When using SQL
virtual machines, you can either pay-as-you-go for a SQL Server license already
included in a SQL Server image or easily use an existing license. You can also stop or
resume the VM as needed.
 Optimized for migrating existing applications to Azure or extending existing on-
premises applications to the cloud in hybrid deployments. In addition, you can use
SQL Server in a virtual machine to develop and test traditional SQL Server
applications. With SQL virtual machines, you have the full administrative rights over
a dedicated SQL Server instance and a cloud-based VM. It is a perfect choice when an
organization already has IT resources available to maintain the virtual machines.
These capabilities allow you to build a highly customized system to address your
application's specific performance and availability requirements.

Service-level agreement (SLA)

For many IT departments, meeting the up-time obligations of a service-level
agreement (SLA) is a top priority. In this section, we look at which SLA applies to each
database hosting option.

For both Azure SQL Database and Azure SQL Managed Instance, Microsoft
provides an availability SLA of 99.99%. For the latest information, see Service-level
agreement.

For SQL on Azure VM, Microsoft provides an availability SLA of 99.95% that
covers just the virtual machine. This SLA does not cover the processes (such as SQL
Server) running on the VM and requires that you host at least two VM instances in an
availability set. For the latest information, see the VM SLA. For database high
availability (HA) within VMs, you should configure one of the supported high
availability options in SQL Server, such as Always On availability groups. Using a
supported high availability option doesn't provide an additional SLA, but allows you to
achieve >99.99% database availability.
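These SLA percentages translate into concrete downtime budgets. The short calculation below converts them into allowed minutes of unavailability per 30-day month.

```python
# Converting availability SLAs into allowed downtime per 30-day month:
# 99.99% (SQL Database / Managed Instance) vs. 99.95% (VM only).

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def max_downtime_minutes(sla_pct):
    return round((1 - sla_pct / 100) * MINUTES_PER_MONTH, 2)

paas_downtime = max_downtime_minutes(99.99)  # about 4.3 minutes/month
vm_downtime = max_downtime_minutes(99.95)    # about 21.6 minutes/month
```

The gap explains why database-level high availability (for example, Always On availability groups) matters on VMs: the 99.95% figure covers only the VM, not the SQL Server process running inside it.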
