Final CC QB With Ans
ANS:-
Definition:-
The cloud is a large group of interconnected computers. These computers can be personal
computers or network servers; cloud computing is a technology that uses the Internet and
central remote servers to maintain data and applications.
HISTORICAL DEVELOPMENTS
Client/Server Computing: Centralized Applications and Storage
o In the client/server model, all the software applications, data, and control resided on
huge mainframe computers, known as servers.
o If a user wanted to access specific data or run a program, he had to connect to the
mainframe and then do his business. Users connected to the server via a computer
terminal, called a workstation or client.
Drawbacks in client /server Model
o Processing power is limited.
o Access was not immediate, nor could two users access the same data at the same
time.
o When multiple people are sharing a single computer, you have to wait for your turn.
o There isn't always immediate access in a client/server environment.
Peer-to-Peer Computing: Sharing Resources
o P2P computing defines a network architecture in which each computer has
equivalent capabilities and responsibilities.
o In the P2P environment, every computer is a client and a server; there are no masters
and slaves.
o P2P enables direct exchange of resources and services.
o There is no need for a central server
o P2P was a decentralizing concept. Control is decentralized, with all computers
functioning as equals. Content is also dispersed among the various peer computers
Distributed Computing:
o Providing More Computing Power
o One of the subsets of the P2P model.
o Distributed computing, where idle PCs across a network or Internet are tapped to
provide computing power for large, processor-intensive projects.
Collaborative Computing: Working as a Group
o Enabling multiple users to work simultaneously on the same computer-based project is
called collaborative computing.
o The goal was to enable multiple users to collaborate on group projects online, in real
time. To collaborate on any project, users must first be able to talk to one another.
o Most collaboration systems offer the complete range of audio/video options, for full-
featured multiple-user video conferencing.
Cloud Computing: The Next Step in Collaboration
o With the growth of the Internet, there was no need to limit group collaboration to a
single enterprise's network environment. Users from multiple locations within a
corporation, and from multiple organizations, desired to collaborate on projects that
crossed company and geographic boundaries.
Q.2. Explain in detail Characteristics and Benefits of cloud.
ANS:-
CHARACTERISTICS OF CLOUD COMPUTING
1. Agility
The cloud works in a distributed computing environment. It shares resources among
users and works very fast.
2. High availability and reliability
The availability of servers is high and more reliable because the chances of
infrastructure failure are minimum.
3. High Scalability
Cloud offers "on-demand" provisioning of resources on a large scale, without requiring
engineers to plan capacity for peak loads.
4. Multi-Sharing
With the help of cloud computing, multiple users and applications can work more
efficiently with cost reductions by sharing common infrastructure.
5. Device and Location Independence
Cloud computing enables the users to access systems using a web browser regardless
of their location or what device they use e.g. PC, mobile phone, etc. As infrastructure is
off-site (typically provided by a third-party) and accessed via the Internet, users can connect
from anywhere.
6. Maintenance
Maintenance of cloud computing applications is easier, since they do not need to be
installed on each user's computer and can be accessed from different places. So, it reduces
the cost also.
7. Low Cost
By using cloud computing, cost is reduced: to use cloud services, an IT company need
not set up its own infrastructure, and it pays only as per its usage of resources.
8. Services in the pay-per-use mode
Application Programming Interfaces (APIs) are provided to the users so that they
can access services on the cloud by using these APIs and pay the charges as per the usage of
services.
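The pay-per-use idea can be sketched in a few lines of code. The unit rates and the `metered_charge` helper below are hypothetical, invented purely to show how metered usage turns into a bill:

```python
# Hypothetical pay-per-use billing sketch: charges accrue only for
# the resources actually consumed, with no upfront infrastructure cost.

# Assumed (made-up) unit rates for the example.
RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "api_calls": 0.0001}

def metered_charge(usage: dict) -> float:
    """Total charge = sum of usage * unit rate for each metered resource."""
    return round(sum(RATES[res] * qty for res, qty in usage.items()), 2)

# One month's usage for one tenant: pay only for what was consumed.
bill = metered_charge({"cpu_hours": 100, "gb_stored": 50, "api_calls": 20000})
print(bill)  # 0.05*100 + 0.02*50 + 0.0001*20000 = 5 + 1 + 2 = 8.0
```

The same metering interface is what the provider's APIs expose to consumers, so the bill tracks usage rather than owned hardware.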
BENEFITS OF CLOUD COMPUTING:
1. Lower costs:
Because cloud networks operate at higher efficiencies and with greater utilization,
significant cost reductions are often encountered.
2. Ease of utilization:
Depending upon the type of service being offered, you may find that you do not
require hardware or software licenses to implement your service.
3. Quality of Service:
The Quality of Service (QoS) is something that you can obtain under contract from
your vendor.
4. Reliability:
The scale of cloud computing networks and their ability to provide load balancing
and failover makes them highly reliable, often much more reliable than what you can
achieve in a single organization.
5. Outsourced IT management:
A cloud computing deployment lets someone else manage your computing
infrastructure while you manage your business. In most instances, you achieve
considerable reductions in IT staffing costs.
6. Simplified maintenance and upgrade:
Because the system is centralized, you can easily apply patches and upgrades. This
means your users always have access to the latest software versions.
7. Low Barrier to Entry:
In particular, upfront capital expenditures are dramatically reduced. In cloud
computing, anyone can be a giant at any time.
Q3. Explain the concept of Cloud Virtualization.
ANS:-
VIRTUALIZATION
Virtualization is a technique which allows sharing a single physical instance of an
application or resource among multiple organizations or tenants (customers).
Creating a virtual machine over existing operating system and hardware is referred
to as Hardware Virtualization.
Virtual Machines provide an environment that is logically separated from the
underlying hardware.
The machine on which the virtual machine is created is known as the host machine,
and the virtual machine is referred to as a guest machine.
This virtual machine is managed by a software or firmware known as a
hypervisor.
[Fig. a: Traditional vs. virtual stack — VM 1, VM 2 and VM 3 running on a hypervisor, layered over the OS and hardware]
Hypervisor
The hypervisor is a firmware or low-level program that acts as a Virtual Machine
Manager.
There are two types of hypervisor:
o Type 1 is a hypervisor that is installed directly on the hardware and is also called a
"bare-metal" hypervisor.
o Type 2 is a hypervisor that is installed on top of an operating system and is also
called a "hosted" hypervisor.
In fig a, both types of hypervisors are shown.
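The difference between the two types is purely in the layering. A toy sketch (the layer lists are illustrative, not any vendor's actual architecture):

```python
# Toy model of the software stack for each hypervisor type.
# Type 1 ("bare-metal"): hypervisor sits directly on the hardware.
# Type 2 ("hosted"): hypervisor runs on top of a host operating system.

def stack(hypervisor_type: int) -> list[str]:
    layers = ["hardware"]
    if hypervisor_type == 2:
        layers.append("host OS")        # only Type 2 has a host OS layer
    layers += ["hypervisor", "guest VMs"]
    return layers

print(stack(1))  # ['hardware', 'hypervisor', 'guest VMs']
print(stack(2))  # ['hardware', 'host OS', 'hypervisor', 'guest VMs']
```

The extra "host OS" layer is why Type 2 hypervisors are easier to install but carry more overhead than bare-metal ones.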
CHARACTERISTICS OF VIRTUALIZATION
Partitioning:
In virtualization, many applications and operating systems are supported in a
single physical system by partitioning (separating) the available resources.
Isolation:
Each virtual machine is isolated from its host physical system and other
virtualized machines. Because of this isolation, if one virtual instance crashes, it
doesn't affect the other virtual machines. Also, data isn't shared between one virtual
container and another.
Encapsulation:
A virtual machine can be represented (and even stored) as a single file, so
you can identify it easily based on the service it provides. This encapsulated virtual
machine can be presented to an application as a complete entity.
Therefore, encapsulation can protect each application so that it doesn't interfere with
another application.
Q4. What are the pros and cons of Virtualization?
ANS:-
Pros of Virtualization
There are various pros of virtualization, which are as follows:
Cheaper –
Virtualization doesn't need dedicated hardware elements to be bought or installed
for every workload, and hence IT infrastructures find it to be a low-cost system to run.
There is no need to dedicate huge areas of space and huge monetary investments to
build an on-site resource.
Efficiency –
Virtualization also enables automatic updates to the hardware and software, since
these are handled by the third-party provider, so IT professionals, whether individuals
or corporations, don't have to spend money on them. Further, virtualization decreases
the load of resource management by supporting adaptability in the virtual environment.
Portability –
We can simply move a virtual machine from one defective host server to a new
host server with a very high success rate.
Flexibility –
Virtualization gives users the flexibility to use their resources as needed. Whatever
operations the cloud software performs to supply resources to the user can simply be
managed or completed through a few steps.
Cons of Virtualization
The cons of virtualization are as follows:
Security –
Data is an important element of each organization, and data security is a concern
in a virtualized environment because the servers are handled by third-party providers.
Thus, it is essential to select the virtualization solution carefully so that it can
provide adequate protection.
Limitations –
Virtualization does contain some limitations. Not every server and piece of software
out there is adaptable to virtualization, so parts of an organization's IT infrastructure
may not work with virtualized solutions. Further, some vendors have stopped
supporting them. To reduce this risk, individuals and organizations are required to
run a hybrid system.
Availability –
Availability is an essential element for an organization. Data must remain reachable
for extended periods; otherwise, the organization will lose out to the competition in the
market. Availability problems can arise from the virtualization servers themselves:
if the virtualization servers go offline, the websites hosted on them also go down.
This is controlled solely by the third-party providers, and there is nothing the client
can do about it.
Q5. Explain the taxonomy of virtualization techniques.
ANS:-
Server virtualization
There are three different approaches to server virtualization:
Full virtualization,
Para-virtualization and
OS partitioning
With full virtualization, a hypervisor serves as the hardware abstraction layer
and can host multiple virtual machines. The virtual machines are isolated from each
other.
With para-virtualization, specially modified operating system(s) are installed on
top of the hypervisor to host multiple guest operating systems.
Application virtualization
With application virtualization, an application is packaged in a single executable or in a
set of files that can be distributed independently from the operating system.
There are different types of application virtualization, of which two common types are:
sandbox applications
application streaming.
Sandbox applications are completely isolated in what is called a "bubble", where the
application is encapsulated from the underlying OS.
No installation or additional driver installation is required.
All the operating system features required for application execution are already embedded
in the executable file.
Application streaming is a form of application virtualization where an application is
divided into multiple packages.
With application streaming, the application is stored on a central server and streamed
towards the user location.
Only the application data that is required will be streamed to the user.
For example, when a user wants to use an office program such as Word, the server
will not stream the whole Office application. Only the application package with the
Word application will be streamed to the user.
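The on-demand nature of application streaming can be sketched as follows; the catalogue, the package names, and the `stream_packages` helper are all invented for illustration:

```python
# Toy sketch of application streaming: the server stores the whole
# suite, but only the packages needed for the requested application
# are sent to the user.

# Hypothetical server-side catalogue: application -> packages it needs.
CATALOGUE = {
    "word": ["core-runtime", "word-editor"],
    "excel": ["core-runtime", "excel-engine"],
}

def stream_packages(app: str) -> list[str]:
    """Return only the packages required by one application,
    not the entire suite stored on the central server."""
    return CATALOGUE[app]

sent = stream_packages("word")
print(sent)  # only Word's packages travel to the user, never Excel's
```

Streaming per package is what keeps network traffic proportional to what the user actually runs, rather than to the size of the installed suite.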
Desktop virtualization
Desktop virtualization is the separation of a desktop, consisting of an operating system,
applications and user data, from the underlying endpoint.
The endpoint is the computer device which is used to access the desktop.
Desktop virtualization can be subdivided into two types:
Client side
Server side
With server side desktop virtualization, the end-user applications are executed remotely,
on a central server, and streamed towards the endpoint via a Remote Display Protocol or
other presentation and access virtualization technology.
With client side desktop virtualization, the applications are executed at the endpoint,
which is the user location, and presented locally on the user's computer.
Different types of desktop virtualization are shown in fig. a below.
Server side
Stateless desktops refer to virtual desktops that remain 'clean' or 'stateless'. All desktop-
related modifications, for example changes to applications by a user, are removed when
the user logs off. However, user-specific settings that are recorded in the user profile can
be stored and re-used.
Stateful desktops refer to virtual desktops where users have the freedom to install
software and to make changes to their desktops. This is also called user state or
profile virtualization.
Storage virtualization
Storage virtualization technologies can be divided into two types:
Block virtualization
File virtualization
Block virtualization focuses on creating virtual disks so that distributed storage networks
appear as one storage system.
File virtualization creates a virtual file system of the storage devices in the network.
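Block virtualization is essentially an address-translation layer. The sketch below (disk counts and sizes invented for the example) maps a logical block number on one virtual disk onto a (physical disk, offset) pair, so that several distributed disks appear as a single storage system:

```python
# Toy block-virtualization layer: a single virtual disk is striped
# across several physical disks; the mapper translates a logical
# block number into (physical disk index, block offset on that disk).

DISKS = 3            # hypothetical number of physical disks
BLOCKS_PER_DISK = 4  # hypothetical capacity of each disk

def locate(logical_block: int) -> tuple[int, int]:
    """Round-robin striping: consecutive logical blocks land on
    consecutive disks, wrapping around."""
    if not 0 <= logical_block < DISKS * BLOCKS_PER_DISK:
        raise ValueError("block outside the virtual disk")
    return logical_block % DISKS, logical_block // DISKS

print(locate(0))  # (0, 0) -> first block of disk 0
print(locate(4))  # (1, 1) -> logical block 4 lives on disk 1, offset 1
```

The tenant only ever sees logical block numbers; which physical disk serves them is the virtualization layer's concern.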
Network virtualization
Network virtualization is where multiple networks can be combined into a single
network, or a single network can be logically separated into multiple parts.
The currently known network virtualizations are
Virtual LAN (VLAN),
Virtual IP (VIP)
Virtual Private Network (VPN)
VLAN is a safe method of creating independent or isolated logical networks within a
shared network.
Devices in one isolated segment cannot communicate with devices of other segments
even if they are connected to the same physical network.
A VLAN is a common feature in all modern Ethernet switches, allowing the creation of
multiple virtual networks, which isolates each segment from the others.
Virtual IP (VIP) is an IP address that is not associated with a specific computer or network
interface card (NIC), but is normally assigned to a network device that is in the path of the
network traffic.
Virtual Private Network (VPN) is a private communication network that uses a public
network, such as the Internet. The purpose of a VPN is to guarantee confidentiality on an
unsecured network channel, from one geographical location to another.
Hyper-V vs. VMware:
OS support: Hyper-V supports Windows, Linux and FreeBSD operating systems;
VMware supports Windows, Linux, Unix and macOS operating systems.
Pricing: Hyper-V's pricing depends on the number of cores on the host and may be
preferred by smaller companies; VMware charges per processor, and its pricing
structure might appeal to larger organizations.
Storage and clustering: Hyper-V's Cluster Shared Volume is somewhat more complex
and more difficult to use than VMware's storage deployment system; VMware's
Virtual Machine File System (VMFS) holds a slight edge when it comes to clustering.
Memory management: Hyper-V uses a single memory technique called "Dynamic
Memory"; using the dynamic memory settings, Hyper-V virtual machine memory can
be added or released from the virtual machine back to the Hyper-V host. VMware
implements a variety of techniques, such as memory compression and transparent
page sharing, to ensure that RAM use in the VM is optimized; it is a more complex
system than Hyper-V's memory technique.
Q7. Explain Cloud Reference Model.
ANS:-
Diagram:-
Explanation:-
Cloud Consumer:-
A person or organization that maintains a business relationship with, and uses service
from Cloud Providers.
Cloud Provider:-
A person, Organization, or entity responsible for making a service available to
interested parties.
Cloud Auditor:-
A party that can conduct independent assessment of cloud services, information system
operations, performance and security of the cloud implementation.
Cloud broker:-
An entity that manages the use, performance and delivery of cloud services, and
negotiates relationships between cloud providers and cloud consumers.
Cloud Carrier:-
An intermediary that provides connectivity and transport of cloud services from cloud
providers to cloud consumers.
Explanation:-
1. Frontend
2. Backend
1. Frontend:
The frontend of the cloud architecture refers to the client side of the cloud computing
system. It contains all the user interfaces and applications which are used by the client to
access the cloud computing services/resources. For example, use of a web browser to
access the cloud platform.
Client Infrastructure – Client Infrastructure is a part of the frontend component. It
contains the applications and user interfaces which are required to access the cloud
platform.
In other words, it provides a GUI (Graphical User Interface) to interact with the
cloud.
2. Backend:
Backend refers to the cloud itself, which is used by the service provider. It contains
the resources, manages the resources and provides security mechanisms. Along
with this, it includes huge storage, virtual applications, virtual machines, traffic control
mechanisms, deployment models, etc.
Application –
The application is the software or platform, i.e. the service, that a client accesses
according to his or her requirements.
Service –
Service in the backend refers to the three major types of cloud-based services: SaaS,
PaaS and IaaS. It also manages which type of service the user accesses.
Runtime Cloud –
Runtime cloud in the backend provides the execution environment (runtime) to the
virtual machines.
Storage –
Storage in the backend provides a flexible and scalable storage service and management
of stored data.
Infrastructure –
Cloud infrastructure in the backend refers to the hardware and software components
of the cloud, such as servers, storage, network devices and virtualization software.
Management –
Management in the backend manages backend components like the application,
service, runtime cloud, storage and infrastructure, along with security mechanisms.
Security –
Security in the backend implements security mechanisms, such as virtual firewalls
and data encryption, for the backend systems.
Internet –
The Internet connection acts as the medium, or a bridge, between frontend and backend,
and establishes the interaction and communication between them.
Benefits of Cloud Computing Architecture :
Makes overall cloud computing system simpler.
Improves data processing requirements.
Helps in providing high security.
Makes it more modularized.
Results in better disaster recovery.
Gives good user accessibility.
Reduces IT operating costs.
Definition:-
Examples :
Microsoft Azure,
Benefits :
Full control of the computing resources through administrative access to
VMs.
Flexible and efficient renting of computer hardware
Portability, interoperability with legacy applications
Issues:
Compatibility with legacy security vulnerabilities
Virtual Machine sprawl
Robustness of VM-level isolation
Data erase practices
Characteristics:
Virtual machines with pre-installed software.
Virtual machines with pre-installed operating systems such as Windows,
Linux, and Solaris.
On-demand availability of resources.
Allows storing copies of particular data at different locations.
The computing resources can be easily scaled up and down.
Q10. Explain in detail PaaS?
Definition:-
PaaS provides:
Application deployment
Configuring application components
Provisioning and configuring supporting technologies
PaaS classification:
PaaS-I: Runtime environment with Web-hosted application development
platform. Rapid application prototyping.
PaaS-II: Runtime environment for scaling Web applications. The runtime
could be enhanced by additional components that provide scaling
capabilities.
PaaS-III: Middleware and programming model for developing distributed
applications in the cloud.
Examples:
Google App Engine
Force.com
Benefits:-
Lower administrative overhead
Lower total cost of ownership
Scalable solutions
More current system software
Characteristics of PaaS:
1. The runtime framework executes end-user code according to the policies set
by the user and the provider.
2. Provide services for creation, delivery, monitoring, management, reporting of
applications.
3. PaaS provides built-in security, scalability, and web service interfaces.
4. PaaS provides built-in tools for defining workflow, approval processes, and
business rules.
5. It is easy to integrate PaaS with other applications on the same platform.
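Characteristic 1 — a runtime framework that executes end-user code under policies set by the user and the provider — can be caricatured in a few lines. The policy (a cap on result size) and the `run_with_policy` helper are invented purely for illustration:

```python
# Toy PaaS runtime: execute tenant-supplied code, but only within
# the resource policy agreed between the user and the provider.

def run_with_policy(func, *, max_items: int):
    """Run end-user code, then enforce a (made-up) policy capping
    how many result items the tenant may produce."""
    result = func()
    if len(result) > max_items:
        raise RuntimeError("policy violation: result too large")
    return result

# Tenant-supplied code, executed under a provider policy of 5 items.
out = run_with_policy(lambda: [n * n for n in range(3)], max_items=5)
print(out)  # [0, 1, 4]
```

The tenant writes only the function; the platform owns the surrounding execution, limits, and monitoring.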
Issues:
Lack of portability between PaaS clouds
Event based processor scheduling
Security engineering of PaaS applications
Q11. Explain in detail SaaS.
ANS:-
Definition:-
It is the service with which end users interact directly. It provides a means to free
users from complex hardware and software management.
There are several SaaS applications listed below:
Examples:
Gmail
Google drive
Dropbox
WhatsApp
Benefits
Characteristics of SaaS:
Issues
Network dependence
Browser based risks
Lack of portability between SaaS clouds
Cost Effective
Since the public cloud shares the same resources with a large number of customers, it
turns out to be inexpensive.
Reliability
The public cloud employs a large number of resources from different locations. If any of
the resources fails, the public cloud can employ another one.
Flexibility
The public cloud can smoothly integrate with a private cloud, which gives customers a
flexible approach.
Location Independence
Public cloud services are delivered through the Internet, ensuring location independence.
High Scalability
They can be scaled up or down according to the requirement.
Disadvantages
Here are some disadvantages of public cloud model:
Diagram:-
Benefits
High Security and Privacy
More Control
The private cloud offers more control than the public cloud because it is accessed only
within an organization.
Cost and Energy Efficiency
Private clouds are not as cost-effective as public clouds, but they offer more
efficiency than public cloud resources.
Disadvantages
Restricted Area of Operation
High Priced
Limited Scalability
Additional skilled expertise required to maintain
Diagram:-
Benefits
Cost Effective
Sharing Among Organizations
Security
Storage as a service is a cloud business model in which a company leases or rents its storage
infrastructure to another company or individuals to store data.
Storage as a Service is cloud storage that you rent from a Cloud Service Provider (CSP) and
that provides basic ways to access that storage. Enterprises, small and medium businesses,
home offices, and individuals can use the cloud for multimedia storage, data repositories,
data backup and recovery, and disaster recovery.
There are also higher-tier managed services that build on top of this, such as Database as a
Service, in which you can write data into tables that are hosted through CSP resources.
The key benefit to storage as a service is that you are offloading the cost and effort to
manage data storage infrastructure and technology to a third-party CSP. This makes it much
more effective to scale up storage resources without investing in new hardware or taking on
configuration costs.
Storage as a service is fast becoming the method of choice for small and medium-scale
businesses, because storing files remotely rather than locally offers an array of
advantages for professional users.
Storage as a service can be used for a variety of purposes, from long-term archival storage
to short-term transfers of large amounts of data. Since Storage as a service is a type of
software defined storage, the storage capacity available to the customer can vary easily, and
can be expanded at short notice without the capital outlay required to purchase extra servers.
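The "basic ways to access that storage" a CSP exposes can be sketched as a generic key/value object store; the bucket-style interface below is a toy, not any particular vendor's API:

```python
# Toy object store: the provider exposes simple put/get calls keyed
# by object name, and the tenant never sees disks or file systems.

class ObjectStore:
    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data          # capacity grows on demand

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def used_bytes(self) -> int:
        """What the tenant would be billed for under pay-per-use."""
        return sum(len(v) for v in self._objects.values())

store = ObjectStore()
store.put("backup/2024-01.tar", b"x" * 10)
store.put("photos/cat.jpg", b"y" * 5)
print(store.used_bytes())  # 15 bytes of billable storage
```

Because capacity is just a dictionary growing behind an API, expanding storage needs no capital outlay on the tenant's side, which is the core STaaS proposition.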
Advantages of STaaS
Storage costs. Personnel, hardware and physical storage space expenses are
reduced.
Scalability. With most public cloud services, users only pay for the resources
that they use.
Examples of STaaS vendors include Dell EMC, Hewlett Packard Enterprise (HPE), NetApp
and IBM. Dell EMC provides Isilon NAS storage, EMC Unity hybrid-flash storage and
other storage options. HPE has an equally large, if not larger, presence in storage systems
compared to Dell EMC.
Q.18 Explain in detail Database as a Service.
Database as a service is just one more "as a service" offering that can bring agility,
flexibility, and scaling to any business, no matter your size or industry.
Database as a service (DBaaS) is one of the fastest growing cloud services; it's projected to
reach $320 billion by 2025. The service allows organizations to take advantage of database
solutions without having to manage and maintain the underlying technologies.
The use of DBaaS is growing as more organizations shift from on-premises systems to cloud
databases. DBaaS vendors include cloud platform providers that sell database software and
other database makers that host their software on one or more of the cloud platforms.
The benefits of DBaaS set it apart from other Cloud services as it delivers database
functionality on the same scale as a relational database management system.
Faster deployment. Free your resources from administrative tasks and engage
your employees on tasks that lead directly to innovation and business growth—
instead of merely keeping the systems running.
Resource elasticity. The technology resources dedicated for database systems
can be changed in response to changing usage requirements. This is especially
suitable in business use cases where the demand for database workloads is
dynamic and not entirely predictable.
Rapid provisioning. Self-service capabilities allow users to provision new
database instances as required, often with a few simple clicks. This removes the
governance hurdles and administrative responsibilities from IT.
Business agility. Organizations can take advantage of rapid provisioning and
deployment to address changing business requirements. In DevOps
organizations, this is particularly useful as Devs and Ops both take on collective
responsibilities of operations tasks.
Security. The technologies support encryption and multiple layers of security to
protect sensitive data at rest, in transit and during processing.
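The rapid-provisioning benefit can be caricatured as below; `provision_database`, its parameters, and the endpoint naming are all hypothetical, not a real vendor API:

```python
# Toy DBaaS self-service call: one request yields a ready-to-use
# database instance, with no hardware or installation work by the user.

import uuid

def provision_database(engine: str, size_gb: int) -> dict:
    """Pretend control-plane call: validate the request and hand back
    connection details for a freshly created instance."""
    if engine not in {"postgres", "mysql"}:
        raise ValueError("unsupported engine")
    return {
        "engine": engine,
        "size_gb": size_gb,
        "status": "available",
        "endpoint": f"{engine}-{uuid.uuid4().hex[:8]}.example-dbaas.net",
    }

db = provision_database("postgres", size_gb=20)
print(db["status"])  # available in seconds, not weeks
```

A real control plane would of course allocate storage and compute behind this call; the point is that the consumer's interface is a single self-service request.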
Q.19 Explain in detail Process as a Service?
Answer:
Business Process as a Service (BPaaS) is any type of horizontal or vertical business process
that is delivered based on the cloud service model.
It relies on the other cloud services, which include Software as a Service (SaaS), Platform
as a Service (PaaS), and Infrastructure as a Service (IaaS).
These business processes can really be any service that can be automated, including
managing email, shipping a package, or managing customer credit.
BPaaS keeps companies in lockstep with industry best practices and technology
advancements. Companies can also easily increase service levels during peak periods and
bring new products and services to market faster with BPaaS's unique operating flexibility
and agility.
Characteristics:
1) Product/Service Deliverability:
From managing inventory to organizing email and customer records, BPaaS helps
companies facilitate the delivery of products and services in an automated, streamlined
way with the help of cloud technologies. BPaaS is standardized for use across industries
and organizations, so it's flexible and repeatable, resulting in higher efficiency and,
ultimately, better service and experience for customers.
BPaaS provides a business with the latest digital tools, technologies, processes and talent
to improve its efficiency, service and the customer experience, without the large capital
investment traditionally required. By implementing BPaaS, companies can shift to a
pay-per-use (OPEX) consumption model and reduce total cost of ownership.
BPaaS can scale on demand when a company experiences a peak workload. Due to
its innate configurability, applicable across multiple business areas, and its well-defined
interaction with other foundational cloud services like SaaS, the service can make use
of its cloud foundation to scale to accommodate large fluctuations in business process
needs.
Any business process (for example, payroll, printing, ecommerce) delivered as a service
over the Internet and accessible by one or more web-enabled interfaces (PCs, smart devices
and phones) can be considered a BPaaS.
Q.20 Explain in detail Information as a Service?
Answer:
Information as a Service (IaaS) is an emerging cloud business model in which a company
shares or sells relevant information to another company or to individuals to support their
business.
This comparatively low-effort approach can enable departments to maximize service
quality and provides price savings. It is additionally a vital advancement if cloud computing
is being focused on and considered.
IaaS (Information as a Service) centers around giving insights based on the
analysis of processed data. In this case, the client's job-to-be-done is increasingly about
coming to their own decisions or even "selling" an idea based on certain
information. Additionally, IaaS clients would prefer not to, or do not have the resources to,
process and analyze data; rather, they will exchange value for analysis from trusted
parties. The IaaS (Information) business model is all about transforming data into
information for clients who need something, and will pay for something, more
tailored.
2. Payment Processing
Characteristics:
1) Accuracy
2) Completeness
3) Cost Effective
4) Relevance
5) Easily Understood
Through a lifecycle approach, IaaS can help a business capture, organize, integrate,
transform, analyze, and use information to create information inside an SOA domain.
Information as a service encompasses a range of software, services and solutions to
address the appropriate starting point for the business:
Users control their identity and must agree to the utilization of their information. For
example, the Hamlet Forum, a global free gathering of data security leaders, has
supplemented their contribution on how to collaborate on Information as a Service safely
within the cloud to facilitate the needs of customers.
2) Minimal Disclosure
Only entities that have an upheld use of the information contained in a digital
identity, and have a trusted identity relationship with the owner of the information,
may be offered access to that information.
Integration as a Service (IaaS) is a cloud-based delivery model that strives to connect on-
premise data with data located in cloud-based applications. This paradigm facilitates real-
time exchange of data and programs among enterprise-wide systems and trading partners.
IaaS vendors will typically provide infrastructure, such as servers, along with middleware.
Vendors will also commonly supply tools for customers to build, test, deploy and manage
cloud applications. Payment is typically available in the form of a 'pay as you go' model, so
users can readily scale their environments up or down. Most IaaS vendors will also share a
multi-tenant setup.
Customers of an IaaS service will typically interact with their data via a web-based
interface, which interconnects backend data, systems and files with other data, applications
and systems in other locations. IaaS also removes system and data interdependencies
through this process.
Uses of IaaS
IaaS is commonly used in small and medium-sized businesses since it facilitates low-cost,
efficient and reliable B2B integration. IaaS allows enterprises of modest size to spend more
of their valuable resources on the products and services that directly benefit customers. In
addition, IaaS can streamline infrastructure management (IM) by minimizing the amount of
unnecessary or redundant time and energy spent on it.
Organizations can utilize IaaS to:
Develop and test applications through the tools some providers may offer.
As an example of IaaS in use, the New York Times archived much of their historical data in
less than two days using an IaaS system developed by Amazon called Elastic Compute
Cloud (EC2). Without the assistance of EC2 or a similar IaaS platform, the same process
would probably have taken weeks.
Benefits of IaaS
A consistent architecture that is created through connecting applications and resources, both
in cloud and on-premise, in one interface.
The data center infrastructure is handled by the service provider for the organization.
The organization does not have to worry about software or hardware upgrades since the
service provider handles both.
Startups do not have to pay the initial cost of buying, building and managing an extensive
infrastructure.
Types of TaaS
Functional Testing as a Service: TaaS functional testing may include UI/GUI
testing, regression, integration and automated User Acceptance Testing (UAT),
though these need not all be part of functional testing.
Performance Testing as a Service: Multiple users access the application at
the same time. TaaS mimics a real-world user environment by creating virtual
users and performing load and stress tests.
Security Testing as a Service: TaaS scans the applications and websites for any
vulnerability.
Once user scenarios are created and the test is designed, these service providers deliver
servers to generate virtual traffic across the globe. TaaS is well suited for:
Testing of applications that require extensive automation and a short test
execution cycle.
Performing a testing task that does not require in-depth knowledge of the design or
the system.
Ad-hoc or irregular testing activities that require extensive resources.
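The performance-testing idea above (virtual users generating concurrent load) can be sketched as follows. This is an illustrative stand-in, not a real TaaS client: the stub request function is a placeholder for an actual HTTP call, and the user counts are arbitrary.

```python
import threading
import time

def simulate_load(request_fn, virtual_users=10, requests_per_user=5):
    """Run request_fn concurrently from several 'virtual users' and collect latencies."""
    latencies = []
    lock = threading.Lock()

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()                       # stand-in for a real request
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append(elapsed)

    threads = [threading.Thread(target=user) for _ in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {"requests": len(latencies),
            "avg_latency": sum(latencies) / len(latencies),
            "max_latency": max(latencies)}

# Example: a stub standing in for a real HTTP call
report = simulate_load(lambda: time.sleep(0.001), virtual_users=4, requests_per_user=3)
```

A real TaaS provider would run such virtual users from geographically distributed servers rather than local threads.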
Benefits of Cloud Testing
Flexible Test Execution and Test Assets
Some users claim 40-60% savings with cloud testing vs. the traditional testing
model.
Achieve a fast return on investment by eliminating investment in hardware
procurement, management, and maintenance, software licensing, etc.
Deliver products more quickly through rapid procurement, project set-up, and
execution.
Ensure data integrity and anytime, anywhere accessibility.
Reduce operational costs, maintenance costs, and investments.
Pay as you use.
Q.23 State the concept of scaling of cloud infrastructure .
Data storage capacity, processing power, and networking can all be scaled
using existing cloud computing infrastructure. Better yet, scaling can be done
quickly and easily, typically with little to no disruption or downtime. Third-
party cloud providers have all the infrastructure already in place; in the past,
when scaling with on-premises physical infrastructure, the process could take
weeks or months and require tremendous expense.
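The scaling behaviour described above is usually driven by a simple control loop. The sketch below shows a hypothetical threshold-based scale-out/scale-in decision; the utilization thresholds and the doubling/halving policy are illustrative assumptions, not any provider's actual algorithm.

```python
def scale_decision(cpu_utilization, current_nodes, high=0.75, low=0.25,
                   min_nodes=1, max_nodes=20):
    """Return the new node count: scale out above `high`, scale in below `low`."""
    if cpu_utilization > high and current_nodes < max_nodes:
        return min(current_nodes * 2, max_nodes)   # double capacity, capped
    if cpu_utilization < low and current_nodes > min_nodes:
        return max(current_nodes // 2, min_nodes)  # halve capacity, floored
    return current_nodes                           # within the comfort band

# Example: heavy load doubles the fleet, light load shrinks it
busy = scale_decision(0.90, current_nodes=4)   # -> 8
idle = scale_decision(0.10, current_nodes=4)   # -> 2
```

In a real cloud this decision would be evaluated periodically against monitored metrics, which is why scaling can happen with little to no downtime.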
The major cloud scalability benefits are driving cloud adoption for businesses
large and small:
At the server level, RAID systems should always be used, and there should always be
a spare hard disk in the server room.
You should have backups in place; both local and off-site backups are generally
recommended, so a NAS should be in your server room.
If you are an enterprise, you should have a disaster recovery site, generally
located outside the city of the main site. Its main purpose is to act as a stand-by:
in case of a disaster, it replicates and backs up the data.
The second category is technological hazards that include accidents or the failures of
systems and structures such as pipeline explosions, transportation accidents, utility
disruptions, dam failures, and accidental hazardous material releases.
The third category is human-caused threats that include intentional acts such as active
assailant attacks, chemical or biological attacks, cyber-attacks against data or
infrastructure, and sabotage.
DISASTER RECOVERY MANAGEMENT
Aneka is a platform and a framework for developing distributed applications on the Cloud.
It harnesses the spare CPU cycles of a heterogeneous network of desktop PCs and servers or
datacenters on demand. Aneka provides developers with a rich set of APIs for transparently
exploiting such resources and expressing the business logic of applications by using the
preferred programming abstractions. System administrators can leverage a collection of
tools to monitor and control the deployed infrastructure. This can be a public cloud available
to anyone through the Internet, or a private cloud constituted by a set of nodes with
restricted access.
The Aneka-based computing cloud is a collection of physical and virtualized resources
connected through a network, which can be either the Internet or a private intranet. Each of
these resources hosts an instance of the Aneka Container representing the runtime
environment where the distributed applications are executed. The container provides the
basic management features of the single node and leverages all the other operations on the
services that it is hosting. The services are broken up into fabric, foundation, and execution
services. Fabric services directly interact with the node through the Platform Abstraction
Layer (PAL) and perform hardware profiling and dynamic resource provisioning.
Foundation services identify the core system of the Aneka middleware, providing a set of
basic features to enable Aneka containers to perform specialized and specific sets of tasks.
Execution services directly deal with the scheduling and execution of applications in the
Cloud.
One of the key features of Aneka is the ability of providing different ways for expressing
distributed applications by offering different programming models; execution services are
mostly concerned with providing the middleware with an implementation for these models.
Additional services such as persistence and security are transversal to the entire stack of
services that are hosted by the Container. At the application level, a set of different
components and tools are provided to: 1) simplify the development of applications (SDK);
2) port existing applications to the Cloud; and 3) monitor and manage the Aneka
Cloud.
Aneka Structure:
Aneka is a software platform for developing cloud computing applications. In Aneka, cloud
applications are executed in containers. Fabric Services define the lowest level of the
software stack represented by the Aneka Container. They provide access to the resource-
provisioning subsystem and to the monitoring features implemented in Aneka. Fabric
Services are the core services of the Aneka Cloud and define the infrastructure management
features of the system. Foundation Services are concerned with the logical management of
a distributed system built on top of the infrastructure and provide ancillary services for
delivering applications.
Application services manage the execution of applications and constitute a layer that varies
according to the specific programming model used to develop distributed applications in
Aneka. There are mainly two major components in Aneka: the SDK
(Software Development Kit), which includes the Application Programming Interface (API) and
tools needed for the rapid development of applications (the Aneka API supports three
popular cloud programming models: Tasks, Threads, and MapReduce), and a runtime engine
and platform for managing the deployment and execution of applications on a private or
public cloud. One of the notable features of the Aneka PaaS is its support for provisioning
private cloud resources, from desktops and clusters to a virtual data center, using VMware
and Citrix XenServer, and public cloud resources such as Windows Azure, Amazon EC2, and
GoGrid.
Aneka's potential as a Platform as a Service has been successfully harnessed by its users
and customers in several areas, including engineering, life sciences, education, and business
intelligence. An Aneka-based computing cloud is a collection of physical and
virtualized resources connected via a network, either the Internet or a private intranet. Each
resource hosts an instance of the Aneka Container that represents the runtime environment
where distributed applications are executed. The container provides the basic management
features of a single node and takes advantage of all the other functions of its hosted
services. Services are divided into fabric, foundation, and execution services.
Foundation services identify the core system of the Aneka middleware, which provides a set of
basic features to enable Aneka containers to perform specialized and specific sets of tasks.
Fabric services interact directly with nodes through the Platform Abstraction Layer (PAL)
and perform hardware profiling and dynamic resource provisioning. Execution services deal
directly with scheduling and executing applications in the Cloud. One of the key features of
Aneka is its ability to provide a variety of ways to express distributed applications by
offering different programming models; execution services are mostly concerned with
providing the middleware with an implementation of these models. Additional services such as
persistence and security are transversal to the whole stack of services hosted by the container.
Fabric Services are fundamental services of the Aneka Cloud
and define the basic infrastructure management features of the system. Foundation Services
are related to the logical management of the distributed system built on top of the
infrastructure and provide supporting services for the execution of distributed applications.
All the supported programming models can integrate with and leverage these services to
provide advanced and comprehensive application management. These services cover:
• Storage management for applications
• Accounting, billing, and resource pricing
• Resource reservation
Foundation Services provide a uniform approach to managing distributed applications
and allow developers to concentrate only on the logic that distinguishes a specific
programming model from the others. Together with the Fabric Services, Foundation
Services constitute the core of the Aneka middleware. These services are mostly consumed
by the execution services and Management Consoles. External applications can leverage the
exposed capabilities for providing advanced application management.
1.Storage management:
Data management is an important aspect of any distributed system, even in computing
clouds. Applications operate on data, which are mostly persisted and moved in the format
of files. Hence, any infrastructure that supports the execution of distributed applications
needs to provide facilities for file/data transfer management and persistent storage. Aneka
offers two different facilities for storage management: a centralized file storage, which is
mostly used for the execution of compute-intensive applications, and a distributed file
system, which is more suitable for the execution of data-intensive applications.
The requirements for the two types of applications are rather different. Compute-intensive
applications mostly require powerful processors and do not have high demands in terms of
storage, which in many cases is used to store small files that are easily transferred from one
node to another. In this scenario, a centralized storage node, or a pool of storage nodes, can
constitute an appropriate solution. In contrast, data-intensive applications are characterized
by large data files (gigabytes or terabytes), and the processing power required by tasks does
not constitute a performance bottleneck. In this scenario, a distributed file system
harnessing the storage space of all the nodes belonging to the cloud might be a better and
more scalable solution.
2. Accounting, billing, and resource pricing:
Accounting Services keep track of the status of applications in the Aneka Cloud. The
collected information provides a detailed breakdown of the distributed infrastructure usage
and is vital for the proper management of resources. The information collected for
accounting is primarily related to infrastructure usage and application execution. A
complete history of application execution and storage utilization, as well as other resource
utilization parameters, is captured and maintained by the Accounting Services. This
information constitutes the foundation on which users are charged in Aneka. Billing is
another important feature of accounting. Aneka is a multitenant cloud programming
platform in which the execution of applications can involve provisioning additional
resources from commercial IaaS providers. Aneka Billing Service provides detailed
information about each user‘s usage of resources, with the associated costs.
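The accounting-to-billing flow described above can be illustrated with a small sketch. The resource names and per-unit prices below are invented for the example and do not reflect Aneka's actual pricing model.

```python
from collections import defaultdict

# Hypothetical per-unit prices; real pricing is configured per deployment.
PRICES = {"cpu_hours": 0.10, "storage_gb_hours": 0.02}

def bill(usage_records):
    """Aggregate accounting records into a per-user cost total."""
    totals = defaultdict(float)
    for rec in usage_records:  # rec: {"user": ..., "resource": ..., "amount": ...}
        totals[rec["user"]] += PRICES[rec["resource"]] * rec["amount"]
    return dict(totals)

invoice = bill([
    {"user": "alice", "resource": "cpu_hours", "amount": 10},
    {"user": "alice", "resource": "storage_gb_hours", "amount": 50},
    {"user": "bob", "resource": "cpu_hours", "amount": 4},
])
# alice: 10*0.10 + 50*0.02 = 2.0 ; bob: 4*0.10 = 0.4
```

The accounting service's role is to capture the usage records accurately; billing is then a straightforward aggregation like the one above.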
3.Resource reservation:
Aneka‘s Resource Reservation supports the execution of distributed applications and
allows for reserving resources for exclusive use by specific applications. Resource
reservation is built out of two different kinds of services: Resource Reservation and the
Allocation Service. Resource Reservation keeps track of all the reserved time slots in the
Aneka Cloud and provides a unified view of the system. The Allocation Service is installed
on each node that features execution services and manages the database of information
regarding the allocated slots on the local node. Applications that need to complete within a
given deadline can make a reservation request for a specific number of nodes in a given
timeframe.
If it is possible to satisfy the request, the Reservation Service will return a reservation
identifier as proof of the resource booking. During application execution, such an identifier
is used to select the nodes that have been reserved, and they will be used to execute the
application. On each reserved node, the execution services will check with the Allocation
Service that each job has valid permissions to occupy the execution timeline by verifying
the reservation identifier. Even though this is the general reference model for the
reservation infrastructure, Aneka allows for different implementations of the service, which
mostly vary in the protocol that is used to reserve resources or the parameters that can be
specified while making a reservation request.
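The reservation protocol above (request nodes for a time slot, receive an identifier as proof of the booking, validate the identifier at execution time) can be sketched roughly as follows. This toy model tracks only aggregate node counts and is not Aneka's actual implementation.

```python
import uuid

class ReservationService:
    """Toy reservation service: book nodes for a time slot, get an identifier back."""

    def __init__(self, total_nodes):
        self.total_nodes = total_nodes
        self.reservations = {}  # reservation_id -> (nodes, start, end)

    def _nodes_in_use(self, start, end):
        # Sum nodes of all reservations whose slot overlaps [start, end)
        return sum(n for n, s, e in self.reservations.values()
                   if s < end and start < e)

    def reserve(self, nodes, start, end):
        if self._nodes_in_use(start, end) + nodes > self.total_nodes:
            return None                       # request cannot be satisfied
        rid = str(uuid.uuid4())
        self.reservations[rid] = (nodes, start, end)
        return rid                            # proof of the resource booking

    def is_valid(self, rid, at_time):
        # Execution services would call this to verify a job's permission
        entry = self.reservations.get(rid)
        return entry is not None and entry[1] <= at_time < entry[2]

svc = ReservationService(total_nodes=10)
rid = svc.reserve(6, start=0, end=100)        # booked: an identifier is returned
denied = svc.reserve(6, start=50, end=150)    # overlaps, would exceed capacity -> None
```

Different implementations, as the text notes, would mainly change the protocol and the parameters of `reserve`, not this overall shape.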
1. Scheduling
Scheduling services are in charge of planning the execution of
distributed applications on top of Aneka and governing the allocation of
the jobs composing an application to nodes. They also constitute the
integration point with several other foundation and fabric services,
such as the resource provisioning service, the reservation service,
the accounting service, and the reporting service.
Common tasks that are performed by the scheduling component are the
following:
Job to node mapping
Rescheduling of failed jobs
Jobs status monitoring
Application status monitoring
Aneka does not provide a centralized scheduling engine; instead, each programming
model features its own scheduling service that needs to work in synergy with the
existing services of the middleware.
The possibility of having different scheduling engines for different models gives
great freedom in implementing scheduling and resource allocation strategies but, at
the same time, requires a careful design of the use of shared resources. In this
scenario, common situations that have to be appropriately managed are the
following: multiple jobs sent to the same node at the same time; jobs without
reservations sent to reserved nodes; and jobs sent to nodes where the required
services are not installed. Aneka's Foundation Services provide sufficient
information to avoid these cases, but the runtime infrastructure does not feature
specific policies to detect these conditions and provide corrective actions.
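A minimal sketch of the job-to-node mapping task listed above, assuming a greedy least-loaded policy and a per-node list of installed services (both illustrative choices, not Aneka's actual strategy). It also flags the problem case mentioned in the text: jobs whose required service is not installed on any node.

```python
def schedule(jobs, nodes):
    """Greedy mapping: send each job to the least-loaded node hosting its service."""
    load = {node["name"]: 0 for node in nodes}
    placement = {}
    for job in jobs:
        eligible = [n for n in nodes if job["service"] in n["services"]]
        if not eligible:
            placement[job["id"]] = None    # required execution service not installed
            continue
        target = min(eligible, key=lambda n: load[n["name"]])
        placement[job["id"]] = target["name"]
        load[target["name"]] += 1          # one more job queued on that node
    return placement

nodes = [{"name": "n1", "services": {"task"}},
         {"name": "n2", "services": {"task", "mapreduce"}}]
jobs = [{"id": 1, "service": "task"},
        {"id": 2, "service": "task"},
        {"id": 3, "service": "mapreduce"},
        {"id": 4, "service": "thread"}]
placement = schedule(jobs, nodes)
```

Rescheduling of failed jobs, in this sketch, would amount to re-running `schedule` on the failed subset.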
2. Execution
Execution services control the execution of the single jobs that compose
applications. They are in charge of setting up the runtime environment
hosting the execution of jobs. As for the scheduling services,
each programming model has its own requirements, but it is possible to
identify some common operations that apply across all the range of
supported models:
Unpacking the jobs received from the scheduler
Retrieval of input files required for job execution
Sandboxed execution of jobs
Submission of output files at the end of the execution
Execution failure management (i.e., capturing sufficient
contextual information useful to identify the nature of the failure)
Performance monitoring
Packing jobs and sending them back to the scheduler
Execution services constitute a more self-contained unit with respect to
the corresponding scheduling services. They handle less information and
are required to integrate only with the storage service and the
local Allocation and Monitoring Services.
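The common operations above can be condensed into a sketch of a job runner that captures failure context. The job structure is invented for illustration, and real sandboxing, file staging, and repackaging are omitted.

```python
import traceback

def execute_job(job):
    """Run a job's callable, capturing output or failure context (sandbox omitted)."""
    try:
        result = job["run"](*job.get("args", ()))
        return {"id": job["id"], "status": "completed", "output": result}
    except Exception as exc:
        # Failure management: keep enough context to identify the failure's nature
        return {"id": job["id"], "status": "failed",
                "error": repr(exc), "trace": traceback.format_exc()}

ok = execute_job({"id": 1, "run": lambda x: x * 2, "args": (21,)})
bad = execute_job({"id": 2, "run": lambda: 1 / 0})
```

In Aneka the resulting record would be packed and sent back to the scheduler; here it is simply returned.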
Application services constitute the runtime support of the programming
model in the Aneka Cloud. Currently there are several supported
models:
Task Model: This model provides support for independent "bag of
tasks" applications and many-task computing. In this model, an
application is modeled as a collection of tasks that are
independent of each other and whose execution can be sequenced
in any order.
Thread Model: This model provides an extension to the classical
multithreaded programming to a distributed infrastructure and
uses the abstraction of Thread to wrap a method that is executed
remotely.
MapReduce Model: This is an implementation of MapReduce as
proposed by Google on top of Aneka.
Parameter Sweep Model: This model is a specialization of the Task Model for applications
that can be described by a template task whose instances are created by generating
different combinations of parameters, each identifying a specific point in the domain of
interest.
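The Parameter Sweep Model's template-instantiation step can be sketched with a Cartesian product over the parameter space; the command-line template here is a made-up example.

```python
from itertools import product

def sweep(template, parameter_space):
    """Instantiate a template task for every combination of parameters."""
    names = sorted(parameter_space)
    tasks = []
    for values in product(*(parameter_space[n] for n in names)):
        params = dict(zip(names, values))     # one point in the domain of interest
        tasks.append(template.format(**params))
    return tasks

tasks = sweep("simulate --alpha {alpha} --beta {beta}",
              {"alpha": [0.1, 0.5], "beta": [1, 2, 3]})
# 2 x 3 = 6 task instances, one per parameter combination
```

Each generated task is independent, so the resulting set can be scheduled exactly like a bag of tasks.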
Infrastructure Organization :
The figure below provides an overview of an Aneka Cloud from an infrastructure point of
view.
The scenario is a reference model for all the different deployments Aneka supports.
A central role is played by the administrative console, which performs all the
required management operations; repositories provide the libraries required to lay out
and install the basic Aneka platform.
These libraries constitute the software image for the node manager and the
container programs. Repositories can make libraries available through a variety of
communication channels, such as HTTP, FTP, common file sharing, and so on.
The system includes four key components: the Aneka Master, Aneka Worker,
Aneka Management Console, and Aneka Client Libraries. The Aneka Master and
Aneka Worker are both Aneka Containers, which represent the basic deployment
unit of Aneka-based Clouds.
The management console can manage multiple repositories and select the
one that best suits the specific deployment. The infrastructure is deployed by
harnessing a collection of nodes and installing the Aneka daemon on them.
The daemon constitutes the remote management service used to deploy and
control container instances. The collection of resulting containers identifies the
Aneka Cloud.
From an infrastructure point of view, the management of physical and virtual nodes
is performed uniformly as long as it is possible to have an Internet connection and
remote administrative access to the node.
The logical organization of Aneka Clouds can be diverse, since it strongly depends on the
configuration selected for each of the container instances belonging to the cloud. The most
common scenario is to use a master-worker configuration with separate nodes for
storage. The master node features all the services that are most likely to be present in one
single copy and that provide the intelligence of the Aneka Cloud.
The master node also provides connection to an RDBMS facility where the state of several
services is maintained. The worker nodes constitute the workforce of the Aneka Cloud and
are generally configured for the execution of applications.
Q.32 Explain the concept private cloud deployment mode of Aneka.
A private deployment mode is mostly constituted by local physical resources and
infrastructure management software providing access to a local pool of nodes, which
might be virtualized.
Aneka includes an extensible set of APIs associated with programming models like
MapReduce.
It works as your virtual computing environment with choice of deployment to store and
who has access to the infrastructure.
A private cloud refers to a cloud deployment model operated exclusively for a single
organization, whether it is physically located at the company’s onsite data center, or is
managed and hosted by a third-party provider.
This deployment is acceptable for a scenario in which the workload of the system is
predictable and local virtual machine manager can easily address excess capacity demand.
Most of the Aneka nodes are constituted of physical nodes with a long lifetime and a static
configuration and generally do not need to be reconfigured often.
Workstation clusters might have some specific legacy software that is required for
supporting the execution of applications and should be preferred for the execution of
such applications.
Benefits of Private Cloud
Data privacy – It is ideal for storing corporate data where only authorized personnel
gets access.
Security – Segmentation of resources within the same infrastructure can help with
better access and higher levels of security.
Supports Legacy Systems – This model supports legacy systems that cannot access the
public cloud.
Limitations of Private Cloud
Higher cost – With the benefits you get, the investment will also be larger
than for the public cloud. Here, you will pay for software, hardware, and resources for
staff and training.
Fixed scalability – The hardware you choose will accordingly constrain the direction
in which you can scale.
Q.33 Explain the concept public cloud deployment mode of Aneka
Public Cloud deployment mode
The installation of Aneka master and worker nodes is performed over a completely
virtualized infrastructure that is hosted on the infrastructure of one or more resource
providers such as Amazon EC2 or GoGrid. In this case it is possible to have a static
deployment where the nodes are provisioned beforehand and used as though they were
real machines.
Dynamic provisioning can easily solve this issue as it does for increasing the computing
capability of an Aneka Cloud. Deployments using different providers are unlikely to
happen because of the data transfer costs among providers, but they might be a possible
scenario for federated Aneka Clouds. In this scenario resources can be shared or leased
among providers under specific agreements and more convenient prices. In this case the
specific policies installed in the Resource Provisioning Service can discriminate among
different resource providers, mapping different IaaS providers to provide the best solution
to a provisioning request.
Aneka SDK
Aneka provides APIs for developing applications on top of existing programming models,
implementing new programming models, and developing new services to integrate into
the Aneka Cloud. The development of applications mostly focuses on the use of existing
features and leveraging the services of the middleware, while the implementation of new
programming models or new services enriches the features of Aneka. The SDK provides
support for both programming models and services by means of the Application Model and
the Service Model. The former covers the development of applications and new
programming models; the latter defines the general infrastructure for service
development.
(1)Application model:
Aneka provides support for distributed execution in the Cloud with the abstraction of
programming models. A programming model identifies both the abstraction used by the
developers and the runtime support for the execution of programs on top of Aneka. The
Application Model represents the minimum set of APIs that is common to all the
programming models for representing and programming distributed applications on top
of Aneka. This model is further specialized according to the needs and the particular
features of each of the programming models.
An overview of the components that define the Aneka Application Model is shown in
Figure 5.8. Each distributed application running on top of Aneka is an instance of the
ApplicationBase<M> class, where M identifies the specific type of application manager
used to control the application. Application classes constitute the developers’ view of a
distributed application on Aneka Clouds, whereas application managers are internal
components that interact with Aneka Clouds in order to monitor and control the
execution of the application.
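The relationship between application classes and application managers described above can be mimicked in a short sketch, with the manager class standing in for the type parameter M of ApplicationBase<M>. The class and method names below are illustrative, not Aneka's actual API.

```python
class ApplicationManager:
    """Internal component: interacts with the cloud to monitor and control execution."""
    def __init__(self):
        self.submitted = []
    def submit(self, unit):
        # In a real middleware this would dispatch the unit to the cloud
        self.submitted.append(unit)
        return f"running:{unit}"

class ApplicationBase:
    """Developer's view of a distributed application; the manager plays the role of M."""
    def __init__(self, name, manager_cls=ApplicationManager):
        self.name = name
        self.manager = manager_cls()   # specialized per programming model
    def add_work_unit(self, unit):
        return self.manager.submit(unit)

# A programming model would specialize the manager, not the developer-facing class
app = ApplicationBase("blast-run")
status = app.add_work_unit("task-1")
```

The point of the split is exactly what the text describes: developers work with the application class, while the manager hides the interaction with the Aneka Cloud.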
(2)Service model
The Aneka Service Model defines the basic requirements to implement a service that
can be hosted in an Aneka Cloud. The container defines the runtime environment in
which services are hosted. Each service that is hosted in the container must be compliant
with the IService interface, which exposes the following methods and properties:
• Name and status
Specific services can also provide clients if they are meant to directly interact with end
users. Examples of such services might be Resource Provisioning and Resource
Reservation Services, which ship their own clients for allowing resource provisioning and
reservation. Apart from control operations, which are used by the container to set up and
shut down the service during the container life cycle, the core logic of a service resides in
its message-processing functionalities that are contained in the HandleMessage method.
Each operation that is requested to a service is triggered by a specific message, and
results are communicated back to the caller by means of messages.
Figure 5.9 describes the reference life cycle of each service instance in the Aneka
container. The shaded balloons indicate transient states; the white balloons indicate
steady states. A service instance can initially be in the Unknown or Initialized state, a
condition that refers to the creation of the service instance by invoking its constructor
during the configuration of the container. Once the container is started, it will iteratively
call the Start method on each service instance. As a result the service instance is expected
to be in a Starting state until the startup process is completed, after which it will exhibit
the Running state. In particular, the guidelines define a ServiceBase class that can be
further extended to provide a proper implementation. This class is the base class of
several services in the framework and provides some built-in features:
• Implementation of the control operations with logging capabilities and state control
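A rough analogue of the IService contract and the lifecycle states described above (Initialized, Starting, Running, and a terminal state), written as a sketch: Aneka's real interface is a .NET contract with more members than shown, and the state names here only follow those mentioned in the text.

```python
from abc import ABC, abstractmethod

class IService(ABC):
    """Minimal analogue of Aneka's IService: named, stateful, message-driven."""
    @abstractmethod
    def start(self): ...
    @abstractmethod
    def stop(self): ...
    @abstractmethod
    def handle_message(self, message): ...

class EchoService(IService):
    def __init__(self):
        # Constructor invocation corresponds to the Initialized state
        self.name, self.status = "Echo", "Initialized"
    def start(self):
        self.status = "Starting"
        # ... startup work would happen here ...
        self.status = "Running"
    def stop(self):
        self.status = "Stopped"
    def handle_message(self, message):
        # The core logic of a service lives in its message processing
        if self.status != "Running":
            raise RuntimeError("service not running")
        return {"reply": message}   # results flow back to the caller as messages

svc = EchoService()
svc.start()
reply = svc.handle_message("ping")
```

The container, in this analogy, is the component that constructs each service, calls `start` on it during startup, and routes messages to `handle_message`.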
Hence, infrastructure management, together with facilities for installing logical clouds on
such
infrastructure, is a fundamental feature of Aneka‘s management layer.
This layer also includes capabilities for managing services and applications running in the
Aneka Cloud.
1. Infrastructure management
2.platform management
3.application management
1 Infrastructure management:-
Aneka leverages virtual and physical hardware in order to deploy Aneka Clouds.
The management features are mostly concerned with the provisioning of physical hardware
and the remote installation of Aneka on the hardware.
2 Platform management :-
Infrastructure management provides the basic layer on top of which Aneka Clouds are
deployed.
A collection of connected containers defines the platform on top of which applications are
executed.
The features available for platform management are mostly concerned with the logical
organization and structure of Aneka Clouds.
It is possible to partition the available hardware into several Clouds variably configured for
different purposes.
Services implement the core features of Aneka Clouds and the management layer exposes
operations for some of them, such as Cloud monitoring, resource provisioning and
reservation, user management, and application profiling.
3 Application management:-
The management APIs provide administrators with monitoring and profiling features that
help them track the usage of resources and relate them to users and applications.
Aneka exposes capabilities for giving summary and detailed information about application
execution and resource utilization.
The importance of cloud computing in health care is evident from the fact that the
global cloud computing market for the health care industry is expected to reach around
$25.25 billion by 2024.
Cloud computing is now a must-have technology for the healthcare industry to provide an
optimal patient-centred experience.
Cloud-based health care is the process of integrating cloud technology into healthcare
services for cost saving, easy data sharing, personalized medicine, telehealth apps, and
more benefits.
Many healthcare providers also use cloud-based systems for safe data storage, digital
backup, digital records retrieval, and better analysis and monitoring of data related to the
diagnosis and treatment of different diseases.
Massive storage resources are available for large datasets such as EHRs (electronic health
records) and radiology images.
2) Drug Discovery :- Drug discovery requires a large number of computing resources for
discovering compounds from billions of chemical structures.
4) Personal Health Records :- This involves managing access to personal health records
(PHR) and electronic health records (EHR).
CDSS (Clinical Decision Support System) uses the knowledge and behaviour of a medical
professional to provide advice based on the analysis of patient records.
Geoscience applications collect, produce, and analyze massive amounts of geospatial and
non-spatial data. As the technology progresses and our planet becomes more
instrumented (i.e., through the deployment of sensors and satellites for monitoring), the
volume of data that need to be processed increases significantly. In particular, the
geographic information system (GIS) is a major element of geoscience applications. GIS
applications capture, store, manipulate, analyze, manage, and present all types of
geographically referenced data. This type of information is now
becoming increasingly relevant to a wide variety of application domains: from advanced
farming to civil security and also natural resources management. As a result, a considerable
amount of geo-referenced data is ingested into computer systems for further processing
and analysis. Cloud computing is an attractive option for executing these demanding tasks
and extracting meaningful information for supporting decision makers. Satellite remote
sensing generates hundreds of gigabytes of raw images that need to be further processed
to become the basis of several different GIS products. This process requires both I/O and
compute intensive tasks.
(Figure: Task graph of the Jeeva portal on Aneka. Initial phase: A: BLAST; B: Create Data
Vector. Classification phase: C: HH Classifier; D: SS Classifier; E: TT Classifier; F: HS
Classifier; G: ST Classifier. A final phase follows.)
Q.38 Explain in detail Business & Consumer Applications CRM, ERP of Cloud.
Business and Consumer Applications
The business and consumer sector is the one that probably benefits the most from Cloud
computing technologies. On the one hand the opportunity of transforming capital cost into
operational costs makes Clouds an attractive option for all enterprises that are IT centric.
On the other hand, the sense of ubiquity that Cloud offers for accessing data and services
makes it interesting for end users as well. Moreover, the elastic nature of Cloud
technologies does not require huge upfront investments, thus allowing new ideas to be
quickly translated into products and services that can comfortably grow with the demand.
The combination of all these elements has made Cloud computing the preferred
technology for a wide range of applications: from CRM and ERP systems to productivity
and social networking applications.
ERP solutions on the Cloud are less mature and have to compete
with well-established in-house solutions. ERP systems integrate several aspects of an
enterprise like finance and accounting, human resources, manufacturing, supply chain
management, project management, and customer relationship management. ERP handles
the back-end processes and internal information. It takes care of tasks like order
placement, tracking, billing, shipping, accounting, and supply chain details.
This will also save time on data entry. Instead of updating accounts/contacts in both
systems, you will only have to do so in one centralized location.
4. Cross-Departmental Collaboration
A major benefit of both enterprise resource planning and customer relationship
management software is the ability to work cross-departmentally, without department
siloes. (In business, organizational silos refer to business divisions that operate
independently and avoid sharing information. It also refers to businesses
whose departments have silo' system applications, in which information cannot be shared
because of system limitations). In a siloed business approach, departments are completely
distinct from one another, discouraging collaboration, making data accessibility a
challenge, and data duplication common. A cross-departmental approach ensures real-
time data is always being utilized and departments are working together to accomplish the
same goals.
Salesforce
Salesforce is a customer relationship management (CRM) platform. It helps your marketing,
sales, commerce, service, and IT teams work as one from anywhere. Salesforce.com is
probably the most popular and mature CRM solution available today. The application
provides customizable CRM solutions that can be integrated with additional features
developed by third parties. Salesforce.com is based on the Force.com Cloud development
platform. This represents the scalable and high-performance middleware executing all the
operations of all Salesforce.com applications.
Social networking applications have grown considerably, and in order to sustain their
traffic and serve millions of users seamlessly, services like Twitter and Facebook have
leveraged Cloud computing technologies. The possibility of continuously adding capacity while
systems are running is the most attractive feature for social networks, which constantly
increase their user base. Facebook is probably the most evident and interesting environment
in social networking. It became one of the largest web sites in the world with more than 800
million users. In order to sustain this incredible growth it has been fundamental to be
capable of continuously adding capacity, developing new scalable technologies and
software systems while keeping a high performance for a smooth user experience.
Currently, the social network is backed by two data centers that have been built and
optimized to reduce costs and impact on the environment. On top of this highly efficient
infrastructure built and designed out of inexpensive hardware, a completely customized
stack of open source technologies, suitably modified and refined, constitutes the backend
of the largest social network. Taken together, these technologies constitute a powerful
platform for developing Cloud applications. This platform primarily supports Facebook
itself and offers APIs to integrate third-party applications with Facebook's core
infrastructure to deliver additional services such as social games and quizzes created by
others. The reference stack serving Facebook is based on LAMP (Linux, Apache, MySQL,
and PHP). This collection of technologies is accompanied by a collection of other services
developed in-house. These services are developed in a variety of languages and implement
specific functionalities such as search, news feeds, notifications, and others. While serving
page requests, the social graph of the user is composed. The social graph identifies the
collection of interlinked information that is of relevance to a given user. Most of the user
data is served by querying a distributed cluster of MySQL instances, which mostly contain
key-value pairs. This data is then cached for faster retrieval.
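The lookup pattern described here, query a backing store on a cache miss and then cache the result, is commonly called cache-aside. A minimal sketch follows; the MySQL cluster and the cache layer are stubbed with plain in-memory dicts, and all names are illustrative:

```python
# Cache-aside sketch: check the cache first, fall back to the data
# store on a miss, then populate the cache for subsequent reads.
user_store = {"u1": {"name": "Alice"}, "u2": {"name": "Bob"}}  # stand-in for MySQL
cache = {}  # stand-in for a distributed cache

def get_user(user_id):
    if user_id in cache:                  # cache hit: fast path
        return cache[user_id]
    record = user_store.get(user_id)      # cache miss: query the store
    if record is not None:
        cache[user_id] = record           # populate cache for next time
    return record
```

In a real deployment the cache would be a separate service shared by many web servers, which is what makes the pattern pay off at Facebook's scale.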
The rest of the relevant information is then composed together by using the services
mentioned before. These services are located closer to the data and developed in languages
that provide a better performance than PHP. The development of services is facilitated by a
set of tools internally developed. One of the core elements is Thrift. This is a collection of
abstractions (and language bindings) that allow cross-language development. Thrift allows
services developed in different languages to communicate and exchange data. Bindings for
Thrift in different languages take care of data serialization and deserialization,
communication, and client and server boilerplate code. This simplifies the work of the
developers, who can quickly prototype services and leverage existing ones. Other relevant
services and tools are Scribe, which aggregates streaming log feeds, and applications for
alerting and monitoring.
Media applications are a niche that has taken a considerable advantage from leveraging
Cloud computing technologies. In particular, video processing operations, such as encoding,
transcoding, composition, and rendering, are good candidates for a Cloud-based
environment. Video conferencing apps provide a simple, instantly connected experience. They
allow us to communicate with business partners, friends, and relatives using cloud-based
video conferencing. The benefits of using video conferencing are that it reduces cost,
increases efficiency, and removes interoperability issues.
A cloud video streaming solution involves streaming and storing videos in the cloud
using a network of video streaming servers in the cloud. Some of the key features of the best
cloud service for video streaming and best cloud storage for streaming video include:
1. Efficient video hosting: allows video streaming service providers to deliver content
anytime
2. Ability to live stream video to cloud storage: cloud service for video streaming
allows you to record live streams and stream video from cloud storage anytime.
3. A cloud video encoder or cloud video encoding service: essential for cloud-based
live video streaming. Cloud media file encoding or video encoding in the cloud
refers to converting a video file from one format into another.
4. A cloud video transcoding service: allows you to prepare your videos to be delivered
on the web. Transcoding means creating different versions of the same video, each
version with a different size and quality.
5. Video analytics support.
6. A cloud-based media player: such as an HTML5 video player.
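The transcoding idea in point 4, producing several versions of one source at different sizes and qualities, is often implemented as a "rendition ladder". A minimal sketch, with illustrative resolutions and bitrates (not any provider's actual ladder):

```python
# Build a rendition ladder: one output per target quality, never
# upscaling beyond the source height. Values are illustrative.
LADDER = [(1080, 5000), (720, 2800), (480, 1400), (360, 800)]  # (height, kbps)

def renditions(source_height):
    """Return the (height, kbps) outputs to transcode for a given source."""
    return [(h, kbps) for h, kbps in LADDER if h <= source_height]
```

A player can then switch between these renditions at runtime depending on the viewer's bandwidth, which is the basis of adaptive streaming.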
Animoto
Animoto is perhaps the most popular example of media applications on the Cloud. The
website provides users with a very straightforward interface for quickly creating videos out
of images, music, and video fragments submitted by users. Users select a specific theme for
the video, upload the photos and videos and order them in the sequence they want to appear,
select the song for the music, and render the video. The process is executed in the
background and the user is notified via e-mail once the video is rendered. The core value of
Animoto is the ability to quickly create videos with stunning effects without user
intervention. A proprietary AI engine that selects the animation and transition effects
according to pictures and music drives the rendering operation. Users only have to define
the storyboard by organizing pictures and videos into the desired sequence. If the user is
not satisfied with the result, the video can be rendered again and the engine will select a
different composition, producing a different outcome every time. The service allows
creating 30-second videos for free. By
paying a monthly or a yearly subscription it is possible to produce videos of any length and
to choose among a wider range of templates.
This solution provides an overview of common components and design patterns used to host
game infrastructure on cloud platforms.
Video games have evolved over the last several decades into a thriving entertainment
business. With the broadband internet becoming widespread, one of the key factors in the
growth of games has been online play.
Online play comes in several forms, such as session-based multiplayer matches, massively
multiplayer virtual worlds, and intertwined single-player experiences.
Q.42 Explain in detail the Amazon Web Services cloud platform?
Amazon has many services for cloud applications. Let us list down a few key services of the
AWS ecosystem and a brief description of how developers use them in their business.
Amazon has a list of services:
Compute service
Storage
Database
Security tools
Developer tools
Management tools
Compute Service
These services help developers build, deploy, and scale an application in the cloud
platform.
AWS EC2
It is a web service that allows developers to rent virtual machines and automatically
scales the compute capacity when required.
It offers various instance types to developers so that they can choose required
resources such as CPU, memory, storage, and networking capacity based on their
application requirements.
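Matching an instance type to CPU, memory, and cost requirements, as described above, can be sketched as a simple catalog lookup. The catalog entries below are made-up illustrations, not real AWS instance specs or prices:

```python
# Pick the cheapest instance type that satisfies the resource needs.
# Catalog entries (name, vCPUs, memory GiB, $/hr) are illustrative only.
CATALOG = [
    ("t3.micro",  2, 1.0, 0.0104),
    ("t3.medium", 2, 4.0, 0.0416),
    ("m5.large",  2, 8.0, 0.0960),
    ("c5.xlarge", 4, 8.0, 0.1700),
]

def choose_instance(vcpus_needed, mem_needed_gib):
    fits = [(price, name) for name, cpu, mem, price in CATALOG
            if cpu >= vcpus_needed and mem >= mem_needed_gib]
    return min(fits)[1] if fits else None  # cheapest fit, or None
```

In practice this choice is made in the AWS console or via the EC2 API, but the trade-off, smallest instance that still meets the workload's requirements, is the same.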
AWS Lambda
AWS Lambda is a serverless compute service. It executes application code in response to
events, without requiring developers to provision or manage servers.
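A Lambda function is just a handler that AWS invokes with an event dict and a context object. The sketch below follows that handler shape and is invoked locally here; the event fields and response format are illustrative:

```python
# Minimal AWS Lambda-style handler. AWS calls the named handler with
# an event (dict) and a context object; we invoke it locally instead.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Deployed behind an API gateway, such a handler scales automatically with request volume, which is the "serverless" part: you pay per invocation rather than per running server.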
Storage
AWS provides web data storage services for archiving data. A primary advantage is
disaster recovery with high durability.
Amazon S3
It is an open cloud-based storage service that is utilized for online data backup.
Amazon S3 provides storage through a web services interface and is designed for
developers where web-scale computing can be easier for them.
Amazon EBS
It provides high-availability storage volumes for persistent data. It is mainly used
with Amazon EC2 instances.
EBS volumes are typically used for primary storage such as file storage, database
storage, and block-level storage.
Database
AWS database domain service offers cost-efficient, highly secure, and scalable database
instances in the cloud.
DynamoDB
It is a flexible NoSQL database service that offers fast and reliable performance with no
scalability issues.
It is a multi-region and durable database with instant built-in security, backup and
restores features.
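DynamoDB's key-value model can be illustrated with its low-level item format, in which every attribute value carries a type tag ("S" for string, "N" for number sent as a string, and so on). The attribute names below are illustrative:

```python
# Build an item in DynamoDB's low-level wire format: each attribute
# value is wrapped in a type tag. Numbers are transmitted as strings.
def make_item(user_id, name, age):
    return {
        "UserId": {"S": user_id},    # partition key (string)
        "Name":   {"S": name},
        "Age":    {"N": str(age)},   # "N" values are strings on the wire
    }
```

Higher-level SDKs hide this wrapping, but it is what actually crosses the wire and explains why DynamoDB can store heterogeneous items in one table.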
RDS
Amazon RDS (Relational Database Service) is a managed relational database service that
supports engines such as MySQL, PostgreSQL, and SQL Server, handling setup, patching,
and backups.
Google App Engine
App Engine is a fully managed, serverless platform for developing and hosting web
applications at scale. You can choose from several popular languages, libraries, and
frameworks to develop your apps, and then let App Engine take care of provisioning servers
and scaling your app instances based on demand
Google App Engine (GAE) is a platform-as-a-service product that provides web app
developers and enterprises with access to Google's scalable hosting and tier-1
internet service. GAE requires that applications be written in Java or Python, store
data in Google Bigtable, and use the Google query language. Noncompliant
applications require modification to use GAE.
Google provides GAE free up to a certain amount of use for the following
resources:
processor (CPU)
storage
concurrent requests
GAE is a fully managed, serverless platform that is used to host, build, and deploy
web applications. Users can create a GAE account, set up a software development
kit, and write application source code. They can then use GAE to test and deploy
the code in the cloud. One way to use GAE is building scalable mobile
application back ends that adapt to workloads as needed. Application testing is
another way to use GAE. Users can route traffic to different application versions
to A/B test them and see which version performs better under various workloads.
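The traffic-splitting behind such A/B tests can be sketched as a deterministic hash of the user id against per-version weights, so each user always sees the same version. This is a simplified stand-in for GAE's traffic splitting, not its actual algorithm:

```python
import hashlib

# Route a user to an app version by hashing the user id into [0, 100)
# and comparing against cumulative version weights (which sum to 100).
def route(user_id, weights):
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for version, weight in weights.items():
        cumulative += weight
        if bucket < cumulative:
            return version
    return version  # fallback in case weights under-sum
```

Hashing the user id (rather than picking randomly per request) keeps each user's experience stable for the duration of the experiment.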
API selection. GAE has several built-in APIs, for example the URL Fetch Service,
which issues HTTP requests and receives responses with efficiency and scaling.
Benefits of GAE
Ease of setup and use. GAE is fully managed, so users can write code
without considering IT operations and back-end infrastructure. The built-
in APIs enable users to build different types of applications. Access to
application logs also facilitates debugging and monitoring in production.
Pay-per-use pricing. GAE's billing scheme only charges users daily for
the resources they use. Users can monitor their resource usage and bills
on a dashboard.
[Figure: Force.com multitenant architecture — a shared user base and application data, metadata and pivot tables, bulk processing, a multitenant-aware runtime, and full-text search.]
PaaS
As the name suggests, a platform is provided to clients to develop and deploy software.
The clients can focus on application development rather than having to worry about
hardware and infrastructure. It also takes care of most operating system, server, and
networking issues.
Pros
The overall cost is low as the resources are allocated on demand and servers are
automatically updated.
It is less vulnerable, as servers are automatically updated and checked for all
known security issues. The whole process is not visible to the developer and thus does
not pose a risk of data breach.
Since new versions of development tools are tested by the Azure team, it becomes
easy for developers to move to new tools. This also helps developers meet the
customer's demand by quickly adapting to new versions.
Cons
There are portability issues with using PaaS. There can be a different environment at
Azure, thus the application might have to be adapted accordingly.
IaaS
It is a managed compute service that gives complete control of the operating system and
the application platform stack to the application developers. It lets users access,
manage, and monitor the data centers by themselves.
Pros
This is ideal for the application where complete control is required. The virtual
machine can be completely adapted to the requirements of the organization or
business.
IaaS facilitates very efficient design-time portability. This means an application can be
migrated to Windows Azure without rework. All the application dependencies, such
as databases, can also be migrated to Azure.
IaaS allows quick transition of services to clouds, which helps the vendors to offer
services to their clients easily. This also helps the vendors to expand their business
by selling the existing software or services in new markets.
Cons
Since users are given complete control they are tempted to stick to a particular
version for the dependencies of applications. It might become difficult for them to
migrate the application to future versions.
There are many factors that increase the cost of operation, for example,
higher server maintenance for patching and upgrading software.
There are lots of security risks from unpatched servers. Some companies have well
defined processes for testing and updating on-premise servers for security
vulnerabilities. These processes need to be extended to the cloud-hosted IaaS VMs
to mitigate hacking risks.
The unpatched servers pose a great security risk. Unlike PaaS, there is no provision
of automatic server patching in IaaS. An unpatched server with sensitive
information can be very vulnerable affecting the entire business of an organization.
It is difficult to maintain legacy apps in IaaS. They can be stuck with older versions
of operating systems and application stacks, resulting in applications that are
difficult to maintain and extend with new functionality over time.
Ans:
Azure SQL is a family of managed, secure, and intelligent products that use the SQL
Server database engine in the Azure cloud.
Learn how each product fits into Microsoft's Azure SQL data platform to match the
right option for your business requirements. Whether you prioritize cost savings or
minimal administration, this article can help you decide which approach delivers
against the business requirements you care about most.
Each service offering can be characterized by the level of
administration you have over the infrastructure, and by the degree of cost efficiency.
In Azure, you can have your SQL Server workloads running as a hosted service (PaaS), or a
hosted infrastructure (IaaS) supporting the software layer, such as Software-as-a-Service
(SaaS) or an application. Within PaaS, you have multiple product options, and service tiers
within each option. The key question that you need to ask when deciding between PaaS and
IaaS is: do you want to manage your database, apply patches, and take backups yourself, or
do you want to delegate these operations to Azure?
Azure SQL Database
Azure SQL Database is a relational database-as-a-service (DBaaS) hosted in Azure that falls
into the industry category of Platform-as-a-Service (PaaS).
Best for modern cloud applications that want to use the latest stable SQL Server
features and have time constraints in development and marketing.
A fully managed SQL Server database engine, based on the latest stable Enterprise
Edition of SQL Server. SQL Database has two deployment options built on
standardized hardware and software that is owned, hosted, and maintained by
Microsoft.
With SQL Database, you can use built-in features and functionality that would otherwise
require extensive configuration (either on-premises or in an Azure virtual machine). When
using SQL Database, you pay as you go, with options to scale up or out for greater power with no
interruption. SQL Database has some additional features that are not available in SQL
Server, such as built-in high availability, intelligence, and management.
As a single database with its own set of resources managed via a logical SQL server.
A single database is similar to a contained database in SQL Server. This option is
optimized for modern application development of new cloud-born
applications. Hyperscale and serverless options are available.
An elastic pool, which is a collection of databases with a shared set of resources
managed via a logical server. Single databases can be moved into and out of an elastic
pool. This option is optimized for modern application development of new cloud-born
applications using the multi-tenant SaaS application pattern. Elastic pools provide a
cost-effective solution for managing the performance of multiple databases that have
variable usage patterns.
Azure SQL Managed Instance falls into the industry category of Platform-as-a-Service
(PaaS), and is best for most migrations to the cloud. SQL Managed Instance is a collection
of system and user databases with a shared set of resources that is lift-and-shift ready.
Best for new applications or existing on-premises applications that want to use the
latest stable SQL Server features and that are migrated to the cloud with minimal
changes. An instance of SQL Managed Instance is similar to an instance of
the Microsoft SQL Server database engine offering shared resources for databases and
additional instance-scoped features.
SQL Managed Instance supports database migration from on-premises with minimal
to no database change. This option provides all of the PaaS benefits of Azure SQL
Database but adds capabilities that were previously only available in SQL Server
VMs. This includes a native virtual network and near 100% compatibility with on-
premises SQL Server. Instances of SQL Managed Instance provide full SQL Server
access and feature compatibility for migrating SQL Servers to Azure.
SQL Server installed and hosted in the cloud runs on Windows Server or Linux virtual
machines running on Azure, also known as an infrastructure as a service (IaaS). SQL
virtual machines are a good option for migrating on-premises SQL Server databases
and applications without any database change. All recent versions and editions of SQL
Server are available for installation in an IaaS virtual machine.
Best for migrations and applications requiring OS-level access. SQL virtual machines
in Azure are lift-and-shift ready for existing applications that require fast migration to
the cloud with minimal changes or no changes. SQL virtual machines offer full
administrative control over the SQL Server instance and underlying OS for migration
to Azure.
The most significant difference from SQL Database and SQL Managed Instance is
that SQL Server on Azure Virtual Machines allows full control over the database
engine. You can choose when to start maintenance/patching, change the recovery
model to simple or bulk-logged, pause or start the service when needed, and you can
fully customize the SQL Server database engine. With this additional control comes
the added responsibility to manage the virtual machine.
Rapid development and test scenarios when you do not want to buy on-premises non-
production SQL Server hardware. SQL virtual machines also run on standardized
hardware that is owned, hosted, and maintained by Microsoft. When using SQL
virtual machines, you can either pay-as-you-go for a SQL Server license already
included in a SQL Server image or easily use an existing license. You can also stop or
resume the VM as needed.
Optimized for migrating existing applications to Azure or extending existing on-
premises applications to the cloud in hybrid deployments. In addition, you can use
SQL Server in a virtual machine to develop and test traditional SQL Server
applications. With SQL virtual machines, you have the full administrative rights over
a dedicated SQL Server instance and a cloud-based VM. It is a perfect choice when an
organization already has IT resources available to maintain the virtual machines.
These capabilities allow you to build a highly customized system to address your
application's specific performance and availability requirements.
For both Azure SQL Database and Azure SQL Managed Instance, Microsoft
provides an availability SLA of 99.99%. For the latest information, see Service-level
agreement.
For SQL on Azure VM, Microsoft provides an availability SLA of 99.95% that
covers just the virtual machine. This SLA does not cover the processes (such as SQL
Server) running on the VM and requires that you host at least two VM instances in an
availability set. For the latest information, see the VM SLA. For database high
availability (HA) within VMs, you should configure one of the supported high
availability options in SQL Server, such as Always On availability groups. Using a
supported high availability option doesn't provide an additional SLA, but allows you to
achieve >99.99% database availability.
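The practical difference between the 99.95% VM SLA and the 99.99% PaaS SLA becomes concrete when availability is converted into allowed downtime per year:

```python
# Convert an availability SLA percentage into maximum downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (non-leap year)

def downtime_minutes_per_year(availability_pct):
    return MINUTES_PER_YEAR * (100 - availability_pct) / 100
```

So 99.99% permits roughly 52.6 minutes of downtime per year, while 99.95% permits roughly 262.8 minutes, about five times as much, which is why database-level HA (such as Always On availability groups) matters on top of the VM SLA.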