Cloud Computing


1st Answer

Introduction:

Workload Distribution and Resource Pooling Architecture


Cloud architects should apply several concepts and best practices in order to build
applications that are highly scalable. These concepts are important because of expanding
datasets, unpredictable traffic patterns, and the demand for faster response times.
Two common foundational models of cloud architecture are the workload distribution
architecture and the resource pooling architecture.

Workload Distribution Architecture: This architecture scales IT resources by adding
one or more identical copies of a resource. A load balancer provides the runtime logic
that distributes the workload evenly among the available IT resources.

This model can be applied to any IT resource and is commonly used with distributed
virtual servers, cloud storage devices, and cloud services. In addition to the load
balancer and the resources mentioned above, the following mechanisms can also be part
of this model:

 Cloud usage monitor, which conducts runtime tracking and data processing.
 Audit monitor, which monitors the system where required for compliance with legal
requirements.
 Hypervisor, which manages the workloads and virtual hosts that require distribution.
 Logical network perimeter, which isolates the boundaries of the cloud consumer's
network.
 Resource clusters, which are commonly used to support workload balancing between
cluster nodes.
 Resource replication, which generates new instances of virtualized resources under
increased workloads.

The workload distribution model functions essentially as follows:

Resource A and Resource B are exact copies of the same resource. Inbound requests
from customers are handled by the load balancer, which forwards each request to the
appropriate resource depending on the workload currently being handled by each
resource.

In other words, if Resource A is busier than Resource B, the load balancer forwards
the request to Resource B. In this manner, the model distributes the load among the
available IT resources on the basis of each resource's workload. The following example
of how this can be applied in a business environment is based on information provided
by the Cloud Patterns website: a system contains two virtual instances of a cloud-based
domain controller on two different physical servers.

To enhance performance, domain queries are received by a load balancer, which forwards
each query to the appropriate domain controller instance on the basis of each
instance's current workload.

In this manner, the workload is evenly distributed between the two servers and domain
controller instances.
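
The forwarding logic can be pictured with a minimal sketch (the class names and the
in-flight request counter below are illustrative assumptions, not any particular
vendor's API):

# Minimal sketch of least-loaded dispatch between identical resources.
class Resource:
    def __init__(self, name):
        self.name = name
        self.active_requests = 0  # current workload of this copy

class LoadBalancer:
    def __init__(self, resources):
        self.resources = resources

    def forward(self, request):
        # Forward to the copy with the fewest in-flight requests.
        target = min(self.resources, key=lambda r: r.active_requests)
        target.active_requests += 1
        print(f"{target.name} <- {request}")
        return target

    def complete(self, resource):
        # Called when a resource finishes handling a request.
        resource.active_requests -= 1

lb = LoadBalancer([Resource("Resource A"), Resource("Resource B")])
lb.forward("domain query 1")  # goes to Resource A
lb.forward("domain query 2")  # Resource A is now busier, so Resource B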

Resource Pooling Architecture: This architecture is based on grouping identical IT
resources into pools. Pools can contain physical as well as virtual resources. The
identical resources are grouped and maintained by a system that ensures they remain
synchronized.

Examples of resource pools are as follows:

Physical server pools, which consist of networked servers that already have an
operating system and the other required applications installed and are ready for
immediate use.

Virtual server pools, which are usually configured from templates pre-chosen by the
consumer at the time they are provisioned.

Storage pools, which consist of file-based or block-based storage containers.

Network pools, which consist of various pre-configured network devices, for example
virtual firewalls and switches used to provide redundant connections, load balancing,
and link aggregation.

CPU pools, which allocate CPU resources to virtual servers.

Physical RAM pools, which can be used to vertically scale newly provisioned physical
servers.

Resource pools can become very complex, so it is best to organize them into a
hierarchical structure of parent, sibling, and nested pools. After a pool has been
defined, multiple instances of resources from each pool can be used to create an
in-memory pool of live IT resources, which the system can pull and use on demand.
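
A hierarchical pool with on-demand acquisition might be sketched as follows (the pool
API here is a hypothetical illustration, not a real cloud SDK):

# Hypothetical sketch of parent and nested resource pools.
class ResourcePool:
    def __init__(self, name, resources=None, parent=None):
        self.name = name
        self.available = list(resources or [])
        self.parent = parent  # parent pool in the hierarchy, if any

    def acquire(self):
        # Pull a live resource from this pool on demand; fall back to
        # the parent pool when this nested pool is exhausted.
        if self.available:
            return self.available.pop()
        if self.parent is not None:
            return self.parent.acquire()
        raise RuntimeError(f"pool '{self.name}' is exhausted")

    def release(self, resource):
        # Return the resource to the pool once it is no longer needed.
        self.available.append(resource)

parent_pool = ResourcePool("all-vms", ["vm-1", "vm-2", "vm-3"])
nested_pool = ResourcePool("dept-a", ["vm-4"], parent=parent_pool)

print(nested_pool.acquire())  # vm-4, from the nested pool
print(nested_pool.acquire())  # vm-3, pulled from the parent on demand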

The resource pooling architecture makes use of the same kinds of mechanisms as the
workload distribution architecture. Sharing identical IT resources for scalability is
error-prone, and keeping them completely synchronized on a regular basis is a
challenge. The solution is an automated synchronization system that groups identical
IT resources into pools and maintains their synchronicity.
Conclusion: A cloud system needs methods to dynamically scale its IT resources up or
down as demand dictates, as well as mechanisms that provide redundancy and easy
management of IT resources. The workload distribution architecture provides a method
of distributing the workload across different copies of an IT resource, while resource
pooling provides a method of automatically synchronizing IT resources through resource
pools, together with a dynamic way to allocate resources on demand.

Taken together, they can be used to design and implement complex architectures that
scale IT resources automatically as the workload increases and decreases, keep IT
resources synchronized, and provide system redundancy, load balancing between
resources, and the management and auditing of IT resources.
2nd Answer
Introduction:
Cloud scalability in cloud computing refers to the ability to increase or decrease IT
resources as required to meet the changing demands of consumers. Scalability is one of
the hallmarks of the cloud and the main driver of its exploding popularity with
business organizations.
Cloud Elasticity: This is the property of a cloud to grow or shrink its capacity for
CPU, memory, and storage resources in order to adapt to the changing demands of an
organization.
Concept and Application:
Data storage capacity, processing power, and networking can all be scaled using the
existing cloud computing infrastructure. Better yet, scaling can be done quickly and
easily, usually with little to no disruption or downtime.
Third-party cloud providers already have all of the infrastructure in place; in the
past, scaling with on-premises physical infrastructure could take weeks or months and
required huge expense.
Scaling can be automatic, with no need to perform capacity planning in advance or on
occasion, or it can be a manual process in which the organization is notified that it
is running low on resources and can decide whether to add or reduce capacity as
needed. Monitoring tools offered by the cloud provider help to dynamically adjust the
resources allocated to an organization without any impact on existing cloud-based
operations.
A scalable solution enables stable, long-term growth in a pre-planned manner, while an
elastic solution addresses more immediate and variable shifts in demand. Elasticity
and scalability in cloud computing are both important features of a system; however,
the priority of one over the other depends in part on whether the business has a
predictable or a highly variable workload.
Benefits of cloud scalability: The major cloud scalability benefits that drive cloud
adoption for businesses both large and small are:
Convenience: Often with just a few clicks, IT administrators can add more VMs,
available without delay and customized to the exact requirements of the organization.
This saves precious time for IT staff: instead of spending hours or days setting up
physical hardware, teams can focus on other tasks.
Flexibility and speed: As business needs change and grow, including through unexpected
rises in demand, cloud scalability enables IT to respond quickly. Today, even smaller
businesses have access to high-powered resources that used to be cost-prohibitive.
Companies are no longer tied down by obsolete equipment; they can update systems and
increase power and storage with ease.
Benefits of Cloud Elasticity
Agility: By eliminating the need to purchase, configure, and install new
infrastructure when demand changes, cloud elasticity removes the need to plan for an
anticipated demand spike and enables the organization to meet any unexpected rise in
demand, whether due to a seasonal spike or any other reason.
High availability: Cloud elasticity facilitates both high availability and fault
tolerance, since VMs or containers can be replicated if they appear to be failing,
which helps ensure that the services provided by the business are uninterrupted and
that users do not experience downtime. Users thus perceive a consistent and
predictable experience, even as resources are provisioned and de-provisioned
automatically without any impact on business operations.
Cloud elasticity also enables users to prevent over-provisioning or under-provisioning
of system resources. Over-provisioning refers to a scenario where the organization
buys more capacity than it requires.

Over-provisioning often leads to wasted cloud costs, while under-provisioning often
leads to server outages as the available servers are overworked. Server shutdowns
result in lost revenue and customer dissatisfaction, which is bad for business.

Elasticity uses dynamic variation to align computing resources to workload demand as
closely as possible, which helps to prevent the waste of over-provisioning and boosts
cost-efficiency. Another aim is to ensure that the system can always serve customers
satisfactorily, even when it is hit by massive, sudden workloads.

Scaling with elasticity provides a middle ground: elasticity is ideal for short-term
needs, such as handling website traffic spikes and database backups. However, cloud
elasticity can also streamline service delivery when combined with scalability. For
instance, by spinning up additional VMs on the same server, it is possible to create
more capacity on that server to handle surges in the dynamic workload.
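
As a rough sketch of that elastic decision (the thresholds and VM limits below are
made-up illustrative values; real providers expose this logic through their own
autoscaling services):

# Illustrative threshold-based autoscaling decision.
def autoscale(current_vms, avg_cpu_percent,
              scale_out_at=75, scale_in_at=25, min_vms=1, max_vms=10):
    # Scale out under heavy load, scale in when demand falls,
    # otherwise leave the provisioned capacity unchanged.
    if avg_cpu_percent > scale_out_at and current_vms < max_vms:
        return current_vms + 1  # spin up an additional VM
    if avg_cpu_percent < scale_in_at and current_vms > min_vms:
        return current_vms - 1  # release a VM to cut cloud spending
    return current_vms

print(autoscale(current_vms=2, avg_cpu_percent=90))  # 3: scale out
print(autoscale(current_vms=3, avg_cpu_percent=10))  # 2: scale in
print(autoscale(current_vms=3, avg_cpu_percent=50))  # 3: hold steady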

Use Case One: Insurance

For instance, suppose a company is in the auto insurance business. Its customers renew
their auto policies at roughly the same time each year, and policyholders rush to beat
the renewal deadline, so the company can expect a rise in traffic when that time
arrives.
If the company relies on pre-planned scalability alone, the traffic rise can quickly
overwhelm the provisioned virtual machines, causing a service outage that leads to a
loss of revenue and customers.

However, if the company or branch leases a few more virtual machines, it can handle
the traffic for the entire policy renewal period, giving the company multiple scalable
virtual machines to manage the demand in real time.

Policyholders do not observe any change in performance, even if the company serves
more customers in that year than in the previous year. To reduce cloud spending, some
of the virtual machines can be released when they are no longer required, such as
during off-peak months.
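
A seasonal plan of this kind could be sketched as follows (the months and VM counts
are made-up figures for illustration only):

# Illustrative capacity schedule for the policy-renewal season.
PEAK_MONTHS = {5, 6}  # assumed renewal-rush months
BASELINE_VMS = 4      # capacity for off-peak traffic
PEAK_VMS = 10         # extra leased VMs for the renewal period

def planned_capacity(month):
    # VMs to keep provisioned in a given month (1-12); the extra
    # VMs are released again once the off-peak months return.
    return PEAK_VMS if month in PEAK_MONTHS else BASELINE_VMS

for month in (4, 5, 7):
    print(month, planned_capacity(month))  # 4 -> 4, 5 -> 10, 7 -> 4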

Conclusion: So, it can be concluded that cloud scalability ensures that IT
administrators can add more VMs without delay, enabling the IT team to respond to
rising demand, while cloud elasticity enables users to prevent over-provisioning or
under-provisioning of system resources.

Cloud elasticity facilitates both high availability and fault tolerance, since VMs or
containers can be replicated if they appear to be failing, which ensures that the
services provided by the business remain uninterrupted and that users do not
experience downtime.
3rd Answer
3a.
Introduction: Multi-tenancy refers to a type of software architecture in which a
single software instance can serve multiple distinct groups of users. It means that
multiple customers of a cloud vendor use the same computing resources. Although they
share the same computing resources, the data of each cloud customer is kept separate
and secure. It is an important concept in cloud computing.
Concept and Application:
Multi-tenancy refers to a shared host where the same resources are divided among
multiple customers in cloud computing.
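
A minimal sketch of that separation on a shared host (the storage layout and tenant
identifiers below are hypothetical):

# Hypothetical sketch: one shared store, every access scoped by tenant.
class SharedStore:
    def __init__(self):
        self._data = {}  # {tenant_id: {key: value}}

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # A tenant can only read records written under its own ID,
        # so customers sharing the host never see each other's data.
        return self._data.get(tenant_id, {}).get(key)

store = SharedStore()
store.put("customer-a", "plan", "gold")
store.put("customer-b", "plan", "basic")
print(store.get("customer-a", "plan"))  # gold
print(store.get("customer-b", "plan"))  # basic, isolated from customer-a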

Multi-Tenancy Issues in Cloud Computing:

Issues with multi-tenancy in cloud computing have become a growing concern, especially
as the industry expands and large business enterprises shift their workloads to the
cloud. Cloud computing provides different services over the internet, including giving
users access to resources such as servers and databases.

Security: This is one of the most challenging and risky issues in multi-tenant cloud
computing. There is always a risk of data loss, data theft, and hacking. An
administrator may accidentally grant access to an unauthorized person. Although
software and cloud computing companies say that client data is safer than ever on
their servers, certain security risks remain.

There is a potential security threat whenever information is stored on remote servers
and accessed via the internet. There is always a risk of hacking with cloud computing:
no matter how secure the encryption is, someone with the proper knowledge and skills
may be able to decrypt it.

Performance: SaaS applications run at various remote locations, which affects response
time. SaaS applications often take longer to respond and can be much slower than local
server applications. This slowness affects overall system performance and makes the
applications less efficient. In the competitive and growing world of cloud computing,
low performance pushes cloud service providers down, so it is important for
multi-tenant cloud service providers to improve their performance.

Less Powerful: Many cloud services run on Web 2.0, with a new user interface and the
latest templates, but they lack many important features. Without adequate features,
multi-tenant cloud computing services can be a nuisance for clients.
Noisy Neighbor Effect: If one tenant uses a large share of the computing resources,
other tenants may suffer from reduced computing power; however, this rarely happens
unless the cloud architecture and infrastructure are inappropriate.

Interoperability: Users remain restricted by their cloud service provider. They are
not in a position to go beyond the limitations set by the provider and cannot
communicate with local applications.

Monitoring: Constant monitoring is important for cloud service providers so that they
can detect problems in the multi-tenant system. These systems need constant monitoring
because the computing resources are shared by multiple users simultaneously. If any
problem occurs, it must be solved immediately so as not to disturb the system's
efficiency.

However, monitoring a multi-tenant cloud system is very difficult, as it is tough to
find flaws in the system and adjust accordingly.

Conclusion: So, it can be concluded that the major issues of multi-tenant computing
are security, performance, the noisy neighbor effect, and monitoring. These arise
because of a shared host where the resources are divided among multiple customers,
which makes it difficult to keep each cloud customer's data safe and the service
running smoothly.

3b.
Introduction: Serverless computing refers to a method of providing backend services on
an as-used basis. Servers are still used, but a company that receives backend services
from a serverless vendor is charged on the basis of usage, not for a fixed amount of
bandwidth or a fixed number of servers.
Concept and Application: For instance, imagine a website that sells concert tickets.
When a user types the website address into the browser window, the browser sends a
request to the backend server, which responds with the website data. The user then
sees the frontend of the website, which can include content such as text, images, and
form fields for the user to fill out.
The user can then interact with one of the form fields on the frontend to search for
their favorite musical act. When the user clicks submit, this triggers another request
to the backend. Backend code checks its database to see whether a performer with that
name exists and, if so, when they are playing next and how many tickets are available.
The backend then passes this data back to the frontend, which displays the result in a
way that makes sense to the user.
Similarly, when the user creates an account and enters financial information to
purchase the tickets, another back-and-forth communication between the frontend and
backend takes place.
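
The performer lookup above could be written as a single serverless function. The
sketch below follows the common handler(event, context) convention of FaaS platforms
such as AWS Lambda, with a stand-in dictionary in place of the real database:

# Hypothetical FaaS handler for the performer search described above.
SHOWS = {  # stand-in for the backend database
    "The Examples": {"next_show": "2025-07-01", "tickets_left": 120},
}

def handler(event, context=None):
    # Invoked once per search request; the vendor bills per invocation.
    name = event.get("performer", "")
    show = SHOWS.get(name)
    if show is None:
        return {"statusCode": 404, "body": "performer not found"}
    return {"statusCode": 200, "body": show}

print(handler({"performer": "The Examples"}))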
Lower Cost: Serverless computing is usually very cost-effective, since traditional
cloud providers of backend services often leave the user paying for unused space or
idle CPU time.
Simplified scalability: Developers using a serverless architecture do not have to
think about policies for scaling up their code; the serverless vendor handles all of
the scaling on demand.
Simplified backend code: With FaaS, developers can create simple functions that each
perform a single purpose, such as making an API call (see the sketch below).
Quicker turnaround: Serverless architecture can hugely cut time to market. Instead of
needing a complicated deployment process to roll out bug fixes and new features,
developers can add and modify code on a piecemeal basis.
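
For example, a single-purpose function of the kind mentioned above might do nothing
but make one API call (the URL is a placeholder; urllib is Python's standard HTTP
client):

# Hypothetical single-purpose function: relay one external API call.
import json
from urllib.request import urlopen

def ticket_count(event, context=None):
    # One job only: call the (placeholder) inventory API and
    # return its JSON response to the caller.
    with urlopen("https://api.example.com/tickets/count") as resp:
        return {"statusCode": 200, "body": json.load(resp)}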
Serverless computing relates to the following models:
Backend-as-a-Service (BaaS): A service model in which a cloud provider offers backend
services, such as data storage, so that developers can focus on writing frontend code.
However, while serverless applications are event-driven and run on the edge, BaaS
applications may not meet either of these requirements.

Platform-as-a-Service (PaaS): A model in which developers essentially rent all of the
tools needed to develop and deploy applications from a cloud provider, including
things such as operating systems and middleware.

Infrastructure-as-a-Service (IaaS): A blanket term for cloud vendors that host
infrastructure on behalf of their customers. IaaS providers may offer serverless
functionality, but the two terms are not synonymous.

Conclusion: So, it can be concluded that when serverless computing is used by a
company, the company receives backend services from a serverless vendor and is charged
on the basis of usage. Serverless computing is cost-effective, offers simplified
scalability, and provides a quicker turnaround. It relates to the BaaS, PaaS, and IaaS
models.
