AWS Module 5 (First 3 Topics)

AWS for Cloud Computing

Module 5: AWS Architectural Best Practices
AWS Well-Architected Framework, Design Principles for AWS Cloud Architectures, Scalability and Elasticity, High Availability and Fault Tolerance, Cost Optimization and AWS Pricing, Performance Efficiency.
AWS Well-Architected Framework
AWS Well-Architected Framework Introduction
➢ The AWS Well-Architected Framework is a structured approach to help
architects, developers, and cloud professionals build secure, high-performing,
resilient, and efficient infrastructure on Amazon Web Services (AWS).
➢ It helps you understand the pros and cons of decisions you make while building
systems on AWS. Using the Framework helps you learn architectural best
practices for designing and operating secure, reliable, efficient, cost-effective,
and sustainable workloads in the AWS Cloud.
➢ It provides a way for you to consistently measure your architectures against best
practices and identify areas for improvement.
➢ The process for reviewing an architecture is a constructive conversation about
architectural decisions, and is not an audit mechanism. AWS believes that having
well-architected systems greatly increases the likelihood of business success.
AWS Well-Architected Framework Introduction

➢ AWS Solutions Architects have years of experience architecting solutions across a
wide variety of business verticals and use cases. They have helped design and review
thousands of customers’ architectures on AWS. From this experience, AWS has
identified best practices and core strategies for architecting systems in the cloud.
➢ The AWS Well-Architected Framework documents a set of foundational questions that help
you to understand if a specific architecture aligns well with cloud best practices. The
framework provides a consistent approach to evaluating systems against the qualities you
expect from modern cloud-based systems, and the remediation that would be required to
achieve those qualities. As AWS continues to evolve, and we continue to learn more from
working with our customers, AWS will continue to refine the definition of well-architected.
➢ This framework is intended for those in technology roles, such as chief technology officers
(CTOs), architects, developers, and operations team members. It describes AWS best
practices and strategies to use when designing and operating a cloud workload, and provides
links to further implementation details and architectural patterns.
➢ AWS also provides a service for reviewing your workloads at no charge. The AWS Well-
Architected Tool (AWS WA Tool) is a service in the cloud that provides a consistent process
for you to review and measure your architecture using the AWS Well-Architected
Framework. The AWS WA Tool provides recommendations for making your workloads more
reliable, secure, efficient, and cost-effective.
AWS Well-Architected and the Six Pillars

➢ The AWS Well-Architected Framework describes key concepts, design principles,
and architectural best practices for designing and running workloads in the cloud.
By answering a few foundational questions, you learn how well your architecture
aligns with cloud best practices and gain guidance for making improvements.

1. Operational Excellence Pillar
2. Security Pillar
3. Reliability Pillar
4. Performance Efficiency Pillar
5. Cost Optimization Pillar
6. Sustainability Pillar
Operational Excellence Pillar
• The operational excellence pillar focuses on running and monitoring systems, and on
continually improving processes and procedures. Key functions include
automating changes, responding to events, and defining standards to manage
daily operations.
• The Operational Excellence pillar includes the ability to support development
and run workloads effectively, gain insight into their operation, and continuously
improve supporting processes and procedures to deliver business value.

Design Principles
There are five design principles for operational excellence in the cloud:
a. Perform operations as code
b. Make frequent, small, reversible changes
c. Refine operations procedures frequently
d. Anticipate failure
e. Learn from all operational failures
Security Pillar
The Security pillar includes the ability to protect data, systems, and assets
to take advantage of cloud technologies to improve your security.
Design Principles
There are seven design principles for security in the cloud:

a. Implement a strong identity foundation
b. Enable traceability
c. Apply security at all layers
d. Automate security best practices
e. Protect data in transit and at rest
f. Keep people away from data
g. Prepare for security events
Reliability Pillar
The Reliability pillar encompasses the ability of a workload to perform its
intended function correctly and consistently when it’s expected to. This
includes the ability to operate and test the workload through its total
lifecycle.

Design Principles
There are five design principles for reliability in the cloud:
a. Automatically recover from failure
b. Test recovery procedures
c. Scale horizontally to increase aggregate workload availability
d. Stop guessing capacity
e. Manage change through automation
Performance Efficiency
The Performance Efficiency pillar includes the ability to use computing
resources efficiently to meet system requirements, and to maintain that
efficiency as demand changes and technologies evolve.
Design Principles
There are five design principles for performance efficiency in the cloud:
a. Democratize advanced technologies
b. Go global in minutes
c. Use serverless architectures
d. Experiment more often
e. Consider mechanical sympathy
Cost Optimization

The Cost Optimization pillar includes the ability to run systems to deliver business
value at the lowest price point.

Design Principles
There are five design principles for cost optimization in the cloud:
a. Implement cloud financial management
b. Adopt a consumption model
c. Measure overall efficiency
d. Stop spending money on undifferentiated heavy lifting
e. Analyze and attribute expenditure
Sustainability

The discipline of sustainability addresses the long-term environmental, economic,
and societal impact of your business activities.

Design Principles
There are six design principles for sustainability in the cloud:
a. Understand your impact
b. Establish sustainability goals
c. Maximize utilization
d. Anticipate and adopt new, more efficient hardware and software offerings
e. Use managed services
f. Reduce the downstream impact of your cloud workloads
AWS Design Principles
1. Scalability
➢ Scaling Horizontally – an increase in the number of resources
➢ Scaling Vertically – an increase in the specifications of an individual resource
2. Disposable Resources Instead of Fixed Servers
➢ Instantiating Compute Resources – automate the setup of new resources along with
their configuration and code.
➢ Infrastructure as Code – AWS assets are programmable. You can apply techniques,
practices, and tools from software development to make your whole infrastructure
reusable, maintainable, extensible, and testable.
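
As a concrete illustration of infrastructure as code, the sketch below creates a small CloudFormation stack from a template held in source code, using boto3. This is a minimal sketch under assumptions: the stack name, region, and the single S3 bucket resource are illustrative, not values from this module.

# Minimal sketch: provisioning a disposable resource from a template with boto3.
# The stack name, region, and template body are illustrative assumptions.
import json
import boto3

# A tiny CloudFormation template kept in code (version-controlled like any other source).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogsBucket": {"Type": "AWS::S3::Bucket"}  # hypothetical bucket resource
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Creating the stack provisions the resources; deleting the stack disposes of them,
# which is the "disposable resources instead of fixed servers" idea.
cfn.create_stack(
    StackName="module5-demo-stack",
    TemplateBody=json.dumps(template),
)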

3. Automation
➢ Serverless Management and Deployment – being serverless shifts your focus to
automation of your code deployment. AWS handles the management tasks for you.
➢ Infrastructure Management and Deployment – AWS automatically handles details,
such as resource provisioning, load balancing, auto scaling, and monitoring, so you can
focus on resource deployment.
➢ Alarms and Events – AWS services will continuously monitor your resources and
initiate events when certain metrics or conditions are met.
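
A minimal sketch of the alarms-and-events idea, using boto3 and CloudWatch: an alarm that notifies an SNS topic when average CPU in an Auto Scaling group stays above a threshold. The alarm name, group name, threshold, and topic ARN are illustrative assumptions.

# Sketch of "Alarms and Events": a CloudWatch alarm that notifies an SNS topic
# when average EC2 CPU crosses a threshold. Names, threshold, and the topic ARN
# are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="module5-high-cpu",                        # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "module5-asg"}],
    Statistic="Average",
    Period=300,                                          # evaluate 5-minute averages
    EvaluationPeriods=2,                                 # two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:module5-alerts"],
)
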
4. Loose Coupling

➢ Well-Defined Interfaces – reduce interdependencies in a system by allowing various
components to interact with each other only through specific, technology-agnostic
interfaces, such as RESTful APIs.
➢ Service Discovery – Applications that are deployed as a set of smaller services
should be able to be consumed without prior knowledge of their network topology
details. Apart from hiding complexity, this also allows infrastructure details to
change at any time.
➢ Asynchronous Integration – Interacting components that do not need an immediate
response and where an acknowledgement that a request has been registered will
suffice, should integrate through an intermediate durable storage layer.
➢ Distributed Systems Best Practices – Build applications that handle component
failure in a graceful manner.
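
One common way to handle component failure gracefully, in the spirit of the distributed-systems practice above, is to retry transient errors with exponential backoff and jitter. The sketch below is generic Python; call_downstream and the retry limits are illustrative assumptions.

# Sketch: handling a flaky downstream component gracefully with retries,
# exponential backoff, and jitter. call_downstream is a hypothetical stand-in.
import random
import time

def call_with_retries(call_downstream, max_attempts=5, base_delay=0.2):
    """Retry a transiently failing call a few times before giving up."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call_downstream()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the last attempt; the caller degrades gracefully
            # Exponential backoff with full jitter to avoid synchronized retries.
            delay = random.uniform(0, base_delay * (2 ** attempt))
            time.sleep(delay)

# Usage: wrap any call to a dependency that can fail transiently, e.g.
# result = call_with_retries(lambda: some_client.get_item())
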
5. Services, Not Servers

➢ Managed Services – provide building blocks that developers can consume to


power their applications, such as databases, machine learning, analytics, queuing,
search, email, notifications, and more.

➢ Serverless Architectures – allow you to build both event-driven and synchronous


services without managing server infrastructure, which can reduce the operational
complexity of running applications.
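
A minimal sketch of a serverless, event-driven service: an AWS Lambda handler that could sit behind an API Gateway proxy integration. The event shape and the greeting logic are illustrative assumptions.

# Sketch of a serverless service: an AWS Lambda handler.
# Assumes an API Gateway proxy integration passes query parameters in the event.
import json

def lambda_handler(event, context):
    # queryStringParameters may be absent or None.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
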
6. Databases
Choose the Right Database Technology for Each Workload

➢ Relational Databases – provide a powerful query language, flexible indexing
capabilities, strong integrity controls, and the ability to combine data from
multiple tables in a fast and efficient manner.

➢ NoSQL Databases – trade some of the query and transaction capabilities of
relational databases for a more flexible data model that seamlessly scales
horizontally. They use a variety of data models, including graphs, key-value pairs,
and JSON documents, and are widely recognized for ease of development,
scalable performance, high availability, and resilience. (A small key-value sketch
follows this list.)

➢ Data Warehouses – are a specialized type of relational database, which is
optimized for analysis and reporting of large amounts of data.
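
The key-value sketch referenced above uses boto3 with DynamoDB. The table name, key schema, and item attributes are illustrative assumptions, and the table is assumed to already exist with order_id as its partition key.

# Sketch: key-value access with DynamoDB via boto3.
# Table name, key schema, and attributes are illustrative assumptions.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
orders = dynamodb.Table("module5-orders")

# Write a JSON-document-style item keyed by order_id.
orders.put_item(Item={"order_id": "o-1001", "customer": "alice", "total": 42})

# Read it back by key.
response = orders.get_item(Key={"order_id": "o-1001"})
print(response.get("Item"))
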
6. Databases

Graph Databases – use graph structures for queries.

❑ Search Functionalities
➢ Search is often confused with query. A query is a formal database query, which is
addressed in formal terms to a specific data set. Search enables datasets to be
queried that are not precisely structured.
➢ A search service can be used to index and search both structured and free text
format and can support functionality that is not available in other databases, such
as customizable result ranking, faceting for filtering, synonyms, and stemming.
7. Managing Increasing Volumes of Data

➢ Data Lake – An architectural approach that allows you to store massive amounts of
data in a central location so that it’s readily available to be categorized, processed,
analyzed, and consumed by diverse groups within your organization.
8. Removing Single Points of Failure

❑ Introducing Redundancy
❖ Standby redundancy – when a resource fails, functionality is recovered on a
secondary resource with the failover process. The failover typically requires some
time before it completes, and during this period the resource remains unavailable.
This is often used for stateful components such as relational databases.
❖ Active redundancy – requests are distributed to multiple redundant compute
resources. When one of them fails, the rest can simply absorb a larger share of the
workload.
❑ Detect Failure – use health checks and collect logs.
8. Removing Single Points of Failure
❑ Durable Data Storage
❖ Synchronous replication – only acknowledges a transaction after it has been
durably stored in both the primary storage and its replicas. It is ideal for protecting
the integrity of data in the event of a failure of the primary node.
❖ Asynchronous replication – decouples the primary node from its replicas at the
expense of introducing replication lag. This means that changes on the primary
node are not immediately reflected on its replicas.
❖ Quorum-based replication – combines synchronous and asynchronous
replication by defining a minimum number of nodes that must participate in a
successful write operation. For example, with N replicas you can require W
acknowledgements for a write and read from R replicas, choosing W + R > N so
that every read overlaps the latest acknowledged write.
❑ Automated Multi-Data Centre Resilience – utilize AWS Regions and Availability
Zones (Multi-AZ Principle). (See Disaster Recovery section)

❑ Fault Isolation and Traditional Horizontal Scaling – Shuffle Sharding
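
The sketch below illustrates the shuffle-sharding idea just mentioned: each customer is deterministically assigned a small, pseudo-random subset of nodes, so any one failure or noisy customer affects only a small slice of the fleet. Node names, shard size, and the hashing choice are illustrative assumptions.

# Sketch of shuffle sharding: deterministically map each customer to a small,
# pseudo-random subset of nodes so failures or noisy neighbours affect few customers.
import hashlib
import random

NODES = [f"node-{i}" for i in range(8)]    # the fleet (illustrative)
SHARD_SIZE = 2                             # nodes per customer (illustrative)

def shuffle_shard(customer_id: str, nodes=NODES, shard_size=SHARD_SIZE):
    """Return the customer's personal shard, stable across calls."""
    seed = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    return random.Random(seed).sample(nodes, shard_size)

print(shuffle_shard("customer-a"))  # a deterministic pair of nodes for this customer
print(shuffle_shard("customer-b"))  # a different small subset, rarely identical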


9. Optimize for Cost
➢ Right Sizing – AWS offers a broad range of resource types and configurations for
many use cases.
➢ Elasticity – save money with AWS by taking advantage of the platform’s elasticity.
➢ Take Advantage of the Variety of Purchasing Options – Reserved Instances vs Spot
Instances (See AWS Pricing)

10. Caching
➢ Application Data Caching – store and retrieve information from fast, managed
in-memory caches (a cache-aside sketch follows this list).
➢ Edge Caching – serves content by infrastructure that is closer to viewers, which
lowers latency and gives high, sustained data transfer rates necessary to deliver
large popular objects to end users at scale.
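
The cache-aside sketch referenced above checks a cache before the source of truth and populates it on a miss. A simple in-process dict with a TTL stands in for a managed in-memory cache such as ElastiCache; fetch_from_db and the TTL value are illustrative assumptions.

# Cache-aside sketch: check the cache first, fall back to the source of truth on a miss,
# then populate the cache. An in-process dict with a TTL stands in for a managed cache.
import time

_cache = {}            # key -> (value, expiry_timestamp)
TTL_SECONDS = 60       # illustrative time-to-live

def get_product(product_id, fetch_from_db):
    entry = _cache.get(product_id)
    if entry and entry[1] > time.time():
        return entry[0]                        # cache hit
    value = fetch_from_db(product_id)          # cache miss: read the source of truth
    _cache[product_id] = (value, time.time() + TTL_SECONDS)
    return value

# Usage: get_product("p-1", lambda pid: {"id": pid, "name": "widget"})
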
11. Security
➢ Use AWS Features for Defense in Depth – secure multiple levels of your
infrastructure from network down to application and database.
➢ Share Security Responsibility with AWS – AWS handles the security OF the Cloud
while customers handle security IN the Cloud.
➢ Reduce Privileged Access – implement the Principle of Least Privilege controls.
➢ Security as Code – firewall rules, network access controls, internal/external
subnets, and operating system hardening can all be captured in a template that
defines a Golden Environment (a minimal example follows this list).
➢ Real-Time Auditing – implement continuous monitoring and automation of controls
on AWS to minimize exposure to security risks.
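
The security-as-code example referenced above captures a firewall rule in a small, repeatable boto3 script so it can be reviewed and versioned like any other code; a CloudFormation or similar template would serve the same purpose. The group name, VPC ID, and CIDR range are illustrative assumptions.

# Security-as-code sketch: a firewall rule captured in code so it can be reviewed,
# versioned, and reapplied. Group name, VPC ID, and CIDR range are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

group = ec2.create_security_group(
    GroupName="module5-web-sg",
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0123456789abcdef0",           # hypothetical VPC
)

# Least-privilege style ingress: only TCP 443 from one example range is allowed.
ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "example range"}],
    }],
)
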
12. Cloud Architecture Best Practices
There are various best practices that you can follow which can help you build an
application in the AWS cloud. The notable ones are:
1. Decouple your components – the key concept is to build components that do not
have tight dependencies on each other, so that if one component were to fail for some
reason, the other components in the system continue to work. This is also known
as loose coupling. It reinforces the Service-Oriented Architecture (SOA) design
principle that the more loosely coupled the components of a system are, the better
and more stably it scales.
2. Think parallel – this internalizes the concept of parallelization when designing
architectures in the cloud. It encourages you to implement parallelization whenever
possible and to automate the processes of your cloud architecture.
3. Implement elasticity – this principle is implemented by automating your deployment
process and streamlining the configuration and build process of your architecture. This
ensures that the system can scale in and scale out to meet demand without any
human intervention.
4. Design for failure – this concept encourages you to be a pessimist when designing
architectures in the cloud and to assume that the components of your architecture will
fail. It pushes you to always design your cloud architecture to be highly available
and fault-tolerant.
Scalability and Elasticity in Cloud Computing
Cloud Elasticity:

➢ Elasticity refers to the ability of a cloud to automatically expand or compress
infrastructural resources on a sudden rise or fall in requirement, so that the
workload can be managed efficiently.
➢ This elasticity helps to minimize infrastructural costs. It is not applicable to all
kinds of environments; it is helpful only in scenarios where the resource
requirements fluctuate up and down suddenly for a specific time interval. It is not
practical where persistent resource infrastructure is required to handle a heavy
workload.
➢ Elasticity is vital for mission-critical or business-critical applications, where any
compromise in performance may lead to significant business loss. Additional
resources are therefore provisioned for such applications to meet their
performance requirements.
➢ It works in such a way that when the number of users accessing the application
increases, additional compute, storage, and network resources (such as CPU,
memory, storage, or bandwidth) are provisioned automatically, and when there are
fewer users, those resources are automatically scaled back as required.
Cloud Elasticity:

➢ Elasticity is the ability to grow or shrink infrastructure resources (such as compute,
storage, or network) dynamically, as needed, to adapt to workload changes in the
applications in an autonomic manner.
➢ It maximizes resource utilization, which results in savings in overall infrastructure
costs.
➢ Depending on the environment, elasticity can be applied to resources beyond
hardware, including software, network capacity, QoS, and other policies.
➢ Elasticity is entirely dependent on the environment; in some cases it can become a
negative trait, for example where certain applications require guaranteed
performance.
➢ It is most commonly used in pay-per-use, public cloud services, where IT managers
are willing to pay only for the duration for which they consumed the resources.
➢ Example: Consider an online shopping site whose transaction workload increases
during a festive season like Christmas. For this specific period of time, the resources
need to be scaled up. To handle this kind of situation, we can go for a Cloud
Elasticity service rather than Cloud Scalability. As soon as the season is over, the
deployed resources can be requested for withdrawal.
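
One way such elasticity is commonly implemented on AWS is with an Auto Scaling target-tracking policy, sketched below with boto3. The group name, policy name, and the 50% CPU target are illustrative assumptions.

# Sketch of elasticity: a target-tracking scaling policy that lets an Auto Scaling group
# add capacity when average CPU rises and remove it when demand falls.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="module5-shop-asg",         # hypothetical group
    PolicyName="keep-cpu-around-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
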
Cloud Scalability:

✓ Cloud scalability is used to handle the growing workload where good performance is
also needed to work efficiently with software or applications. Scalability is commonly
used where the persistent deployment of resources is required to handle the
workload statically.
✓ Example: Consider you are the owner of a company whose database size was small
in its earlier days, but as time passed your business grew and the size of your
database increased, so in this case you just need to request your cloud service
vendor to scale up your database capacity to handle the heavier workload.
✓ It is totally different from what you have read above in Cloud Elasticity. Scalability is
used to fulfill the static needs of the organization, while elasticity is used to fulfill its
dynamic needs. Scalability is a similar kind of service provided by the cloud, where
customers pay per use. So, in conclusion, we can say that scalability is useful where
the workload remains high and increases statically.
Types of Scalability:

1. Vertical Scalability (Scale-up) – In this type of scalability, we increase the power of
existing resources in the working environment in an upward direction.

2. Horizontal Scalability (Scale-out) – In this kind of scaling, the resources are added
in a horizontal row.

3. Diagonal Scalability – A mixture of both horizontal and vertical scalability, where
the resources are added both vertically and horizontally.
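
The boto3 sketch below contrasts the scaling types above: horizontal scaling by raising an Auto Scaling group's desired capacity, and vertical scaling by changing a stopped instance's type; doing both is diagonal scaling. The group name, instance ID, and instance types are illustrative assumptions.

# Sketch contrasting the scaling types above with boto3.
# Group name, instance ID, and instance types are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Horizontal scaling (scale-out): add more instances to the fleet.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="module5-web-asg",
    DesiredCapacity=6,                       # e.g. up from 4
)

# Vertical scaling (scale-up): give one (stopped) instance a larger type.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    InstanceType={"Value": "m5.xlarge"},     # e.g. previously m5.large
)

# Diagonal scaling combines both: grow individual instances and add more of them.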
