AWS 1st Attempt


CLF-C02 Exam Simulation https://www.kaplanlearn.com/education/test/print/90570307?testId=28...

Simulated Exam February 12, 2024 Test ID: 283171153

Question #1 of 65 Question ID: 1603378

You have been managing your current production workload with several Reserved Instances for about 9 months.
You have been monitoring resources within the Amazon CloudWatch dashboard and notice several progressive
increases in overall workload each month. Your goal is to make sure your EC2 infrastructure has enough capacity
and overall bandwidth to support these smaller increases in workload.

What type of instance would meet this requirement?

A) On-Demand Instances

B) Dedicated Hosts

C) Reserved Instances

D) Spot Instances

Explanation

On-Demand Instances are good solutions for application workloads that are considered spiky, short term, and not
predictable. This option is considered flexible and low cost because there are no long contracts and no upfront
payment. For On-Demand Instances, you are paying per second or per hour based on the instance configuration
you choose. When your application workload spikes, your system will dynamically allocate your pre-determined On-
Demand Instance to help alleviate your total CPU and memory workload concerns. So, the price will vary based on
overall workload activity.

Spot Instances are viable for any type of workload that is classified as non-time sensitive, meaning the workload does not have to start or stop at a specific time. This differs from On-Demand Instance workloads, which, although spiky, occur at key times of the day. This type of instance is also good for security testing, development, integration, and validating overall loads on a system.

Reserved Instances are good for environments that need to use an Amazon Elastic Compute Cloud (EC2) instance for between one and three years. These instances should support applications that are online 24/7 and have an anticipated and well-known workload pattern. These instances are classified as supporting the base workload. Any abnormal increase in workload should be managed by Spot Instances or On-Demand Instances.

Dedicated Hosts are physical servers that are owned and supported by Amazon. One of their main benefits is the
reduction in licensing costs of traditional systems related to software licenses. These systems are good for
compliance requirements for external vendors and other internal support needs. They are purchased on an hourly
basis, which is similar to On-Demand solutions. Finally, they can be acquired as reserved servers at a greatly reduced price.

Objective:
Billing, Pricing, and Support

Sub-Objective:
Understand resources for billing, budget, and cost management

References:

Optimizing your costs for AWS services: Part 1 | AWS Startups Blog (amazon.com)

On-Demand DB instances for Amazon RDS - Amazon Relational Database Service

On-Demand DB instances for Aurora - Amazon Aurora

Question #2 of 65 Question ID: 1615044

You want to create industrial applications to perform remote operation monitoring, predictive quality, and
maintenance by connecting industrial sensors to the cloud securely. Which of the following systems should you use
for this?

A) Amazon Connect

B) AWS Control Tower

C) AWS IoT Core

D) AWS Launch Wizard

Explanation

You would use AWS Internet of Things (IoT) Core. AWS IoT Core is a technology that you use for connecting IoT
devices to the cloud securely and easily. It offers messaging features that are MQTT-based and help create
scalable, efficient, and cost-optimized IoT architectures. MQTT is a messaging protocol for machine-to-machine
communication. A related system, AWS IoT Greengrass, enables you to bring cloud capabilities to a local device.
You can use IoT Greengrass to deploy and manage application logic and data processing on IoT devices. With IoT
Greengrass, you can manage application logic running on devices using the cloud. The main difference between IoT
Core and Greengrass is that IoT Core is a service running on the cloud while IoT Greengrass is an edge runtime
system.
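As a rough illustration of the MQTT-based messaging model, here is a minimal sketch using the AWS SDK for Python (boto3); the Region, topic name, and payload are hypothetical placeholders, and real devices typically publish through a certificate-based MQTT client or device SDK rather than boto3.

    import json
    import boto3

    # Publish a telemetry reading to an MQTT topic via the AWS IoT data plane.
    # Region and topic are placeholder values for illustration only.
    iot_data = boto3.client("iot-data", region_name="us-east-1")

    iot_data.publish(
        topic="factory/line-1/temperature",   # hypothetical topic name
        qos=1,                                # at-least-once delivery
        payload=json.dumps({"sensor_id": "line-1-t4", "celsius": 71.3}),
    )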

AWS Control Tower is a service that allows you to manage a multi-account AWS system and orchestrate several
AWS services such as AWS Organizations and AWS IAM Identity Center. It provides landing zones which are
environments that contain all the organizational units (OUs), users, and resources that you need to keep in compliance with regulations.

You would not use AWS Launch Wizard. Launch Wizard is a system used to size, configure, and deploy AWS
resources for various third-party systems such as HANA-based SAP and Microsoft SQL Server Always On. SAP
High-performance ANalytic Appliance (HANA) is a database system that stores data in memory instead of on a disk.
Launch Wizard simplifies the deployment of applications and automates the process of selecting AWS resources and estimating costs.

You would not use Amazon Connect. This is a cloud contact center service that enables you to use omnichannel
communications for creating personalized experiences for your users. With Amazon Connect, you can offer chat
and voice support using factors such as tentative wait times and customer preferences.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify services from other in-scope AWS service categories

References:

AWS > Documentation > AWS IoT Core > Developer Guide > What is AWS IoT?

Question #3 of 65 Question ID: 1603349

You are using Amazon Elastic Compute Cloud (EC2) for running an application and need to send newsletters from
this application to thousands of recipients. Which of these AWS services will you use for this requirement?

A) Amazon EventBridge

B) Amazon Connect

C) Amazon Simple Email Service (Amazon SES)

D) Amazon OpenSearch Service

Explanation

You will use Amazon Simple Email Service (Amazon SES). Amazon SES is a platform for emails that enables you to
send and receive emails through your own domains and email addresses. By using Amazon SES for receiving
emails, you can create systems that have features like email autoresponder and unsubscribe. You can also create
applications that make support tickets from emails sent in by customers.
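For illustration, a minimal boto3 sketch of sending one message through Amazon SES; the addresses and Region are placeholders, and a real newsletter send would typically batch verified recipients and handle bounces and unsubscribes.

    import boto3

    # Send one email through Amazon SES (addresses below are placeholders).
    ses = boto3.client("ses", region_name="us-east-1")

    ses.send_email(
        Source="newsletter@example.com",
        Destination={"ToAddresses": ["subscriber@example.com"]},
        Message={
            "Subject": {"Data": "Monthly newsletter"},
            "Body": {"Text": {"Data": "Hello from the application running on EC2."}},
        },
    )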

You will not use Amazon OpenSearch Service. This service is used for deploying, operating, and scaling
OpenSearch clusters on AWS. OpenSearch is an open-source engine for search and analytics and can be used for clickstream analysis, real-time application monitoring, and log analytics.

You will not use Amazon EventBridge. This is a serverless system that allows you to create event-driven
applications that are scalable. EventBridge enables you to perform routing for events among AWS services.
EventBridge allows you to process events using event buses and pipes. An event-driven architecture involves
loosely coupled architecture where software systems function by sending and responding to events.

You will not use Amazon Connect. This is a cloud contact center service that enables you to use omnichannel
communications for creating personalized experiences for your users. With Amazon Connect you can offer chat and
voice support using factors like tentative wait times and customer preferences.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify services from other in-scope AWS service categories

References:

AWS > Documentation > Amazon Simple Email Service > Developer Guide > What is Amazon SES?

https://docs.aws.amazon.com/ses/latest/dg/Welcome.html

Question #4 of 65 Question ID: 1615031

Which of the following technologies should you use for securely connecting remote workers and on-premises
networks to your AWS cloud?

A) AWS VPN

B) AWS Snowmobile

C) AWS Cloud9

D) AWS Direct Connect

Explanation

You would use AWS Virtual Private Network (VPN). Two services comprise AWS VPN: AWS Client VPN and AWS
Site-to-Site VPN. You use AWS Client VPN for connecting users securely to AWS. You use AWS Site-to-Site VPN
for securely connecting a branch office or on-premises network to an AWS Virtual Private Cloud (VPC) within your
AWS cloud environment.

You would not use AWS Direct Connect. Direct Connect is better suited to creating a private, dedicated connection from a datacenter to an AWS VPC using a fiber-optic link.

You would not use AWS Cloud9 for this scenario. AWS Cloud9 provides a cloud-based integrated development
environment (IDE). You use Cloud9 for writing, running, and debugging application code.

Snowmobile is a data migration technology from AWS and is unsuitable for this scenario. AWS offers various
devices as part of its Snow family for migrating data in and out of AWS.

Objective:
Cloud Technology and Services

Sub-Objective:
Define methods of deploying and operating in the AWS Cloud

References:

AWS > Documentation > AWS VPN > Administrator Guide > What is AWS Client VPN?

https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html

AWS > Documentation > AWS VPN > User Guide > What is AWS Site-to-Site VPN?

https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html

Question #5 of 65 Question ID: 1603277

When does Amazon DynamoDB encrypt data at rest?

A) Only when you create a new table structure

B) Only when you use an ALTER TABLE command

C) When the first rows of data enter the table

D) Only when you create a new table structure within the US East (Ohio) Region

Explanation

Amazon DynamoDB offers encryption at rest by implementing AWS Key Management Service (KMS). Encryption at
rest can only be implemented when the new DynamoDB table is created. You cannot alter the table to add it later, it
must be done at the time of creation. Amazon recommends the encryption at rest option for data that is considered
sensitive. This option reduces the difficulties, such as time and cost, in protecting critical data.

You can create a table with encryption at rest enabled using either the Amazon DynamoDB console or the CLI command aws dynamodb create-table with the --sse-specification parameter set to Enabled=true.
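As a sketch, the equivalent call with boto3 looks roughly like the following; the table name and key schema are hypothetical, and the SSESpecification block is what turns on KMS-backed encryption when the table is created.

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Create the table with encryption at rest enabled from the start.
    dynamodb.create_table(
        TableName="PatientRecords",                      # hypothetical table name
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
        SSESpecification={"Enabled": True, "SSEType": "KMS"},
    )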

Encryption at rest is not enabled when the first rows of data enter the table because encryption is applied at the
structural level, and only during the creation of the table object. For the same reason, it cannot be enabled using an
ALTER TABLE command.

Only when you create a new table structure within the US East (Ohio) Region is incorrect because Amazon offers
encryption at rest in several AWS Regions. They include US East, US West, Canada, South America, Asia, and
many more.

Objective:
Security and Compliance

Sub-Objective:
Identify components and resources for security

References:

DynamoDB encryption at rest - Amazon DynamoDB

Encryption at rest: How it works - Amazon DynamoDB

Question #6 of 65 Question ID: 1615035

You want to use an AWS service for centralized and automated data protection over multiple AWS services for both
on-premises and in the cloud. Which of the following technologies should you use?

A) AWS Backup

B) Amazon MSK

C) AWS Batch

D) AWS Data Exchange

Explanation

You would use AWS Backup which is a service for configuring, scheduling, and monitoring AWS backup systems for
various services that include Amazon Elastic Block Store (EBS) volumes, EC2 instances, RDS databases, and
DynamoDB tables. With AWS Backup, you can back up data across various AWS services in a centralized and
automated manner. By creating back-up plans with AWS Backup, you can specify how long the backup needs to be
kept as well as how often a backup needs to be taken. A related service you need to know about is AWS Elastic
Disaster Recovery (AWS DRS) which is an AWS solution for disaster recovery that you can use for replicating
cloud-based or on-premises applications.
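A minimal boto3 sketch of such a backup plan is shown below; the plan name, vault name, schedule, and retention period are hypothetical values, and resources would still need to be assigned to the plan through a separate backup selection.

    import boto3

    backup = boto3.client("backup")

    # Daily backups at 05:00 UTC, retained for 35 days (all values are examples).
    backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "daily-35-day-retention",
            "Rules": [
                {
                    "RuleName": "daily",
                    "TargetBackupVaultName": "Default",
                    "ScheduleExpression": "cron(0 5 * * ? *)",
                    "Lifecycle": {"DeleteAfterDays": 35},
                }
            ],
        }
    )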

You would not use AWS Data Exchange for this scenario. AWS Data Exchange is a service that can be used to locate and use third-party data on AWS. It allows users to find data products from qualified data providers and
subscribe to these products. For data providers, AWS Data Exchange removes the requirement for building and
maintaining technology for data delivery or billing.

Amazon Managed Streaming for Apache Kafka (Amazon MSK) allows you to create applications that use Apache
Kafka for processing streaming data. It is a fully managed service. Apache Kafka provides an open-source platform
for creating real-time streaming systems and pipelines.

AWS Batch is a service that enables you to execute batch computing workloads on the AWS cloud. It is a fully
managed service that allows for the running of batch computing workloads of near limitless sizes.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS storage services

References:

AWS > Documentation > AWS Backup > Developer Guide > What is AWS Backup?

Question #7 of 65 Question ID: 1540865

What type of internal Amazon user can be created to mimic a service, application, or person that has access to
AWS resources?

A) Root user

B) Federated user

C) IAM user

D) IAM group

Explanation

An AWS Identity and Access Management (IAM) user can be created for an AWS account. You can create multiple
IAM users to manage multiple resources. You can create an IAM user for a person, service, or even application to
utilize specific AWS resources. All of this can be done through the AWS console, direct application programming
interface (API), or by using command line interface (CLI) components. Amazon best practices are to create an IAM
user for every user that needs access to AWS resources. You should always use the least privilege rule, which
means granting only the minimum privileges required based on job functionality. You can also leverage fine-grained
permissions to each IAM user from within the AWS account.
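As a rough sketch with boto3, creating an IAM user for a workload and granting it a narrowly scoped, AWS managed read-only policy might look like this; the user name and policy choice are examples only.

    import boto3

    iam = boto3.client("iam")

    # Create an IAM user for a specific workload (name is a placeholder).
    iam.create_user(UserName="reporting-service")

    # Grant only what the workload needs; here, read-only access to S3.
    iam.attach_user_policy(
        UserName="reporting-service",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )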

A root user is the main user within each AWS account. The account should never be used to support an application,
person, or service. This account has full access to all AWS resources and to private billing information as well. This
is an incorrect answer because Amazon does not recommend this privileged user role be used to support Amazon
services.

An IAM group is a security solution for managing several IAM users and gives you the ability to specify security
permissions to the entire group. This choice is incorrect because it is considered a group and not a specific user.
The correlation to a group is that you would grant permissions to the group and then assign people/users to specific
groups, and they will inherit the group’s permissions.

Federated users are typically granted temporary access to AWS resources and are considered external users like
that of a local Lightweight Directory Access Protocol (LDAP) group. This option is incorrect because federated users
are considered external to the AWS infrastructure.

Objective:
Security and Compliance

Sub-Objective:
Identify AWS access management capabilities

References:

Security best practices in IAM - AWS Identity and Access Management (amazon.com)

Question #8 of 65 Question ID: 1603224

Which of the following statements are FALSE when creating an Amazon S3 bucket? (Choose all that apply.)

A) You cannot use uppercase letters in bucket names.

B) The bucket name cannot be duplicated between accounts.

C) The bucket name can be changed after it has been created.

D) The bucket name can be duplicated between accounts.

E) You cannot use underscores in bucket names.

Explanation

When you create a bucket within Amazon Simple Storage Service (S3), you must make sure you know the name
you are going to use because once you have created that name, it cannot be changed. Bucket names must be
unique within Amazon S3. The names can be between 3 and 63 characters long and must start with a lowercase
letter or a number. You also cannot create bucket names that resemble IP addresses.
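For illustration, a minimal boto3 sketch of creating a bucket with a compliant name; the bucket name and Region are placeholders, and for us-east-1 the CreateBucketConfiguration argument is omitted.

    import boto3

    s3 = boto3.client("s3", region_name="us-east-2")

    # 3-63 characters, lowercase letters, numbers, and hyphens only;
    # no uppercase letters, underscores, or IP-address-like names.
    s3.create_bucket(
        Bucket="example-corp-app-logs-2024",            # hypothetical, globally unique
        CreateBucketConfiguration={"LocationConstraint": "us-east-2"},
    )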

Bucket names cannot be duplicated between accounts. AWS S3 bucket names have to be unique across the entire
Amazon S3 infrastructure.

You cannot use underscores in a bucket name.

You cannot use uppercase letters in bucket names.

Objective:
Cloud Concepts

Sub-Objective:
Identify design principles of the AWS Cloud

References:

Step 1: Create your first S3 bucket - Amazon Simple Storage Service

Question #9 of 65 Question ID: 1603221

What is a key capability of an Amazon S3 data lake architecture component?

A) The ability to implement single sign-on within the data lake.

B) Utilizes a broad perspective of data science, data analytics, and machine


learning in a centralized platform.

C) Storing data as key-value pairs using a NoSQL database across multiple


Regions.

D) Being able to query data in multiple Availability Zones.

Explanation

When using the Amazon Simple Storage Service (S3) data lake, you utilize a broad perspective of data science,
data analytics, and machine learning (ML), all in a centralized platform. It gives you the ability to house large
amounts of data from a number of different resources in one central location. You can create unique catalogs and
utilize tools to analyze, monitor, and focus on unique data patterns. You can also efficiently use other tools that
might be needed in the future for data processing.

An Amazon S3 data lake cannot store data as key-value pairs using a NoSQL database across multiple Regions.
This is a feature offered by DynamoDB, which is a NoSQL database that is fully managed and offers multi-Region
replication for data.

Being able to query data in multiple Availability Zones (AZs) is not a key capability of a data lake. Data lakes reside on one storage platform and can work with data assets in place.

Data lakes allow you to securely share summarized datasets and result sets with other end users, but they do not give you the ability to implement single sign-on within the data lake.

Objective:
Cloud Concepts

Sub-Objective:
Identify design principles of the AWS Cloud

References:

Storage Best Practices for Data and Analytics Applications - Storage Best Practices for Data and Analytics
Applications (amazon.com)

Question #10 of 65 Question ID: 1603276

What part of an Amazon Virtual Private Cloud (VPC) is considered stateful?

A) Security groups

B) Amazon EBS

C) AWS Snowball storage

D) Network access control list

Explanation

Amazon Virtual Private Cloud (VPC), the networking layer for Amazon Elastic Compute Cloud (EC2), is meant to
closely mimic an actual data center. One of the many components that makes up a VPC is a security group.
Amazon classifies its security groups as being stateful, which in this case means that requests sent from within the
EC2 instance are allowed, regardless of any form of pre-specified inbound rule.
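As an illustration of the inbound-rule model, here is a minimal boto3 sketch that opens HTTPS to a security group; the group ID is a placeholder. Because security groups are stateful, response traffic for connections allowed by this rule is permitted automatically.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound HTTPS from anywhere to a (hypothetical) security group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 443,
                "ToPort": 443,
                "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
            }
        ],
    )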

A network access control list is considered a stateless network component of Amazon VPC, which means that data
coming into an EC2 instance must be specifically allowed by creating the appropriate networking rule.

AWS Snowball storage uses a physical storage device that is transported to the data center location and then sent
back to Amazon, bypassing a connection to the Internet and avoiding the cost, time, and security concerns of
network data transfer. This is not part of a VPC.

Amazon Elastic Block Store (EBS) is a storage solution that provides block-level storage, as opposed to Amazon
S3, which provides object-level storage. Although this component would be considered part of an EC2 instance, it would not be classified as stateful because of the customizations needed for its creation.

Objective:
Security and Compliance

Sub-Objective:
Identify components and resources for security

References:

Control traffic to resources using security groups - Amazon Virtual Private Cloud

Question #11 of 65 Question ID: 1615027

You need a way to automate the process of evidence collection so that you can easily assess the effectiveness of
controls, including activities, procedures, and policies. Which of the following AWS services should you use for this?

A) AWS Security Hub

B) Amazon GuardDuty

C) Amazon QuickSight

D) AWS Audit Manager

Explanation

You would use AWS Audit Manager. This service is used for continuous auditing of AWS usage for a simplified way
of managing compliance and risk with standards in the industry and regulations. Using Audit Manager, you can
manage stakeholder reviews of existing controls during audits. It lets you create reports that are audit-ready with
minimal manual work.

You would not use AWS Security Hub. AWS Security Hub provides you with a detailed view of the security situation
on your AWS deployment. It helps protect your AWS environment using security best practices and industry
standards.

You would not use Amazon QuickSight. This is a business intelligence service that allows users to analyze data
using a simple conversational interface. It uses machine learning (ML) for performing data analysis and creating
forecasts.

You would not use Amazon GuardDuty. Amazon GuardDuty provides an intelligent threat-detection system for AWS resources and infrastructures. It monitors all activity on a network and on AWS accounts, looking for threats.

Objective:
Security and Compliance

Sub-Objective:
Understand AWS Cloud security, governance, and compliance concepts

References:

AWS > Documentation > AWS Audit Manager > User Guide > What is AWS Audit Manager?

Question #12 of 65 Question ID: 1603293

You are a web server administrator and you support web servers that are very busy. Your web servers are
experiencing issues with high CPU and abnormal spikes in memory because of unpredictable workloads.

What type of Amazon EC2 instance could you use to meet this requirement?

A) Reserved Instances

B) Spot Instances

C) Dedicated host

D) On-Demand Instances

Explanation

On-Demand Instances are good solutions for application workloads that are spiky, short term, and/or unpredictable.
This option is flexible and low cost because there are no long contracts and no upfront payments. For On-Demand
Instances, you are paying per second or per hour, based on the instance configuration you choose. When your
application workload spikes, your system will dynamically allocate your pre-determined On-Demand Instance to help
alleviate your total CPU and memory workload concerns. The price will vary based on overall workload activity.

Dedicated hosts are physical servers that are owned and supported by Amazon. One of their main benefits is the
reduction in licensing costs of traditional systems related to software licenses. These systems are good for compliance requirements for external vendors and other internal support needs. They are purchased on an hourly
basis, similar to On-Demand solutions. Finally, they can be acquired as reserved servers at a greatly reduced price.
Due to the cost, however, On-Demand is a better option.

Reserved Instances are good for environments that plan to use Amazon Elastic Compute Cloud (EC2) instances for
between one and three years. These instances support applications that are online 24/7 and have an anticipated
and well-known workload pattern. They are classified as supporting the base workload. Any abnormal increase in
workload should be managed by Spot or On-Demand Instances.

Spot Instances are viable for any type of workload that is classified as non-time sensitive, meaning that the workload does not have to start or stop at a specific time. This differs from On-Demand Instance workloads, which, although spiky, occur at key times for the business. This type of instance is also good for security testing, development, integration, and validating overall loads on a system.

Objective:
Cloud Technology and Services

Sub-Objective:
Define methods of deploying and operating in the AWS Cloud

References:

Amazon EC2 - Secure and resizable compute capacity – Amazon Web Services

Question #13 of 65 Question ID: 1603408

You want to monitor service limits related to Elastic IP addresses that are being used, active snapshots, and EBS
volumes. Which service would you use?

A) Amazon EC2

B) AWS Storage Gateway

C) Amazon SNS

D) AWS Trusted Advisor

Explanation

AWS Trusted Advisor can monitor Amazon Elastic Block Store (EBS) volumes, snapshots, provisioned input/output
operations per second (IOPS), and even magnetic volumes. It can also monitor Amazon Elastic Compute Cloud
(EC2) components, Elastic IP addresses, and Reserved Instance limits. This tool helps implement best practices
within your AWS EC2 environment by advising on reducing costs, increasing system performance, and reducing
security risks.

Amazon EC2 is an AWS component that creates compute capacity within the AWS cloud infrastructure. This
component is not used for auditing service limits.

Amazon Simple Notification Service (SNS) is a web service that end users, applications, and devices use to send
and receive messages within the AWS infrastructure. This is a notification service and does not allow you to audit service limits.

AWS Storage Gateway is a service that connects on-premises systems to your AWS infrastructure. It is not used for
monitoring service limits associated with EBS Snapshots and Elastic IP addresses.

Objective:
Billing, Pricing, and Support

Sub-Objective:
Identify AWS technical resources and AWS Support options

References:

AWS Trusted Advisor check reference - AWS Support (amazon.com)

Question #14 of 65 Question ID: 1603239

You work for a company that has several EC2 servers that were built three months ago to support a production
application. The plan is to have these production servers running with zero down time. You are planning on
upgrading the instance type in about a month.

What type of instance should you have purchased during the design of the application for cost-effective increases in
instance types?

A) On-Demand Instances

B) Convertible Reserved Instances

C) Spot Instances

D) Standard Reserved Instances

Explanation

A Convertible Reserved Instance can be exchanged within the same term and you can easily select new instance
attributes such as instance type and platform. You can also change the instance family scope and tenancy. There
are no limitations on the number of times you exchange your instance. The only requirement is that the instance
type you are changing to has to be of higher or equal value to the original instance.

Spot Instances are primarily viable for batch type processing environments, but are considered an instance that
could be interrupted. Therefore, they would not be a good fit for an application that requires 24/7 uptime.

On-Demand Instances are primarily used for temporary periods of increased workload and processing, and are not cost-effective for applications that must run 24/7.

When using Standard Reserved Instances, you are not allowed to make changes during the term of purchase.

Objective:
Cloud Concepts

Sub-Objective:
Understand concepts of cloud economics

References:

Exchange Convertible Reserved Instances - Amazon Elastic Compute Cloud

Question #15 of 65 Question ID: 1540863

What type of storage option is a regional service that gives you the ability to store and manage files within the AWS
Cloud?

A) Amazon EFS

B) AWS Storage Gateway

C) Amazon EBS

D) Amazon ElastiCache

Explanation

Amazon Elastic File System (EFS) is a regional service that allows you to create and manage file systems within the
AWS cloud infrastructure. These file systems can be shared among multiple Amazon Elastic Compute Cloud (EC2)
instances and have a pay-as-you-go (PAYG) cost function. They are also designed for increased input/output
operations per second (IOPS) and lower levels of latency.

Amazon Elastic Block Storage (Amazon EBS) is a storage solution that provides block-level storage, as opposed to
Amazon Simple Storage Service (S3), which provides object-level storage. This storage option would not be
specifically used for application migrations into the Amazon cloud infrastructure. This is a storage option to use once
you are in the Amazon EC2 instance and is not a file system storage solution.

Amazon ElastiCache is a scalable system that is considered an in-memory data store. This type of system places
data directly into memory components that dramatically increases application response times. This service is not
related to a file system used within the AWS infrastructure.

AWS Storage Gateway is a method to integrate your current on-premises system storage to easily migrate data into
AWS storage components. You can scale your storage costs and increase your potential capacity within minutes
while maintaining a high level of data security. AWS Storage Gateway offers three specific options for integrating
your on-premises data into the cloud. This storage option is not related to a file system storage solution.

Objective:
Security and Compliance

Sub-Objective:
Identify AWS access management capabilities

References:

Amazon EFS

Question #16 of 65 Question ID: 1603305

How many Internet gateways can be attached to an Amazon VPC?

A) 1

B) 5

C) 16

D) 28

Explanation

An Amazon Virtual Private Cloud (VPC) can only have one Internet gateway attached at a time. An Internet gateway is a horizontally scaled, redundant VPC component that allows communication between your Amazon Elastic Compute Cloud (EC2) instances and the Internet. The Internet gateway performs Network Address Translation (NAT) for Amazon EC2 instances that have public IP addresses. It uses target information from the routing table to push traffic to its appropriate destination.
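A minimal boto3 sketch of creating and attaching an Internet gateway follows; the VPC ID is a placeholder, and attempting to attach a second gateway to the same VPC would fail.

    import boto3

    ec2 = boto3.client("ec2")

    # Create an Internet gateway and attach it to a (hypothetical) VPC.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0abc1234def567890")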

16 refers to /16, the largest CIDR block (and therefore the maximum IP address range) for an Amazon VPC.

28 refers to /28, the smallest subnet size allowed for an Amazon VPC.

5 is the default number of VPCs allowed for an AWS account within a Region.

Objective:
Cloud Technology and Services

Sub-Objective:
Define the AWS global infrastructure

References:

Connect to the internet using an internet gateway - Amazon Virtual Private Cloud

Question #17 of 65 Question ID: 1603339

You need to select an AWS data storage option for your company, but you are presently unsure of the access
patterns of your data objects. Which storage class should you select?

A) S3 One Zone-IA

B) S3 Intelligent-Tiering

C) S3 Glacier

D) S3 Standard

Explanation

You should select Amazon Simple Storage Service (S3) Intelligent-Tiering. This class is for data that has a changing or unknown frequency of access. It charges a small monthly monitoring and automation fee for each object. S3 Intelligent-Tiering automatically moves objects that have not been accessed for 30 consecutive days into its Infrequent Access tier. If an object in the Infrequent Access tier is accessed again, it is moved back to the Frequent Access tier.
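For illustration, a minimal boto3 sketch of writing an object directly into the Intelligent-Tiering storage class; the bucket and key names are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Store the object in S3 Intelligent-Tiering from the moment it is uploaded.
    s3.put_object(
        Bucket="example-corp-analytics",                     # hypothetical bucket
        Key="uploads/clickstream-2024-02-12.json",
        Body=b'{"event": "page_view"}',
        StorageClass="INTELLIGENT_TIERING",
    )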

You would not use S3 One Zone-IA. This is a more affordable class than the S3 Standard classes and stores data in
a single Availability Zone (AZ). This is a choice for situations where a lower cost storage option is required, and the
data can be reproduced if the AZ fails.

You would not use S3 Standard. This is used for frequently accessed data and stores data across three AZs. It is
ideally used for data analytics, websites, and content delivery.

You would not use S3 Glacier. This is a low-cost storage option for data-archiving requirements. Data stored in this
class can be retrieved in a few minutes to a few hours.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS storage services

References:

What is Amazon S3? - Amazon Simple Storage Service

Question #18 of 65 Question ID: 1603343

You need to suggest key technologies your company can leverage on its AWS Cloud for its operations. Which of these AWS services can you use for block storage of data? (Choose two.)

A) EC2

B) S3

C) RDS

D) EFS

E) EBS

Explanation

Amazon Elastic File System (EFS) and Amazon Elastic Block Store (EBS) are both technologies used for block storage.

Amazon Simple Storage Service (S3) is not used for block storage but for object storage. Amazon Elastic Compute
Cloud (EC2) is used for computing resources and not dedicated to block storage. Relational Database Service
(RDS) is a managed database service from AWS that supports various database engines, including MySQL and
PostgreSQL.

AWS provides various core services across several categories, which include:

Compute – You use EC2 instances for processing and AWS Lambda for serverless computing.
Storage – You use Amazon S3, EBS, and EFS for storage.
Network – You use Amazon Route53 for domain name management and routing.
Database – You use RDS for a fully managed relational database and DynamoDB, which is a serverless key-value database.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS storage services

References:

Amazon Web Services Cloud - Overview of Amazon Web Services

Question #19 of 65 Question ID: 1603222

Which Amazon Elastic Block Storage (EBS) storage type provides cost-effective storage for data that is typically
accessed infrequently, with an IOPS of around 100?

A) HDD-backed volumes

B) General Purpose SSD (gp2) volumes

C) Provisioned IOPS SSD (io1) volumes

D) Magnetic volumes

Explanation

Magnetic volumes are low-cost storage solutions for data that is accessed infrequently. The focus for this type of
storage solution is not performance. It is used primarily for functional testing and validation of processes for
applications. This type of storage produces about 100 input/output operations per second (IOPS), and each volume
can have a range from 1GB to 1TB in size.

General Purpose Solid-State Drive (SSD) (gp2) volumes are new generation volumes that are considered the
default type used when creating an Amazon Elastic Compute Cloud (EC2) instance. This type of storage is suitable
for in-depth workloads with low-latency apps and normal ranges of transactional workloads. This storage option
provides about 160 MB/s with a maximum of 10,000 IOPS. They are backed by solid-state drives.

Provisioned IOPS SSD (io1) volumes offer the lowest level of latency available. They are suitable for intensive IOPS
and systems that require an extreme throughput. This storage option provides about 500 MB/s with a maximum of
32,000 IOPS. They are also backed by solid-state drives.

Hard Disk Drive (HDD)-backed volumes are storage backed by hard disk drives. This storage is good for large I/O
sizes and larger datasets that use such Amazon services as MapReduce, Extract, Transform, Load (ETL)
workloads, and data warehouse environments. This storage option provides a maximum throughput of 500 MB/s.
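As a rough sketch, the volume type is simply a parameter when a volume is created with boto3; the Availability Zone and size are placeholder values, and "standard" is the API name for magnetic volumes.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a 100 GiB General Purpose SSD volume (values are examples).
    ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,
        VolumeType="gp2",   # other values include "io1", "st1", "sc1", and "standard" (magnetic)
    )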

Objective:
Cloud Concepts

Sub-Objective:
Identify design principles of the AWS Cloud

References:

Amazon EBS volume types - Amazon Elastic Compute Cloud

Question #20 of 65 Question ID: 1603264

Which of the following is considered a security best practice for an operating system or application?

A) Enable potentially useful protocols and services to save on development time.

B) Add additional EBS volumes in case they are needed at a later time.

C) Remove IAM user accounts every calendar month.

D) Create a primary function for the Amazon EC2 instance, such as


separating web servers from database servers.

Explanation

It is important to create a primary function for the Amazon Elastic Compute Cloud (EC2) instance, such as
separating web servers from database servers and database servers from domain name system (DNS) servers. It is
important to have a multi-tier solution to your Amazon EC2 infrastructure. This will reduce the potential overall
security risks within your AWS environment. You will also want to enable security features for protocols, daemons
used, and system services. You should use protocols such as Secure Shell (SSH), which have built-in features for encryption and data integrity authentication.

IAM users need to be removed every time a user leaves the company or the department, not every calendar month.

Any and all protocols and services need to be disabled if they are not being used.

You would not add additional Amazon Elastic Block Store (EBS) volumes in case they are needed at a later time.
Only EBS volumes that are currently needed should be added. If you are going to use the EBS volume, then it can
be quickly allocated and utilized appropriately.

Objective:
Security and Compliance

Sub-Objective:
Understand AWS Cloud security, governance, and compliance concepts

References:

AWS Security & Compliance

https://d1.awsstatic.com/whitepapers/compliance/AWS_Compliance_Quick_Reference.pdf?secd_comp6

Question #21 of 65 Question ID: 1603266

What kind of strategy does Amazon offer for situations regarding accidental deletion within Amazon S3?

A) Application-level encryption

B) Backup replication

C) Digital signatures

D) Versioning

Explanation

Versioning and multi-factor authentication (MFA) on deletions within Amazon Simple Storage Service (S3) could be
used as a viable strategy for preventing delete actions that are considered accidental. Amazon versioning means
that you have multiple versions of an object within the same bucket inside Amazon S3. With versioning you can
retrieve, restore, and preserve specific versions of an object. So, whether you accidentally delete the data or
experience a server failure, you are able to retrieve your information. When you use MFA, you have the ability to
validate or confirm your actions before the actual deletion takes place.
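A minimal boto3 sketch of enabling versioning on a bucket is shown below; the bucket name is a placeholder. Enabling MFA Delete additionally requires the bucket owner's (root) credentials and an MFA argument, which is typically supplied from the AWS CLI.

    import boto3

    s3 = boto3.client("s3")

    # Keep every version of every object so accidental deletes can be undone.
    s3.put_bucket_versioning(
        Bucket="example-corp-records",                       # hypothetical bucket
        VersioningConfiguration={"Status": "Enabled"},
    )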

Application-level encryption is encryption that occurs at the application level. Data is encrypted through the
application and the only way to read the encrypted data is if you are officially authorized. This will not protect you
against accidental deletes.

A digital signature is a way to sign a digital document using encryption that involves the use of digital codes.
Amazon uses AWS Signature Version 4, which uses a secret access key to create a signing key.
This new key or the signing key can only be used within a uniquely identified AWS Region. It will not protect you
against accidental deletes.

Backup replication is an AWS solution geared more towards system, hardware, or infrastructure availability, and not
for accidental deletion.

Objective:
Security and Compliance

Sub-Objective:
Understand AWS Cloud security, governance, and compliance concepts

References:

How S3 Versioning works - Amazon Simple Storage Service

Configuring MFA delete - Amazon Simple Storage Service

Question #22 of 65 Question ID: 1603290

You work for a large company that manages electronic patient records. The primary application is configured with a
load balancer that is used to evenly distribute the workload between two production EC2 instances.

You are tasked with making sure the connection from the client medical facilities and the load balancer is properly
secured by using the appropriate SSL security policy.

What policy would you choose to accomplish this task?

A) Default security policy

B) Geolocation routing policy

C) Resource-based policy

D) Predefined security policy

Explanation

A predefined security policy allows you to determine which protocols and ciphers are used when negotiations are
occurring between the load balancer and the client. Elastic Load Balancing (ELB) uses the configuration of the load
balancer to leverage a custom security policy or a predefined security policy. These policies meet the security
standards and compliance requirements that also disable specific Transport Layer Security (TLS) protocol versions.

Geolocation routing policy is not the correct choice because this form of routing is only based on the geographic
location of moving traffic and is not a security policy.

Default security policy is not the correct choice. This type of policy is assumed because when you create an AWS
account, a default security group is created automatically within the default virtual private cloud (VPC). An ELB uses
a predefined security policy rather than a default.

Resource-based policy is not the correct choice because this type of policy uses JavaScript Object Notation (JSON)
documents to attach to specific AWS resources and work with inline policies. This type of policy is used within the
Amazon Simple Storage Service (S3) infrastructure.

Objective:
Cloud Technology and Services

Sub-Objective:
Define methods of deploying and operating in the AWS Cloud

References:

SSL negotiation configurations for Classic Load Balancers - Elastic Load Balancing (amazon.com)

Question #23 of 65 Question ID: 1603218

You need to analyze several terabytes worth of unstructured data using columnar storage. Which type of database
would you use?

A) Amazon RDS

B) Amazon Redshift

C) Amazon DynamoDB

D) Amazon Aurora

Explanation

Amazon Redshift is a petabyte-scaled infrastructure that is considered a data warehouse solution. Amazon Redshift
has the ability to process both structured and unstructured data. It uses columnar storage for query performance,
which reduces the amount of data loaded from disk to memory, lessening the amount of I/O needed for processing.

Amazon Aurora is a relational database that offers PostgreSQL- and MySQL-compatible database solutions. This
database is a relational database system, not a data warehouse environment.

An Amazon DynamoDB database provides a NoSQL database service that delivers millisecond returns on data to
the end user or application. It is a non-relational database rather than a data warehouse environment.

Amazon Relational Database Service (RDS) is a database cloud solution for applications that require Oracle,
PostgreSQL, MariaDB, MySQL, Microsoft SQL Server, and Amazon Aurora. Amazon Redshift is better suited for
processing terabytes worth of data.

Objective:
Cloud Concepts

Sub-Objective:
Identify design principles of the AWS Cloud

References:

Amazon Redshift conceptual overview - Amazon Redshift

Question #24 of 65 Question ID: 1603303

Which AWS service would you use to improve communication with your users that are located far from your existing
AWS Regions?

A) CloudWatch

B) Outposts

C) CloudTrail

D) CloudFront

Explanation

You would use Amazon CloudFront. CloudFront is a content delivery network (CDN) that caches copies of data at
locations around the world near customers. Caching copies of data locally near customers helps send data,
applications, and videos to your customers with low latency and high speeds. To do this, CloudFront uses Edge
locations, which are sites around the globe that speed up the delivery of content to users. Edge locations run a
combination of CloudFront and Route53 for ensuring customers access the correct web addresses with low latency.
CloudFront gets its files from an origin location that can be an Amazon S3 bucket or a web server.

You would not use AWS Outposts. AWS Outposts allow a company to use AWS services inside of their own data
center or company building. Outposts create a miniature Region inside of a data center, providing all AWS services
in an isolated private location. This is an example of a hybrid cloud approach.

You would not use CloudWatch. AWS CloudWatch allows you to monitor the AWS system in real-time by monitoring
and tracking resource metrics. A metric could be the CPU utilization for an Amazon Elastic Compute Cloud (EC2)
instance. You can also create a threshold for a metric and trigger an alert and/or an action when the metric reaches
the defined threshold.

You would not use CloudTrail. CloudTrail keeps track of all application programming interface (API) calls made in an
AWS account and records the API caller’s identity and source IP address, the time of the call, and other key
information. CloudTrail typically records events within 15 minutes of an API call being made. API calls are used in AWS for
provisioning and managing resources. You can filter API calls in CloudTrail based on the date and time of the call,
the user making the call, and the resources accessed by the call.

Another AWS service you need to be aware of for the exam is the AWS Global Accelerator. This service greatly
improves the performance and availability of global applications by utilizing the AWS global network infrastructure.
The Global Accelerator can improve user traffic performance by up to 60% by optimizing the path to a company’s
application. This keeps latency, packet loss, and jitter low. This is made possible by providing customers with two
static public IP addresses that act as entry points to the application. Global Accelerator automatically routes traffic to
the nearest endpoint that is healthy, ensuring that endpoint failure is mitigated. This way you can modify application
endpoints, including load balancers, EC2 instances, and elastic IP, in the backend, without the need to make any
changes that face your end users.

Objective:
Cloud Technology and Services

Sub-Objective:
Define the AWS global infrastructure

References:

What is Amazon CloudFront? - Amazon CloudFront

Question #25 of 65 Question ID: 1603259

You need to set up user access permissions on your AWS account based on security best practices. How would you
implement the principle of least privilege?

A) Allowing users access to anything they want

B) Denying access to users to all resources

C) Allowing users access only to do what they need

D) Denying users access to what they need

Explanation

Allowing users access only to do what they need is an example of access granted on the basis of the principle of
least privilege.

The other options are incorrect as they do not follow the principle of least privilege and either give more permission
than is required or deny permissions altogether.

Granting permissions based on the principle of least privilege ensures that users or roles do not have more
permissions than they require to perform specific tasks for their jobs. You implement the principle of least privilege
on AWS by granting AWS Identity and Access Management (IAM) users and roles the minimum number of
permissions and then only add permissions as required. You can grant permissions using an IAM policy for IAM
users. An IAM policy is a JavaScript Object Notation (JSON) document that specifies what application programming
interface (API) calls a user can make.

For example, you can give a user who needs to access data on an S3 bucket the permission to view the contents of
the bucket but not permission to delete objects in the bucket or create new buckets. For this, you would create a
permission statement in the IAM policy that has an Allow effect with an action line s3:ListBucket with a
corresponding resource ID for the specific S3 bucket to be accessed. Now the user can view the bucket but do no
other actions on it as per the principle of least privilege.

Remember that by default an IAM user when created has no permissions. Each action that is permissible for that
user needs to be granted explicitly via IAM policies.

Objective:
Security and Compliance

Sub-Objective:
Understand AWS Cloud security, governance, and compliance concepts

References:

Security best practices in IAM - AWS Identity and Access Management (amazon.com)

Question #26 of 65 Question ID: 1615049

Your company wants to maintain reserve EC2 instances in multiple Availability Zones and Regions to ensure
services during a failover event. Which of the following systems should you use for this?

A) Savings Plans

B) On-Demand Instances

C) On-Demand Capacity Reservations

D) Regional Reserved Instances

Explanation

You would use On-Demand Capacity Reservations. This allows you to reserve Amazon Elastic Compute Cloud
(EC2) compute capacity in an Availability Zone for any length of time. This makes it ideal for business-critical
workloads that need assurance for long- and short-term compute capacity. Capacity Reservations is useful for
business-critical events, regulatory requirements, and disaster recovery situations.
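As a sketch, reserving capacity in a single Availability Zone with boto3 might look like the following; the instance type, platform, zone, and count are all placeholder values.

    import boto3

    ec2 = boto3.client("ec2")

    # Reserve capacity for three m5.large Linux instances in one AZ (example values).
    ec2.create_capacity_reservation(
        InstanceType="m5.large",
        InstancePlatform="Linux/UNIX",
        AvailabilityZone="us-east-1a",
        InstanceCount=3,
    )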

Regional Reserved Instances and Savings Plans are not recommended options as neither of these reserves capacity, which is what is required in the scenario. Both options require a fixed commitment for one or three years. All
accounts in an organization can avail themselves of the hourly cost savings provided by Reserved Instances that
may have been bought by any other accounts. The consolidated billing feature of AWS Organizations considers all
accounts in an organization as one account.

On-Demand Instances are not a suitable choice for this scenario because there is a risk of not being able to get on-
demand capacity due to constraints with AWS.

Objective:
Billing, Pricing, and Support

Sub-Objective:
Compare AWS pricing models

References:

AWS > Documentation > Amazon EC2 > User Guide for Linux Instances > On-Demand Capacity Reservations >
Differences between Capacity Reservations, Reserved Instances, and Savings Plans

Question #27 of 65 Question ID: 1603271

You have created a VPC for your company’s AWS deployment. You need to implement a way of controlling incoming and outgoing traffic for your EC2 instance on the VPC. What would you use for this?

A) AWS WAF

B) NACLs

C) AWS Marketplace

D) Security groups

Explanation

You will use security groups for this. A security group is a virtual firewall that protects an EC2 instance. Security
groups allow all outbound traffic and deny all inbound traffic by default. You can customize a security group to
specify the kinds of traffic that may be permitted or denied.

You will not use a network access control list (NACL). An NACL is a virtual firewall for controlling incoming and
outgoing traffic on a subnet. A subnet is a section of a virtual private cloud (VPC) where resources can be grouped
based on their security or functional requirements.

You will not use AWS Web Application Firewall (WAF). A WAF controls incoming requests from a network into your
web applications. AWS WAF permits or denies traffic based on a web access control list (ACL). AWS WAF works in
conjunction with Amazon CloudFront as well as Application Load Balancer.

You will not use AWS Marketplace. This is a digital catalog that contains thousands of third-party software listings from across varying applications and industries. You can view detailed data on each software listing, which includes user
reviews, pricing options, and support plans. Some of the categories that AWS Marketplace provides software for
include: DevOps, business applications, machine learning (ML), Internet of Things (IoT), security, data products, and
infrastructure.

Objective:
Security and Compliance

Sub-Objective:
Identify components and resources for security

References:

Control traffic to resources using security groups - Amazon Virtual Private Cloud

Question #28 of 65 Question ID: 1603238

Which Auto Scaling option configures a unique number of instances that run 24/7?

A) Maintain current level

B) Dynamic scaling

C) Scheduled scaling

D) Manual scaling

Explanation

If you want a specific or minimum number of instances to be executing 24/7, then you would use a maintain current
level type of Auto Scaling plan. A periodic health check is performed on all running instances. When it finds an unhealthy instance, it terminates that instance and launches a new instance automatically. These health check notifications
can be generated from Amazon Elastic Block Store (EBS), a customer health check script, or Amazon Elastic
Compute Cloud (EC2).
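A minimal boto3 sketch of a maintain-current-level configuration is shown below; setting the minimum, maximum, and desired capacity to the same value keeps a fixed fleet size, and the group name, launch template, and subnet IDs are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Min = Max = Desired keeps exactly four instances running at all times.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier",                         # hypothetical name
        LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
        MinSize=4,
        MaxSize=4,
        DesiredCapacity=4,
        VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",     # placeholder subnets
    )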

Dynamic scaling adds or subtracts Amazon EC2 instances based on changes in demand. The maintain current level
scaling only utilizes a minimum number of instances and does not change based on demand.

Scheduled scaling manages EC2 instances based on workload patterns that are created at predetermined times.
Maintaining current levels is primarily focused on the minimum number of EC2 instances running and not based on
a time or schedule.

Manual scaling is when you manually customize adding or subtracting Amazon EC2 instances within the Amazon
EC2 console. This option is based on adding or subtracting instances that is typically done in earlier configurations
or later on after the application has been up and running and experiences an increase in customer workloads or new
processes.

Objective:
Cloud Concepts

Sub-Objective:
Understand concepts of cloud economics

References:

Scale the size of your Auto Scaling group - Amazon EC2 Auto Scaling

Question #29 of 65 Question ID: 1603330

Which two statements are correct applications of VPC components? (Choose two.)

A) Use a virtual private gateway to connect a VPC and office network.

B) Keep company cryptographic data on a public subnet.

C) Use Direct Connect to link a data center to the Internet.

D) Keep databases with customer personal data on a private subnet.

E) Configure security groups for stateless packet filtering.

Explanation

You need to keep databases with customer personal data on a private subnet to ensure that no one outside of the
virtual private cloud (VPC) can access this sensitive information. In contrast, a customer-facing website can be
placed on a public subnet as it would need to be accessed from the Internet. A VPC spans across multiple
Availability Zones (AZs) in a Region. When you create a VPC, you need to specify a range of IP addresses for it
through a Classless Inter-Domain Routing (CIDR) block, such as 10.0.0.0/16. This serves as the primary CIDR
block for the VPC that you create.
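For illustration, a minimal boto3 sketch of creating a VPC with a /16 primary CIDR block and carving a private subnet out of it; all CIDR ranges and the Availability Zone are placeholder values.

    import boto3

    ec2 = boto3.client("ec2")

    # Create the VPC with a /16 primary CIDR block, then a /24 subnet inside it.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    ec2.create_subnet(
        VpcId=vpc_id,
        CidrBlock="10.0.1.0/24",
        AvailabilityZone="us-east-1a",
    )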

You would use a virtual private gateway (VPG) to connect a VPC and office network through a virtual private
network (VPN) connection. To make a secure connection between an on-premises data center and a VPC on AWS,
you need to have a VPN connection using a VPG. A VPC is an isolated portion of the AWS Cloud that has
resources defined and restricted for use by a customer.

You would not configure security groups for stateless packet filtering as security groups are stateful by default.
Security groups remember incoming packet responses if they have seen a corresponding packet request leaving.
By default, security groups allow all outgoing packets and deny all incoming packets. You can add Amazon Elastic
Compute Cloud (EC2) instances to security groups to restrict access to them.

You would not use Direct Connect to link a data center to the Internet. Direct Connect is used to create a private
dedicated connection from a data center to an AWS VPC using a fiber optic link. For connections to the Internet, an
Internet gateway needs to be used.

You would not keep company cryptographic data on a public subnet. This data is highly confidential and must be
stored securely on a private subnet with highly restricted access. Only data that is authorized for public access can
be placed on public subnets.

The AWS networking landscape works like this: a client sends a packet via the Internet; the packet enters the AWS
Cloud via an Internet gateway and is then filtered through a network access control list (NACL). If it is approved, it
enters a public subnet that contains EC2 instances inside their security groups, and if the packet is permitted by the
security groups, it reaches the EC2 instances.

For providing connectivity to resources inside your VPC to resources outside of the VPC, you can use VPN
connections, Network Address Translation (NAT) devices, and Internet gateways. An AWS PrivateLink allows you to
connect resources inside your VPC to services using private IP addresses. This works as if these services were
being hosted inside of your VPC. VPC endpoints are created by service consumers that are used to connect to
endpoint services, which may be hosted by a service provider.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS network services

References:

Amazon Virtual Private Cloud Documentation

AWS PrivateLink concepts – Amazon Virtual Private Cloud

Question #30 of 65 Question ID: 1603201

Which two activities will a database company need to pay special attention to so that it can grow its business value?
(Choose two.)

A) The way database backups are stored

B) How MySQL is installed on its servers

C) The way its data tables are built

D) The methods used for creating its data structures

E) How the OS for servers is patched

Explanation

The database company will need to focus on the way its data tables are built and the methods used for creating its
data structures. This way the company can engineer solutions that better serve its customers and not invest time
and finances on non-productive activities like running and maintaining data centers. A company needs to create
proprietary business assets that are unique to a business and directly generate its revenue.

The other options are incorrect because they entail everyday operational overhead that can be passed on to a cloud
provider using automation so that the company can focus on engineering its business solutions.

To focus on business value, a company needs to focus on activities that separate it from its competition. These
include the way tables and data structures are built and managed in a company’s database system. Tasks like
installing database engines, performing backups, patching servers, and managing drive storage are repetitive and
time consuming. These can easily be taken care of by AWS because they do not contribute to a company’s
business value.

Objective:
Cloud Concepts

Sub-Objective:
Define the benefits of the AWS Cloud

References:

Six advantages of cloud computing - Overview of Amazon Web Services

Question #31 of 65 Question ID: 1540861

You are asked to create user accounts and configure their access permissions for a corporate AWS deployment.
Which of these are IAM best practices you will use?

A) Use the principle of least privilege.

B) Use the AWS root account as much as possible.

C) Use IAM roles for granting temporary access.

D) Turn off MFA for the root account.

E) Reduce password complexity for users.

Explanation

As best practices you will use the principle of least privilege and use AWS Identity and Access Management (IAM)
roles for granting temporary access to employees.

You will not use the AWS root account as much as possible as that is not recommended. Instead, user accounts
should be used for everyday tasks.

You will not turn off multi-factor authentication (MFA) for the root account as it is a necessary security best practice.
MFA should be enabled for all accounts, including the root account.

You will not reduce password complexity for users as that is not a security best practice. Password complexity
provides enhanced security. You should also keep a password rotation system in place.

An IAM user is an AWS identity consisting of a name and credentials. It represents a person or application that can
access AWS resources and services. An IAM user should be created for every person who requires access to AWS.
You can assign several IAM users to IAM groups and then attach IAM policies to those groups. These policies will
then apply to all users within that group.

IAM policies are JavaScript Object Notation (JSON) documents that specify the permissions an IAM user has to
various AWS resources and services. As a rule, you must follow the principle of least privilege when creating IAM
policies. Managed policies are IAM policies that are created and managed by AWS. These provide permissions for
various common use cases, thereby simplifying the process for new AWS users to assign permissions to users and
groups. Permissions in AWS managed policies cannot be changed. In contrast, a customer-managed policy can
have the exact permissions you need to specify for a job role and can follow the principle of least privilege. An easy
way to create a customer-managed policy is to import an existing AWS managed policy and then customize it.
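
For example (the policy name and bucket ARN are hypothetical), a customer-managed policy that follows the principle of least privilege by allowing only read access to a single bucket could be created with boto3 roughly like this:

    import json
    import boto3

    iam = boto3.client("iam")
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],                      # only the action that is needed
                "Resource": "arn:aws:s3:::example-reports-bucket/*",
            }
        ],
    }
    iam.create_policy(
        PolicyName="ReadOnlyReportsAccess",
        PolicyDocument=json.dumps(policy_document),
    )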

An IAM role is an identity that can be given to a user for attaining temporary permissions. When an IAM user or
service assumes an IAM role, they will discard all their other permissions and assume the permissions specified for
the IAM role. You should use IAM roles for situations that require access to resources and services on a temporary
basis.

You don’t have to create new IAM users for each employee in your organization, but you can instead simply federate
all existing users into your AWS account. This way, employees can use their company credentials to log into AWS,
and their corporate identities will be mapped to corresponding IAM roles. This is an example of AWS IAM
authentication and authorization as a service.

There are a few key IAM security best practices, which include:

Never use your AWS account root access key, and keep it locked up.
Use IAM roles to give permissions for performing tasks.
Use the principle of least privilege for allowing only those permissions necessary for specific tasks.
Start by using AWS managed policies for common AWS use cases, and then later create custom policies that
can enforce the principle of least privilege.
Review and validate all your IAM policies to ensure their security.

Objective:
Security and Compliance

Sub-Objective:
Identify AWS access management capabilities

References:

Security best practices in IAM - AWS Identity and Access Management (amazon.com)

Question #32 of 65 Question ID: 1603321

You need to use a suitable technology for achieving single-digit millisecond write and microsecond read latency for a modern
application with durability using multiple Availability Zones. Which of these systems should you use for this?

A) AWS Outposts

B) Amazon MemoryDB for Redis

C) AWS Fargate

D) AWS Elastic Load Balancing

Explanation

You will use Amazon MemoryDB for Redis which is an in-memory database suited for workloads needing a very fast
primary database that is Redis compatible. Redis is an open source, in-memory data structure store. It can be used as a
streaming engine, message broker, cache, and database. A related service is ElastiCache for Redis which is a web
service for managing and scaling cache environments or in-memory data stored in the cloud. You will opt for
MemoryDB for Redis in situations where your workloads need a durable database with extremely fast performance.
You can opt for ElastiCache for Redis when you have caching workloads and need accelerated access to an
existing primary database.

You will not use AWS Elastic Load Balancing (ELB) which is the process of splitting incoming traffic among several
instances. ELB ensures that no Amazon Elastic Compute Cloud (EC2) instance is left unused and similarly no
instance is used more than required by sharing traffic evenly across several EC2 instances.

You would not use AWS Outposts. AWS Outposts allow a company to use AWS services inside of their own data
center or company building. Outposts create a miniature Region inside of a data center, providing all AWS services
in an isolated private location. This is an example of a hybrid cloud approach.

You will not use AWS Fargate. This technology gives you a serverless compute platform for Amazon Elastic
Container Service (ECS) and Elastic Kubernetes Service (EKS). This allows you to run your containers inside of a
fully managed serverless environment.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS database services

References:

AWS > Documentation > Amazon MemoryDB > Developer Guide > What is MemoryDB for Redis?

https://docs.aws.amazon.com/memorydb/latest/devguide/what-is-memorydb-for-redis.html

When should I use MemoryDB versus Amazon ElastiCache for Redis?

https://aws.amazon.com/memorydb/faqs/


Question #33 of 65 Question ID: 1603371

It is important that, as you transition from your on-premises system into the Amazon Cloud, you focus on savings as
you design your Amazon infrastructure and as you plan for your data migration.

Which pillar of the AWS Well-Architected Framework is focused on the logical and functional requirements and the
overall refinement of the smallest price point available in a system?

A) Storage

B) Cost Optimization

C) Performance Efficiency

D) Reliability

Explanation

Cost Optimization is a framework pillar that supports the improvement and efficiencies of an AWS infrastructure over
a complete lifetime. The purpose for Cost Optimization is to meet functional requirements while achieving the
smallest price point available. From a design perspective, its principles are overall efficiency, consumption modeling,
not spending money on data centers, analyzing expenditures, and using managed services.

Performance Efficiency is a framework pillar that supports computing resources to maintain and meet business
requirements as technologies change over time within the AWS infrastructure. The focus is on performance rather
than cost.

Storage is a sub-component found under the performance efficiency pillar. Storage refers to the different types of
storage relating to file, block, and object level storage. This is not related to the cost pillar.

Reliability is focused on the stability of AWS systems and their ability to support business value with long uptimes
and durable systems. This pillar is not focused on cost but rather the durability and uptime of an AWS resource.

Objective:
Billing, Pricing, and Support

Sub-Objective:
Compare AWS pricing models

References:

Cost Optimization Pillar - AWS Well-Architected Framework - Cost Optimization Pillar (amazon.com)


Question #34 of 65 Question ID: 1615042

You need to run a certain software tool on AWS compute resources and allow your users to securely access a
single version of it using any device while ensuring high levels of performance. Which AWS system should you use
for this requirement?

A) Amazon WorkSpaces

B) Amazon Connect

C) AWS Data Exchange

D) Amazon AppStream

Explanation

You would use Amazon AppStream. AppStream enables users to access desktop applications instantly from any
location. The AWS resources needed to run the applications are managed by AppStream which provides automatic
scaling. AppStream's automatic scaling feature automatically adjusts infrastructure to match user demand, optimizes
resource utilization for both performance and cost, and eliminates manual capacity planning and management.
Application streaming can be done using an HTML5-capable web browser or the AppStream client. AppStream is
suited to hosting a specific application on AWS, while Amazon WorkSpaces is used for creating virtual desktops for
a team.

You would not use Amazon WorkSpaces. Amazon WorkSpaces enables you to provision workspaces which include
virtual desktops for Ubuntu Linux, Microsoft Windows, or Amazon Linux. This removes the need for you to acquire
hardware or install software. WorkSpaces allows users to access virtual desktops using a browser. Amazon
WorkSpaces Web is a WorkSpaces capability suitable for web-based workloads that are secure.

You would not use Amazon Connect. This is a cloud contact center service that enables you to use omnichannel
communications for creating personalized experiences for your users. With Amazon Connect, you can offer chat
and voice support using factors such as tentative wait times and customer preferences.

You would not use AWS Data Exchange for this scenario. AWS Data Exchange is an AWS service that can be used
by AWS customers to locate and use third-party data on AWS. It allows subscribers to find data products from
qualified data providers and subscribe to these products. For data providers, AWS Data Exchange removes the
requirement for building and maintaining technology for data delivery or billing.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify services from other in-scope AWS service categories


References:

AWS > Documentation > Amazon AppStream 2.0 > Administration Guide > What Is Amazon AppStream 2.0?

Question #35 of 65 Question ID: 1603314

You need to create a historical CPU report on your Amazon Elastic Compute Cloud (EC2) instance. How long does
Amazon CloudWatch keep metric data?

A) 15 days

B) 15 months

C) 6 months

D) 1 month

Explanation

Amazon CloudWatch metrics automatically expire after 15 months when no data is published into them. Metrics are
a basic CloudWatch concept which represent a set of data points that are time-ordered. A metric is a variable that
can be monitored and each data point is a value of that variable at a specific time.

You can monitor disk reads and writes and even CPU information that can help you determine if you need to add
additional Amazon Elastic Compute Cloud (EC2) instances for load balancing purposes.
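
A historical CPU report like the one in the question could be pulled with boto3 roughly as follows (the instance ID is a placeholder); any data older than 15 months would simply not be returned:

    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(days=90),
        EndTime=datetime.utcnow(),
        Period=86400,                 # one data point per day
        Statistics=["Average"],
    )
    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 2))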

15 days, 1 month (30 days), and 6 months are all incorrect choices because CloudWatch automatically expires
metrics after 15 months. A data point older than 15 months expires as new data points arrive.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS compute services

References:

AWS > Documentation > Amazon CloudWatch > User Guide > Amazon CloudWatch concepts > Metrics

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Metric

Question #36 of 65 Question ID: 1603273


Which category in AWS Trusted Advisor provides you with a check to ensure that MFA is activated for the root
account?

A) Fault Tolerance

B) Security

C) Performance

D) Cost Optimization

Explanation

The Security category in AWS Trusted Advisor checks to ensure that multi-factor authentication (MFA) is activated
for the root account. The other categories do not have this ability.

AWS Trusted Advisor is a tool available online that can scan your entire AWS environment and provide real-time
feedback based on AWS best practices. AWS Trusted Advisor checks your resources based on five pillars:

Cost Optimization – This indicates underutilized instances or idle databases that can be stopped or deleted to
save money.
Performance – This indicates ways to improve the throughput of provisioned compute instances. An example is
an Amazon Elastic Block Store (EBS) volume whose performance may be affected by its associated Amazon
Elastic Compute Cloud (EC2) instance’s throughput.
Security – This indicates settings like weak password policies for AWS Identity and Access Management (IAM)
user accounts or lack of MFA activation for the root account. This can also cover public access for EC2
instances, which can pose a security threat.
Fault Tolerance – This can indicate a lack of backups, like EBS volumes that do not have snapshots taken of
them. This can also cover EC2 instances that have not been launched across several Availability Zones (AZs).
Service Limits – This indicates AWS service limits, like using up five virtual private clouds (VPCs), which is the
maximum limit for an AWS Region.

Based on its analysis, Trusted Advisor provides a dashboard that lists items requiring your attention, each of which
have associated levels of severity going from high to low. The severity is indicated by:

Red – Needs immediate action
Orange – Investigation required
Green – No issues found
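
If your account has a support plan that includes the AWS Support API (Business tier or above), the root-account MFA check could, as a sketch, be read programmatically like this (the name-matching logic is illustrative):

    import boto3

    # The AWS Support API requires a Business/Enterprise plan and uses the us-east-1 endpoint.
    support = boto3.client("support", region_name="us-east-1")
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    mfa_check = next(c for c in checks if "MFA" in c["name"])       # Security category check
    result = support.describe_trusted_advisor_check_result(checkId=mfa_check["id"], language="en")
    print(result["result"]["status"])                               # e.g. "ok", "warning", or "error"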

Objective:
Security and Compliance

Sub-Objective:
Identify components and resources for security


References:

AWS Trusted Advisor - AWS Support (amazon.com)

Question #37 of 65 Question ID: 1603213

Which Amazon resource could you use to gather IP addresses for subnets classified as private within your VPC
infrastructure?

A) VPC peering

B) AWS Direct Connect

C) Virtual private cloud

D) VPC Flow Logs

Explanation

Virtual Private Cloud (VPC) Flow Logs is an Amazon feature that gives you the ability to gather details regarding IP
addresses going from and to different network components within your VPC. All of this data is stored in Amazon
CloudWatch logs. Once the data is stored within the CloudWatch logs, you can retrieve and view the data
appropriately. You are not charged for using VPC Flow Logs, but you are charged for using Amazon CloudWatch
logs.
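
A flow log that publishes to CloudWatch Logs could be enabled for a VPC roughly like this with boto3 (the VPC ID, log group name, and IAM role ARN are placeholders):

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_flow_logs(
        ResourceIds=["vpc-0123456789abcdef0"],
        ResourceType="VPC",
        TrafficType="ALL",                          # capture accepted and rejected traffic
        LogDestinationType="cloud-watch-logs",
        LogGroupName="my-vpc-flow-logs",
        DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
    )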

VPC peering uses private IPv4 or IPv6 addresses to connect Amazon VPCs in the same Region, in different
Regions, or in different AWS accounts, allowing them to communicate as though they were in the same network.
VPC peering is a highly recommended solution for connecting several Amazon VPCs within an individual Region.
This resource is not used for gathering IP address information.

An AWS Direct Connect is a private connection that links your remote network to an Amazon VPC. Another way of
describing an AWS Direct Connect is a link between your on-premises network and your Amazon VPC. This is not
an Amazon feature for gathering statistics associated with IP address connections.

A VPC is a logical entity that gives you the ability to create subnets, modify IP address ranges, change network
gateways, configure route tables, and modify advanced security settings to build your own logical network. This is
not a tool used for gathering IP address information.

Objective:
Cloud Concepts

Sub-Objective:
Define the benefits of the AWS Cloud


References:

Logging IP traffic using VPC Flow Logs - Amazon Virtual Private Cloud

Question #38 of 65 Question ID: 1603407

You are supporting a production AWS EC2 instance. You are notified that your AWS instance has a corrupted EBS
volume. What AWS resource would graphically identify the issue and allow you to create and configure forward-
looking notifications across multiple channels?

A) AWS Personal Health Dashboard

B) AWS Inspector

C) AWS Config

D) AWS Trusted Advisor

Explanation

AWS Personal Health Dashboard allows you to mitigate problems graphically and to create alerts across multiple
different channels within the AWS infrastructure. The graphical dashboard is a personal view that monitors AWS
services and the underlying AWS resources. It also displays timely information and is relevant to your specific
system, while giving you the ability to receive proactive notifications.
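
For accounts with a Business or Enterprise Support plan, the underlying AWS Health API can also be queried programmatically; as a sketch (the filter values are illustrative), open EBS-related events could be listed like this:

    import boto3

    # The AWS Health API is served from the us-east-1 endpoint.
    health = boto3.client("health", region_name="us-east-1")
    events = health.describe_events(
        filter={
            "services": ["EBS"],
            "eventStatusCodes": ["open", "upcoming"],
        }
    )
    for event in events["events"]:
        print(event["arn"], event["eventTypeCode"], event["statusCode"])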

AWS Trusted Advisor is for displaying areas of your AWS infrastructure that exceed limits or fall under utilization
thresholds so that you are not wasting AWS resources. This resource is not for monitoring the overall health of your
AWS infrastructure, but mainly focuses on areas that need to be governed.

AWS Config is incorrect because you would not use a configuration-tracking service to identify service health issues
or send notifications about them.

AWS Inspector is incorrect because this resource is used for risk assessments related to vulnerability information
rather than monitoring the health of AWS resources.

Objective:
Billing, Pricing, and Support

Sub-Objective:
Identify AWS technical resources and AWS Support options

References:

AWS service quotas - AWS General Reference (amazon.com)

AWS Health Dashboard (amazon.com)


Question #39 of 65 Question ID: 1603246

You work for a large company that manages automobile sales. They have just moved their first production
application to the AWS infrastructure. Your technical leadership wants to make sure that security practices are
structured and implemented within the AWS environment.

Which of the following options would be considered the customer’s responsibility?

A) Patching the underlying system within AWS.

B) Patching the AWS storage.

C) Patching the OS within an EC2 instance.

D) No patching is required by the customer.

Explanation

Patching the OS within an Amazon Elastic Compute Cloud (EC2) instance is the customer’s responsibility. When
you create an Amazon EC2 environment, you create the specific operating system based on a preselected Amazon
Machine Image (AMI). It is the customer’s responsibility to appropriately patch their EC2 instance after it has been
created. Keep in mind that the way to keep the EC2 instances patched is by keeping an up-to-date AMI, which
would include the base operating system and the necessary patches.

Patching the underlying system within AWS is not correct because this falls under the responsibility of Amazon.

Patching the AWS storage is not correct because this is also a requirement for Amazon. They are responsible for
maintaining the hardware and global infrastructure.

No patching is required by the customer is not correct because patching is considered a shared responsibility, which
means that the customer and Amazon both have specific infrastructures that are required to be patched
appropriately.

Objective:
Security and Compliance

Sub-Objective:
Understand the AWS shared responsibility model

References:

What is Amazon EKS? - Amazon EKS

Maintaining a DB instance - Amazon Relational Database Service


Question #40 of 65 Question ID: 1603359

You need to use an AWS service that allows you to automate evidence collection and perform risk and compliance
management. Which of these services should you use for this scenario?

A) AWS Certificate Manager (ACM)

B) AWS Audit Manager

C) AWS Directory Service

D) Amazon Detective

Explanation

You will use AWS Audit Manager which is a service that simplifies the process of risk and compliance management
as per industry standards and regulations. It performs automation for evidence collection and allows you to assess
the correct working of controls including activities, procedures, and policies. During a scheduled audit, AWS Audit
Manager enables the management of stakeholder review of controls which eases the process of creating reports
that are audit ready.

You will not use AWS Certificate Manager (ACM). ACM helps you create, store, and renew public and private
Secure Sockets Layer (SSL)/Transport Layer Security (TLS) X.509 certificates and encryption keys that are used for
protecting your AWS applications and websites. You can deploy ACM certificates via Amazon CloudFront, Amazon
Application Programming Interface (API) Gateway, or Elastic Load Balancing.

You will not use Amazon Detective. This is a service for analyzing, investigating, and identifying underlying causes
of suspicious activities. Detective collects AWS resource log data automatically and generates visualizations using
graph theory, machine learning (ML), and statistical analysis for fast and efficient investigation for security issues.

You will not use AWS Directory Service. This is a service providing several means to utilize Microsoft Active
Directory (AD) with AWS services. AWS Directory Service provides a choice of multiple directories and allows the
use of applications that are aware of Lightweight Directory Access Protocol (LDAP) and Microsoft AD. A directory
stores data related to devices, users, and groups. An administrator can use directories for managing access to
resources and data.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify services from other in-scope AWS service categories

References:

AWS > Documentation > AWS Audit Manager > User Guide > What is AWS Audit Manager?


https://docs.aws.amazon.com/audit-manager/latest/userguide/what-is.html

Question #41 of 65 Question ID: 1603341

Which technology would you use for a block storage option for EC2 instances that is scalable to petabyte levels and
provides high-availability and concurrent data access capabilities?

A) S3

B) Instance store

C) EFS

D) SQS

Explanation

You would use Amazon Elastic File System (EFS), which is a managed file system that scales automatically to
petabyte levels with no interruptions to applications. EFS is regional in its service and stores data across several
Availability Zones (AZs). This allows data stored using EFS to be accessed at the same time from each AZ in the
Region. You can also use AWS Direct Connect to access EFS using on-premises servers.

You would not need to use Amazon Simple Queue Service (SQS). SQS is a service that provides message
queueing. With SQS, you can allow messages between several components of applications and microservices to be
sent, received, and stored. This approach allows for components to be decoupled, thus removing any single points
of failure from an application. This way, each component of the application can perform independently and with
greater efficiency.

You would not use instance stores for this scenario. An instance store is a block-level storage option that provides
temporary data storage for an EC2 instance while the instance is running. The instance store is part of the physical
storage on the host on which an EC2 instance is running. When the instance is terminated or restarted on a different
physical host, then the associated instance store is lost.

You cannot attach Amazon S3 buckets to EC2 instances. Amazon S3 allows you to store an unlimited number of
objects with each object being up to 5 TB in size. These objects are stored in S3 buckets. The files stored on
Amazon S3 can be of any type, including images, videos, and documents.

Another block storage technology you should know is Amazon Elastic Block Store (EBS). EBS allows data to be
stored in EBS volumes that are virtual hard drives in a single AZ. EBS provides block-level storage. You use EBS
volumes with EC2 instances where you need persistent data storage. When you create an EBS volume, you need to
first define its size and type and provision it. To attach an EC2 instance to an EBS volume, both the instance and the
volume need to be in the same AZ. You can back up EBS data by taking snapshots, which are incremental backups.
The first backup creates a copy of the entire volume. Then the next backups only save the blocks of data changed
since the last snapshot.
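
As a brief illustration (the volume ID is a placeholder), taking an incremental EBS snapshot with boto3 looks roughly like this:

    import boto3

    ec2 = boto3.client("ec2")
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly backup of the database volume",
    )
    print(snapshot["SnapshotId"], snapshot["State"])    # the snapshot completes asynchronously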

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS storage services

References:

Logging IP traffic using VPC Flow Logs – Amazon Virtual Private Cloud

Question #42 of 65 Question ID: 1603258

You need to find ways to ensure accountability across your AWS Cloud deployment. Which of these services will
you use to check all user activity and API calls made on the AWS system?

A) Config

B) CloudWatch

C) CloudTrail

D) Trusted Advisor

Explanation

You will use CloudTrail. CloudTrail keeps track of all API calls made in an AWS account and records the API
caller’s identity and source IP address, the time of the call, and other key information. CloudTrail typically delivers an
event within about 15 minutes of the API call being made. API calls are used in AWS for provisioning and managing resources.
You can filter API calls in CloudTrail based on the date and time of the call, the user making the call, and the
resources accessed by the call.
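
For instance (the username is hypothetical), recent API activity for a single user could be pulled from the CloudTrail event history with boto3 along these lines:

    from datetime import datetime, timedelta
    import boto3

    cloudtrail = boto3.client("cloudtrail")
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],
        StartTime=datetime.utcnow() - timedelta(days=7),
        EndTime=datetime.utcnow(),
    )
    for event in events["Events"]:
        print(event["EventTime"], event["EventName"], event["EventSource"])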

CloudTrail Insights is a CloudTrail feature that can be enabled for detecting unusual activity inside an AWS account.
An example of such activity could be an unusually large number of EC2 instances launched in an account.

CloudTrail can save security logs indefinitely inside of S3 buckets, and those logs can then be further secured using
tamper-proof technology like S3 Glacier Vault Lock. This is useful for situations where an auditor needs to verify that
your company’s database cannot be accessed from outside the company.

Monitoring is the process of watching a system, gathering its metrics, and analyzing them for taking key decisions
and actions. Monitoring on AWS helps ensure that AWS resources are being utilized correctly and that the system is
not running into any issues.


You will not use CloudWatch. AWS CloudWatch allows you to monitor the AWS system in real-time by monitoring
and tracking various metrics related to resources. A metric could be the CPU utilization for an EC2 instance. You
can also create a threshold for a metric and trigger an alert and an action when the metric reaches the threshold.
The CloudWatch dashboard displays key system metrics graphically in real-time, offering an updated view of the
AWS system. This helps you get a clear analysis of how your system’s infrastructure, applications, and services are
doing. This way, you can use metrics and logs to track and resolve issues swiftly, thus improving your Total Cost of
Ownership (TCO) and reducing Mean Time to Recovery (MTTR). This way, your developers can focus more on
creating business value. CloudWatch also helps you optimize resources and applications by analyzing their overall
usage. CloudWatch alarms allow you to trigger an action when the value of a metric goes over or under a preset
threshold. For example, if an EC2 instance has had low CPU utilization for a specified period, then a CloudWatch
alarm can stop the instance to save on unnecessary charges for unused instances.
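
A sketch of that exact idea follows (the instance ID is a placeholder, and the stop action ARN shown is the built-in EC2 action for the alarm's Region):

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="stop-idle-instance",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,                     # evaluate 5-minute windows...
        EvaluationPeriods=12,           # ...over one hour
        Threshold=5.0,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],   # stop the idle instance
    )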

You will not use Config. AWS Config provides you with all the details related to the configuration of your AWS
resources within your AWS account. These resources include Amazon EC2 instances, EBS volumes, security
groups, or virtual private clouds (VPCs). AWS Config allows you to see how resources are related and how their
configurations have changed.

You will not use Trusted Advisor. AWS Trusted Advisor is a tool that indicates how you should provision your AWS
resources as per AWS best practices. It performs real-time monitoring of your AWS resources and recommends
actions accordingly.

Objective:
Security and Compliance

Sub-Objective:
Understand AWS Cloud security, governance, and compliance concepts

References:

What Is AWS CloudTrail? - AWS CloudTrail (amazon.com)

Question #43 of 65 Question ID: 1603237

Which types of instances would an Auto Scaling group use? (Choose all that apply.)

A) Instances that are stopped

B) Instances classified as on-premises

C) Instances that are running and not part of an Auto Scaling group


D) Instances classified as On-Demand

E) Instances classified as Spot

Explanation

When you implement an Auto Scaling group, you have two primary methods for managing additional Amazon
Elastic Compute Cloud (EC2) instances for your workload: Spot Instances and On-Demand Instances. Spot
Instances are requests for spare, otherwise unused EC2 capacity, which can help reduce the overall price of your
environment because they are offered at a steep discount compared to On-Demand pricing. On-Demand Instances
are charged per second (or per hour) for the resources they provide to your AWS environment.
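
A mixed instances policy is how an Auto Scaling group combines the two purchase options; a rough sketch with boto3 (all names and values below are hypothetical) might look like this:

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="mixed-purchase-asg",
        MinSize=2,
        MaxSize=10,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
        MixedInstancesPolicy={
            "LaunchTemplate": {
                "LaunchTemplateSpecification": {
                    "LaunchTemplateName": "web-app-template",
                    "Version": "$Latest",
                },
                "Overrides": [{"InstanceType": "m5.large"}, {"InstanceType": "m5a.large"}],
            },
            "InstancesDistribution": {
                "OnDemandBaseCapacity": 2,                   # always keep two On-Demand instances
                "OnDemandPercentageAboveBaseCapacity": 50,   # split the remainder between On-Demand and Spot
                "SpotAllocationStrategy": "capacity-optimized",
            },
        },
    )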

When you use an Auto Scaling group to manage your instance, you cannot use an instance that is running or one
that is used by another AWS resource. If an instance is stopped, the Auto Scaling group marks the instance as
unhealthy and launches a new instance.

Auto Scaling groups do not use on-premises instances because they are not considered AWS resources.

Objective:
Cloud Concepts

Sub-Objective:
Understand concepts of cloud economics

References:

Instance purchasing options - Amazon Elastic Compute Cloud

Question #44 of 65 Question ID: 1603280

What is an Amazon best practice for securing applications running within an Amazon EC2 infrastructure?

A) Run services on the system that might be used later on to save on startup
costs to the system.

B) Keep in place vendor provided defaults when creating new AMIs to prevent
difficulties in application installs.

C) Enable scripts used on a system ahead of predetermined maintenance tasks.

D) Disable or dispose of unused user accounts.

Explanation

Amazon best practices include disabling or disposing of user accounts that are no longer needed. They also
describe the importance of changing all vendor-supplied defaults when creating new Amazon Machine Images
(AMIs). It is also important to use only protocols, services, and daemons as they are needed. Finally, they
recommend deleting scripts, software features, miscellaneous functions, unused Amazon Elastic Block Store (EBS)
volumes, and even web servers that are not being utilized appropriately.
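
A small sketch of the "disable unused accounts" practice (the user name is hypothetical): deactivating the access keys of a departed user with boto3 rather than leaving them active.

    import boto3

    iam = boto3.client("iam")
    user = "former-contractor"
    for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
        # Disable credentials that are no longer needed; delete them once you are sure.
        iam.update_access_key(UserName=user, AccessKeyId=key["AccessKeyId"], Status="Inactive")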

Run services on the system that might be used later on to save on startup costs to the system is not correct.
Amazon recommends that you run only the services, protocols, and daemons that are actually required.

Enable scripts used on a system ahead of predetermined maintenance tasks is not a correct choice. Amazon
recommends that you disable or remove scripts, software features, EBS volumes, and even Amazon Elastic
Compute Cloud (EC2) instances that are not being used.

Keep in place vendor provided defaults when creating new AMIs to prevent difficulties in application installs is not
correct. Amazon recommends that you change all vendor-related defaults when creating new AMIs.

Objective:
Security and Compliance

Sub-Objective:
Identify components and resources for security

References:

Regularly review and remove unused users, roles, permissions, policies, and credentials

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html?secd_iam7#remove-credentials

Question #45 of 65 Question ID: 1603228

Which AWS service should you use to reduce misreporting and non-compliance risk, save costs on cloud
infrastructure, and ensure non-compliant server usage is stopped before it occurs?

A) Amazon Forecast

B) Amazon Rekognition

C) AWS X-Ray

D) AWS License Manager

Explanation

You will use AWS License Manager which is an AWS service that simplifies the management of software licenses
from multiple vendors including IBM, Oracle, SAP, and Microsoft through a centralized system. This covers both
your AWS and on-premises systems. AWS License Manager also lets you change license types between bring-
your-own-license (BYOL) and AWS provided licenses with your licensed media. Using BYOL opportunities can help
you save costs on cloud infrastructure.

You will not use Amazon Rekognition. Amazon Rekognition allows you to have video and image analysis capabilities
in your applications.

You will not use Amazon Forecast. Amazon Forecast provides accurate time-series forecasts using ML and
statistical algorithms.

You will not use AWS X-Ray. AWS X-Ray provides detailed data on requests that your application serves.

Objective:
Cloud Concepts

Sub-Objective:
Understand concepts of cloud economics

References:

AWS > Documentation > AWS License Manager > What is AWS License Manager?

https://docs.aws.amazon.com/license-manager/latest/userguide/license-manager.html

Question #46 of 65 Question ID: 1603310

What DynamoDB feature is an in-memory caching component that delivers microsecond responses for its front-end
applications?

A) Amazon ElastiCache for Redis

B) Amazon DynamoDB Streams

C) Cross-Region replication

D) DAX

Explanation

Amazon DynamoDB Accelerator (DAX) is an AWS resource that implements an in-memory acceleration component
used by the DynamoDB application programming interface (API), delivering up to ten times faster response times.
There is no advanced configuration; you can just enable this feature from the AWS Management Console. This
caching feature lets you dynamically scale to the demand of the application workload. DAX leverages AWS Identity
and Access Management (IAM) for security purposes, such as user access and resource limitations. DAX is also
configured to use Amazon CloudWatch for monitoring and CloudTrail for auditing logs for analysis.

Amazon ElastiCache for Redis is a web service that allows you to manage and scale caching components or data
stores within the cloud. This service supports data across 15 shards, if needed. This AWS resource works with data
stores and not DynamoDB tables for caching.

Amazon DynamoDB Streams is a feature within a DynamoDB that allows you to keep up with the most recent items
within the last day or simply the last change to an item within a table. This is not an in-memory caching tool.

Cross-Region replication is a feature that automatically replicates data within an AWS Region to another AWS
Region dynamically and creates a global distribution of sustained data. This is not a caching feature but a replication
feature.

Objective:
Cloud Technology and Services

Sub-Objective:
Define the AWS global infrastructure

References:

Amazon DynamoDB Accelerator (DAX)

Question #47 of 65 Question ID: 1603278

What does Amazon recommend for protecting data in transit when you have a concern of accidental information
disclosure?

A) Digital signature

B) IPSec ESP

C) Encryption server side

D) Amazon S3 Lifecycles

Explanation

Amazon recommends that data in transit should be encrypted using Secure Sockets Layer/Transport Layer Security
(SSL/TLS) or IPSec ESP. Amazon supports IPSec, which is Internet Protocol Security, used in combination with a
virtual private network (VPN) network. ESP stands for Encapsulating Security Payload, which is a protocol that can
protect data integrity and create authentication for network packets, or what is referred to as payloads, that can be
encrypted/decrypted. You could also use both forms of encryption (SSL/TLS and IPSec ESP). When it comes to
accidental disclosure of private information, you should always limit access. Amazon describes the concern of
having confidential information touch a public network, which should always have a basic level of encryption.

You would not use Amazon Simple Storage Service (S3) Lifecycles in this scenario. You can add rules to the
configuration of an S3 Lifecycle which makes S3 move objects from one storage class to another. An S3 Lifecycle
configuration is made using an eXtensible Markup Language (XML) file that contains rules and actions to be
performed on S3 objects in the object lifecycle. Class transitions can be done through a waterfall model, which
means that objects stored with higher storage classes can be transitioned to lower storage tiers.

Amazon uses encryption server side to encrypt customer data on the physical server, and the entire process is
holistic or transparent because it is executed on the server side and not the client side. Client-side encryption is
handled by the end-user or customer. Again, this is encryption for data at rest.

A digital signature is a way to sign a digital document using encryption that involves the use of digital codes.
Amazon uses AWS Signature Version 4, which uses an access secret key that will then be used to create a signing
key. This new key or the signing key can only be used within a uniquely identified AWS Region. It is not concerned
with protecting data in transit.

Objective:
Security and Compliance

Sub-Objective:
Identify components and resources for security

References:

What is IPSec?

https://aws.amazon.com/what-is/ipsec/

Question #48 of 65 Question ID: 1603232

Which AWS framework pillar is focused on supporting compute changes in AWS adaptive technologies as
businesses evolve?

A) Cost Optimization

B) Operational Excellence

C) Performance Efficiency

D) Reliability

Explanation


Performance Efficiency is a framework pillar that supports computing resources to maintain and meet business
requirements as technologies change over time within the AWS infrastructure. The design principles of Performance
Efficiency include going global in minutes, using serverless architectures, experimenting more often, and applying
mechanical sympathy.

The five design principles of Operational Excellence in the cloud are documentation, frequent and small changes,
operations as code, refining procedures quickly, and anticipating system failure. This pillar is not focused on
performance.

Cost Optimization is focused more on cost and less on the performance aspects of the AWS infrastructure.

Reliability is focused on the stability of AWS systems and their ability to support business value with long uptimes
and durable systems. This pillar is not focused on performance but on dependability and service times.

Objective:
Cloud Concepts

Sub-Objective:
Understand concepts of cloud economics

References:

Operational Excellence Pillar - AWS Well-Architected Framework - Operational Excellence Pillar (amazon.com)

Question #49 of 65 Question ID: 1603245

As it relates to the shared responsibility model, which security option is the customer’s responsibility?

A) Facilities

B) Network infrastructure

C) Physical security of hardware

D) Amazon Machine Images (AMIs)

Explanation

Amazon’s shared responsibility model states that the customer is responsible for all aspects of creating and
managing Amazon Machine Image (AMI) components used within their respective Amazon Elastic Compute Cloud
(EC2) infrastructures. Customers are also responsible for configurations, policies, data stores, data at rest, data in
transit, applications, and operating systems.

Customers are not responsible for virtualization infrastructures, network infrastructures, facilities, or the physical
security of the hardware.

For protecting data at rest, customers can set up and configure partition, file, or volume encryption at the application
level. If you have data integrity concerns, then you can set up versioning within Amazon Simple Storage Service
(S3) and configure digital signatures or authentication encryption in other Amazon services as well. Accidental
deletes can be solved with multi-factor authentication (MFA) and versioning. Lastly, when it comes to hardware or
software availability as it relates to a disaster recovery (DR) situation, you can use replicas and data replication solutions to recover
your most critical areas.

For protecting data in transit, the customer can configure Internet Protocol Security (IPSec), Encapsulating Security
Payload (ESP) and secure sockets layer/transport layer security (SSL/TLS). They can also use X.509 certificates to
authenticate the remote end destination.

Amazon is responsible for the facilities, virtualization and network infrastructures, and for the physical security of the
hardware.

AWS is responsible for fixing flaws within the infrastructure and patching, but customers patch their own guest
applications and OSes.
AWS actively supports infrastructure devices; customers are responsible for databases, applications, and
operating systems.
AWS educates AWS employees; customers educate their own employees.

Objective:
Security and Compliance

Sub-Objective:
Understand the AWS shared responsibility model

References:

Shared Responsibility Model - Amazon Web Services (AWS)

Question #50 of 65 Question ID: 1603287

Which of these is a private cloud deployment?

A) On-premises

B) PAYG

C) Hybrid

D) Cloud-based


Explanation

A private cloud deployment is also referred to as an on-premises deployment. In an on-premises deployment, all the
infrastructure and applications that are part of the cloud or virtualization system are running on systems that are
inside a company’s on-premises data center. There is no part of this system outside of the company’s on-premises
systems or on a public cloud or network.

A cloud-based deployment or cloud native deployment includes applications that are fully functional in a cloud and
have no portions that are running on-premises. Because of this a cloud-based deployment is not a private cloud
deployment.

A hybrid cloud deployment is not a private cloud deployment because it contains some applications and
infrastructure that are based on a cloud. A hybrid cloud deployment shares infrastructure and applications between
an on-premises environment and a cloud.

Pay-As-You-Go (PAYG) is not a cloud deployment but a payment model where cloud customers only have to pay for
resources and services they actually consume.

Objective:
Cloud Technology and Services

Sub-Objective:
Define methods of deploying and operating in the AWS Cloud

References:

Types of cloud computing - Overview of Amazon Web Services

Question #51 of 65 Question ID: 1603381

You work for ABC corporation that is actively using Amazon S3 storage solutions. The company has files that are
stored using Amazon S3, but they want to save costs because a majority of their files are not used after 40 days.
However, they need the ability to recover files within a few minutes after the request to see a file. Which option
below best meets these requirements?

A) Move the data to the Amazon S3 Standard-Infrequent Access (IA) option after 40 days.

B) Enable the delete option on each bucket and recover the data as requested.

C) Move the objects to Amazon Glacier after 40 days.

D) Enable versioning and delete certain files after 40 days.


Explanation

In this case you would use a lifecycle policy that is essentially a set of rules that dictate the life of an object. You
would move the data to Amazon Simple Storage Service (S3) Standard-Infrequent Access (Standard-IA) option after
40 days. This option will minimize the costs and at the same time give you the ability to recover the files within a few
minutes of the request.
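
A lifecycle rule implementing the 40-day transition could be sketched with boto3 as follows (the bucket name is hypothetical):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="abc-corp-archive",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "standard-ia-after-40-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},      # apply to every object in the bucket
                    "Transitions": [{"Days": 40, "StorageClass": "STANDARD_IA"}],
                }
            ]
        },
    )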

You must remember that data moved to Amazon Glacier could take several hours to recover, so this would not be a
viable option for your situation.

Also, versioning does give you the ability to store specific versions of an object but could be very labor intensive if
you were managing several hundred files. Configuring a lifecycle policy to move files after so many days would save
time and would be the most efficient.

Objective:
Billing, Pricing, and Support

Sub-Objective:
Understand resources for billing, budget, and cost management

References:

Using Amazon S3 storage classes

https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

Question #52 of 65 Question ID: 1615028

You need a way for the development team to retrieve secrets such as credentials and passwords programmatically
without the need to embed them in the application code. Which of the following services should you use for this?

A) AWS Secrets Manager

B) AWS IAM Identity Center

C) AWS Step Functions

D) AWS Shield

Explanation

You would use AWS Secrets Manager which allows you to retrieve secrets such as credentials and passwords
programmatically using application programming interfaces (APIs). It removes the need to embed credentials in
application code, which also lets you rotate credentials easily using short-term instead of long-term credentials.
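
For example (the secret name and JSON keys are hypothetical), an application could fetch database credentials at runtime instead of embedding them in code:

    import json
    import boto3

    secrets = boto3.client("secretsmanager")
    secret = secrets.get_secret_value(SecretId="prod/app/db-credentials")
    credentials = json.loads(secret["SecretString"])     # secret stored as a JSON string
    db_user, db_password = credentials["username"], credentials["password"]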


You would not use AWS Identity and Access Management (IAM) Identity Center. This provides a centralized
approach to administer users and access to various AWS accounts and applications. You can provide federated
access to users using AWS IAM Identity Center. The previously used terms, AWS SSO user and SSO user, are now
referred to as workforce user and user, respectively. Users can sign in to the AWS IAM Identity Center portal using
their existing Active Directory credentials. Adding users to Active Directory groups would quickly provide access to
AWS services and accounts for new and existing users.

You would not use AWS Shield. AWS Shield is a managed service for protection from distributed denial-of-service
(DDoS) attacks and offers automatic mitigations that reduce downtime and latency to applications.

You would not use AWS Step Functions. AWS Step Functions is a workflow service that allows you to orchestrate
various AWS services and provide automation for business processes.

Objective:
Security and Compliance

Sub-Objective:
Identify AWS access management capabilities

References:

AWS > Documentation > AWS Secrets Manager > User Guide > What is AWS Secrets Manager?

Question #53 of 65 Question ID: 1615048

You want to implement security measures that:

Automatically enforce security policies across resources.
Centralize the deployment of baseline security groups for VPC protection.

Which of the following AWS services best supports these requirements?

A) AWS Resource Access Manager (AWS RAM)

B) Amazon Macie

C) Amazon Detective

D) AWS Firewall Manager

Explanation

You would use AWS Firewall Manager which is a service for security management and enables you to perform
configuration and management of firewall rules centrally across accounts and applications in AWS Organizations.


You can enforce a set of security rules to keep new resources and applications in compliance. You can also deploy
AWS Network Firewall using Firewall Manager. AWS Network Firewall enables securing virtual networks at a large
scale. It allows for traffic filtering at the perimeter of a virtual private cloud (VPC). This lets you filter traffic to and
from a NAT or Internet gateway and over Direct Connect or a virtual private network (VPN).

You would not use Amazon Macie. Amazon Macie is a fully managed service providing data security and privacy. It
utilizes machine learning and pattern matching for monitoring and protecting sensitive data on an AWS system.
Macie provides automation for discovering sensitive data stored in Simple Storage Service (S3) buckets. This data
can include both Personally Identifiable Information (PII) and financial data.

You would not use AWS Resource Access Manager (AWS RAM). This is an AWS service that you can use for
sharing resources securely across multiple AWS accounts and within organizational units (OUs) and organizations.
AWS RAM enables resource sharing with AWS Identity and Access Management (IAM) users and roles for various
resource types, allowing you to create a resource once and then use AWS RAM to share that resource with other
accounts.

You would not use Amazon Detective. This is a service for analyzing, investigating, and identifying underlying
causes of suspicious activities. Detective collects AWS resource log data automatically and generates visualizations
using graph theory, machine learning (ML), and statistical analysis for fast and efficient investigation for security
issues.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify services from other in-scope AWS service categories

References:

AWS > Products > Security, Identity & Compliance > AWS Firewall Manager features

Question #54 of 65 Question ID: 1615025

Which AWS service should you use when you want to perform pricing for climate risk in a portfolio, reduce a
company’s carbon footprint, or align with new environmental, social, and governance (ESG) requirements?


A) AWS Outposts

B) AWS Data Exchange

C) AWS Storage Gateway

D) AWS Trusted Advisor

Explanation

You would use AWS Data Exchange. This allows you to locate and use third-party information that is related to
sustainability. It provides you access to data sets which are accessible through the Open Data Sponsorship
Program and Amazon Sustainability Data Initiative. AWS collaborates with companies for making environmental,
social, and governance (ESG), weather, satellite imagery, and air quality data accessible to clients.

You would not use AWS Trusted Advisor. AWS Trusted Advisor is a tool that indicates how you should provision
your AWS resources as per AWS best practices. It performs real-time monitoring of your AWS resources and
recommends actions accordingly.

You would not use AWS Outposts. AWS Outposts allows a company to use AWS services in their own datacenter or
company building. It creates a miniature Region in a datacenter, providing all AWS services in an isolated, private
location. This is an example of a hybrid cloud approach.

You would not use AWS Storage Gateway. AWS Storage Gateway provides secure and seamless access for on-
premises systems and applications to unlimited storage on AWS.

Objective:
Cloud Concepts

Sub-Objective:
Understand the benefits of and strategies for migration to the AWS Cloud

References:

AWS > Documentation > AWS Data Exchange > What is AWS Data Exchange?

Question #55 of 65 Question ID: 1603234

Which option is considered a cost-effective resource discussed in the AWS framework pillar Cost Optimization?

A) Appropriate provisioning

B) Using Reserved Instances only

C) Fixed sizing

D) Limiting Regions

Explanation

Appropriate provisioning means creating and maintaining only the underlying AWS resources required to support your
business needs. You must be able to manage services to support your actual capacity demands, and proper
provisioning is the key. When you provision a managed service, you have to keep that provisioning aligned with your
requirements, which shift with unforeseen changes to normal business operations, overall effort, and time.

Fixed sizing is not considered a cost-effective resource. A key purpose of utilizing AWS resources is your ability to
right-size resources and adapt them to your infrastructure as demand changes.

Using Reserved Instances only is not considered a cost-effective resource. You do not have to limit your AWS
environment to just Reserved Instances. You have three options: Reserved, Spot, and On-Demand Instances.

Limiting Regions is not considered a cost-effective resource. When you utilize AWS resources, you have the ability to
expand your business to multiple geographic Regions at any time, based on your business demand.

Objective:
Cloud Concepts

Sub-Objective:
Understand concepts of cloud economics

References:

Cost Optimization Pillar - AWS Well-Architected Framework - Cost Optimization Pillar (amazon.com)

Question #56 of 65 Question ID: 1603291

You are tasked with understanding the different sections of an IAM policy. Your boss wants to know what section of
an IAM policy manages the behaviors, such as allow or deny. What should you tell him?

A) IAM permission boundaries

B) Resources

C) Actions

D) Effects

Explanation

The Effects section of an AWS Identity and Access Management (IAM) policy determines the behavior of each policy
statement: whether it allows or denies the matching request. Because all requests are implicitly denied by default,
you have to explicitly grant the specific permissions that you need.

Actions is not correct because this section lists the specific operations a statement applies to; every AWS service
has its own group of actions. Any action that you do not explicitly allow is denied, which is a safeguard that
protects the AWS infrastructure.

Resources is not correct because this section determines which resources the specified actions apply to.

IAM permission boundaries is not correct because these set the maximum permissions an identity-based policy can
grant, preventing users from doing anything outside of that boundary.
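
To make the three elements concrete, here is a minimal sketch of an identity-based policy created with the AWS SDK
for Python (boto3). The policy name and bucket ARNs are hypothetical placeholders; Effect decides allow or deny,
Action names the operations, and Resource scopes them:

    import json
    import boto3

    # Sketch: "Effect" decides allow/deny, "Action" lists operations, "Resource" scopes them.
    # The policy name and bucket ARNs are hypothetical placeholders.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-bucket",
                    "arn:aws:s3:::example-bucket/*",
                ],
            }
        ],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="ExampleBucketReadOnly",
        PolicyDocument=json.dumps(policy_document),
    )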

Objective:
Cloud Technology and Services

Sub-Objective:
Define methods of deploying and operating in the AWS Cloud

References:

Policies and permissions in IAM - AWS Identity and Access Management (amazon.com)

Question #57 of 65 Question ID: 1603328

Your company has several AWS Cloud accounts and VPCs in all of them. You need to integrate several VPCs into a
much larger network. Which two connectivity options can you use for this? (Choose two.)

A) Software Site-to-Site VPN

B) Snowmobile

C) AWS Transit Gateway

D) Global tables

E) Storage Gateway

Explanation

You can use AWS Transit Gateway or a Software Site-to-Site virtual private network (VPN) for integrating multiple
virtual private clouds (VPCs) into a larger network. The best way to achieve connectivity between VPCs is to
ensure you use IP address ranges that do not overlap. For this, you need to use a unique classless inter-domain
routing (CIDR) range for each VPC. You can use the following design options when creating VPC to VPC
connectivity:

VPC peering – AWS provides network connectivity between two VPCs.
AWS Transit Gateway – AWS provides regional router connections between VPCs (see the sketch after this list).
Software Site-to-Site VPN – VPN connections between VPCs using software appliances.
Software VPN-to-AWS Managed VPN – Connectivity between VPCs through software appliance to VPN connections.
AWS Managed VPN – Customer-managed VPC-to-VPC routing using IPSec VPN connections.
AWS PrivateLink – AWS uses interface endpoints to provide network connectivity between two VPCs.
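
A minimal sketch of the AWS Transit Gateway option using the AWS SDK for Python (boto3): the transit gateway is
created once and each VPC is then attached to it. The VPC and subnet IDs are hypothetical placeholders, and a real
deployment would wait for the gateway to become available before attaching VPCs.

    import boto3

    # Sketch: create a Transit Gateway and attach one VPC to it (repeat per VPC).
    # The VPC and subnet IDs below are hypothetical placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    tgw = ec2.create_transit_gateway(Description="Hub for multi-VPC network")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # In practice, wait until the transit gateway state is "available" before attaching.
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
    )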

You will not use global tables. DynamoDB global tables provide a multi-active database that is multi-regional and
fully managed. Global tables use the DynamoDB global architecture to provide fast local read and write performance
for very large scaled global systems. Global tables automatically replicate data across your selection of AWS
Regions. This helps resolve any update conflicts and boosts the high availability of applications. Global tables can
be configured using the AWS Management Console or command line interface (CLI).

You will not use AWS Storage Gateway. An AWS Storage Gateway is a technology for providing secure and
seamless access for on-premises systems and applications to unlimited storage on AWS using Amazon S3, Tape
Library, and Amazon FSx. Storage Gateway can be accessed by endpoints like Amazon VPC and the Internet.
Storage Gateway provides low latency access to unlimited data storage while ensuring agility and security through
the AWS Cloud. It also supports compliance needs through encryption and logging audit data. It also provides write-
once, read-many (WORM) storage for compliance needs.

Snowmobile is a data migration technology from AWS and is unsuitable for this scenario. AWS offers various
devices as part of its Snow family for migrating data in and out of AWS.

The Snow family moves data more securely and with higher throughput than a typical internet transfer. The Snow family includes:

Snowcone – Ideal for edge computing and data transfer. Its specifications are: 8 TB of storage, 2 CPUs, and 4
GB of memory.
Snowball Edge – Ideal for large scale migration of data, workflows that need data transfer, and high-capacity
local computing. It has two variations:
Storage Optimized – 80 TB of HDD for block and Amazon Simple Storage Service (S3) storage, 1 TB of
SATA SSD for block volumes, and 40 vCPUs with 80 GiB of memory for EC2 instances.
Compute Optimized – 42 TB of Amazon Elastic Block Store (EBS) or Amazon S3 storage, 7.68 TB of SSD
storage for EBS-compatible block volumes, and 52 vCPUs, 208 GiB of memory, and an NVIDIA Tesla V100
GPU for EC2 instances.
Snowmobile – Provides 100 PBs of data transfer via a semi-trailer truck with a 45-foot shipping container.

Legend: HDD: Hard Disk Drive, GB: Gigabyte, PB: Petabyte, TB: Terabyte, SSD: Solid State Drive, CPU: Central
Processing Unit, vCPU: Virtual Central Processing Unit, GiB: Gibibyte (1024^3 bytes), GPU: Graphics
Processing Unit.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS network services

References:

Amazon VPC-to-Amazon VPC connectivity options – Amazon Virtual Private Cloud Connectivity Options

Question #58 of 65 Question ID: 1603230

What are two key advantages of using AWS Cloud technology? (Choose two.)

A) Benefit from smaller user aggregates

B) Trade upfront expenses for variable expenses

C) Only pay for resources in advance

D) Go global in minutes

E) Guess resource capacity

Explanation

Using the AWS Cloud, you can trade upfront expenses for variable expenses and go global in minutes with your
applications.

You do not need to guess resource capacity or pay for resources in advance: payment is made only for the resources
you consume, and resources can be dynamically added or removed as required to meet demand. You do not benefit
from smaller user aggregates, because economies of scale come from large user aggregates.

Cloud technology allows you to trade upfront expenses in favor of variable expenses. The costs of physical servers
and data centers are examples of upfront expenses. Variable expenses through cloud technology ensure a
company does not have to make investments in any technology before using them. By shifting its time and costs
onto revenue-generating activities, a company can grow its business value.

Cloud technology allows a business to benefit from economies of scale. This ensures low pay-as-you-go prices due
to usage by millions of customers on the cloud.

In addition, using a cloud-based solution removes the need to guess capacity for the IT resources that a business
needs for its workloads. Administrators can provision resources on demand in the cloud to ensure resources are
neither in excess nor limited in their capacity.

Spending money on running data centers can be a huge drain on a company’s resources. AWS Cloud allows you to
focus on growing your business value and leave out the complexity and overheads associated with managing
infrastructure.

Also, by providing low latency applications to customers regardless of their location, the cloud ensures that a
business can go global in minutes.

Objective:
Cloud Concepts

Sub-Objective:
Understand concepts of cloud economics

References:

Six advantages of cloud computing - Overview of Amazon Web Services

Question #59 of 65 Question ID: 1603363

Your company needs to maintain reserve EC2 instances in multiple Availability Zones and Regions to ensure
services during a failover event. Which of these systems should you use for this?

A) On-Demand Capacity Reservations

B) On-Demand Instances

C) Regional Reserved Instances

D) Savings Plans

Explanation

You will use On-Demand Capacity Reservations. This allows you to reserve Amazon Elastic Compute Cloud (EC2)
compute capacity in an Availability Zone for any length of time. This makes it ideal for business-critical workloads
that need assurance for long and short term compute capacity. Capacity Reservations are useful for business-
critical events, regulatory requirements, and disaster recovery situations.
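
A minimal sketch using the AWS SDK for Python (boto3): the instance type, platform, Availability Zone, and count
below are hypothetical values, and the call would be repeated for each Availability Zone or Region that must hold
reserve capacity:

    import boto3

    # Sketch: reserve On-Demand capacity for three instances in one Availability Zone.
    # Repeat per Availability Zone/Region needed for failover; all values are hypothetical.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    reservation = ec2.create_capacity_reservation(
        InstanceType="m5.large",
        InstancePlatform="Linux/UNIX",
        AvailabilityZone="us-east-1a",
        InstanceCount=3,
        EndDateType="unlimited",   # hold the capacity until explicitly cancelled
    )
    print(reservation["CapacityReservation"]["CapacityReservationId"])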

Regional Reserved Instances and Savings Plans are not recommended options, as neither of them reserves capacity,
which is what is required in the scenario. Both options require a fixed commitment of one or three years. All
accounts inside an organization can take advantage of the hourly cost savings provided by Reserved Instances bought
by any other account, because the consolidated billing feature of AWS Organizations treats all accounts inside an
organization as one account.

On-Demand Instances are not a suitable choice for this scenario because there is a risk that On-Demand capacity
will not be available when it is needed due to capacity constraints within AWS.

Objective:
Billing, Pricing, and Support

Sub-Objective:
Compare AWS pricing models

References:

AWS > Documentation > Amazon EC2 > User Guide for Linux Instances > On-Demand Capacity Reservations

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html#capacity-reservations-
differences

Question #60 of 65 Question ID: 1603214

You are guiding an application development team to design a new system that needs to be high performing and
resilient. Which architecture type should you implement to ensure that the failure of a single component does not
bring the whole system down?

A) Monolithic

B) Tightly coupled

C) Single-threaded

D) Microservices-based

Explanation

You should implement a microservices-based system architecture. This ensures that all the components of the
system are loosely coupled and failure of an individual system component will not stop the entire system from
functioning. Such systems utilize messaging options between components using message queues. One component
leaves a message in a queue for another component, and when the recipient component is free to take in a new
message, it can retrieve it from the queue. This way, none of the components are in an infinite waiting state, and so
the system continues to function even if individual components face delays or issues. Amazon Simple Queue
Service (SQS) helps create message queues.
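
A minimal sketch of this loose coupling with the AWS SDK for Python (boto3); the queue name and message body are
hypothetical, and in a real system the producer and consumer would run as separate components:

    import boto3

    # Sketch: one component enqueues work, another retrieves it when ready.
    # The queue name and message body are hypothetical placeholders.
    sqs = boto3.resource("sqs", region_name="us-east-1")
    queue = sqs.create_queue(QueueName="orders-example")

    # Producer: drop a message on the queue and move on; no waiting on the consumer.
    queue.send_message(MessageBody='{"order_id": 42, "action": "process"}')

    # Consumer: pull messages when capacity allows, then delete them once handled.
    for message in queue.receive_messages(WaitTimeSeconds=10, MaxNumberOfMessages=1):
        print("Processing:", message.body)
        message.delete()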

You will not create a monolithic, tightly coupled, or single-threaded system architecture. None of these provide fault
tolerance as required in the scenario. A monolithic system architecture is part of the classic mode of software
development and features all components within a unified code base. This causes issues with code maintenance
and scaling. A tightly coupled architecture is where each component of the system is dependent on other
components for functioning. If any component fails or encounters a delay, then the entire system can be affected.
Single-threaded systems allow only a single command to be processed at a time, whereas a multi-threaded system
allows several sections of a program to be executed concurrently.

Designing a robust system architecture on AWS Cloud involves designing for failure. This means that you need to
design your system architecture to ensure that the system will still be operational even if multiple components of the
architecture fail. This ensures fault tolerance and high availability.

Elasticity on a cloud system allows for dynamic provisioning of resources to meet varying business needs. That kind
of provisioning is not possible with a comparable on-premises solution. Cloud technology can allow thousands of
servers to be provisioned instantly if required or deallocated if not needed.

Cloud technology also allows you to think in parallel, which means that you can create applications that utilize
parallelization technologies and also use automation to handle mundane time-consuming tasks. Parallelization
involves using multi-threading within an application so that requests for retrieving and storing data run in parallel
across concurrent threads. This frees up resources and teams to create applications that deliver high value for your
business.

Objective:
Cloud Concepts

Sub-Objective:
Identify design principles of the AWS Cloud

References:

Microservices - Implementing Microservices on AWS (amazon.com)

Question #61 of 65 Question ID: 1603322

Which of the following is considered a fully functional and heavily used NoSQL database as a service (DBaaS)?

A) Amazon DynamoDB

B) Amazon SNS

C) Amazon ElastiCache

D) Amazon Relational Database Service (RDS)

Explanation

Amazon DynamoDB is an efficient and flexible solution for using a NoSQL database engine for applications that
require millisecond latency while processing large amounts of data. This type of database supports key-value and
document store models and offers scalable options for increasing capacity. You can also use AWS Lambda
functions to create triggers that may be needed to process high-level business logic.
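
A minimal sketch of key-value access with the AWS SDK for Python (boto3); the table name, key, and attributes are
hypothetical, and the table is assumed to already exist with order_id as its partition key:

    import boto3

    # Sketch: basic key-value writes and reads against an existing DynamoDB table.
    # The table "Orders" and its "order_id" partition key are hypothetical placeholders.
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("Orders")

    # Write a document-style item.
    table.put_item(Item={"order_id": "42", "status": "NEW", "total_cents": 1999})

    # Read it back by its key.
    response = table.get_item(Key={"order_id": "42"})
    print(response.get("Item"))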

Amazon ElastiCache is not a NoSQL database but is a web service offered by AWS that allows you to configure and
manage in-memory cache environments for your applications. You can deploy and manage multiple distributed cache
environments easily and effectively.

Amazon Simple Notification Service (SNS) is not a NoSQL database but is a publish/subscribe messaging service that
can deliver notifications to subscribers through channels such as email, SMS, and mobile push notifications.

Amazon Relational Database Service (RDS) is a relational database service, not a NoSQL database, within Amazon
cloud services that allows you to administer a scalable database within the cloud while being cost effective with
proactive maintenance options.

Objective:
Cloud Technology and Services

Sub-Objective:
Identify AWS database services

References:

What is Amazon DynamoDB? - Amazon DynamoDB

Question #62 of 65 Question ID: 1603410

Which AWS framework pillar is built off of protecting data, systems, and assets, while increasing business value
using established mitigation strategies?

A) Cost Optimization

B) Operational Excellence

C) Security

D) Reliability

Explanation

Security is one of the five key pillars of the AWS Well-Architected Framework. The five key areas within the Security
pillar are: incident response, data protection, infrastructure protection, detective controls, and identity and access
management.

The five design principles of Operational Excellence in the cloud are documentation, frequent and small changes,
operations as code, refining procedures frequently, and anticipating system failure. A good example is anticipating
system failure: when you test for failure, you can validate corrective procedures to make sure that the system
remains functional and downtime approaches zero. However, this pillar does not cover the security aspects within
AWS.

The Reliability pillar is focused on the stability of AWS systems and their ability to support business value with long
uptimes and durable systems. This pillar is not focused on protecting data and systems as the Security pillar is.

The Cost Optimization pillar is focused more on cost and less on the security aspects of the AWS infrastructure.

Objective:
Billing, Pricing, and Support

Sub-Objective:
Identify AWS technical resources and AWS Support options

References:

Security Pillar - AWS Well-Architected Framework - Security Pillar (amazon.com)

Question #63 of 65 Question ID: 1603256

Which of these will you use to achieve regulatory compliance on AWS by enforcing encryption for data at rest and in
transit?

A) AWS Compliance Center

B) Security groups

C) AWS Artifact

D) AWS IAM

Explanation

For regulatory compliance and organizational needs, your company may need its data to be encrypted at rest or in
transit. You can use AWS Identity and Access Management (IAM) identity-based policies to ensure that Amazon file
system resources, like Elastic File System (EFS), have all data encrypted at rest. Similarly, you can use IAM policies
for enforcing encryption of data in transit. This forces Network File System (NFS) clients to use Transport Layer
Security (TLS) when making connections to EFS.
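
One common pattern for the in-transit requirement (a sketch, with a hypothetical account number and file system ID)
is an EFS file system policy that denies any access made without TLS, applied here with the AWS SDK for Python
(boto3):

    import json
    import boto3

    # Sketch: deny any access to an EFS file system that is not made over TLS.
    # The account number and file system ID are hypothetical placeholders.
    efs = boto3.client("efs", region_name="us-east-1")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyUnencryptedTransport",
                "Effect": "Deny",
                "Principal": {"AWS": "*"},
                "Action": "*",
                "Resource": "arn:aws:elasticfilesystem:us-east-1:111111111111:file-system/fs-0123456789abcdef0",
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }

    efs.put_file_system_policy(
        FileSystemId="fs-0123456789abcdef0",
        Policy=json.dumps(policy),
    )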

Neither AWS Artifact nor AWS Compliance Center allows you to enforce encryption of data at rest or in transit.
These are ways to find reports and information related to compliance. As a customer, you can know more about
regulatory compliance using AWS by accessing AWS Artifact. AWS Compliance Center provides information related
to compliance in one location. This includes services that enable compliance as well as whitepapers on compliance
and risk security. These whitepapers provide details of security compliance with AWS.

You will not use security groups for achieving regulatory compliance. Security groups act as a virtual firewall and
protect Amazon Elastic Compute Cloud (EC2) Instances by performing stateful packet filtering.

Objective:
Security and Compliance

Sub-Objective:
Understand AWS Cloud security, governance, and compliance concepts

References:

Enforcing Encryption of Data at Rest - Encrypting File Data with Amazon Elastic File System

Question #64 of 65 Question ID: 1603202

Which pillar of the AWS Well-Architected Framework covers the capability for effective running of workloads and
gaining insight into their operations?

A) Performance Efficiency

B) Reliability

C) Operational Excellence

D) Security

Explanation

The Operational Excellence pillar covers effectively running workloads and gaining insight into their operations. It also
covers process improvements that support the workloads so that business value can be delivered. This pillar
recommends making small reversible changes, using operations as code, making documentation, and being aware
of possible failures that can happen. An example is using deployment pipelines for automating the process of
making changes.

The Reliability pillar is related to a workload doing its functions correctly and with consistency. It recommends ways
to recover from infrastructural issues, gain computing resources for changing demand, and handle any issues
related to misconfiguration or network loss. It covers concepts related to recovery planning, like dealing with
DynamoDB issues or Amazon Elastic Compute Cloud (EC2) node losses.

The Performance Efficiency pillar is related to the efficient usage of computing resources so that system
requirements can be met. An example is choosing the right EC2 instance type per memory needs and workload
requirements. It also covers how this efficiency can be maintained as computing demands and technologies change.
It recommends designing serverless architectures and systems that can be swiftly deployed globally.

The Security pillar is related to ensuring workload security using the relevant cloud technologies. This pillar covers
asset protection, including data and systems, for a company. It recommends automating security best practices and
protecting data at rest and in transit. An example is using data encryption to ensure data confidentiality and integrity.

There is a fifth pillar of the AWS Well-Architected Framework: Cost Optimization. This pillar covers the capability to
operate systems in a way that delivers business value in the most economical way. It recommends analyzing
expenditure and attributing it to specific resources and cost centers. It also recommends using a consumption model
and managed services to reduce the total cost of ownership (TCO). An example of this pillar is ensuring that EC2
server sizes are selected to match actual computing needs.

To summarize, there are five pillars of the AWS Well-Architected Framework:

Operational Excellence
Security
Performance Efficiency
Reliability
Cost Optimization

The Framework is also available as the AWS Well-Architected Tool, a self-service tool accessible from the AWS
Management Console.

Objective:
Cloud Concepts

Sub-Objective:
Define the benefits of the AWS Cloud

References:

AWS Well-Architected Framework - AWS Well-Architected Framework (amazon.com)

Question #65 of 65 Question ID: 1603275

You work for a company that has several AWS resources, consisting of seven Amazon EC2 instances that
support several back-end databases. You are under constant pressure from the security department to make sure
that these EC2 instances comply with the company’s best security practices and stay within the company’s strict
compliance rules.

Which of the following Amazon resources would you use to meet this requirement?

A) Amazon CloudFront

B) Amazon Inspector

C) AWS Systems Manager

D) Dynamic Scaling

Explanation

Amazon Inspector is an AWS resource that is used to assess applications for vulnerabilities, abnormal security
exposures, and deviations from a core of best practices. This tool allows you to automate security assessments as
you develop your AWS IT infrastructure. This tool also offers an agent component that will monitor the behavior of
the Amazon Elastic Compute Cloud (EC2) instance, which includes monitoring of file systems, process activities,
and networking components.
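
As an illustration only, here is a sketch that uses the newer Amazon Inspector (Inspector2) API rather than Inspector
Classic and assumes Inspector is already enabled for the account; the field names follow the current API but should
be verified against the service documentation:

    import boto3

    # Sketch: list recent Amazon Inspector (v2) findings so EC2 vulnerabilities can be reviewed.
    # Assumes Inspector is already enabled for the account.
    inspector = boto3.client("inspector2", region_name="us-east-1")

    response = inspector.list_findings(maxResults=10)
    for finding in response.get("findings", []):
        print(finding["severity"], "-", finding["title"])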

Amazon CloudFront is not correct because this Amazon resource is used for distributing content within the AWS
infrastructure. It is not used for security exposures.

Dynamic scaling is used for adding EC2 instances to or removing them from your AWS environment based on the
demands of the system. It does not provide security analysis.

AWS Systems Manager is not correct because this is a resource used for managing and patching on-premises and
AWS operating systems.

Objective:
Security and Compliance

Sub-Objective:
Identify components and resources for security

References:

What is Amazon Inspector Classic? - Amazon Inspector
