
Welcome to download the Newest 2passeasy SAP-C02 dumps

https://www.2passeasy.com/dumps/SAP-C02/ (300 New Questions)

Exam Questions SAP-C02


AWS Certified Solutions Architect - Professional

https://www.2passeasy.com/dumps/SAP-C02/

Passing Certification Exams Made Easy visit - https://www.2PassEasy.com



NEW QUESTION 1
- (Exam Topic 1)
A company wants to deploy an AWS WAF solution to manage AWS WAF rules across multiple AWS accounts. The accounts are managed under different OUs in
AWS Organizations.
Administrators must be able to add or remove accounts or OUs from managed AWS WAF rule sets as needed. Administrators also must have the ability to automatically update and remediate noncompliant AWS WAF rules in all accounts.
Which solution meets these requirements with the LEAST amount of operational overhead?

A. Use AWS Firewall Manager to manage AWS WAF rules across accounts in the organization. Use an AWS Systems Manager Parameter Store parameter to store the account numbers and OUs to manage. Update the parameter as needed to add or remove accounts or OUs. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to identify any changes to the parameter and to invoke an AWS Lambda function to update the security policy in the Firewall Manager administrative account.
B. Deploy an organization-wide AWS Config rule that requires all resources in the selected OUs to associate the AWS WAF rules. Deploy automated remediation actions by using AWS Lambda to fix noncompliant resources. Deploy AWS WAF rules by using an AWS CloudFormation stack set to target the same OUs where the AWS Config rule is applied.
C. Create AWS WAF rules in the management account of the organization. Use AWS Lambda environment variables to store the account numbers and OUs to manage. Update the environment variables as needed to add or remove accounts or OUs. Create cross-account IAM roles in the member accounts. Assume the roles by using AWS Security Token Service (AWS STS) in the Lambda function to create and update AWS WAF rules in the member accounts.
D. Use AWS Control Tower to manage AWS WAF rules across accounts in the organization. Use AWS Key Management Service (AWS KMS) to store the account numbers and OUs to manage. Update AWS KMS as needed to add or remove accounts or OUs. Create IAM users in the member accounts. Allow AWS Control Tower in the management account to use the access key and secret access key to create and update AWS WAF rules in the member accounts.

Answer: A
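
The Firewall Manager approach has a Lambda function translate the Parameter Store value into the scope (include map) of the Firewall Manager security policy. A minimal Python sketch of that translation step; the JSON parameter format and the function name are assumptions for illustration, not part of the question:

```python
import json

def build_include_map(parameter_value: str) -> dict:
    """Turn an assumed Parameter Store value such as
    '{"accounts": [...], "orgUnits": [...]}' into the IncludeMap
    structure that an AWS Firewall Manager policy expects."""
    scope = json.loads(parameter_value)
    include_map = {}
    if scope.get("accounts"):
        include_map["ACCOUNT"] = scope["accounts"]
    if scope.get("orgUnits"):
        include_map["ORG_UNIT"] = scope["orgUnits"]
    return include_map

param = '{"accounts": ["111111111111"], "orgUnits": ["ou-ab12-cdef3456"]}'
print(build_include_map(param))
# {'ACCOUNT': ['111111111111'], 'ORG_UNIT': ['ou-ab12-cdef3456']}
```

An EventBridge rule on Parameter Store change events would invoke the function, which would then pass the resulting include map to `fms.put_policy` in the Firewall Manager administrator account.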

NEW QUESTION 2
- (Exam Topic 1)
A company is serving files to its customers through an SFTP server that is accessible over the internet. The SFTP server is running on a single Amazon EC2 instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for authentication. The EC2 instance also has an attached security group that allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to customers who access files. The solution must not change the way customers connect.
Which solution will meet these requirements?

A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the S3 bucket.
D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.

Answer: B

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-type/
https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html
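
The key to answer B is that an Elastic IP address can only be attached to a Transfer Family server whose endpoint is VPC-hosted. A sketch of the `transfer.create_server` request, shown as a plain dict so the shape can be inspected without AWS credentials; all resource IDs below are placeholders:

```python
# Parameters for transfer.create_server (all IDs are placeholders).
create_server_request = {
    "Protocols": ["SFTP"],
    "IdentityProviderType": "SERVICE_MANAGED",
    "Domain": "S3",         # serve files from the S3 bucket
    "EndpointType": "VPC",  # VPC-hosted; internet-facing once Elastic IPs attach
    "EndpointDetails": {
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0123456789abcdef0"],
        # Reuse the allocation ID of the existing SFTP Elastic IP so
        # customers keep connecting to the same address.
        "AddressAllocationIds": ["eipalloc-0123456789abcdef0"],
        # The existing security group that allows customer IP ranges.
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
}
print(create_server_request["EndpointType"])  # VPC
```

With `EndpointType` set to `PUBLIC` (option A), the `AddressAllocationIds` field is not available, which is why the publicly accessible endpoint cannot keep the customers' existing Elastic IP address.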

NEW QUESTION 3
- (Exam Topic 1)
A company has a website that enables users to upload videos. Company policy states the uploaded videos must be analyzed for restricted content. An uploaded video is placed in Amazon S3, and a message is pushed to an Amazon SQS queue with the video's location. A backend application pulls this location from Amazon SQS and analyzes the video.
The video analysis is compute-intensive and occurs sporadically during the day. The website scales with demand. The video analysis application runs on a fixed number of instances. Peak demand occurs during the holidays, so the company must add instances to the application during this time. All instances used are currently On-Demand Amazon EC2 T2 instances. The company wants to reduce the cost of the current solution.
Which of the following solutions is MOST cost-effective?

A. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Spot Instances to cover them while using Reserved Instances to cover peak demand. Use Amazon EC2 R4 and Amazon EC2 R5 Reserved Instances in an Auto Scaling group for the video analysis application.
B. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of Amazon EC2 C4 and Amazon EC2 C5 Spot Instances.
C. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 C4 instances. Determine the minimum number of website instances required during off-peak times and use On-Demand Instances to cover them while using Spot capacity to cover peak demand. Use Spot Fleet for the video analysis application comprised of C4 and Amazon EC2 C5 instances.
D. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 R4 instances. Determine the minimum number of website instances required during off-peak times and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of R4 and Amazon EC2 R5 instances.

Answer: B
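
The winning combination pairs Reserved Instances for the steady website baseline with a Spot Fleet of compute-optimized instances for the interruptible analysis job. A sketch of a Spot Fleet request configuration mixing C4 and C5 capacity; the AMI ID, role ARN, and sizes are placeholders:

```python
# Spot Fleet request config for the video analysis tier (placeholder IDs).
spot_fleet_config = {
    "IamFleetRole": "arn:aws:iam::111111111111:role/aws-ec2-spot-fleet-role",
    "TargetCapacity": 4,
    "AllocationStrategy": "lowestPrice",  # optimize purely for cost
    "LaunchSpecifications": [
        # Two instance families = two Spot capacity pools to draw from.
        {"ImageId": "ami-0123456789abcdef0", "InstanceType": "c4.2xlarge"},
        {"ImageId": "ami-0123456789abcdef0", "InstanceType": "c5.2xlarge"},
    ],
}
# ec2.request_spot_fleet(SpotFleetRequestConfig=spot_fleet_config)
types = sorted(s["InstanceType"] for s in spot_fleet_config["LaunchSpecifications"])
print(types)  # ['c4.2xlarge', 'c5.2xlarge']
```

Spot suits the analysis workload because the SQS queue decouples it: an interrupted instance simply leaves the message to be reprocessed.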

NEW QUESTION 4
- (Exam Topic 1)
A company hosts a large on-premises MySQL database at its main office that supports an issue tracking system used by employees around the world. The
company already uses AWS for some workloads and has created an Amazon Route 53 entry for the database endpoint that points to the on-premises database.
Management is concerned about the database being a single point of failure and wants a solutions architect to migrate the database to AWS without any data loss
or downtime.
Which set of actions should the solutions architect implement?

A. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load from the on-premises database to Aurora. Update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.
B. During nonbusiness hours, shut down the on-premises database and create a backup. Restore this backup to an Amazon Aurora DB cluster. When the restoration is complete, update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.
C. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load with continuous replication from the on-premises database to Aurora. When the migration is complete, update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.
D. Create a backup of the database and restore it to an Amazon Aurora multi-master cluster. This Aurora cluster will be in a master-master replication configuration with the on-premises database. Update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.

Answer: C

Explanation:
“Around the world” rules out a nightly maintenance window, so downtime options are eliminated. The deciding difference is the ability to leverage continuous replication in the MySQL-to-Aurora case.
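
"Full load with continuous replication" corresponds to the DMS migration type `full-load-and-cdc`. A sketch of the `dms.create_replication_task` parameters; the ARNs and the table-mapping rule are placeholders:

```python
import json

# Parameters for dms.create_replication_task (placeholder ARNs).
dms_task_request = {
    "ReplicationTaskIdentifier": "onprem-mysql-to-aurora",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:111111111111:endpoint:source",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:111111111111:endpoint:target",
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:111111111111:rep:instance",
    # Full load plus change data capture: migrate existing data, then keep
    # replicating ongoing changes so cutover requires no downtime.
    "MigrationType": "full-load-and-cdc",
    "TableMappings": json.dumps({
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
}
print(dms_task_request["MigrationType"])  # full-load-and-cdc
```

Once replication lag reaches zero, the Route 53 record can be repointed to the Aurora cluster endpoint with no data loss.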

NEW QUESTION 5
- (Exam Topic 1)
An online e-commerce business is running a workload on AWS. The application architecture includes a web tier, an application tier for business logic, and a
database tier for user and transactional data management. The database server has a 100 GB memory requirement. The business requires cost-efficient disaster
recovery for the application with an RTO of 5 minutes and an RPO of 1 hour. The business also has a regulatory requirement for out-of-region disaster recovery
with a minimum distance between the primary and alternate sites of 250 miles.
Which of the following options can the solutions architect design to create a comprehensive solution for this customer that meets the disaster recovery
requirements?

A. Back up the application and database data frequently and copy them to Amazon S3. Replicate the backups using S3 cross-region replication, and use AWS CloudFormation to instantiate infrastructure for disaster recovery and restore data from Amazon S3.
B. Employ a pilot light environment in which the primary database is configured with mirroring to build a standby database on m4.large in the alternate region. Use AWS CloudFormation to instantiate the web servers, application servers, and load balancers in case of a disaster to bring the application up in the alternate region. Vertically resize the database to meet the full production demands, and use Amazon Route 53 to switch traffic to the alternate region.
C. Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and the application tiers in an Auto Scaling group behind a load balancer, which can automatically scale when the load arrives to the application. Use Amazon Route 53 to switch traffic to the alternate region.
D. Employ a multi-region solution with fully functional web, application, and database tiers in both regions with equivalent capacity. Activate the primary database in one region only and the standby database in the other region. Use Amazon Route 53 to automatically switch traffic from one region to another using health check routing policies.

Answer: C

Explanation:
As the RTO is in minutes, this maps to the warm standby pattern
(https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html). Warm standby (RPO in seconds, RTO in minutes): maintain a scaled-down version of a fully functional environment always running in the DR Region. Business-critical systems are fully duplicated and are always on, but with a scaled-down fleet. When the time comes for recovery, the system is scaled up quickly to handle the production load.

NEW QUESTION 6
- (Exam Topic 1)
A financial services company logs personally identifiable information to its application logs stored in Amazon S3. Due to regulatory compliance requirements, the log files must be encrypted at rest. The security team has mandated that the company's on-premises hardware security modules (HSMs) be used to generate the CMK material.
Which steps should the solutions architect take to meet these requirements?

A. Create an AWS CloudHSM cluster. Create a new CMK in AWS KMS using AWS_CloudHSM as the source for the key material and an origin of AWS_CLOUDHSM. Enable automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that disallows uploads of unencrypted data and requires that the encryption source be AWS KMS.
B. Provision an AWS Direct Connect connection, ensuring there is no overlap of the RFC 1918 address space between on-premises hardware and the VPCs. Configure an AWS bucket policy on the logging bucket that requires all objects to be encrypted. Configure the logging application to query the on-premises HSMs from the AWS environment for the encryption key material, and create a unique CMK for each logging event.
C. Create a CMK in AWS KMS with no key material and an origin of EXTERNAL. Import the key material generated from the on-premises HSMs into the CMK using the public key and import token provided by AWS. Configure a bucket policy on the logging bucket that disallows uploads of non-encrypted data and requires that the encryption source be AWS KMS.
D. Create a new CMK in AWS KMS with AWS-provided key material and an origin of AWS_KMS. Disable this CMK, and overwrite the key material with the key material from the on-premises HSM using the public key and import token provided by AWS. Re-enable the CMK. Enable automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that disallows uploads of non-encrypted data and requires that the encryption source be AWS KMS.

Answer: C

Explanation:
https://aws.amazon.com/blogs/security/how-to-byok-bring-your-own-key-to-aws-kms-for-less-than-15-00-a-yea
https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys-create-cmk.html
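
The BYOK flow in answer C starts with a CMK that has no key material. A boto3-style sketch of the calls involved, shown as a request dict plus comments (no AWS calls are made; the description text is an assumption for illustration):

```python
# 1. Create a CMK with no key material.
create_key_request = {
    "Origin": "EXTERNAL",  # material will be imported, not AWS-generated
    "KeySpec": "SYMMETRIC_DEFAULT",
    "Description": "Log-encryption CMK; material generated on on-premises HSMs",
}
# kms.create_key(**create_key_request)

# 2. kms.get_parameters_for_import(KeyId=..., WrappingAlgorithm="RSAES_OAEP_SHA_256",
#    WrappingKeySpec="RSA_2048") returns a public key and an import token.
# 3. Wrap the HSM-generated key material with that public key, then call
#    kms.import_key_material(KeyId=..., ImportToken=...,
#                            EncryptedKeyMaterial=...).

print(create_key_request["Origin"])  # EXTERNAL
```

Note that automatic key rotation (as proposed in options A and D) is not supported for CMKs with imported key material, which is another reason C is the only workable sequence.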

NEW QUESTION 7
- (Exam Topic 1)
A development team has created a new flight tracker application that provides near-real-time data to users. The application has a front end that consists of an
Application Load Balancer (ALB) in front of two large Amazon EC2 instances in a single Availability Zone. Data is stored in a single Amazon RDS MySQL DB
instance. An Amazon Route 53 DNS record points to the ALB.
Management wants the development team to improve the solution to achieve maximum reliability with the least amount of operational overhead.
Which set of actions should the team take?

A. Create RDS MySQL read replicas. Deploy the application to multiple AWS Regions. Use a Route 53 latency-based routing policy to route to the application.
B. Configure the DB instance as Multi-AZ. Deploy the application to two additional EC2 instances in different Availability Zones behind an ALB.
C. Replace the DB instance with Amazon DynamoDB global tables. Deploy the application in multiple AWS Regions. Use a Route 53 latency-based routing policy to route to the application.
D. Replace the DB instance with Amazon Aurora with Aurora Replicas. Deploy the application to multiple smaller EC2 instances across multiple Availability Zones in an Auto Scaling group behind an ALB.

Answer: D

Explanation:
Multi-AZ ASG + ALB + Aurora = less overhead and automatic scaling.

NEW QUESTION 8
- (Exam Topic 1)
A company has a complex web application that leverages Amazon CloudFront for global scalability and performance. Over time, users report that the web
application is slowing down.
The company's operations team reports that the CloudFront cache hit ratio has been dropping steadily. The cache metrics report indicates that query strings on
some URLs are inconsistently ordered and are specified sometimes in mixed-case letters and sometimes in lowercase letters.
Which set of actions should the solutions architect take to increase the cache hit ratio as quickly as possible?

A. Deploy a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
B. Update the CloudFront distribution to disable caching based on query string parameters.
C. Deploy a reverse proxy after the load balancer to post-process the emitted URLs in the application to force the URL strings to be lowercase.
D. Update the CloudFront distribution to specify case-insensitive query string processing.

Answer: A

Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html
Before CloudFront serves content from the cache, it will trigger any Lambda function associated with the viewer request, in which we can normalize parameters.
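
AWS's documented normalization example is written in Node.js; the same viewer-request logic can be sketched in Python (a runtime Lambda@Edge also supports). The helper is pure, so it can be tested locally:

```python
from urllib.parse import parse_qsl, urlencode

def normalize_querystring(querystring: str) -> str:
    """Lowercase parameter names and values and sort pairs by name so
    equivalent URLs collapse to a single CloudFront cache key."""
    pairs = parse_qsl(querystring, keep_blank_values=True)
    normalized = sorted((k.lower(), v.lower()) for k, v in pairs)
    return urlencode(normalized)

def handler(event, context):
    # Lambda@Edge viewer-request event shape (sketch).
    request = event["Records"][0]["cf"]["request"]
    request["querystring"] = normalize_querystring(request["querystring"])
    return request

print(normalize_querystring("Size=L&color=RED"))  # color=red&size=l
```

Lowercasing values is only safe when the origin treats values case-insensitively; if it does not, lowercase the parameter names alone and still sort them.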

NEW QUESTION 9
- (Exam Topic 1)
A company has an application that sells tickets online and experiences bursts of demand every 7 days. The application has a stateless presentation layer running on Amazon EC2, an Oracle database to store unstructured data catalog information, and a backend API layer. The front-end layer uses an Elastic Load Balancer to distribute the load across nine On-Demand Instances over three Availability Zones (AZs). The Oracle database is running on a single EC2 instance. The company is experiencing performance issues when running more than two concurrent campaigns. A solutions architect must design a solution that meets the following requirements:
• Address scalability issues.
• Increase the level of concurrency.
• Eliminate licensing costs.
• Improve reliability.
Which set of steps should the solutions architect take?
Which set of steps should the solutions architect take?

A. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the Oracle database into a single Amazon RDS reserved DB instance.
B. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Create two additional copies of the database instance, then distribute the databases in separate AZs.
C. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the tables in the Oracle database into Amazon DynamoDB tables.
D. Convert the On-Demand Instances into Spot Instances to reduce costs for the front end. Convert the tables in the Oracle database into Amazon DynamoDB tables.

Answer: C

Explanation:
Combination of On-Demand and Spot Instances + DynamoDB.

NEW QUESTION 10
- (Exam Topic 1)
A company wants to change its internal cloud billing strategy for each of its business units. Currently, the cloud governance team shares reports for overall cloud spending with the head of each business unit. The company uses AWS Organizations to manage the separate AWS accounts for each business unit. The existing tagging standard in Organizations includes the application, environment, and owner. The cloud governance team wants a centralized solution so each business unit receives monthly reports on its cloud spending. The solution should also send notifications for any cloud spending that exceeds a set threshold.
Which solution is the MOST cost-effective way to meet these requirements?

A. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in each account to create monthly reports for each business unit.
B. Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create monthly reports for each business unit.
C. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use the AWS Billing and Cost Management dashboard in each account to create monthly reports for each business unit.
D. Enable AWS Cost and Usage Reports in the organization's master account and configure reports grouped by application, environment, and owner. Create an AWS Lambda function that processes AWS Cost and Usage Reports, sends budget alerts, and sends monthly reports to each business unit's email list.

Answer: B

Explanation:
Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create monthly reports for each business unit.
https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-aws-budgets-reports/
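
A sketch of the `budgets.create_budget` request for one business unit, scoped by the existing `owner` tag. The account ID, amount, tag value, and SNS topic are placeholders:

```python
# Parameters for budgets.create_budget, issued in the master account.
budget_request = {
    "AccountId": "111111111111",  # the organization's master account
    "Budget": {
        "BudgetName": "payments-bu-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        # Cost filter syntax for tags: "user:" + tag key, "$" + tag value.
        "CostFilters": {"TagKeyValue": ["user:owner$payments-team"]},
    },
    "NotificationsWithSubscribers": [{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,  # alert at 80% of the budgeted amount
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111111111111:payments-bu-alerts",
        }],
    }],
}
print(budget_request["Budget"]["TimeUnit"])  # MONTHLY
```

One such budget per business unit, each pointing at that unit's SNS topic, covers the threshold-alert requirement centrally.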

NEW QUESTION 10
- (Exam Topic 1)
A company has a three-tier application running on AWS with a web server, an application server, and an Amazon RDS MySQL DB instance. A solutions architect is designing a disaster recovery (DR) solution with an RPO of 5 minutes.
Which solution will meet the company's requirements?

A. Configure AWS Backup to perform cross-Region backups of all servers every 5 minutes. Reprovision the three tiers in the DR Region from the backups using AWS CloudFormation in the event of a disaster.
B. Maintain another running copy of the web and application server stack in the DR Region using AWS CloudFormation drift detection. Configure cross-Region snapshots of the DB instance to the DR Region every 5 minutes. In the event of a disaster, restore the DB instance using the snapshot in the DR Region.
C. Use Amazon EC2 Image Builder to create and copy AMIs of the web and application server to both the primary and DR Regions. Create a cross-Region read replica of the DB instance in the DR Region. In the event of a disaster, promote the read replica to become the master and reprovision the servers with AWS CloudFormation using the AMIs.
D. Create AMIs of the web and application servers in the DR Region. Use scheduled AWS Glue jobs to synchronize the DB instance with another DB instance in the DR Region. In the event of a disaster, switch to the DB instance in the DR Region and reprovision the servers with AWS CloudFormation using the AMIs.

Answer: C

Explanation:
Deploying a brand-new RDS instance would take more than 30 minutes, and snapshot schedules cannot guarantee a 5-minute RPO, so the database must already exist as a cross-Region read replica. EC2 Image Builder is used to copy the AMIs into the DR Region, not to launch them.
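
The database half of the winning option is a cross-Region read replica that is promoted during failover. A sketch of the two RDS calls (identifiers and Regions are placeholders; when the source is in another Region, `SourceDBInstanceIdentifier` must be the ARN):

```python
# Issued against the DR Region, e.g. boto3.client("rds", region_name="us-west-2").
read_replica_request = {
    "DBInstanceIdentifier": "app-db-dr-replica",
    # Cross-Region replica: the source is referenced by ARN.
    "SourceDBInstanceIdentifier": "arn:aws:rds:us-east-1:111111111111:db:app-db",
    "SourceRegion": "us-east-1",  # lets boto3 build the required presigned URL
}
# rds.create_db_instance_read_replica(**read_replica_request)
# On disaster:
# rds.promote_read_replica(DBInstanceIdentifier="app-db-dr-replica")
print(read_replica_request["SourceRegion"])  # us-east-1
```

Because replication is asynchronous and continuous, the replica typically lags by seconds, comfortably inside the 5-minute RPO.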

NEW QUESTION 15
- (Exam Topic 1)
A company is running a web application on Amazon EC2 instances in a production AWS account. The company requires all logs generated from the web application to be copied to a central AWS account for analysis and archiving. The company's AWS accounts are currently managed independently. Logging agents are configured on the EC2 instances to upload the log files to an Amazon S3 bucket in the central AWS account.
A solutions architect needs to provide access for a solution that will allow the production account to store log files in the central account. The central account also needs to have read access to the log files.
What should the solutions architect do to meet these requirements?


A. Create a cross-account role in the central account. Assume the role from the production account when the logs are being copied.
B. Create a policy on the S3 bucket with the production account ID as the principal. Allow S3 access from a delegated user.
C. Create a policy on the S3 bucket with access from only the CIDR range of the EC2 instances in the production account. Use the production account ID as the principal.
D. Create a cross-account role in the production account. Assume the role from the production account when the logs are being copied.

Answer: B
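
A sketch of the central-account bucket policy described in answer B; the bucket name and account ID are placeholders. The `bucket-owner-full-control` condition is a common companion so the central account can read objects the production account uploads; it is an assumption added here, not part of the question:

```python
import json

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowProductionAccountLogUploads",
        "Effect": "Allow",
        # Production account as the principal.
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::central-log-archive/*",
        # Require an ACL that lets the central (bucket-owning) account
        # read the uploaded objects.
        "Condition": {
            "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        },
    }],
}
print(json.dumps(bucket_policy["Statement"][0]["Effect"]))  # "Allow"
```

The logging agents then need only an IAM policy in the production account allowing `s3:PutObject` on the same resource; no cross-account role assumption is required.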

NEW QUESTION 17
- (Exam Topic 1)
A scientific organization requires the processing of text and picture data stored in an Amazon S3 bucket. The data is gathered from numerous radar stations during a mission's live, time-critical phase. The data is uploaded by the radar stations to the source S3 bucket. The data is prefixed with the identification number of the radar station.
In a second account, the business built a destination S3 bucket. To satisfy a compliance target, data must be transferred from the source S3 bucket to the destination S3 bucket. Replication is accomplished by using an S3 replication rule that covers all items in the source S3 bucket.
A single radar station has been recognized as having the most precise data. At this radar station, data replication must be completed within 30 minutes of the radar station uploading the items to the source S3 bucket.
What actions should a solutions architect take to ensure that these criteria are met?

A. Set up an AWS DataSync agent to replicate the prefixed data from the source S3 bucket to the destination S3 bucket. Select to use all available bandwidth on the task, and monitor the task to ensure that it is in the TRANSFERRING status. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.
B. In the second account, create another S3 bucket to receive data from the radar station with the most accurate data. Set up a new replication rule for this new S3 bucket to separate the replication from the other radar stations. Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
C. Enable Amazon S3 Transfer Acceleration on the source S3 bucket, and configure the radar station with the most accurate data to use the new endpoint. Monitor the S3 destination bucket's TotalRequestLatency metric. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.
D. Create a new S3 replication rule on the source S3 bucket that filters for the keys that use the prefix of the radar station with the most accurate data. Enable S3 Replication Time Control (S3 RTC). Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.

Answer: D

Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-time-control.html
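
A sketch of the prefix-filtered replication rule with S3 RTC enabled; the role ARN, bucket names, account ID, and station prefix are placeholders. Note that S3 RTC's `Time.Minutes` accepts only the value 15, which comfortably meets the 30-minute target:

```python
replication_configuration = {
    "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
    "Rules": [{
        "ID": "priority-radar-station",
        "Status": "Enabled",
        "Priority": 1,
        # Hypothetical prefix for the high-precision station.
        "Filter": {"Prefix": "station-042/"},
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": "arn:aws:s3:::destination-bucket",
            "Account": "222222222222",
            # S3 Replication Time Control: replicate within 15 minutes,
            # with replication metrics and events for threshold alarms.
            "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
            "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
        },
    }],
}
# s3.put_bucket_replication(Bucket="source-bucket",
#                           ReplicationConfiguration=replication_configuration)
minutes = replication_configuration["Rules"][0]["Destination"]["ReplicationTime"]["Time"]["Minutes"]
print(minutes)  # 15
```

Enabling RTC also publishes the `ReplicationLatency` CloudWatch metric, which the EventBridge/CloudWatch alarm in the answer monitors.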

NEW QUESTION 21
- (Exam Topic 1)
A large company with hundreds of AWS accounts has a newly established centralized internal process for purchasing new or modifying existing Reserved
Instances. This process requires all business units that want to purchase or modify Reserved Instances to submit requests to a dedicated team for procurement or
execution. Previously, business units would directly purchase or modify Reserved Instances in their own respective AWS accounts autonomously.
Which combination of steps should be taken to proactively enforce the new process in the MOST secure way possible? (Select TWO.)

A. Ensure all AWS accounts are part of an AWS Organizations structure operating in all features mode.
B. Use AWS Config to report on the attachment of an IAM policy that denies access to the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions.
C. In each AWS account, create an IAM policy with a DENY rule for the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions.
D. Create an SCP that contains a deny rule for the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions. Attach the SCP to each organizational unit (OU) of the AWS Organizations structure.
E. Ensure that all AWS accounts are part of an AWS Organizations structure operating in consolidated billing features mode.

Answer: AD

Explanation:
https://docs.aws.amazon.com/organizations/latest/APIReference/API_EnableAllFeatures.html
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp-strategies.html
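
The SCP from the correct pair could look like the following. The exemption condition for the central procurement team's role is an assumption added for illustration (the question does not specify how the dedicated team retains access), and the role name is a placeholder:

```python
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRIPurchaseAndModify",
        "Effect": "Deny",
        "Action": [
            "ec2:PurchaseReservedInstancesOffering",
            "ec2:ModifyReservedInstances",
        ],
        "Resource": "*",
        # Assumed exemption: only the central procurement role keeps
        # these permissions.
        "Condition": {
            "ArnNotLike": {
                "aws:PrincipalARN": "arn:aws:iam::*:role/ri-procurement"
            }
        },
    }],
}
print(len(scp["Statement"][0]["Action"]))  # 2
```

SCPs require all-features mode (answer choice A), which is why consolidated-billing-only mode is insufficient.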

NEW QUESTION 24
- (Exam Topic 1)
A company has a new application that needs to run on five Amazon EC2 instances in a single AWS Region. The application requires high-throughput, low-latency
network connections between all of the EC2 instances where the application will run. There is no requirement for the application to be fault tolerant.
Which solution will meet these requirements?

A. Launch five new EC2 instances into a cluster placement group. Ensure that the EC2 instance type supports enhanced networking.
B. Launch five new EC2 instances into an Auto Scaling group in the same Availability Zone. Attach an extra elastic network interface to each EC2 instance.
C. Launch five new EC2 instances into a partition placement group. Ensure that the EC2 instance type supports enhanced networking.
D. Launch five new EC2 instances into a spread placement group. Attach an extra elastic network interface to each EC2 instance.

Answer: A

Explanation:
When you launch EC2 instances in a cluster placement group, they benefit from high-throughput, low-latency networking. There is no redundancy, but as the question states, fault tolerance is not required.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
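
The winning option can be sketched as two EC2 calls: create the cluster placement group, then launch all five instances into it on an ENA-capable instance type. The AMI ID and instance type are placeholders:

```python
# Parameters for ec2.create_placement_group.
placement_group_request = {
    "GroupName": "low-latency-cluster",
    "Strategy": "cluster",  # pack instances close together in one AZ
}
# ec2.create_placement_group(**placement_group_request)

# Parameters for ec2.run_instances.
run_instances_request = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "c5n.9xlarge",  # supports enhanced networking (ENA)
    "MinCount": 5,
    "MaxCount": 5,  # all five land in the same cluster placement group
    "Placement": {"GroupName": placement_group_request["GroupName"]},
}
# ec2.run_instances(**run_instances_request)
print(run_instances_request["Placement"]["GroupName"])  # low-latency-cluster
```

Launching all five in a single request also reduces the chance of an insufficient-capacity error, since EC2 places them together or not at all.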


NEW QUESTION 25
- (Exam Topic 1)
A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also
runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job
runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling
group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed
data for the duration of the 72-hour run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?

A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.

Answer: A
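The winning design creates a short-lived FSx for Lustre file system linked to the S3 data set, so that file metadata is imported up front and file contents are lazy-loaded only when the job first reads them. A minimal sketch of the `create_file_system` kwargs in the shape boto3's FSx client accepts; bucket name, subnet, and capacity are illustrative assumptions:

```python
# Sketch: kwargs for boto3's fsx.create_file_system, linking the new Lustre
# file system to an S3 bucket so the job's subset of files lazy-loads on
# first access. "monthly-job-data" and the subnet ID are placeholders.

def lustre_from_s3(bucket, capacity_gib=206400):  # ~200 TB, in the 2400-GiB
    return {                                      # increments SCRATCH_2 requires
        "FileSystemType": "LUSTRE",
        "StorageCapacity": capacity_gib,
        "SubnetIds": ["subnet-EXAMPLE"],
        "LustreConfiguration": {
            "ImportPath": f"s3://{bucket}",   # lazy-load source
            "ExportPath": f"s3://{bucket}",   # write results back after the job
            "DeploymentType": "SCRATCH_2",    # short-lived; delete after the run
        },
    }

fs_request = lustre_from_s3("monthly-job-data")
# boto3.client("fsx").create_file_system(**fs_request)  # actual call, omitted
```

Deleting the file system after the 72-hour run means storage costs accrue only during the job, while the long-term copy stays in S3 Intelligent-Tiering.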

NEW QUESTION 29
- (Exam Topic 1)
An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a solutions architect to design a
simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them
in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with
minimal delays.
Which of the following is the MOST reliable approach to meet the requirements?

A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
B. Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.
C. Receive the orders using AWS Step Functions and trigger an Amazon ECS container to process them.
D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.

Answer: B

Explanation:
Q: How does Amazon Kinesis Data Streams differ from Amazon SQS?
Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay
records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the
same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting,
aggregation, and filtering).
https://aws.amazon.com/kinesis/data-streams/faqs/
https://aws.amazon.com/blogs/big-data/unite-real-time-and-batch-analytics-using-the-big-data-lambda-architect
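The SQS-triggered Lambda pattern in the answer can be sketched as a handler that receives a batch of queue records, parses each order, and processes it. The order schema, `process_order` body, and field names are hypothetical; in the real application `process_order` would write to the DynamoDB table:

```python
import json

# Sketch: a Lambda handler for an SQS event source mapping. Each record body
# is assumed to be a JSON order document; the processing step is a placeholder
# for validation plus a DynamoDB put_item.

def process_order(order):
    # placeholder for real processing (validation, enrichment, dynamodb put_item)
    return {"order_id": order["id"], "status": "PROCESSED"}

def handler(event, context=None):
    results = []
    for record in event.get("Records", []):   # SQS delivers records in batches
        order = json.loads(record["body"])
        results.append(process_order(order))
    return results

# A sample event in the shape the SQS event source mapping delivers:
sample_event = {"Records": [{"body": json.dumps({"id": "o-1"})}]}
```

Because Lambda scales concurrency with the queue depth, this handles the sporadic, campaign-driven traffic without any servers to manage.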

NEW QUESTION 30
- (Exam Topic 1)
A solutions architect needs to advise a company on how to migrate its on-premises data processing application to the AWS Cloud. Currently, users upload input
files through a web portal. The web server then stores the uploaded files on NAS and messages the processing server over a message queue. Each media file can
take up to 1 hour to process. The company has determined that the number of media files awaiting processing is significantly higher during business hours, with
the number of files rapidly declining after business hours.
What is the MOST cost-effective migration recommendation?

A. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket.
B. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down the EC2 instance after the task is complete.
C. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS.
D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.


Answer: D

Explanation:
Each media file can take up to 1 hour to process, which exceeds the AWS Lambda maximum execution time of 15 minutes, so the Lambda-based options are not viable. EC2 instances in an Auto Scaling group, scaled on the SQS queue length, handle long-running jobs and scale in after business hours when the backlog declines, making this the most cost-effective option.
https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/
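Scaling on queue length is usually done with the "backlog per instance" calculation that AWS documents for SQS-driven Auto Scaling. A minimal sketch with illustrative numbers: since each file takes up to an hour, one instance drains roughly one message per hour:

```python
import math

# Sketch: the backlog-per-instance calculation for scaling an Auto Scaling
# group on SQS queue depth. The throughput figure (1 message per instance per
# hour) and the group's maximum size are illustrative assumptions.

def desired_instances(queue_depth, msgs_per_instance_per_hour=1, max_size=50):
    """How many instances are needed to drain the current backlog in ~1 hour."""
    needed = math.ceil(queue_depth / msgs_per_instance_per_hour)
    return min(needed, max_size)
```

In practice this value is published as a custom CloudWatch metric and used as the target of a target-tracking scaling policy on the group.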

NEW QUESTION 33
- (Exam Topic 1)
A company standardized its method of deploying applications to AWS using AWS CodePipeline and AWS Cloud Formation. The applications are in Typescript and
Python. The company has recently acquired another business that deploys applications to AWS using Python scripts.
Developers from the newly acquired company are hesitant to move their applications under CloudFormation because it would require that they learn a new
domain-specific language and eliminate their access to language features, such as looping.
How can the acquired applications quickly be brought up to deployment standards while addressing the developers' concerns?

A. Create CloudFormation templates and re-use parts of the Python scripts as instance user data. Use the AWS Cloud Development Kit (AWS CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using these templates.
B. Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes of the existing and acquired company. Orchestrate the CodeBuild job using CodePipeline.
C. Standardize on AWS OpsWorks. Integrate OpsWorks with CodePipeline. Have the developers create Chef recipes to deploy their applications on AWS.
D. Define the AWS resources using Typescript or Python. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developers' code, and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.

Answer: D
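The point of the answer is that the CDK lets the developers keep ordinary language features such as looping while still producing CloudFormation. A dependency-free stand-in for the idea (the real CDK does this with constructs and `cdk synth`; resource names here are illustrative):

```python
import json

# Sketch of the idea behind AWS CDK: use ordinary Python (loops, functions,
# comprehensions) to generate a CloudFormation template, instead of hand-
# writing the template DSL. This toy generator is not the CDK itself; it just
# shows why retaining looping matters to the acquired team's developers.

def bucket_resources(names):
    return {
        f"Bucket{i}": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": name},
        }
        for i, name in enumerate(names)
    }

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": bucket_resources(["logs", "assets", "backups"]),
}
template_json = json.dumps(template, indent=2)  # what would be handed to CloudFormation
```

With the real CDK, the same loop lives inside a `Stack` subclass and `cdk synth` emits the equivalent template, which CodePipeline then deploys.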

NEW QUESTION 37
- (Exam Topic 1)
A company is using AWS Organizations to manage multiple accounts. Due to regulatory requirements, the company wants to restrict specific member accounts to
certain AWS Regions, where they are permitted to deploy resources. The resources in the accounts must be tagged, enforced based on a group standard, and
centrally managed with minimal configuration.
What should a solutions architect do to meet these requirements?

A. Create an AWS Config rule in the specific member accounts to limit Regions and apply a tag policy.
B. From the AWS Billing and Cost Management console, in the master account, disable Regions for the specific member accounts and apply a tag policy on the root.
C. Associate the specific member accounts with the root. Apply a tag policy and an SCP using conditions to limit Regions.
D. Associate the specific member accounts with a new OU. Apply a tag policy and an SCP using conditions to limit Regions.

Answer: D
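The two Organizations policies attached to the new OU can be sketched as JSON documents. The keys follow the documented tag-policy and SCP grammars; the tag key, enforced resource type, and Region list are illustrative assumptions:

```python
# Sketch: the two policy documents option D attaches to the new OU.

# Tag policy: enforce a hypothetical "CostCenter" group standard on EC2
# instances. The "@@assign" operator is part of the tag policy grammar.
TAG_POLICY = {
    "tags": {
        "costcenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "enforced_for": {"@@assign": ["ec2:instance"]},
        }
    }
}

# SCP: deny any action whose target Region is not on the permitted list.
REGION_SCP = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["eu-central-1", "eu-west-1"]}
        },
    }],
}
```

Both documents are created once in the management account and inherited by every account moved into the OU, which is what keeps the configuration minimal.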

NEW QUESTION 40
- (Exam Topic 1)
A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15 minutes in plaintext,
delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card primary account number (PAN) data.
The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing. The company also needs to
remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely to be added in the future, so any design
needs to be easily expandable.
Which solution will meet these requirements?

A. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
B. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate instance.
C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
D. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.

Answer: C

Explanation:
You can use a Glue crawler to populate the AWS Glue Data Catalog with tables. The Lambda function can be triggered by S3 event notifications when object create events occur. The Lambda function then starts the Glue ETL job, which masks the sensitive PAN data, applies the field removals and merges, and transforms the output format to JSON. Adding a future feed only requires another classifier and table definition, so this solution meets all of the requirements.
https://docs.aws.amazon.com/glue/latest/dg/trigger-job.html
https://d1.awsstatic.com/Products/product-name/diagrams/product-page-diagram_Glue_Event-driven-ETL-Pipel
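The glue between the S3 notification and the ETL job is a small Lambda function. A minimal sketch: the argument-building step is split into a pure function so it can be tested without calling AWS, and the job name and argument keys are hypothetical:

```python
# Sketch: a Lambda handler, triggered by an S3 ObjectCreated notification,
# that starts the Glue ETL job for the delivered feed file. The resulting
# dict is in the shape boto3's glue.start_job_run accepts; "mask-pan-to-json"
# and the "--source_*" argument names are placeholders.

def glue_job_args(event, job_name="mask-pan-to-json"):
    record = event["Records"][0]              # S3 sends one record per object
    return {
        "JobName": job_name,
        "Arguments": {
            "--source_bucket": record["s3"]["bucket"]["name"],
            "--source_key": record["s3"]["object"]["key"],
        },
    }

def handler(event, context=None):
    args = glue_job_args(event)
    # boto3.client("glue").start_job_run(**args)  # actual call, omitted here
    return args

# A sample event in the shape S3 event notifications deliver:
sample = {"Records": [{"s3": {"bucket": {"name": "feed-in"},
                              "object": {"key": "batch.csv"}}}]}
```

Passing the bucket and key as job arguments keeps the Glue script generic, which is what makes the design easy to extend to additional feeds.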

NEW QUESTION 41
- (Exam Topic 1)
An online retail company hosts its stateful web-based application and MySQL database in an on-premises data center on a single server. The company wants to
increase its customer base by conducting more marketing campaigns and promotions. In preparation, the company wants to migrate its application and database
to AWS to increase the reliability of its architecture.
Which solution should provide the HIGHEST level of reliability?

A. Migrate the database to an Amazon RDS MySQL Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon Neptune.
B. Migrate the database to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis replication group.
C. Migrate the database to Amazon DocumentDB (with MongoDB compatibility). Deploy the application in an Auto Scaling group on Amazon EC2 instances behind a Network Load Balancer. Store sessions in Amazon Kinesis Data Firehose.
D. Migrate the database to an Amazon RDS MariaDB Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon ElastiCache for Memcached.

Answer: B

NEW QUESTION 44
- (Exam Topic 1)
A company built an ecommerce website on AWS using a three-tier web architecture. The application is
Java-based and composed of an Amazon CloudFront distribution, an Apache web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend
Amazon Aurora MySQL database.
Last month, during a promotional sales event, users reported errors and timeouts while adding items to their shopping carts. The operations team recovered the
logs created by the web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated before logs could be collected
and the Aurora metrics were not sufficient for query performance analysis.
Which combination of steps must the solutions architect take to improve application performance visibility during peak traffic events? (Select THREE.)

A. Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs.
B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for Java.
C. Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon Kinesis.
D. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.
E. Enable and configure AWS CloudTrail to collect and analyze application activity from Amazon EC2 and Aurora.
F. Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to AWS X-Ray.

Answer: ABD

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.MySQL.html
https://aws.amazon.com/blogs/mt/simplifying-apache-server-logs-with-amazon-cloudwatch-logs-insights/
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet-messagehandler.html
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-sqlclients.html
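Option D hinges on shipping the Apache logs off the instances before Auto Scaling terminates them. A sketch of the relevant "logs" section of a CloudWatch agent configuration file; the log file paths and log group names are illustrative assumptions:

```python
import json

# Sketch: the "logs" section of a CloudWatch agent configuration that streams
# the Apache access and error logs to CloudWatch Logs, so they survive
# instance termination. Paths and group names are placeholders.

CW_AGENT_CONFIG = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {"file_path": "/var/log/httpd/access_log",
                     "log_group_name": "web/apache/access"},
                    {"file_path": "/var/log/httpd/error_log",
                     "log_group_name": "web/apache/error"},
                ]
            }
        }
    }
}
config_json = json.dumps(CW_AGENT_CONFIG, indent=2)  # written to the agent's config file
```

The agent reads this JSON from its configuration file (commonly under `/opt/aws/amazon-cloudwatch-agent/`), so baking it into the AMI or user data covers every instance the Auto Scaling group launches.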

NEW QUESTION 47
- (Exam Topic 1)
A large company in Europe plans to migrate its applications to the AWS Cloud. The company uses multiple AWS accounts for various business groups. A data
privacy law requires the company to restrict developers'
access to AWS European Regions only.
What should the solutions architect do to meet this requirement with the LEAST amount of management overhead?

A. Create IAM users and IAM groups in each account. Create IAM policies to limit access to non-European Regions. Attach the IAM policies to the IAM groups.
B. Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create SCPs to limit access to non-European Regions and attach the policies to the OUs.
C. Set up AWS Single Sign-On and attach AWS accounts. Create permission sets with policies to restrict access to non-European Regions. Create IAM users and IAM groups in each account.
D. Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create permission sets with policies to restrict access to non-European Regions. Create IAM users and IAM groups in the primary account.

Answer: B

Explanation:
"This policy uses the Deny effect to deny access to all requests for operations that don't target one of the two approved regions (eu-central-1 and eu-west-1)."
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.htm
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html
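The SCP pattern the cited AWS example uses can be sketched directly: deny any request whose target Region is not approved, while exempting global services that only operate out of us-east-1. The `NotAction` list below is trimmed for illustration; the real example policy exempts more services:

```python
# Sketch: an SCP that restricts an OU's accounts to two European Regions.
# The NotAction exemptions cover global services (abbreviated here); without
# them, calls to IAM, Organizations, etc. would be denied as well.

EU_ONLY_SCP = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideEU",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["eu-central-1", "eu-west-1"]}
        },
    }],
}
```

Attached once to the OU, the policy applies automatically to every current and future account in it, which is why this option has the least management overhead.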

NEW QUESTION 48
- (Exam Topic 1)
A company is building a hybrid solution between its existing on-premises systems and a new backend in AWS. The company has a management application to
monitor the state of its current IT infrastructure and automate responses to issues. The company wants to incorporate the status of its consumed AWS services


into the application. The application uses an HTTPS endpoint to receive updates.
Which approach meets these requirements with the LEAST amount of operational overhead?

A. Configure AWS Systems Manager OpsCenter to ingest operational events from the on-premises systems Retire the on-premises management application and
adopt OpsCenter as the hub
B. Configure Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes for AWS Health events from the AWS Personal Health
Dashboard Configure the EventBridge (CloudWatch Events) event to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic and
subscribe the topic to the HTTPS endpoint of the management application
C. Modify the on-premises management application to call the AWS Health API to poll for status events of AWS services.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes for AWS Health events from the AWS Service Health Dashboard
Configure the EventBridge (CloudWatch Events) event to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the
topic to an HTTPS endpoint for the management application with a topic filter corresponding to the services being used

Answer: B

Explanation:
AWS Health events from the Personal Health Dashboard cover the status of the AWS services the account actually consumes. An EventBridge rule can match these events and publish them to an Amazon SNS topic, and SNS can deliver notifications directly to the management application's existing HTTPS endpoint. This keeps the on-premises application in place and requires no polling, so it has the least operational overhead.
https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html
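The EventBridge rule described in options B and D boils down to an event pattern plus an SNS target. A minimal sketch; the rule name, target ID, and topic ARN are placeholders:

```python
import json

# Sketch: the pieces of an EventBridge rule that forwards AWS Health events
# to an SNS topic, which in turn is subscribed to the management
# application's HTTPS endpoint. All ARNs and names are illustrative.

EVENT_PATTERN = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
}

TARGET = {
    "Id": "health-to-sns",
    "Arn": "arn:aws:sns:us-east-1:111122223333:ops-status",  # placeholder topic
}

# events.put_rule(Name="aws-health", EventPattern=json.dumps(EVENT_PATTERN))
# events.put_targets(Rule="aws-health", Targets=[TARGET])   # actual calls, omitted
rule_pattern_json = json.dumps(EVENT_PATTERN)
```

The SNS topic then needs an HTTPS subscription pointing at the management application's endpoint, which confirms the subscription on first delivery.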

NEW QUESTION 52
- (Exam Topic 1)
A company is building an image service on the web that will allow users to upload and search random photos. At peak usage, up to 10,000 users worldwide will
upload their images. The service will then overlay text on the uploaded images, which will then be published on the company website.
Which design should a solutions architect implement?

A. Store the uploaded images in Amazon Elastic File System (Amazon EFS). Send application log information about each image to Amazon CloudWatch Logs. Create a fleet of Amazon EC2 instances that use CloudWatch Logs to determine which images need to be processed. Place processed images in another directory in Amazon EFS. Enable Amazon CloudFront and configure the origin to be one of the EC2 instances in the fleet.
B. Store the uploaded images in an Amazon S3 bucket and configure an S3 bucket event notification to send a message to Amazon Simple Notification Service (Amazon SNS). Create a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB) to pull messages from Amazon SNS to process the images and place them in Amazon Elastic File System (Amazon EFS). Use Amazon CloudWatch metrics for the SNS message volume to scale out EC2 instances. Enable Amazon CloudFront and configure the origin to be the ALB in front of the EC2 instances.
C. Store the uploaded images in an Amazon S3 bucket and configure an S3 bucket event notification to send a message to an Amazon Simple Queue Service (Amazon SQS) queue. Create a fleet of Amazon EC2 instances to pull messages from the SQS queue to process the images and place them in another S3 bucket. Use Amazon CloudWatch metrics for queue depth to scale out EC2 instances. Enable Amazon CloudFront and configure the origin to be the S3 bucket that contains the processed images.
D. Store the uploaded images on a shared Amazon Elastic Block Store (Amazon EBS) volume mounted to a fleet of Amazon EC2 Spot Instances. Create an Amazon DynamoDB table that contains information about each uploaded image and whether it has been processed. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to scale out EC2 instances. Enable Amazon CloudFront and configure the origin to reference an Elastic Load Balancer in front of the fleet of EC2 instances.

Answer: C

NEW QUESTION 55
- (Exam Topic 1)
A large company is running a popular web application. The application runs on several Amazon EC2 Linux instances in an Auto Scaling group in a private subnet.
An Application Load Balancer is targeting the instances in the Auto Scaling group in the private subnet. AWS Systems Manager Session Manager is configured,
and AWS Systems Manager Agent is running on all the EC2 instances.
The company recently released a new version of the application. Some EC2 instances are now being marked as unhealthy and are being terminated. As a result,
the application is running at reduced capacity. A solutions architect tries to determine the root cause by analyzing Amazon CloudWatch logs that are collected from
the application, but the logs are inconclusive.
How should the solutions architect gain access to an EC2 instance to troubleshoot the issue?

A. Suspend the Auto Scaling group's HealthCheck scaling process. Use Session Manager to log in to an instance that is marked as unhealthy.
B. Enable EC2 instance termination protection. Use Session Manager to log in to an instance that is marked as unhealthy.
C. Set the termination policy to OldestInstance on the Auto Scaling group. Use Session Manager to log in to an instance that is marked as unhealthy.
D. Suspend the Auto Scaling group's Terminate process. Use Session Manager to log in to an instance that is marked as unhealthy.

Answer: D

Explanation:
For Amazon EC2 Auto Scaling, there are two primary process types: Launch and Terminate. The Launch process adds a new EC2 instance to an Auto Scaling group, increasing its capacity; the Terminate process removes an instance from the group, decreasing its capacity. HealthCheck is not a primary process; it is one of several additional processes (AddToLoadBalancer, AlarmNotification, AZRebalance, HealthCheck, InstanceRefresh, ReplaceUnhealthy, ScheduledActions). The application is running at reduced capacity not because instances are being marked unhealthy, but because they are being terminated. Suspending the Terminate process keeps an unhealthy instance running so it can be inspected with Session Manager.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
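The suspension itself is a single API call. A sketch of the kwargs in the shape boto3's `autoscaling.suspend_processes` accepts; the group name is a placeholder:

```python
# Sketch: pause instance termination so an unhealthy instance stays around
# long enough to inspect over Session Manager. "web-asg" is a placeholder
# Auto Scaling group name.

def suspend_terminate_kwargs(group_name):
    return {
        "AutoScalingGroupName": group_name,
        "ScalingProcesses": ["Terminate"],  # leave Launch and HealthCheck running
    }

kwargs = suspend_terminate_kwargs("web-asg")
# autoscaling.suspend_processes(**kwargs)   # actual call, omitted here;
# autoscaling.resume_processes(**kwargs)    # undo after troubleshooting
```

Remember to resume the process after troubleshooting, or unhealthy instances will accumulate in the group.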

NEW QUESTION 60
- (Exam Topic 1)
A solutions architect is designing a network for a new cloud deployment. Each account will need autonomy to modify route tables and make changes. Centralized

Passing Certification Exams Made Easy visit - https://www.2PassEasy.com


Welcome to download the Newest 2passeasy SAP-C02 dumps
https://www.2passeasy.com/dumps/SAP-C02/ (300 New Questions)

and controlled egress internet connectivity is also needed. The cloud footprint is expected to grow to thousands of AWS accounts.
Which architecture will meet these requirements?

A. A centralized transit VPC with a VPN connection to a standalone VPC in each account. Outbound internet traffic will be controlled by firewall appliances.
B. A centralized shared VPC with a subnet for each account. Outbound internet traffic will be controlled through a fleet of proxy servers.
C. A shared services VPC to host central assets to include a fleet of firewalls with a route to the internet. Each spoke VPC will peer to the central VPC.
D. A shared transit gateway to which each VPC will be attached. Outbound internet access will route through a fleet of VPN-attached firewalls.

Answer: D

Explanation:
https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centr
https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centr
AWS Transit Gateway helps you design and implement networks at scale by acting as a cloud router. As your network grows, the complexity of managing
incremental connections can slow you down. AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network
and puts an end to complex peering relationships -- each new connection is only made once.

NEW QUESTION 62
- (Exam Topic 1)
A company maintains a restaurant review website. The website is a single-page application where files are stored in Amazon S3 and delivered using Amazon
CloudFront. The company receives several fake postings every day that are manually removed.
The security team has identified that most of the fake posts are from bots with IP addresses that have a bad reputation within the same global region. The team
needs to create a solution to help restrict the bots from accessing the website.
Which strategy should a solutions architect use?

A. Use AWS Firewall Manager to control the CloudFront distribution security settings. Create a geographical block rule and associate it with Firewall Manager.
B. Associate an AWS WAF web ACL with the CloudFront distribution. Select the managed Amazon IP reputation rule group for the web ACL with a deny action.
C. Use AWS Firewall Manager to control the CloudFront distribution security settings. Select the managed Amazon IP reputation rule group and associate it with Firewall Manager with a deny action.
D. Associate an AWS WAF web ACL with the CloudFront distribution. Create a rule group for the web ACL with a geographical match statement with a deny action.

Answer: B

Explanation:
IP reputation rule groups allow you to block requests based on their source. Choose one or more of these rule groups to reduce your exposure to bot traffic or exploitation attempts. The Amazon IP reputation list rule group contains rules based on Amazon internal threat intelligence: it inspects requests against a list of IP addresses that Amazon threat intelligence has identified as bots or other threats, which is exactly the bad-reputation bot traffic described here.
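Attaching the managed rule group is one rule entry in the web ACL. A sketch of that entry in the shape boto3's `wafv2.create_web_acl` expects (for CloudFront the web ACL uses scope `CLOUDFRONT` in us-east-1); the rule and metric names are illustrative:

```python
# Sketch: the web ACL rule that attaches the managed Amazon IP reputation
# rule group. Managed rule groups carry their own block actions, so
# OverrideAction "None" lets those deny actions take effect.

IP_REPUTATION_RULE = {
    "Name": "aws-ip-reputation",           # illustrative rule name
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesAmazonIpReputationList",
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ipReputation",      # illustrative metric name
    },
}
# This dict goes into the Rules list of wafv2.create_web_acl(..., Scope="CLOUDFRONT")
```

The same structure works from CloudFormation (`AWS::WAFv2::WebACL`) if the team prefers templates over API calls.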

NEW QUESTION 66
- (Exam Topic 1)
A company is serving files to its customers through an SFTP server that is accessible over the internet. The SFTP server is running on a single Amazon EC2
instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for authentication. The EC2
instance also has an attached security group that allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to
customers who access files. The solution must not change the way customers connect.
Which solution will meet these requirements?

A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the EFS file system.
D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.

Answer: B

Explanation:
https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-
type/
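A VPC-hosted endpoint is what lets the new Transfer Family server keep the existing Elastic IP and security group, so customers connect exactly as before. A sketch of the `create_server` kwargs in the shape boto3's Transfer Family client accepts; all IDs are placeholders:

```python
# Sketch: kwargs for boto3's transfer.create_server matching the chosen
# design: an internet-facing, VPC-hosted SFTP endpoint that reuses the
# existing Elastic IP (via its allocation ID) and the customer-IP security
# group. All resource IDs are placeholders.

CREATE_SERVER_KWARGS = {
    "Protocols": ["SFTP"],
    "EndpointType": "VPC",
    "EndpointDetails": {
        "VpcId": "vpc-EXAMPLE",
        "SubnetIds": ["subnet-EXAMPLE"],
        "AddressAllocationIds": ["eipalloc-EXAMPLE"],  # the existing Elastic IP
        "SecurityGroupIds": ["sg-EXAMPLE"],            # customer IP allow list
    },
    "Domain": "S3",  # serve files from the S3 bucket
}
# boto3.client("transfer").create_server(**CREATE_SERVER_KWARGS)  # omitted here
```

Users and their SSH keys are then registered on the server (service-managed identities), preserving key-based authentication.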

NEW QUESTION 69
- (Exam Topic 1)
A company wants to migrate an application to Amazon EC2 from VMware Infrastructure that runs in an
on-premises data center. A solutions architect must preserve the software and configuration settings during the migration.
What should the solutions architect do to meet these requirements?

A. Configure the AWS DataSync agent to start replicating the data store to Amazon FSx for Windows File Server. Use the SMB share to host the VMware data store. Use VM Import/Export to move the VMs to Amazon EC2.
B. Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF) format. Create an Amazon S3 bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import command.
C. Configure the AWS Storage Gateway file service to export a Common Internet File System (CIFS) share. Create a backup copy to the shared folder. Sign in to the AWS Management Console and create an AMI from the backup copy. Launch an EC2 instance that is based on the AMI.
D. Create a managed-instance activation for a hybrid environment in AWS Systems Manager. Download and install Systems Manager Agent on the on-premises VM. Register the VM with Systems Manager to be a managed instance. Use AWS Backup to create a snapshot of the VM and create an AMI. Launch an EC2 instance that is based on the AMI.

Answer: B

Explanation:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
- Export an OVF Template
- Create / use an Amazon S3 bucket for storing the exported images. The bucket must be in the Region where you want to import your VMs.
- Create an IAM role named vmimport.
- You'll use AWS CLI to run the import commands. https://aws.amazon.com/premiumsupport/knowledge-center/import-instances/
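The steps above reduce to a single API call once the image is in Amazon S3. A minimal boto3 sketch follows, assuming the vmimport service role already exists; the bucket, key, and description values are hypothetical, and the EC2 client is passed in so the call shape can be checked without AWS credentials.

```python
def start_vm_import(ec2_client, bucket, key, description="On-premises app server"):
    """Start an EC2 image import task for a VMDK disk image stored in S3.

    Assumes the exported image was already uploaded to S3 and that the
    "vmimport" service role exists. Bucket/key names are illustrative.
    """
    response = ec2_client.import_image(
        Description=description,
        DiskContainers=[
            {
                "Format": "VMDK",
                "UserBucket": {"S3Bucket": bucket, "S3Key": key},
            }
        ],
    )
    # Poll this task ID with describe_import_image_tasks until the task
    # completes and produces an AMI.
    return response["ImportTaskId"]

# Real use (requires AWS credentials):
#   import boto3
#   start_vm_import(boto3.client("ec2"), "my-import-bucket", "vms/app-disk1.vmdk")
```

Injecting the client keeps the request-building logic testable; in production the same function is simply handed `boto3.client("ec2")`.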

NEW QUESTION 73
- (Exam Topic 1)
A solutions architect is building a web application that uses an Amazon RDS for PostgreSQL DB instance The DB instance is expected to receive many more
reads than writes The solutions architect needs to ensure that the large amount of read traffic can be accommodated and that the DB instance is highly available.
Which steps should the solutions architect take to meet these requirements? (Select THREE.)

A. Create multiple read replicas and put them into an Auto Scaling group
B. Create multiple read replicas in different Availability Zones.
C. Create an Amazon Route 53 hosted zone and a record set for each read replica with a TTL and a weighted routing policy
D. Create an Application Load Balancer (ALB) and put the read replicas behind the ALB.
E. Configure an Amazon CloudWatch alarm to detect a failed read replica. Set the alarm to directly invoke an AWS Lambda function to delete its Route 53 record set.
F. Configure an Amazon Route 53 health check for each read replica using its endpoint.

Answer: BCF

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
You can use Amazon Route 53 weighted record sets to distribute requests across your read replicas. Within a Route 53 hosted zone, create individual record sets
for each DNS endpoint associated with your read replicas and give them the same weight. Then, direct requests to the endpoint of the record set. You can
incorporate Route 53 health checks to be sure that Route 53 directs traffic away from unavailable read replicas
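The behavior described above can be simulated in a few lines: weighted selection across record sets, skipping any replica whose health check fails. Endpoint names below are illustrative.

```python
import random

# Toy simulation of Route 53 weighted routing with health checks: each read
# replica gets a weighted record, and only records whose health check passes
# receive traffic. Endpoint names are illustrative.

def pick_endpoint(records, is_healthy, rng):
    """records: list of (endpoint, weight) pairs. Returns one healthy
    endpoint, chosen with probability proportional to its weight."""
    healthy = [(ep, w) for ep, w in records if is_healthy(ep)]
    if not healthy:
        raise RuntimeError("no healthy read replicas")
    total = sum(w for _, w in healthy)
    roll = rng.uniform(0, total)
    upto = 0.0
    for ep, w in healthy:
        upto += w
        if roll <= upto:
            return ep
    return healthy[-1][0]

records = [
    ("replica-a.example.internal", 10),
    ("replica-b.example.internal", 10),
    ("replica-c.example.internal", 10),
]

# replica-c fails its health check, so traffic is routed away from it.
unhealthy = {"replica-c.example.internal"}
choice = pick_endpoint(records, lambda ep: ep not in unhealthy, random.Random(0))
```

With equal weights, healthy replicas share traffic evenly; an unhealthy replica simply stops being selected, which is exactly the failover behavior the health-check answer (F) provides.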

NEW QUESTION 78
- (Exam Topic 1)
A company is moving a business-critical multi-tier application to AWS. The architecture consists of a desktop client application and server infrastructure. The server
infrastructure resides in an on-premises data center that frequently fails to maintain the application uptime SLA of 99.95%. A solutions architect must re-architect
the application to ensure that it can meet or exceed the SLA.
The application contains a PostgreSQL database running on a single virtual machine. The business logic and presentation layers are load balanced between
multiple virtual machines. Remote users complain about slow load times while using this latency-sensitive application.
Which of the following will meet the availability requirements with little change to the application while improving user experience and minimizing costs?

A. Migrate the database to a PostgreSQL database in Amazon EC2. Host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces WorkSpace for each end user to improve the user experience.
B. Migrate the database to an Amazon RDS Aurora PostgreSQL configuration. Host the application and presentation layers in an Auto Scaling configuration on Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.
C. Migrate the database to an Amazon RDS PostgreSQL Multi-AZ configuration. Host the application and presentation layers in automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.
D. Migrate the database to an Amazon Redshift cluster with at least two nodes. Combine and host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.


Answer: B

Explanation:
Aurora improves availability by replicating data across multiple Availability Zones (six copies across three AZs). Auto Scaling behind an ALB improves performance and resilience. Amazon AppStream 2.0 streams the desktop client application to remote users, much as Citrix delivers hosted apps, which addresses the latency complaints.

NEW QUESTION 82
- (Exam Topic 1)
A company requires that all internal application connectivity use private IP addresses. To facilitate this policy, a solutions architect has created interface endpoints
to connect to AWS public services. Upon testing, the solutions architect notices that the service names are resolving to public IP addresses, and that internal
services cannot connect to the interface endpoints.
Which step should the solutions architect take to resolve this issue?

A. Update the subnet route table with a route to the interface endpoint.
B. Enable the private DNS option on the VPC attributes.
C. Configure the security group on the interface endpoint to allow connectivity to the AWS services.
D. Configure an Amazon Route 53 private hosted zone with a conditional forwarder for the internal application.

Answer: B

Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html

NEW QUESTION 83
- (Exam Topic 1)
A company runs a popular public-facing ecommerce website. Its user base is growing quickly from a local
market to a national market. The website is hosted in an on-premises data center with web servers and a MySQL database. The company wants to migrate its workload to AWS. A solutions architect needs to create a solution to:
• Improve security
• Improve reliability
• Improve availability
• Reduce latency
• Reduce maintenance
Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

A. Use Amazon EC2 instances in two Availability Zones for the web servers in an Auto Scaling group behind an Application Load Balancer.
B. Migrate the database to a Multi-AZ Amazon Aurora MySQL DB cluster.
C. Use Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.
D. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to improve website security.
E. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security.
F. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.

Answer: ABE

NEW QUESTION 85
- (Exam Topic 1)
A company is using AWS CodePipeline for the CI/CD of an application to an Amazon EC2 Auto Scaling group. All AWS resources are defined in AWS CloudFormation templates. The application artifacts are stored in an Amazon S3 bucket and deployed to the Auto Scaling group using instance user data scripts. As the application has become more complex, recent resource changes in the CloudFormation templates have caused unplanned downtime.
How should a solutions architect improve the CI/CD pipeline to reduce the likelihood that changes in the templates will cause downtime?

A. Adapt the deployment scripts to detect and report CloudFormation error conditions when performing deployments. Write test plans for a testing team to execute in a non-production environment before approving the change for production.
B. Implement automated testing using AWS CodeBuild in a test environment. Use CloudFormation change sets to evaluate changes before deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns to allow evaluations and the ability to revert changes, if needed.
C. Use plugins for the integrated development environment (IDE) to check the templates for errors, and use the AWS CLI to validate that the templates are correct. Adapt the deployment code to check for error conditions and generate notifications on errors. Deploy to a test environment and execute a manual test plan before approving the change for production.
D. Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the user data deployment scripts. Have the operators log in to running instances and go through a manual test plan to verify the application is running as expected.

Answer: B

Explanation:
https://aws.amazon.com/blogs/devops/performing-bluegreen-deployments-with-aws-codedeploy-and-auto-scalin
When adopting infrastructure as code, the infrastructure code itself needs automated testing, along with the ability to revert to the previous state if a change does not perform correctly.

NEW QUESTION 86
- (Exam Topic 1)
A multimedia company needs to deliver its video-on-demand (VOD) content to its subscribers in a
cost-effective way. The video files range in size from 1-15 GB and are typically viewed frequently for the first 6 months after creation, and then access decreases considerably. The company requires all video files to remain immediately available for subscribers. There are now roughly 30,000 files, and the company
anticipates doubling that number over time.
What is the MOST cost-effective solution for delivering the company's VOD content?

A. Store the video files in an Amazon S3 bucket using S3 Intelligent-Tiering. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin.
B. Use AWS Elemental MediaConvert and store the adaptive bitrate video files in Amazon S3. Configure an AWS Elemental MediaPackage endpoint to deliver the content from Amazon S3.
C. Store the video files in Amazon Elastic File System (Amazon EFS) Standard. Enable EFS lifecycle management to move the video files to EFS Infrequent Access after 6 months. Create an Amazon EC2 Auto Scaling group behind an Elastic Load Balancer to deliver the content from Amazon EFS.
D. Store the video files in Amazon S3 Standard. Create S3 Lifecycle rules to move the video files to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months and to S3 Glacier Deep Archive after 1 year. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin.

Answer: A

Explanation:
https://d1.awsstatic.com/whitepapers/amazon-cloudfront-for-media.pdf https://aws.amazon.com/solutions/implementations/video-on-demand-on-aws/
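A rough cost sketch shows why Intelligent-Tiering fits this access pattern. The per-GB monthly prices and the 80% cold fraction below are illustrative assumptions, not current AWS list prices; the point is that rarely viewed objects get cheaper automatically while every object stays immediately available, unlike the Glacier Deep Archive lifecycle option.

```python
# Back-of-the-envelope S3 Intelligent-Tiering comparison for the VOD library.
# All prices are illustrative placeholders, not current AWS list prices.

FREQUENT_GB_MONTH = 0.023     # frequent access tier price per GB-month (assumed)
INFREQUENT_GB_MONTH = 0.0125  # infrequent access tier price per GB-month (assumed)

files = 30_000
avg_gb = 8                    # files range from 1-15 GB; midpoint assumed
total_gb = files * avg_gb     # 240,000 GB today; the company expects this to double

# Assume ~80% of the library is older than 6 months and rarely viewed, so
# Intelligent-Tiering has automatically moved it to the infrequent tier.
cold_fraction = 0.8
tiered_cost = total_gb * ((1 - cold_fraction) * FREQUENT_GB_MONTH
                          + cold_fraction * INFREQUENT_GB_MONTH)
flat_cost = total_gb * FREQUENT_GB_MONTH  # everything left in the frequent tier
savings = flat_cost - tiered_cost
```

Under these assumed numbers the tiered library costs roughly two-thirds of keeping everything hot, and the gap grows as the library doubles, with no lifecycle rules to maintain.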

NEW QUESTION 88
- (Exam Topic 1)
A team collects and routes behavioral data for an entire company. The company runs a Multi-AZ VPC environment with public subnets, private subnets, and an internet gateway. Each public subnet also contains a NAT gateway. Most of the company's applications read from and write to Amazon Kinesis Data Streams. Most of the workloads are in private subnets.
A solutions architect must review the infrastructure. The solutions architect needs to reduce costs and maintain the function of the applications. The solutions architect uses Cost Explorer and notices that the cost in the EC2-Other category is consistently high. A further review shows that NatGateway-Bytes charges are increasing the cost in the EC2-Other category.
What should the solutions architect do to meet these requirements?

A. Enable VPC Flow Logs. Use Amazon Athena to analyze the logs for traffic that can be removed. Ensure that security groups are blocking traffic that is responsible for high costs.
B. Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that applications have the correct IAM permissions to use the interface VPC endpoint.
C. Enable VPC Flow Logs and Amazon Detective. Review Detective findings for traffic that is not related to Kinesis Data Streams. Configure security groups to block that traffic.
D. Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that the VPC endpoint policy allows traffic from the applications.

Answer: D

Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-reduce-nat-gateway-transfer-costs/
VPC endpoint policies enable you to control access by either attaching a policy to a VPC endpoint or by using additional fields in a policy that is attached to an IAM
user, group, or role to restrict access to only occur via the specified VPC endpoint
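The savings behind this answer can be approximated with simple arithmetic. The per-GB rates and traffic volume below are illustrative placeholders, not current AWS list prices, and interface endpoints also carry a small hourly charge per AZ that is omitted here.

```python
# Rough monthly data-processing comparison for Kinesis traffic from private
# subnets: NAT gateway path vs. interface VPC endpoint path. Rates and the
# traffic volume are illustrative placeholders, not current AWS list prices.

NAT_PER_GB = 0.045       # NAT gateway data processing per GB (assumed)
ENDPOINT_PER_GB = 0.01   # interface endpoint data processing per GB (assumed)

monthly_gb = 50_000      # hypothetical monthly Kinesis traffic volume

nat_cost = monthly_gb * NAT_PER_GB            # shows up under EC2-Other as NatGateway-Bytes
endpoint_cost = monthly_gb * ENDPOINT_PER_GB  # per-GB charge via the endpoint
```

At Kinesis-scale throughput the endpoint path is typically several times cheaper, which is exactly the NatGateway-Bytes reduction the scenario is after; the break-even point depends on actual traffic volume and Region pricing.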

NEW QUESTION 90
- (Exam Topic 1)
A developer reports receiving an Error 403: Access Denied message when they try to download an object from an Amazon S3 bucket. The S3 bucket is accessed using an S3 endpoint inside a VPC and is encrypted with an AWS KMS key. A solutions architect has verified that the developer is assuming the correct IAM role in the account that allows the object to be downloaded. The S3 bucket policy and the NACL are also valid.
Which additional step should the solutions architect take to troubleshoot this issue?

A. Ensure that blocking all public access has not been enabled in the S3 bucket.
B. Verify that the IAM role has permission to decrypt the referenced KMS key.
C. Verify that the IAM role has the correct trust relationship configured.
D. Check that local firewall rules are not preventing access to the S3 endpoint.

Answer: B
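For an object encrypted with an AWS KMS key, s3:GetObject alone is not sufficient; the role also needs kms:Decrypt on that key, otherwise S3 returns a 403. A minimal policy fragment illustrating the missing permission follows, with a hypothetical key ARN and a deliberately crude checker.

```python
# IAM policy fragment the developer's role needs in addition to s3:GetObject,
# because downloading an SSE-KMS object requires kms:Decrypt on the key.
# The key ARN below is hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDecryptForS3Download",
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        }
    ],
}

def allows(policy_doc, action):
    """Crude check: does any Allow statement list this exact action?
    (Real IAM evaluation also handles wildcards, Deny, conditions, etc.)"""
    for stmt in policy_doc["Statement"]:
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        if stmt["Effect"] == "Allow" and action in actions:
            return True
    return False
```

The same check applies when the bucket uses a customer managed key in another account: the key policy must also grant the role access.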

NEW QUESTION 94
- (Exam Topic 1)
A company has an internal application running on AWS that is used to track and process shipments in the company's warehouse. Currently, after the system
receives an order, it emails the staff the information needed to ship a package. Once the package is shipped, the staff replies to the email and the order is marked
as shipped.
The company wants to stop using email in the application and move to a serverless application model. Which architecture solution meets these requirements?

A. Use AWS Batch to configure the different tasks required to ship a package. Have AWS Batch trigger an AWS Lambda function that creates and prints a shipping label. Once that label is scanned, as it leaves the warehouse, have another Lambda function move the process to the next step in the AWS Batch job.
B. When a new order is created, store the order information in Amazon SQS. Have AWS Lambda check the queue every 5 minutes and process any needed work. When an order needs to be shipped, have Lambda print the label in the warehouse. Once the label has been scanned, as it leaves the warehouse, have an Amazon EC2 instance update Amazon SQS.
C. Update the application to store new order information in Amazon DynamoDB. When a new order is created, trigger an AWS Step Functions workflow, mark the orders as "in progress," and print a package label to the warehouse. Once the label has been scanned and fulfilled, the application will trigger an AWS Lambda function that will mark the order as shipped and complete the workflow.
D. Store new order information in Amazon EFS. Have instances pull the new information from the NFS share and send that information to printers in the warehouse. Once the label has been scanned, as it leaves the warehouse, have Amazon API Gateway call the instances to remove the order information from Amazon EFS.


Answer: C
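The Step Functions design in answer C can be sketched as a minimal Amazon States Language definition, held here as a Python dict. The state names, Lambda ARNs, and the task-token wait are illustrative assumptions about how the workflow might be wired.

```python
import json

# Minimal Amazon States Language sketch for the order-shipping workflow.
# All ARNs, function names, and state names are hypothetical.
state_machine = {
    "Comment": "Track an order from creation to shipment",
    "StartAt": "MarkInProgress",
    "States": {
        "MarkInProgress": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:mark-in-progress",
            "Next": "PrintLabel",
        },
        "PrintLabel": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:print-label",
            "Next": "WaitForScan",
        },
        "WaitForScan": {
            # Pauses with a task token until the label scan at the warehouse
            # door reports back via SendTaskSuccess.
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "await-scan",
                "Payload": {"taskToken.$": "$$.Task.Token"},
            },
            "Next": "MarkShipped",
        },
        "MarkShipped": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:mark-shipped",
            "End": True,
        },
    },
}

definition = json.dumps(state_machine)  # what CreateStateMachine would receive
```

The task-token pause replaces the old email round-trip: the workflow simply waits until the scan event reports success, then the final Lambda function marks the order as shipped.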

NEW QUESTION 97
- (Exam Topic 1)
A company uses AWS Transit Gateway for a hub-and-spoke model to manage network traffic between many VPCs. The company is developing a new service that
must be able to send data at 100 Gbps. The company needs a faster connection to other VPCs in the same AWS Region.
Which solution will meet these requirements?

A. Establish VPC peering between the necessary VPCs. Ensure that all route tables are updated as required.
B. Attach an additional transit gateway to the VPCs. Update the route tables accordingly.
C. Create AWS Site-to-Site VPN connections that use equal-cost multi-path (ECMP) routing between the necessary VPCs.
D. Create an additional attachment from the necessary VPCs to the existing transit gateway.

Answer: A

NEW QUESTION 102


- (Exam Topic 1)
A company is migrating applications from on premises to the AWS Cloud. These applications power the company's internal web forms. These web forms collect
data for specific events several times each quarter. The web forms use simple SQL statements to save the data to a local relational database.
Data collection occurs for each event, and the on-premises servers are idle most of the time. The company needs to minimize the amount of idle infrastructure that
supports the web forms.
Which solution will meet these requirements?

A. Use Amazon EC2 Image Builder to create AMIs for the legacy servers. Use the AMIs to provision EC2 instances to recreate the applications in the AWS Cloud. Place an Application Load Balancer (ALB) in front of the EC2 instances. Use Amazon Route 53 to point the DNS names of the web forms to the ALB.
B. Create one Amazon DynamoDB table to store data for all the data inputs. Use the application form name as the table key to distinguish data items. Create an Amazon Kinesis data stream to receive the data input and store the input in DynamoDB. Use Amazon Route 53 to point the DNS names of the web forms to the Kinesis data stream's endpoint.
C. Create Docker images for each server of the legacy web form applications. Create an Amazon Elastic Container Service (Amazon ECS) cluster on AWS Fargate. Place an Application Load Balancer in front of the ECS cluster. Use Fargate task storage to store the web form data.
D. Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding API Gateway endpoint.

Answer: D

Explanation:
Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web forms data storage. Use Amazon API Gateway and an AWS Lambda
function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding API Gateway endpoint.

NEW QUESTION 104


- (Exam Topic 1)
An ecommerce website running on AWS uses an Amazon RDS for MySQL DB instance with General Purpose SSD storage. The developers chose an appropriate
instance type based on demand, and configured 100 GB of storage with a sufficient amount of free space.
The website was running smoothly for a few weeks until a marketing campaign launched. On the second day of the campaign, users reported long wait times and timeouts. Amazon CloudWatch metrics indicated that both reads and writes to the DB instance were experiencing long response times. The CloudWatch metrics
show 40% to 50% CPU and memory utilization, and sufficient free storage space is still available. The application server logs show no evidence of database
connectivity issues.
What could be the root cause of the issue with the marketing campaign?

A. It exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
B. It caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
C. It exhausted the maximum number of allowed connections to the database instance.
D. It exhausted the network bandwidth available to the RDS for MySQL DB instance.

Answer: A

Explanation:
"When using General Purpose SSD storage, your DB instance receives an initial I/O credit balance of 5.4 million I/O credits. This initial credit balance is enough to
sustain a burst performance of 3,000 IOPS for 30 minutes."
https://aws.amazon.com/blogs/database/how-to-use-cloudwatch-metrics-to-decide-between-general-purpose-or
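The arithmetic behind this root cause, using the gp2 figures quoted above (a baseline of 3 IOPS per GB and an initial balance of 5.4 million I/O credits):

```python
# gp2 burst math for the 100 GB volume in this scenario.
volume_gb = 100
baseline_iops = 3 * volume_gb        # gp2 baseline: 3 IOPS per GB -> 300 IOPS
burst_iops = 3000
initial_credits = 5_400_000          # initial I/O credit balance

# Simplest view (ignoring credit accrual): how long the initial balance
# sustains a full 3,000 IOPS burst.
burst_seconds_simple = initial_credits / burst_iops   # 1800 s = 30 minutes

# Credits also accrue at the baseline rate while bursting, so the balance
# actually drains at (burst - baseline) IOPS, stretching the burst slightly.
burst_seconds_net = initial_credits / (burst_iops - baseline_iops)  # 2000 s
```

Once the balance is exhausted the volume falls back to its 300 IOPS baseline, which matches the symptoms: slow reads and writes with only moderate CPU and memory utilization and plenty of free space.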

NEW QUESTION 108


- (Exam Topic 2)
A company hosts a blog post application on AWS using Amazon API Gateway, Amazon DynamoDB, and AWS Lambda. The application currently does not use API keys to authorize requests. The API model is as follows:
GET /posts/{postId}: to get post details
GET /users/{userId}: to get user details
GET /comments/{commentId}: to get comment details
The company has noticed users are actively discussing topics in the comments section, and the company wants to increase user engagement by making the
comments appear in real time
Which design should be used to reduce comment latency and improve user experience?


A. Use edge-optimized API with Amazon CloudFront to cache API responses.


B. Modify the blog application code to request GET /comments/{commentId} every 10 seconds
C. Use AWS AppSync and leverage WebSockets to deliver comments
D. Change the concurrency limit of the Lambda functions to lower the API response time.

Answer: C

NEW QUESTION 112


- (Exam Topic 2)
A company has built a high performance computing (HPC) cluster in AWS for a tightly coupled workload that generates a large number of shared files stored in
Amazon EFS. The cluster was performing well when the number of Amazon EC2 instances in the cluster was 100. However, when the company increased the
cluster size to 1,000 EC2 instances, overall performance was well below expectations
Which collection of design choices should a solutions architect make to achieve the maximum performance from the HPC cluster? (Select THREE.)

A. Ensure the HPC cluster is launched within a single Availability Zone.


B. Launch the EC2 instances and attach elastic network interfaces in multiples of four.
C. Select EC2 instance types with an Elastic Fabric Adapter (EFA) enabled
D. Ensure the cluster is launched across multiple Availability Zones.
E. Replace Amazon EFS with multiple Amazon EBS volumes in a RAID array.
F. Replace Amazon EFS with Amazon FSx for Lustre.

Answer: ACF

NEW QUESTION 113


- (Exam Topic 2)
A company is migrating an on-premises application and a MySQL database to AWS. The application processes highly sensitive data, and new data is constantly
updated in the database. The data must not be transferred over the internet. The company also must encrypt the data in transit and at rest.
The database is 5 TB in size. The company already has created the database schema in an Amazon RDS for MySQL DB instance The company has set up a 1
Gbps AWS Direct Connect connection to AWS. The company also has set up a public VIF and a private VIF. A solutions architect needs to design a solution that
will migrate the data to AWS with the least possible downtime
Which solution will meet these requirements?

A. Perform a database backup. Copy the backup files to an AWS Snowball Edge Storage Optimized device. Import the backup to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
B. Use AWS Database Migration Service (AWS DMS) to migrate the data to AWS. Create a DMS replication instance in a private subnet. Create VPC endpoints for AWS DMS. Configure a DMS task to copy data from the on-premises database to the DB instance by using full load plus change data capture (CDC). Use the AWS Key Management Service (AWS KMS) default key for encryption at rest. Use TLS for encryption in transit.
C. Perform a database backup. Use AWS DataSync to transfer the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
D. Use Amazon S3 File Gateway. Set up a private connection to Amazon S3 by using AWS PrivateLink. Perform a database backup. Copy the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.

Answer: B

NEW QUESTION 118


- (Exam Topic 2)
A company uses multiple AWS accounts in a single AWS Region. A solutions architect is designing a solution to consolidate logs generated by Elastic Load Balancers (ELBs) in the AppDev, AppTest, and AppProd accounts. The logs should be stored in an existing Amazon S3 bucket named s3-elb-logs in the central AWS account. The central account is used for log consolidation only and does not have ELBs deployed. ELB logs must be encrypted at rest.
Which combination of steps should the solutions architect take to build the solution? (Select TWO.)

A. Update the S3 bucket policy for the s3-elb-logs bucket to allow the s3:PutBucketLogging action for the central AWS account ID.
B. Update the S3 bucket policy for the s3-elb-logs bucket to allow the s3:PutObject and s3:DeleteObject actions for the AppDev, AppTest, and AppProd account IDs.
C. Update the S3 bucket policy for the s3-elb-logs bucket to allow the s3:PutObject action for the AppDev, AppTest, and AppProd account IDs.
D. Enable access logging for the ELBs. Set the S3 location to the s3-elb-logs bucket.
E. Enable Amazon S3 default encryption using server-side encryption with S3 managed encryption keys (SSE-S3) for the s3-elb-logs S3 bucket.

Answer: AE

NEW QUESTION 123


- (Exam Topic 2)
A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition
software for categorization.
The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto
Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue The company wants to
re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software.
Which solution meets these requirements?

A. Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.

Answer: D
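In the chosen design, the queue processor's core job is to take an SQS message that names an uploaded video and start an Amazon Rekognition video-analysis job in place of the custom software. A sketch under assumptions: the message body format is hypothetical, and the Rekognition client is injected so the logic can be exercised without AWS access.

```python
import json

def handle_message(body, rekognition_client):
    """Process one SQS message like '{"bucket": "...", "key": "..."}' by
    starting a Rekognition label-detection job on the named video.
    Returns the Rekognition job ID. The message schema is an assumption."""
    msg = json.loads(body)
    response = rekognition_client.start_label_detection(
        Video={"S3Object": {"Bucket": msg["bucket"], "Name": msg["key"]}},
        MinConfidence=80,
    )
    return response["JobId"]

# Real use (requires AWS credentials):
#   import boto3
#   handle_message(message_body, boto3.client("rekognition"))
```

Rekognition video jobs are asynchronous: the returned job ID is later matched against a completion notification before the categorization results are read back.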

NEW QUESTION 125


- (Exam Topic 2)
A company has deployed an application to multiple environments in AWS, including production and testing. The company has separate accounts for production and testing, and users are allowed to create additional application users for team members or services, as needed. The security team has asked the operations team for better isolation between production and testing, with centralized controls on security credentials and improved management of permissions between environments.
Which of the following options would MOST securely accomplish this goal?

A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
B. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the operations team. Set a strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services required to complete their job function.
C. Create a script that runs on each account that checks user accounts for adherence to a security policy. Disable any user or service accounts that do not comply.
D. Create all user accounts in the production account. Create roles for access in the production account and testing account. Grant cross-account access from the production account to the testing account.
Answer: A

NEW QUESTION 128


- (Exam Topic 2)
A retail company is running an application that stores invoice files in an Amazon S3 bucket and metadata about the files in an Amazon DynamoDB table. The
application software runs in both us-east-1 and eu-west-1. The S3 bucket and DynamoDB table are in us-east-1. The company wants to protect itself from data corruption and loss of connectivity to either Region.
Which option meets these requirements?

A. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Enable
versioning on the S3 bucket
B. Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-east-1 to eu-west-1. Set up MFA delete on the S3 bucket in us-east-1.
C. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket.
D. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Set up S3 cross-region replication from us-east-1 to eu-west-1.

Answer: D

NEW QUESTION 133


- (Exam Topic 2)
A company is using a lift-and-shift strategy to migrate applications from several on-premises Windows servers to AWS. The Windows servers will be hosted on
Amazon EC2 instances in the us-east-1 Region.
The company's security policy allows the installation of migration tools on servers. The migration data must be encrypted in transit and encrypted at rest. The
applications are business critical. The company needs to minimize the cutover window and minimize the downtime that results from the migration. The company
wants to use Amazon CloudWatch and AWS CloudTrail for monitoring.
Which solution will meet these requirements?

A. Use AWS Application Migration Service (CloudEndure Migration) to migrate the Windows servers to AWS. Create a Replication Settings template. Install the AWS Replication Agent on the source servers.
B. Use AWS DataSync to migrate the Windows servers to AWS. Install the DataSync agent on the source servers. Configure a blueprint for the target servers. Begin the replication process.
C. Use AWS Server Migration Service (AWS SMS) to migrate the Windows servers to AWS. Install the SMS Connector on the source servers. Replicate the source servers to AWS. Convert the replicated volumes to AMIs to launch EC2 instances.
D. Use AWS Migration Hub to migrate the Windows servers to AWS. Create a project in Migration Hub. Track the progress of server migration by using the built-in dashboard.

Answer: A

NEW QUESTION 134


- (Exam Topic 2)
A company is migrating its data center from on premises to the AWS Cloud. The migration will take several months to complete. The company will use Amazon Route 53 for private DNS zones.
During the migration, the company must keep its AWS services pointed at the VPC's Route 53 Resolver for DNS. The company also must maintain the ability to resolve addresses from its on-premises DNS server. A solutions architect must set up DNS so that Amazon EC2 instances can use native Route 53 endpoints to resolve on-premises DNS queries.
Which configuration will meet these requirements?


A. Configure the VPC DHCP options set to point to on-premises DNS server IP addresses. Ensure that security groups for EC2 instances allow outbound access to port 53 on those DNS server IP addresses.
B. Launch an EC2 instance that has DNS BIND installed and configured. Ensure that the security groups that are attached to the EC2 instance can access the on-premises DNS server IP address on port 53. Configure BIND to forward DNS queries to on-premises DNS server IP addresses. Configure each migrated EC2 instance's DNS settings to point to the BIND server IP address.
C. Create a new outbound endpoint in Route 53 Resolver, and attach the endpoint to the VPC. Ensure that the security groups that are attached to the endpoint can access the on-premises DNS server IP address on port 53. Create a new Route 53 Resolver rule that routes on-premises designated traffic to the on-premises DNS server.
D. Create a new private DNS zone in Route 53 with the same domain name as the on-premises domain. Create a single wildcard record with the on-premises DNS server IP address as the record's address.

Answer: A

NEW QUESTION 137


- (Exam Topic 2)
A software development company has multiple engineers who are working remotely. The company is running Active Directory Domain Services (AD DS) on an
Amazon EC2 instance. The company's security policy states that all internal, nonpublic services that are deployed in a VPC must be accessible through a VPN. Multi-factor authentication (MFA) must be used for access to a VPN.
What should a solutions architect do to meet these requirements?

A. Create an AWS Site-to-Site VPN connection. Configure integration between the VPN and AD DS. Use an Amazon WorkSpaces client with MFA support enabled to establish a VPN connection.
B. Create an AWS Client VPN endpoint. Create an AD Connector directory for integration with AD DS. Enable MFA for AD Connector. Use AWS Client VPN to establish a VPN connection.
C. Create multiple AWS Site-to-Site VPN connections by using AWS VPN CloudHub. Configure integration between AWS VPN CloudHub and AD DS. Use AWS Copilot to establish a VPN connection.
D. Create an Amazon WorkLink endpoint. Configure integration between Amazon WorkLink and AD DS. Enable MFA in Amazon WorkLink. Use AWS Client VPN to establish a VPN connection.

Answer: B

NEW QUESTION 138


- (Exam Topic 2)
A company is migrating its infrastructure to the AWS Cloud. The company must comply with a variety of regulatory standards for different projects. The company needs a multi-account environment.
A solutions architect needs to prepare the baseline infrastructure. The solution must provide a consistent baseline of management and security, but it must allow flexibility for different compliance requirements within various AWS accounts. The solution also needs to integrate with the existing on-premises Active Directory Federation Services (AD FS) server.
Which solution meets these requirements with the LEAST amount of operational overhead?

A. Create an organization in AWS Organizations. Create a single SCP for least privilege access across all accounts. Create a single OU for all accounts. Configure an IAM identity provider for federation with the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with conformance packs for all accounts.
B. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included guardrails for SCPs. Check AWS Config for areas that require additions. Add OUs as necessary. Connect AWS Single Sign-On to the on-premises AD FS server.
C. Create an organization in AWS Organizations. Create SCPs for least privilege access. Create an OU structure, and use it to group AWS accounts. Connect AWS Single Sign-On to the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with aggregators and conformance packs.
D. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included guardrails for SCPs. Check AWS Config for areas that require additions. Configure an IAM identity provider for federation with the on-premises AD FS server.

Answer: A

NEW QUESTION 143


- (Exam Topic 2)
A company is processing videos in the AWS Cloud by using Amazon EC2 instances in an Auto Scaling group. It takes 30 minutes to process a video. Several EC2
instances scale in and out depending on the number of videos in an Amazon Simple Queue Service (Amazon SQS) queue.
The company has configured the SQS queue with a redrive policy that specifies a target dead-letter queue and a maxReceiveCount of 1. The company has set the
visibility timeout for the SQS queue to 1 hour. The company has set up an Amazon CloudWatch alarm to notify the development team when there are messages in
the dead-letter queue.
Several times during the day, the development team receives notification that messages are in the dead-letter queue and that videos have not been processed
properly. An investigation finds no errors in the application logs.
How can the company solve this problem?

A. Turn on termination protection for the EC2 instances.


B. Update the visibility timeout for the SQS queue to 3 hours.
C. Configure scale-in protection for the instances during processing.
D. Update the redrive policy and set maxReceiveCount to 0.

Answer: A
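For intuition, the sketch below is plain Python (not the AWS SDK) modeling the redrive behavior described in the question: with a maxReceiveCount of 1, any second receive of the same message sends it to the dead-letter queue. That second receive is exactly what happens when an instance is terminated mid-job and the message becomes visible again after the timeout.

```python
# Pure-Python sketch of SQS redrive routing; no AWS calls are made.
MAX_RECEIVE_COUNT = 1

def route_message(receive_count: int) -> str:
    """Return where SQS delivers a message given its receive count."""
    if receive_count > MAX_RECEIVE_COUNT:
        return "dead-letter-queue"
    return "main-queue"

# First delivery: receive count is 1 -> stays on the main queue.
assert route_message(1) == "main-queue"

# An instance is terminated mid-processing; the message reappears after
# the visibility timeout and is received a second time -> count 2
# exceeds maxReceiveCount 1, so it lands in the dead-letter queue.
assert route_message(2) == "dead-letter-queue"
```

Preventing the instance from being taken away while a 30-minute job is in flight removes that second receive, so healthy messages never reach the dead-letter queue.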

NEW QUESTION 146


- (Exam Topic 2)
A company needs to create a centralized logging architecture for all of its AWS accounts. The architecture should provide near-real-time data analysis for all AWS CloudTrail logs and VPC Flow Logs across all AWS accounts. The company plans to use Amazon Elasticsearch Service (Amazon ES) to perform log analyses in the logging account.


Which strategy should a solutions architect use to meet these requirements?

A. Configure CloudTrail and VPC Flow Logs in each AWS account to send data to a centralized Amazon S3 bucket in the logging account. Create an AWS Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
B. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Configure a CloudWatch subscription filter in each AWS account to send data to Amazon Kinesis Data Firehose in the logging account. Load data from Kinesis Data Firehose into Amazon ES in the logging account.
C. Configure CloudTrail and VPC Flow Logs to send data to a separate Amazon S3 bucket in each AWS account. Create an AWS Lambda function triggered by S3 events to copy the data to a centralized logging bucket. Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
D. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Create AWS Lambda functions in each AWS account to subscribe to the log groups and stream the data to an Amazon S3 bucket in the logging account. Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.

Answer: A

NEW QUESTION 150


- (Exam Topic 2)
A company operates quick-service restaurants. The restaurants follow a predictable model with high sales traffic for 4 hours daily. Sales traffic is lower outside of those peak hours.
The point of sale and management platform is deployed in the AWS Cloud and has a backend that is based on Amazon DynamoDB. The database table uses provisioned throughput mode with 100,000 RCUs and 80,000 WCUs to match known peak resource consumption.
The company wants to reduce its DynamoDB cost and minimize the operational overhead for the IT staff. Which solution meets these requirements MOST cost-
effectively?

A. Reduce the provisioned RCUs and WCUs


B. Change the DynamoDB table to use on-demand capacity
C. Enable DynamoDB auto scaling for the table.
D. Purchase 1-year reserved capacity that is sufficient to cover the peak load for 4 hours each day.

Answer: C
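The back-of-envelope arithmetic below shows why auto scaling helps when peak demand lasts only 4 hours a day. The per-unit-hour prices and the off-peak baseline capacity are hypothetical placeholders for illustration, not current AWS pricing.

```python
# Hypothetical placeholder prices -- NOT current AWS pricing.
RCU_PRICE_PER_HOUR = 0.00013   # assumed price per RCU-hour
WCU_PRICE_PER_HOUR = 0.00065   # assumed price per WCU-hour

def daily_cost(rcus: int, wcus: int, hours: float) -> float:
    """Cost of holding the given provisioned capacity for some hours."""
    return (rcus * RCU_PRICE_PER_HOUR + wcus * WCU_PRICE_PER_HOUR) * hours

# Static provisioning keeps peak capacity 24 hours a day.
static = daily_cost(100_000, 80_000, 24)

# Auto scaling holds peak capacity ~4 hours and an assumed 10% baseline
# for the remaining 20 hours.
scaled = daily_cost(100_000, 80_000, 4) + daily_cost(10_000, 8_000, 20)

assert scaled < static
print(f"static ${static:.2f}/day vs auto scaled ~${scaled:.2f}/day")
```

Whatever the real unit prices are, the ratio is what matters: paying for peak capacity 24/7 when it is needed ~4 hours a day leaves most of the provisioned throughput idle.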

NEW QUESTION 154


- (Exam Topic 2)
A company is running multiple workloads in the AWS Cloud. The company has separate units for software development. The company uses AWS Organizations and federation with SAML to give permissions to developers to manage resources in their AWS accounts. The development units each deploy their production workloads into a common production account.
Recently, an incident occurred in the production account in which members of a development unit terminated an EC2 instance that belonged to a different development unit. A solutions architect must create a solution that prevents a similar incident from happening in the future. The solution also must allow developers to manage the instances used for their workloads.
Which strategy will meet these requirements?

A. Create separate OUs in AWS Organizations for each development unit. Assign the created OUs to the company AWS accounts. Create separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit name. Assign the SCP to the corresponding OU.
B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM policy for the developers' assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit.
C. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Create an SCP with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Assign the SCP to the root OU.
D. Create separate IAM policies for each development unit. For every IAM policy, add an allow action and a StringEquals condition for the DevelopmentUnit resource tag and the development unit name. During SAML federation, use AWS Security Token Service (AWS STS) to assign the IAM policy and match the development unit name to the assumed IAM role.

Answer: A
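A minimal sketch of the SCP shape that option A describes, built as a plain policy document. The tag key (DevelopmentUnit), the guarded action (ec2:TerminateInstances), and the unit name are illustrative assumptions; a real SCP might cover more EC2 actions.

```python
import json

def build_scp(unit: str) -> dict:
    """Deny instance termination unless the instance's DevelopmentUnit
    resource tag matches the given development unit name."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "ec2:TerminateInstances",
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {
                        "ec2:ResourceTag/DevelopmentUnit": unit
                    }
                },
            }
        ],
    }

policy = build_scp("payments")
assert policy["Statement"][0]["Effect"] == "Deny"
print(json.dumps(policy, indent=2))
```

Because SCPs are deny-by-exception guardrails, attaching one such policy per OU lets each unit keep full control of its own tagged instances while blocking termination of anyone else's.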

NEW QUESTION 156


- (Exam Topic 2)
A company is planning to migrate an Amazon RDS for Oracle database to an RDS for PostgreSQL DB instance in another AWS account. A solutions architect needs to design a migration strategy that will require no downtime and that will minimize the amount of time necessary to complete the migration. The migration strategy must replicate all existing data and any new data that is created during the migration. The target database must be identical to the source database at completion of the migration process.
All applications currently use an Amazon Route 53 CNAME record as their endpoint for communication with the RDS for Oracle DB instance. The RDS for Oracle DB instance is in a private subnet.
Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

A. Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from the source database to the target database.
B. Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for PostgreSQL DB instance in the target account with the schema and initial data from the source database.
C. Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
D. Temporarily allow the source DB instance to be publicly accessible to provide connectivity from the VPC in the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
E. Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
F. Use AWS Database Migration Service (AWS DMS) in the target account to perform a change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.

Answer: BCD


NEW QUESTION 158


- (Exam Topic 2)
A company is running an application in the AWS Cloud. The company's security team must approve the creation of all new IAM users. When a new IAM user is created, all access for the user must be removed automatically. The security team must then receive a notification to approve the user. The company has a multi-Region AWS CloudTrail trail in the AWS account.
Which combination of steps will meet these requirements? (Select THREE.)

A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule. Define a pattern with the detail-type value set to AWS API Call via CloudTrail and an eventName of CreateUser.
B. Configure CloudTrail to send a notification for the CreateUser event to an Amazon Simple Notification Service (Amazon SNS) topic.
C. Invoke a container that runs in Amazon Elastic Container Service (Amazon ECS) with AWS Fargate technology to remove access.
D. Invoke an AWS Step Functions state machine to remove access.
E. Use Amazon Simple Notification Service (Amazon SNS) to notify the security team.
F. Use Amazon Pinpoint to notify the security team.

Answer: ADE
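The EventBridge rule described in option A matches CloudTrail-delivered IAM API calls. The sketch below builds that event pattern as a plain JSON document; the `source` value `aws.iam` and the `eventSource` value are the standard fields CloudTrail uses for IAM events, included here as reasonable assumptions.

```python
import json

# EventBridge event pattern matching the iam:CreateUser API call as
# recorded by CloudTrail.
event_pattern = {
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreateUser"],
    },
}

# EventBridge patterns list every matched value, even singletons.
assert event_pattern["detail"]["eventName"] == ["CreateUser"]
print(json.dumps(event_pattern, indent=2))
```

A rule with this pattern can then target the remediation workflow (for example, a Step Functions state machine) and a notification topic.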

NEW QUESTION 160


- (Exam Topic 2)
A gaming company created a game leaderboard by using a Multi-AZ deployment of an Amazon RDS database. The number of users is growing, and the queries to
get individual player rankings are getting slower over time. The company expects a surge in users for an upcoming version and wants to optimize the design for
scalability and performance.
Which solution will meet these requirements?

A. Migrate the database to Amazon DynamoDB.
B. Store the leaderboard data in a different table.
C. Use Apache HiveQL JOIN statements to build the leaderboard.
D. Keep the leaderboard data in the RDS DB instance.
E. Provision a Multi-AZ deployment of an Amazon ElastiCache for Redis cluster.
F. Stream the leaderboard data by using Amazon Kinesis Data Firehose with an Amazon S3 bucket as the destination.
G. Query the S3 bucket by using Amazon Athena for the leaderboard.
H. Add a read-only replica to the RDS DB instance.
I. Add an RDS Proxy database proxy.

Answer: C

NEW QUESTION 163


- (Exam Topic 2)
A company recently deployed a new application that runs on a group of Amazon EC2 Linux instances in a VPC. In a peered VPC, the company launched an EC2 Linux instance that serves as a bastion host. The security group of the application instances allows access only on TCP port 22 from the private IP of the bastion host. The security group of the bastion host allows access to TCP port 22 from 0.0.0.0/0 so that system administrators can use SSH to remotely log in to the application instances from several branch offices.
While looking through operating system logs on the bastion host, a cloud engineer notices thousands of failed SSH logins to the bastion host from locations around the world. The cloud engineer wants to change how remote access is granted to the application instances and wants to meet the following requirements:
• Eliminate brute-force SSH login attempts
• Retain a log of commands run during an SSH session
• Retain the ability to forward ports
Which solution meets these requirements for remote access to the application instances?

A. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to use Session Manager to establish a session with the application instances. Terminate the bastion host.
B. Update the security group of the bastion host to allow traffic from only the public IP addresses of the branch offices.
C. Configure an AWS Client VPN endpoint and provision each system administrator with a certificate to establish a VPN connection to the application VPC. Update the security group of the application instances to allow traffic from only the Client VPN IPv4 CIDR. Terminate the bastion host.
D. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to issue commands to the application instances by using Systems Manager Run Command. Terminate the bastion host.

Answer: A

Explanation:
"Session Manager removes the need to open inbound ports, manage SSH keys, or use bastion hosts" Ref: https://docs.aws.amazon.com/systems-
manager/latest/userguide/session-manager.html

NEW QUESTION 165


- (Exam Topic 2)
A company has an organization that has many AWS accounts in AWS Organizations. A solutions architect must improve how the company manages common security group rules for the AWS accounts in the organization.
The company has a common set of IP CIDR ranges in an allow list in each AWS account to allow access to and from the company's on-premises network. Developers within each account are responsible for adding new IP CIDR ranges to their security groups. The security team has its own AWS account. Currently, the security team notifies the owners of the other AWS accounts when changes are made to the allow list.
The solutions architect must design a solution that distributes the common set of CIDR ranges across all accounts.
Which solution meets these requirements with the LEAST amount of operational overhead?

A. Set up an Amazon Simple Notification Service (Amazon SNS) topic in the security team's AWS account. Deploy an AWS Lambda function in each AWS account. Configure the Lambda function to run every time the SNS topic receives a message. Configure the Lambda function to take an IP address as input and add it to a list of security groups in the account. Instruct the security team to distribute changes by publishing messages to its SNS topic.
B. Create new customer-managed prefix lists in each AWS account within the organization. Populate the prefix lists in each account with all internal CIDR ranges. Notify the owner of each AWS account to allow the new customer-managed prefix list IDs in their accounts in their security groups. Instruct the security team to share updates with each AWS account owner.
C. Create a new customer-managed prefix list in the security team's AWS account. Populate the customer-managed prefix list with all internal CIDR ranges. Share the customer-managed prefix list with the organization by using AWS Resource Access Manager. Notify the owner of each AWS account to allow the new customer-managed prefix list ID in their security groups.

Answer: A
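The per-account Lambda function in option A parses a CIDR from the SNS message and authorizes it in the account's security groups. The sketch below builds the ingress request it would submit; the actual boto3 call is stubbed out, and the security group ID, the all-traffic protocol choice, and the event shape are illustrative assumptions.

```python
# Sketch of the per-account Lambda handler; no AWS calls are made.
def build_ingress_request(group_id: str, cidr: str) -> dict:
    """Shape of an authorize-security-group-ingress request."""
    return {
        "GroupId": group_id,
        "IpPermissions": [
            {
                "IpProtocol": "-1",  # "-1" means all protocols/ports
                "IpRanges": [{"CidrIp": cidr}],
            }
        ],
    }

def handler(event: dict) -> dict:
    # SNS delivers the new CIDR range in the message body.
    cidr = event["Records"][0]["Sns"]["Message"]
    # A real function would call EC2 here; we just return the request.
    return build_ingress_request("sg-0123456789abcdef0", cidr)

req = handler({"Records": [{"Sns": {"Message": "10.20.0.0/16"}}]})
assert req["IpPermissions"][0]["IpRanges"][0]["CidrIp"] == "10.20.0.0/16"
```

Note the trade-off the question is probing: this works, but a RAM-shared customer-managed prefix list achieves the same distribution with no per-account code to maintain.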

NEW QUESTION 166


- (Exam Topic 2)
A car rental company has built a serverless REST API to provide data to its mobile app. The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently opened the API to mobile apps of partners. A significant increase in the number of requests resulted, causing sporadic database memory errors. Analysis of the API traffic indicates that clients are making multiple HTTP GET requests for the same queries in a short period of time. Traffic is concentrated during business hours, with spikes around holidays and other events.
The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution.
Which strategy meets these requirements?

A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint Enable caching in the production stage.
B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls Modify the Lambda functions to use the cache
C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory
D. Enable throttling in the API Gateway production stage Set the rate and burst values to limit the incoming calls

Answer: A
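The caching behavior that an API Gateway stage cache provides can be illustrated with a small TTL cache: identical GET queries within the TTL are served from the cache and never reach the database. This is plain Python for intuition only, not the API Gateway implementation; the cache key and TTL are illustrative.

```python
import time

class TTLCache:
    """Minimal time-to-live cache keyed by request string."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}   # key -> (expiry_time, value)
        self.misses = 0   # how many times we had to hit the backend

    def get(self, key, compute):
        now = time.monotonic()
        hit = self.store.get(key)
        if hit and hit[0] > now:
            return hit[1]          # served from cache, no database call
        self.misses += 1
        value = compute()           # expensive backend/database call
        self.store[key] = (now + self.ttl, value)
        return value

cache = TTLCache(ttl_seconds=300)
query = lambda: "expensive database result"
assert cache.get("GET /cars?q=suv", query) == "expensive database result"
assert cache.get("GET /cars?q=suv", query) == "expensive database result"
assert cache.misses == 1  # the repeated identical request never hit the DB
```

That one-line drop in backend calls is why stage caching relieves the Aurora memory pressure: the repeated identical GETs that caused the errors are absorbed before they reach Lambda or the database.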

NEW QUESTION 168


- (Exam Topic 2)
A company is running a serverless application that consists of several AWS Lambda functions and Amazon DynamoDB tables. The company has created new
functionality that requires the Lambda functions to access an Amazon Neptune DB cluster The Neptune DB cluster is located in three subnets in a VPC.
Which of the possible solutions will allow the Lambda functions to access the Neptune DB cluster and DynamoDB tables? (Select TWO )

A. Create three public subnets in the Neptune VPC and route traffic through an interne: gateway Host theLambda functions m the three new public subnets
B. Create three private subnets in the Neptune VPC and route internet traffic through a NAT gateway Host the Lambda functions In the three new private subnets.
C. Host the Lambda functions outside the VP
D. Update the Neptune security group to allow access from the IP ranges of the Lambda functions.
E. Host the Lambda functions outside the VP
F. Create a VPC endpoint for the Neptune database, and have the Lambda functions access Neptune over the VPC endpoint
G. Create three private subnets in the Neptune VP
H. Host the Lambda functions m the three new isolated subnet
I. Create a VPC endpoint for DynamoD
J. and route DynamoDB traffic to the VPC endpoint

Answer: AC

NEW QUESTION 171


- (Exam Topic 2)
A company plans to refactor a monolithic application into a modern application design deployed on AWS. The CI/CD pipeline needs to be upgraded to support the modern design for the application with the following requirements:
• It should allow changes to be released several times every hour.
• It should be able to roll back the changes as quickly as possible.
Which design will meet these requirements?

A. Deploy a CI/CD pipeline that incorporates AMIs to contain the application and their configurations. Deploy the application by replacing Amazon EC2 instances.
B. Specify AWS Elastic Beanstalk to stage in a secondary environment as the deployment target for the CI/CD pipeline of the application. To deploy, swap the staging and production environment URLs.
C. Use AWS Systems Manager to re-provision the infrastructure for each deployment. Update the Amazon EC2 user data to pull the latest code artifact from Amazon S3, and use Amazon Route 53 weighted routing to point to the new environment.
D. Roll out the application updates as part of an Auto Scaling event using prebuilt AMIs. Use new versions of the AMIs to add instances, and phase out all instances that use the previous AMI version with the configured termination policy during a deployment event.

Answer: B

Explanation:
Swapping environment URLs is the fastest approach for both rolling back and deploying changes several times every hour.

NEW QUESTION 176


- (Exam Topic 2)
A new startup is running a serverless application using AWS Lambda as the primary source of compute. New versions of the application must be made available to a subset of users before deploying changes to all users. Developers should also have the ability to stop the deployment and have access to an easy rollback mechanism. A solutions architect decides to use AWS CodeDeploy to deploy changes when a new version is available.
Which CodeDeploy configuration should the solutions architect use?

A. A blue/green deployment
B. A linear deployment
C. A canary deployment
D. An all-at-once deployment

Answer: C
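A canary deployment sends a small slice of traffic to the new version first, then shifts the remainder if no alarms fire. This is the behavior behind CodeDeploy Lambda configurations such as `CodeDeployDefault.LambdaCanary10Percent5Minutes`. The simulation below is a pure-Python sketch of the routing during the canary phase, with an assumed 10% slice.

```python
import random

def route(canary_fraction: float, rng: random.Random) -> str:
    """Route one request to the new version with the given probability."""
    return "new-version" if rng.random() < canary_fraction else "old-version"

rng = random.Random(42)  # seeded so the simulation is reproducible
sample = [route(0.10, rng) for _ in range(10_000)]
canary_share = sample.count("new-version") / len(sample)

# Roughly 10% of requests reach the canary before full promotion.
assert 0.05 < canary_share < 0.15
```

Because only that slice is exposed, stopping the deployment or rolling back simply points all traffic back at the old version, which is still serving 90% of requests.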


NEW QUESTION 178


- (Exam Topic 2)
A large education company recently introduced Amazon WorkSpaces to provide access to internal applications across multiple universities. The company is storing user profiles on an Amazon FSx for Windows File Server file system. The file system is configured with a DNS alias and is connected to a self-managed Active Directory. As more users begin to use the WorkSpaces, login time increases to unacceptable levels.
An investigation reveals a degradation in performance of the file system. The company created the file system on HDD storage with a throughput of 16 MBps. A solutions architect must improve the performance of the file system during a defined maintenance window.
What should the solutions architect do to meet these requirements with the LEAST administrative effort?

A. Use AWS Backup to create a point-in-time backup of the file system. Restore the backup to a new FSx for Windows File Server file system. Select SSD as the storage type. Select 32 MBps as the throughput capacity. When the backup and restore process is completed, adjust the DNS alias accordingly. Delete the original file system.
B. Disconnect users from the file system. In the Amazon FSx console, update the throughput capacity to 32 MBps. Update the storage type to SSD. Reconnect users to the file system.
C. Deploy an AWS DataSync agent onto a new Amazon EC2 instance. Create a task. Configure the existing file system as the source location. Configure a new FSx for Windows File Server file system with SSD storage and 32 MBps of throughput as the target location. Schedule the task. When the task is completed, adjust the DNS alias accordingly. Delete the original file system.
D. Enable shadow copies on the existing file system by using a Windows PowerShell command. Schedule the shadow copy job to create a point-in-time backup of the file system. Choose to restore previous versions. Create a new FSx for Windows File Server file system with SSD storage and 32 MBps of throughput. When the copy job is completed, adjust the DNS alias. Delete the original file system.

Answer: C

NEW QUESTION 181


- (Exam Topic 2)
A company hosts its primary API on AWS by using an Amazon API Gateway API and AWS Lambda functions that contain the logic for the API methods. The company's internal applications use the API for core functionality and business logic. The company's customers use the API to access data from their accounts. Several customers also have access to a legacy API that is running on a single standalone Amazon EC2 instance.
The company wants to increase the security for these APIs to better prevent denial of service (DoS) attacks, check for vulnerabilities, and guard against common exploits.
What should a solutions architect do to meet these requirements?

A. Use AWS WAF to protect both APIs. Configure Amazon Inspector to analyze the legacy API. Configure Amazon GuardDuty to monitor for malicious attempts to access the APIs.
B. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to analyze both APIs. Configure Amazon GuardDuty to block malicious attempts to access the APIs.
C. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to analyze the legacy API. Configure Amazon GuardDuty to monitor for malicious attempts to access the APIs.
D. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to protect the legacy API. Configure Amazon GuardDuty to block malicious attempts to access the APIs.

Answer: C

NEW QUESTION 186


- (Exam Topic 2)
A solutions architect must update an application environment within AWS Elastic Beanstalk by using a blue/green deployment methodology. The solutions architect creates an environment that is identical to the existing application environment and deploys the application to the new environment.
What should be done next to complete the update?

A. Redirect to the new environment using Amazon Route 53


B. Select the Swap Environment URLs option.
C. Replace the Auto Scaling launch configuration
D. Update the DNS records to point to the green environment

Answer: B
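Conceptually, "Swap Environment URLs" exchanges the CNAMEs of the two environments, so traffic moves to the green environment without the operator touching DNS. The sketch below models that swap; the environment names and CNAMEs are illustrative.

```python
def swap_cnames(envs: dict, a: str, b: str) -> dict:
    """Return a copy of the environment map with the two CNAMEs swapped,
    mirroring what Elastic Beanstalk's URL swap does."""
    envs = dict(envs)
    envs[a], envs[b] = envs[b], envs[a]
    return envs

envs = {
    "blue":  "myapp-prod.us-east-1.elasticbeanstalk.com",
    "green": "myapp-staging.us-east-1.elasticbeanstalk.com",
}
after = swap_cnames(envs, "blue", "green")

# The green environment now answers on the production CNAME.
assert after["green"] == "myapp-prod.us-east-1.elasticbeanstalk.com"
```

Rollback is the same operation in reverse: swap again and the old environment is live once more, which is why this is preferred over manual Route 53 record edits for blue/green cutovers.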

NEW QUESTION 189


- (Exam Topic 2)
A company has a platform that contains an Amazon S3 bucket for user content. The S3 bucket has thousands of terabytes of objects, all in the S3 Standard storage class. The company has an RTO of 6 hours. The company must replicate the data from its primary AWS Region to a replication S3 bucket in another Region.
The user content S3 bucket contains user-uploaded files such as videos and photos. The user content S3 bucket has an unpredictable access pattern. The number of users is increasing quickly, and the company wants to create an S3 Lifecycle policy to reduce storage costs.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

A. Move the objects in the user content S3 bucket to S3 Intelligent-Tiering immediately


B. Move the objects in the user content S3 bucket to S3 Intelligent-Tiering after 30 days
C. Move the objects in the replication S3 bucket to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days and to S3 Glacier after 90 days
D. Move the objects in the replication S3 bucket to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days and to S3 Glacier Deep Archive after 90 days
E. Move the objects in the replication S3 bucket to S3 Standard-infrequent Access (S3 Standard-IA) after 30 days and to S3 Glacier Deep Archive after 180 days

Answer: AD
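The two selected steps map to two lifecycle rules: move the unpredictably accessed user content to Intelligent-Tiering right away, and step the replica copy down to One Zone-IA and then Glacier Deep Archive. The sketch below builds those rules as plain JSON using the S3 API's storage-class names; the rule IDs are illustrative.

```python
import json

# Rule for the user content bucket: Intelligent-Tiering immediately
# suits an unpredictable access pattern (Days: 0 transitions at once).
user_content_rule = {
    "ID": "user-content-intelligent-tiering",
    "Status": "Enabled",
    "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
}

# Rule for the replication bucket: the replica is a second copy, so the
# cheaper single-AZ and deep-archive tiers are acceptable there.
replica_rule = {
    "ID": "replica-archive",
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "ONEZONE_IA"},
        {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
    ],
}

assert replica_rule["Transitions"][1]["StorageClass"] == "DEEP_ARCHIVE"
print(json.dumps({"Rules": [user_content_rule, replica_rule]}, indent=2))
```

Note the asymmetry: One Zone-IA is fine for the replica because losing it is recoverable from the primary, while the primary keeps the multi-AZ durability of Intelligent-Tiering's tiers.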

NEW QUESTION 191


- (Exam Topic 2)
A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The company's applications and databases are running in Account B.
A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53.


During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solutions architect confirmed that the record set was created correctly in Route 53.
Which combination of steps should the solutions architect take to resolve this issue? (Select TWO.)

A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance's private IP in the private hosted zone.
B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
E. Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.

Answer: CE

NEW QUESTION 194


......


THANKS FOR TRYING THE DEMO OF OUR PRODUCT

Visit Our Site to Purchase the Full Set of Actual SAP-C02 Exam Questions With Answers.

We Also Provide Practice Exam Software That Simulates Real Exam Environment And Has Many Self-Assessment Features. Order the SAP-
C02 Product From:

https://www.2passeasy.com/dumps/SAP-C02/

Money Back Guarantee

SAP-C02 Practice Exam Features:

* SAP-C02 Questions and Answers Updated Frequently

* SAP-C02 Practice Questions Verified by Expert Senior Certified Staff

* SAP-C02 Most Realistic Questions that Guarantee you a Pass on Your FirstTry

* SAP-C02 Practice Test Questions in Multiple Choice Formats and Updatesfor 1 Year
