Sap c02 True2
https://www.2passeasy.com/dumps/SAP-C02/
NEW QUESTION 1
- (Exam Topic 1)
A company wants to deploy an AWS WAF solution to manage AWS WAF rules across multiple AWS accounts. The accounts are managed under different OUs in
AWS Organizations.
Administrators must be able to add or remove accounts or OUs from managed AWS WAF rule sets as needed. Administrators also must have the ability to
automatically update and remediate noncompliant AWS WAF rules in all accounts
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Use AWS Firewall Manager to manage AWS WAF rules across accounts in the organization. Use an AWS Systems Manager Parameter Store parameter to store account numbers and OUs to manage. Update the parameter as needed to add or remove accounts or OUs. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to identify any changes to the parameter and to invoke an AWS Lambda function to update the security policy in the Firewall Manager administrative account.
B. Deploy an organization-wide AWS Config rule that requires all resources in the selected OUs to associate the AWS WAF rules. Deploy automated remediation actions by using AWS Lambda to fix noncompliant resources. Deploy AWS WAF rules by using an AWS CloudFormation stack set to target the same OUs where the AWS Config rule is applied.
C. Create AWS WAF rules in the management account of the organization. Use AWS Lambda environment variables to store account numbers and OUs to manage. Update environment variables as needed to add or remove accounts or OUs. Create cross-account IAM roles in member accounts. Assume the roles by using AWS Security Token Service (AWS STS) in the Lambda function to create and update AWS WAF rules in the member accounts.
D. Use AWS Control Tower to manage AWS WAF rules across accounts in the organization. Use AWS Key Management Service (AWS KMS) to store account numbers and OUs to manage. Update AWS KMS as needed to add or remove accounts or OUs. Create IAM users in member accounts. Allow AWS Control Tower in the management account to use the access key and secret access key to create and update AWS WAF rules in the member accounts.
Answer: A
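As an illustration of the Firewall Manager option, here is a minimal sketch of the policy document that the Firewall Manager administrator account would pass to the FMS PutPolicy API. All names and OU IDs are placeholders, and ManagedServiceData is abbreviated; the call itself is not executed here.

```python
import json

def build_waf_policy(ou_ids):
    """Build a simplified FMS security policy for WAFv2, scoped to OUs."""
    return {
        "PolicyName": "org-waf-baseline",  # hypothetical name
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            # In the real API this JSON string describes the rule groups;
            # abbreviated for the sketch.
            "ManagedServiceData": json.dumps({"type": "WAFV2"}),
        },
        "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "ExcludeResourceTags": False,
        # Auto-remediation brings noncompliant resources back in line.
        "RemediationEnabled": True,
        # IncludeMap scopes the policy to specific OUs; editing this map
        # is how administrators add or remove OUs from the managed rules.
        "IncludeMap": {"ORGUNIT": list(ou_ids)},
    }

policy = build_waf_policy(["ou-ab12-11111111", "ou-ab12-22222222"])
print(json.dumps(policy, indent=2))
```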
NEW QUESTION 2
- (Exam Topic 1)
A company is serving files to its customers through an SFTP server that is accessible over the internet. The SFTP server is running on a single Amazon EC2
instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for authentication. The EC2
instance also has an attached security group that allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to
customers who access files. The solution must not change the way customers connect.
Which solution will meet these requirements?
A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the EFS file system.
D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.
Answer: B
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-type/
https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html
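To illustrate the VPC-hosted endpoint approach, this is a sketch of the parameters that would be passed to the Transfer Family CreateServer API (boto3 `transfer.create_server`). All resource IDs are placeholders and the call is not executed here.

```python
# Sketch: a Transfer Family SFTP server with a VPC-hosted, internet-facing
# endpoint. Attaching the existing Elastic IP (via its allocation ID) keeps
# the customer-facing address unchanged, and the endpoint can carry the
# security group that allows the customer IP ranges.
create_server_params = {
    "Protocols": ["SFTP"],
    "IdentityProviderType": "SERVICE_MANAGED",
    "EndpointType": "VPC",
    "EndpointDetails": {
        "VpcId": "vpc-0123456789abcdef0",               # placeholder
        "SubnetIds": ["subnet-0123456789abcdef0"],      # placeholder
        "AddressAllocationIds": ["eipalloc-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
}
print(create_server_params["EndpointType"])
```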
NEW QUESTION 3
- (Exam Topic 1)
A company has a website that enables users to upload videos. Company policy states the uploaded videos must be analyzed for restricted content. An uploaded
video is placed in Amazon S3, and a message is pushed to an Amazon SQS queue with the video's location. A backend application pulls this location from
Amazon SQS and analyzes the video.
The video analysis is compute-intensive and occurs sporadically during the day. The website scales with demand. The video analysis application runs on a fixed
number of instances. Peak demand occurs during the holidays, so the company must add instances to the application during this time. All instances used are
currently On-Demand Amazon EC2 T2 instances. The company wants to reduce the cost of the current solution.
Which of the following solutions is MOST cost-effective?
Answer: B
NEW QUESTION 4
- (Exam Topic 1)
A company hosts a large on-premises MySQL database at its main office that supports an issue tracking system used by employees around the world. The
company already uses AWS for some workloads and has created an Amazon Route 53 entry for the database endpoint that points to the on-premises database.
Management is concerned about the database being a single point of failure and wants a solutions architect to migrate the database to AWS without any data loss
or downtime.
Which set of actions should the solutions architect implement?
Answer: C
Explanation:
“Around the world” eliminates the possibility of a nightly maintenance window. The other difference is the ability to leverage continuous replication from MySQL to
Aurora.
NEW QUESTION 5
- (Exam Topic 1)
An online e-commerce business is running a workload on AWS. The application architecture includes a web tier, an application tier for business logic, and a
database tier for user and transactional data management. The database server has a 100 GB memory requirement. The business requires cost-efficient disaster
recovery for the application with an RTO of 5 minutes and an RPO of 1 hour. The business also has a regulatory requirement for out-of-region disaster recovery
with a minimum distance between the primary and alternate sites of 250 miles.
Which of the following options can the solutions architect design to create a comprehensive solution for this customer that meets the disaster recovery
requirements?
A. Back up the application and database data frequently and copy them to Amazon S3. Replicate the backups using S3 cross-Region replication, and use AWS CloudFormation to instantiate infrastructure for disaster recovery and restore data from Amazon S3.
B. Employ a pilot light environment in which the primary database is configured with mirroring to build a standby database on m4.large in the alternate Region. Use AWS CloudFormation to instantiate the web servers, application servers, and load balancers in case of a disaster to bring the application up in the alternate Region. Vertically resize the database to meet the full production demands, and use Amazon Route 53 to switch traffic to the alternate Region.
C. Use a scaled-down version of the fully functional production environment in the alternate Region that includes one instance of the web server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and the application tiers in an Auto Scaling group behind a load balancer, which can automatically scale when the load arrives to the application. Use Amazon Route 53 to switch traffic to the alternate Region.
D. Employ a multi-Region solution with fully functional web, application, and database tiers in both Regions with equivalent capacity. Activate the primary database in one Region only and the standby database in the other Region. Use Amazon Route 53 to automatically switch traffic from one Region to another using health check routing policies.
Answer: C
Explanation:
As RTO is in minutes
(https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html ) Warm standby (RPO in seconds, RTO in minutes): Maintain
a scaled-down version of a fully functional environment always running in the DR Region. Business-critical systems are fully duplicated and are always on, but with
a scaled down fleet. When the time comes for recovery, the system is scaled up quickly to handle the production load.
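The recovery step of a warm standby can be sketched as the parameters for scaling the standby Region's Auto Scaling group from its pilot size up to production size (boto3 `autoscaling.update_auto_scaling_group`). Names and numbers are placeholders; the call is not executed here.

```python
# Sketch: scale up a warm-standby Auto Scaling group during failover.
def scale_up_params(asg_name, prod_capacity):
    """Parameters to grow the DR fleet to production size."""
    return {
        "AutoScalingGroupName": asg_name,
        "MinSize": prod_capacity,
        "MaxSize": prod_capacity * 2,   # headroom for peak load
        "DesiredCapacity": prod_capacity,
    }

params = scale_up_params("dr-web-asg", 6)  # hypothetical ASG and sizing
print(params["DesiredCapacity"])
```

Because a small fleet is already running, only this scale-up stands between the DR Region and full production load, which is what keeps the RTO in minutes.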
NEW QUESTION 6
- (Exam Topic 1)
A financial services company logs personally identifiable information to its application logs stored in Amazon S3. Due to regulatory compliance requirements, the
log files must be encrypted at rest. The security team has mandated that the company's on-premises hardware security modules (HSMs) be used to generate the
CMK material.
Which steps should the solutions architect take to meet these requirements?
Answer: C
Explanation:
https://aws.amazon.com/blogs/security/how-to-byok-bring-your-own-key-to-aws-kms-for-less-than-15-00-a-yea
https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys-create-cmk.html
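The BYOK (bring your own key) flow from the linked documentation can be sketched as the sequence of KMS API calls involved, shown here as a data structure rather than executed calls. Key IDs, wrapped material, and tokens are placeholders.

```python
# Sketch of the KMS import-key-material (BYOK) sequence:
byok_steps = [
    # 1. Create a CMK with no key material; Origin=EXTERNAL marks it as
    #    an import target.
    ("create_key", {"Origin": "EXTERNAL", "Description": "PAN log key"}),
    # 2. Download a wrapping public key and import token used to protect
    #    the HSM-generated material in transit.
    ("get_parameters_for_import",
     {"KeyId": "<key-id>", "WrappingAlgorithm": "RSAES_OAEP_SHA_256",
      "WrappingKeySpec": "RSA_2048"}),
    # 3. Upload the key material wrapped by the on-premises HSM.
    ("import_key_material",
     {"KeyId": "<key-id>", "EncryptedKeyMaterial": b"<wrapped material>",
      "ImportToken": b"<token>",
      "ExpirationModel": "KEY_MATERIAL_DOES_NOT_EXPIRE"}),
]
for name, _ in byok_steps:
    print(name)
```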
NEW QUESTION 7
- (Exam Topic 1)
A development team has created a new flight tracker application that provides near-real-time data to users. The application has a front end that consists of an
Application Load Balancer (ALB) in front of two large Amazon EC2 instances in a single Availability Zone. Data is stored in a single Amazon RDS MySQL DB
instance. An Amazon Route 53 DNS record points to the ALB.
Management wants the development team to improve the solution to achieve maximum reliability with the least amount of operational overhead.
Which set of actions should the team take?
Answer: D
Explanation:
Multi-AZ ASG + ALB + Aurora = less overhead and automatic scaling
NEW QUESTION 8
- (Exam Topic 1)
A company has a complex web application that leverages Amazon CloudFront for global scalability and performance. Over time, users report that the web
application is slowing down.
The company's operations team reports that the CloudFront cache hit ratio has been dropping steadily. The cache metrics report indicates that query strings on
some URLs are inconsistently ordered and are specified sometimes in mixed-case letters and sometimes in lowercase letters.
Which set of actions should the solutions architect take to increase the cache hit ratio as quickly as possible?
A. Deploy a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
B. Update the CloudFront distribution to disable caching based on query string parameters.
C. Deploy a reverse proxy after the load balancer to post-process the emitted URLs in the application to force the URL strings to be lowercase.
D. Update the CloudFront distribution to specify case-insensitive query string processing.
Answer: A
Explanation:
https://docs.amazonaws.cn/en_us/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-ex Before CloudFront serves content from the cache,
it will trigger any Lambda function associated with the viewer request, in which we can normalize parameters.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examp
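The normalization described above can be sketched as a viewer-request Lambda@Edge handler that lowercases query parameters and sorts them by name, so equivalent URLs collapse to one cache key. The event shape follows the CloudFront Lambda@Edge event structure; the handler name is illustrative.

```python
# Minimal sketch of a viewer-request handler that normalizes query strings.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    qs = request.get("querystring", "")
    if qs:
        # Lowercase each name=value pair, then sort pairs by name so the
        # same parameters always produce the same cache key.
        pairs = sorted(p.lower() for p in qs.split("&"))
        request["querystring"] = "&".join(pairs)
    return request

event = {"Records": [{"cf": {"request": {
    "uri": "/flights", "querystring": "Size=LARGE&Color=Red"}}}]}
print(handler(event, None)["querystring"])  # color=red&size=large
```

Note this also lowercases values; if values are case-sensitive for the origin, only the parameter names should be lowercased.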
NEW QUESTION 9
- (Exam Topic 1)
A company has an application that sells tickets online and experiences bursts of demand every 7 days. The application has a stateless presentation layer running
on Amazon EC2, an Oracle database to store unstructured data catalog information, and a backend API layer. The front-end layer uses an Elastic Load Balancer
to distribute the load across nine On-Demand Instances over three Availability Zones (AZs). The Oracle database is running on a single EC2 instance. The
company is experiencing performance issues when running more than two concurrent campaigns. A solutions architect must design a solution that meets the
following requirements:
• Address scalability issues.
A. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the Oracle database into a single Amazon RDS reserved DB instance.
B. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Create two additional copies of the database instance, then distribute the databases in separate AZs.
C. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the tables in the Oracle database into Amazon DynamoDB tables.
D. Convert the On-Demand Instances into Spot Instances to reduce costs for the front end. Convert the tables in the Oracle database into Amazon DynamoDB tables.
Answer: C
Explanation:
Combination of On-Demand and Spot Instances + DynamoDB.
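The On-Demand plus Spot combination can be sketched as the MixedInstancesPolicy an Auto Scaling group accepts (boto3 `autoscaling.create_auto_scaling_group` parameter). All names and numbers are illustrative placeholders; the call is not executed here.

```python
# Sketch: keep a small always-on On-Demand base and serve bursts with Spot.
mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "front-end",  # placeholder
            "Version": "$Latest",
        },
    },
    "InstancesDistribution": {
        # The first 3 instances are On-Demand; above that, 25% On-Demand
        # and 75% Spot.
        "OnDemandBaseCapacity": 3,
        "OnDemandPercentageAboveBaseCapacity": 25,
        "SpotAllocationStrategy": "capacity-optimized",
    },
}
print(mixed_instances_policy["InstancesDistribution"]["SpotAllocationStrategy"])
```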
NEW QUESTION 10
- (Exam Topic 1)
A company wants to change its internal cloud billing strategy for each of its business units. Currently, the cloud governance team shares reports for overall cloud
spending with the head of each business unit. The company uses AWS Organizations to manage the separate AWS accounts for each business unit. The existing
tagging standard in Organizations includes the application, environment, and owner. The cloud governance team wants a centralized solution so each business
unit receives monthly reports on its cloud spending. The solution should also send notifications for any cloud spending that exceeds a set threshold.
Which solution is the MOST cost-effective way to meet these requirements?
A. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in each account to create monthly reports for each business unit.
B. Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create monthly reports for each business unit.
C. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use the AWS Billing and Cost Management dashboard in each account to create monthly reports for each business unit.
D. Enable AWS Cost and Usage Reports in the organization's master account and configure reports grouped by application, environment, and owner. Create an AWS Lambda function that processes AWS Cost and Usage Reports, sends budget alerts, and sends monthly reports to each business unit's email list.
Answer: B
Explanation:
Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and owner. Add each
business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create monthly reports for each business unit.
https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-aws-budgets-reports/#:~:text=AWS%20Bud
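A per-business-unit budget with an SNS alert can be sketched as the parameters for the AWS Budgets CreateBudget API (boto3 `budgets.create_budget`), called from the master (management) account. Account IDs, names, tag values, and the ARN are placeholders; the call is not executed here.

```python
# Sketch: a monthly cost budget scoped by the organization's cost tags,
# alerting an SNS topic at 80% of the limit.
create_budget_params = {
    "AccountId": "111122223333",  # management account (placeholder)
    "Budget": {
        "BudgetName": "bu-retail-monthly",       # hypothetical BU
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        # Scope with the existing tagging standard (owner tag shown).
        "CostFilters": {"TagKeyValue": ["user:owner$retail-team"]},
    },
    "NotificationsWithSubscribers": [{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,          # percent of the budget limit
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111122223333:bu-retail-alerts",
        }],
    }],
}
print(create_budget_params["Budget"]["BudgetName"])
```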
NEW QUESTION 10
- (Exam Topic 1)
A company has a three-tier application running on AWS with a web server, an application server, and an Amazon RDS MySQL DB instance. A solutions architect
is designing a disaster recovery (DR) solution with an RPO of 5 minutes.
Which solution will meet the company's requirements?
A. Configure AWS Backup to perform cross-Region backups of all servers every 5 minutes. Reprovision the three tiers in the DR Region from the backups using AWS CloudFormation in the event of a disaster.
B. Maintain another running copy of the web and application server stack in the DR Region using AWS CloudFormation drift detection. Configure cross-Region snapshots of the DB instance to the DR Region every 5 minutes. In the event of a disaster, restore the DB instance using the snapshot in the DR Region.
C. Use Amazon EC2 Image Builder to create and copy AMIs of the web and application server to both the primary and DR Regions. Create a cross-Region read replica of the DB instance in the DR Region. In the event of a disaster, promote the read replica to become the master and reprovision the servers with AWS CloudFormation using the AMIs.
D. Create AMIs of the web and application servers in the DR Region. Use scheduled AWS Glue jobs to synchronize the DB instance with another DB instance in the DR Region. In the event of a disaster, switch to the DB instance in the DR Region and reprovision the servers with AWS CloudFormation using the AMIs.
Answer: C
Explanation:
Deploying a brand-new RDS instance would take more than 30 minutes. EC2 Image Builder is used to copy the AMIs into the DR Region, not to launch them.
NEW QUESTION 15
- (Exam Topic 1)
A company is running a web application on Amazon EC2 instances in a production AWS account. The company requires all logs generated from the web
application to be copied to a central AWS account for analysis and archiving. The company's AWS accounts are currently managed independently. Logging agents
are configured on the EC2 instances to upload the log files to an Amazon S3 bucket in the central AWS account.
A solutions architect needs to provide access for a solution that will allow the production account to store log files in the central account. The central account also
needs to have read access to the log files.
What should the solutions architect do to meet these requirements?
Answer: B
NEW QUESTION 17
- (Exam Topic 1)
A scientific organization requires the processing of text and picture data stored in an Amazon S3 bucket. The data is gathered from numerous radar stations during
a mission's live, time-critical phase. The data is uploaded by the radar stations to the source S3 bucket. The data is prefixed with the identification number of the
radar station.
In a second account, the business built a destination S3 bucket. To satisfy a compliance target, data must be replicated from the source S3 bucket to the
destination S3 bucket. Replication is accomplished by using an S3 replication rule that covers all items in the source S3 bucket.
A single radar station has been recognized as having the most precise data. At this radar station, data replication must be completed within 30 minutes of the radar
station uploading the items to the source S3 bucket.
What actions should a solutions architect take to ensure that these criteria are met?
A. Set up an AWS DataSync agent to replicate the prefixed data from the source S3 bucket to the destination S3 bucket. Select to use all available bandwidth on the task, and monitor the task to ensure that it is in the TRANSFERRING status. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.
B. In the second account, create another S3 bucket to receive data from the radar station with the most accurate data. Set up a new replication rule for this new S3 bucket to separate the replication from the other radar stations. Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
C. Enable Amazon S3 Transfer Acceleration on the source S3 bucket, and configure the radar station with the most accurate data to use the new endpoint. Monitor the S3 destination bucket's TotalRequestLatency metric. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.
D. Create a new S3 replication rule on the source S3 bucket that filters for the keys that use the prefix of the radar station with the most accurate data. Enable S3 Replication Time Control (S3 RTC). Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
Answer: D
Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-time-control.html
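An RTC-enabled, prefix-filtered replication rule can be sketched as the configuration the S3 PutBucketReplication API accepts. The station prefix, role, and bucket ARNs are placeholders; the call is not executed here.

```python
# Sketch: replicate only the high-priority station's prefix with S3
# Replication Time Control, which targets replication of 99.99% of
# objects within 15 minutes and emits metrics usable for alerting.
replication_configuration = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication",  # placeholder
    "Rules": [{
        "ID": "radar-station-42-rtc",       # hypothetical station
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": "station-42/"},
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": "arn:aws:s3:::destination-bucket",
            "ReplicationTime": {"Status": "Enabled",
                                "Time": {"Minutes": 15}},
            "Metrics": {"Status": "Enabled",
                        "EventThreshold": {"Minutes": 15}},
        },
    }],
}
print(replication_configuration["Rules"][0]["Filter"]["Prefix"])
```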
NEW QUESTION 21
- (Exam Topic 1)
A large company with hundreds of AWS accounts has a newly established centralized internal process for purchasing new or modifying existing Reserved
Instances. This process requires all business units that want to purchase or modify Reserved Instances to submit requests to a dedicated team for procurement or
execution. Previously, business units would directly purchase or modify Reserved Instances in their own respective AWS accounts autonomously.
Which combination of steps should be taken to proactively enforce the new process in the MOST secure way possible? (Select TWO.)
A. Ensure that all AWS accounts are part of an AWS Organizations structure operating in all features mode.
B. Use AWS Config to report on the attachment of an IAM policy that denies access to the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions.
C. In each AWS account, create an IAM policy with a DENY rule for the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions.
D. Create an SCP that contains a deny rule for the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions. Attach the SCP to each organizational unit (OU) of the AWS Organizations structure.
E. Ensure that all AWS accounts are part of an AWS Organizations structure operating in consolidated billing features mode.
Answer: AD
Explanation:
https://docs.aws.amazon.com/organizations/latest/APIReference/API_EnableAllFeatures.html
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp-strategies.html
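The SCP from option D can be sketched as the policy document that would be attached to each OU once the organization runs in all features mode. The Sid is illustrative.

```python
import json

# Sketch: deny Reserved Instance purchase/modification in member accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRIActions",  # illustrative
        "Effect": "Deny",
        "Action": [
            "ec2:PurchaseReservedInstancesOffering",
            "ec2:ModifyReservedInstances",
        ],
        "Resource": "*",
    }],
}
print(json.dumps(scp))
```

Unlike per-account IAM policies, an SCP cannot be removed by member-account administrators, which is what makes this the most secure enforcement point.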
NEW QUESTION 24
- (Exam Topic 1)
A company has a new application that needs to run on five Amazon EC2 instances in a single AWS Region. The application requires high-throughput, low-latency
network connections between all of the EC2 instances where the application will run. There is no requirement for the application to be fault tolerant.
Which solution will meet these requirements?
Answer: A
Explanation:
When you launch EC2 instances in a cluster placement group, they benefit from high throughput and low latency between instances. There is no redundancy, but the question states that fault tolerance is not required.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html.
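The cluster placement group approach can be sketched as the two boto3 EC2 parameter sets involved: creating the group and launching all five instances into it. The AMI ID and instance type are placeholders; the calls are not executed here.

```python
# Sketch: a cluster placement group packs instances close together on
# high-bisection-bandwidth hardware for low-latency networking.
create_group_params = {"GroupName": "app-cluster", "Strategy": "cluster"}

run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",   # placeholder
    "InstanceType": "c5n.9xlarge",        # illustrative network-optimized type
    "MinCount": 5,
    "MaxCount": 5,
    "Placement": {"GroupName": "app-cluster"},
}
print(run_instances_params["Placement"]["GroupName"])
```

Launching all five in one request (MinCount=MaxCount=5) reduces the chance of insufficient-capacity errors inside the placement group.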
NEW QUESTION 25
- (Exam Topic 1)
A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also
runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job
runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling
group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed
data for the duration of the 72-hour run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?
A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
Answer: A
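The FSx for Lustre lazy-loading approach described in the options can be sketched as the parameters for the FSx CreateFileSystem API (boto3 `fsx.create_file_system`). Bucket name, subnet ID, and sizing are placeholders; the call is not executed here.

```python
# Sketch: a short-lived scratch Lustre file system linked to S3. With an
# ImportPath, file metadata is loaded up front and object data is lazy-
# loaded from S3 on first read, so the monthly job only pulls the subset
# of files it actually touches.
create_file_system_params = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 204800,  # GiB; illustrative sizing for ~200 TB
    "SubnetIds": ["subnet-0123456789abcdef0"],  # placeholder
    "LustreConfiguration": {
        "DeploymentType": "SCRATCH_2",             # short-lived, cost-focused
        "ImportPath": "s3://shared-data-bucket",   # placeholder bucket
        "ExportPath": "s3://shared-data-bucket",   # write results back
    },
}
print(create_file_system_params["LustreConfiguration"]["DeploymentType"])
```

Deleting the file system after the 72-hour run means storage costs accrue for days per month instead of continuously.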
NEW QUESTION 29
- (Exam Topic 1)
An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a solutions architect to design a
simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them
in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with
minimal delays.
Which of the following is the MOST reliable approach to meet the requirements?
A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
B. Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.
C. Receive the orders using the AWS Step Functions program and trigger an Amazon ECS container to process them.
D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.
Answer: B
Explanation:
Q: How does Amazon Kinesis Data Streams differ from Amazon SQS?
Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay
records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the
same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting,
aggregation, and filtering).
https://aws.amazon.com/kinesis/data-streams/faqs/
https://aws.amazon.com/blogs/big-data/unite-real-time-and-batch-analytics-using-the-big-data-lambda-architect
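The SQS-plus-Lambda pattern can be sketched as a minimal Lambda handler consuming an SQS batch event; each record body carries an order. The field names and the DynamoDB write (left as a comment) are hypothetical.

```python
import json

# Minimal sketch of an SQS-triggered Lambda handler. The event shape
# follows the standard SQS-to-Lambda batch format.
def handler(event, context):
    processed = []
    for record in event["Records"]:
        order = json.loads(record["body"])
        # Placeholder for business logic: in the real application this
        # would validate the order and write it to the DynamoDB table.
        processed.append(order["order_id"])
    return {"processed": processed}

event = {"Records": [{"body": json.dumps({"order_id": "o-1", "total": 42})}]}
print(handler(event, None))  # {'processed': ['o-1']}
```

Because Lambda scales concurrency with queue depth and SQS buffers bursts, this pairing absorbs sporadic campaign traffic without pre-provisioned capacity.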
NEW QUESTION 30
- (Exam Topic 1)
A solutions architect needs to advise a company on how to migrate its on-premises data processing application to the AWS Cloud. Currently, users upload input
files through a web portal. The web server then stores the uploaded files on NAS and messages the processing server over a message queue. Each media file can
take up to 1 hour to process. The company has determined that the number of media files awaiting processing is significantly higher during business hours, with
the number of files rapidly declining after business hours.
What is the MOST cost-effective migration recommendation?
Answer: D
Explanation:
https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/
NEW QUESTION 33
- (Exam Topic 1)
A company standardized its method of deploying applications to AWS using AWS CodePipeline and AWS CloudFormation. The applications are in TypeScript and
Python. The company has recently acquired another business that deploys applications to AWS using Python scripts.
Developers from the newly acquired company are hesitant to move their applications under CloudFormation because it would require that they learn a new
domain-specific language and eliminate their access to language features, such as looping.
How can the acquired applications quickly be brought up to deployment standards while addressing the developers' concerns?
A. Create CloudFormation templates and re-use parts of the Python scripts as instance user data. Use the AWS Cloud Development Kit (AWS CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using these templates.
B. Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes of the existing and acquired company. Orchestrate the CodeBuild job using CodePipeline.
C. Standardize on AWS OpsWorks. Integrate OpsWorks with CodePipeline. Have the developers create Chef recipes to deploy their applications on AWS.
D. Define the AWS resources using TypeScript or Python. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developers' code, and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.
Answer: D
NEW QUESTION 37
- (Exam Topic 1)
A company is using AWS Organizations to manage multiple accounts. Due to regulatory requirements, the company wants to restrict specific member accounts to
certain AWS Regions, where they are permitted to deploy resources. The resources in the accounts must be tagged, enforced based on a group standard, and
centrally managed with minimal configuration.
What should a solutions architect do to meet these requirements?
A. Create an AWS Config rule in the specific member accounts to limit Regions and apply a tag policy.
B. From the AWS Billing and Cost Management console, in the master account, disable Regions for the specific member accounts and apply a tag policy on the root.
C. Associate the specific member accounts with the root. Apply a tag policy and an SCP using conditions to limit Regions.
D. Associate the specific member accounts with a new OU. Apply a tag policy and an SCP using conditions to limit Regions.
Answer: D
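The Region-limiting SCP mentioned in the options can be sketched as a policy document attached to the new OU; a tag policy (not shown) would enforce the tagging standard alongside it. The permitted Region list and the exempted global services are illustrative.

```python
import json

# Sketch: deny any action requested outside the permitted Regions, while
# exempting global services that have no Regional endpoint.
region_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideAllowedRegions",  # illustrative
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "support:*"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
            }
        },
    }],
}
print(json.dumps(region_scp["Statement"][0]["Condition"]))
```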
NEW QUESTION 40
- (Exam Topic 1)
A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15 minutes in plaintext,
delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card primary account number (PAN) data.
The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing. The company also needs to
remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely to be added in the future, so any design
needs to be easily expandable.
Which solutions will meet these requirements?
A. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue.
B. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3.
Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal
processing.
C. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue.
D. Configure an AWS Fargate container application to
E. automatically scale to a single instance when the SQS queue contains messages.
F. Have the application process each record, and transform the record into JSON format.
G. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate instance.
H. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file
delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements.
I. Define the output format as JSON.
J. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
K. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match.
L. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and
transformation requirements.
M. Define the output format as JSON.
N. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
Answer: C
Explanation:
You can use a Glue crawler to populate the AWS Glue Data Catalog with tables. The Lambda function can be triggered using S3 event notifications when object
create events occur. The Lambda function will then trigger the Glue ETL job to transform the records masking the sensitive data and modifying the output format to
JSON. This solution meets all requirements.
Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file
delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as
JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
https://docs.aws.amazon.com/glue/latest/dg/trigger-job.html
https://d1.awsstatic.com/Products/product-name/diagrams/product-page-diagram_Glue_Event-driven-ETL-Pipel
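The event-driven trigger described above can be sketched as a Lambda handler that starts one Glue job run per delivered object. The job name and argument names are assumptions, and the Glue client is injected (here an in-memory stub) so the wiring can be demonstrated without AWS credentials; in a real deployment the client would be `boto3.client("glue")` and the handler would be wired to the bucket's `s3:ObjectCreated:*` notification.

```python
class _StubGlueClient:
    """In-memory stand-in for boto3.client('glue'), used only so the
    handler's wiring can be exercised without AWS credentials."""
    def __init__(self):
        self.calls = []

    def start_job_run(self, **kwargs):
        self.calls.append(kwargs)
        return {"JobRunId": f"jr_{len(self.calls)}"}


def make_s3_to_glue_handler(glue_client, job_name):
    """Return a Lambda handler that starts one Glue ETL job run per
    object referenced in an S3 event notification."""
    def handler(event, context=None):
        run_ids = []
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            resp = glue_client.start_job_run(
                JobName=job_name,
                Arguments={"--input_bucket": bucket, "--input_key": key},
            )
            run_ids.append(resp["JobRunId"])
        return run_ids
    return handler


# Hypothetical job name and feed object, for illustration only.
stub = _StubGlueClient()
handler = make_s3_to_glue_handler(stub, "mask-pan-etl-job")
event = {"Records": [{"s3": {"bucket": {"name": "card-feed"},
                             "object": {"key": "feed-001.csv"}}}]}
run_ids = handler(event)
```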
NEW QUESTION 41
- (Exam Topic 1)
An online retail company hosts its stateful web-based application and MySQL database in an on-premises data center on a single server. The company wants to
increase its customer base by conducting more marketing campaigns and promotions. In preparation, the company wants to migrate its application and database
to AWS to increase the reliability of its architecture.
Which solution should provide the HIGHEST level of reliability?
Answer: B
NEW QUESTION 44
- (Exam Topic 1)
A company built an ecommerce website on AWS using a three-tier web architecture. The application is
Java-based and composed of an Amazon CloudFront distribution, an Apache web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend
Amazon Aurora MySQL database.
Last month, during a promotional sales event, users reported errors and timeouts while adding items to their shopping carts. The operations team recovered the
logs created by the web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated before logs could be collected
and the Aurora metrics were not sufficient for query performance analysis.
Which combination of steps must the solutions architect take to improve application performance visibility during peak traffic events? (Select THREE.)
A. Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs.
B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for Java.
C. Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon Kinesis.
D. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logsto CloudWatch Logs.
E. Enable and configure AWS CloudTrail to collect and analyze application activity from Amazon EC2 and Aurora.
F. Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to AWS X-Ray.
Answer: ABD
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.MySQL.html# https://aws.amazon.com/blogs/mt/simplifying-
apache-server-logs-with-amazon-cloudwatch-logs-insights/ https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet-messagehandler.html
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-sqlclients.html
NEW QUESTION 47
- (Exam Topic 1)
A large company in Europe plans to migrate its applications to the AWS Cloud. The company uses multiple AWS accounts for various business groups. A data
privacy law requires the company to restrict developers'
access to AWS European Regions only.
What should the solutions architect do to meet this requirement with the LEAST amount of management overhead?
Answer: B
Explanation:
"This policy uses the Deny effect to deny access to all requests for operations that don't target one of the two approved regions (eu-central-1 and eu-west-1)."
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.htm
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html
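A minimal sketch of the Deny-based SCP quoted above, built as a Python dict for illustration. The statement Sid is an assumption, and real policies typically carry a `NotAction` list exempting global services (IAM, Organizations, Route 53, and so on), which is omitted here for brevity.

```python
def make_region_restriction_scp(allowed_regions):
    """Build an SCP document that denies any request whose target
    Region is not in allowed_regions. Global-service exemptions
    (normally expressed with NotAction) are omitted for brevity."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideApprovedRegions",
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {
                        "aws:RequestedRegion": allowed_regions
                    }
                },
            }
        ],
    }

# The two approved Regions from the referenced example policy.
scp = make_region_restriction_scp(["eu-central-1", "eu-west-1"])
```

Attaching this SCP to the OU containing the developer accounts enforces the restriction centrally, with no per-account configuration.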
NEW QUESTION 48
- (Exam Topic 1)
A company is building a hybrid solution between its existing on-premises systems and a new backend in AWS. The company has a management application to
monitor the state of its current IT infrastructure and automate responses to issues. The company wants to incorporate the status of its consumed AWS services
into the application. The application uses an HTTPS endpoint to receive updates.
Which approach meets these requirements with the LEAST amount of operational overhead?
A. Configure AWS Systems Manager OpsCenter to ingest operational events from the on-premises systems Retire the on-premises management application and
adopt OpsCenter as the hub
B. Configure Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes for AWS Health events from the AWS Personal Health
Dashboard Configure the EventBridge (CloudWatch Events) event to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic and
subscribe the topic to the HTTPS endpoint of the management application
C. Modify the on-premises management application to call the AWS Health API to poll for status events of AWS services.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes for AWS Health events from the AWS Service Health Dashboard
Configure the EventBridge (CloudWatch Events) event to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the
topic to an HTTPS endpoint for the management application with a topic filter corresponding to the services being used
Answer: B
Explanation:
Amazon EventBridge (Amazon CloudWatch Events) can match AWS Health events from the AWS Personal Health Dashboard and route them to an Amazon SNS
topic. Subscribing the management application's existing HTTPS endpoint to the topic pushes service status updates into the application with no polling code and
no replacement of the on-premises tool, which is the least operational overhead.
https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html
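For the EventBridge-based approach, the rule's event pattern can be sketched as follows. The optional service list is an illustrative assumption; without it the rule matches Health events for all services.

```python
def make_health_event_pattern(services=None):
    """Build an EventBridge event pattern that matches AWS Health
    events, optionally narrowed to specific AWS services."""
    pattern = {
        "source": ["aws.health"],
        "detail-type": ["AWS Health Event"],
    }
    if services:
        pattern["detail"] = {"service": services}
    return pattern

# Hypothetical: only watch Health events for EC2 and S3.
pattern = make_health_event_pattern(["EC2", "S3"])
```

The rule's target would be the SNS topic, with the HTTPS endpoint subscribed to the topic.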
NEW QUESTION 52
- (Exam Topic 1)
A company is building an image service on the web that will allow users to upload and search random photos. At peak usage, up to 10,000 users worldwide will
upload their images. The service will then overlay text on the uploaded images, which will then be published on the company website.
Which design should a solutions architect implement?
A. Store the uploaded images in Amazon Elastic File System (Amazon EFS). Send application log information about each image to Amazon CloudWatch Logs.
B. Create a fleet of Amazon EC2 instances that use CloudWatch Logs to determine which images need to be processed.
C. Place processed images in another directory in Amazon EFS.
D. Enable Amazon CloudFront and configure the origin to be one of the EC2 instances in the fleet.
E. Store the uploaded images in an Amazon S3 bucket and configure an S3 bucket event notification to send a message to Amazon Simple Notification Service
(Amazon SNS). Create a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB) to pull messages from Amazon SNS to process the images
and place them in Amazon Elastic File System (Amazon EFS). Use Amazon CloudWatch metrics for the SNS message volume to scale out EC2 instances.
F. Enable Amazon CloudFront and configure the origin to be the ALB in front of the EC2 instances.
G. Store the uploaded images in an Amazon S3 bucket and configure an S3 bucket event notification to send a message to the Amazon Simple Queue Service
(Amazon SQS) queue.
H. Create a fleet of Amazon EC2 instances to pull messages from the SQS queue to process the images and place them in another S3 bucket.
I. Use Amazon CloudWatch metrics for queue depth to scale out EC2 instances.
J. Enable Amazon CloudFront and configure the origin to be the S3 bucket that contains the processed images.
K. Store the uploaded images on a shared Amazon Elastic Block Store (Amazon EBS) volume mounted to a fleet of Amazon EC2 Spot Instances.
L. Create an Amazon DynamoDB table that contains information about each uploaded image and whether it has been processed.
M. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to scale out EC2 instances.
N. Enable Amazon CloudFront and configure the origin to reference an Elastic Load Balancer in front of the fleet of EC2 instances.
Answer: C
NEW QUESTION 55
- (Exam Topic 1)
A large company is running a popular web application. The application runs on several Amazon EC2 Linux instances in an Auto Scaling group in a private subnet.
An Application Load Balancer is targeting the instances in the Auto Scaling group in the private subnet. AWS Systems Manager Session Manager is configured,
and AWS Systems Manager Agent is running on all the EC2 instances.
The company recently released a new version of the application. Some EC2 instances are now being marked as unhealthy and are being terminated. As a result,
the application is running at reduced capacity. A solutions architect tries to determine the root cause by analyzing Amazon CloudWatch logs that are collected from
the application, but the logs are inconclusive.
How should the solutions architect gain access to an EC2 instance to troubleshoot the issue?
Answer: D
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
For Amazon EC2 Auto Scaling, there are two primary process types: Launch and Terminate. The Launch process adds a new Amazon EC2 instance to an Auto
Scaling group, increasing its capacity; the Terminate process removes an instance, decreasing it. HealthCheck is not a primary process; it is one of the
suspendable processes, along with AddToLoadBalancer, AlarmNotification, AZRebalance, InstanceRefresh, ReplaceUnhealthy, and ScheduledActions. In this
scenario, instances are marked as unhealthy and then terminated, and the application runs at reduced capacity because the instances are terminated, not merely
because they are marked unhealthy. Suspending the relevant process keeps an unhealthy instance running long enough to connect with Session Manager and
troubleshoot it.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html#choosing-suspend-r
NEW QUESTION 60
- (Exam Topic 1)
A solutions architect is designing a network for a new cloud deployment. Each account will need autonomy to modify route tables and make changes. Centralized
and controlled egress internet connectivity is also needed. The cloud footprint is expected to grow to thousands of AWS accounts.
Which architecture will meet these requirements?
A. A centralized transit VPC with a VPN connection to a standalone VPC in each account.
B. Outbound internet traffic will be controlled by firewall appliances.
C. A centralized shared VPC with a subnet for each account.
D. Outbound internet traffic will be controlled through a fleet of proxy servers.
E. A shared services VPC to host central assets to include a fleet of firewalls with a route to the internet. Each spoke VPC will peer to the central VPC.
F. A shared transit gateway to which each VPC will be attached.
G. Outbound internet access will route through a fleet of VPN-attached firewalls.
Answer: D
Explanation:
https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centr
AWS Transit Gateway helps you design and implement networks at scale by acting as a cloud router. As your network grows, the complexity of managing
incremental connections can slow you down. AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network
and puts an end to complex peering relationships -- each new connection is only made once.
NEW QUESTION 62
- (Exam Topic 1)
A company maintains a restaurant review website. The website is a single-page application where files are stored in Amazon S3 and delivered using Amazon
CloudFront. The company receives several fake postings every day that are manually removed.
The security team has identified that most of the fake posts are from bots with IP addresses that have a bad reputation within the same global region. The team
needs to create a solution to help restrict the bots from accessing the website.
Which strategy should a solutions architect use?
A. Use AWS Firewall Manager to control the CloudFront distribution security settings.
B. Create a geographical block rule and associate it with Firewall Manager.
C. Associate an AWS WAF web ACL with the CloudFront distribution.
D. Select the managed Amazon IP reputation rule group for the web ACL with a deny action.
E. Use AWS Firewall Manager to control the CloudFront distribution security settings.
F. Select the managed Amazon IP reputation rule group and associate it with Firewall Manager with a deny action.
G. Associate an AWS WAF web ACL with the CloudFront distribution.
H. Create a rule group for the web ACL with a geographical match statement with a deny action.
Answer: B
Explanation:
IP reputation rule groups allow you to block requests based on their source. Choose one or more of these rule groups if you want to reduce your exposure to
bot traffic or exploitation attempts.
The Amazon IP reputation list rule group contains rules that are based on Amazon internal threat intelligence. This is useful if you would like to block IP addresses
typically associated with bots or other threats. Inspects for a list of IP addresses that have been identified as bots by Amazon threat intelligence.
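A sketch of the web ACL rule entry in the shape the WAFV2 API expects. The rule name, priority, and metric name are assumptions; `OverrideAction: None` keeps the managed rule group's own block actions in effect.

```python
def make_ip_reputation_rule(priority=0):
    """Build a web ACL rule that references the Amazon IP reputation
    managed rule group. With OverrideAction set to 'None', the rule
    group's own rules apply their configured (block) actions."""
    return {
        "Name": "AWSManagedIPReputation",  # hypothetical rule name
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesAmazonIpReputationList",
            }
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "ipReputation",  # hypothetical metric name
        },
    }

rule = make_ip_reputation_rule()
```

A rule like this would sit in the `Rules` list of the web ACL associated with the CloudFront distribution.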
NEW QUESTION 66
- (Exam Topic 1)
A company is serving files to its customers through an SFTP server that is accessible over the internet. The SFTP server is running on a single Amazon EC2
instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for authentication. The EC2
instance also has an attached security group that allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to
customers who access files. The solution must not change the way customers connect.
Which solution will meet these requirements?
. Sync all files from the SFTP server to the new multi-attach EBS volume.
Answer: B
Explanation:
https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-
type/
NEW QUESTION 69
- (Exam Topic 1)
A company wants to migrate an application to Amazon EC2 from VMware Infrastructure that runs in an
on-premises data center. A solutions architect must preserve the software and configuration settings during the migration.
What should the solutions architect do to meet these requirements?
A. Configure the AWS DataSync agent to start replicating the data store to Amazon FSx for Windows File Server. Use the SMB share to host the VMware data store.
B. Use VM Import/Export to move the VMs to Amazon EC2.
C. Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF). Create an Amazon S3 bucket to store the
image in the destination AWS Region.
D. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import command.
E. Configure the AWS Storage Gateway file service to export a Common Internet File System (CIFS) share.
F. Create a backup copy to the shared folder.
G. Sign in to the AWS Management Console and create an AMI from the backup copy. Launch an EC2 instance that is based on the AMI.
H. Create a managed-instance activation for a hybrid environment in AWS Systems Manager.
I. Download and install Systems Manager Agent on the on-premises VM. Register the VM with Systems Manager to be a managed instance. Use AWS Backup to
create a snapshot of the VM and create an AMI.
J. Launch an EC2 instance that is based on the AMI.
Answer: B
Explanation:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
- Export an OVF Template
- Create / use an Amazon S3 bucket for storing the exported images. The bucket must be in the Region where you want to import your VMs.
- Create an IAM role named vmimport.
- You'll use AWS CLI to run the import commands. https://aws.amazon.com/premiumsupport/knowledge-center/import-instances/
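The import step can be sketched as the disk-container document passed to `aws ec2 import-image --disk-containers file://containers.json`. The bucket name, key, and description below are hypothetical.

```python
def make_disk_container(bucket, key, fmt="ova", description="VMware export"):
    """Build one entry of the --disk-containers document used by
    'aws ec2 import-image'."""
    return {
        "Description": description,
        "Format": fmt,
        "UserBucket": {"S3Bucket": bucket, "S3Key": key},
    }

# Hypothetical exported appliance uploaded to the staging bucket.
container = make_disk_container("vm-exports", "app-server.ova")
containers_document = [container]
```

The command would be run with the `vmimport` IAM role in place, and the bucket must be in the Region where the instance will be imported.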
NEW QUESTION 73
- (Exam Topic 1)
A solutions architect is building a web application that uses an Amazon RDS for PostgreSQL DB instance. The DB instance is expected to receive many more
reads than writes. The solutions architect needs to ensure that the large amount of read traffic can be accommodated and that the DB instance is highly available.
Which steps should the solutions architect take to meet these requirements? (Select THREE.)
A. Create multiple read replicas and put them into an Auto Scaling group
B. Create multiple read replicas in different Availability Zones.
C. Create an Amazon Route 53 hosted zone and a record set for each read replica with a TTL and a weighted routing policy
D. Create an Application Load Balancer (ALB) and put the read replicas behind the ALB.
E. Configure an Amazon CloudWatch alarm to detect a failed read replica. Set the alarm to directly invoke an AWS Lambda function to delete its Route 53 record
set.
F. Configure an Amazon Route 53 health check for each read replica using its endpoint
Answer: BCF
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
You can use Amazon Route 53 weighted record sets to distribute requests across your read replicas. Within a Route 53 hosted zone, create individual record sets
for each DNS endpoint associated with your read replicas and give them the same weight. Then, direct requests to the endpoint of the record set. You can
incorporate Route 53 health checks to be sure that Route 53 directs traffic away from unavailable read replicas
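Each replica's record set could be created with a change batch along these lines; the record name, endpoint, set identifier, health check ID, and weight are all assumptions for illustration.

```python
def make_weighted_record(name, endpoint, set_id, health_check_id, weight=10):
    """Build an UPSERT change for a weighted CNAME record tied to a
    Route 53 health check, in the shape used by the
    ChangeResourceRecordSets API."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": endpoint}],
            "HealthCheckId": health_check_id,
        },
    }

# Hypothetical reader endpoint for one replica.
change = make_weighted_record(
    "reader.example.com",
    "replica-1.abc123.us-east-1.rds.amazonaws.com",
    "replica-1",
    "hc-replica-1",
)
```

Creating one such record per replica with equal weights spreads reads evenly, and the attached health checks pull failed replicas out of rotation.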
NEW QUESTION 78
- (Exam Topic 1)
A company is moving a business-critical multi-tier application to AWS. The architecture consists of a desktop client application and server infrastructure. The server
infrastructure resides in an on-premises data center that frequently fails to maintain the application uptime SLA of 99.95%. A solutions architect must re-architect
the application to ensure that it can meet or exceed the SLA.
The application contains a PostgreSQL database running on a single virtual machine. The business logic and presentation layers are load balanced between
multiple virtual machines. Remote users complain about slow load times while using this latency-sensitive application.
Which of the following will meet the availability requirements with little change to the application while improving user experience and minimizing costs?
A. Migrate the database to a PostgreSQL database in Amazon EC2. Host the application and presentation layers in automatically scaled Amazon ECS containers
behind an Application Load Balance
B. Allocate an Amazon Workspaces Workspace for each end user to improve the user experience.
C. Migrate the database to an Amazon RDS Aurora PostgreSQL configuration.
D. Host the application and presentation layers in an Auto Scaling configuration on Amazon EC2 instances behind an Application Load Balancer.
E. Use Amazon AppStream 2.0 to improve the user experience.
F. Migrate the database to an Amazon RDS PostgreSQL Multi-AZ configuration.
G. Host the application and presentation layers in automatically scaled AWS Fargate containers behind a Network Load Balancer.
H. Use Amazon ElastiCache to improve the user experience.
I. Migrate the database to an Amazon Redshift cluster with at least two nodes.
J. Combine and host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer.
K. Use Amazon CloudFront to improve the user experience.
Answer: B
Explanation:
Aurora improves availability by replicating data across multiple Availability Zones (six copies). Auto Scaling behind an ALB improves performance. AppStream 2.0
is similar to Citrix in that it delivers hosted applications to users, which addresses the slow load times remote users see with this latency-sensitive desktop client.
NEW QUESTION 82
- (Exam Topic 1)
A company requires that all internal application connectivity use private IP addresses. To facilitate this policy, a solutions architect has created interface endpoints
to connect to AWS public services. Upon testing, the solutions architect notices that the service names are resolving to public IP addresses, and that internal
services cannot connect to the interface endpoints.
Which step should the solutions architect take to resolve this issue?
A. Update the subnet route table with a route to the interface endpoint.
B. Enable the private DNS option on the VPC attributes.
C. Configure the security group on the interface endpoint to allow connectivity to the AWS services.
D. Configure an Amazon Route 53 private hosted zone with a conditional forwarder for the internal application.
Answer: B
Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html
NEW QUESTION 83
- (Exam Topic 1)
A company runs a popular public-facing ecommerce website. Its user base is growing quickly from a local
market to a national market. The website is hosted in an on-premises data center with web servers and a MySQL database. The company wants to migrate its
workload to AWS. A solutions architect needs to create a solution to:
• Improve security
• Improve reliability
• Improve availability
• Reduce latency
• Reduce maintenance
Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)
A. Use Amazon EC2 instances in two Availability Zones for the web servers in an Auto Scaling group behind an Application Load Balancer.
B. Migrate the database to a Multi-AZ Amazon Aurora MySQL DB cluster.
C. Use Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.
D. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages.
E. Use AWS WAF to improve website security.
F. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages.
G. Use AWS WAF to improve website security.
H. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.
Answer: ABE
NEW QUESTION 85
- (Exam Topic 1)
A company is using AWS CodePipeline for the CI/CD of an application to an Amazon EC2 Auto Scaling group. All AWS resources are defined in AWS
CloudFormation templates. The application artifacts are stored in an Amazon S3 bucket and deployed to the Auto Scaling group using instance user data scripts.
As the application has become more complex, recent resource changes in the CloudFormation templates have caused unplanned downtime.
How should a solutions architect improve the CI/CD pipeline to reduce the likelihood that changes in the templates will cause downtime?
A. Adapt the deployment scripts to detect and report CloudFormation error conditions when performing deployments.
B. Write test plans for a testing team to execute in a non-production environment before approving the change for production.
C. Implement automated testing using AWS CodeBuild in a test environment.
D. Use CloudFormation change sets to evaluate changes before deployment.
E. Use AWS CodeDeploy to leverage blue/green deployment patterns to allow evaluations and the ability to revert changes, if needed.
F. Use plugins for the integrated development environment (IDE) to check the templates for errors, and use the AWS CLI to validate that the templates are correct.
G. Adapt the deployment code to check for error conditions and generate notifications on errors.
H. Deploy to a test environment and execute a manual test plan before approving the change for production.
I. Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the user data deployment scripts.
J. Have the operators log in to running instances and go through a manual test plan to verify the application is running as expected.
Answer: B
Explanation:
https://aws.amazon.com/blogs/devops/performing-bluegreen-deployments-with-aws-codedeploy-and-auto-scalin When one adopts infrastructure as code, the
infrastructure code must be tested as well via automated testing, with the ability to revert to the original if things are not performing correctly.
NEW QUESTION 86
- (Exam Topic 1)
A multimedia company needs to deliver its video-on-demand (VOD) content to its subscribers in a
cost-effective way. The video files range in size from 1-15 GB and are typically viewed frequently for the first 6 months after creation, and then access decreases
considerably. The company requires all video files to remain immediately available for subscribers. There are now roughly 30,000 files, and the company
anticipates doubling that number over time.
What is the MOST cost-effective solution for delivering the company's VOD content?
C. Use AWS Elemental MediaConvert and store the adaptive bitrate video files in Amazon S3. Configure an AWS Elemental MediaPackage endpoint to deliver the
content from Amazon S3.
D. Store the video files in Amazon Elastic File System (Amazon EFS) Standard.
E. Enable EFS lifecycle management to move the video files to EFS Infrequent Access after 6 months.
F. Create an Amazon EC2 Auto Scaling group behind an Elastic Load Balancer to deliver the content from Amazon EFS.
G. Store the video files in Amazon S3 Standard.
H. Create S3 Lifecycle rules to move the video files to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months and to S3 Glacier Deep Archive after 1 year.
I. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin.
Answer: A
Explanation:
https://d1.awsstatic.com/whitepapers/amazon-cloudfront-for-media.pdf https://aws.amazon.com/solutions/implementations/video-on-demand-on-aws/
NEW QUESTION 88
- (Exam Topic 1)
A team collects and routes behavioral data for an entire company. The company runs a Multi-AZ VPC environment with public subnets, private subnets, and an
internet gateway. Each public subnet also contains a NAT gateway. Most of the company's applications read from and write to Amazon Kinesis Data Streams. Most
of the workloads are in private subnets.
A solutions architect must review the infrastructure. The solutions architect needs to reduce costs and maintain the function of the applications. The solutions
architect uses Cost Explorer and notices that the cost in the EC2-Other category is consistently high. A further review shows that NatGateway-Bytes charges are
increasing the cost in the EC2-Other category.
What should the solutions architect do to meet these requirements?
Answer: D
Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-reduce-nat-gateway-transfer-costs/
VPC endpoint policies enable you to control access by either attaching a policy to a VPC endpoint or by using additional fields in a policy that is attached to an IAM
user, group, or role to restrict access to only occur via the specified VPC endpoint
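The endpoint-only restriction mentioned above can be sketched as an IAM policy statement conditioned on `aws:sourceVpce`. The stream ARN, endpoint ID, and action list below are hypothetical examples.

```python
def make_vpce_only_statement(stream_arn, vpce_id):
    """Build an IAM policy statement that allows Kinesis access only
    for requests that arrive through the given VPC endpoint, keeping
    stream traffic off the NAT gateways."""
    return {
        "Effect": "Allow",
        "Action": ["kinesis:PutRecord", "kinesis:GetRecords"],
        "Resource": stream_arn,
        "Condition": {"StringEquals": {"aws:sourceVpce": vpce_id}},
    }

# Hypothetical stream and endpoint identifiers.
stmt = make_vpce_only_statement(
    "arn:aws:kinesis:us-east-1:111122223333:stream/behavior",
    "vpce-0abc1234",
)
```

With a Kinesis VPC endpoint in place, traffic from the private subnets reaches the streams over AWS PrivateLink instead of the NAT gateways, removing the NatGateway-Bytes charges for that workload.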
NEW QUESTION 90
- (Exam Topic 1)
A developer reports receiving an Error 403: Access Denied message when they try to download an object from an Amazon S3 bucket. The S3 bucket is accessed
using an S3 endpoint inside a VPC, and is encrypted with an AWS KMS key. A solutions architect has verified that the developer is assuming the correct IAM role
in the account that allows the object to be downloaded. The S3 bucket policy and the NACL are also valid.
Which additional step should the solutions architect take to troubleshoot this issue?
A. Ensure that blocking all public access has not been enabled in the S3 bucket.
B. Verify that the IAM role has permission to decrypt the referenced KMS key.
C. Verify that the IAM role has the correct trust relationship configured.
D. Check that local firewall rules are not preventing access to the S3 endpoint.
Answer: B
NEW QUESTION 94
- (Exam Topic 1)
A company has an internal application running on AWS that is used to track and process shipments in the company's warehouse. Currently, after the system
receives an order, it emails the staff the information needed to ship a package. Once the package is shipped, the staff replies to the email and the order is marked
as shipped.
The company wants to stop using email in the application and move to a serverless application model. Which architecture solution meets these requirements?
A. Use AWS Batch to configure the different tasks required to ship a package.
B. Have AWS Batch trigger an AWS Lambda function that creates and prints a shipping label.
C. Once that label is scanned,
D. as it leaves the warehouse, have another Lambda function move the process to the next step in the AWS Batch job.
E. When a new order is created, store the order information in Amazon SQS.
F. Have AWS Lambda check the queue every 5 minutes and process any needed work.
G. When an order needs to be shipped, have Lambda print the label in the warehouse.
H. Once the label has been scanned, as it leaves the warehouse, have an Amazon EC2 instance update Amazon SQS.
I. Update the application to store new order information in Amazon DynamoDB.
J. When a new order is created, trigger an AWS Step Functions workflow, mark the orders as "in progress," and print a package label to the warehouse.
K. Once the label has been scanned and fulfilled, the application will trigger an AWS Lambda function that will mark the order as shipped and complete the
workflow.
L. Store new order information in Amazon EFS.
M. Have instances pull the new information from the NFS share and send that information to printers in the warehouse.
N. Once the label has been scanned, as it leaves the warehouse, have Amazon API Gateway call the instances to remove the order information from Amazon
EFS.
Answer: C
NEW QUESTION 97
- (Exam Topic 1)
A company uses AWS Transit Gateway for a hub-and-spoke model to manage network traffic between many VPCs. The company is developing a new service that
must be able to send data at 100 Gbps. The company needs a faster connection to other VPCs in the same AWS Region.
Which solution will meet these requirements?
Answer: D
A. Use Amazon EC2 Image Builder to create AMIs for the legacy servers. Use the AMIs to provision EC2 instances to recreate the applications in the AWS Cloud. Place an Application Load Balancer (ALB) in front of the EC2 instances. Use Amazon Route 53 to point the DNS names of the web forms to the ALB.
B. Create one Amazon DynamoDB table to store data for all the data input. Use the application form name as the table key to distinguish data items. Create an Amazon Kinesis data stream to receive the data input and store the input in DynamoDB. Use Amazon Route 53 to point the DNS names of the web forms to the Kinesis data stream's endpoint.
C. Create Docker images for each server of the legacy web form applications. Create an Amazon Elastic Container Service (Amazon ECS) cluster on AWS Fargate. Place an Application Load Balancer in front of the ECS cluster. Use Fargate task storage to store the web form data.
D. Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding API Gateway endpoint.
Answer: D
Explanation:
Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding API Gateway endpoint.
A. It exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
B. It caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
C. It exhausted the maximum number of allowed connections to the database instance.
D. It exhausted the network bandwidth available to the RDS for MySQL DB instance.
Answer: A
Explanation:
"When using General Purpose SSD storage, your DB instance receives an initial I/O credit balance of 5.4 million I/O credits. This initial credit balance is enough to
sustain a burst performance of 3,000 IOPS for 30 minutes."
https://aws.amazon.com/blogs/database/how-to-use-cloudwatch-metrics-to-decide-between-general-purpose-or
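The 30-minute figure in the quote follows directly from the credit arithmetic. A quick sanity check (the 100 GiB volume size is an assumed example, not from the question):

```python
# gp2 volumes earn a baseline of 3 IOPS per GiB and start with a
# 5.4 million I/O credit balance that funds bursts up to 3,000 IOPS.
INITIAL_CREDITS = 5_400_000
BURST_IOPS = 3_000

def burst_minutes(volume_gib: int) -> float:
    """Minutes of 3,000-IOPS burst before the initial credits run out.

    Credits drain at (burst - baseline) IOPS, because the baseline
    rate is replenished continuously while the volume bursts.
    """
    baseline = 3 * volume_gib           # 3 IOPS per GiB
    drain_rate = BURST_IOPS - baseline  # net credits spent per second
    if drain_rate <= 0:
        return float("inf")             # >= 1,000 GiB never depletes credits
    return INITIAL_CREDITS / drain_rate / 60

# Ignoring baseline replenishment entirely gives the documented figure:
print(INITIAL_CREDITS / BURST_IOPS / 60)  # 30.0 minutes
print(round(burst_minutes(100), 1))       # 33.3 minutes for a 100 GiB volume
```

This is why provisioning too little gp2 storage (low baseline) makes the instance hit the credit wall quickly, matching answer A.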
A. Update the S3 bucket policy for the s3-elb-logs bucket to allow the s3:PutBucketLogging action for the central AWS account ID.
B. Update the S3 bucket policy for the s3-elb-logs bucket to allow the s3:PutObject and s3:DeleteObject actions for the AppDev, AppTest, and AppProd account IDs.
C. Update the S3 bucket policy for the s3-elb-logs bucket to allow the s3:PutObject action for the AppDev, AppTest, and AppProd account IDs.
D. Enable access logging for the ELBs. Set the S3 location to the s3-elb-logs bucket.
E. Enable Amazon S3 default encryption using server-side encryption with S3 managed encryption keys (SSE-S3) for the s3-elb-logs S3 bucket.
Answer: AE
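The bucket-policy options above differ only in the action and principal of a single statement. A sketch of how such a policy document is assembled (the account IDs and the chosen action are illustrative placeholders, not the graded answer):

```python
import json

# Sketch of a bucket policy on the s3-elb-logs bucket that lets a set
# of application accounts write log objects. Account IDs are placeholders.
app_account_ids = ["111111111111", "222222222222", "333333333333"]

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLogObjectWrites",
            "Effect": "Allow",
            # One principal ARN per application account.
            "Principal": {"AWS": [f"arn:aws:iam::{acct}:root" for acct in app_account_ids]},
            "Action": "s3:PutObject",
            # Log delivery writes objects, so the resource is the key space.
            "Resource": "arn:aws:s3:::s3-elb-logs/*",
        }
    ],
}
print(json.dumps(bucket_policy)[:60])
```

Swapping the `Action` and `Principal` fields reproduces each of the variants the options describe.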
A. Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
Answer: D
A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
B. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the operations team. Set a strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services required to complete their job function.
C. Create a script that runs on each account that checks user accounts for adherence to a security policy. Disable any user or service accounts that do not comply.
D. Create all user accounts in the production account. Create roles for access in the production account and testing account. Grant cross-account access from the production account to the testing account.
Answer: A
A. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Enable versioning on the S3 bucket.
B. Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-east-1 to eu-west-1. Set up MFA delete on the S3 bucket in us-east-1.
C. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket.
D. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Set up S3 cross-region replication from us-east-1 to eu-west-1.
Answer: B
A. Use AWS Application Migration Service (CloudEndure Migration) to migrate the Windows servers to AWS. Create a Replication Settings template. Install the AWS Replication Agent on the source servers.
B. Use AWS DataSync to migrate the Windows servers to AWS. Install the DataSync agent on the source servers. Configure a blueprint for the target servers. Begin the replication process.
C. Use AWS Server Migration Service (AWS SMS) to migrate the Windows servers to AWS. Install the SMS Connector on the source servers. Replicate the source servers to AWS. Convert the replicated volumes to AMIs to launch EC2 instances.
D. Use AWS Migration Hub to migrate the Windows servers to AWS. Create a project in Migration Hub. Track the progress of server migration by using the built-in dashboard.
Answer: A
A. Configure the VPC DHCP options set to point to on-premises DNS server IP addresses. Ensure that security groups for EC2 instances allow outbound access to port 53 on those DNS server IP addresses.
B. Launch an EC2 instance that has DNS BIND installed and configured. Ensure that the security groups that are attached to the EC2 instance can access the on-premises DNS server IP address on port 53. Configure BIND to forward DNS queries to on-premises DNS server IP addresses. Configure each migrated EC2 instance's DNS settings to point to the BIND server IP address.
C. Create a new outbound endpoint in Route 53, and attach the endpoint to the VPC. Ensure that the security groups that are attached to the endpoint can access the on-premises DNS server IP address on port 53. Create a new Route 53 Resolver rule that routes on-premises-designated traffic to the on-premises DNS server.
D. Create a new private DNS zone in Route 53 with the same domain name as the on-premises domain. Create a single wildcard record with the on-premises DNS server IP address as the record's address.
Answer: A
A. Create an AWS Site-to-Site VPN connection. Configure integration between the VPN and AD DS. Use an Amazon WorkSpaces client with MFA support enabled to establish a VPN connection.
B. Create an AWS Client VPN endpoint. Create an AD Connector directory for integration with AD DS. Enable MFA for AD Connector. Use AWS Client VPN to establish a VPN connection.
C. Create multiple AWS Site-to-Site VPN connections by using AWS VPN CloudHub. Configure integration between AWS VPN CloudHub and AD DS. Use AWS Copilot to establish a VPN connection.
D. Create an Amazon WorkLink endpoint. Configure integration between Amazon WorkLink and AD DS. Enable MFA in Amazon WorkLink. Use AWS Client VPN to establish a VPN connection.
Answer: B
A. Create an organization in AWS Organizations. Create a single SCP for least privilege access across all accounts. Create a single OU for all accounts. Configure an IAM identity provider for federation with the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with conformance packs for all accounts.
B. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review the included guardrails for SCPs. Check AWS Config for areas that require additions. Add OUs as necessary. Connect AWS Single Sign-On to the on-premises AD FS server.
C. Create an organization in AWS Organizations. Create SCPs for least privilege access. Create an OU structure, and use it to group AWS accounts. Connect AWS Single Sign-On to the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with aggregators and conformance packs.
D. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review the included guardrails for SCPs. Check AWS Config for areas that require additions. Configure an IAM identity provider for federation with the on-premises AD FS server.
Answer: A
A. Configure CloudTrail and VPC Flow Logs in each AWS account to send data to a centralized Amazon S3 bucket in the logging account. Create an AWS Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
B. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Configure a CloudWatch subscription filter in each AWS account to send data to Amazon Kinesis Data Firehose in the logging account. Load data from Kinesis Data Firehose into Amazon ES in the logging account.
C. Configure CloudTrail and VPC Flow Logs to send data to a separate Amazon S3 bucket in each AWS account. Create an AWS Lambda function triggered by S3 events to copy the data to a centralized logging bucket. Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
D. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Create AWS Lambda functions in each AWS account to subscribe to the log groups and stream the data to an Amazon S3 bucket in the logging account. Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
Answer: A
A. Create separate OUs in AWS Organizations for each development unit. Assign the created OUs to the company AWS accounts. Create separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit name. Assign the SCP to the corresponding OU.
B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM policy for the developers' assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit.
C. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Create an SCP with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Assign the SCP to the root OU.
D. Create separate IAM policies for each development unit. For every IAM policy, add an allow action and a StringEquals condition for the DevelopmentUnit resource tag and the development unit name. During SAML federation, use AWS Security Token Service (AWS STS) to assign the IAM policy and match the development unit name to the assumed IAM role.
Answer: A
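The StringNotEquals deny pattern that several of these options rely on compares a resource tag against the caller's session tag at evaluation time. A sketch of the policy document (the `Action`/`Resource` wildcards are illustrative placeholders; only the tag key and condition operator come from the question):

```python
import json

# IAM/SCP policy sketch: deny access to any resource whose
# DevelopmentUnit tag does not match the caller's session tag.
# ${aws:PrincipalTag/DevelopmentUnit} is resolved by IAM at request time.
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCrossUnitAccess",
            "Effect": "Deny",
            "Action": "*",        # illustrative; a real policy would scope this
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:ResourceTag/DevelopmentUnit": "${aws:PrincipalTag/DevelopmentUnit}"
                }
            },
        }
    ],
}
print(json.dumps(deny_policy)[:60])
```

Because the condition references the principal tag as a policy variable, one policy serves every development unit without per-unit maintenance.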
A. Create a new RDS for PostgreSQL DB instance in the target account Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from
the source database to the target database.
B. Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for PostgreSQL DB instance in the target account with the schema and initial data
from the source database
C. Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account.
D. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account
E. Temporarily allow the source DB instance to be publicly accessible to provide connectivity from the VPC in the target account Configure the security groups that
are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
F. Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source
database to the target database When the migration is complete, change the CNAME record to point to the target DB instance endpoint
G. Use AWS Database Migration Service (AWS DMS) in the target account to perform a change data capture (CDC) migration from the source database to the
target database When the migration is complete change the CNAME record to point to the target DB instance endpoint
Answer: BCE
A. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to use Session Manager to establish a session with the application instances. Terminate the bastion host.
B. Update the security group of the bastion host to allow traffic from only the public IP addresses of the branch offices.
C. Configure an AWS Client VPN endpoint and provision each system administrator with a certificate to establish a VPN connection to the application VPC. Update the security group of the application instances to allow traffic from only the Client VPN IPv4 CIDR. Terminate the bastion host.
D. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to issue commands to the application instances by using Systems Manager Run Command. Terminate the bastion host.
Answer: A
Explanation:
"Session Manager removes the need to open inbound ports, manage SSH keys, or use bastion hosts" Ref: https://docs.aws.amazon.com/systems-
manager/latest/userguide/session-manager.html
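Granting the administrators Session Manager access comes down to an IAM policy. A sketch of what that policy could look like, assuming (as an illustration) that the application instances carry a `Role=application` tag; the tag key/value and scoping are assumptions, while the session-ownership resource pattern follows the Session Manager documentation:

```python
import json

# IAM policy sketch for option A: let administrators start Session
# Manager sessions only on instances tagged Role=application, and
# manage only sessions they started themselves.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ssm:StartSession"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                # Assumed tag used to identify the application instances.
                "StringEquals": {"aws:ResourceTag/Role": "application"}
            },
        },
        {
            "Effect": "Allow",
            "Action": ["ssm:TerminateSession", "ssm:ResumeSession"],
            # Restrict to sessions the calling user started.
            "Resource": "arn:aws:ssm:*:*:session/${aws:username}-*",
        },
    ],
}
print(json.dumps(session_policy)[:60])
```

No inbound security-group rules are needed, which is exactly why the bastion host can be terminated.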
A. Set up an Amazon Simple Notification Service (Amazon SNS) topic in the security team's AWS account. Deploy an AWS Lambda function in each AWS account. Configure the Lambda function to run every time the SNS topic receives a message. Configure the Lambda function to take an IP address as input and add it to a list of security groups in the account. Instruct the security team to distribute changes by publishing messages to its SNS topic.
B. Create new customer-managed prefix lists in each AWS account within the organization. Populate the prefix lists in each account with all internal CIDR ranges. Notify the owner of each AWS account to allow the new customer-managed prefix list IDs in their accounts in their security groups. Instruct the security team to
Answer: A
A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory.
D. Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.
Answer: A
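Whether the cache lives in the API Gateway stage or in ElastiCache, the read path follows the same cache-aside idea: serve from the cache on a hit, fall back to the database on a miss, then populate the cache. A minimal in-memory sketch, with a dict standing in for the real cache and a placeholder for the Aurora query:

```python
import time

# Cache-aside sketch: a dict stands in for ElastiCache/Redis here,
# and query_database is a placeholder for the expensive Aurora call.
cache: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 60.0

def query_database(key: str) -> str:
    """Placeholder for the real database round trip."""
    return f"row-for-{key}"

def get(key: str) -> str:
    now = time.monotonic()
    hit = cache.get(key)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]                 # cache hit: no database round trip
    value = query_database(key)       # cache miss: query the database...
    cache[key] = (now, value)         # ...and populate the cache with a TTL
    return value

print(get("order-1"))  # first call queries the database
print(get("order-1"))  # second call is served from the cache
```

The TTL keeps stale rows from being served forever; tuning it is the usual trade-off between freshness and database load.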
A. Create three public subnets in the Neptune VPC and route traffic through an internet gateway. Host the Lambda functions in the three new public subnets.
B. Create three private subnets in the Neptune VPC and route internet traffic through a NAT gateway. Host the Lambda functions in the three new private subnets.
C. Host the Lambda functions outside the VPC. Update the Neptune security group to allow access from the IP ranges of the Lambda functions.
D. Host the Lambda functions outside the VPC. Create a VPC endpoint for the Neptune database, and have the Lambda functions access Neptune over the VPC endpoint.
E. Create three private subnets in the Neptune VPC. Host the Lambda functions in the three new isolated subnets. Create a VPC endpoint for DynamoDB, and route DynamoDB traffic to the VPC endpoint.
Answer: AC
A. Deploy a CI/CD pipeline that incorporates AMIs to contain the applications and their configurations. Deploy the applications by replacing Amazon EC2 instances.
B. Specify AWS Elastic Beanstalk to stage in a secondary environment as the deployment target for the CI/CD pipeline of the applications. To deploy, swap the staging and production environment URLs.
C. Use AWS Systems Manager to re-provision the infrastructure for each deployment. Update the Amazon EC2 user data to pull the latest code artifact from Amazon S3, and use Amazon Route 53 weighted routing to point to the new environment.
D. Roll out all application updates as part of an Auto Scaling event using prebuilt AMIs. Use new versions of the AMIs to add instances, and phase out all instances that use the previous AMI version with the configured termination policy during a deployment event.
Answer: B
Explanation:
It is the fastest approach for rolling back and for deploying changes every hour.
A. A blue/green deployment
B. A linear deployment
C. A canary deployment
D. An all-at-once deployment
Answer: C
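A canary deployment shifts a small, fixed share of traffic to the new version before the full rollout. A toy router makes the idea concrete; the 10% split and version names are assumed examples, and hashing a stable client attribute keeps each client pinned to one version during the canary window:

```python
import hashlib

# Toy canary router: send roughly 10% of clients to the new version.
CANARY_PERCENT = 10  # assumed example split

def route(client_id: str) -> str:
    # Hash a stable attribute so routing is deterministic per client.
    bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < CANARY_PERCENT else "v1-stable"

versions = [route(f"client-{i}") for i in range(1000)]
share = versions.count("v2-canary") / len(versions)
print(f"canary share: {share:.1%}")  # close to 10%
```

If the canary cohort shows errors, only that slice of traffic is affected and the split can be rolled back to 0%, which is what distinguishes a canary from an all-at-once deployment.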
A. Use AWS Backup to create a point-in-time backup of the file system. Restore the backup to a new FSx for Windows File Server file system. Select SSD as the storage type. Select 32 MBps as the throughput capacity. When the backup and restore process is completed, adjust the DNS alias accordingly. Delete the original file system.
B. Disconnect users from the file system. In the Amazon FSx console, update the throughput capacity to 32 MBps. Update the storage type to SSD. Reconnect users to the file system.
C. Deploy an AWS DataSync agent onto a new Amazon EC2 instance. Create a task. Configure the existing file system as the source location. Configure a new FSx for Windows File Server file system with SSD storage and 32 MBps of throughput as the target location. Schedule the task. When the task is completed, adjust the DNS alias accordingly. Delete the original file system.
D. Enable shadow copies on the existing file system by using a Windows PowerShell command. Schedule the shadow copy job to create a point-in-time backup of the file system. Choose to restore previous versions. Create a new FSx for Windows File Server file system with SSD storage and 32 MBps of throughput. When the copy job is completed, adjust the DNS alias. Delete the original file system.
Answer: D
A. Use AWS WAF to protect both APIs. Configure Amazon Inspector to analyze the legacy API. Configure Amazon GuardDuty to monitor for malicious attempts to access the APIs.
B. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to analyze both APIs. Configure Amazon GuardDuty to block malicious attempts to access the APIs.
C. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to analyze the legacy API. Configure Amazon GuardDuty to monitor for malicious attempts to access the APIs.
D. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to protect the legacy API. Configure Amazon GuardDuty to block malicious attempts to access the APIs.
Answer: C
During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solutions architect confirmed that the record set was created correctly in Route 53.
Which combination of steps should the solutions architect take to resolve this issue? (Select TWO.)
A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance's private IP in the private hosted zone.
B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
E. Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.
Answer: CE