Practice Free SAP-C02 Exam Online Questions
A company runs an application on Amazon EC2 and AWS Lambda. The application stores temporary data in Amazon S3. The S3 objects are deleted after 24 hours. The company deploys new versions of the application by launching AWS CloudFormation stacks. The stacks create the required resources. After validating a new version, the company deletes the old stack. The deletion of an old development stack recently failed.
A solutions architect needs to resolve this issue without major architecture changes.
Which solution will meet these requirements?
- A . Create a Lambda function to delete objects from the S3 bucket. Add the Lambda function as a custom resource in the CloudFormation stack with a DependsOn attribute that points to the S3 bucket resource.
- B . Modify the CloudFormation stack to attach a DeletionPolicy attribute with a value of Delete to the S3 bucket.
- C . Update the CloudFormation stack to add a DeletionPolicy attribute with a value of Snapshot for the S3 bucket resource.
- D . Update the CloudFormation template to create an Amazon EFS file system to store temporary files instead of Amazon S3. Configure the Lambda functions to run in the same VPC as the EFS file system.
A
Explanation:
CloudFormation cannot delete a non-empty S3 bucket. Option A adds a custom Lambda-backed resource that deletes all objects in the S3 bucket before the bucket itself is deleted. Because of the DependsOn attribute, CloudFormation deletes the custom resource first during stack deletion, so the cleanup Lambda runs before CloudFormation attempts to delete the bucket.
B: Adding DeletionPolicy: Delete does not resolve the issue if the bucket still contains objects.
C: Snapshot doesn’t apply to S3 and won’t help here.
D: Changing to Amazon EFS would require architectural changes, which are not allowed per requirements.
Reference:
https://aws.amazon.com/blogs/devops/safely-delete-s3-buckets-using-aws-cloudformation/
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
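A minimal sketch of the wiring option A describes. The resource names (`TempDataBucket`, `EmptyBucketOnDelete`) and the `empty_bucket` helper are hypothetical, and the client is assumed to follow boto3 S3 semantics; a real custom-resource handler would also paginate and signal success or failure back to CloudFormation (for example with the `cfnresponse` module):

```python
# Hypothetical template fragment: a Lambda-backed custom resource with
# DependsOn on the bucket. On stack deletion, CloudFormation removes the
# custom resource first, which triggers the cleanup Lambda before the
# bucket itself is deleted. (The cleanup Lambda's own AWS::Lambda::Function
# resource, referenced as EmptyBucketFunction, is elided for brevity.)
template = {
    "Resources": {
        "TempDataBucket": {"Type": "AWS::S3::Bucket"},
        "EmptyBucketOnDelete": {
            "Type": "Custom::EmptyBucket",
            "DependsOn": "TempDataBucket",
            "Properties": {
                "ServiceToken": {"Fn::GetAtt": ["EmptyBucketFunction", "Arn"]},
                "BucketName": {"Ref": "TempDataBucket"},
            },
        },
    }
}

def empty_bucket(s3_client, bucket_name):
    """What the custom resource's Lambda would do on a Delete event:
    list the bucket's objects and delete them in one batch.
    Returns the number of objects deleted."""
    objects = [
        {"Key": obj["Key"]}
        for obj in s3_client.list_objects_v2(Bucket=bucket_name).get("Contents", [])
    ]
    if objects:
        s3_client.delete_objects(Bucket=bucket_name, Delete={"Objects": objects})
    return len(objects)
```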
A company is running a critical stateful web application on two Linux Amazon EC2 instances behind an Application Load Balancer (ALB) with an Amazon RDS for MySQL database. The company hosts the DNS records for the application in Amazon Route 53. A solutions architect must recommend a solution to improve the resiliency of the application.
The solution must meet the following objectives:
• Application tier: RPO of 2 minutes, RTO of 30 minutes
• Database tier: RPO of 5 minutes, RTO of 30 minutes
The company does not want to make significant changes to the existing application architecture. The company must ensure optimal latency after a failover.
Which solution will meet these requirements?
- A . Configure the EC2 instances to use AWS Elastic Disaster Recovery. Create a cross-Region read replica for the RDS DB instance. Create an ALB in a second AWS Region. Create an AWS Global Accelerator endpoint and associate the endpoint with the ALBs. Update DNS records to point to the Global Accelerator endpoint.
- B . Configure the EC2 instances to use Amazon Data Lifecycle Manager (Amazon DLM) to take snapshots of the EBS volumes. Configure RDS automated backups. Configure backup replication to a second AWS Region. Create an ALB in the second Region. Create an AWS Global Accelerator endpoint, and associate the endpoint with the ALBs. Update DNS records to point to the Global Accelerator endpoint.
- C . Create a backup plan in AWS Backup for the EC2 instances and RDS DB instance. Configure backup replication to a second AWS Region. Create an ALB in the second Region. Configure an Amazon CloudFront distribution in front of the ALB. Update DNS records to point to CloudFront.
- D . Configure the EC2 instances to use Amazon Data Lifecycle Manager (Amazon DLM) to take snapshots of the EBS volumes. Create a cross-Region read replica for the RDS DB instance. Create an ALB in a second AWS Region. Create an AWS Global Accelerator endpoint and associate the endpoint with the ALBs.
B
Explanation:
This option meets the RPO and RTO requirements for both tiers: Amazon DLM manages EBS snapshots for the application tier, while RDS automated backups with cross-Region backup replication cover the database tier. AWS Global Accelerator ensures optimal latency after a failover by directing traffic to the closest healthy endpoint.
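The database part of option B maps to a single RDS API call, `StartDBInstanceAutomatedBackupsReplication`, issued in the destination Region. A hedged sketch, assuming a boto3-style RDS client; the retention period value is an assumption:

```python
def start_backup_replication(rds_dest_client, source_db_arn):
    """Start replicating an RDS instance's automated backups into another
    Region (the database-tier step of option B). The client must be created
    for the *destination* Region; the source instance is identified by ARN."""
    return rds_dest_client.start_db_instance_automated_backups_replication(
        SourceDBInstanceArn=source_db_arn,
        BackupRetentionPeriod=7,  # days to keep replicated backups (assumed value)
    )
```

Restoring from a replicated backup in the second Region, together with DLM snapshots for the EC2 instances, is what keeps the recovery within the stated RPO/RTO targets.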
A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The company's applications and databases are running in Account B. A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53.
During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solutions architect confirmed that the record set was created correctly in Route 53.
Which combination of steps should the solutions architect take to resolve this issue? (Select TWO)
- A . Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance's private IP in the private hosted zone.
- B . Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
- C . Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
- D . Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
- E . Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.
C,E
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/private-hosted-zone-different-account/
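The steps in options C and E, plus the documented cleanup, can be sketched as three Route 53 calls. The client objects are assumed to be boto3 Route 53 clients created with credentials for the respective accounts:

```python
def share_private_zone(route53_account_a, route53_account_b, zone_id, vpc_region, vpc_id):
    """Cross-account private hosted zone association:
    1) Account A (zone owner) authorizes the association for Account B's VPC.
    2) Account B associates its VPC with the zone.
    3) The now-unneeded authorization is deleted in Account A."""
    vpc = {"VPCRegion": vpc_region, "VPCId": vpc_id}
    route53_account_a.create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)
    route53_account_b.associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)
    route53_account_a.delete_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)
```

Once the association exists, EC2 instances in the new VPC resolve db.example.com through the private hosted zone, provided the VPC has DNS resolution and DNS hostnames enabled.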
A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic.
A solutions architect needs to implement a solution so that the app can handle the new and varying load.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
- B . Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
- C . Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
- D . Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
A
Explanation:
By breaking the monolithic API into individual Lambda functions and using API Gateway to handle incoming requests, the application scales automatically with new and varying load, with no manual scaling actions. It also removes the need to keep EC2 instances running at all times: the company pays only per request and for Lambda execution duration.
Updating the Route 53 record to point to the API Gateway API directs client traffic to the new endpoint.
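With a Lambda proxy integration behind an API Gateway REST API, each function receives the request as an event dictionary. A minimal illustrative handler; the route and payload shape are assumptions, not part of the original scenario:

```python
import json

def handler(event, context):
    """Minimal sketch of an API Gateway REST API (Lambda proxy
    integration) handler: echo the requested path in a JSON response.
    API Gateway passes the request path under the "path" key."""
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"handled {path}"}),
    }
```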
A retail company wants to improve its application architecture. The company’s applications register new orders, handle returns of merchandise, and provide analytics. The applications store retail data in a MySQL database and an Oracle OLAP analytics database. All the applications and databases are hosted on Amazon EC2 instances.
Each application consists of several components that handle different parts of the order process. These components use incoming data from different sources. A separate ETL job runs every week and copies data from each application to the analytics database.
A solutions architect must redesign the architecture into an event-driven solution that uses serverless services. The solution must provide updated analytics in near real time.
Which solution will meet these requirements?
- A . Migrate the individual applications as microservices to Amazon ECS containers that use AWS Fargate. Keep the retail MySQL database on Amazon EC2. Move the analytics database to Amazon Neptune. Use Amazon SQS to send all the incoming data to the microservices and the analytics database.
- B . Create an Auto Scaling group for each application. Specify the necessary number of EC2 instances in each Auto Scaling group. Migrate the retail MySQL database and the analytics database to Amazon Aurora MySQL. Use Amazon SNS to send all the incoming data to the correct EC2 instances and the analytics database.
- C . Migrate the individual applications as microservices to Amazon EKS containers that use AWS Fargate. Migrate the retail MySQL database to Amazon Aurora Serverless MySQL. Migrate the analytics database to Amazon Redshift Serverless. Use Amazon EventBridge to send all the incoming data to the microservices and the analytics database.
- D . Migrate the individual applications as microservices to Amazon AppStream 2.0. Migrate the retail MySQL database to Amazon Aurora MySQL. Migrate the analytics database to Amazon Redshift Serverless. Use AWS IoT Core to send all the incoming data to the microservices and the analytics database.
C
Explanation:
The requirements call for an event-driven redesign that uses serverless services and provides near real-time analytics updates. The current architecture relies on weekly ETL jobs, which is incompatible with near real-time analytics.
A serverless, event-driven architecture on AWS typically uses a managed event bus for decoupling producers and consumers and enabling routing/filtering to multiple targets. The analytics requirement suggests that events (orders, returns, updates) should be streamed to an analytics store continuously rather than copied on a weekly schedule.
Option C aligns with these goals. It modernizes the application components into microservices on a serverless compute substrate (AWS Fargate) and modernizes the databases to managed serverless offerings where possible. Aurora Serverless for MySQL provides an on-demand relational database option that reduces operational overhead for the transactional store. Redshift Serverless provides a managed data warehousing service suitable for analytics without provisioning clusters. Using Amazon EventBridge provides a serverless event routing layer that can deliver events from multiple sources to multiple targets, which supports near real-time propagation of business events from the applications into analytics ingestion and processing.
Option A is not fully aligned: it retains the transactional MySQL database on EC2 (not serverless/managed for the database layer) and moves analytics to Amazon Neptune, which is a graph database, not the typical replacement for an OLAP analytics database. SQS is a queue suited for point-to-point message processing, not a flexible event bus for fan-out to multiple analytics consumers and routing based on event patterns.
Option B is not serverless because it uses EC2 Auto Scaling groups for application compute. Although SNS can distribute messages, this option keeps compute server-based and does not directly provide a near real-time analytics warehouse pattern; it also uses Aurora MySQL for analytics, which is not a typical OLAP warehouse replacement compared to Redshift.
Option D is not appropriate: Amazon AppStream 2.0 is a service for streaming desktop applications to users, not for hosting microservices. AWS IoT Core is optimized for IoT device messaging and telemetry and is not the standard integration backbone for business application event routing across microservices and analytics.
Therefore, option C best satisfies the requirement for an event-driven, serverless architecture with near real-time analytics.
References:
AWS documentation on event-driven architectures using Amazon EventBridge for routing events from producers to multiple consumers with filtering rules.
AWS documentation on Amazon Aurora Serverless for managed, on-demand relational database capacity with reduced operational management.
AWS documentation on Amazon Redshift Serverless as a managed analytics warehouse that supports near real-time ingestion and querying without provisioning clusters.
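In option C's event-driven design, producers publish business events to an EventBridge bus and rules fan them out to the microservices and the analytics ingestion path. A hedged sketch, assuming a boto3-style EventBridge client; the `Source` and `DetailType` values are illustrative:

```python
import json

def publish_order_event(events_client, bus_name, order):
    """Publish a business event (e.g. a new order) to an EventBridge bus.
    Rules on the bus route matching events to the microservices and to
    analytics ingestion, replacing the weekly ETL copy with near
    real-time propagation."""
    return events_client.put_events(
        Entries=[{
            "EventBusName": bus_name,
            "Source": "retail.orders",      # assumed source name
            "DetailType": "OrderPlaced",    # assumed detail type
            "Detail": json.dumps(order),
        }]
    )
```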
A company has many AWS accounts in an organization in AWS Organizations. The accounts contain many Amazon EC2 instances that run different types of workloads. The workloads have different usage patterns.
The company needs recommendations for how to rightsize the EC2 instances based on CPU and memory usage during the last 90 days.
Which combination of steps will provide these recommendations? (Select THREE.)
- A . Opt in to AWS Compute Optimizer and enable trusted access for Compute Optimizer for the organization.
- B . Configure a delegated administrator account for AWS Systems Manager for the organization.
- C . Use an AWS CloudFormation stack set to enable detailed monitoring for all the EC2 instances.
- D . Install and configure the Amazon CloudWatch agent on all the EC2 instances to send memory utilization metrics to CloudWatch.
- E . Activate enhanced metrics in AWS Compute Optimizer.
- F . Configure AWS Systems Manager to pass metrics to AWS Trusted Advisor.
A company’s solutions architect is analyzing costs of a multi-application environment. The environment is deployed across multiple Availability Zones in a single AWS Region. After a recent acquisition, the company manages two organizations in AWS Organizations. The company has created multiple service provider applications as AWS PrivateLink-powered VPC endpoint services in one organization. The company has created multiple service consumer applications in the other organization.
Data transfer charges are much higher than the company expected, and the solutions architect needs to reduce the costs. The solutions architect must recommend guidelines for developers to follow when they deploy services. These guidelines must minimize data transfer charges for the whole environment.
Which guidelines meet these requirements? (Select TWO.)
- A . Use AWS Resource Access Manager to share the subnets that host the service provider applications with other accounts in the organization.
- B . Place the service provider applications and the service consumer applications in AWS accounts in the same organization.
- C . Turn off cross-zone load balancing for the Network Load Balancer in all service provider application deployments.
- D . Ensure that service consumer compute resources use the Availability Zone-specific endpoint service by using the endpoint’s local DNS name.
- E . Create a Savings Plan that provides adequate coverage for the organization’s planned inter-Availability Zone data transfer usage.
C,D
Explanation:
Cross-zone load balancing enables traffic to be distributed evenly across all registered instances in all enabled Availability Zones. However, this also increases data transfer charges between Availability Zones. By turning off cross-zone load balancing, the service provider applications can reduce inter-Availability Zone data transfer costs. Similarly, by using the Availability Zone-specific endpoint service, the service consumer applications can ensure that they connect to the nearest service provider application in the same Availability Zone, avoiding cross-Availability Zone data transfer charges.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#cross-zone-load-balancing
https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-dns
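For option D, consumers avoid cross-AZ charges by resolving the zonal DNS name of the interface endpoint rather than the regional one. A small helper illustrating the derivation, assuming the documented naming pattern in which the AZ is appended to the endpoint's first DNS label:

```python
def zonal_endpoint_dns(regional_dns, az):
    """Derive an interface VPC endpoint's Availability Zone-specific DNS
    name from its regional DNS name by appending the AZ to the first
    label, e.g.
    vpce-123-abc.vpce-svc-1.us-east-1.vpce.amazonaws.com ->
    vpce-123-abc-us-east-1a.vpce-svc-1.us-east-1.vpce.amazonaws.com"""
    first_label, rest = regional_dns.split(".", 1)
    return f"{first_label}-{az}.{rest}"
```

In practice the zonal names are also returned directly in the endpoint's `DnsEntries`, so consumers can simply pick the entry for their own AZ instead of constructing it.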
A mobile gaming company is expanding into the global market. The company’s game servers run in the us-east-1 Region. The game’s client application uses UDP to communicate with the game servers and needs to be able to connect to a set of static IP addresses.
The company wants its game to be accessible on multiple continents. The company also wants the game to maintain its network performance and global availability.
Which solution meets these requirements?
- A . Provision an Application Load Balancer (ALB) in front of the game servers. Create an Amazon CloudFront distribution that has no geographical restrictions. Set the ALB as the origin. Perform DNS lookups for the cloudfront.net domain name. Use the resulting IP addresses in the game’s client application.
- B . Provision game servers in each AWS Region. Provision an Application Load Balancer in front of the game servers. Create an Amazon Route 53 latency-based routing policy for the game’s client application to use with DNS lookups.
- C . Provision game servers in each AWS Region. Provision a Network Load Balancer (NLB) in front of the game servers. Create an accelerator in AWS Global Accelerator, and configure endpoint groups in each Region. Associate the NLBs with the corresponding Regional endpoint groups. Point the game client’s application to the Global Accelerator endpoints.
- D . Provision game servers in each AWS Region. Provision a Network Load Balancer (NLB) in front of the game servers. Create an Amazon CloudFront distribution that has no geographical restrictions. Set the NLB as the origin. Perform DNS lookups for the cloudfront.net domain name. Use the resulting IP addresses in the game’s client application.
A company has a multi-tier web application that runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum value and the maximum value for the Auto Scaling group are set to zero. An Amazon RDS Multi-AZ DB instance stores the application’s data. The DB instance has a read replica in the backup Region. The application presents an endpoint to end users by using an Amazon Route 53 record.
The company needs to reduce its RTO to less than 15 minutes by giving the application the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy.
What should a solutions architect recommend to meet these requirements?
- A . Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.
- B . Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application’s Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.
- C . Configure the Auto Scaling group in the backup Region to have the same values as the Auto Scaling group in the primary Region. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Remove the read replica. Replace the read replica with a standalone RDS DB instance. Configure Cross-Region Replication between the RDS DB instances by using snapshots and Amazon S3.
- D . Configure an endpoint in AWS Global Accelerator with the two ALBs as equal weighted targets. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.
B
Explanation:
Option B creates an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values, and configures Route 53 with a health check that monitors the web application and sends an Amazon SNS notification to the Lambda function when the health check status is unhealthy. Finally, the application's Route 53 record is updated with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs. This provides automatic failover to the backup Region, reducing the RTO to less than 15 minutes, and is cost-effective because it does not require an active-active strategy.
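The DNS part of option B is a pair of failover records. A hedged sketch, assuming a boto3-style Route 53 client; CNAME records are used here for simplicity, while a record at the zone apex would instead use alias records, and the ALB DNS names and health check ID are assumed inputs:

```python
def create_failover_records(route53_client, zone_id, domain,
                            primary_alb_dns, backup_alb_dns, health_check_id):
    """Create a PRIMARY record tied to the health check and a SECONDARY
    record pointing at the backup Region's ALB. Route 53 serves the
    SECONDARY record only while the primary's health check is failing."""
    def record(role, target, health_check=None):
        rrset = {
            "Name": domain,
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": role.lower(),
            "Failover": role,
            "ResourceRecords": [{"Value": target}],
        }
        if health_check:
            rrset["HealthCheckId"] = health_check
        return {"Action": "UPSERT", "ResourceRecordSet": rrset}

    return route53_client.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [
            record("PRIMARY", primary_alb_dns, health_check_id),
            record("SECONDARY", backup_alb_dns),
        ]},
    )
```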
A company is using AWS CloudFormation to deploy its infrastructure. The company is concerned that, if a production CloudFormation stack is deleted, important data stored in Amazon RDS databases or Amazon EBS volumes might also be deleted.
How can the company prevent users from accidentally deleting data in this way?
- A . Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources.
- B . Configure a stack policy that disallows the deletion of RDS and EBS resources.
- C . Modify IAM policies to deny deleting RDS and EBS resources that are tagged with an "aws:cloudformation:stack-name" tag.
- D . Use AWS Config rules to prevent deleting RDS and EBS resources.
A
Explanation:
With the DeletionPolicy attribute you can preserve, or in some cases back up, a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. To keep a resource when its stack is deleted, specify Retain for that resource; you can use Retain for any resource. For example, you can retain a nested stack, Amazon S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their stacks.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
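A minimal template sketch of option A, expressed as the JSON a stack might be created from; the resource names and property values are illustrative. Retain keeps the EBS volume after stack deletion, while Snapshot tells CloudFormation to snapshot the RDS instance before removing it:

```python
# Hypothetical template fragment showing DeletionPolicy on RDS and EBS
# resources. Without these attributes, CloudFormation would delete both
# resources (and their data) when the stack is deleted.
template = {
    "Resources": {
        "AppVolume": {
            "Type": "AWS::EC2::Volume",
            "DeletionPolicy": "Retain",     # volume survives stack deletion
            "Properties": {"Size": 100, "AvailabilityZone": "us-east-1a"},
        },
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",   # final snapshot taken before removal
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "20",
            },
        },
    },
}
```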
