Practice Free SAP-C02 Exam Online Questions
A company runs its application on Amazon EC2 instances and AWS Lambda functions. The EC2 instances experience a continuous and stable load. The Lambda functions experience a varied and unpredictable load. The application includes a caching layer that uses an Amazon MemoryDB for Redis cluster.
A solutions architect must recommend a solution to minimize the company’s overall monthly costs.
Which solution will meet these requirements?
- A . Purchase an EC2 Instance Savings Plan to cover the EC2 instances. Purchase a Compute Savings Plan for Lambda to cover the minimum expected consumption of the Lambda functions. Purchase reserved nodes to cover the MemoryDB cache nodes.
- B . Purchase a Compute Savings Plan to cover the EC2 instances. Purchase Lambda reserved concurrency to cover the expected Lambda usage. Purchase reserved nodes to cover the MemoryDB cache nodes.
- C . Purchase a Compute Savings Plan to cover the entire expected cost of the EC2 instances, Lambda functions, and MemoryDB cache nodes.
- D . Purchase a Compute Savings Plan to cover the EC2 instances and the MemoryDB cache nodes. Purchase Lambda reserved concurrency to cover theexpected Lambda usage.
A
Explanation:
This option matches each workload's usage pattern to the appropriate pricing model. Savings Plans are flexible pricing models that offer significant discounts on AWS usage (up to 72%) in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a one-year or three-year term. There are two types: Compute Savings Plans and EC2 Instance Savings Plans. Compute Savings Plans apply to compute usage across Amazon EC2, AWS Fargate, and AWS Lambda. EC2 Instance Savings Plans apply to a specific instance family within a Region and provide deeper discounts than Compute Savings Plans (up to 72% versus up to 66%). Because the EC2 load is continuous and stable, an EC2 Instance Savings Plan captures the largest discount; because the Lambda load is varied and unpredictable, a Compute Savings Plan sized to the minimum expected consumption avoids over-committing. MemoryDB is not covered by Savings Plans, so reserved nodes (up to approximately 55% off on-demand pricing) are the right instrument for the cache layer.
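Under assumed example discount rates (the actual rates depend on term, payment option, instance family, and Region), the shape of the saving in option A can be sketched in a few lines:

```python
# Rough monthly cost comparison for option A, using ASSUMED example rates.
# Real discounts vary by term, payment option, instance family, and Region.

def monthly_cost(on_demand_per_hour: float, discount: float, hours: int = 730) -> float:
    """Cost for one month of steady usage after a committed-use discount."""
    return on_demand_per_hour * (1 - discount) * hours

# Hypothetical steady-state hourly spend for each layer.
ec2_on_demand = 2.00       # continuous, stable EC2 load
lambda_baseline = 0.40     # minimum expected Lambda consumption
memorydb_on_demand = 1.10  # MemoryDB cache nodes

# Assumed discounts: EC2 Instance SP ~72%, Compute SP ~66%, reserved nodes ~55%.
total = (
    monthly_cost(ec2_on_demand, 0.72)         # EC2 Instance Savings Plan
    + monthly_cost(lambda_baseline, 0.66)     # Compute Savings Plan for Lambda
    + monthly_cost(memorydb_on_demand, 0.55)  # MemoryDB reserved nodes
)
on_demand_total = monthly_cost(ec2_on_demand + lambda_baseline + memorydb_on_demand, 0.0)
print(f"discounted: ${total:.2f}/month vs on-demand: ${on_demand_total:.2f}/month")
```

The point of the sketch is that each layer gets the deepest discount its usage pattern safely allows, which no single instrument in options B, C, or D achieves.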
A company is processing videos in the AWS Cloud by using Amazon EC2 instances in an Auto Scaling group. It takes 30 minutes to process a video. Several EC2 instances scale in and out depending on the number of videos in an Amazon Simple Queue Service (Amazon SQS) queue.
The company has configured the SQS queue with a redrive policy that specifies a target dead-letter queue and a maxReceiveCount of 1. The company has set the visibility timeout for the SQS queue to 1 hour. The company has set up an Amazon CloudWatch alarm to notify the development team when there are messages in the dead-letter queue.
Several times during the day, the development team receives notification that messages are in the dead-letter queue and that videos have not been processed properly. An investigation finds no errors in the application logs.
How can the company solve this problem?
- A . Turn on termination protection for the EC2 instances.
- B . Update the visibility timeout for the SQS queue to 3 hours.
- C . Configure scale-in protection for the instances during processing.
- D . Update the redrive policy and set maxReceiveCount to 0.
B
Explanation:
With a visibility timeout of 1 hour and a maxReceiveCount of 1, any message that is not processed and deleted within an hour of being received becomes visible again and is moved to the dead-letter queue on the next receive attempt. Although processing normally takes 30 minutes, delays (for example, an instance that is busy, or that is terminated during a scale-in event mid-processing) can push an attempt past the 1-hour window even though no application error occurs, which explains why the logs show nothing. Increasing the visibility timeout to 3 hours gives each EC2 instance enough headroom to finish processing and delete the message before it can be redelivered and sent to the dead-letter queue.
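The failure mode, and why a longer visibility timeout fixes it, can be modeled with a toy simulation (the timings and the `deliver` helper are illustrative and only approximate SQS redrive behavior, not the real API):

```python
# Toy model of SQS visibility timeout + redrive policy. Times are in minutes.
# With maxReceiveCount=1, a single attempt that outlives the visibility
# timeout is enough to send the message to the dead-letter queue.

def deliver(processing_minutes: int, visibility_timeout: int,
            max_receive_count: int = 1) -> str:
    receive_count = 0
    while True:
        receive_count += 1
        if processing_minutes <= visibility_timeout:
            return "processed"          # consumer deletes the message in time
        if receive_count >= max_receive_count:
            return "dead-letter queue"  # redrive policy moves the message
        # otherwise the message becomes visible again and is redelivered

# A video normally takes 30 minutes, but a delayed attempt can exceed 60.
print(deliver(processing_minutes=90, visibility_timeout=60))   # 1-hour timeout
print(deliver(processing_minutes=90, visibility_timeout=180))  # 3-hour timeout
```

The same delayed attempt that lands in the dead-letter queue under a 1-hour timeout completes normally under a 3-hour timeout.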
A global company has a mobile app that displays ticket barcodes. Customers use the tickets on the
mobile app to attend live events. Event scanners read the ticket barcodes and call a backend API to validate the barcode data against data in a database. After the barcode is scanned, the backend logic writes to the database’s single table to mark the barcode as used. The company needs to deploy the app on AWS with a DNS name of api.example.com. The company will host the database in three AWS Regions around the world.
Which solution will meet these requirements with the LOWEST latency?
- A . Host the database on Amazon Aurora global database clusters. Host the backend on three Amazon ECS clusters that are in the same Regions as the database. Create an accelerator in AWS Global Accelerator to route requests to the nearest ECS cluster. Create an Amazon Route 53 record that maps api.example.com to the accelerator endpoint.
- B . Host the database on Amazon Aurora global database clusters. Host the backend on three Amazon EKS clusters that are in the same Regions as the database. Create an Amazon CloudFront distribution with the three clusters as origins. Route requests to the nearest EKS cluster. Create an Amazon Route 53 record that maps api.example.com to the CloudFront distribution.
- C . Host the database on Amazon DynamoDB global tables. Create an Amazon CloudFront distribution. Associate the CloudFront distribution with a CloudFront function that contains the backend logic to validate the barcodes. Create an Amazon Route 53 record that maps api.example.com to the CloudFront distribution.
- D . Host the database on Amazon DynamoDB global tables. Create an Amazon CloudFront distribution. Associate the CloudFront distribution with a Lambda@Edge function that contains the backend logic to validate the barcodes. Create an Amazon Route 53 record that maps api.example.com to the CloudFront distribution.
D
Explanation:
Option D offers the lowest latency and highest efficiency:
Amazon DynamoDB global tables replicate data across multiple Regions, ensuring that the database is available and responsive in each Region.
Lambda@Edge allows the backend logic to execute at AWS edge locations, reducing the distance between the user’s device and the code execution environment, thereby minimizing latency. Amazon CloudFront serves as the content delivery network, routing requests to the nearest edge location where the Lambda@Edge function is executed.
Amazon Route 53 maps the DNS name to the CloudFront distribution, ensuring that users are directed to the optimal edge location.
This architecture ensures rapid response times and high availability for a globally distributed user base.
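A minimal sketch of the kind of handler option D describes, with an in-memory dict standing in for the DynamoDB global table (the trimmed CloudFront event shape, the barcode query parameter, and all names here are hypothetical; a real function would use boto3 against the nearest table replica):

```python
# Hypothetical Lambda@Edge-style barcode validator. `table` stands in for the
# DynamoDB global table's single table: barcode -> item.

def make_handler(table: dict):
    def handler(event, context=None):
        request = event["Records"][0]["cf"]["request"]
        barcode = request["querystring"].removeprefix("barcode=")
        item = table.get(barcode)
        if item is None or item.get("used"):
            return {"status": "403", "body": "invalid or already used"}
        item["used"] = True  # mark the barcode as used in the single table
        return {"status": "200", "body": "valid"}
    return handler

tickets = {"ABC123": {"used": False}}
handler = make_handler(tickets)
event = {"Records": [{"cf": {"request": {"querystring": "barcode=ABC123"}}}]}
print(handler(event)["status"])  # first scan is accepted
print(handler(event)["status"])  # second scan of the same ticket is rejected
```

Running this logic at edge locations keeps the validation round trip short; the write that marks the barcode as used is replicated by the global table.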
A company is using Amazon API Gateway to deploy a private REST API that will provide access to sensitive data. The API must be accessible only from an application that is deployed in a VPC. The company deploys the API successfully. However, the API is not accessible from an Amazon EC2 instance that is deployed in the VPC.
Which solution will provide connectivity between the EC2 instance and the API?
- A . Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that allows apigateway:* actions. Disable private DNS naming for the VPC endpoint. Configure an API resource policy that allows access from the VPC. Use the VPC endpoint’s DNS name to access the API.
- B . Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that allows the execute-api:Invoke action. Enable private DNS naming for the VPC endpoint. Configure an API resource policy that allows access from the VPC endpoint. Use the API endpoint’s DNS names to access the API.
- C . Create a Network Load Balancer (NLB) and a VPC link. Configure private integration between API Gateway and the NLB. Use the API endpoint’s DNS names to access the API.
- D . Create an Application Load Balancer (ALB) and a VPC Link. Configure private integration between API Gateway and the ALB. Use the ALB endpoint’s DNS name to access the API.
B
Explanation:
According to the AWS documentation, to access a private API from a VPC, you need to do the following:
Create an interface VPC endpoint for API Gateway in your VPC. This creates a private connection between your VPC and API Gateway.
Attach an endpoint policy to the VPC endpoint that allows the execute-api:Invoke action for your private API. This grants permission to invoke your API from the VPC.
Enable private DNS naming for the VPC endpoint. This allows you to use the same DNS names for your private APIs as you would for public APIs.
Configure a resource policy for your private API that allows access from the VPC endpoint. This controls who can access your API and under what conditions.
Use the API endpoint’s DNS names to access the API from your VPC.
For example, https://api-id.execute-api.region.amazonaws.com/stage.
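The endpoint-restricted resource policy described above can be sketched as follows; the `aws:SourceVpce` condition key is the documented way to pin a private API to one VPC endpoint, and the endpoint ID and resource paths here are placeholders:

```python
import json

# Hypothetical API Gateway resource policy for option B: allow invocation, but
# deny any call that does not arrive through the named interface VPC endpoint.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                # Placeholder endpoint ID; use your interface endpoint's ID.
                "StringNotEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}
            },
        },
    ],
}
print(json.dumps(resource_policy, indent=2))
```

The explicit Deny ensures that even correctly authenticated callers outside the VPC endpoint cannot reach the API.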
A company has a data lake in Amazon S3 that needs to be accessed by hundreds of applications across many AWS accounts. The company’s information security policy states that the S3 bucket must not be accessed over the public internet and that each application should have the minimum permissions necessary to function.
To meet these requirements, a solutions architect plans to use an S3 access point that is restricted to specific VPCs for each application.
Which combination of steps should the solutions architect take to implement this solution? (Select TWO.)
- A . Create an S3 access point for each application in the AWS account that owns the S3 bucket. Configure each access point to be accessible only from the application’s VPC. Update the bucket policy to require access from an access point.
- B . Create an interface endpoint for Amazon S3 in each application’s VPC. Configure the endpoint policy to allow access to an S3 access point. Create a VPC gateway attachment for the S3 endpoint.
- C . Create a gateway endpoint for Amazon S3 in each application’s VPC. Configure the endpoint policy to allow access to an S3 access point. Specify the route table that is used to access the access point.
- D . Create an S3 access point for each application in each AWS account and attach the access points to the S3 bucket. Configure each access point to be accessible only from the application’s VPC. Update the bucket policy to require access from an access point.
- E . Create a gateway endpoint for Amazon S3 in the data lake’s VPC. Attach an endpoint policy to allow access to the S3 bucket. Specify the route table that is used to access the bucket.
A,C
Explanation:
https://joe.blog.freemansoft.com/2020/04/protect-data-in-cloud-with-s3-access.html
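The bucket-policy step in option A is typically implemented by delegating access control to the bucket's access points, so per-application permissions live on each VPC-restricted access point rather than in one giant bucket policy. A hedged sketch (the account ID and bucket name are placeholders):

```python
import json

# Hypothetical bucket policy for option A: trust any access point owned by the
# bucket owner's account via the s3:DataAccessPointAccount condition key, and
# let each access point's own policy grant the minimum per-application access.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "*",
            "Resource": [
                "arn:aws:s3:::data-lake-bucket",      # placeholder bucket name
                "arn:aws:s3:::data-lake-bucket/*",
            ],
            "Condition": {
                "StringEquals": {"s3:DataAccessPointAccount": "111122223333"}
            },
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

Combined with gateway endpoints in each application VPC (option C), requests never traverse the public internet.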
An online retail company is migrating its legacy on-premises .NET application to AWS. The application runs on load-balanced frontend web servers, load-balanced application servers, and a Microsoft SQL Server database.
The company wants to use AWS managed services where possible and does not want to rewrite the application. A solutions architect needs to implement a solution to resolve scaling issues and minimize licensing costs as the application scales.
Which solution will meet these requirements MOST cost-effectively?
- A . Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer for the web tier and for the application tier. Use Amazon Aurora PostgreSQL with Babelfish turned on to replatform the SQL Server database.
- B . Create images of all the servers by using AWS Database Migration Service (AWS DMS). Deploy Amazon EC2 instances that are based on the on-premises imports. Deploy the instances in an Auto Scaling group behind a Network Load Balancer for the web tier and for the application tier. Use Amazon DynamoDB as the database tier.
- C . Containerize the web frontend tier and the application tier. Provision an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create an Auto Scaling group behind a Network Load Balancer for the web tier and for the application tier. Use Amazon RDS for SQL Server to host the database.
- D . Separate the application functions into AWS Lambda functions. Use Amazon API Gateway for the web frontend tier and the application tier. Migrate the data to Amazon S3. Use Amazon Athena to query the data.
A
Explanation:
Option A resolves both the scaling and the licensing concerns without a rewrite. Placing the web tier and the application tier in Auto Scaling groups behind Application Load Balancers addresses the scaling issues with managed, elastic infrastructure. Amazon Aurora PostgreSQL with Babelfish understands the SQL Server wire protocol and T-SQL, so the company can replatform its Microsoft SQL Server database without rewriting the .NET application while eliminating SQL Server licensing costs as the database scales. Option B misuses AWS DMS (a data migration service, not a server-imaging tool) and moving to DynamoDB would require a rewrite. Option C keeps Amazon RDS for SQL Server, so licensing costs continue to grow with scale. Option D requires rewriting the application as Lambda functions, which the company wants to avoid.
Reference: Babelfish for Aurora PostgreSQL – Amazon Aurora User Guide
A company uses a load balancer to distribute traffic to Amazon EC2 instances in a single Availability Zone.
The company is concerned about security and wants a solutions architect to re-architect the solution to meet the following requirements:
• Inbound requests must be filtered for common vulnerability attacks.
• Rejected requests must be sent to a third-party auditing application.
• All resources should be highly available.
Which solution meets these requirements?
- A . Configure a Multi-AZ Auto Scaling group using the application’s AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Use Amazon Inspector to monitor traffic to the ALB and EC2 instances. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB. Use an AWS Lambda function to frequently push the Amazon Inspector report to the third-party auditing application.
- B . Configure an Application Load Balancer (ALB) and add the EC2 instances as targets. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB name and enable logging with Amazon CloudWatch Logs. Use an AWS Lambda function to frequently push the logs to the third-party auditing application.
- C . Configure an Application Load Balancer (ALB) along with a target group adding the EC2 instances as targets. Create an Amazon Kinesis Data Firehose with the destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
- D . Configure a Multi-AZ Auto Scaling group using the application’s AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Create an Amazon Kinesis Data Firehose with a destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the Web ACL and ALB then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
D
Explanation:
https://docs.aws.amazon.com/waf/latest/developerguide/marketplace-managed-rule-groups.html
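The logging wiring in option D amounts to the configuration object passed to WAF's logging API; the ARNs below are placeholders, and the filter shown (keep only BLOCK actions) is one way to forward just the rejected requests to the auditor:

```python
# Hypothetical shape of the AWS WAFv2 logging configuration for option D:
# the web ACL's logs are delivered to a Kinesis Data Firehose stream whose
# destination is the third-party auditing application. ARNs are placeholders.
logging_configuration = {
    "ResourceArn": "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/EXAMPLE",
    "LogDestinationConfigs": [
        # WAF requires the Firehose stream name to begin with "aws-waf-logs-".
        "arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-audit"
    ],
    "LoggingFilter": {
        "DefaultBehavior": "DROP",  # discard everything except...
        "Filters": [{
            "Behavior": "KEEP",     # ...requests the web ACL blocked
            "Requirement": "MEETS_ANY",
            "Conditions": [{"ActionCondition": {"Action": "BLOCK"}}],
        }],
    },
}
print(logging_configuration["LogDestinationConfigs"][0])
```

The Multi-AZ Auto Scaling group in option D is what satisfies the high-availability requirement that option C omits.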
A company has multiple lines of business (LOBs) that roll up to the parent company.
The company has asked its solutions architect to develop a solution with the following requirements:
• Produce a single AWS invoice for all of the AWS accounts used by its LOBs.
• The costs for each LOB account should be broken out on the invoice.
• Provide the ability to restrict services and features in the LOB accounts, as defined by the company’s governance policy.
• Each LOB account should be delegated full administrator permissions, regardless of the governance policy.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
- A . Use AWS Organizations to create an organization in the parent account for each LOB. Then invite each LOB account to the appropriate organization.
- B . Use AWS Organizations to create a single organization in the parent account. Then invite each LOB’s AWS account to join the organization.
- C . Implement service quotas to define the services and features that are permitted, and apply the quotas to each LOB, as appropriate.
- D . Create an SCP that allows only approved services and features, then apply the policy to the LOB accounts.
- E . Enable consolidated billing in the parent account’s billing console and link the LOB accounts.
B,E
Explanation:
Create an AWS Organization: In the AWS Management Console, navigate to AWS Organizations and create a new organization in the parent account.
Invite LOB accounts: Invite each line of business (LOB) account to join the organization. This allows centralized management and governance of all accounts.
Enable consolidated billing: Enable consolidated billing in the billing console of the parent account. Link all LOB accounts to ensure a single consolidated invoice that breaks down costs per account.
Apply service control policies (SCPs): Implement SCPs to define the services and features permitted for each LOB account per the governance policy. SCPs only set the maximum available permissions; within those bounds, each LOB account retains full administrator permissions.
By consolidating billing and using AWS Organizations, the company can achieve centralized billing and governance while maintaining independent administrative control for each LOB account.
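An SCP matching option D might look like the following; the service list is a hypothetical example of what a governance policy could approve, and because SCPs are permission filters rather than grants, LOB administrators keep full rights within the allowed services:

```python
import json

# Hypothetical allow-list SCP for option D. Services not listed are implicitly
# denied for every principal in the account the SCP is attached to, including
# the account's administrators; everything listed remains fully available.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowApprovedServices",
            "Effect": "Allow",
            # Example approved services; substitute the governance policy's list.
            "Action": ["ec2:*", "s3:*", "rds:*", "cloudwatch:*", "iam:*"],
            "Resource": "*",
        }
    ],
}
print(json.dumps(scp, indent=2))
```

Attaching this SCP to the LOB accounts (or their OU) enforces the governance policy without touching the IAM administrator permissions inside each account.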
A company is running a workload that consists of thousands of Amazon EC2 instances. The workload is running in a VPC that contains several public subnets and private subnets. The public subnets have a route for 0.0.0.0/0 to an existing internet gateway. The private subnets have a route for 0.0.0.0/0 to an existing NAT gateway.
A solutions architect needs to migrate the entire fleet of EC2 instances to use IPv6. The EC2 instances that are in private subnets must not be accessible from the public internet.
What should the solutions architect do to meet these requirements?
- A . Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Update all the VPC route tables, and add a route for ::/0 to the internet gateway.
- B . Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Update the VPC route tables for all private subnets, and add a route for ::/0 to the NAT gateway.
- C . Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Create an egress-only internet gateway. Update the VPC route tables for all private subnets, and add a route for ::/0 to the egress-only internet gateway.
- D . Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Create a new NAT gateway, and enable IPv6 support. Update the VPC route tables for all private subnets, and add a route for ::/0 to the IPv6-enabled NAT gateway.
C
Explanation:
An Amazon-provided IPv6 CIDR block gives the VPC and its subnets globally unique IPv6 addressing. Because IPv6 addresses are publicly routable, the private subnets need an egress-only internet gateway, which allows outbound IPv6 connections from the instances but blocks connections initiated from the internet, the IPv6 counterpart of a NAT gateway's behavior. NAT gateways do not perform NAT for IPv6 traffic, which rules out options B and D, and routing ::/0 to the internet gateway (option A) would make the private instances reachable from the public internet.
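The route-table changes behind the egress-only internet gateway approach can be sketched as plain data (all resource IDs are placeholders):

```python
# Sketch of the per-subnet route tables after the IPv6 migration.
# Public subnets route IPv6 to the internet gateway; private subnets route it
# to an egress-only internet gateway, which permits outbound-only IPv6 traffic.
public_routes = [
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc"},
    {"DestinationIpv6CidrBlock": "::/0", "GatewayId": "igw-0abc"},
]
private_routes = [
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0abc"},
    {"DestinationIpv6CidrBlock": "::/0",
     "EgressOnlyInternetGatewayId": "eigw-0abc"},
]
for route in private_routes:
    print(route)
```

The existing IPv4 routes are left in place, so the fleet can run dual-stack during the migration.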
A solutions architect has developed a web application that uses an Amazon API Gateway Regional endpoint and an AWS Lambda function. The consumers of the web application are all close to the AWS Region where the application will be deployed. The Lambda function only queries an Amazon Aurora MySQL database. The solutions architect has configured the database to have three read replicas.
During testing, the application does not meet performance requirements. Under high load, the application opens a large number of database connections. The solutions architect must improve the application’s performance.
Which actions should the solutions architect take to meet these requirements? (Choose two.)
- A . Use the cluster endpoint of the Aurora database.
- B . Use RDS Proxy to set up a connection pool to the reader endpoint of the Aurora database.
- C . Use the Lambda Provisioned Concurrency feature.
- D . Move the code for opening the database connection in the Lambda function outside of the event handler.
- E . Change the API Gateway endpoint to an edge-optimized endpoint.
B,D
Explanation:
Connect to RDS outside of the Lambda handler method to improve performance: https://awstut.com/en/2022/04/30/connect-to-rds-outside-of-lambda-handler-method-to-improve-performance-en/
Using RDS Proxy, you can handle unpredictable surges in database traffic. Otherwise, these surges might cause issues due to oversubscribing connections or creating new connections at a fast rate. RDS Proxy establishes a database connection pool and reuses connections in this pool. This approach avoids the memory and CPU overhead of opening a new database connection each time. To protect the database against oversubscription, you can control the number of database connections that are created. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
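The pattern in option D, moving connection setup out of the event handler, can be illustrated without AWS at all; `FakeConnection` is a stand-in for a real MySQL client (for example pymysql) connecting through the RDS Proxy endpoint:

```python
# Toy illustration of option D: create the database connection once, at module
# scope, so warm Lambda invocations in the same execution environment reuse it
# instead of opening a new connection per request.

class FakeConnection:
    """Stand-in for a MySQL client; counts how many connections are opened."""
    instances = 0

    def __init__(self):
        FakeConnection.instances += 1

    def query(self, sql: str) -> str:
        return f"result of {sql}"

connection = FakeConnection()  # runs once per execution environment (cold start)

def handler(event, context=None):
    # Reuses the module-level connection instead of opening a new one.
    return connection.query("SELECT 1")

for _ in range(3):  # three warm invocations share the same connection
    handler({})
print(FakeConnection.instances)
```

With RDS Proxy pooling connections on the database side as well, the combination keeps the total connection count bounded even under high Lambda concurrency.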
