Practice Free SAP-C02 Exam Online Questions
A company runs a web application on a single Amazon EC2 instance. End users experience slow application performance during times of peak usage, when CPU utilization is consistently more than 95%.
A user data script installs required custom packages on the EC2 instance. The process of launching the instance takes several minutes.
The company is creating an Auto Scaling group that uses mixed instance types with varied CPUs and a maximum capacity limit. The Auto Scaling group will use a launch template for various configuration options. The company needs to decrease application latency when new instances are launched during auto scaling.
Which solution will meet these requirements?
- A . Use a predictive scaling policy. Use an instance maintenance policy to run the user data script. Set the default instance warmup time to 0 seconds.
- B . Use a dynamic scaling policy. Use lifecycle hooks to run the user data script. Set the default instance warmup time to 0 seconds.
- C . Use a predictive scaling policy. Enable warm pools for the Auto Scaling group. Use an instance maintenance policy to run the user data script.
- D . Use a dynamic scaling policy. Enable warm pools for the Auto Scaling group. Use lifecycle hooks to run the user data script.
A company is processing videos in the AWS Cloud by using Amazon EC2 instances in an Auto Scaling group. It takes 30 minutes to process a video. Several EC2 instances scale in and out depending on the number of videos in an Amazon Simple Queue Service (Amazon SQS) queue.
The company has configured the SQS queue with a redrive policy that specifies a target dead-letter queue and a maxReceiveCount of 1. The company has set the visibility timeout for the SQS queue to 1 hour. The company has set up an Amazon CloudWatch alarm to notify the development team when there are messages in the dead-letter queue.
Several times during the day, the development team receives notification that messages are in the dead-letter queue and that videos have not been processed properly. An investigation finds no errors in the application logs.
How can the company solve this problem?
- A . Turn on termination protection for the EC2 instances.
- B . Update the visibility timeout for the SQS queue to 3 hours.
- C . Configure scale-in protection for the instances during processing.
- D . Update the redrive policy and set maxReceiveCount to 0.
B
Explanation:
The best solution for this problem is to update the visibility timeout for the SQS queue to 3 hours. With a visibility timeout of 1 hour and a maxReceiveCount of 1, any message that is not processed and deleted within an hour becomes visible again, and the next receive attempt moves it to the dead-letter queue. Increasing the visibility timeout to 3 hours gives an EC2 instance enough time to finish processing and delete the message before it becomes visible again. Additionally, configuring scale-in protection for the EC2 instances during processing helps ensure that instances are not terminated while messages are still being processed.
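As a rough illustration of the fix (the queue URL and dead-letter queue ARN below are placeholders), the visibility timeout and redrive policy can be set with boto3:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL and dead-letter queue ARN for illustration only.
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/video-jobs"
DLQ_ARN = "arn:aws:sqs:eu-west-1:123456789012:video-jobs-dlq"

sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={
        # 3 hours, in seconds, so a 30-minute job has ample headroom.
        "VisibilityTimeout": str(3 * 60 * 60),
        # Keep the existing redrive policy: once the receive count exceeds 1,
        # the message moves to the dead-letter queue.
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": DLQ_ARN, "maxReceiveCount": "1"}
        ),
    },
)
```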
A company is planning to migrate an Amazon RDS for Oracle database to an RDS for PostgreSQL DB instance in another AWS account. A solutions architect needs to design a migration strategy that will require no downtime and that will minimize the amount of time necessary to complete the migration. The migration strategy must replicate all existing data and any new data that is created during the migration. The target database must be identical to the source database at completion of the migration process.
All applications currently use an Amazon Route 53 CNAME record as their endpoint for communication with the RDS for Oracle DB instance. The RDS for Oracle DB instance is in a private subnet.
Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)
- A . Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from the source database to the target database.
- B . Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for PostgreSQL DB instance in the target account with the schema and initial data from the source database.
- C . Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
- D . Temporarily allow the source DB instance to be publicly accessible to provide connectivity from the VPC in the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
- E . Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
- F . Use AWS Database Migration Service (AWS DMS) in the target account to perform a change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
A company uses AWS Organizations for a multi-account setup in the AWS Cloud. The company’s finance team has a data processing application that uses AWS Lambda and Amazon DynamoDB. The company’s marketing team wants to access the data that is stored in the DynamoDB table.
The DynamoDB table contains confidential data. The marketing team can have access to only specific attributes of data in the DynamoDB table. The finance team and the marketing team have separate AWS accounts.
What should a solutions architect do to provide the marketing team with the appropriate access to the DynamoDB table?
- A . Create an SCP to grant the marketing team’s AWS account access to the specific attributes of the DynamoDB table. Attach the SCP to the OU of the finance team.
- B . Create an IAM role in the finance team’s account by using IAM policy conditions for specific DynamoDB attributes (fine-grained access control). Establish trust with the marketing team’s account. In the marketing team’s account, create an IAM role that has permissions to assume the IAM role in the finance team’s account.
- C . Create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control). Attach the policy to the DynamoDB table. In the marketing team’s account, create an IAM role that has permissions to access the DynamoDB table in the finance team’s account.
- D . Create an IAM role in the finance team’s account to access the DynamoDB table. Use an IAM permissions boundary to limit the access to the specific attributes. In the marketing team’s account, create an IAM role that has permissions to assume the IAM role in the finance team’s account.
C
Explanation:
The company should create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control), attach the policy to the DynamoDB table, and, in the marketing team’s account, create an IAM role that has permissions to access the DynamoDB table in the finance team’s account. This solution meets the requirements because a resource-based IAM policy is a policy that you attach to an AWS resource (such as a DynamoDB table) to control who can access that resource and what actions they can perform on it. You can use IAM policy conditions to specify fine-grained access control for DynamoDB items and attributes. For example, you can allow or deny access to specific attributes of all items in a table by matching on attribute names [1]. By creating a resource-based policy that allows access to only specific attributes of the DynamoDB table and attaching it to the table, the company can restrict access to confidential data. By creating an IAM role in the marketing team’s account that has permissions to access the DynamoDB table in the finance team’s account, the company can enable cross-account access.
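As a minimal sketch of such a policy (the table ARN, account IDs, role name, and attribute names are hypothetical), the dynamodb:Attributes condition key limits reads to an allow list of non-confidential attributes; a document like this would be attached to the table as its resource-based policy:

```python
import json

# Hypothetical policy: the marketing account's role may read only the
# CampaignId and Region attributes of the FinanceData table.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222233334444:role/MarketingReadRole"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/FinanceData",
            "Condition": {
                # Every attribute the caller requests must be in this allow list.
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": ["CampaignId", "Region"]
                },
                # Force callers to request specific attributes, not whole items.
                "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```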
The other options are not correct because:
Creating an SCP to grant the marketing team’s AWS account access to the specific attributes of the DynamoDB table would not work because SCPs are policies that you can use with AWS Organizations to manage permissions in your organization’s accounts. SCPs do not grant permissions; instead, they specify the maximum permissions that identities in an account can have [2]. SCPs cannot be used to specify fine-grained access control for DynamoDB items and attributes.
Creating an IAM role in the finance team’s account by using IAM policy conditions for specific DynamoDB attributes and establishing trust with the marketing team’s account would not work because IAM roles are identities that you can create in your account that have specific permissions. You can use an IAM role to delegate access to users, applications, or services that don’t normally have access to your AWS resources [3]. However, creating an IAM role in the finance team’s account would not restrict access to specific attributes of the DynamoDB table; it would only allow cross-account access. The company would still need a resource-based policy attached to the table to enforce fine-grained access control.
Creating an IAM role in the finance team’s account to access the DynamoDB table and using an IAM permissions boundary to limit the access to the specific attributes would not work because IAM permissions boundaries are policies that you use to delegate permissions management to other users. You can use permissions boundaries to limit the maximum permissions that an identity-based policy can grant to an IAM entity (user or role) [4]. Permissions boundaries cannot be used to specify fine-grained access control for DynamoDB items and attributes.
References:
[1] https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html
[2] https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
[3] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
[4] https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html
A solutions architect is designing an AWS account structure for a company that consists of multiple teams. All the teams will work in the same AWS Region. The company needs a VPC that is connected
to the on-premises network. The company expects less than 50 Mbps of total traffic to and from the on-premises network.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
- A . Create an AWS CloudFormation template that provisions a VPC and the required subnets. Deploy the template to each AWS account.
- B . Create an AWS CloudFormation template that provisions a VPC and the required subnets. Deploy the template to a shared services account. Share the subnets by using AWS Resource Access Manager.
- C . Use AWS Transit Gateway along with an AWS Site-to-Site VPN for connectivity to the on-premises network. Share the transit gateway by using AWS Resource Access Manager.
- D . Use AWS Site-to-Site VPN for connectivity to the on-premises network.
- E . Use AWS Direct Connect for connectivity to the on-premises network.
A company has an application that runs on Amazon EC2 instances in an Amazon EC2 Auto Scaling group. The company uses AWS CodePipeline to deploy the application. The instances that run in the Auto Scaling group are constantly changing because of scaling events.
When the company deploys new application code versions, the company installs the AWS CodeDeploy agent on any new target EC2 instances and associates the instances with the CodeDeploy deployment group. The application is set to go live within the next 24 hours.
What should a solutions architect recommend to automate the application deployment process with the LEAST amount of operational overhead?
- A . Configure Amazon EventBridge to invoke an AWS Lambda function when a new EC2 instance is launched into the Auto Scaling group. Code the Lambda function to associate the EC2 instances with the CodeDeploy deployment group.
- B . Write a script to suspend Amazon EC2 Auto Scaling operations before the deployment of new code. When the deployment is complete, create a new AMI and configure the Auto Scaling group’s launch template to use the new AMI for new launches. Resume Amazon EC2 Auto Scaling operations.
- C . Create a new AWS CodeBuild project that creates a new AMI that contains the new code. Configure CodeBuild to update the Auto Scaling group’s launch template to the new AMI. Run an Amazon EC2 Auto Scaling instance refresh operation.
- D . Create a new AMI that has the CodeDeploy agent installed. Configure the Auto Scaling group’s launch template to use the new AMI. Associate the CodeDeploy deployment group with the Auto Scaling group instead of with the individual EC2 instances.
D
Explanation:
https://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws-auto-scaling.html
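A minimal boto3 sketch of the key step in option D (the application, deployment group, role, and Auto Scaling group names are hypothetical):

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Associating the deployment group with the Auto Scaling group (rather than
# with individual EC2 instances) makes CodeDeploy deploy the latest revision
# automatically to every instance the group launches.
codedeploy.create_deployment_group(
    applicationName="web-app",
    deploymentGroupName="web-app-asg-group",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
    autoScalingGroups=["web-app-asg"],
)
```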
A company has a serverless application composed of Amazon CloudFront, Amazon API Gateway, and AWS Lambda functions. The current deployment process for the application code is to create a new version of the Lambda function and run an AWS CLI script to deploy it. If the new function version has errors, another CLI script reverts by deploying the previous working version of the function. The company would like to decrease the time to deploy new versions of the application logic provided by the Lambda functions, and also to reduce the time to detect and revert when errors are identified.
How can this be accomplished?
- A . Create and deploy nested AWS CloudFormation stacks, with the parent stack consisting of the Amazon CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy it; if errors are triggered, revert the AWS CloudFormation change set to the previous version.
- B . Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify the code. Roll back if Amazon CloudWatch alarms are triggered.
- C . Refactor the AWS CLI scripts into a single script that deploys the new Lambda version. When deployment is complete, the script runs tests. If errors are detected, revert to the previous Lambda version.
- D . Create and deploy an AWS CloudFormation stack that consists of a new API Gateway endpoint that references the new Lambda version. Change the CloudFront origin to the new API Gateway endpoint. Monitor for errors and, if any are detected, change the CloudFront origin back to the previous API Gateway endpoint.
B
Explanation:
https://aws.amazon.com/about-aws/whats-new/2017/11/aws-lambda-supports-traffic-shifting-and-phased-deployments-with-aws-codedeploy/
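Under the hood, gradual traffic shifting weights a Lambda alias between two versions; AWS SAM with CodeDeploy automates this, but the mechanism can be sketched directly with boto3 (the function name, alias, and version numbers are hypothetical):

```python
import boto3

lam = boto3.client("lambda")

# Send 10% of invocations of the "live" alias to version 5 while the
# remaining 90% continue to hit version 4. CodeDeploy adjusts these weights
# over time and rolls back automatically if a CloudWatch alarm fires.
lam.update_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion="4",
    RoutingConfig={"AdditionalVersionWeights": {"5": 0.10}},
)
```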
A company has multiple lines of business (LOBs) that roll up to the parent company.
The company has asked its solutions architect to develop a solution with the following requirements:
• Produce a single AWS invoice for all of the AWS accounts used by its LOBs.
• The costs for each LOB account should be broken out on the invoice.
• Provide the ability to restrict services and features in the LOB accounts, as defined by the company’s governance policy.
• Each LOB account should be delegated full administrator permissions, regardless of the governance policy.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
- A . Use AWS Organizations to create an organization in the parent account for each LOB. Then invite each LOB account to the appropriate organization.
- B . Use AWS Organizations to create a single organization in the parent account. Then invite each LOB’s AWS account to join the organization.
- C . Implement service quotas to define the services and features that are permitted, and apply the quotas to each LOB, as appropriate.
- D . Create an SCP that allows only approved services and features. Then apply the policy to the LOB accounts.
- E . Enable consolidated billing in the parent account’s billing console and link the LOB accounts.
B,E
Explanation:
Create AWS Organization:
In the AWS Management Console, navigate to AWS Organizations and create a new organization in the parent account.
Invite LOB Accounts:
Invite each Line of Business (LOB) account to join the organization. This allows centralized management and governance of all accounts.
Enable Consolidated Billing:
Enable consolidated billing in the billing console of the parent account. Link all LOB accounts to ensure a single consolidated invoice that breaks down costs per account.
Apply Service Control Policies (SCPs):
Implement Service Control Policies (SCPs) to define the services and features permitted for each LOB account as per the governance policy, while still delegating full administrative permissions to the LOB accounts.
By consolidating billing and using AWS Organizations, the company can achieve centralized billing and governance while maintaining independent administrative control for each LOB account.
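An illustrative allow-list SCP and its creation with boto3 (the service list and policy name are placeholders chosen for the example, not the company’s actual governance policy):

```python
import json
import boto3

org = boto3.client("organizations")

# Example allow-list SCP: identities in attached accounts can use only the
# listed services, regardless of what their own IAM policies grant. Full
# administrator permissions within those services remain possible.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*", "rds:*", "cloudwatch:*"],
            "Resource": "*",
        }
    ],
}

org.create_policy(
    Content=json.dumps(scp),
    Description="Approved services for LOB accounts",
    Name="lob-approved-services",
    Type="SERVICE_CONTROL_POLICY",
)
```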
A company runs a processing engine in the AWS Cloud. The engine processes environmental data from logistics centers to calculate a sustainability index. The company has millions of devices in logistics centers that are spread across Europe. The devices send information to the processing engine through a RESTful API. The API experiences unpredictable bursts of traffic. The company must implement a solution to process all data that the devices send to the processing engine. Data loss is unacceptable.
Which solution will meet these requirements?
- A . Create an Application Load Balancer (ALB) for the RESTful API. Create an Amazon SQS queue. Create a listener and a target group for the ALB. Add the SQS queue as the target. Use a container that runs in Amazon ECS with the Fargate launch type to process messages in the queue.
- B . Create an Amazon API Gateway HTTP API that implements the RESTful API. Create an Amazon SQS queue. Create an API Gateway service integration with the SQS queue. Create an AWS Lambda function to process messages in the SQS queue.
- C . Create an Amazon API Gateway REST API that implements the RESTful API. Create a fleet of Amazon EC2 instances in an Auto Scaling group. Create an API Gateway Auto Scaling group proxy integration. Use the EC2 instances to process incoming data.
- D . Create an Amazon CloudFront distribution for the RESTful API. Create a data stream in Amazon Kinesis Data Streams. Set the data stream as the origin for the distribution. Create an AWS Lambda function to consume and process data in the data stream.
B
Explanation:
Using Amazon API Gateway with an SQS queue ensures all data sent to the API is durably stored for processing, even during traffic surges.
API Gateway service integration with SQS provides a managed, serverless ingestion layer that absorbs unpredictable traffic bursts without data loss.
The AWS Lambda function processes messages from SQS, providing scalable, on-demand compute to handle environmental data efficiently.
This solution aligns with AWS best practices for building resilient, serverless, event-driven architectures that can handle high-volume, bursty workloads.
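A small boto3 sketch of the consumer side (the queue ARN and function name are hypothetical; the API Gateway-to-SQS service integration itself is configured on the HTTP API route):

```python
import boto3

lam = boto3.client("lambda")

# Wire the SQS queue to the processing function. Lambda polls the queue and
# scales concurrent consumers with the backlog, so traffic bursts are
# absorbed by the queue instead of being dropped.
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:eu-central-1:123456789012:device-data",
    FunctionName="process-device-data",
    BatchSize=10,
)
```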
A company owns a chain of travel agencies and is running an application in the AWS Cloud. Company employees use the application to search for information about travel destinations. Destination content is updated four times each year.
Two fixed Amazon EC2 instances serve the application. The company uses an Amazon Route 53 public
hosted zone with a multivalue record of travel.example.com that returns the Elastic IP addresses for the EC2 instances. The application uses Amazon DynamoDB as its primary data store. The company uses a self-hosted Redis instance as a caching solution.
During content updates, the load on the EC2 instances and the caching solution increases drastically. This increased load has led to downtime on several occasions. A solutions architect must update the application so that the application is highly available and can handle the load that is generated by the content updates.
Which solution will meet these requirements?
- A . Set up DynamoDB Accelerator (DAX) as an in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple routing policy that targets the ALB’s DNS alias. Configure scheduled scaling for the EC2 instances before the content updates.
- B . Set up Amazon ElastiCache for Redis. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a simple routing policy that targets the CloudFront distribution’s DNS alias. Manually scale up EC2 instances before the content updates.
- C . Set up Amazon ElastiCache for Memcached. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple routing policy that targets the ALB’s DNS alias. Configure scheduled scaling for the application before the content updates.
- D . Set up DynamoDB Accelerator (DAX) as an in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a simple routing policy that targets the CloudFront distribution’s DNS alias. Manually scale up EC2 instances before the content updates.
A
Explanation:
This option allows the company to use DAX to improve the performance and reduce the latency of the DynamoDB queries by caching the results in memory [1]. By updating the application to use DAX, the company can reduce the load on the DynamoDB tables and avoid throttling errors [1]. By creating an Auto Scaling group for the EC2 instances, the company can adjust the number of instances based on the demand and ensure high availability [2]. By creating an ALB, the company can distribute the incoming traffic across multiple EC2 instances and improve fault tolerance [3]. By updating the Route 53 record to use a simple routing policy that targets the ALB’s DNS alias, the company can route users to the ALB endpoint and leverage its health checks and load balancing features [4]. By configuring scheduled scaling for the EC2 instances before the content updates, the company can anticipate and handle traffic spikes during peak periods [5].
References:
[1] What is Amazon DynamoDB Accelerator (DAX)?
[2] What is Amazon EC2 Auto Scaling?
[3] What is an Application Load Balancer?
[4] Choosing a routing policy
[5] Scheduled scaling for Amazon EC2 Auto Scaling
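A sketch of the scheduled scaling step with boto3 (the group name, capacities, and schedule are illustrative):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group out shortly before each quarterly content update so
# capacity is already in place when the extra load arrives.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="travel-app-asg",
    ScheduledActionName="pre-content-update-scale-out",
    Recurrence="0 4 1 1,4,7,10 *",  # 04:00 UTC on the 1st of Jan, Apr, Jul, Oct
    MinSize=4,
    MaxSize=8,
    DesiredCapacity=6,
)
```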
