Practice Free SAP-C02 Exam Online Questions
A large company recently experienced an unexpected increase in Amazon RDS and Amazon DynamoDB costs. The company needs to increase visibility into details of AWS Billing and Cost Management. There are various accounts associated with AWS Organizations, including many development and production accounts. There is no consistent tagging strategy across the organization, but there are guidelines in place that require all infrastructure to be deployed using AWS CloudFormation with consistent tagging. Management requires cost center numbers and project ID numbers for all existing and future DynamoDB tables and RDS instances.
Which strategy should the solutions architect provide to meet these requirements?
- A . Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources.
- B . Use an AWS Config rule to alert the finance team of untagged resources. Create a centralized AWS Lambda based solution to tag untagged RDS databases and DynamoDB resources every hour using a cross-account role.
- C . Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict creation of resources that do not have the cost center and project ID tags.
- D . Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources. Update existing federated roles to restrict privileges to provision resources that do not include the cost center and project ID on the resource.
C
Explanation:
Using Tag Editor to remediate untagged resources is a best practice (page 14 of the AWS Tagging Best Practices whitepaper). However, that is where answer A stops: it does not address the requirement that "Management requires cost center numbers and project ID numbers for all existing and future DynamoDB tables and RDS instances." Answer C addresses that requirement by enforcing the tags with SCPs in the company’s AWS Organization. AWS Tagging Best Practices: https://d1.awsstatic.com/whitepapers/aws-tagging-best-practices.pdf
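A sketch of what such an SCP could look like, assuming the tag keys are named `CostCenter` and `ProjectID` (the actual key names would follow the company’s tagging guidelines). Two deny statements are used because a single `Null` condition with both keys would only deny requests missing *both* tags:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCreateWithoutCostCenter",
      "Effect": "Deny",
      "Action": ["dynamodb:CreateTable", "rds:CreateDBInstance"],
      "Resource": "*",
      "Condition": {
        "Null": { "aws:RequestTag/CostCenter": "true" }
      }
    },
    {
      "Sid": "DenyCreateWithoutProjectID",
      "Effect": "Deny",
      "Action": ["dynamodb:CreateTable", "rds:CreateDBInstance"],
      "Resource": "*",
      "Condition": {
        "Null": { "aws:RequestTag/ProjectID": "true" }
      }
    }
  ]
}
```

Both `dynamodb:CreateTable` and `rds:CreateDBInstance` support tag-on-create, so the `aws:RequestTag` condition key can evaluate the tags supplied in the creation request.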
An application uses CloudFront, App Runner, and two S3 buckets: one for static assets and one for user-uploaded content. User content is infrequently accessed after 30 days. Users are located only in Europe.
How can the company optimize cost?
- A . Expire S3 objects after 30 days.
- B . Transition S3 content to Glacier Deep Archive after 30 days.
- C . Use Spot Instances with App Runner.
- D . Add auto scaling to Aurora read replica.
- E . Use CloudFront Price Class 200 (Europe & U.S. only).
B,E
Explanation:
B: Archiving inactive user uploads to Glacier Deep Archive offers the lowest storage cost.
E: Since the users are in Europe only, using Price Class 200 reduces CloudFront delivery cost without affecting performance.
A (expiration) deletes data, so it is not suitable.
C is invalid (App Runner does not support Spot). D is unlikely to save much unless read traffic is high.
Reference: CloudFront Price Classes
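A minimal sketch of the lifecycle configuration behind answer B, assuming the rule applies to the whole user-content bucket (the rule ID and empty prefix are illustrative):

```json
{
  "Rules": [
    {
      "ID": "ArchiveUserUploadsAfter30Days",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

Unlike an expiration rule (option A), a transition rule keeps the objects retrievable, just in a cheaper storage class.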
A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.
Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times.
Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.)
- A . Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.
- B . Configure the target group health check to point at a simple HTML page instead of a product catalog page, and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
- C . Configure the target group health check to use a TCP check of the Amazon EC2 web server, and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
- D . Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
- E . Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.
A,E
Explanation:
Configuring read replicas for Amazon RDS MySQL and using the single reader endpoint in the web application can significantly reduce the load on the backend database tier, improving overall application performance. Additionally, implementing an Amazon ElastiCache cluster between the web application and RDS MySQL instances can further reduce database load by caching frequently accessed data, thereby enhancing the application’s resilience and scalability. These changes address the root cause of the outage by alleviating the database tier’s high load and preventing similar issues in the future.
AWS Documentation on Amazon RDS Read Replicas and Amazon ElastiCache provides comprehensive guidance on improving application performance and scalability by offloading read traffic from the primary database and caching common queries. These solutions are in line with AWS best practices for building resilient and scalable web applications.
A company is migrating internal business applications to Amazon EC2 and Amazon RDS in a VPC. The migration requires connecting the cloud-based applications to the on-premises internal network. The company wants to set up an AWS Site-to-Site VPN connection. The company has created two separate customer gateways. The gateways are configured for static routing and have been assigned distinct public IP addresses.
Which solution will meet these requirements?
- A . Create one virtual private gateway. Associate the virtual private gateway with the VPC. Enable route propagation for the virtual private gateway in all VPC route tables. Create two Site-to-Site VPN connections with two tunnels for each connection. Configure the Site-to-Site VPN connections to use the virtual private gateway and to use separate customer gateways.
- B . Create one customer gateway. Associate the customer gateway with the VPC. Enable route propagation for the customer gateway in all VPC route tables. Create two Site-to-Site VPN connections with two tunnels for each connection. Configure the Site-to-Site VPN connections to use the customer gateway.
- C . Create two virtual private gateways. Associate the virtual private gateways with the VPC. Enable route propagation for both customer gateways in all VPC route tables. Create two Site-to-Site VPN connections with two tunnels for each connection. Configure the Site-to-Site VPN connections to use separate virtual private gateways and separate customer gateways.
- D . Create two virtual private gateways. Associate the virtual private gateways with the VPC. Enable route propagation for both customer gateways in all VPC route tables. Create four Site-to-Site VPN connections with one tunnel for each connection. Configure the Site-to-Site VPN connections into groups of two. Configure each group to connect to separate customer gateways and separate virtual private gateways.
A
Explanation:
A VPC can have only one virtual private gateway attached at a time, which rules out B, C, and D. With a single virtual private gateway, the solution is to create two Site-to-Site VPN connections (each with its standard pair of tunnels), one per customer gateway, and enable route propagation in the VPC route tables.
A company is subject to regulatory audits of its financial information. External auditors who use a single AWS account need access to the company’s AWS account. A solutions architect must provide the auditors with secure, read-only access to the company’s AWS account. The solution must comply with AWS security best practices.
Which solution will meet these requirements?
- A . In the company’s AWS account, create resource policies for all resources in the account to grant access to the auditors’ AWS account. Assign a unique external ID to the resource policy.
- B . In the company’s AWS account, create an IAM role that trusts the auditors’ AWS account. Create an IAM policy that has the required permissions. Attach the policy to the role. Assign a unique external ID to the role’s trust policy.
- C . In the company’s AWS account, create an IAM user. Attach the required IAM policies to the IAM user. Create API access keys for the IAM user. Share the access keys with the auditors.
- D . In the company’s AWS account, create an IAM group that has the required permissions. Create an IAM user in the company’s account for each auditor. Add the IAM users to the IAM group.
B
Explanation:
This solution gives the external auditors read-only access to the company’s AWS account while complying with AWS security best practices. By creating an IAM role, which is a secure and flexible way of granting access to AWS resources, and trusting the auditors’ AWS account, the company ensures that the auditors have only the permissions required for their role and nothing more. Assigning a unique external ID to the role’s trust policy ensures that only the auditors’ AWS account can assume the role.
Reference:
AWS IAM Roles documentation: https://aws.amazon.com/iam/features/roles/
AWS IAM Best practices: https://aws.amazon.com/iam/security-best-practices/
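A sketch of the role’s trust policy from answer B, with a placeholder auditor account ID (`111122223333`) and a placeholder external ID; the actual values would be agreed with the auditors:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "audit-external-id" }
      }
    }
  ]
}
```

The read-only permissions policy (for example, the AWS managed `ReadOnlyAccess` policy) is attached to the role separately; the trust policy only controls who may assume it.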
A company runs a serverless application in a single AWS Region. The application accesses external URLs and extracts metadata from those sites. The company uses an Amazon Simple Notification Service (Amazon SNS) topic to publish URLs to an Amazon Simple Queue Service (Amazon SQS) queue. An AWS Lambda function uses the queue as an event source and processes the URLs from the queue. Results are saved to an Amazon S3 bucket.
The company wants to process each URL in other Regions to compare possible differences in site localization. URLs must be published from the existing Region. Results must be written to the existing S3 bucket in the current Region.
Which combination of changes will produce multi-Region deployment that meets these requirements? (Select TWO.)
- A . Deploy the SQS queue with the Lambda function to other Regions.
- B . Subscribe the SNS topic in each Region to the SQS queue.
- C . Subscribe the SQS queue in each Region to the SNS topics in each Region.
- D . Configure the SQS queue to publish URLs to SNS topics in each Region.
- E . Deploy the SNS topic and the Lambda function to other Regions.
A,C
Explanation:
https://docs.aws.amazon.com/sns/latest/dg/sns-cross-region-delivery.html
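SNS supports cross-Region delivery to SQS, so each Region’s queue can subscribe to the topic in the existing Region. A hedged CLI sketch, with hypothetical Region names, account ID, and resource names:

```shell
# Subscribe an SQS queue in eu-west-1 (hypothetical ARN) to the SNS topic
# in the existing Region (here assumed to be us-east-1).
aws sns subscribe \
  --region us-east-1 \
  --topic-arn arn:aws:sns:us-east-1:111122223333:url-topic \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:eu-west-1:111122223333:url-queue
```

The queue’s access policy in each Region must also allow `sqs:SendMessage` from the topic’s ARN for delivery to succeed.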
A company has applications in an AWS account that is named Source. The account is in an organization in AWS Organizations. One of the applications uses AWS Lambda functions and stores inventory data in an Amazon Aurora database. The application deploys the Lambda functions by using a deployment package. The company has configured automated backups for Aurora.
The company wants to migrate the Lambda functions and the Aurora database to a new AWS account that is named Target. The application processes critical data, so the company must minimize downtime.
Which solution will meet these requirements?
- A . Download the Lambda function deployment package from the Source account. Use the deployment package and create new Lambda functions in the Target account. Share the automated Aurora DB cluster snapshot with the Target account.
- B . Download the Lambda function deployment package from the Source account. Use the deployment package and create new Lambda functions in the Target account. Share the Aurora DB cluster with the Target account by using AWS Resource Access Manager (AWS RAM). Grant the Target account permission to clone the Aurora DB cluster.
- C . Use AWS Resource Access Manager (AWS RAM) to share the Lambda functions and the Aurora DB cluster with the Target account. Grant the Target account permission to clone the Aurora DB cluster.
- D . Use AWS Resource Access Manager (AWS RAM) to share the Lambda functions with the Target account. Share the automated Aurora DB cluster snapshot with the Target account.
B
Explanation:
This solution uses AWS Resource Access Manager (AWS RAM) and Aurora cloning to migrate the Lambda functions and the Aurora database to the Target account while minimizing downtime. The Lambda function deployment package is downloaded from the Source account and used to create new Lambda functions in the Target account. The Aurora DB cluster is shared with the Target account by using AWS RAM, and the Target account is granted permission to clone the Aurora DB cluster, allowing a new copy of the Aurora database to be created in the Target account. This approach minimizes downtime because the Target account can use the cloned Aurora database while the original Aurora database continues to serve the Source account. Note that AWS RAM cannot share Lambda functions, which rules out options C and D.
A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to support a canary release.
Which solution will meet these requirements?
- A . Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
- B . Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
- C . Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
- D . Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the deployment configuration to distribute the load.
A
Explanation:
https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/
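A hedged CLI sketch of the alias traffic shifting described in answer A, with hypothetical function, alias, and version names; here 10% of invocations are canaried to version 2 while version 1 continues to serve the rest:

```shell
# Shift 10% of "live" alias traffic to the newly published version 2.
aws lambda update-alias \
  --function-name my-function \
  --name live \
  --function-version 1 \
  --routing-config AdditionalVersionWeights={"2"=0.1}
```

If the canary metrics look healthy, a second update-alias call sets `--function-version 2` with an empty routing config to complete the rollout; if not, removing the additional weight instantly rolls traffic back to version 1.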
A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The quality assurance (QA) department needs to launch and test the application. The application environments are currently launched by the manager of the department using an AWS CloudFormation template. To launch the stack, the manager uses a role with permission to use CloudFormation, EC2, and Auto Scaling APIs. The manager wants to allow QA to launch environments, but does not want to grant broad permissions to each user.
Which set up would achieve these goals?
- A . Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to assume the manager’s role, restricting the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
- B . Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product with the existing manager’s department permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
- C . Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to use CloudFormation and restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
- D . Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA department permission to use Elastic Beanstalk only. Train users to launch Elastic Beanstalk environments with the Elastic Beanstalk CLI, passing the existing role to the environment.
B
Explanation:
AWS Service Catalog with a launch constraint lets QA users launch the environment without holding the underlying CloudFormation, EC2, and Auto Scaling permissions themselves: users need only Service Catalog permissions, and the launch constraint’s role provisions the resources on their behalf.
A company is hosting a three-tier web application in an on-premises environment. Due to a recent surge in traffic that resulted in downtime and a significant financial impact, company management has ordered that the application be moved to AWS. The application is written in .NET and has a dependency on a MySQL database. A solutions architect must design a scalable and highly available solution to meet the demand of 200,000 daily users.
Which steps should the solutions architect take to design an appropriate solution?
- A . Use AWS Elastic Beanstalk to create a new application with a web server environment and an Amazon RDS MySQL Multi-AZ DB instance. The environment should launch a Network Load Balancer (NLB) in front of an Amazon EC2 Auto Scaling group in multiple Availability Zones. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the NLB.
- B . Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group spanning three Availability Zones. The stack should launch a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the ALB.
- C . Use AWS Elastic Beanstalk to create an automatically scaling web server environment that spans two separate Regions with an Application Load Balancer (ALB) in each Region. Create a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a cross-Region read replica. Use Amazon Route 53 with a geoproximity routing policy to route traffic between the two Regions.
- D . Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon ECS cluster of Spot Instances spanning three Availability Zones. The stack should launch an Amazon RDS MySQL DB instance with a Snapshot deletion policy. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the ALB.
B
Explanation:
Using AWS CloudFormation to launch a stack with an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group spanning three Availability Zones, a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy, and an Amazon Route 53 alias record to route traffic from the company’s domain to the ALB will ensure that the web tier scales to meet demand, the database tier remains highly available across Availability Zones, and the database is protected from accidental deletion during stack operations.
