Practice Free SAA-C03 Exam Online Questions
A company is migrating its multi-tier on-premises application to AWS. The application consists of a single-node MySQL database and a multi-node web tier. The company must minimize changes to the application during the migration. The company wants to improve application resiliency after the migration.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer.
- B . Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load Balancer.
- C . Migrate the database to an Amazon RDS Multi-AZ deployment.
- D . Migrate the web tier to an AWS Lambda function.
- E . Migrate the database to an Amazon DynamoDB table.
A, C
Explanation:
An Auto Scaling group is a collection of EC2 instances that share similar characteristics and can be scaled in or out automatically based on demand. An Auto Scaling group can be placed behind an Application Load Balancer, a type of Elastic Load Balancing load balancer that distributes incoming traffic across multiple targets in multiple Availability Zones. This improves the resiliency of the web tier by providing high availability, scalability, and fault tolerance.
An Amazon RDS Multi-AZ deployment automatically creates a primary database instance and synchronously replicates the data to a standby instance in a different Availability Zone. When a failure occurs, Amazon RDS fails over to the standby instance without manual intervention. This improves the resiliency of the database tier by providing data redundancy, backup support, and availability. Together, these two steps meet the requirements with minimal changes to the application during the migration.
Reference:
- Amazon EC2 Auto Scaling groups (concept and benefits)
- Application Load Balancers (overview and benefits)
- Amazon RDS Multi-AZ deployments (how they work and their benefits)
A solutions architect is designing the architecture for a software demonstration environment. The environment will run on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The system will experience significant increases in traffic during working hours but is not required to operate on weekends.
Which combination of actions should the solutions architect take to ensure that the system can scale to meet demand? (Select TWO)
- A . Use AWS Auto Scaling to adjust the ALB capacity based on request rate
- B . Use AWS Auto Scaling to scale the capacity of the VPC internet gateway
- C . Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions
- D . Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization
- E . Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
D, E
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#target-tracking-choose-metrics
A target tracking scaling policy is a type of dynamic scaling policy that adjusts the capacity of an Auto Scaling group based on a specified metric and a target value. It automatically scales the Auto Scaling group out or in to keep the actual metric value at or near the target value. A target tracking scaling policy is suitable for scenarios where the load on the application changes frequently and unpredictably, such as during working hours.
To meet the requirements of the scenario, the solutions architect should use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization. Instance CPU utilization is a common metric that reflects the demand on the application. The solutions architect should specify a target value that represents the ideal average CPU utilization level for the application, such as 50 percent. The Auto Scaling group will then scale out or in to maintain that level of CPU utilization.
Scheduled scaling is a type of scaling policy that performs scaling actions based on a date and time. Scheduled scaling is suitable for scenarios where the load on the application changes periodically and predictably, such as on weekends.
To meet the requirements of the scenario, the solutions architect should also use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. This way, the Auto Scaling group terminates all instances on weekends, when they are not required to operate. The solutions architect should revert to the default values at the start of the week so that the Auto Scaling group can resume normal operation.
A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.
Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.
The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the applications use is below the provisioned capacity.
How should the developer resolve this issue?
- A . Change the capacity mode from provisioned to on-demand.
- B . Double the number of shards until the throttling errors stop occurring.
- C . Change the partition key from service name to creation timestamp.
- D . Use a separate Kinesis stream for each service to generate the logs.
C
Explanation:
Partition Key Issue:
Using "service name" as the partition key results in uneven data distribution. Some shards may become hot due to excessive logs from certain services, leading to throttling errors.
Changing the partition key to "creation timestamp" ensures a more even distribution of records across shards.
Incorrect Options Analysis:
Option A: On-demand capacity mode eliminates throughput management but is more expensive and does not address the root cause.
Option B: Adding more shards does not solve the issue if the partition key still creates hot shards.
Option D: Using separate streams increases complexity and is unnecessary.
Reference: Kinesis Data Streams Partition Key Best Practices
A company is hosting a three-tier ecommerce application in the AWS Cloud. The company hosts the website on Amazon S3 and integrates the website with an API that handles sales requests. The company hosts the API on three Amazon EC2 instances behind an Application Load Balancer (ALB). The API consists of static and dynamic front-end content along with backend workers that process sales requests asynchronously.
The company is expecting a significant and sudden increase in the number of sales requests during events for the launch of new products.
What should a solutions architect recommend to ensure that all the requests are processed successfully?
- A . Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic.
- B . Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic.
- C . Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle.
- D . Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances.
B
Explanation:
This option is the most efficient because it uses Amazon CloudFront, a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. Serving the static content through a CloudFront distribution reduces the load on the EC2 instances and improves the performance and availability of the website. Placing the EC2 instances in an Auto Scaling group that launches new instances based on network traffic automatically adjusts the compute capacity based on load. This solution meets the requirement of ensuring that all the requests are processed successfully during events for the launch of new products.
Option A is less efficient because it uses a CloudFront distribution for the dynamic content, which is not necessary as the dynamic content is already handled by the API on the EC2 instances. It also increases the number of EC2 instances to handle the increase in traffic, which could incur higher costs and complexity than using an Auto Scaling group.
Option C is less efficient because it places an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle. ElastiCache is a fully managed in-memory data store that provides sub-millisecond latency for caching and data processing, but placing it in front of the ALB introduces additional complexity and latency, and it does not scale automatically based on network traffic.
Option D is less efficient because it uses an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances, which is a way to send, store, and receive messages between software components at any volume. However, this does not provide a faster response time to the users as they have to wait for their requests to be processed by the EC2 instances.
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure.
The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
- A . Amazon S3 with Amazon CloudFront
- B . Amazon S3 Glacier with Amazon ElastiCache
- C . Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
- D . AWS Storage Gateway with Amazon ElastiCache
A
Explanation:
To store and view engineering drawings with caching support, Amazon S3 and Amazon CloudFront are suitable solutions. Amazon S3 can store any amount of data with high durability, availability, and performance. Amazon CloudFront can distribute the engineering drawings to edge locations closer to the users, which can reduce the latency and improve the user experience. Amazon CloudFront can also cache the engineering drawings at the edge locations, which can minimize the amount of time that users wait for the drawings to load.
Reference:
What Is Amazon S3?
What Is Amazon CloudFront?
An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3.
- B . Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
- C . Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.
- D . Use an Amazon S3 bucket to host the website’s static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
D
Explanation:
To launch a one-deal-a-day website with millisecond latency during peak hours and the least operational overhead, host the static content in an Amazon S3 bucket behind an Amazon CloudFront distribution, use Amazon API Gateway and AWS Lambda functions for the backend APIs, and store the data in Amazon DynamoDB. Every component is serverless and fully managed, so there are no instances or clusters to operate, and API Gateway, Lambda, and DynamoDB scale automatically to handle millions of requests each hour, with DynamoDB delivering consistent single-digit-millisecond latency at any scale. Therefore, option D is the correct answer.
Reference: https://aws.amazon.com/blogs/compute/building-a-serverless-multi-player-game-with-aws-lambda-and-amazon-dynamodb/
A company is migrating a production environment application to the AWS Cloud. The company uses Amazon RDS for Oracle for the database layer. The company needs to configure the database to meet the needs of high I/O-intensive workloads that require low latency and consistent throughput. The database workloads are both read intensive and write intensive.
Which solution will meet these requirements?
- A . Use a Multi-AZ DB instance deployment for the RDS for Oracle database.
- B . Configure the RDS for Oracle database to use the Provisioned IOPS SSD storage type.
- C . Configure the RDS for Oracle database to use the General Purpose SSD storage type.
- D . Enable RDS read replicas for RDS for Oracle.
B
Explanation:
Provisioned IOPS SSD (io1 or io2) is designed for I/O-intensive workloads that require low latency and consistent throughput, which is critical for transactional and production databases. It provides predictable performance, unlike General Purpose SSD, which is burst-based.
Reference: Amazon RDS Storage Types (Provisioned IOPS SSD for latency-sensitive workloads)
A company’s application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
- B . Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
- C . Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule’s target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule’s target.
- D . Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
B
Explanation:
Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between software-as-a-service (SaaS) applications, such as Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services, such as Amazon S3 and Amazon Redshift, in just a few clicks. Because AppFlow replaces the EC2-based ingestion layer entirely, it improves performance with the least operational overhead, and the S3 event notification to Amazon SNS delivers the completion notification without any servers to manage. https://aws.amazon.com/appflow/
A company stores confidential data in an Amazon Aurora PostgreSQL database in the ap-southeast-3 Region. The database is encrypted with an AWS Key Management Service (AWS KMS) customer managed key. The company was recently acquired and must securely share a backup of the database with the acquiring company’s AWS account in ap-southeast-3.
What should a solutions architect do to meet these requirements?
- A . Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company’s AWS account.
- B . Create a database snapshot. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring company’s AWS account.
- C . Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company’s AWS account to the KMS key alias. Share the snapshot with the acquiring company’s AWS account.
- D . Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3 bucket policy to allow access from the acquiring company’s AWS account.
B
Explanation:
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
There is no need to create another KMS key or to remove encryption: give the target account access to the customer managed key within the source account, then share the encrypted snapshot. https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/
A company has an application that runs on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster on Amazon EC2 instances. The application has a UI that uses Amazon DynamoDB and data services that use Amazon S3 as part of the application deployment.
The company must ensure that the EKS Pods for the UI can access only Amazon DynamoDB and that the EKS Pods for the data services can access only Amazon S3. The company uses AWS Identity and Access Management (IAM).
Which solution meets these requirements?
- A . Create separate IAM policies for Amazon S3 and DynamoDB access with the required permissions. Attach both IAM policies to the EC2 instance profile. Use role-based access control (RBAC) to control access to Amazon S3 or DynamoDB for the respective EKS Pods.
- B . Create separate IAM policies for Amazon S3 and DynamoDB access with the required permissions. Attach the Amazon S3 IAM policy directly to the EKS Pods for the data services and the DynamoDB policy to the EKS Pods for the UI.
- C . Create separate Kubernetes service accounts for the UI and data services to assume an IAM role. Attach the Amazon S3 Full Access policy to the data services account and the Amazon DynamoDB Full Access policy to the UI service account.
- D . Create separate Kubernetes service accounts for the UI and data services to assume an IAM role. Use IAM Roles for Service Accounts (IRSA) to provide access to the EKS Pods for the UI to DynamoDB and the EKS Pods for the data services to Amazon S3.