Practice Free SAA-C03 Exam Online Questions
A company wants to share data that is collected from self-driving cars with the automobile community. The data will be made available from within an Amazon S3 bucket. The company wants to minimize its cost of making this data available to other AWS accounts.
What should a solutions architect do to accomplish this goal?
- A . Create an S3 VPC endpoint for the bucket.
- B . Configure the S3 bucket to be a Requester Pays bucket.
- C . Create an Amazon CloudFront distribution in front of the S3 bucket.
- D . Require that the files be accessible only with the use of the BitTorrent protocol.
B
Explanation:
The Requester Pays feature in Amazon S3 allows the bucket owner to configure the bucket so that the requester, rather than the bucket owner, pays for data transfer and request costs. This is ideal for sharing large datasets with the public or with other AWS accounts when you want to minimize your own data transfer expenses.
Reference Extract from AWS Documentation / Study Guide:
"Requester Pays buckets allow you to configure the bucket so that the requester instead of the bucket owner pays the cost of the request and the data download from the bucket."
Source: AWS Certified Solutions Architect Official Study Guide, S3 Cost Management section.
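Requester Pays is a single bucket-level setting. A minimal boto3-style sketch of enabling and checking it (the bucket name is a placeholder, and the client is passed in so the functions can be exercised without AWS credentials):

```python
def enable_requester_pays(s3_client, bucket_name):
    """Configure the bucket so requesters pay transfer and request costs."""
    s3_client.put_bucket_request_payment(
        Bucket=bucket_name,
        RequestPaymentConfiguration={"Payer": "Requester"},
    )

def check_requester_pays(s3_client, bucket_name):
    """Return True if the bucket is configured as a Requester Pays bucket."""
    response = s3_client.get_bucket_request_payment(Bucket=bucket_name)
    return response["Payer"] == "Requester"
```

Note that once Requester Pays is enabled, requesters must acknowledge the charge by passing `RequestPayer='requester'` on their `GetObject` calls, or the request is denied.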
A company is developing a latency-sensitive application. Part of the application includes several AWS Lambda functions that need to initialize as quickly as possible. The Lambda functions are written in Java and contain initialization code outside the handlers to load libraries, initialize classes, and generate unique IDs.
Which solution will meet the startup performance requirement MOST cost-effectively?
- A . Move all the initialization code to the handlers for each Lambda function. Activate Lambda SnapStart for each Lambda function. Configure SnapStart to reference the $LATEST version of each Lambda function.
- B . Publish a version of each Lambda function. Create an alias for each Lambda function. Configure each alias to point to its corresponding version. Set up a provisioned concurrency configuration for each Lambda function to point to the corresponding alias.
- C . Publish a version of each Lambda function. Set up a provisioned concurrency configuration for each Lambda function to point to the corresponding version. Activate Lambda SnapStart for the published versions of the Lambda functions.
- D . Update the Lambda functions to add a pre-snapshot hook. Move the code that generates unique IDs into the handlers. Publish a version of each Lambda function. Activate Lambda SnapStart for the published versions of the Lambda functions.
D
Explanation:
AWS Lambda SnapStart is designed to improve the cold start performance of Java Lambda functions by initializing the function, taking a snapshot of the execution environment, and then reusing that snapshot for subsequent invocations. However, some code (such as code that generates unique IDs or session-specific data) should run during each invocation, not during the snapshot process. By using a pre-snapshot hook and moving the unique ID generation into the handler, you ensure that non-deterministic or per-invocation code is executed correctly, while the rest of the initialization benefits from SnapStart. This delivers the lowest latency and cost, as you do not need to pay for provisioned concurrency.
AWS Documentation Extract:
"Lambda SnapStart is ideal for Java functions with long cold starts due to heavy initialization. Move non-deterministic code such as unique ID generation to the handler, and use pre-snapshot hooks to customize what gets snapshotted. SnapStart works only for published versions, not $LATEST."
(Source: AWS Lambda documentation, Using SnapStart for Java Functions)
Other options:
A: SnapStart is not supported on the $LATEST version; it must be activated on a published version. Moving all initialization into the handlers would also slow every invocation.
B: Provisioned concurrency removes cold starts but is more expensive than SnapStart for most workloads.
C: SnapStart does require published versions, but combining it with provisioned concurrency is unnecessary and adds cost.
Reference: AWS Certified Solutions Architect Official Study Guide, Lambda Performance section.
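The core SnapStart pitfall can be illustrated in a few lines: code outside the handler runs once, before the snapshot, and whatever it computes is frozen into the snapshot and shared by every restored environment. A minimal sketch of the pattern (in Python for brevity; the question's functions are Java, and `HEAVY_CONFIG` stands in for library and class initialization):

```python
import uuid

# Initialization outside the handler runs once, before the snapshot.
# Anything computed here is baked into the snapshot and shared by every
# restored execution environment: safe for libraries and class setup,
# unsafe for unique IDs or other per-invocation values.
HEAVY_CONFIG = {"initialized": True}  # stands in for expensive library loading

def handler(event, context):
    # Non-deterministic, per-invocation work belongs inside the handler so
    # each restored environment produces a fresh value.
    request_id = str(uuid.uuid4())
    return {"request_id": request_id, "config_loaded": HEAVY_CONFIG["initialized"]}
```

Two consecutive invocations reuse the snapshotted `HEAVY_CONFIG` but produce distinct `request_id` values, which is exactly the separation option D describes.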
A solutions architect is designing the architecture for a two-tier web application. The web application consists of an internet-facing Application Load Balancer (ALB) that forwards traffic to an Auto Scaling group of Amazon EC2 instances.
The EC2 instances must be able to access an Amazon RDS database. The company does not want to rely solely on security groups or network ACLs. Only the minimum resources that are necessary should be routable from the internet.
Which network design meets these requirements?
- A . Place the ALB, EC2 instances, and RDS database in private subnets.
- B . Place the ALB in public subnets. Place the EC2 instances and RDS database in private subnets.
- C . Place the ALB and EC2 instances in public subnets. Place the RDS database in private subnets.
- D . Place the ALB outside the VPC. Place the EC2 instances and RDS database in private subnets.
B
Explanation:
The ALB must be in a public subnet to receive internet traffic. The EC2 instances and the RDS database should be in private subnets to prevent direct internet access, minimizing the attack surface. This aligns with AWS security best practices for web application architectures.
Reference Extract:
"Internet-facing ALBs should be placed in public subnets; EC2 instances and RDS databases should be in private subnets to restrict direct internet access."
Source: AWS Certified Solutions Architect Official Study Guide, Network Security and Design section.
A company is building a serverless application that processes large volumes of data from a mobile app. A Lambda function processes the data and stores it in DynamoDB. The company must ensure the application can recover from failures and continue processing without losing records.
Which solution will meet these requirements?
- A . Configure the Lambda function with a dead-letter queue (DLQ) using SQS. Retry failed records from the DLQ with exponential backoff.
- B . Configure the Lambda function to read records from Amazon Data Firehose. Replay Firehose records in case of failures.
- C . Use Amazon OpenSearch Service to store failed records. Configure Lambda to retry failed records from OpenSearch. Use EventBridge for orchestration.
- D . Use Amazon SNS to store failed records. Configure Lambda to retry records from SNS. Use API Gateway to orchestrate retries.
A
Explanation:
AWS documentation states that when Lambda processes events, a dead-letter queue (DLQ) using Amazon SQS is the correct mechanism to capture failed invocations for later reprocessing.
SQS provides durable buffering and decoupling, allowing records to be retried safely without loss.
Amazon Data Firehose (Option B) is a delivery service that does not support replaying records or arbitrary replay control. SNS (Option D) is not a durable message store. OpenSearch (Option C) is not intended to serve as a durable retry queue.
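The "retry failed records with exponential backoff" half of option A can be sketched as a small generic helper (the function and parameter names are illustrative, not an AWS API):

```python
import time

def retry_with_backoff(process, record, max_attempts=5, base_delay=1.0):
    """Retry a processing function with exponential backoff.

    Delays grow as base_delay * 2**attempt (1s, 2s, 4s, ...). If every
    attempt fails, the last exception is re-raised so the record stays on
    the dead-letter queue for a later redrive.
    """
    for attempt in range(max_attempts):
        try:
            return process(record)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In practice the routing into the DLQ itself needs no code: the SQS redrive policy's `maxReceiveCount` moves a message to the DLQ automatically after that many failed receives.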
An online video game company must maintain ultra-low latency for its game servers. The game servers run on Amazon EC2 instances. The company needs a solution that can handle millions of UDP internet traffic requests each second.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure an Application Load Balancer with the required protocol and ports. Specify the EC2 instances as targets.
- B . Configure a Gateway Load Balancer for the internet traffic. Specify the EC2 instances as targets.
- C . Configure a Network Load Balancer with the required protocol and ports. Specify the EC2 instances as targets.
- D . Launch identical game servers in separate Regions. Route traffic to both sets of servers.
C
Explanation:
The requirements are UDP support, ultra-low latency, and the ability to handle millions of requests per second in a cost-effective manner. Network Load Balancer (NLB) is the AWS service designed for high-performance Layer 4 load balancing, including TCP, TLS, and UDP. NLB can scale to very high throughput and connection rates while maintaining low latency, which is critical for real-time gaming traffic.
Option C is correct because NLB supports UDP listeners and forwards traffic directly to EC2 targets with minimal processing overhead. This yields lower latency than Layer 7 solutions and is well suited for game protocols that are latency-sensitive and often use UDP for fast, lightweight communication. NLB also provides static IP addresses per Availability Zone and integrates cleanly with Auto Scaling groups.
Option A is incorrect because an Application Load Balancer operates at Layer 7 (HTTP/HTTPS) and does not handle raw UDP traffic; it’s optimized for web applications, not gaming protocols.
Option B is incorrect because Gateway Load Balancer is intended to deploy and scale third-party virtual appliances (such as firewalls) and uses Geneve encapsulation; it is not meant for distributing general internet game traffic to EC2 game servers.
Option D adds significant cost and complexity by deploying multi-Region fleets. It may help global latency in some scenarios, but the question asks for the most cost-effective way to handle massive UDP traffic volume; multi-Region duplication is typically not the first choice for that requirement alone.
Therefore, C (NLB with UDP) best meets the scale and latency requirements with the most efficient, purpose-built AWS load balancing service.
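The UDP requirement maps directly onto listener configuration: NLB listeners accept `UDP` as a protocol, while ALB listeners are HTTP/HTTPS only. A boto3-style sketch (the ARNs and port 7777 are illustrative, and the client is passed in so the function can be exercised without AWS credentials):

```python
def create_udp_listener(elbv2_client, load_balancer_arn, target_group_arn, port=7777):
    """Attach a UDP listener to a Network Load Balancer for game traffic."""
    response = elbv2_client.create_listener(
        LoadBalancerArn=load_balancer_arn,
        Protocol="UDP",  # supported on NLB only; ALB listeners are Layer 7
        Port=port,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )
    return response["Listeners"][0]["ListenerArn"]
```

The target group referenced here would also need `Protocol='UDP'`, since listener and target group protocols must match at Layer 4.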
A company has an ecommerce application that users access through multiple mobile apps and web applications. The company needs a solution that will receive requests from the mobile apps and web applications through an API.
Request traffic volume varies significantly throughout each day. Traffic spikes during sales events. The solution must be loosely coupled and ensure that no requests are lost.
Which solution will meet these requirements?
- A . Create an Application Load Balancer (ALB). Create an AWS Elastic Beanstalk endpoint to process the requests. Add the Elastic Beanstalk endpoint to the target group of the ALB.
- B . Set up an Amazon API Gateway REST API with an integration to an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue. Create an AWS Lambda function to poll the queue to process the requests.
- C . Create an Application Load Balancer (ALB). Create an AWS Lambda function to process the requests. Add the Lambda function as a target of the ALB.
- D . Set up an Amazon API Gateway HTTP API with an integration to an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function to process the requests. Subscribe the function to the SNS topic to process the requests.
B
Explanation:
Why Option B is Correct:
Amazon SQS: Ensures no requests are lost, even during traffic spikes.
API Gateway: Handles dynamic traffic patterns efficiently, integrating with SQS for asynchronous processing.
Lambda: Polls the queue and processes requests in a serverless and scalable manner. Dead-Letter Queue (DLQ): Ensures failed messages are retried or logged for debugging.
Why Other Options Are Not Ideal:
Option A: An ALB in front of Elastic Beanstalk provides no queue-based buffering, so requests can be lost during traffic spikes.
Option C: ALB to Lambda does not provide buffering for traffic spikes, risking request loss.
Option D: SNS is designed for push-based notifications and does not durably store messages for later retrieval, so it cannot guarantee that no requests are lost.
Reference: AWS Documentation, Amazon SQS
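In the SQS-to-Lambda half of option B, the function receives messages in batches; returning `batchItemFailures` (a partial batch response) tells SQS to retry only the failed records, so successful ones are not reprocessed. A sketch of such a handler (`process_request` is a hypothetical stand-in for the application logic):

```python
import json

def process_request(body):
    """Hypothetical application-specific request handling."""
    if body.get("fail"):
        raise ValueError("processing failed")
    return body

def handler(event, context):
    # Report per-message failures so SQS retries only those records;
    # after maxReceiveCount failed receives they move to the DLQ.
    failures = []
    for record in event["Records"]:
        try:
            process_request(json.loads(record["body"]))
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Partial batch responses must be enabled on the event source mapping (`ReportBatchItemFailures`); otherwise any raised exception causes the whole batch to be retried.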
A company hosts an industrial control application that receives sensor input through Amazon Kinesis Data Streams. The application needs to support new sensors for real-time anomaly detection in monitored equipment.
The company wants to integrate new sensors in a loosely-coupled, fully managed, and serverless way. The company cannot modify the application code.
Which solution will meet these requirements?
- A . Forward the existing stream in Kinesis Data Streams to Amazon Managed Service for Apache Flink for anomaly detection. Use a second stream in Kinesis Data Streams to send the Flink output to the application.
- B . Use Amazon Data Firehose to stream data to Amazon S3. Use Amazon Redshift Spectrum to perform anomaly detection on the S3 data. Use S3 Event Notifications to invoke an AWS Lambda function that sends analyzed data to the application through a second stream in Kinesis Data Streams.
- C . Configure Amazon EC2 instances in an Auto Scaling group to consume data from the data stream and to perform anomaly detection. Create a second stream in Kinesis Data Streams to send data from the EC2 instances to the application.
- D . Configure an Amazon Elastic Container Service (Amazon ECS) task that uses Amazon EC2 instances to consume data from the data stream and to perform anomaly detection. Create a second stream in Kinesis Data Streams to send data from the containers to the application.
A
Explanation:
Amazon Managed Service for Apache Flink (formerly Kinesis Data Analytics) is a fully managed, serverless service for real-time processing of streaming data. You can consume data from Kinesis Data Streams, perform anomaly detection, and then output the results to another Kinesis stream. This approach is loosely coupled, fully managed, and does not require modifying the application code.
AWS Documentation Extract:
“Amazon Managed Service for Apache Flink enables you to process streaming data in real time, integrating with Kinesis Data Streams as source and sink, and is fully managed and serverless.”
(Source: Apache Flink on AWS documentation)
B: S3/Redshift is not real-time and adds complexity.
C, D: EC2/ECS solutions are not serverless or fully managed.
Reference: AWS Certified Solutions Architect Official Study Guide, Real-Time Analytics.
A company has a serverless web application that is comprised of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application’s ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.
Which solution will meet these requirements?
- A . Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.
- B . Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.
- C . Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.
- D . Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.
A
Explanation:
Provisioned Concurrency:
AWS Lambda’s provisioned concurrency ensures that a predefined number of execution environments are pre-warmed and ready to handle requests, reducing latency during traffic spikes.
This solution optimizes costs during low-traffic periods when combined with AWS Application Auto Scaling to dynamically adjust the provisioned concurrency based on demand.
Incorrect Options Analysis:
Option B: Switching to EC2 would increase complexity and cost for a serverless application.
Option C: A fixed concurrency level may result in over-provisioning during low-traffic periods, leading to higher costs.
Option D: Periodically warming functions does not effectively handle sudden spikes in traffic.
Reference: AWS Lambda Provisioned Concurrency
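Wiring Application Auto Scaling to provisioned concurrency is two API calls: register the function alias as a scalable target, then attach a target-tracking policy on provisioned-concurrency utilization. A boto3-style sketch (the alias name `live`, the capacity limits, and the 0.7 target are illustrative; the client is passed in so the function can be exercised without AWS credentials):

```python
def configure_concurrency_scaling(autoscaling_client, function_name, alias="live",
                                  min_capacity=1, max_capacity=100):
    """Register a Lambda alias for provisioned-concurrency target tracking."""
    resource_id = f"function:{function_name}:{alias}"
    autoscaling_client.register_scalable_target(
        ServiceNamespace="lambda",
        ResourceId=resource_id,
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        MinCapacity=min_capacity,
        MaxCapacity=max_capacity,
    )
    autoscaling_client.put_scaling_policy(
        PolicyName=f"{function_name}-pc-target-tracking",
        ServiceNamespace="lambda",
        ResourceId=resource_id,
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 0.7,  # aim for ~70% utilization of provisioned concurrency
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
            },
        },
    )
    return resource_id
```

Scaling provisioned concurrency down toward `min_capacity` during quiet periods is what keeps this option cheaper than the fixed level in option C.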
A company runs production workloads in its AWS account. Multiple teams create and maintain the workloads.
The company needs to be able to detect changes in resource configurations. The company needs to capture changes as configuration items without changing or modifying the existing resources.
Which solution will meet these requirements?
- A . Use AWS Config. Start the configuration recorder for AWS resources to detect changes in resource configurations.
- B . Use AWS CloudFormation. Initiate drift detection to capture changes in resource configurations.
- C . Use Amazon Detective to detect, analyze, and investigate changes in resource configurations.
- D . Use AWS Audit Manager to capture management events and global service events for resource configurations.
A
Explanation:
AWS Config is a service designed to assess, audit, and evaluate the configurations of AWS resources. It continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. By starting a configuration recorder, AWS Config captures changes to supported resource types as configuration items, without modifying any of the existing resources. This provides a full history of configuration changes and is intended for exactly this use case.
AWS Documentation Extract:
“AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so you can see how the configurations and relationships change over time.”
“You can start the configuration recorder, which will record the configuration changes of the supported resources in your AWS account.”
(Source: AWS Config documentation, What is AWS Config?)
Other options:
B: CloudFormation drift detection only works for resources created and managed by CloudFormation and requires stacks.
C: Amazon Detective is used for analyzing and investigating security findings, not for resource configuration tracking.
D: AWS Audit Manager is used for automating evidence collection to help with audits, not for tracking resource configurations.
Reference: AWS Certified Solutions Architect Official Study Guide, Chapter on Monitoring and Auditing.
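Starting the recorder is itself just two API calls: define the recorder with an IAM role, then start it. A boto3-style sketch (the role ARN is a placeholder, and the client is passed in so the function can be exercised without AWS credentials):

```python
def start_config_recorder(config_client, role_arn, recorder_name="default"):
    """Create and start an AWS Config recorder for all supported resources."""
    config_client.put_configuration_recorder(
        ConfigurationRecorder={
            "name": recorder_name,
            "roleARN": role_arn,  # role AWS Config assumes to read resource state
            "recordingGroup": {
                "allSupported": True,
                "includeGlobalResourceTypes": True,
            },
        }
    )
    config_client.start_configuration_recorder(ConfigurationRecorderName=recorder_name)
```

In practice AWS Config also requires a delivery channel (an S3 bucket, set up with `put_delivery_channel`) before the recorder can start delivering configuration items.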
A solutions architect is designing the architecture for a company website that is composed of static content. The company’s target customers are located in the United States and Europe.
Which architecture should the solutions architect recommend to MINIMIZE cost?
- A . Store the website files on Amazon S3 in the us-east-2 Region. Use an Amazon CloudFront distribution with the price class configured to limit the edge locations in use.
- B . Store the website files on Amazon S3 in the us-east-2 Region. Use an Amazon CloudFront distribution with the price class configured to maximize the use of edge locations.
- C . Store the website files on Amazon S3 in the us-east-2 Region and the eu-west-1 Region. Use an Amazon CloudFront geolocation routing policy to route requests to the closest Region to the user.
- D . Store the website files on Amazon S3 in the us-east-2 Region and the eu-west-1 Region. Use an Amazon CloudFront distribution with an Amazon Route 53 latency routing policy to route requests to the closest Region to the user.
A
Explanation:
The question focuses on minimizing costs while serving static content to users in the US and Europe.
Option A uses a single S3 bucket and configures CloudFront to limit edge locations, reducing costs by using fewer edge locations while still improving performance.
Option B maximizes edge locations, which increases costs unnecessarily.
Options C and D involve storing data in multiple Regions, which increases storage and operational costs. Thus, Option A is the most cost-effective solution.
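The price class is a single field in the distribution configuration; `PriceClass_100` restricts CloudFront to its least expensive edge-location tier, which covers North America and Europe and so matches the target audience here. An abbreviated sketch of the relevant fragment (names are placeholders, and a real `create_distribution` call needs additional required fields such as a complete default cache behavior):

```python
def build_distribution_config(origin_domain, price_class="PriceClass_100"):
    """Build a minimal CloudFront distribution config limited to the
    cheapest edge-location tier (North America and Europe)."""
    return {
        "CallerReference": "static-site-example",
        "Comment": "Static site for US/Europe users",
        "Enabled": True,
        "PriceClass": price_class,  # PriceClass_100 = least expensive tier
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": origin_domain,
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
        },
    }
```

Switching `price_class` to `PriceClass_All` would serve every edge location worldwide at higher cost, which is the trade-off option B makes.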
