Practice Free SAA-C03 Exam Online Questions
A company wants to create a payment processing application. The application must run when a payment record arrives in an existing Amazon S3 bucket. The application must process each payment record exactly once. The company wants to use an AWS Lambda function to process the payments.
Which solution will meet these requirements?
- A . Configure the existing S3 bucket to send object creation events to Amazon EventBridge. Configure EventBridge to route events to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure the Lambda function to run when a new event arrives in the SQS queue.
- B . Configure the existing S3 bucket to send object creation events to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the Lambda function to run when a new event arrives in the SNS topic.
- C . Configure the existing S3 bucket to send object creation events to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda function to run when a new event arrives in the SQS queue.
- D . Configure the existing S3 bucket to send object creation events directly to the Lambda function. Configure the Lambda function to handle object creation events and to process the payments.
A company runs an application on Amazon EC2 instances across multiple Availability Zones in the same AWS Region. The EC2 instances share an Amazon Elastic File System (Amazon EFS) volume that is mounted on all the instances. The EFS volume stores a variety of files such as installation media, third-party files, interface files, and other one-time files.
The company accesses some EFS files frequently and needs to retrieve the files quickly. The company accesses other files rarely. The EFS volume is multiple terabytes in size. The company needs to optimize storage costs for Amazon EFS.
Which solution will meet these requirements with the LEAST effort?
- A . Move the files to Amazon S3. Set up a lifecycle policy to move the files to S3 Glacier Flexible Retrieval.
- B . Apply a lifecycle policy to the EFS files to move the files to EFS Infrequent Access.
- C . Move the files to Amazon Elastic Block Store (Amazon EBS) Cold HDD Volumes (sc1).
- D . Move the files to Amazon S3. Set up a lifecycle policy to move the rarely-used files to S3 Glacier Deep Archive.
B
Explanation:
Amazon EFS offers an Infrequent Access (IA) storage class, which can be managed via EFS lifecycle policies. Frequently accessed files remain in the Standard storage class, while infrequently accessed files are automatically moved to the IA class, significantly reducing storage costs with minimal effort and no application changes.
Reference Extract:
"EFS lifecycle management automatically transitions files that are not accessed for a set period to the EFS Infrequent Access (IA) storage class, reducing storage costs."
Source: AWS Certified Solutions Architect ― Official Study Guide, EFS and Lifecycle Management section.
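As a rough illustration of how little effort option B requires, the following boto3 sketch applies a lifecycle policy to an existing file system; the file system ID and the 30-day threshold are assumptions for the example.

```python
# Minimal sketch (assumed IDs and thresholds): apply an EFS lifecycle policy
# so files not accessed for 30 days transition to EFS Infrequent Access.
import boto3

efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
        # Optionally move a file back to Standard the first time it is read again:
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```

No application or mount changes are needed; EFS moves files between storage classes transparently.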
A company tracks customer satisfaction by using surveys that the company hosts on its website. The surveys sometimes reach thousands of customers every hour. Survey results are currently sent in email messages to the company so company employees can manually review results and assess customer sentiment.
The company wants to automate the customer survey process. Survey results must be available for the previous 12 months.
Which solution will meet these requirements in the MOST scalable way?
- A . Send the survey results data to an Amazon API Gateway endpoint that is connected to an Amazon Simple Queue Service (Amazon SQS) queue. Create an AWS Lambda function to poll the SQS queue, call Amazon Comprehend for sentiment analysis, and save the results to an Amazon DynamoDB table. Set the TTL for all records to 365 days in the future.
- B . Send the survey results data to an API that is running on an Amazon EC2 instance. Configure the API to store the survey results as a new record in an Amazon DynamoDB table, call Amazon Comprehend for sentiment analysis, and save the results in a second DynamoDB table. Set the TTL for all records to 365 days in the future.
- C . Write the survey results data to an Amazon S3 bucket. Use S3 Event Notifications to invoke an AWS Lambda function to read the data and call Amazon Rekognition for sentiment analysis. Store the sentiment analysis results in a second S3 bucket. Use S3 Lifecycle policies on each bucket to expire objects after 365 days.
- D . Send the survey results data to an Amazon API Gateway endpoint that is connected to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the SQS queue to invoke an AWS Lambda function that calls Amazon Lex for sentiment analysis and saves the results to an Amazon DynamoDB table. Set the TTL for all records to 365 days in the future.
A
Explanation:
This solution is the most scalable and efficient way to handle large volumes of survey data while automating sentiment analysis:
API Gateway and SQS: The survey results are sent to API Gateway, which forwards the data to an SQS queue. SQS can handle large volumes of messages and ensures that messages are not lost.
AWS Lambda: Lambda is triggered by polling the SQS queue, where it processes the survey data.
Amazon Comprehend: Comprehend is used for sentiment analysis, providing insights into customer satisfaction.
DynamoDB with TTL: Results are stored in DynamoDB with a Time to Live (TTL) attribute set to expire after 365 days, automatically removing old data and reducing storage costs.
Option B (EC2 API): Running an API on EC2 requires more maintenance and scalability management compared to API Gateway.
Option C (S3 and Rekognition): Amazon Rekognition is for image and video analysis, not sentiment analysis.
Option D (Amazon Lex): Amazon Lex is used for building conversational interfaces, not sentiment analysis.
AWS Reference: Amazon Comprehend for Sentiment Analysis
Amazon SQS
DynamoDB TTL
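A minimal sketch of the Lambda consumer in option A is shown below, assuming hypothetical table and field names; it reads the SQS batch, calls Amazon Comprehend, and writes each result to DynamoDB with a TTL attribute 365 days in the future.

```python
# Hypothetical handler sketch for option A (assumed table/field names).
import json
import time
import uuid

import boto3

comprehend = boto3.client("comprehend")
table = boto3.resource("dynamodb").Table("SurveyResults")  # assumed table name

def handler(event, context):
    for record in event["Records"]:           # batch of SQS messages
        survey = json.loads(record["body"])
        sentiment = comprehend.detect_sentiment(
            Text=survey["responseText"],      # assumed field name
            LanguageCode="en",
        )
        table.put_item(
            Item={
                "surveyId": survey.get("surveyId", str(uuid.uuid4())),
                "sentiment": sentiment["Sentiment"],
                # TTL attribute: epoch seconds 365 days from now
                "expiresAt": int(time.time()) + 365 * 24 * 60 * 60,
            }
        )
```

The DynamoDB table's TTL feature must be enabled on the `expiresAt` attribute for expired records to be removed automatically.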
A company collects data from sensors. The company needs a cloud-based solution to store and transform the sensor data to make critical decisions. The solution must store the data for up to 2 days. After 2 days, the solution must delete the data. The company needs to use the transformed data in an automated workflow that has manual approval steps.
Which solution will meet these requirements?
- A . Load the data into an Amazon Simple Queue Service (Amazon SQS) queue that has a retention period of 2 days. Use an Amazon EventBridge pipe to retrieve data from the queue, transform the data, and pass the data to an AWS Step Functions workflow.
- B . Load the data into AWS DataSync. Delete the DataSync task after 2 days. Invoke an AWS Lambda function to retrieve the data, transform the data, and invoke a second Lambda function that performs the remaining workflow steps.
- C . Load the data into an Amazon Simple Notification Service (Amazon SNS) topic. Use an Amazon EventBridge pipe to retrieve the data from the topic, transform the data, and send the data to Amazon EC2 instances to perform the remaining workflow steps.
- D . Load the data into an Amazon Simple Notification Service (Amazon SNS) topic. Use an Amazon EventBridge pipe to retrieve the data from the topic and transform the data into an appropriate format for an Amazon SQS queue. Use an AWS Lambda function to poll the queue to perform the remaining workflow steps.
A
Explanation:
Amazon SQS with a 2-day retention ensures the data lives just as long as needed. EventBridge Pipes allow direct integration between event producers and consumers, with optional filtering and transformation. AWS Step Functions supports manual approval steps, which fits the workflow requirement perfectly.
Reference: AWS Documentation ― Amazon EventBridge Pipes, AWS Step Functions
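The 2-day retention requirement in option A maps directly to a queue attribute; the sketch below shows that piece only (queue name assumed), with the EventBridge pipe and Step Functions workflow configured separately.

```python
# Minimal sketch (assumed queue name): create the SQS queue with a
# message retention period of 2 days (172,800 seconds).
import boto3

sqs = boto3.client("sqs")

sqs.create_queue(
    QueueName="sensor-data-queue",  # assumed name
    Attributes={"MessageRetentionPeriod": "172800"},  # 2 days, in seconds
)
```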
A company needs to accommodate traffic for a web application that the company hosts on AWS, especially during peak usage hours.
The application uses Amazon EC2 instances as web servers, an Amazon RDS DB instance for database operations, and an Amazon S3 bucket to store transaction documents. The application struggles to scale effectively and experiences performance issues.
The company wants to improve the scalability of the application and prevent future performance issues. The company also wants to improve global access speeds to the transaction documents for the company’s global users.
Which solution will meet these requirements?
- A . Place the EC2 instances in Auto Scaling groups to scale appropriately during peak usage hours. Use Amazon RDS read replicas to improve database read performance. Deploy an Amazon CloudFront distribution that uses Amazon S3 as the origin.
- B . Increase the size of the EC2 instances to provide more compute capacity. Use Amazon ElastiCache to reduce database read loads. Use AWS Global Accelerator to optimize the delivery of the transaction documents that are in the S3 bucket.
- C . Transition workloads from the EC2 instances to AWS Lambda functions to scale in response to the usage peaks. Migrate the database to an Amazon Aurora global database to provide cross-Region reads. Use AWS Global Accelerator to deliver the transaction documents that are in the S3 bucket.
- D . Convert the application architecture to use Amazon Elastic Container Service (Amazon ECS) containers. Configure a Multi-AZ deployment of Amazon RDS to support database operations. Replicate the transaction documents that are in the S3 bucket across multiple AWS Regions.
A
Explanation:
This question centers on improving scalability and global access performance.
Auto Scaling groups enable EC2 instances to scale dynamically in response to demand, ensuring availability during peak hours without manual intervention. Amazon RDS read replicas offload read traffic, improving read throughput and reducing latency on the primary database instance. Deploying Amazon CloudFront with S3 as origin accelerates delivery of static transaction documents globally by caching content at edge locations, reducing latency for users worldwide.
Option B focuses on vertical scaling (larger instances) and caching with ElastiCache, but it does not address global content delivery optimally. AWS Global Accelerator accelerates network traffic but is better suited for accelerating TCP and UDP traffic; CloudFront is generally preferred for HTTP content delivery.
Option C migrates workloads to Lambda and Aurora global databases, which is an advanced and potentially costly redesign that may not be necessary.
Option D suggests moving to ECS and multi-AZ RDS but does not address global content delivery efficiently.
Therefore, option A uses proven scalability and caching best practices aligned with AWS Well-Architected Framework pillars for performance and operational excellence.
Reference: AWS Well-Architected Framework ― Performance Efficiency Pillar
(https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
Amazon EC2 Auto Scaling (https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html)
Amazon RDS Read Replicas (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html)
Amazon CloudFront Overview (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html)
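To make the Auto Scaling part of option A concrete, here is an illustrative target-tracking policy sketch; the group name and the 50% CPU target are assumptions for the example.

```python
# Illustrative sketch (assumed names/values): keep average CPU of the web
# tier's Auto Scaling group near 50% so it scales out during peak hours.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",  # assumed group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # assumed target utilization
    },
)
```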
A solutions architect is provisioning an Amazon Elastic File System (Amazon EFS) file system to provide shared storage across multiple Amazon EC2 instances. The instances all exist in the same VPC across multiple Availability Zones. There are two instances in each Availability Zone. The solutions architect must make the file system accessible to each instance with the lowest possible latency.
Which solution will meet these requirements?
- A . Create a mount target for the EFS file system in the VPC. Use the mount target to mount the file system on each of the instances.
- B . Create a mount target for the EFS file system in one Availability Zone of the VPC. Use the mount target to mount the file system on the instances in that Availability Zone. Share the directory with the other instances.
- C . Create a mount target for each instance. Use each mount target to mount the EFS file system on each respective instance.
- D . Create a mount target in each Availability Zone of the VPC. Use the mount target to mount the EFS file system on the instances in the respective Availability Zone.
D
Explanation:
Amazon EFS requires a mount target in each Availability Zone where EC2 instances access the file system. This is because each mount target provides an elastic network interface in the subnet and AZ, reducing network latency by allowing EC2 instances to communicate locally with the EFS mount target. Creating a mount target in each AZ optimizes file system access performance and availability. Instances mount the EFS file system via the mount target in their respective AZ, which provides the lowest possible latency and avoids cross-AZ traffic.
Option A, with only a single mount target in the VPC, will cause cross-AZ traffic for instances in other AZs, increasing latency and potentially incurring data transfer costs.
Option B is incomplete and introduces complexity with sharing directories across instances.
Option C is invalid because mount targets are per AZ and per subnet, not per instance.
Reference: Amazon EFS Overview (https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) Creating Mount Targets (https://docs.aws.amazon.com/efs/latest/ug/manage-fs-access.html#creating-mount-targets)
AWS Well-Architected Framework ― Performance Efficiency Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
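A hypothetical sketch of option D is shown below: one mount target is created per Availability Zone by passing one subnet from each AZ; all IDs are placeholders.

```python
# Hypothetical sketch (placeholder IDs): create one EFS mount target per AZ.
import boto3

efs = boto3.client("efs")

# One subnet per Availability Zone in the VPC (placeholder IDs).
subnets_by_az = ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]

for subnet_id in subnets_by_az:
    efs.create_mount_target(
        FileSystemId="fs-0123456789abcdef0",       # placeholder file system ID
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],   # must allow NFS (TCP 2049)
    )
```

Instances in each AZ then mount the file system through the mount target in their own AZ, keeping NFS traffic local.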
A company stores a large dataset for an online advertising business in an Amazon RDS for MySQL DB instance. The company wants to run business reporting queries on the data without affecting write operations to the DB instance.
Which solution will meet these requirements?
- A . Deploy RDS read replicas to process the business reporting queries.
- B . Scale out the DB instance horizontally by placing the instance behind an Elastic Load Balancing (ELB) load balancer.
- C . Scale up the DB instance to a larger instance type to handle write operations and reporting queries.
- D . Configure Amazon CloudWatch to monitor the DB instance. Deploy standby DB instances when a latency metric threshold is exceeded.
A
Explanation:
Amazon RDS for MySQL supports read replicas that offload read-intensive workloads such as reporting, leaving the primary instance free for write operations.
“You can use Amazon RDS read replicas to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.” ― Working with Read Replicas
This is the recommended approach for minimizing performance impact on the primary DB instance.
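As a minimal sketch of option A (instance identifiers are assumptions), a read replica can be created from the existing primary and the reporting tool pointed at the replica's endpoint:

```python
# Minimal sketch (assumed identifiers): create a read replica for reporting.
import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ads-db-reporting-replica",  # assumed replica name
    SourceDBInstanceIdentifier="ads-db-primary",      # assumed source instance
)
```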
A media company hosts a web application on AWS. The application gives users the ability to upload and view videos. The application stores the videos in an Amazon S3 bucket. The company wants to ensure that only authenticated users can upload videos. Authenticated users must have the ability to upload videos only within a specified time frame after authentication.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure the application to generate IAM temporary security credentials for authenticated users.
- B . Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.
- C . Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.
- D . Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
B
Explanation:
Pre-Signed URLs: Allow temporary access to S3 buckets, making it easy to manage time-limited upload permissions without complex operational overhead.
Lambda for Automation: Automatically generates and provides pre-signed URLs when users authenticate, minimizing manual steps and code complexity.
Least Operational Overhead: Requires no custom authentication service or deep integration with STS or Cognito.
Amazon S3 Pre-Signed URLs Documentation
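A hypothetical Lambda sketch for option B follows; the bucket name, key pattern, and 15-minute expiry are assumptions, and the function would run only after the user has authenticated.

```python
# Hypothetical sketch (assumed bucket/key/expiry): return a time-limited
# pre-signed PUT URL so an authenticated user can upload a video directly to S3.
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "video-uploads-bucket"  # assumed bucket name

def handler(event, context):
    key = f"uploads/{uuid.uuid4()}.mp4"
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=900,  # URL stops working 15 minutes after authentication
    )
    return {"statusCode": 200, "body": json.dumps({"uploadUrl": url, "key": key})}
```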
A company runs an application that stores and shares photos. Users upload the photos to an Amazon S3 bucket. Every day, users upload approximately 150 photos. The company wants to design a solution that creates a thumbnail of each new photo and stores the thumbnail in a second S3 bucket.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a long-running Amazon EMR cluster. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.
- B . Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a memory-optimized Amazon EC2 instance that is always on. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.
- C . Configure an S3 event notification to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to the second S3 bucket.
- D . Configure S3 Storage Lens to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to a second S3 bucket.
C
Explanation:
The most cost-effective and scalable solution for generating thumbnails when photos are uploaded to an S3 bucket is to use S3 event notifications to trigger an AWS Lambda function. This approach avoids the need for a long-running EC2 instance or EMR cluster, making it highly cost-effective because Lambda only charges for the time it takes to process each event.
S3 Event Notifications: Automatically triggers the Lambda function when a new photo is uploaded to the S3 bucket.
AWS Lambda: A serverless compute service that scales automatically and only charges for execution time, which makes it the most economical choice when dealing with periodic events like photo uploads.
The Lambda function can generate the thumbnail and upload it to a second S3 bucket, fulfilling the requirement efficiently.
Option A and Option B (EMR or EC2 with scheduled scripts): These are less cost-effective as they involve continuously running infrastructure, which incurs unnecessary costs.
Option D (S3 Storage Lens): S3 Storage Lens is a tool for storage analytics and is not designed for event-based photo processing.
AWS Reference: Amazon S3 Event Notifications
AWS Lambda Pricing
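Below is a hypothetical sketch of the option C Lambda function; the destination bucket name and thumbnail size are assumptions, and the Pillow library would need to be packaged with the function (for example, as a layer).

```python
# Hypothetical sketch (assumed bucket name, thumbnail size): triggered by S3
# object-creation events, builds a thumbnail and writes it to a second bucket.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = "photo-thumbnails-bucket"  # assumed destination bucket

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original photo and shrink it to at most 128x128 pixels.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original))
        image.thumbnail((128, 128))

        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")

        s3.put_object(Bucket=THUMBNAIL_BUCKET, Key=key, Body=buffer.getvalue())
```

At roughly 150 uploads per day, the function runs only a few times per hour, so the pay-per-invocation model keeps costs minimal.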
The customers of a finance company request appointments with financial advisors by sending text messages. A web application that runs on Amazon EC2 instances accepts the appointment requests. The text messages are published to an Amazon Simple Queue Service (Amazon SQS) queue through the web application. Another application that runs on EC2 instances then sends meeting invitations and meeting confirmation email messages to the customers. After successful scheduling, this application stores the meeting information in an Amazon DynamoDB database.
As the company expands, customers report that their meeting invitations are taking longer to arrive.
What should a solutions architect recommend to resolve this issue?
- A . Add a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database.
- B . Add an Amazon API Gateway API in front of the web application that accepts the appointment requests.
- C . Add an Amazon CloudFront distribution. Set the origin as the web application that accepts the appointment requests.
- D . Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the depth of the SQS queue.
D
Explanation:
For SQS-backed worker architectures, AWS recommends scaling consumers based on queue metrics (e.g., ApproximateNumberOfMessagesVisible, AgeOfOldestMessage). EC2 Auto Scaling can “scale out based on Amazon CloudWatch alarms” tied to SQS backlog, increasing worker capacity to reduce latency as demand grows. DAX (A) accelerates DynamoDB reads, not message processing. API Gateway (B) and CloudFront (C) optimize request ingress and content delivery, not the asynchronous email-sending application. The bottleneck is consumer throughput; scaling workers with an SQS-driven policy restores timely processing without downtime and follows Well-Architected patterns for decoupled, elastic systems.
Reference: Amazon SQS Developer Guide ― “Scaling consumers,” “Queue metrics”; EC2 Auto Scaling ― “Target tracking and step scaling with CloudWatch metrics”; Well-Architected ― Performance Efficiency (queue-based load leveling).
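One hedged way to express option D is a CloudWatch alarm on the queue backlog that invokes a step-scaling policy on the invitation-sender Auto Scaling group; the names, threshold, and step size below are assumptions for illustration.

```python
# Illustrative sketch (assumed names/thresholds): scale out the consumer
# Auto Scaling group when the SQS backlog grows.
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="invitation-sender-asg",  # assumed group name
    PolicyName="scale-out-on-sqs-backlog",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2}],
)

cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "appointment-requests"}],  # assumed
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100.0,  # assumed backlog threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],  # trigger the scaling policy
)
```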
