Practice Free SAA-C03 Exam Online Questions
A company runs an application as a task in an Amazon Elastic Container Service (Amazon ECS) cluster. The application must have read and write access to a specific group of Amazon S3 buckets. The S3 buckets are in the same AWS Region and AWS account as the ECS cluster. The company needs to grant the application access to the S3 buckets according to the principle of least privilege.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Add a tag to each bucket. Create an IAM policy that includes a StringEquals condition that matches the tags and values of the buckets.
- B . Create an IAM policy that lists the full Amazon Resource Name (ARN) for each S3 bucket.
- C . Attach the IAM policy to the instance role of the ECS task.
- D . Create an IAM policy that includes a wildcard Amazon Resource Name (ARN) that matches all combinations of the S3 bucket names.
- E . Attach the IAM policy to the task role of the ECS task.
B, E
Explanation:
To grant the ECS application the least privilege, you create an IAM policy that explicitly lists the ARNs of the specific S3 buckets the task should access. You then attach this policy to the task role (not the instance role), ensuring only that task has the necessary permissions. This approach avoids granting permissions to other EC2 instances or services and restricts access strictly to the required buckets.
AWS Documentation Extract:
"Attach an IAM policy granting access to specific resources to the ECS task role, not to the instance role, to ensure that only the container has access according to the principle of least privilege."
"Use explicit ARNs to control access only to specified S3 buckets."
(Source: Amazon ECS documentation, IAM Roles for Tasks)
A: Tag-based access control for S3 buckets is possible in principle, but it is less direct than listing explicit bucket ARNs and does not provide the same precise, least-privilege scoping.
C: Attaching the policy to the instance role could grant unintended permissions to all containers or services running on the instance.
D: Wildcard ARNs grant broader access than necessary.
Reference: AWS Certified Solutions Architect Official Study Guide; AWS IAM Best Practices.
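As an illustrative sketch (not the official policy document), the combination of answers B and E boils down to a policy like the one built below. The bucket names are hypothetical; note that object-level actions need the `/*` object ARN while bucket-level actions such as `s3:ListBucket` need the bare bucket ARN.

```python
import json

# Hypothetical bucket names for illustration only.
BUCKETS = ["app-data-reports", "app-data-archive"]

def build_least_privilege_policy(bucket_names):
    """Build an IAM policy that lists explicit bucket ARNs (answer B)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Object-level read/write: requires the /* object ARN.
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": [f"arn:aws:s3:::{b}/*" for b in bucket_names],
            },
            {
                # Bucket-level listing: requires the bare bucket ARN.
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{b}" for b in bucket_names],
            },
        ],
    }

print(json.dumps(build_least_privilege_policy(BUCKETS), indent=2))
```

Attaching this policy to the ECS task role (answer E) scopes the permissions to the task itself rather than to every container on the host.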
A company wants to migrate an Oracle database to AWS. The database consists of a single table that contains millions of geographic information systems (GIS) images that are high resolution and are identified by a geographic code.
When a natural disaster occurs, tens of thousands of images get updated every few minutes. Each geographic code has a single image or row that is associated with it. The company wants a solution that is highly available and scalable during such events.
Which solution meets these requirements?
- A . Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
- B . Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as the key and the image S3 URL as the value.
- C . Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load.
- D . Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
B
Explanation:
The most resilient and scalable architecture for handling millions of high-resolution images with frequent updates is to store the binary images in Amazon S3 and store their metadata or reference (geographic code and S3 URL) in Amazon DynamoDB.
From AWS Documentation:
“Store large objects like images in Amazon S3 and use Amazon DynamoDB to store metadata and references. This design pattern is scalable, highly available, and cost-effective.”
(Source: AWS Architecture Blog – Best Practices for Handling Large Objects)
Why B is correct:
Amazon S3 is designed for storing large volumes of binary data (images).
DynamoDB provides low-latency reads/writes using the geographic code as the partition key.
Highly available, serverless, and auto-scaling, suitable for disaster scenarios with bursts of activity.
Reduces pressure on the database layer by separating metadata from image storage.
Why the others are incorrect:
Option A: Storing images directly in RDS is expensive, scales poorly, and is not optimal for binary storage.
Option D: Although it offloads the images to S3, it still relies on an Oracle RDS instance for the metadata writes, which scales far less elastically than DynamoDB during update bursts.
Option C: DynamoDB is suitable, but storing actual binary image data in DynamoDB is not best practice due to item size limits (400 KB) and performance concerns.
Reference: AWS Architecture Blog – "Best Practices for Amazon S3 and DynamoDB Integration"; AWS Well-Architected Framework – Reliability Pillar; Amazon DynamoDB Developer Guide.
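The pattern behind answer B can be sketched with a plain dict standing in for the DynamoDB table: the geographic code is the partition key, and the value holds the S3 URL of the image. Bucket and key names below are hypothetical.

```python
# Sketch of the metadata-lookup pattern from answer B: images live in S3,
# and a DynamoDB-style table maps each geographic code (partition key)
# to the image's S3 URL. A plain dict stands in for DynamoDB here.

table = {}  # geographic_code -> item

def put_image_reference(geo_code, bucket, key):
    """Upsert the single item associated with a geographic code."""
    table[geo_code] = {
        "geographic_code": geo_code,            # partition key
        "image_url": f"s3://{bucket}/{key}",    # reference, not the bytes
    }

def get_image_url(geo_code):
    item = table.get(geo_code)
    return item["image_url"] if item else None

# During a disaster, updates simply overwrite the single item per code.
put_image_reference("GEO-1042", "gis-images", "GEO-1042/v1.tif")
put_image_reference("GEO-1042", "gis-images", "GEO-1042/v2.tif")  # update
```

Because each geographic code maps to exactly one item, tens of thousands of updates every few minutes are just key-value overwrites, which DynamoDB handles at scale.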
A company is designing a new application that uploads files to an Amazon S3 bucket. The uploaded files are processed to extract metadata.
Processing must take less than 5 seconds. The volume and frequency of the uploads vary from a few files each hour to hundreds of concurrent uploads.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure AWS CloudTrail trails to log Amazon S3 API calls. Use AWS AppSync to process the files.
- B . Configure a new object created S3 event notification within the bucket to invoke an AWS Lambda function to process the files.
- C . Configure Amazon Kinesis Data Streams to deliver the files to the S3 bucket. Invoke an AWS Lambda function to process the files.
- D . Deploy an Amazon EC2 instance. Create a script that lists all files in the S3 bucket and processes new files. Use a cron job that runs every minute to run the script.
B
Explanation:
Using S3 event notifications to trigger AWS Lambda for file processing is a cost-effective, serverless solution. Lambda scales automatically with upload volume, and per-file processing of under 5 seconds fits comfortably within Lambda's 15-minute maximum execution time.
Option A: AWS AppSync is designed for GraphQL APIs and is not suitable for file processing.
Option C: Kinesis Data Streams is designed for streaming records, not for delivering uploaded files to S3, and it adds unnecessary cost and complexity for this use case.
Option D: Running an EC2 instance incurs ongoing costs and is less flexible compared to Lambda.
Reference: AWS Documentation – Amazon S3 Event Notifications; AWS Lambda Overview.
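A minimal sketch of the Lambda handler in answer B, assuming the documented S3 event-notification record shape; `extract_metadata` is a hypothetical placeholder for the real processing step.

```python
# Sketch of the Lambda function invoked by the S3 event notification
# (answer B). The event structure follows the documented S3 notification
# format: Records[].s3.bucket.name and Records[].s3.object.key.

def extract_metadata(bucket, key):
    """Hypothetical placeholder for the real metadata-extraction step."""
    return {"bucket": bucket, "key": key, "size_hint": None}

def handler(event, context=None):
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        key = s3["object"]["key"]
        results.append(extract_metadata(bucket, key))
    return results

# Example payload shaped like an S3 "object created" notification.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "file1.jpg"}}}
    ]
}
```

Each upload invokes the function independently, so scaling from a few files per hour to hundreds of concurrent uploads requires no capacity planning.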
A company is using microservices to build an ecommerce application on AWS. The company wants to preserve customer transaction information after customers submit orders. The company wants to store transaction data in an Amazon Aurora database. The company expects sales volumes to vary throughout each year.
Which solution will meet these requirements?
- A . Use an Amazon API Gateway REST API to invoke an AWS Lambda function to send transaction data to the Aurora database. Send transaction data to an Amazon Simple Queue Service (Amazon SQS) queue that has a dead-letter queue. Use a second Lambda function to read from the SQS queue and to update the Aurora database.
- B . Use an Amazon API Gateway HTTP API to send transaction data to an Application Load Balancer (ALB). Use the ALB to send the transaction data to Amazon Elastic Container Service (Amazon ECS) on Amazon EC2. Use ECS tasks to store the data in Aurora database.
- C . Use an Application Load Balancer (ALB) to route transaction data to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon EKS to send the data to the Aurora database.
- D . Use Amazon Data Firehose to send transaction data to Amazon S3. Use AWS Database Migration Service (AWS DMS) to migrate the data from Amazon S3 to the Aurora database.
A
Explanation:
Analysis:
The solution must handle variable sales volumes, preserve transaction information, and store data in an Amazon Aurora database with minimal operational overhead. Using API Gateway, AWS Lambda, and SQS is the best option because it provides scalability, reliability, and resilience.
Why Option A is Correct:
API Gateway: Serves as an entry point for transaction data in a serverless, scalable manner.
AWS Lambda: Processes the transactions and sends them to Amazon SQS for queuing.
Amazon SQS: Buffers the transaction data, ensuring durability and resilience against spikes in transaction volume.
Second Lambda Function: Processes messages from the SQS queue and updates the Aurora database, decoupling the workflow for better scalability.
Dead-Letter Queue (DLQ): Ensures failed transactions are logged for later debugging or reprocessing.
Why Other Options Are Not Ideal:
Option B:
Using an ALB with ECS on EC2 introduces operational overhead, such as managing EC2 instances and scaling ECS tasks. Not cost-effective.
Option C:
EKS is highly operationally intensive and requires Kubernetes cluster management, which is unnecessary for this use case. Too complex.
Option D:
Amazon Data Firehose and DMS are not designed for real-time transactional workflows. They are better suited for data analytics pipelines. Not suitable.
Reference: AWS Documentation – Amazon API Gateway, AWS Lambda, Amazon SQS, Amazon Aurora.
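The decoupled flow in answer A can be modeled in a few lines: an in-memory deque stands in for the SQS queue, and moving a message to a dead-letter list after repeated failures mirrors how an SQS redrive policy with a `maxReceiveCount` works.

```python
from collections import deque

# Toy model of answer A: transactions are buffered in a queue, a consumer
# writes them to the database, and messages that fail repeatedly move to
# a dead-letter queue (maxReceiveCount-style) instead of being lost.

MAX_RECEIVES = 3
queue, dlq, database = deque(), [], []

def enqueue(txn):
    queue.append({"body": txn, "receives": 0})

def consume(write_to_db):
    """Drain the queue; failed messages are retried up to MAX_RECEIVES."""
    while queue:
        msg = queue.popleft()
        msg["receives"] += 1
        try:
            write_to_db(msg["body"])
            database.append(msg["body"])
        except Exception:
            if msg["receives"] >= MAX_RECEIVES:
                dlq.append(msg)       # preserved for inspection or replay
            else:
                queue.append(msg)     # returned to the queue for retry
```

Because the queue absorbs bursts, the write rate to Aurora stays smooth even when order volume spikes, and no transaction is silently dropped.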
A company wants to improve the availability and performance of its hybrid application. The application consists of a stateful TCP-based workload hosted on Amazon EC2 instances in different AWS Regions and a stateless UDP-based workload hosted on premises.
Which combination of actions should a solutions architect take to improve availability and performance? (Select TWO.)
- A . Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
- B . Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the load balancers.
- C . Configure two Application Load Balancers in each Region. The first will route to the EC2 endpoints, and the second will route to the on-premises endpoints.
- D . Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.
- E . Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure an Application Load Balancer in each Region that routes to the on-premises endpoints.
A, D
Explanation:
For improving availability and performance of the hybrid application, the following solutions are optimal:
AWS Global Accelerator (Option A): Global Accelerator provides high availability and improves performance by using the AWS global network to route user traffic to the nearest healthy endpoint (across AWS Regions). By adding the Network Load Balancers as endpoints, Global Accelerator ensures that traffic is routed efficiently to the closest endpoint, improving both availability and performance.
Network Load Balancer (Option D): The stateful TCP-based workload hosted on Amazon EC2 instances and the stateless UDP-based workload hosted on premises are best served by Network Load Balancers (NLBs). NLBs are designed to handle TCP and UDP traffic with ultra-low latency and can route traffic to both EC2 and on-premises endpoints.
Option B (CloudFront and Route 53): CloudFront is better suited for HTTP/HTTPS workloads, not for TCP/UDP-based applications.
Option C (ALB): Application Load Balancers do not support the stateless UDP-based workload, making NLBs the better choice for both TCP and UDP.
Reference: AWS Documentation – AWS Global Accelerator, Network Load Balancer.
A company’s ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database performance and ensure that the Lambda invocations do not overload the database with too many connections.
What should a solutions architect do to meet these requirements?
- A . Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
- B . Point the client driver at an RDS Proxy endpoint. Deploy the Lambda functions inside a VPC.
- C . Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
- D . Point the client driver at an RDS Proxy endpoint. Deploy the Lambda functions outside a VPC.
B
Explanation:
Unpredictable Lambda traffic can create a “connection storm” against a relational database because each concurrent Lambda invocation may open its own database connection. Relational engines like PostgreSQL have finite connection capacity, and excessive connections can degrade performance, increase latency, and exhaust database resources. Amazon RDS Proxy is designed to address this by pooling and reusing database connections, multiplexing many application requests over a smaller number of established database connections. This helps keep database performance more predictable under spiky, highly concurrent workloads such as Lambda.
Because the database is a private RDS for PostgreSQL instance, the Lambda functions must run with network access to the VPC subnets and security groups that can reach the database and the proxy. Therefore, deploying the Lambda functions inside a VPC and pointing the database client at the RDS Proxy endpoint is the correct operational pattern. RDS Proxy also improves failover handling and centralizes credential management (commonly with IAM authentication and/or AWS Secrets Manager), further reducing operational risk.
Option A uses a custom endpoint, which does not solve the fundamental connection scaling problem; it’s not a connection pooler.
Options C and D place Lambda outside the VPC, which will not work for a private database that has no public connectivity path. Even if connectivity were possible, option D would still be incomplete because the functions would need private networking to reach the proxy.
Therefore, B best meets the requirement by introducing managed connection pooling via RDS Proxy and ensuring private network reachability by running Lambda in the VPC.
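Why a proxy helps can be shown with a toy connection pool: many callers are multiplexed over a small, fixed set of connections instead of each opening its own. This is only a sketch of the pooling idea, with `FakeConnection` standing in for a real database driver.

```python
import threading

# Illustration of connection pooling (the core idea behind RDS Proxy):
# at most max_size connections exist, and callers reuse them instead of
# each opening a fresh one. FakeConnection stands in for a real driver.

class FakeConnection:
    opened = 0  # counts how many "real" connections were ever created

    def __init__(self):
        FakeConnection.opened += 1

    def execute(self, sql):
        return f"ran: {sql}"

class ConnectionPool:
    """A minimal pool: bounded size, connections returned after each use."""

    def __init__(self, max_size):
        self._sem = threading.Semaphore(max_size)  # caps concurrent use
        self._free = []
        self._lock = threading.Lock()

    def run(self, sql):
        with self._sem:  # wait until a pool slot is available
            with self._lock:
                conn = self._free.pop() if self._free else FakeConnection()
            try:
                return conn.execute(sql)
            finally:
                with self._lock:
                    self._free.append(conn)  # hand the connection back

pool = ConnectionPool(max_size=2)
for _ in range(50):  # 50 "Lambda invocations" share the pooled connections
    pool.run("SELECT 1")
```

Without pooling, those 50 calls could each open a connection; with the pool, the database never sees more than the configured maximum.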
A company wants to deploy an internal web application on AWS. The web application must be accessible only from the company’s office. The company needs to download security patches for the web application from the internet. The company has created a VPC and has configured an AWS Site-to-Site VPN connection to the company’s office. A solutions architect must design a secure architecture for the web application.
Which solution will meet these requirements?
- A . Deploy the web application on Amazon EC2 instances in public subnets behind a public Application Load Balancer (ALB). Attach an internet gateway to the VPC. Set the inbound source of the ALB’s security group to 0.0.0.0/0.
- B . Deploy the web application on Amazon EC2 instances in private subnets behind an internal Application Load Balancer (ALB). Deploy NAT gateways in public subnets. Attach an internet gateway to the VPC. Set the inbound source of the ALB’s security group to the company’s office network CIDR block.
- C . Deploy the web application on Amazon EC2 instances in public subnets behind an internal Application Load Balancer (ALB). Deploy NAT gateways in private subnets. Attach an internet gateway to the VPC. Set the outbound destination of the ALB’s security group to the company’s office network CIDR block.
- D . Deploy the web application on Amazon EC2 instances in private subnets behind a public Application Load Balancer (ALB). Attach an internet gateway to the VPC. Set the outbound destination of the ALB’s security group to 0.0.0.0/0.
B
Explanation:
Deploying the web application on EC2 instances in private subnets behind an internal ALB ensures that the application is not directly accessible from the internet. By setting up a Site-to-Site VPN connection, the application can be accessed securely from the company’s office network. Deploying NAT gateways in public subnets allows instances in private subnets to initiate outbound connections to the internet for downloading security patches.
Reference: AWS Documentation – Example: VPC with servers in private subnets and NAT.
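The key security-group rule in answer B, allowing inbound traffic only from the office network, can be sketched with Python's `ipaddress` module. The CIDR below is an example value, not one given in the question.

```python
import ipaddress

# Sketch of the inbound rule from answer B: the internal ALB's security
# group allows only source addresses inside the office network CIDR.
OFFICE_CIDR = ipaddress.ip_network("203.0.113.0/24")  # hypothetical office range

def is_allowed(source_ip):
    """Mimic the security-group check: inbound permitted only from the office."""
    return ipaddress.ip_address(source_ip) in OFFICE_CIDR
```

Any source outside the office CIDR, including all internet addresses, is implicitly denied, while the NAT gateways separately permit outbound-only internet access for patch downloads.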
A company is building a serverless application to process large video files that users upload. The application performs multiple tasks to process each video file. Processing can take up to 30 minutes for the largest files.
The company needs a scalable architecture to support the processing application.
Which solution will meet these requirements?
- A . Store the uploaded video files in Amazon Elastic File System (Amazon EFS). Configure a schedule in Amazon EventBridge Scheduler to invoke an AWS Lambda function periodically to check for new files. Configure the Lambda function to perform all the processing tasks.
- B . Store the uploaded video files in Amazon Elastic File System (Amazon EFS). Configure an Amazon EFS event notification to start an AWS Step Functions workflow that uses AWS Fargate tasks to perform the processing tasks.
- C . Store the uploaded video files in Amazon S3. Configure an Amazon S3 event notification to send an event to Amazon EventBridge when a user uploads a new video file. Configure an AWS Step Functions workflow as a target for an EventBridge rule. Use the workflow to manage AWS Fargate tasks to perform the processing tasks.
- D . Store the uploaded video files in Amazon S3. Configure an Amazon S3 event notification to invoke an AWS Lambda function when a user uploads a new video file. Configure the Lambda function to perform all the processing tasks.
C
Explanation:
The requirements include:
Scalability: The solution must scale as video files are uploaded.
Long-running tasks: Processing tasks can take up to 30 minutes. AWS Lambda has a maximum execution time of 15 minutes, which rules out options that involve Lambda performing all the processing.
Serverless and event-driven architecture: Ensures cost-effectiveness and high availability.
Analysis of Options:
Option A:
AWS Lambda has a 15-minute timeout, which cannot support tasks that take up to 30 minutes.
EventBridge Scheduler is unnecessary for monitoring files when native event notifications are available. Not a valid choice.
Option B:
AWS Step Functions and AWS Fargate can handle long-running processes, but Amazon EFS is not the ideal storage for uploaded video files in a serverless architecture.
Amazon EFS also does not provide native event notifications comparable to S3's, so the described trigger is not a supported pattern. Not the best practice.
Option C:
Amazon S3 is used for storing uploaded files, which integrates natively with event-driven services like EventBridge and Step Functions.
Amazon S3 event notifications trigger a Step Functions workflow, which can orchestrate Fargate tasks to process large video files, meeting the scalability and execution time requirements. Correct choice.
Option D:
Similar to Option A, AWS Lambda cannot handle long-running processes due to its 15-minute timeout.
Invoking Lambda for processing directly is not feasible for tasks that take up to 30 minutes. Not a valid choice.
Reference: AWS Documentation – Amazon S3 Event Notifications, AWS Step Functions, AWS Fargate; AWS Storage Options comparison.
By leveraging Amazon S3, Step Functions, and Fargate, this solution provides a scalable, efficient, and serverless approach to handling video processing tasks.
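A simplified sketch of the EventBridge matching step in answer C: an S3 "Object Created" event is tested against a rule pattern before the Step Functions target would be started. Real EventBridge patterns support many more operators; this toy version matches only exact-value lists.

```python
# Simplified model of an EventBridge rule (answer C): an incoming event
# matches when, for every field in the pattern, the event's value appears
# in the pattern's list of allowed values.

RULE_PATTERN = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
}

def matches(pattern, event):
    """True if every pattern field lists one of the event's values."""
    return all(event.get(field) in allowed for field, allowed in pattern.items())

# Example event shaped like the S3 -> EventBridge "Object Created" event.
upload_event = {
    "source": "aws.s3",
    "detail-type": "Object Created",
    "detail": {"bucket": {"name": "videos"}, "object": {"key": "clip.mp4"}},
}
```

When the rule matches, EventBridge would start the Step Functions execution, which in turn runs the Fargate tasks; Fargate has no Lambda-style 15-minute limit, so 30-minute jobs are fine.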
A company is building a serverless application to process orders from an ecommerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.
Which solution will meet these requirements?
- A . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.
- B . Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.
- C . Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.
- D . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.
B
Explanation:
Amazon SQS FIFO queues ensure that orders are processed in the exact order received and support message deduplication.
AWS Lambda scales automatically, handling bursts and maintaining high availability in a cost-effective manner.
Option A and D: Amazon SNS does not guarantee ordered processing.
Option C: Standard SQS queues do not guarantee ordering, and AWS Batch is intended for batch computing workloads rather than per-order asynchronous processing.
Reference: AWS Documentation – Amazon SQS FIFO Queues.
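The FIFO semantics that make answer B correct, strict arrival ordering plus content-based deduplication, can be modeled with a deque and a set of seen deduplication IDs.

```python
from collections import deque

# Toy model of an SQS FIFO queue (answer B): messages come out in the
# order they were accepted, and a repeated deduplication ID is dropped,
# mimicking content-based deduplication.

class FifoQueue:
    def __init__(self):
        self._messages = deque()
        self._seen = set()

    def send(self, group_id, dedup_id, body):
        if dedup_id in self._seen:        # duplicate: silently dropped
            return False
        self._seen.add(dedup_id)
        self._messages.append((group_id, body))
        return True

    def receive_all(self):
        """Messages are delivered in exactly the order they were accepted."""
        out = list(self._messages)
        self._messages.clear()
        return out

q = FifoQueue()
q.send("orders", "order-1", {"id": 1})
q.send("orders", "order-1", {"id": 1})   # duplicate submission, ignored
q.send("orders", "order-2", {"id": 2})
```

In the real service, the message group ID also lets independent groups (for example, per-customer streams) process in parallel while each group stays strictly ordered.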
A company is building a serverless application that processes large volumes of data from a mobile app. The application uses an AWS Lambda function to process the data and store the data in an Amazon DynamoDB table.
The company needs to ensure that the application can recover from failures and continue processing data without losing any records.
Which solution will meet these requirements?
- A . Configure the Lambda function to use a dead-letter queue with an Amazon Simple Queue Service (Amazon SQS) queue. Configure Lambda to retry failed records from the dead-letter queue. Use a retry mechanism by implementing an exponential backoff algorithm.
- B . Configure the Lambda function to read records from Amazon Data Firehose. Replay the Firehose records in case of any failures.
- C . Use Amazon OpenSearch Service to store failed records. Configure AWS Lambda to retry failed records from OpenSearch Service. Use Amazon EventBridge to orchestrate the retry logic.
- D . Use Amazon Simple Notification Service (Amazon SNS) to store the failed records. Configure Lambda to retry failed records from the SNS topic. Use Amazon API Gateway to orchestrate the retry calls.
A
Explanation:
Dead-letter queues (DLQs) with Amazon SQS allow Lambda functions to offload failed events for later inspection or retry. Using retry logic with exponential backoff ensures resilience and compliance with best practices for fault-tolerant serverless architectures. This guarantees no data is lost due to transient errors.
Reference: AWS Documentation – Lambda Error Handling and Dead-Letter Queues.
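The retry-with-backoff plus dead-letter pattern from answer A can be sketched as follows. The DLQ is a plain list here, and the backoff delays are computed but not actually slept, so the sketch runs instantly.

```python
# Sketch of answer A: retry a failing record with exponential backoff,
# and route it to a dead-letter queue (a plain list here) once the
# attempts are exhausted, so no record is ever silently lost.

dead_letter_queue = []

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Exponential backoff: 0.5s, 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))

def process_with_retry(record, processor, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            return processor(record)
        except Exception:
            delay = backoff_delay(attempt)
            # A real worker would time.sleep(delay) or extend the SQS
            # message visibility timeout before the next attempt.
    dead_letter_queue.append(record)  # attempts exhausted: preserve it
    return None
```

Records landing in the DLQ can later be inspected and redriven, which is what gives the architecture its no-data-loss guarantee for transient failures.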
