Practice Free SAA-C03 Exam Online Questions
A company wants to create an API to authorize users by using JSON Web Tokens (JWTs). The company needs to support dynamic access to multiple AWS services by using path-based routing.
Which solution will meet these requirements?
- A . Deploy an Application Load Balancer behind an Amazon API Gateway REST API. Configure IAM authorization.
- B . Deploy an Application Load Balancer behind an Amazon API Gateway HTTP API. Use Amazon Cognito for authorization.
- C . Deploy a Network Load Balancer behind an Amazon API Gateway REST API. Use an AWS Lambda function as a custom authorizer.
- D . Deploy a Network Load Balancer behind an Amazon API Gateway HTTP API. Use Amazon Cognito for authorization.
A media publishing company is building an application that allows users to print custom books. The frontend runs in a Docker container. Incoming order volume is highly variable and can exceed the throughput of the physical printing machines. Order-processing payloads can be up to 4 MB.
The company needs a scalable solution for handling incoming orders.
Which solution will meet this requirement?
- A . Use Amazon SQS to queue incoming orders. Use Lambda@Edge to process orders. Deploy the frontend on Amazon EKS.
- B . Use Amazon SNS to queue incoming orders. Use a Lambda function to process orders. Deploy the frontend on AWS Fargate.
- C . Use Amazon SQS to queue incoming orders. Use a Lambda function to process orders. Deploy the frontend on Amazon ECS with the Fargate launch type.
- D . Use Amazon SNS to queue incoming orders. Use Lambda@Edge to process orders. Deploy the frontend on Amazon EC2 instances.
C
Explanation:
Amazon SQS is the AWS-recommended service for decoupled, scalable, durable message buffering of bursty workloads. It supports payloads up to 256 KB directly and larger payloads (such as the 4 MB order documents) by storing the message body in Amazon S3 and enqueuing a pointer, as the SQS Extended Client Library does.
Lambda functions can process messages from SQS with automatic scaling. ECS with the Fargate launch type provides a serverless container platform for the frontend, scaling based on demand without managing servers.
SNS (Options B and D) is not a queue: it does not provide durable message buffering, so it is not appropriate for variable, bursty workloads that may exceed backend throughput. Lambda@Edge (Options A and D) is unsuitable for backend order processing because it runs at CloudFront edge locations and is subject to strict size, duration, and concurrency limits.
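The size decision above can be sketched in plain Python. This is an illustrative model of the choice the SQS Extended Client Library makes, not the library's actual API; the function and field names are hypothetical.

```python
# Sketch: decide whether an order payload fits in an SQS message directly
# or should be offloaded to S3 (as the SQS Extended Client Library does).
# 256 KB is the standard SQS maximum message size; the helper name and
# returned fields are illustrative, not a real library API.

SQS_MAX_MESSAGE_BYTES = 256 * 1024  # 256 KB SQS limit

def plan_order_message(payload: bytes) -> dict:
    """Return a plan describing how to enqueue an order payload."""
    if len(payload) <= SQS_MAX_MESSAGE_BYTES:
        return {"transport": "sqs-inline", "size": len(payload)}
    # Larger payloads (e.g. the 4 MB print orders) go to S3; the queue
    # message then carries only a pointer to the S3 object.
    return {"transport": "s3-pointer", "size": len(payload)}

print(plan_order_message(b"x" * 1024)["transport"])               # small order
print(plan_order_message(b"x" * (4 * 1024 * 1024))["transport"])  # 4 MB order
```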
A city’s weather forecast team is using Amazon DynamoDB in the data tier for an application. The application has several components. The analysis component of the application requires repeated reads against a large dataset. The application has started to temporarily consume all the read capacity in the DynamoDB table and is negatively affecting other applications that need to access the same data.
Which solution will resolve this issue with the LEAST development effort?
- A . Use DynamoDB Accelerator (DAX).
- B . Use Amazon CloudFront in front of DynamoDB.
- C . Create a DynamoDB table with a local secondary index (LSI).
- D . Use Amazon ElastiCache in front of DynamoDB.
A
Explanation:
DynamoDB Accelerator (DAX) is a fully managed, in-memory cache specifically for DynamoDB. It reduces read load and latency without requiring code changes (only SDK config). This is the least development effort.
“Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available in-memory cache for DynamoDB that delivers microsecond read performance and requires minimal application changes.”
― Amazon DAX
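The effect DAX has on read capacity can be illustrated with a toy read-through cache in plain Python. This is a conceptual model only, not the DAX SDK; with real DAX, the application simply swaps the DynamoDB client for the DAX client and keeps the same API calls.

```python
# Toy illustration of how an in-memory cache like DAX absorbs repeated
# reads so they never reach the underlying table. Plain Python, not the
# DAX client; class and field names are illustrative.

class CachedTable:
    def __init__(self, table: dict):
        self._table = table
        self._cache = {}
        self.table_reads = 0  # reads that consumed table read capacity

    def get(self, key):
        if key not in self._cache:
            self.table_reads += 1          # cache miss hits the table
            self._cache[key] = self._table[key]
        return self._cache[key]            # cache hit: no table read

t = CachedTable({"station-7": {"temp_c": 21}})
for _ in range(1000):                      # analysis job re-reads the item
    t.get("station-7")
print(t.table_reads)  # only the first read reached the table
```

This is why the analysis component stops consuming the table's read capacity: repeated reads of the same large dataset are served from memory.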
A company has a business system that generates hundreds of reports each day. The business system saves the reports to a network share in CSV format. The company needs to store this data in the AWS Cloud in near-real time for analysis.
Which solution will meet these requirements?
- A . Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the end of each day.
- B . Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.
- C . Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation workflow.
- D . Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the network share and uploads the new files by using SFTP.
B
Explanation:
Amazon S3 File Gateway (part of AWS Storage Gateway) exposes an on-premises NFS/SMB file share that durably stores files as S3 objects, with local caching for low-latency writes on premises and asynchronous, near-real-time upload into Amazon S3 for analytics. It is purpose-built to "present a file interface backed by Amazon S3" and to "store files as objects in your S3 buckets," so the existing business system only needs to write to a new network share; data then lands in S3 without custom scripts or job orchestration.
Compared with scheduled transfers (for example, the end-of-day DataSync task in Option A), S3 File Gateway continuously uploads new and changed files, which better meets the near-real-time requirement. AWS Transfer Family with SFTP (Option D) would require custom polling and client changes, increasing operational burden. DataSync (Options A and C) is well suited to bulk or periodic migrations and synchronizations but is not as seamless for continuous ingestion from a live file share. Updating the application to point to an S3 File Gateway share is therefore the simplest path to near-real-time delivery into S3 for downstream analytics.
Reference: AWS Storage Gateway User Guide ― “What is Amazon S3 File Gateway,” “How S3 File Gateway works,” “File share protocols (SMB, NFS),” “Object creation and upload behavior,” and AWS Well-Architected ― Analytics ingestion patterns.
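A gateway file share is created with parameters along these lines (modeled on the Storage Gateway CreateNFSFileShare API). All ARNs below are placeholders, and the field set is a sketch; verify the exact names against the current API reference.

```python
# Sketch of the parameters an S3 File Gateway NFS share might be created
# with. The business system then writes CSVs to the mounted share as
# before, and the gateway uploads new/changed files to S3 asynchronously.
# ARNs and values are placeholders.

nfs_share_params = {
    "ClientToken": "report-ingest-share-1",
    "GatewayARN": "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    "Role": "arn:aws:iam::111122223333:role/StorageGatewayS3Access",
    "LocationARN": "arn:aws:s3:::report-ingest-bucket",  # target S3 bucket
    "DefaultStorageClass": "S3_STANDARD",
}

print(sorted(nfs_share_params))
```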
A company is creating a payment processing application that supports TLS connections from IPv4 clients. The application requires outbound access to the public internet. The application must allow users to access the application from a single entry point while maintaining the lowest possible attack surface.
The company wants to use Amazon ECS tasks to deploy the application. The company wants to enable awsvpc network mode.
Which solution will meet these requirements?
- A . Create a VPC that has an internet gateway, public subnets, and private subnets. Deploy a Network Load Balancer (NLB) and a NAT gateway in the public subnets. Deploy the ECS tasks in the private subnets.
- B . Create a VPC that has an egress-only internet gateway, public subnets, and private subnets. Deploy an Application Load Balancer (ALB) and a NAT gateway in the public subnets. Deploy the ECS tasks in the private subnets.
- C . Create a VPC that has an internet gateway, public subnets, and private subnets. Deploy an Application Load Balancer (ALB) in the public subnets. Deploy the ECS tasks in the public subnets.
- D . Create a VPC that has an egress-only internet gateway, public subnets, and private subnets. Deploy a Network Load Balancer (NLB) in the public subnets. Deploy the ECS tasks in the public subnets.
A
Explanation:
The correct answer is A because the application must accept TLS connections from IPv4 clients, provide outbound internet access, present a single entry point, and maintain the lowest possible attack surface. Placing the Amazon ECS tasks in private subnets is the key security design decision because it prevents direct inbound access from the internet to the tasks themselves. The public-facing entry point is the load balancer, while outbound internet access for the private tasks is provided through a NAT gateway in the public subnet.
A Network Load Balancer (NLB) is well suited for handling TLS at Layer 4 and can expose a single public endpoint for client connections. With awsvpc network mode, each ECS task receives its own elastic network interface, making it straightforward to place tasks securely in private subnets while still registering them as targets behind the load balancer.
Option B is incorrect because an egress-only internet gateway is for IPv6 outbound traffic only, while the requirement specifically mentions IPv4 clients.
Option C is incorrect because placing ECS tasks in public subnets increases the attack surface by exposing application infrastructure more directly to the internet.
Option D is also incorrect for two reasons: it uses an egress-only internet gateway, which does not satisfy IPv4 outbound needs, and it places tasks in public subnets, which violates the goal of minimizing exposure.
AWS security design guidance emphasizes reducing exposure by placing application workloads in private subnets and exposing only the required front-end endpoint. Therefore, a public NLB with private ECS tasks and a NAT gateway is the most secure and appropriate architecture.
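The routing that realizes this design can be sketched as two route tables: the public subnet's default route points at the internet gateway (serving the NLB and the NAT gateway), while the private subnet's default route points at the NAT gateway, giving the ECS tasks outbound-only internet access. The IDs below are placeholders.

```python
# Sketch of the VPC route tables for Option A: public subnet routes to
# the internet gateway; private subnet (where the ECS tasks live) routes
# outbound traffic through the NAT gateway. IDs are placeholders.

route_tables = {
    "public": [
        {"destination": "10.0.0.0/16", "target": "local"},
        {"destination": "0.0.0.0/0", "target": "igw-EXAMPLE"},   # NLB + NAT gateway
    ],
    "private": [
        {"destination": "10.0.0.0/16", "target": "local"},
        {"destination": "0.0.0.0/0", "target": "nat-EXAMPLE"},   # outbound only
    ],
}

def default_route(table: str) -> str:
    """Return the target of the 0.0.0.0/0 route for a subnet's table."""
    return next(r["target"] for r in route_tables[table]
                if r["destination"] == "0.0.0.0/0")

print(default_route("public"), default_route("private"))
```

Because the private subnet has no route through the internet gateway, nothing on the internet can initiate a connection to a task's elastic network interface; the NLB is the single entry point.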
A home security company is expanding globally and needs to encrypt customer data. The company does not want to manage encryption keys. The keys must be usable in multiple AWS Regions, and access to the keys must be controlled.
Which solution meets these requirements with the least operational overhead?
- A . Use AWS KMS multi-Region keys. Apply tags and use ABAC condition keys for access control.
- B . Use AWS KMS imported key material in multiple Regions with ABAC-based policies.
- C . Use AWS CloudHSM and synchronize clusters across Regions with the CMU tool.
- D . Use AWS CloudHSM users and share keys manually with CMU across Regions.
A
Explanation:
AWS Key Management Service (AWS KMS) provides multi-Region keys, allowing the same key to be used across Regions without managing your own hardware or key replication.
Multi-Region keys are fully managed and support attribute-based access control (ABAC) using tags and condition keys.
CloudHSM (Options C and D) requires full key lifecycle management, synchronization, and hardware operations, resulting in significantly higher overhead.
Imported key material (Option B) increases key-management responsibility.
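The ABAC control in Option A can be sketched as a key-policy statement that grants key usage based on a tag match rather than listing individual key ARNs. The principal ARN, tag key, and tag value below are illustrative; `aws:ResourceTag/<tag-key>` is the standard ABAC condition key.

```python
# Sketch of an ABAC statement for a KMS key policy: access is granted to
# any key tagged project=customer-data, so new multi-Region replica keys
# with that tag are covered automatically. Names are placeholders.
import json

abac_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/HomeSecurityApp"},
    "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",
    "Condition": {
        # Match on the key's tag instead of its ARN (ABAC)
        "StringEquals": {"aws:ResourceTag/project": "customer-data"}
    },
}

print(json.dumps(abac_statement["Condition"]))
```

Because access follows the tag, adding a replica key in a new Region requires no policy edits, only the same tag.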
A company uses a general-purpose instance class Amazon RDS for MySQL DB instance in a Multi-AZ configuration. The finance team runs SQL queries to generate reports. Customers experience performance issues during report generation.
A solutions architect needs to minimize the effect of the reporting job on the DB instance.
Which solution will meet these requirements?
- A . Create a proxy in Amazon RDS Proxy. Update the reporting job to query the proxy endpoint.
- B . Update the RDS DB instance configuration to use three Availability Zones.
- C . Add an RDS read replica. Update the reporting job to query the replica endpoint.
- D . Change the RDS configuration to a memory-optimized instance class.
C
Explanation:
The performance issue occurs because reporting queries compete with production traffic on the same primary database instance. The best-practice AWS solution is to offload read-heavy workloads to a separate database endpoint.
Option C adds an Amazon RDS read replica, which asynchronously replicates data from the primary instance. By redirecting reporting queries to the replica endpoint, the primary database can focus on transactional workloads, significantly improving application performance and customer experience.
Read replicas are specifically designed for this use case: scaling read capacity and isolating reporting or analytics queries. This solution requires minimal changes to the reporting job (endpoint update only) and avoids overprovisioning the primary database.
Option A (RDS Proxy) improves connection management but does not reduce query load or isolate reporting traffic.
Option B does not help because Multi-AZ deployments provide high availability through synchronous standby replication, not read scaling; adding Availability Zones does not offload the reporting queries from the primary.
Option D increases instance size but does not address the underlying contention between workloads and increases cost unnecessarily.
Therefore, C is the most efficient and scalable solution to minimize reporting impact while maintaining high performance and availability.
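The change to the reporting job really is just an endpoint swap, which can be sketched as a small routing helper. The hostnames below are placeholders for the primary and replica endpoints RDS assigns.

```python
# Sketch: route read-only reporting work to the read replica endpoint;
# transactional traffic stays on the primary. Hostnames are placeholders.

PRIMARY_ENDPOINT = "mydb.EXAMPLE.us-east-1.rds.amazonaws.com"
REPLICA_ENDPOINT = "mydb-replica.EXAMPLE.us-east-1.rds.amazonaws.com"

def endpoint_for(workload: str) -> str:
    """Pick a DB endpoint: reporting reads go to the replica, everything
    else (reads and writes from the application) goes to the primary."""
    if workload == "reporting":
        return REPLICA_ENDPOINT
    return PRIMARY_ENDPOINT

print(endpoint_for("reporting"))
print(endpoint_for("checkout"))
```

Note that replication to the replica is asynchronous, so reports may lag the primary slightly; that is normally acceptable for finance reporting.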
A company has a large amount of data in an Amazon DynamoDB table. A large batch of data is appended to the table once each day. The company wants a solution that will make all the existing and future data in DynamoDB available for analytics on a long-term basis.
Which solution meets these requirements with the LEAST operational overhead?
- A . Configure DynamoDB incremental exports to Amazon S3.
- B . Configure Amazon DynamoDB Streams to write records to Amazon S3.
- C . Configure Amazon EMR to copy DynamoDB data to Amazon S3.
- D . Configure Amazon EMR to copy DynamoDB data to Hadoop Distributed File System (HDFS).
A
Explanation:
Incremental Exports: Exporting DynamoDB data directly to Amazon S3 provides an automated, serverless way to make data available for analytics without operational overhead.
Analytics-Friendly Storage: Amazon S3 supports long-term analytics workloads and can integrate with tools like Athena or QuickSight.
DynamoDB Export to S3 Documentation
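An incremental export request (DynamoDB `ExportTableToPointInTime` with `ExportType` set to `INCREMENTAL`) looks roughly like the following. The ARNs, bucket, and time window are placeholders, and the field set is a sketch; verify the exact names against the API reference.

```python
# Sketch of a daily incremental-export request: export only the changes
# since the previous day's export into the analytics bucket in S3.
# ARNs, bucket names, and times are placeholders.
from datetime import datetime, timedelta, timezone

now = datetime(2024, 1, 2, tzinfo=timezone.utc)
export_request = {
    "TableArn": "arn:aws:dynamodb:us-east-1:111122223333:table/Readings",
    "S3Bucket": "analytics-lake",
    "ExportFormat": "DYNAMODB_JSON",
    "ExportType": "INCREMENTAL",
    "IncrementalExportSpecification": {
        "ExportFromTime": now - timedelta(days=1),  # since last export
        "ExportToTime": now,
        "ExportViewType": "NEW_AND_OLD_IMAGES",
    },
}

print(export_request["ExportType"])
```

A first full export captures the existing data; the scheduled incremental exports then capture each day's appended batch, which matches the "existing and future data" requirement with no servers to run.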
A company is building a serverless application to process orders from an e-commerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.
- A . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.
- B . Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.
- C . Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.
- D . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.
B
Explanation:
Key Requirements:
Serverless architecture.
Handle traffic bursts with high availability.
Process orders asynchronously, in the order they are received.
Analysis of Options:
Option A: Amazon SNS delivers messages to subscribers. However, SNS does not ensure ordering, making it unsuitable for FIFO (First In, First Out) requirements.
Option B: Amazon SQS FIFO queues support ordering and ensure messages are delivered exactly once. AWS Lambda functions can be triggered by SQS to process messages asynchronously and efficiently. This satisfies all requirements.
Option C: Amazon SQS standard queues do not guarantee message order and have "at-least-once" delivery, making them unsuitable for the FIFO requirement.
Option D: Similar to Option A, SNS does not ensure message ordering, and using AWS Batch adds complexity without directly addressing the requirements.
Reference: Amazon SQS FIFO Queues
AWS Lambda and SQS Integration
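The FIFO guarantee can be illustrated with a toy per-group queue in plain Python. This is a conceptual model, not the SQS API: messages sharing a `MessageGroupId` are delivered in the order they were sent.

```python
# Toy model of SQS FIFO ordering: messages within one MessageGroupId are
# received in exactly the order they were sent. Plain Python, not the
# SQS client; class and method names are illustrative.
from collections import deque

class FifoQueue:
    def __init__(self):
        self._groups = {}

    def send(self, group_id: str, body: str) -> None:
        self._groups.setdefault(group_id, deque()).append(body)

    def receive(self, group_id: str) -> str:
        return self._groups[group_id].popleft()  # strict FIFO per group

q = FifoQueue()
for order in ["order-1", "order-2", "order-3"]:
    q.send("customer-42", order)

processed = [q.receive("customer-42") for _ in range(3)]
print(processed)  # same order the orders were sent
```

In the real architecture, a Lambda event source mapping polls the FIFO queue and invokes the function with batches while preserving this per-group ordering.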
A company is planning to deploy a data processing platform on AWS. The data processing platform is based on PostgreSQL. The company stores the data that the platform must process on premises.
To comply with regulations, the company must not migrate the data to the cloud. However, the company wants to use AWS managed data analytics solutions.
Which solution will meet these requirements?
- A . Create an Amazon RDS for PostgreSQL database in a VPC. Create an interface VPC endpoint to connect the on-premises PostgreSQL database to the RDS for PostgreSQL database.
- B . Create Amazon EC2 instances in an Auto Scaling group on AWS Outposts. Install PostgreSQL data analytics software on the instances.
- C . Create an Amazon EMR cluster on AWS Outposts. Connect the EMR cluster to the on-premises PostgreSQL database to perform data processing locally.
- D . Create an Amazon EMR cluster in a VPC. Connect the EMR cluster to Amazon RDS for SQL Server with a linked server to connect to the company’s data processing platform.
C
Explanation:
AWS Outposts extends AWS infrastructure and services to on-premises locations. Running Amazon EMR on Outposts allows for processing data that resides locally while benefiting from the managed services of EMR. This enables compliance with data residency requirements and provides scalability and manageability for analytics.
Reference: AWS Documentation ― Amazon EMR on Outposts
