Practice Free SAA-C03 Exam Online Questions
A company runs multiple web applications on Amazon EC2 instances behind a single Application Load Balancer (ALB). The application experiences unpredictable traffic spikes throughout each day. The traffic spikes cause high latency. The unpredictable spikes last less than 3 hours. The company needs a solution to resolve the latency issue caused by traffic spikes.
Which solution will meet these requirements?
- A . Use EC2 instances in an Auto Scaling group. Configure the ALB and Auto Scaling group to use a target tracking scaling policy.
- B . Use EC2 Reserved Instances in an Auto Scaling group. Configure the Auto Scaling group to use a scheduled scaling policy based on peak traffic hours.
- C . Use EC2 Spot Instances in an Auto Scaling group. Configure the Auto Scaling group to use a scheduled scaling policy based on peak traffic hours.
- D . Use EC2 Reserved Instances in an Auto Scaling group. Replace the ALB with a Network Load Balancer (NLB).
A
Explanation:
AWS recommends Auto Scaling with dynamic scaling policies to handle unpredictable workload spikes. Target tracking scaling policies automatically adjust capacity based on defined metrics such as average CPU utilization or request count per target. This approach ensures applications maintain responsiveness without overprovisioning. Scheduled scaling (options B and C) is only effective when traffic patterns are predictable, which is not the case here. Reserved Instances or Spot Instances also do not address sudden demand changes effectively. Replacing the ALB with an NLB (D) does not solve latency caused by EC2 instance capacity.
Therefore, using EC2 instances in an Auto Scaling group with a target tracking scaling policy is the most effective and cost-efficient solution for unpredictable, short-lived traffic spikes.
Reference:
• Amazon EC2 Auto Scaling User Guide ― Target tracking scaling policies
• AWS Well-Architected Framework ― Performance Efficiency Pillar: Elasticity and scaling
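As an illustration of option A, here is a minimal boto3 sketch that attaches a target tracking policy to an existing Auto Scaling group; the group name, policy name, and ALB resource label are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Track average ALB request count per target; the group name, policy name,
# and resource label below are hypothetical placeholders.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="target-tracking-requests",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            "ResourceLabel": "app/my-alb/1234567890abcdef/targetgroup/my-tg/0123456789abcdef",
        },
        "TargetValue": 1000.0,  # desired average requests per target
    },
)
```

The Auto Scaling group then adds or removes instances automatically to keep the metric near the target value, which handles short, unpredictable spikes without scheduled rules.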
A company has a three-tier web application. An Application Load Balancer (ALB) is in front of Amazon EC2 instances that are in the ALB target group. An Amazon S3 bucket stores documents.
The company requires the application to meet a recovery time objective (RTO) of 60 seconds.
Which solution will meet this requirement?
- A . Replicate S3 objects to a second AWS Region. Create a second ALB and a minimum set of EC2 instances in the second Region. Ensure that the EC2 instances are shut down until they are needed. Configure Amazon Route 53 to fail over to the second Region by using an IP-based routing policy.
- B . Use AWS Backup to take hourly backups of the EC2 instances. Back up the S3 data to a second AWS Region. Use AWS CloudFormation to deploy the entire infrastructure in the second Region when needed.
- C . Create daily snapshots of the EC2 instances in a second AWS Region. Use the snapshots to recreate the instances in the second Region. Back up the S3 data to the second Region. Perform a failover by modifying the application DNS record when needed.
- D . Replicate S3 objects to a second AWS Region. Create a second ALB and a minimum set of EC2 instances in the second Region. Ensure that the EC2 instances in the second Region are running. Configure Amazon Route 53 to fail over to the secondary Region based on health checks.
D
Explanation:
To achieve a 60-second RTO, pre-warming the DR environment (including running EC2 instances and Route 53 health checks) is essential. Active/passive failover using Route 53 with health checks ensures fast redirection when the primary Region becomes unavailable. S3 cross-region replication ensures document availability.
Reference: AWS Disaster Recovery ― Active-Passive Strategy with Route 53 and Health Checks
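As an illustration of option D's routing layer, a minimal boto3 sketch of the Route 53 failover records, assuming a health check already exists; the hosted zone ID, domain, ALB DNS names, alias zone IDs, and health check ID are all hypothetical.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone, domain, ALB DNS names, and health check ID.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": "hc-primary-id",  # hypothetical
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # ALB zone ID, primary Region
                        "DNSName": "primary-alb-123.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": "Z18D5FSROUN65G",  # ALB zone ID, secondary Region
                        "DNSName": "secondary-alb-456.us-west-2.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
        ]
    },
)
```

Because the secondary Region's instances are already running, DNS failover alone is enough to meet the 60-second RTO.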
A company needs to archive an on-premises relational database. The company wants to retain the data. The company needs to be able to run SQL queries on the archived data to create annual reports.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS Database Migration Service (AWS DMS) to migrate the on-premises database to an Amazon RDS instance. Retire the on-premises database. Maintain the RDS instance in a stopped state until the data is needed for reports.
- B . Set up database replication from the on-premises database to an Amazon EC2 instance. Retire the on-premises database. Make a snapshot of the EC2 instance. Maintain the EC2 instance in a stopped state until the data is needed for reports.
- C . Create a database backup on premises. Use AWS DataSync to transfer the data to Amazon S3. Create an S3 Lifecycle configuration to move the data to S3 Glacier Deep Archive. Restore the backup to Amazon EC2 instances to run reports.
- D . Use AWS Database Migration Service (AWS DMS) to migrate the on-premises databases to Amazon S3 in Apache Parquet format. Store the data in S3 Glacier Flexible Retrieval. Use Amazon Athena to run reports.
D
Explanation:
Amazon S3 is the most cost-effective option for archiving data. Using AWS DMS to migrate to S3 in Apache Parquet format provides an optimized columnar format for analytics. Storing the data in S3 Glacier Flexible Retrieval minimizes costs while meeting the retention requirement. When the annual reports are needed, the archived objects can be restored and Athena can run SQL queries directly on the data in S3 without provisioning any infrastructure.
Options A and B rely on maintaining RDS or EC2 instances, which increases cost and operational overhead.
Option C requires full restores to EC2 before running queries, which is slow and inefficient.
Therefore, D provides the lowest operational overhead and direct query capability with Athena.
Reference:
• AWS DMS Documentation ― Migrating databases to Amazon S3 in Parquet format
• Amazon Athena User Guide ― Querying data stored in S3
• AWS Well-Architected Framework ― Cost Optimization Pillar
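As an illustration of the reporting step in option D, a minimal boto3 sketch that runs an annual-report query with Athena; the database, table, and results bucket are hypothetical, and objects archived in Glacier Flexible Retrieval must be restored before they can be queried.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database, table, and results bucket; objects archived in
# S3 Glacier Flexible Retrieval must be restored before Athena can read them.
response = athena.start_query_execution(
    QueryString=(
        "SELECT customer_id, SUM(order_total) AS annual_total "
        "FROM orders WHERE year = 2024 GROUP BY customer_id"
    ),
    QueryExecutionContext={"Database": "archive_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/reports/"},
)
print(response["QueryExecutionId"])
```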
A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are possible at this time. The company needs a solution that minimizes operational overhead.
Which solution will meet these requirements?
- A . Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
- B . Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage.
- C . Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage.
- D . Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.
D
Explanation:
EKS lets teams “run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane,” and with AWS Fargate, you “run Kubernetes pods without managing servers,” reducing operational overhead for worker nodes. For the data layer, Amazon DocumentDB (with MongoDB compatibility) “supports the MongoDB API and drivers,” allowing existing MongoDB applications to work without application changes, while the service “automatically scales storage,” provides “high availability across multiple Availability Zones,” automatic backups, and patching. Because the company cannot change code or deployment methods, keeping Kubernetes (EKS) and the MongoDB API (DocumentDB compatibility) is essential.
Options A and B change the orchestrator to Amazon ECS, and options B and C change the database API to DynamoDB; either change would require code or deployment changes.
Option A also leaves you managing EC2 worker nodes and a self-managed MongoDB on EC2, increasing operational burden.
Therefore, EKS on Fargate + Amazon DocumentDB minimizes operations and preserves compatibility.
Reference: Amazon EKS User Guide ― “What is Amazon EKS,” “Fargate for EKS (serverless pods)”;
Amazon DocumentDB ― “MongoDB compatibility,” “High availability and automatic scaling”; AWS Well-Architected ― Operational Excellence (use managed services).
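Because Amazon DocumentDB speaks the MongoDB API, the application's existing driver code can stay unchanged. A minimal pymongo sketch, with a hypothetical cluster endpoint and credentials:

```python
from pymongo import MongoClient

# Hypothetical cluster endpoint and credentials; Amazon DocumentDB requires
# TLS (using the Amazon-provided CA bundle) and does not support retryable writes.
client = MongoClient(
    "mongodb://appuser:secret@my-docdb-cluster.cluster-abc123.us-east-1.docdb.amazonaws.com:27017/"
    "?replicaSet=rs0&readPreference=secondaryPreferred",
    tls=True,
    tlsCAFile="global-bundle.pem",
    retryWrites=False,
)

orders = client["shop"]["orders"]
orders.insert_one({"order_id": 1, "status": "new"})
```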
A company creates a VPC that has one public subnet and one private subnet. The company attaches an internet gateway to the VPC. An Application Load Balancer (ALB) in the public subnet communicates with Amazon EC2 instances in the private subnet.
The EC2 instances in the private subnet must be able to download operating system and application updates from the internet. The instances must not be accessible from the internet.
Which combination of steps will meet these requirements? (Select THREE.)
- A . Associate an Elastic IP address with the NAT gateway.
- B . Add a route of 0.0.0.0/0 to the private subnet route table. Set the NAT gateway as a target.
- C . Deploy a NAT gateway in the public subnet.
- D . Deploy a NAT gateway in the private subnet.
- E . Add a route of 0.0.0.0/0 to the public subnet route table. Set the NAT gateway as a target.
- F . Associate an Elastic IP address with the internet gateway.
A, B, C
Explanation:
Instances in a private subnet cannot directly reach the internet because they do not have public IP addresses and the private subnet route table typically does not send 0.0.0.0/0 traffic to an internet gateway. However, these instances still need outbound-only internet access to download patches and updates while remaining inaccessible from the internet. The standard AWS design to achieve this is a NAT gateway deployed in a public subnet.
Option C is required because a NAT gateway must reside in a public subnet that has a route to the internet gateway. This allows the NAT gateway to forward outbound traffic from private instances to the internet and return responses, without allowing inbound connections initiated from the internet to those instances.
Option A is required because a NAT gateway uses an Elastic IP address to represent its public-facing identity on the internet. Without an Elastic IP, the NAT gateway cannot communicate with internet endpoints. The Elastic IP is associated with the NAT gateway, not with the internet gateway.
Option B is required because the private subnet needs a route for 0.0.0.0/0 (default route) that targets the NAT gateway. This ensures that outbound internet-bound traffic from the private instances is sent to the NAT gateway rather than being dropped.
Option D is incorrect because a NAT gateway in a private subnet would not have a working route to the internet gateway and would not provide internet egress.
Option E is incorrect because the public subnet’s default route should point to the internet gateway, not to the NAT gateway.
Option F is incorrect because you do not associate Elastic IPs with an internet gateway.
Therefore, A, B, and C correctly implement private subnet outbound internet access while keeping the instances unreachable from the internet.
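A minimal boto3 sketch that performs steps A, C, and B in order; the subnet and route table IDs are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Step A: allocate an Elastic IP for the NAT gateway.
eip = ec2.allocate_address(Domain="vpc")

# Step C: deploy the NAT gateway in the PUBLIC subnet (hypothetical ID).
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-1234",
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before adding routes.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Step B: default route in the PRIVATE subnet's route table to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private-5678",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```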
A company recently migrated its application to a VPC on AWS. An AWS Site-to-Site VPN connection connects the company’s on-premises network to the VPC. The application retrieves customer data from another system that resides on premises. The application uses an on-premises DNS server to resolve domain records. After the migration, the application is not able to connect to the customer data because of name resolution errors.
Which solution will give the application the ability to resolve the internal domain names?
- A . Launch EC2 instances in the VPC. On the EC2 instances, deploy a custom DNS forwarder that forwards all DNS requests to the on-premises DNS server. Create an Amazon Route 53 private hosted zone that uses the EC2 instances for name servers.
- B . Create an Amazon Route 53 Resolver outbound endpoint. Configure the outbound endpoint to forward DNS queries against the on-premises domain to the on-premises DNS server.
- C . Set up two AWS Direct Connect connections between the AWS environment and the on-premises network. Set up a link aggregation group (LAG) that includes the two connections. Change the VPC resolver address to point to the on-premises DNS server.
- D . Create an Amazon Route 53 public hosted zone for the on-premises domain. Configure the network ACLs to forward DNS requests against the on-premises domain to the Route 53 public hosted zone.
B
Explanation:
When AWS workloads must resolve DNS names from on-premises systems over a hybrid network (like VPN or Direct Connect), the best solution is to use Amazon Route 53 Resolver outbound endpoints.
The outbound endpoint enables DNS queries to be forwarded from your VPC to on-premises DNS servers.
You must also configure a Route 53 Resolver forwarding rule to define which domain names (e.g., corp.internal) should be forwarded to the specific on-premises DNS IPs.
This setup allows private DNS resolution from AWS to on-premises systems and is fully managed, eliminating the need to run and maintain EC2-based DNS proxies (as in option A).
Options C and D are incorrect:
C adds network capacity but does not solve name resolution; the VPC resolver address is fixed and cannot be pointed at an on-premises DNS server.
D misuses a public hosted zone for a private DNS domain.
Reference: Route 53 Resolver Outbound Endpoints
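As an illustration of option B, a minimal boto3 sketch that creates the outbound endpoint, a forwarding rule, and the VPC association; the security group, subnets, domain, DNS IP, and VPC ID are hypothetical.

```python
import boto3

resolver = boto3.client("route53resolver")

# Hypothetical security group, subnets, domain, and on-premises DNS IP.
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="outbound-endpoint-2024",
    Name="to-on-prem",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    Direction="OUTBOUND",
    IpAddresses=[
        {"SubnetId": "subnet-aaaa1111"},  # outbound endpoints need at least
        {"SubnetId": "subnet-bbbb2222"},  # two IP addresses in two subnets
    ],
)

# Forward queries for the internal domain to the on-premises DNS server.
rule = resolver.create_resolver_rule(
    CreatorRequestId="forward-corp-internal-2024",
    Name="corp-internal",
    RuleType="FORWARD",
    DomainName="corp.internal",
    TargetIps=[{"Ip": "10.0.0.2", "Port": 53}],
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

# Associate the rule with the VPC so its resolver uses the forwarder.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
```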
A company runs an ecommerce platform with a monolithic architecture on Amazon EC2 instances. The platform runs web and API services. The company wants to decouple the architecture and enhance scalability. The company also wants the ability to track orders and reprocess any failed orders.
Which solution will meet these requirements?
- A . Send orders to an Amazon Simple Queue Service (Amazon SQS) queue. Configure AWS Lambda functions to consume the queue and process orders. Implement an SQS dead-letter queue.
- B . Send orders to an Amazon Simple Queue Service (Amazon SQS) queue. Configure Amazon Elastic Container Service (Amazon ECS) tasks to consume the queue. Implement SQS visibility timeout.
- C . Use Amazon Kinesis Data Streams to queue orders. Use AWS Lambda functions to consume the data stream. Configure Amazon S3 to track and reprocess failed orders.
- D . Send orders to an Amazon Simple Queue Service (Amazon SQS) queue. Configure AWS Lambda functions to consume the queue and process orders. Configure the Lambda functions to use SQS long polling.
A
Explanation:
To decouple the monolith and enhance scalability, AWS best practice is to introduce an asynchronous message queue, such as Amazon SQS, between the web/API tier and the order-processing logic.
AWS Lambda functions consuming from the SQS queue provide serverless, auto-scaling processing without managing servers.
To track and reprocess failed orders, SQS supports dead-letter queues (DLQs). Messages that cannot be processed successfully after a configurable number of attempts are automatically moved to the DLQ, where operations teams or automated processes can inspect and reprocess them.
Why others are not correct:
B: ECS tasks can consume an SQS queue, but this requires managing container infrastructure and does not inherently provide as simple reprocessing/visibility as combining Lambda with a DLQ.
Visibility timeout is not a tracking or archival mechanism.
C: Kinesis is a streaming service designed for ordered event streams, not primarily for order-queue semantics and DLQs; SQS is simpler and purpose-built for this pattern.
D: Long polling reduces empty responses and API calls but does nothing for tracking or reprocessing failed messages; without a DLQ, failed orders are harder to manage.
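As an illustration of option A's dead-letter queue setup, a minimal boto3 sketch with hypothetical queue names:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue names; the DLQ receives orders that fail processing
# more than maxReceiveCount times.
dlq = sqs.create_queue(QueueName="orders-dlq")
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sqs.create_queue(
    QueueName="orders",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)
```

Failed orders accumulate in orders-dlq, where they can be inspected and redriven back to the source queue for reprocessing.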
A company is building an application composed of multiple microservices that communicate over HTTP. The company must deploy the application across multiple AWS Regions to meet disaster recovery requirements. The application must maintain high availability and automatic fault recovery.
Which solution will meet these requirements?
- A . Deploy all microservices on a single large EC2 instance in one Region to simplify communication.
- B . Use AWS Fargate to run each microservice in separate containers. Deploy across multiple Availability Zones in one Region behind an Application Load Balancer.
- C . Use Amazon Route 53 with latency-based routing. Deploy microservices on Amazon EC2 instances in multiple Regions behind Application Load Balancers.
- D . Implement each microservice using AWS Lambda. Expose the microservices using an Amazon API Gateway REST API.
C
Explanation:
AWS recommends multi-Region active-active architectures for applications requiring high availability, automatic failover, and disaster recovery. Route 53 latency-based routing directs users to the Region providing the lowest latency and automatically shifts traffic if Regional endpoints become unhealthy.
Combined with Application Load Balancers and EC2-based microservices deployed in multiple Regions, this architecture delivers fault tolerance, multi-Region resiliency, and automatic recovery.
Option D provides high availability but does not inherently provide multi-Region failover routing without additional configuration.
Option B is single-Region.
Option A is not resilient.
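As an illustration of option C's routing layer, a minimal boto3 sketch that creates one latency record per Region; the hosted zone, domain, Regions, and ALB alias values are hypothetical.

```python
import boto3

route53 = boto53 = boto3.client("route53")

# Hypothetical hosted zone, domain, and Regional ALB alias targets; one
# latency record per Region, with health evaluation for automatic failover.
for region, alb_dns, alb_zone in [
    ("us-east-1", "alb-east-123.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "alb-west-456.eu-west-1.elb.amazonaws.com", "Z32O12XQLNTSW2"),
]:
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "A",
                    "SetIdentifier": region,
                    "Region": region,
                    "AliasTarget": {
                        "HostedZoneId": alb_zone,
                        "DNSName": alb_dns,
                        "EvaluateTargetHealth": True,
                    },
                },
            }]
        },
    )
```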
A company runs its legacy web application on AWS. The web application server runs on an Amazon EC2 instance in the public subnet of a VPC. The web application server collects images from customers and stores the image files in a locally attached Amazon Elastic Block Store (Amazon EBS) volume. The image files are uploaded every night to an Amazon S3 bucket for backup.
A solutions architect discovers that the image files are being uploaded to Amazon S3 through the public endpoint. The solutions architect needs to ensure that traffic to Amazon S3 does not use the public endpoint.
Which solution will meet this requirement?
- A . Create a gateway VPC endpoint for the S3 bucket that has the necessary permissions for the VPC. Configure the subnet route table to use the gateway VPC endpoint.
- B . Move the S3 bucket inside the VPC. Configure the subnet route table to access the S3 bucket through private IP addresses.
- C . Create an Amazon S3 access point for the Amazon EC2 instance inside the VPC. Configure the web application to upload by using the Amazon S3 access point.
- D . Configure an AWS Direct Connect connection between the VPC that has the Amazon EC2 instance and Amazon S3 to provide a dedicated network path.
A
Explanation:
To route S3 traffic privately from within a VPC, AWS provides Gateway VPC Endpoints for Amazon S3. These allow private connectivity to S3 without traversing the public internet or requiring an Internet Gateway.
From AWS Documentation:
“A gateway endpoint enables you to privately connect your VPC to supported AWS services such as Amazon S3 and DynamoDB without requiring an Internet Gateway, NAT device, or public IP.”
(Source: Amazon VPC User Guide ― Gateway Endpoints)
Why A is correct:
Gateway VPC endpoints route S3 traffic internally within the AWS network.
Improves security and data privacy while reducing exposure to the public internet.
Requires only a simple route table modification and IAM policy configuration.
Why other options are incorrect:
B: S3 is a regional service; you cannot “move” it inside a VPC.
C: Access points do not change the routing path; still uses S3 endpoints.
D: AWS Direct Connect is for hybrid environments, not intra-AWS private connectivity.
Reference: Amazon VPC User Guide ― “Gateway Endpoints for Amazon S3”
AWS Well-Architected Framework ― Security Pillar
AWS Networking Best Practices
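As an illustration of option A, a minimal boto3 sketch; the Region, VPC ID, and route table ID are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical VPC and route table IDs; passing RouteTableIds adds the
# S3 prefix-list route to the subnet route table automatically.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```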
A company is designing an application to maintain a record of customer orders. The application will generate events. The company wants to use an Amazon EventBridge event bus to send the application’s events to an Amazon DynamoDB table.
Which solution will meet these requirements?
- A . Use the EventBridge default event bus. Configure DynamoDB Streams for the DynamoDB table that hosts the customer order data.
- B . Create an EventBridge custom event bus. Create an AWS Lambda function as a target. Configure the Lambda function to forward the customer order data to the DynamoDB table.
- C . Create an EventBridge partner event bus. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic. Configure the Lambda function to read the customer order data and to forward the data to the DynamoDB table.
- D . Create an EventBridge partner event bus. Create an AWS Lambda function as a target. Configure the Lambda function to forward the customer order data to the DynamoDB table.
B
Explanation:
Amazon EventBridge supports routing application-generated events to AWS Lambda targets. The Lambda function can process and insert events into a DynamoDB table. This is a standard design pattern for connecting event-driven applications with DynamoDB.
Option A confuses DynamoDB Streams (which streams changes from DynamoDB) with EventBridge (which is for event routing).
Options C and D reference partner event buses, which are intended for SaaS integrations, not custom application events.
Therefore, the correct and simplest solution is to use a custom EventBridge event bus with a Lambda target (B).
Reference:
• Amazon EventBridge User Guide ― Targets and event routing
• AWS Lambda Developer Guide ― Using DynamoDB with Lambda
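As an illustration of the correct pattern (option B), a minimal boto3 sketch that creates a custom bus, a rule, and a Lambda target; the bus name, rule name, event source, and function ARN are hypothetical.

```python
import boto3

events = boto3.client("events")

# Hypothetical bus, rule, event source, and Lambda ARN.
events.create_event_bus(Name="orders-bus")

events.put_rule(
    Name="order-events",
    EventBusName="orders-bus",
    EventPattern='{"source": ["app.orders"]}',
)

# The Lambda function also needs a resource-based permission allowing
# events.amazonaws.com to invoke it.
events.put_targets(
    Rule="order-events",
    EventBusName="orders-bus",
    Targets=[{
        "Id": "forward-to-dynamodb-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:write-orders",
    }],
)
```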
