Practice Free SAA-C03 Exam Online Questions
A company hosts a web application in a VPC on AWS. A public Application Load Balancer (ALB) forwards connections from the internet to an Auto Scaling group of Amazon EC2 instances. The Auto Scaling group runs in private subnets across four Availability Zones.
The company stores data in an Amazon S3 bucket in the same Region. The EC2 instances use NAT gateways in each Availability Zone for outbound internet connectivity.
The company wants to optimize costs for its AWS architecture.
Which solution will meet this requirement?
- A . Reconfigure the Auto Scaling group and the ALB to use two Availability Zones instead of four. Do not change the desired count or scaling metrics for the Auto Scaling group to maintain application availability.
- B . Create a new, smaller VPC that still has sufficient IP address availability to run the application.
Redeploy the application stack in the new VPC. Delete the existing VPC and its resources.
- C . Deploy an S3 gateway endpoint to the VPC. Configure the EC2 instances to access the S3 bucket through the S3 gateway endpoint.
- D . Deploy an S3 interface endpoint to the VPC. Configure the EC2 instances to access the S3 bucket through the S3 interface endpoint.
C
Explanation:
Using S3 gateway endpoints allows private and cost-free access to S3 without routing traffic through a NAT gateway. NAT gateway traffic incurs charges, especially when used across multiple Availability Zones.
By using an S3 gateway endpoint, EC2 instances in private subnets can access S3 directly without needing internet access, reducing both data transfer and NAT gateway costs.
Interface endpoints incur hourly and per-GB processing charges and are typically used for services that do not support gateway endpoints, such as API Gateway or Systems Manager.
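As a rough sketch of how option C could be implemented (the resource IDs below are hypothetical; the parameter shape follows the EC2 CreateVpcEndpoint API, as called through boto3's `ec2.create_vpc_endpoint(**params)`):

```python
# Build the parameters for an S3 gateway endpoint. Associating the private
# subnets' route tables is what lets the EC2 instances reach S3 without a
# NAT gateway; gateway endpoints add no hourly or data processing charge.
def s3_gateway_endpoint_params(vpc_id, route_table_ids, region="us-east-1"):
    return {
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "VpcEndpointType": "Gateway",
        "RouteTableIds": route_table_ids,  # one route table per private subnet/AZ
    }

# Hypothetical IDs for illustration only.
params = s3_gateway_endpoint_params("vpc-0abc123", ["rtb-0aaa", "rtb-0bbb"])
print(params["ServiceName"])  # com.amazonaws.us-east-1.s3
```

After creation, the endpoint automatically installs prefix-list routes for S3 into the listed route tables, so no application change is needed beyond removing the dependency on the NAT gateway path for S3 traffic.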
A solutions architect is designing the network architecture for an application that runs on Amazon EC2 instances in an Auto Scaling group. The application needs to access data that is in Amazon S3 buckets.
Traffic to the S3 buckets must not use public IP addresses. The solutions architect will deploy the application in a VPC that has public and private subnets.
Which solutions will meet these requirements? (Select TWO.)
- A . Deploy the EC2 instances in a private subnet. Configure a default route to an egress-only internet gateway.
- B . Deploy the EC2 instances in a public subnet. Create a gateway endpoint for Amazon S3. Associate the endpoint with the subnet’s route table.
- C . Deploy the EC2 instances in a public subnet. Create an interface endpoint for Amazon S3.
Configure DNS hostnames and DNS resolution for the VPC.
- D . Deploy the EC2 instances in a private subnet. Configure a default route to a NAT gateway in a public subnet.
- E . Deploy the EC2 instances in a private subnet. Configure a default route to a customer gateway.
B, D
Explanation:
Option B: A gateway endpoint for S3 allows traffic to S3 without using public IPs and integrates with route tables.
Option D: Deploying EC2 instances in a private subnet with a NAT gateway enables outbound internet connectivity for other requirements without public IPs.
Option A: Egress-only internet gateways are for IPv6 traffic and do not work for IPv4 in this context.
Option C: Interface endpoints are not required for S3 as gateway endpoints are more suitable and cost-effective.
Option E: A customer gateway is for hybrid connectivity (e.g., on-premises), not suitable for this case.
AWS Documentation
Reference: VPC Endpoints
Amazon S3 Gateway Endpoints
A company is migrating a new application from an on-premises data center to a new VPC in the AWS Cloud. The company has multiple AWS accounts and VPCs that share many subnets and applications. The company wants to have fine-grained access control for the new application. The company wants to ensure that all network resources across accounts and VPCs that are granted permission to access the new application can access the application.
Which solution will meet these requirements?
- A . Set up a VPC peering connection for each VPC that needs access to the new application VPC.
Update route tables in each VPC to enable connectivity.
- B . Deploy a transit gateway in the account that hosts the new application. Share the transit gateway with each account that needs to connect to the application. Update route tables in the VPC that hosts the new application and in the transit gateway to enable connectivity.
- C . Use an AWS PrivateLink endpoint service to make the new application accessible to other VPCs.
Control access to the application by using an endpoint policy.
- D . Use an Application Load Balancer (ALB) to expose the new application to the internet. Configure authentication and authorization processes to ensure that only specified VPCs can access the application.
B
Explanation:
A transit gateway that is shared with other accounts through AWS Resource Access Manager (AWS RAM) provides centralized, scalable connectivity across many VPCs and accounts. Transit gateway route tables control which attachments can reach the application VPC, giving fine-grained network access control without the management overhead of a full mesh of VPC peering connections.
A company stores a large volume of critical data in Amazon RDS for PostgreSQL tables. The company is developing several new features for an upcoming product launch. Some of the new features require many table alterations.
The company needs a solution to test the altered tables for several days. After testing, the solution must make the new features available to customers in production.
Which solution will meet these requirements with the HIGHEST availability?
- A . Create a new instance of the database in RDS for PostgreSQL to test the new features. When the testing is finished, take a backup of the test database, and restore the test database to the production database.
- B . Create new database tables in the production database to test the new features. When the testing is finished, copy the data from the older tables to the new tables. Delete the older tables, and rename the new tables accordingly.
- C . Create an Amazon RDS read replica to deploy a new instance of the database. Make updates to the database tables in the replica instance. When the testing is finished, promote the replica instance to become the new production instance.
- D . Use an Amazon RDS blue/green deployment to deploy a new test instance of the database. Make database table updates in the test instance. When the testing is finished, promote the test instance to become the new production instance.
D
Explanation:
Amazon RDS Blue/Green Deployments provide a safe and straightforward way to make database changes with minimal downtime and risk. Blue/Green deployments create an exact copy ("green") of your production environment ("blue") where you can make schema changes and run tests. After validation, you can promote the green environment to production with a single click or API call, achieving near-zero downtime and maximum availability. This is the AWS-recommended method for deploying major database changes in a way that minimizes impact to users and maximizes uptime.
Reference Extract from AWS Documentation / Study Guide:
"Amazon RDS Blue/Green Deployments enable you to make changes to your database environment safely. You can perform schema updates and feature testing in a fully managed staging environment and switch over with minimal downtime, ensuring the highest availability."
Source: AWS Certified Solutions Architect Official Study Guide, Database and Migration section; Amazon RDS Blue/Green Deployments Documentation.
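As a hedged sketch (the database ARN and deployment name below are hypothetical), option D maps to the RDS `CreateBlueGreenDeployment` and `SwitchoverBlueGreenDeployment` API calls; the dictionary matches the shape boto3's `rds.create_blue_green_deployment(**params)` expects:

```python
# Parameters for creating a blue/green deployment from the production (blue)
# database. RDS clones the environment into a staging (green) copy where the
# table alterations can be tested for several days before the switchover.
def blue_green_params(source_db_arn, name="orders-db-upgrade"):
    return {
        "BlueGreenDeploymentName": name,
        "Source": source_db_arn,  # ARN of the current production DB instance
    }

params = blue_green_params(
    "arn:aws:rds:us-east-1:123456789012:db:orders-db"  # hypothetical ARN
)
```

After testing, `rds.switchover_blue_green_deployment(BlueGreenDeploymentIdentifier=..., SwitchoverTimeout=300)` promotes the green environment to production with minimal downtime.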
A company is running a media store across multiple Amazon EC2 instances distributed across multiple Availability Zones in a single VPC. The company wants a high-performing solution to share data between all the EC2 instances, and prefers to keep the data within the VPC only.
What should a solutions architect recommend?
- A . Create an Amazon S3 bucket and call the service APIs from each instance’s application.
- B . Create an Amazon S3 bucket and configure all instances to access it as a mounted volume.
- C . Configure an Amazon Elastic Block Store (Amazon EBS) volume and mount it across all instances.
- D . Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all instances.
D
Explanation:
Amazon Elastic File System (EFS) is a managed file storage service that can be mounted across multiple EC2 instances. It provides a scalable and high-performing solution to share data among instances within a VPC.
High Performance: EFS provides scalable performance for workloads that require high throughput and IOPS. It is particularly well-suited for applications that need to share data across multiple instances.
Ease of Use: EFS can be easily mounted on multiple instances across different Availability Zones, providing a shared file system accessible to all the instances within the VPC.
Security: EFS can be configured to ensure that data remains within the VPC, and it supports encryption at rest and in transit.
Why Not Other Options?
Option A (Amazon S3 bucket with APIs): While S3 is excellent for object storage, it is not a file system and does not provide the low-latency access required for shared data between instances.
Option B (S3 bucket as a mounted volume): S3 is not designed to be mounted as a file system, and this approach would introduce unnecessary complexity and latency.
Option C (EBS volume shared across instances): An EBS volume cannot span Availability Zones, and only certain io1/io2 volumes support Multi-Attach, which is limited to a single Availability Zone. EBS is not designed for shared file access across instances the way EFS is.
AWS Documentation
Reference: Amazon EFS: Overview of Amazon EFS and its features.
Best Practices for Amazon EFS: Recommendations for using EFS with multiple instances.
A company hosts its order processing system on AWS. The architecture consists of a frontend and a backend. The frontend includes an Application Load Balancer (ALB) and Amazon EC2 instances in an Auto Scaling group. The backend includes an EC2 instance and an Amazon RDS MySQL database.
To prevent incomplete or lost orders, the company wants to ensure that order states are always preserved. The company wants to ensure that every order will eventually be processed, even after an outage or pause. Every order must be processed exactly once.
Which solution will meet these requirements?
- A . Create an Auto Scaling group and an ALB for the backend. Create a read replica for the RDS database in a second Availability Zone. Update the backend RDS endpoint.
- B . Create an Auto Scaling group and an ALB for the backend. Create an Amazon RDS proxy in front of the RDS database. Update the backend EC2 instance to use the Amazon RDS proxy endpoint.
- C . Create an Auto Scaling group for the backend. Configure the backend EC2 instances to consume messages from an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure a dead-letter queue (DLQ) for the SQS queue.
- D . Create an AWS Lambda function to replace the backend EC2 instance. Subscribe the function to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the frontend to send orders to the SNS topic.
C
Explanation:
Use SQS FIFO to durably persist orders, guarantee order processing semantics, and decouple producers/consumers. FIFO queues provide “exactly-once processing” with message deduplication and “preserve message order.” Visibility timeouts and retries ensure messages are “processed eventually” without being lost; failed messages go to a DLQ for later reprocessing. This pattern aligns with Well-Architected reliability guidance to “queue work to protect against overload and failures” and to ensure “durable, idempotent processing” with retry and backoff. ALB/RDS Proxy/read replicas (A, B) improve availability/connection management but do not guarantee durable handoff or exactly-once processing. SNS (D) is pub/sub and does not provide FIFO semantics in this option, nor a DLQ per subscription for exactly-once. Therefore, frontends write orders to an SQS FIFO queue; backend workers in an Auto Scaling group consume, process idempotently, and use a DLQ for poison messages to meet “no lost orders,” “eventual processing,” and “exactly-once” requirements.
Reference: Amazon SQS Developer Guide ― FIFO Queues (exactly-once processing, message ordering, deduplication), Dead-Letter Queues; AWS Well-Architected Framework ― Reliability Pillar (queue-based load leveling, idempotency, retries).
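A minimal sketch of how the queue in option C might be configured (the queue name and DLQ ARN are hypothetical; the attribute shape follows the SQS CreateQueue API, e.g. `sqs.create_queue(QueueName="orders.fifo", Attributes=attrs)`):

```python
import json

# Attributes for an SQS FIFO order queue with a dead-letter queue. For a FIFO
# queue the DLQ must also be FIFO, and content-based deduplication lets SQS
# drop duplicate order submissions within the deduplication window.
def order_queue_attributes(dlq_arn, max_receives=5):
    return {
        "FifoQueue": "true",                  # queue name must end in .fifo
        "ContentBasedDeduplication": "true",  # dedupe by hash of the message body
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": max_receives,  # then the message moves to the DLQ
        }),
    }

attrs = order_queue_attributes(
    "arn:aws:sqs:us-east-1:123456789012:orders-dlq.fifo"  # hypothetical ARN
)
```

Even with FIFO's exactly-once delivery semantics, consumers should still process idempotently (for example, keyed by order ID), because SQS guarantees apply to message delivery, not to downstream side effects.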
A company needs to ingest and analyze telemetry data from vehicles at scale for machine learning and reporting.
Which solution will meet these requirements?
- A . Use Amazon Timestream for LiveAnalytics to store data points. Grant Amazon SageMaker permission to access the data. Use Amazon QuickSight to visualize the data.
- B . Use Amazon DynamoDB to store data points. Use DynamoDB Connector to ingest data into Amazon EMR for processing. Use Amazon QuickSight to visualize the data.
- C . Use Amazon Neptune to store data points. Use Amazon Kinesis Data Streams to ingest data into a Lambda function for processing. Use Amazon QuickSight to visualize the data.
- D . Use Amazon Timestream for LiveAnalytics to store data points. Grant Amazon SageMaker permission to access the data. Use Amazon Athena to visualize the data.
A
Explanation:
Amazon Timestream is purpose-built for storing and analyzing time-series data such as telemetry.
Option A leverages Timestream, SageMaker for ML, and QuickSight for visualization, meeting all requirements with minimal complexity.
Option B involves more complex DynamoDB-EMR integration.
Option C uses Neptune, which is designed for graph databases, not telemetry data.
Option D incorrectly uses Athena for visualization instead of QuickSight.
A gaming company is building an application with Voice over IP capabilities. The application will serve traffic to users across the world. The application needs to be highly available with automated failover across AWS Regions. The company wants to minimize the latency of users without relying on IP address caching on user devices.
What should a solutions architect do to meet these requirements?
- A . Use AWS Global Accelerator with health checks.
- B . Use Amazon Route 53 with a geolocation routing policy.
- C . Create an Amazon CloudFront distribution that includes multiple origins.
- D . Create an Application Load Balancer that uses path-based routing.
A
Explanation:
The correct answer is A because the application is a global Voice over IP workload that needs low latency, high availability, and automated failover across AWS Regions. AWS Global Accelerator is specifically designed to improve the availability and performance of global applications by routing user traffic over the AWS global network to the optimal healthy endpoint. It uses static Anycast IP addresses, which means users connect through fixed IP addresses that do not depend on client-side DNS or IP address caching behavior.
This is especially valuable for latency-sensitive applications such as Voice over IP, where rapid routing decisions and fast failover are critical. Global Accelerator continuously monitors endpoint health and can automatically direct traffic to healthy endpoints in another AWS Region if a failure occurs. This supports the requirement for cross-Region failover with minimal disruption.
Option B is incorrect because Route 53 geolocation routing directs traffic based on user location, but DNS-based routing can be affected by client-side caching and does not provide the same performance acceleration or fast failover behavior as Global Accelerator.
Option C is incorrect because Amazon CloudFront is primarily a content delivery network for HTTP and similar content distribution scenarios, not the preferred service for real-time Voice over IP traffic.
Option D is incorrect because an Application Load Balancer does not by itself provide global routing or multi-Region failover.
AWS best practices for global, latency-sensitive applications recommend AWS Global Accelerator when the goal is to improve performance, use the AWS backbone network, and provide health-based failover without depending on DNS cache expiration. Therefore, AWS Global Accelerator with health checks is the best solution.
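As an illustrative sketch (the ARNs and endpoint IDs are hypothetical), the health-check behavior in option A is configured per Region on a Global Accelerator endpoint group; the shape below follows the CreateEndpointGroup API as called through boto3's `globalaccelerator.create_endpoint_group(**params)`:

```python
# Endpoint group parameters for one Region behind a Global Accelerator
# listener. Traffic fails over to another Region's endpoint group when an
# endpoint fails ThresholdCount consecutive health checks.
def endpoint_group_params(listener_arn, region, endpoint_ids):
    return {
        "ListenerArn": listener_arn,
        "EndpointGroupRegion": region,
        "EndpointConfigurations": [
            {"EndpointId": eid, "Weight": 100} for eid in endpoint_ids
        ],
        "HealthCheckIntervalSeconds": 10,  # check frequently for fast failover
        "ThresholdCount": 3,
    }

params = endpoint_group_params(
    "arn:aws:globalaccelerator::123456789012:accelerator/abc/listener/def",
    "us-east-1",
    ["lb-endpoint-1"],  # e.g. the ARN of a regional NLB or ALB (hypothetical)
)
```

One such endpoint group per Region, behind the accelerator's static Anycast IP addresses, is what gives clients fixed entry points while failover happens entirely on the AWS side.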
A company runs several custom applications on Amazon EC2 instances. Each team within the company manages its own set of applications and backups. To comply with regulations, the company must be able to report on the status of backups and ensure that backups are encrypted.
Which solution will meet these requirements with the LEAST effort?
- A . Create an AWS Lambda function that processes AWS Config events. Configure the Lambda function to query AWS Config for backup-related data and to generate daily reports.
- B . Check the backup status of the EC2 instances daily by reviewing the backup configurations in AWS Backup and Amazon Elastic Block Store (Amazon EBS) snapshots.
- C . Use an AWS Lambda function to query Amazon EBS snapshots, Amazon RDS snapshots, and AWS Backup jobs. Configure the Lambda function to process and report on the data. Schedule the function to run daily.
- D . Use AWS Config and AWS Backup Audit Manager to ensure compliance. Review generated reports daily.
D
Explanation:
AWS Backup Audit Manager automates auditing and reporting of backup activity and compliance, while AWS Config provides visibility into configuration changes. Together, they provide the simplest, most automated, and compliant backup monitoring solution.
From AWS Documentation:
“AWS Backup Audit Manager automatically audits backup activity across AWS resources. You can use predefined or custom frameworks to monitor backup compliance and encryption status.”
(Source: AWS Backup Audit Manager User Guide)
Why D is correct:
Ensures centralized visibility into all backup jobs.
Verifies encryption status automatically.
Generates ready-to-use reports with minimal operational overhead.
Complies with regulatory requirements for data protection.
Why others are incorrect:
A & C: Custom Lambda automation increases maintenance effort.
B: Manual checking is operationally inefficient and error-prone.
Reference: AWS Backup Audit Manager User Guide
AWS Config Documentation: "Compliance and Monitoring"
AWS Well-Architected Framework: Operational Excellence Pillar
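As a rough sketch (the bucket and plan names are hypothetical), the daily reports in option D come from an AWS Backup Audit Manager report plan; the shape below follows the CreateReportPlan API used by boto3's `backup.create_report_plan(**params)`:

```python
# Report plan that delivers compliance reports to S3 on AWS Backup's schedule.
# The RESOURCE_COMPLIANCE_REPORT template covers framework compliance results,
# which is where encryption controls on backups are reported.
def report_plan_params(bucket, plan_name="daily_backup_compliance"):
    return {
        "ReportPlanName": plan_name,
        "ReportDeliveryChannel": {
            "S3BucketName": bucket,
            "Formats": ["CSV"],
        },
        "ReportSetting": {
            "ReportTemplate": "RESOURCE_COMPLIANCE_REPORT",
        },
    }

params = report_plan_params("backup-reports-bucket")  # hypothetical bucket
```

Pairing this with a Backup Audit Manager framework that includes an encryption control gives the reporting the question asks for without any custom Lambda code.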
A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.
The company wants to optimize costs to archive data.
Which solution will meet this requirement?
- A . Create an AWS Glue crawler to export data to Amazon S3. Create an AWS Lambda function to compress the data.
- B . Use the SELECT INTO OUTFILE S3 query on the Aurora database to export the data to Amazon S3.
Configure S3 Lifecycle rules on the S3 bucket.
- C . Create an AWS Glue DataBrew job to migrate data from Aurora to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.
- D . Use the AWS Schema Conversion Tool (AWS SCT) to replicate data from Aurora to Amazon S3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
B
Explanation:
Amazon Aurora MySQL supports the SELECT INTO OUTFILE S3 SQL syntax to export query results directly to Amazon S3. This is an efficient and low-overhead method for archiving data.
Once data is in S3, Lifecycle rules can be configured to automatically transition older data to lower-cost storage classes (such as S3 Glacier) or delete it after a defined period, providing a cost-optimized and automated archive solution.
The Glue-based options involve more services and operational overhead. SCT is intended for database migrations, not for periodic data archival.
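A minimal sketch of the two pieces of option B (the table, bucket, and prefix names are hypothetical; the lifecycle shape follows the S3 PutBucketLifecycleConfiguration API):

```python
# Aurora MySQL extension SQL that exports a table directly to S3. The cluster
# needs an associated IAM role with s3:PutObject permission on the bucket.
export_sql = (
    "SELECT * FROM inventory "
    "INTO OUTFILE S3 's3://orders-archive/aurora-exports/inventory' "
    "FORMAT CSV HEADER;"
)

# Lifecycle rules for the archive bucket: move exports to S3 Glacier after
# 90 days (an illustrative choice) and expire them after the 5-year retention.
def archive_lifecycle_rules(prefix="aurora-exports/"):
    return {
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 5 * 365},
            }
        ]
    }
```

The rules dictionary could be passed to `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=rules)`, after which transitions and deletions run automatically with no further operational effort.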
