Practice Free SAA-C03 Exam Online Questions
A company’s HTTP application is behind a Network Load Balancer (NLB). The NLB’s target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service.
The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application’s availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?
- A . Enable HTTP health checks on the NLB, supplying the URL of the company’s application.
- B . Add a cron job to the EC2 instances to check the local application’s logs once each minute. If HTTP errors are detected, the application will restart.
- C . Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company’s application. Configure an Auto Scaling action to replace unhealthy instances.
- D . Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.
C
Explanation:
A Network Load Balancer operates at Layer 4 (TCP/UDP/TLS) and is optimized for high performance and static IP use cases. While NLB target groups can perform health checks, they are typically oriented around basic reachability and do not provide the same application-layer (Layer 7) visibility as an Application Load Balancer (ALB). The problem statement says the NLB is “not detecting HTTP errors,” which indicates the health signal needs to be based on an HTTP endpoint that can reflect application correctness (for example, returning specific HTTP status codes).
Replacing the NLB with an ALB enables true HTTP/HTTPS health checks against a URL path, including interpretation of HTTP response codes. This is the cleanest managed approach to detect application-layer failure modes that still allow TCP connections but produce bad HTTP responses. Once the ALB detects targets as unhealthy, the target group health status can be used by an Auto Scaling group to take action. With appropriate health check configuration (and, commonly, using ELB health checks as a signal), Auto Scaling can replace unhealthy instances automatically, improving availability without custom scripts.
Option A is incomplete: although NLB target groups can be configured with HTTP health checks, the option adds no remediation step, so unhealthy instances would still require a manual restart; it also does not provide the application-layer (Layer 7) routing and response-code awareness that an ALB offers.
Option B violates the “no custom scripts” requirement.
Option D reacts to UnhealthyHostCount, but if the NLB isn’t marking hosts unhealthy for HTTP error cases, the metric won’t reliably trigger replacement; it also still depends on the NLB’s limited visibility into HTTP failures.
Therefore, C best meets the requirement by shifting to ALB for application-layer health checks and using Auto Scaling to replace unhealthy instances automatically.
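As an illustration only (not part of the exam answer), here is a minimal boto3 sketch of the option C pattern: an ALB target group with an HTTP health check against an application URL, and an Auto Scaling group set to use the load balancer's health status so failed instances are replaced automatically. The names, VPC ID, and the /health path are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Target group that marks an instance unhealthy on non-200 HTTP responses
elbv2.create_target_group(
    Name="web-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",           # placeholder VPC
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",               # assumed application health URL
    Matcher={"HttpCode": "200"},
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=3,
)

# Tell the Auto Scaling group to trust the load balancer's health status,
# so instances failing the HTTP check are terminated and replaced.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",          # placeholder Auto Scaling group
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```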
A company is planning to connect a remote office to its AWS infrastructure. The office requires permanent and secure connectivity to AWS. The connection must provide secure access to resources in two VPCs. However, the VPCs must not be able to access each other.
- A . Create two transit gateways. Set up one AWS Site-to-Site VPN connection from the remote office to each transit gateway. Connect one VPC to the transit gateway. Configure route table propagation to the appropriate transit gateway based on the destination VPC IP range.
- B . Set up one AWS Site-to-Site VPN connection from the remote office to each of the VPCs. Update the VPC route tables with static routes to the remote office resources.
- C . Set up one AWS Site-to-Site VPN connection from the remote office to one of the VPCs. Set up VPC peering between the two VPCs. Update the VPC route tables with static routes to the remote office and peered resources.
- D . Create a transit gateway. Set up an AWS Direct Connect gateway and one Direct Connect connection between the remote office and the Direct Connect gateway. Associate the transit gateway with the Direct Connect gateway. Configure a separate private virtual interface (VIF) for each VPC, and configure routing.
B
Explanation:
The requirement is secure, permanent connectivity to two VPCs without enabling VPC-to-VPC communication.
Option B achieves this by establishing a separate Site-to-Site VPN connection between the remote office and each VPC. Updating each VPC’s route tables ensures traffic from the remote office reaches the correct VPC resources, while preventing routing between the VPCs themselves.
Options involving transit gateways (A and D) would introduce transitive routing capabilities, which would violate the requirement that the VPCs remain isolated from each other.
Option C (VPC peering) directly conflicts with the requirement by enabling cross-VPC access.
Therefore, option B is the correct solution as it maintains isolation while providing secure VPN connectivity.
Reference:
• AWS Site-to-Site VPN User Guide ― Connecting on-premises networks to multiple VPCs
• AWS Well-Architected Framework ― Security Pillar: Network isolation and segmentation
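For illustration, a hedged boto3 sketch of option B for one of the two VPCs follows: a virtual private gateway attached to the VPC, a customer gateway representing the office router, and a static-routed Site-to-Site VPN connection. All IDs, the office public IP, and the CIDR are placeholders, and the same steps would be repeated independently for the second VPC.

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway describing the remote office's VPN device
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",      # remote office router public IP (placeholder)
    BgpAsn=65000,
)["CustomerGateway"]

# Virtual private gateway attached to this VPC only
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-aaa111")  # placeholder VPC

# Static-route Site-to-Site VPN connection from the office to this VPC
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]

# Route the office CIDR over this VPN connection; the VPC route tables also
# need static routes (or route propagation) toward the virtual private gateway.
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnectionId"],
    DestinationCidrBlock="10.200.0.0/16",    # remote office CIDR (placeholder)
)
```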
A company runs an application on Amazon EC2 instances across multiple Availability Zones in the same AWS Region. The EC2 instances share an Amazon Elastic File System (Amazon EFS) volume that is mounted on all the instances. The EFS volume stores a variety of files such as installation media, third-party files, interface files, and other one-time files.
The company accesses some EFS files frequently and needs to retrieve the files quickly. The company accesses other files rarely. The EFS volume is multiple terabytes in size. The company needs to optimize storage costs for Amazon EFS.
Which solution will meet these requirements with the LEAST effort?
- A . Move the files to Amazon S3. Set up a lifecycle policy to move the files to S3 Glacier Flexible Retrieval.
- B . Apply a lifecycle policy to the EFS files to move the files to EFS Infrequent Access.
- C . Move the files to Amazon Elastic Block Store (Amazon EBS) Cold HDD Volumes (sc1).
- D . Move the files to Amazon S3. Set up a lifecycle policy to move the rarely-used files to S3 Glacier Deep Archive.
B
Explanation:
Amazon EFS offers an Infrequent Access (IA) storage class, which can be managed via EFS lifecycle policies. Frequently accessed files remain in the Standard storage class, while infrequently accessed files are automatically moved to the IA class, significantly reducing storage costs with minimal effort and no application changes.
Reference Extract:
"EFS lifecycle management automatically transitions files that are not accessed for a set period to the EFS Infrequent Access (IA) storage class, reducing storage costs."
Source: AWS Certified Solutions Architect ― Official Study Guide, EFS and Lifecycle Management section.
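For illustration, a minimal boto3 sketch of option B follows, assuming a placeholder file system ID: files untouched for 30 days move to EFS Infrequent Access, and a file moves back to Standard on its first access, which suits the mix of frequently and rarely accessed files.

```python
import boto3

efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",     # placeholder file system
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},               # move cold files to IA
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},  # bring hot files back
    ],
)
```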
A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in Amazon S3. This content is the same for all users.
The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users while reducing the load on the origin.
Which solution meets these requirements MOST cost-effectively?
- A . Deploy an AWS Global Accelerator accelerator in front of the web servers.
- B . Deploy an Amazon CloudFront web distribution in front of the S3 bucket.
- C . Deploy an Amazon ElastiCache (Redis OSS) instance in front of the web servers.
- D . Deploy an Amazon ElastiCache (Memcached) instance in front of the web servers.
B
Explanation:
Amazon CloudFront is a highly cost-effective CDN that caches content like images and videos at edge locations globally. This reduces latency and the load on the origin S3 bucket. It is ideal for static content that is accessed by many users.
Reference: AWS Documentation ― Amazon CloudFront with S3 Integration
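As an illustration, a hedged boto3 sketch of option B follows: a CloudFront distribution with the S3 bucket as its origin so edge locations serve the cached media and only cache misses reach S3. The bucket name and caller reference are placeholders, and a production setup would normally also restrict bucket access (for example with origin access control) rather than serve the bucket publicly.

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),     # any unique string
    "Comment": "Media distribution for game assets",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "s3-media-origin",
            "DomainName": "game-media-bucket.s3.amazonaws.com",  # placeholder bucket
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-media-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        # Legacy cache settings kept minimal for the sketch; a managed cache
        # policy would be the usual choice in the console.
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
    },
})
```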
A company has a production Amazon RDS for MySQL database. The company needs to create a new application that will read frequently changing data from the database with minimal impact on the database’s overall performance. The application will rarely perform the same query more than once.
What should a solutions architect do to meet these requirements?
- A . Set up an Amazon ElastiCache cluster. Query the results in the cluster.
- B . Set up an Application Load Balancer (ALB). Query the results in the ALB.
- C . Set up a read replica for the database. Query the read replica.
- D . Set up querying of database snapshots. Query the database snapshots.
C
Explanation:
Amazon RDS read replicas provide a way to offload read traffic from the primary database, allowing read-intensive applications to query the replica without impacting the performance of the production (write) database. This is especially effective for workloads that involve frequently changing data but do not benefit from caching, since queries are rarely repeated.
Reference Extract from AWS Documentation / Study Guide:
"Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads."
Source: AWS Certified Solutions Architect ― Official Study Guide, RDS Read Replica section.
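For illustration, a minimal boto3 sketch of option C follows, with placeholder instance identifiers: a read replica is created from the production RDS for MySQL instance, and the new application points its read-only connections at the replica's endpoint.

```python
import boto3

rds = boto3.client("rds")

# Create the read replica from the production instance
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-mysql-replica-1",   # placeholder replica name
    SourceDBInstanceIdentifier="prod-mysql",       # placeholder production instance
    DBInstanceClass="db.r6g.large",                # sized for the read workload
)

# The new application connects to the replica's endpoint, not the primary
print(replica["DBInstance"]["DBInstanceIdentifier"])
```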
A company runs a three-tier web application in a VPC on AWS. The company deployed an Application Load Balancer (ALB) in a public subnet. The web tier and application tier Amazon EC2 instances are deployed in a private subnet. The company uses a self-managed MySQL database that runs on EC2 instances in an isolated private subnet for the database tier.
The company wants a mechanism that will give a DevOps team the ability to use SSH to access all the servers. The company also wants to have a centrally managed log of all connections made to the servers.
Which combination of solutions will meet these requirements with the MOST operational efficiency? (Select TWO.)
- A . Create a bastion host in the public subnet. Configure security groups in the public, private, and isolated subnets to allow SSH access.
- B . Create an interface VPC endpoint for AWS Systems Manager Session Manager. Attach the endpoint to the VPC.
- C . Create an IAM policy that grants access to AWS Systems Manager Session Manager. Attach the IAM policy to the EC2 instances.
- D . Create a gateway VPC endpoint for AWS Systems Manager Session Manager. Attach the endpoint to the VPC.
- E . Attach an AmazonSSMManagedInstanceCore AWS managed IAM policy to all the EC2 instance roles.
B,E
Explanation:
AWS Systems Manager Session Manager allows secure, auditable SSH-like access to EC2 instances without the need to open SSH ports or manage bastion hosts. For this to work in a private subnet, an interface VPC endpoint is required (not a gateway endpoint).
The EC2 instances must have the AmazonSSMManagedInstanceCore policy attached to their IAM roles to allow Systems Manager operations.
With Session Manager, all session activity can be logged centrally to Amazon CloudWatch Logs or S3, satisfying the audit requirement and improving operational efficiency over manual SSH and bastion configurations.
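As an illustration, a hedged boto3 sketch of options B and E follows, with placeholder VPC, subnet, security group, and role names. Session Manager in a VPC without internet access typically needs interface endpoints for the ssm, ssmmessages, and ec2messages services, plus the AmazonSSMManagedInstanceCore policy on the instance role.

```python
import boto3

ec2 = boto3.client("ec2")
iam = boto3.client("iam")
region = "us-east-1"                                   # placeholder region

# Interface endpoints so instances in private subnets can reach Systems Manager
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",                 # placeholder VPC
        ServiceName=f"com.amazonaws.{region}.{service}",
        SubnetIds=["subnet-aaa111", "subnet-bbb222"],  # placeholder private subnets
        SecurityGroupIds=["sg-0123456789abcdef0"],     # must allow HTTPS from instances
        PrivateDnsEnabled=True,
    )

# Grant the instances permission to register with Systems Manager
iam.attach_role_policy(
    RoleName="web-app-instance-role",                  # placeholder instance role
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)
```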
A company deploys an application on Amazon EC2 Spot Instances. The company observes frequent unavailability issues that affect the application’s output. The application instances all use the same instance type in a single Availability Zone. The application architecture does not require the use of any specific instance family.
The company needs a solution to improve the availability of the application.
Which combination of steps will meet this requirement MOST cost-effectively? (Select THREE.)
- A . Create an EC2 Auto Scaling group that includes a mix of Spot Instances and a base number of On-Demand Instances.
- B . Create EC2 Capacity Reservations.
- C . Use the lowest price allocation strategy for Spot Instances.
- D . Specify similarly sized instance types and Availability Zones for the Spot Instances.
- E . Use a different instance type for the web application.
- F . Use the price capacity optimized strategy for Spot Instances.
A,D,F
Explanation:
AWS Spot best practices recommend diversifying capacity across multiple similarly sized instance types and Availability Zones and using the price-capacity-optimized allocation strategy, which launches into the Spot pools with the deepest capacity at the lowest price and so reduces interruptions. Adding a small On-Demand base to the Auto Scaling group maintains a steady, uninterrupted baseline while keeping costs low and absorbing Spot interruptions.
Option C (lowest price) increases interruption risk. Capacity Reservations (option B) provide On-Demand capacity guarantees at added cost and are not needed for Spot-based elasticity.
Option E is redundant because diversification is already achieved with option D. The A, D, F combination maximizes the resiliency of Spot workloads while preserving strong cost efficiency and aligns with AWS guidance for fault-tolerant, stateless applications on Spot.
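As an illustration, a hedged boto3 sketch of options A, D, and F together follows: an Auto Scaling group with a small On-Demand base, several similarly sized instance types across multiple Availability Zone subnets, and the price-capacity-optimized Spot allocation strategy. The group name, launch template, subnets, and instance types are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="spot-app-asg",
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",  # multiple AZs
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "spot-app-lt",   # placeholder launch template
                "Version": "$Latest",
            },
            "Overrides": [                             # similarly sized instance types
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "m6i.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                 # steady On-Demand floor
            "OnDemandPercentageAboveBaseCapacity": 0,  # everything above base is Spot
            "SpotAllocationStrategy": "price-capacity-optimized",
        },
    },
)
```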
A company wants to run big data workloads on Amazon EMR. The workloads need to process terabytes of data in memory.
A solutions architect needs to identify the appropriate EMR cluster instance configuration for the workloads.
Which solution will meet these requirements?
- A . Use a storage optimized instance for the primary node. Use compute optimized instances for core nodes and task nodes.
- B . Use a memory optimized instance for the primary node. Use storage optimized instances for core nodes and task nodes.
- C . Use a general purpose instance for the primary node. Use memory optimized instances for core nodes and task nodes.
- D . Use general purpose instances for the primary, core, and task nodes.
C
Explanation:
Big data workloads that need to process terabytes of data in memory require memory-optimized instances for the core and task nodes to ensure sufficient memory for processing data efficiently.
Primary Node: A general purpose instance is suitable because it manages cluster operations, including coordination and monitoring, and does not process data directly.
Core and Task Nodes: These nodes handle data storage and processing. Memory-optimized instances are ideal because they provide high memory-to-CPU ratios, which is critical for in-memory big data workloads.
Why Other Options Are Incorrect:
Option A: Storage optimized and compute optimized instances are not suitable for workloads that rely heavily on in-memory processing.
Option B: A memory-optimized primary node is unnecessary because the primary node does not process data.
Option D: General purpose instances for all nodes will not provide sufficient memory for processing terabytes of data in memory.
Reference: AWS Documentation ― Amazon EMR Instance Types; Memory-Optimized Instances
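For illustration, a hedged boto3 sketch of option C follows, with placeholder names, release label, roles, and counts: a general purpose primary (master) node and memory optimized core and task nodes for an in-memory workload such as Spark.

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="in-memory-analytics",
    ReleaseLabel="emr-6.15.0",                 # assumed release label
    Applications=[{"Name": "Spark"}],
    ServiceRole="EMR_DefaultRole",             # assumed default EMR roles
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "InstanceGroups": [
            # General purpose primary node: coordinates, does not process data
            {"Name": "Primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            # Memory optimized core and task nodes for in-memory processing
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "r5.4xlarge", "InstanceCount": 4},
            {"Name": "Task", "InstanceRole": "TASK",
             "InstanceType": "r5.4xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
)
```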
A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analytics application is highly resilient and is designed to run in stateless mode.
The company notices that the application is showing signs of performance degradation during busy times and is presenting 5xx errors. The company needs to make the application scale seamlessly.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an Amazon Machine Image (AMI) of the web application. Use the AMI to launch a second EC2 On-Demand Instance. Use an Application Load Balancer to distribute the load across the two EC2 instances.
- B . Create an Amazon Machine Image (AMI) of the web application. Use the AMI to launch a second EC2 On-Demand Instance. Use Amazon Route 53 weighted routing to distribute the load across the two EC2 instances.
- C . Create an AWS Lambda function to stop the EC2 instance and change the instance type. Create an Amazon CloudWatch alarm to invoke the Lambda function when CPU utilization is more than 75%.
- D . Create an Amazon Machine Image (AMI) of the web application. Apply the AMI to a launch template. Create an Auto Scaling group that includes the launch template. Configure the launch template to use a Spot Fleet. Attach an Application Load Balancer to the Auto Scaling group.
D
Explanation:
Auto Scaling is the most effective solution for ensuring seamless scalability of a stateless application.
Key points:
Option D leverages Auto Scaling with a Spot Fleet for cost efficiency and attaches an ALB to distribute traffic.
Options A and B do not provide automated scaling and would require manual intervention to add more instances.
Option C changes the instance type but does not scale out horizontally, which is required here.
Reference: AWS Documentation ― Amazon EC2 Auto Scaling
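As an illustration, a hedged boto3 sketch of option D follows, with placeholder AMI, target group ARN, and subnet IDs: the application's AMI goes into a launch template that requests Spot capacity, and an Auto Scaling group launches instances from it behind an ALB target group. (The exam option mentions a Spot Fleet; this sketch simply requests Spot through the launch template's market options.)

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template built from the application's AMI, requesting Spot capacity
ec2.create_launch_template(
    LaunchTemplateName="analytics-web-lt",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",    # AMI created from the running instance
        "InstanceType": "t3.medium",
        "InstanceMarketOptions": {"MarketType": "spot"},  # Spot for cost efficiency
    },
)

# Auto Scaling group behind the ALB target group; ELB health checks drive replacement
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="analytics-web-asg",
    LaunchTemplate={"LaunchTemplateName": "analytics-web-lt", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"  # placeholder
    ],
    HealthCheckType="ELB",
)
```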
A solutions architect is designing a customer-facing application for a company. The application’s database will have a clearly defined access pattern throughout the year and will have a variable number of reads and writes that depend on the time of year. The company must retain audit records for the database for 7 days. The recovery point objective (RPO) must be less than 5 hours.
Which solution meets these requirements?
- A . Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.
- B . Use Amazon Redshift. Configure concurrency scaling. Activate audit logging. Perform database snapshots every 4 hours.
- C . Use Amazon RDS with Provisioned IOPS. Activate the database auditing parameter. Perform database snapshots every 5 hours.
- D . Use Amazon Aurora MySQL with auto scaling. Activate the database auditing parameter.
D
Explanation:
Amazon Aurora MySQL provides:
Continuous, incremental backups to Amazon S3 with point-in-time recovery (PITR), allowing recovery to any point within the retention window (typically up to 35 days). This easily achieves an RPO of less than 5 hours, often down to seconds.
Support for database auditing parameters so audit logs can be retained and managed (for example, in database logs or external log services) to meet the 7-day audit requirement.
Auto scaling (via read replicas, Aurora Serverless v2, or capacity management) to handle variable read/write demand throughout the year with minimal operational overhead.
Why others are less suitable:
A: DynamoDB is a NoSQL service and does not directly satisfy typical relational database and SQL audit requirements; DynamoDB Streams retains records for only 24 hours, not 7 days, and on-demand backups do not inherently deliver an RPO of less than 5 hours without frequent scheduling.
B: Amazon Redshift is a data warehouse, not a primary transactional application database.
C: Snapshots every 5 hours give an RPO of up to 5 hours, not less than 5 hours, and rely on discrete snapshot points rather than continuous PITR.
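For illustration, a hedged boto3 sketch of option D follows, with placeholder identifiers: an Aurora MySQL cluster with 7-day continuous backups that exports its audit log to CloudWatch Logs, where a 7-day retention policy keeps the audit trail. Enabling audit record generation itself is done through the cluster parameter group (for example, server_audit_logging), which is not shown here, and the log group only exists once the cluster starts exporting.

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

rds.create_db_cluster(
    DBClusterIdentifier="customer-app-aurora",       # placeholder cluster name
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,                   # let Secrets Manager hold the password
    BackupRetentionPeriod=7,                         # continuous backups with PITR
    EnableCloudwatchLogsExports=["audit"],           # ship audit records to CloudWatch Logs
)

# Keep the exported audit log for exactly 7 days
logs.put_retention_policy(
    logGroupName="/aws/rds/cluster/customer-app-aurora/audit",  # assumed log group name
    retentionInDays=7,
)
```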
