Practice Free SAA-C03 Exam Online Questions
A company runs a mobile game app on AWS. The app stores data for every user session. The data updates frequently during a gaming session. The app stores up to 256 KB for each session. Sessions can last up to 48 hours.
The company wants to automate the deletion of expired session data. The company must be able to restore all session data automatically if necessary.
Which solution will meet these requirements?
- A . Use an Amazon DynamoDB table to store the session data. Enable point-in-time recovery (PITR) and TTL for the table. Select the corresponding attribute for TTL in the session data.
- B . Use an Amazon MemoryDB table to store the session data. Enable point-in-time recovery (PITR) and TTL for the table. Select the corresponding attribute for TTL in the session data.
- C . Store session data in an Amazon S3 bucket. Use the S3 Standard storage class. Enable S3 Versioning for the bucket. Create an S3 Lifecycle configuration to expire objects after 48 hours.
- D . Store session data in an Amazon S3 bucket. Use the S3 Intelligent-Tiering storage class. Enable S3 Versioning for the bucket. Create an S3 Lifecycle configuration to expire objects after 48 hours.
A
Explanation:
Amazon DynamoDB supports TTL (Time to Live) for automated deletion of expired items, and point-in-time recovery (PITR) to restore the table to any second within the retention window (up to 35 days), which satisfies the automatic-restore requirement. DynamoDB also handles small (up to 400 KB per item), frequently updated records with predictable performance, a good fit for 256 KB session data. Amazon MemoryDB is a Redis-compatible in-memory database; it does not use tables with a TTL attribute or offer DynamoDB-style PITR. S3 Lifecycle expiration (Options C and D) runs on a daily schedule, so it cannot guarantee precise 48-hour deletion, and S3 is inefficient for small, frequent in-place updates; with Versioning enabled, expired objects also become noncurrent versions rather than being fully removed.
Reference Extract:
"DynamoDB supports TTL for automated expiration and deletion of items and PITR for continuous backups and restoration of data."
Source: AWS Certified Solutions Architect ― Official Study Guide, DynamoDB section.
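As a sketch of how Option A could be wired up, the following builds the request parameters for DynamoDB's UpdateTimeToLive and UpdateContinuousBackups API calls. The table name `GameSessions` and attribute name `expires_at` are illustrative assumptions, not part of the question.

```python
import time

TABLE_NAME = "GameSessions"   # hypothetical table name
TTL_ATTRIBUTE = "expires_at"  # hypothetical TTL attribute on each item

def ttl_request(table_name: str, attribute: str) -> dict:
    """Parameters for dynamodb.update_time_to_live()."""
    return {
        "TableName": table_name,
        "TimeToLiveSpecification": {
            "Enabled": True,
            "AttributeName": attribute,
        },
    }

def pitr_request(table_name: str) -> dict:
    """Parameters for dynamodb.update_continuous_backups()."""
    return {
        "TableName": table_name,
        "PointInTimeRecoverySpecification": {
            "PointInTimeRecoveryEnabled": True,
        },
    }

def session_expiry_epoch(session_start: float, hours: int = 48) -> int:
    # DynamoDB TTL expects a Unix epoch timestamp (in seconds) in the item.
    return int(session_start + hours * 3600)

now = time.time()
expiry = session_expiry_epoch(now)  # written into each session item
```

An application would pass these dictionaries to `boto3.client("dynamodb").update_time_to_live(**...)` and `update_continuous_backups(**...)`, and store `expires_at` on every session item so DynamoDB can delete it after the 48-hour window.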
A company deploys a stateful application on Amazon EC2 On-Demand Instances in multiple Availability Zones behind an Application Load Balancer (ALB). The application workload is predictable, and the company has not received any CPU usage alerts. The company expects to run the application for at least 1 year.
The company expects CPU usage to increase by 50% during an upcoming 2-week holiday period. The company wants to optimize costs for the application for both the holiday period and normal operations.
Which solution will meet these requirements in the MOST cost-effective way?
- A . Continue to use On-Demand Instances to handle the existing workload. Purchase additional On-Demand Instances to handle the capacity requirement for the upcoming holiday period.
- B . Purchase a 12-month EC2 Instance Savings Plan to handle the existing workload. Use On-Demand Instances to handle the additional capacity requirement for the upcoming holiday period.
- C . Purchase a 12-month Compute Savings Plan to handle the existing workload. Use Spot Instances to handle the additional capacity requirement for the upcoming holiday period.
- D . Purchase a 12-month Compute Savings Plan to handle both the existing workload and the additional capacity requirement for the upcoming holiday period.
B
Explanation:
The correct answer is B because the company has a predictable baseline workload that will run for at
least 1 year, plus a temporary 2-week increase in demand during a holiday period. The most cost-effective approach is to use a long-term discount option for the steady-state workload and a flexible pricing model for the short-term burst capacity.
An EC2 Instance Savings Plan is a strong fit for the existing workload: in exchange for committing to a specific instance family in a Region, it offers a deeper discount than a Compute Savings Plan, which suits this stable, predictable application footprint. Since the company is already using EC2 and the workload pattern is known, this is an efficient commitment for the baseline capacity. For the additional temporary holiday demand, On-Demand Instances are appropriate because the extra capacity is needed only for a short duration. This avoids overcommitting to capacity that will not be used after the holiday period ends.
Option A is less cost-effective because it misses the savings opportunity for the predictable baseline workload.
Option C is incorrect because the application is described as stateful, which makes Spot Instances a poor fit due to potential interruption.
Option D is incorrect because purchasing a 12-month commitment for both the normal workload and the temporary holiday increase would likely result in paying for excess committed usage after the 2-week peak ends.
AWS cost optimization guidance recommends using discounted commitment models for steady-state usage and On-Demand for short-lived, temporary spikes. Therefore, the best answer is to use an EC2 Instance Savings Plan for the baseline and On-Demand Instances for the seasonal increase.
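The cost reasoning can be made concrete with a back-of-the-envelope comparison. The hourly rates, discount percentage, and capacity counts below are invented for illustration only and are not AWS pricing.

```python
# Hypothetical rates and capacity; real Savings Plan discounts vary by
# instance family, Region, term, and payment option.
HOURS_PER_YEAR = 8760
HOLIDAY_HOURS = 14 * 24        # 2-week holiday period

on_demand_rate = 0.10          # $/hour per capacity unit (made up)
savings_plan_rate = 0.072      # made-up ~28% Savings Plan discount

baseline_units = 10            # steady-state capacity
holiday_extra_units = 5        # extra capacity for the 50% CPU increase

# Option B: Savings Plan for the baseline, On-Demand for the 2-week burst.
cost_b = (baseline_units * savings_plan_rate * HOURS_PER_YEAR
          + holiday_extra_units * on_demand_rate * HOLIDAY_HOURS)

# Option D: commit to peak capacity for the whole year.
cost_d = (baseline_units + holiday_extra_units) * savings_plan_rate * HOURS_PER_YEAR

# Option A: everything On-Demand.
cost_a = (baseline_units * on_demand_rate * HOURS_PER_YEAR
          + holiday_extra_units * on_demand_rate * HOLIDAY_HOURS)
```

Under these assumed rates, Option B is cheapest: the short burst is too brief for a year-long commitment (Option D) to pay off, and running the baseline On-Demand (Option A) forgoes the commitment discount for 8,760 hours of steady usage.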
A retail company is building an order fulfillment system using a microservices architecture on AWS. The system must store incoming orders durably until processing completes successfully. Multiple teams’ services process orders according to a defined workflow. Services must be scalable, loosely coupled, and able to handle sudden surges in order volume. The processing steps of each order must be centrally tracked.
Which solution will meet these requirements?
- A . Send incoming orders to an Amazon Simple Notification Service (Amazon SNS) topic. Start an AWS Step Functions workflow for each order that orchestrates the microservices. Use AWS Lambda functions for each microservice.
- B . Send incoming orders to an Amazon Simple Queue Service (Amazon SQS) queue. Start an AWS Step Functions workflow for each order that orchestrates the microservices. Use AWS Lambda functions for each microservice.
- C . Send incoming orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EventBridge to distribute events among the microservices. Use AWS Lambda functions for each microservice.
- D . Send incoming orders to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon EventBridge to the topic to distribute events among the microservices. Use AWS Lambda functions for each microservice.
B
Explanation:
Durable storage of incoming orders with buffering and ability to handle surges is exactly what Amazon SQS is designed for. SQS provides highly durable, scalable queues that decouple producers from consumers.
Centrally tracking workflow steps is a core use case of AWS Step Functions, which gives a visual workflow and state machine, tracks the state of each order, and can orchestrate calls to multiple microservices (in this case, Lambda functions).
Combining SQS + Step Functions + Lambda gives:
- Durable queueing for orders (SQS).
- Loose coupling and surge handling (SQS decoupling + auto-scaling Lambda).
- Central orchestration and tracking of order-processing steps (Step Functions).
Why the other options are not correct:
A: SNS is a pub/sub service, not a durable work queue, and is not designed for “store-and-retry until processed” workloads in the same way SQS is.
C: SQS + EventBridge provides decoupling but no central, stateful workflow tracking; EventBridge is event routing, not workflow orchestration.
D: SNS + EventBridge still lacks durable order storage and explicit centralized workflow/state tracking.
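A minimal Amazon States Language definition for the Step Functions workflow in Option B might look like the following. The state names and Lambda function ARNs are placeholders, not part of the question.

```python
import json

# Hypothetical three-step order workflow; each Task invokes one
# microservice's Lambda function and Step Functions tracks the state.
state_machine = {
    "Comment": "Order fulfillment workflow started for each SQS message",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargePayment",
            "Next": "ShipOrder",
        },
        "ShipOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ShipOrder",
            "End": True,
        },
    },
}

# The JSON definition is what you would pass to create_state_machine.
definition = json.dumps(state_machine)
```

Each order message pulled from the SQS queue starts one execution of this state machine, giving the central, per-order progress tracking the question asks for.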
A company is designing a web application with an internet-facing Application Load Balancer (ALB).
The company needs the ALB to receive HTTPS web traffic from the public internet. The ALB must send only HTTPS traffic to the web application servers hosted on the Amazon EC2 instances on port 443. The ALB must perform a health check of the web application servers over HTTPS on port 8443.
Which combination of configurations of the security group that is associated with the ALB will meet these requirements? (Select THREE.)
- A . Allow HTTPS inbound traffic from 0.0.0.0/0 for port 443.
- B . Allow all outbound traffic to 0.0.0.0/0 for port 443.
- C . Allow HTTPS outbound traffic to the web application instances for port 443.
- D . Allow HTTPS inbound traffic from the web application instances for port 443.
- E . Allow HTTPS outbound traffic to the web application instances for the health check on port 8443.
- F . Allow HTTPS inbound traffic from the web application instances for the health check on port 8443.
A, C, E
Explanation:
Option A: The ALB must accept HTTPS traffic from the public internet. Allowing inbound traffic on port 443 from 0.0.0.0/0 enables this functionality.
Option C: The ALB must forward HTTPS traffic to the web application servers on port 443. Outbound traffic for port 443 must be allowed for this communication.
Option E: The ALB must perform health checks on the web application servers over HTTPS on port 8443. Outbound traffic for port 8443 must be allowed for this purpose.
Option B: Allowing outbound traffic on port 443 to 0.0.0.0/0 is overly permissive; Option C restricts egress to the web application instances, which follows least privilege.
Options D and F: Inbound rules from the web application instances are unnecessary because the ALB initiates the connections to the instances; security groups are stateful, so return traffic is allowed automatically.
AWS Documentation
Reference: Application Load Balancer Security Groups
Health Checks for ALBs
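The three selected rules can be expressed as security group permissions, in the shape used by the EC2 `authorize_security_group_ingress`/`egress` APIs. The web-tier security group ID is a placeholder.

```python
# Placeholder ID for the web application instances' security group.
WEB_SG = "sg-0abc1234example"

# Option A: accept HTTPS from the public internet.
alb_ingress = [
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]

alb_egress = [
    # Option C: forward HTTPS to the web tier on port 443.
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "UserIdGroupPairs": [{"GroupId": WEB_SG}]},
    # Option E: health checks over HTTPS on port 8443.
    {"IpProtocol": "tcp", "FromPort": 8443, "ToPort": 8443,
     "UserIdGroupPairs": [{"GroupId": WEB_SG}]},
]
```

Referencing the instances' security group (rather than a CIDR) in the egress rules keeps the rules valid even as instances are replaced.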
A company deploys an application on Amazon EC2 Spot Instances. The company observes frequent unavailability issues that affect the application’s output. The application instances all use the same instance type in a single Availability Zone. The application architecture does not require the use of any specific instance family.
The company needs a solution to improve the availability of the application.
Which combination of steps will meet this requirement MOST cost-effectively? (Select THREE.)
- A . Create an EC2 Auto Scaling group that includes a mix of Spot Instances and a base number of On-Demand Instances.
- B . Create EC2 Capacity Reservations.
- C . Use the lowest price allocation strategy for Spot Instances.
- D . Specify similarly sized instance types and Availability Zones for the Spot Instances.
- E . Use a different instance type for the web application.
- F . Use the price capacity optimized strategy for Spot Instances.
A, D, F
Explanation:
AWS Spot best practices recommend diversifying capacity across multiple instance types and Availability Zones and using the price-capacity-optimized allocation strategy, which selects the lowest-priced Spot pools among those with the deepest capacity, reducing the likelihood of interruption. Adding a small On-Demand base in the Auto Scaling group maintains steady, uninterrupted baseline processing while keeping costs low and absorbing Spot interruptions.
Option C (lowest price) increases interruption risk. Capacity Reservations (B) guarantee On-Demand capacity and add cost; they are not needed for Spot-based elasticity.
Option E is redundant; diversification is already achieved with (D). This combination maximizes resiliency of Spot workloads while preserving strong cost efficiency and aligns with AWS guidance for fault-tolerant, stateless applications on Spot.
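Options A, D, and F map onto an Auto Scaling group's mixed instances policy roughly as follows. The launch template name, instance types, and capacity numbers are illustrative assumptions.

```python
# Sketch of the MixedInstancesPolicy passed to create_auto_scaling_group.
mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "app-template",  # placeholder
            "Version": "$Latest",
        },
        # Option D: several similarly sized instance types; AZ spread
        # comes from the subnets attached to the Auto Scaling group.
        "Overrides": [
            {"InstanceType": "m5.large"},
            {"InstanceType": "m5a.large"},
            {"InstanceType": "m6i.large"},
        ],
    },
    "InstancesDistribution": {
        # Option A: a small On-Demand base, everything above it on Spot.
        "OnDemandBaseCapacity": 2,
        "OnDemandPercentageAboveBaseCapacity": 0,
        # Option F: choose Spot pools by price and available capacity depth.
        "SpotAllocationStrategy": "price-capacity-optimized",
    },
}
```

More override types across more AZs means more Spot pools to draw from, which is the main lever for reducing interruptions without raising cost.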
A company is designing an application on AWS that provides real-time dashboards. The dashboard data comes from on-premises databases that use a variety of schemas and formats. The company needs a solution to transfer and transform the data to AWS with minimal latency.
Which solution will meet these requirements?
- A . Integrate the dashboard with Amazon Managed Streaming for Apache Kafka (Amazon MSK) to transfer and transform the data from the on-premises databases to the dashboards.
- B . Use Amazon Data Firehose to transfer the data to an Amazon S3 Bucket. Configure the dashboard application to import new data from the S3 bucket periodically.
- C . Use AWS Database Migration Service (AWS DMS) Schema Conversion to consolidate the on-premises databases into a single AWS database. Use an AWS Lambda function that is scheduled by Amazon EventBridge to transfer data from the consolidated database to the dashboard application.
- D . Use AWS DataSync to transfer data from the source databases to the dashboard application continuously. Configure the dashboard application to import data from DataSync.
A
Explanation:
Amazon MSK is a fully managed, highly available Apache Kafka service for streaming data with low latency. Kafka Connect and stream processors enable ingest from heterogeneous sources and perform in-stream transformation before delivery to consumers (e.g., the dashboard service). This satisfies real-time updates from diverse schemas and formats. Among the given choices, MSK is the only streaming option designed for sub-second, continuous pipelines. Amazon Data Firehose (B) buffers and batches data for delivery to destinations such as S3 and is optimized for delivery to storage, not low-latency dashboards. AWS DMS Schema Conversion (C) focuses on database migration, not ongoing real-time, multi-format streaming for dashboards. AWS DataSync (D) is for file/object transfer, not database change streams. Hence, MSK best meets minimal-latency, transform-in-flight needs with managed operations.
Reference: Amazon MSK ― real-time streaming, low latency, Kafka Connect/Streams for transformations; Well-Architected Performance Efficiency ― use streaming for real-time analytics.
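One common way to feed MSK from heterogeneous on-premises databases is a Kafka Connect change-data-capture connector. The sketch below shows a Debezium-style MySQL source connector configuration; the hostnames, credentials, and table list are placeholders, and the exact property set depends on the Debezium version in use.

```python
# Hypothetical Kafka Connect source connector config (Debezium MySQL CDC);
# change events land on MSK topics that the dashboard consumes in real time.
connector_config = {
    "name": "orders-cdc-source",  # placeholder connector name
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "onprem-db.example.internal",  # placeholder
        "database.port": "3306",
        "database.user": "cdc_user",        # placeholder credentials
        "database.password": "<secret>",
        "topic.prefix": "dashboards",
        "table.include.list": "sales.orders",  # placeholder table list
    },
}
```

A connector like this runs per source database; a stream processor (Kafka Streams or similar) can then normalize the differing schemas into one format before the dashboard reads it.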
A company has an e-commerce site. The site is designed as a distributed web application hosted in multiple AWS accounts under one AWS Organizations organization. The web application is comprised of multiple microservices. All microservices expose their AWS services either through Amazon CloudFront distributions or public Application Load Balancers (ALBs). The company wants to protect
public endpoints from malicious attacks and monitor security configurations.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS WAF to protect the public endpoints. Use AWS Firewall Manager from a dedicated security account to manage rules in AWS WAF. Use AWS Config rules to monitor the Regional and global WAF configurations.
- B . Use AWS WAF to protect the public endpoints. Apply AWS WAF rules in each account. Use AWS Config rules and AWS Security Hub to monitor the WAF configurations of the ALBs and the CloudFront distributions.
- C . Use AWS WAF to protect the public endpoints. Use AWS Firewall Manager from a dedicated security account to manage the rules in AWS WAF. Use Amazon Inspector and AWS Security Hub to monitor the WAF configurations of the ALBs and the CloudFront distributions.
- D . Use AWS Shield Advanced to protect the public endpoints. Use AWS Config rules to monitor the Shield Advanced configuration for each account.
A
Explanation:
Key Requirements:
- Protect public endpoints (CloudFront distributions and ALBs) from malicious attacks.
- Centralized management across multiple accounts in an organization.
- Ability to monitor security configurations effectively.
- Minimize operational overhead.
Analysis of Options
Option A:
AWS WAF: Protects web applications by filtering and blocking malicious requests. Rules can be applied to both ALBs and CloudFront distributions.
AWS Firewall Manager: Enables centralized management of WAF rules across multiple accounts in an AWS Organizations organization. It simplifies rule deployment, avoiding the need to configure rules individually in each account.
AWS Config: Monitors compliance by using rules that check Regional and global WAF configurations.
Ensures that security configurations align with organizational policies.
Operational Overhead: Centralized management and automated monitoring reduce the operational burden.
Correct Approach: Meets all requirements with the least overhead.
Option B:
This approach involves applying WAF rules in each account manually.
While AWS Config and AWS Security Hub provide monitoring capabilities, managing individual WAF configurations in multiple accounts introduces significant operational overhead.
Incorrect Approach: Higher overhead compared to centralized management with AWS Firewall Manager.
Option C:
Similar to Option A but includes Amazon Inspector, which is designed for vulnerability scanning of workloads such as EC2 instances and container images, not for monitoring WAF configurations.
AWS Security Hub is appropriate for monitoring but is redundant when Firewall Manager and AWS Config are already in use.
Incorrect Approach: Adds unnecessary complexity and does not monitor WAF configurations specifically.
Option D:
AWS Shield Advanced: Focuses on mitigating large-scale DDoS attacks but does not provide the fine-grained web application protection offered by WAF.
AWS Config: Can monitor Shield Advanced configurations but does not fulfill the WAF monitoring requirements.
Incorrect Approach: Does not address the need for WAF or centralized rule management.
Why Option A is Correct
Protection:
AWS WAF provides fine-grained filtering and protection against SQL injection, cross-site scripting, and other web vulnerabilities.
Rules can be applied at both ALBs and CloudFront distributions, covering all public endpoints.
Centralized Management:
AWS Firewall Manager enables security teams to centrally define and manage WAF rules across all accounts in the organization.
Monitoring:
AWS Config ensures compliance with WAF configurations by checking rules and generating alerts for misconfigurations.
Operational Overhead:
Centralized management via Firewall Manager and automated compliance monitoring via AWS Config greatly reduce manual effort.
AWS Solution Architect Reference
AWS WAF Documentation
AWS Firewall Manager Documentation
AWS Config Best Practices
AWS Organizations Documentation
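Centralized rule management in Option A could look like the following Firewall Manager policy, shaped as parameters for the `fms.put_policy` call made from the delegated security account. The policy name and rule-group selection are illustrative, and the `ManagedServiceData` schema shown here is a simplified sketch.

```python
import json

# Hypothetical org-wide WAF policy; Firewall Manager creates and attaches
# web ACLs in every member account that has in-scope resources.
waf_policy = {
    "Policy": {
        "PolicyName": "org-wide-waf",  # placeholder name
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            # Simplified sketch of the JSON-encoded rule configuration.
            "ManagedServiceData": json.dumps({
                "type": "WAFV2",
                "preProcessRuleGroups": [
                    {"managedRuleGroupIdentifier": {
                        "vendorName": "AWS",
                        "managedRuleGroupName": "AWSManagedRulesCommonRuleSet"}}
                ],
                "postProcessRuleGroups": [],
                "defaultAction": {"type": "ALLOW"},
            }),
        },
        # Scopes this policy to the ALBs; a companion policy scoped to
        # "AWS::CloudFront::Distribution" covers the CloudFront endpoints.
        "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,  # auto-fix non-compliant resources
    }
}
```

With `RemediationEnabled`, Firewall Manager attaches the web ACL to new in-scope resources automatically, which is what keeps the operational overhead low across many accounts.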
A company uses a Microsoft SQL Server database. The applications currently connect using SQL Server protocols. The company wants to migrate to Amazon Aurora PostgreSQL with minimal changes to application code.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Use AWS SCT to rewrite SQL queries in the applications.
- B . Enable Babelfish on Aurora PostgreSQL to run SQL Server queries.
- C . Migrate the database schema and data using AWS SCT and AWS DMS.
- D . Use Amazon RDS Proxy to connect the applications to Aurora PostgreSQL.
- E . Use AWS DMS to rewrite SQL queries in the applications.
B, C
Explanation:
Amazon Aurora PostgreSQL with Babelfish allows Aurora to understand SQL Server T-SQL and the SQL Server wire protocol. This enables applications to continue using SQL Server drivers, minimizing code changes (Option B).
Migration of schema and data is performed using AWS Schema Conversion Tool (SCT) and AWS Database Migration Service (DMS) (Option C), which is the AWS-recommended migration pattern for heterogeneous database migrations.
AWS DMS (Option E) does not rewrite application SQL.
RDS Proxy (Option D) does not translate SQL Server protocols.
Option A requires rewriting application queries, which contradicts the “minimal changes” requirement.
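The reason Babelfish minimizes code changes is that the cluster exposes a TDS (SQL Server wire protocol) listener alongside the normal PostgreSQL endpoint, so existing SQL Server drivers keep working. A sketch, with a placeholder endpoint:

```python
# Placeholder Aurora cluster endpoint; 1433 and 5432 are the standard
# SQL Server (TDS) and PostgreSQL ports respectively.
cluster_endpoint = "orders-aurora.cluster-abc123.us-east-1.rds.amazonaws.com"

# Existing applications keep their SQL Server connection settings and
# simply point at the Babelfish TDS listener.
tds_conn = {"host": cluster_endpoint, "port": 1433,
            "driver": "ODBC Driver 17 for SQL Server"}

# New tooling can use the same cluster through the PostgreSQL endpoint.
pg_conn = {"host": cluster_endpoint, "port": 5432,
           "driver": "psycopg2"}
```

Both connections hit the same data; only the wire protocol and query dialect differ, which is why only the hostname in the application's connection string needs to change.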
A company has a web application with sporadic usage patterns. Usage is heavy at the beginning of each month, moderate weekly, and unpredictable during the week. The application uses a MySQL database and must move to AWS without database modifications.
Which solution will meet these requirements?
- A . Amazon DynamoDB
- B . Amazon RDS for MySQL
- C . MySQL-compatible Amazon Aurora Serverless
- D . MySQL on Amazon EC2 in an Auto Scaling group
C
Explanation:
The database workload has high variability and unpredictability, which makes fixed-capacity database solutions inefficient and costly. The company also requires MySQL compatibility without schema or application changes.
Amazon Aurora Serverless (MySQL-compatible) is specifically designed for this scenario.
Option C automatically scales database capacity up or down based on workload demand, making it highly cost-effective for sporadic usage patterns. During peak periods, Aurora Serverless scales seamlessly to handle increased load, and during idle periods, capacity scales down, significantly reducing cost.
Option A (DynamoDB) is not suitable because it is a NoSQL service and would require significant application and schema changes.
Option B (RDS for MySQL) uses fixed instance sizes and would require overprovisioning to handle peak loads, increasing cost.
Option D introduces high operational overhead and does not provide automatic database scaling.
Therefore, C best meets the requirements by providing MySQL compatibility, serverless scaling, and cost optimization for variable workloads.
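For Option C, the scaling behavior comes from the Serverless v2 capacity range configured on the cluster. The sketch below shows the relevant parameters in the shape of `rds.create_db_cluster`; the identifier and ACU bounds are illustrative assumptions.

```python
# Hypothetical Aurora Serverless v2 cluster request; capacity is measured
# in Aurora capacity units (ACUs) and scales within the configured range.
cluster_request = {
    "DBClusterIdentifier": "webapp-aurora",  # placeholder name
    "Engine": "aurora-mysql",                # MySQL-compatible engine
    "ServerlessV2ScalingConfiguration": {
        "MinCapacity": 0.5,   # scale down during quiet weekdays
        "MaxCapacity": 16.0,  # absorb the start-of-month peak
    },
}
```

Because the application speaks standard MySQL, no schema or query changes are needed; only capacity floats between the configured bounds as load changes.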
A solutions architect is provisioning an Amazon Elastic File System (Amazon EFS) file system to provide shared storage across multiple Amazon EC2 instances. The instances all exist in the same VPC across multiple Availability Zones. There are two instances in each Availability Zone. The solutions architect must make the file system accessible to each instance with the lowest possible latency.
Which solution will meet these requirements?
- A . Create a mount target for the EFS file system in the VPC. Use the mount target to mount the file system on each of the instances.
- B . Create a mount target for the EFS file system in one Availability Zone of the VPC. Use the mount target to mount the file system on the instances in that Availability Zone. Share the directory with the other instances.
- C . Create a mount target for each instance. Use each mount target to mount the EFS file system on each respective instance.
- D . Create a mount target in each Availability Zone of the VPC. Use the mount target to mount the EFS file system on the instances in the respective Availability Zone.
D
Explanation:
Amazon EFS requires a mount target in each Availability Zone where EC2 instances access the file system. This is because each mount target provides an elastic network interface in the subnet and AZ, reducing network latency by allowing EC2 instances to communicate locally with the EFS mount
target. Creating a mount target in each AZ optimizes file system access performance and availability. Instances mount the EFS file system via the mount target in their respective AZ, which provides the lowest possible latency and avoids cross-AZ traffic.
Option A, with only a single mount target in the VPC, will cause cross-AZ traffic for instances in other AZs, increasing latency and potentially incurring data transfer costs.
Option B is incomplete and introduces complexity with sharing directories across instances.
Option C is invalid because mount targets are per AZ and per subnet, not per instance.
Reference: Amazon EFS Overview (https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html)
Creating Mount Targets (https://docs.aws.amazon.com/efs/latest/ug/manage-fs-access.html#creating-mount-targets)
AWS Well-Architected Framework ― Performance Efficiency Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
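Option D translates to one `efs.create_mount_target` call per Availability Zone's subnet. The file system ID, subnet IDs, and security group ID below are placeholders.

```python
# Placeholder identifiers; one subnet per AZ that hosts EC2 instances.
FILE_SYSTEM_ID = "fs-0123456789abcdef0"
SUBNETS_BY_AZ = {
    "us-east-1a": "subnet-0aaa1111example",
    "us-east-1b": "subnet-0bbb2222example",
    "us-east-1c": "subnet-0ccc3333example",
}

# One create_mount_target request per AZ; instances in each AZ then mount
# through the local mount target's network interface, avoiding cross-AZ hops.
mount_target_requests = [
    {"FileSystemId": FILE_SYSTEM_ID,
     "SubnetId": subnet_id,
     "SecurityGroups": ["sg-0efs4444example"]}  # must allow NFS (TCP 2049)
    for subnet_id in SUBNETS_BY_AZ.values()
]
```

The mount target's security group must allow inbound NFS (TCP 2049) from the instances; each instance then mounts the file system and is routed to the mount target in its own AZ.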
