Practice Free SAA-C03 Exam Online Questions
A company has an e-commerce site. The site is designed as a distributed web application hosted in multiple AWS accounts under one AWS Organizations organization. The web application comprises multiple microservices. All microservices expose their services either through Amazon CloudFront distributions or through public Application Load Balancers (ALBs). The company wants to protect the public endpoints from malicious attacks and monitor security configurations.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS WAF to protect the public endpoints. Use AWS Firewall Manager from a dedicated security account to manage rules in AWS WAF. Use AWS Config rules to monitor the Regional and global WAF configurations.
- B . Use AWS WAF to protect the public endpoints. Apply AWS WAF rules in each account. Use AWS Config rules and AWS Security Hub to monitor the WAF configurations of the ALBs and the CloudFront distributions.
- C . Use AWS WAF to protect the public endpoints. Use AWS Firewall Manager from a dedicated security account to manage the rules in AWS WAF. Use Amazon Inspector and AWS Security Hub to monitor the WAF configurations of the ALBs and the CloudFront distributions.
- D . Use AWS Shield Advanced to protect the public endpoints. Use AWS Config rules to monitor the Shield Advanced configuration for each account.
A
Explanation:
Key Requirements:
Protect public endpoints (CloudFront distributions and ALBs) from malicious attacks.
Centralized management across multiple accounts in an organization.
Ability to monitor security configurations effectively.
Minimize operational overhead.
Analysis of Options
Option A:
AWS WAF: Protects web applications by filtering and blocking malicious requests. Rules can be applied to both ALBs and CloudFront distributions.
AWS Firewall Manager: Enables centralized management of WAF rules across multiple accounts in an AWS Organizations organization. It simplifies rule deployment, avoiding the need to configure rules individually in each account.
AWS Config: Monitors compliance by using rules that check Regional and global WAF configurations.
Ensures that security configurations align with organizational policies.
Operational Overhead: Centralized management and automated monitoring reduce the operational burden.
Correct Approach: Meets all requirements with the least overhead.
Option B:
This approach involves applying WAF rules in each account manually.
While AWS Config and AWS Security Hub provide monitoring capabilities, managing individual WAF configurations in multiple accounts introduces significant operational overhead.
Incorrect Approach: Higher overhead compared to centralized management with AWS Firewall Manager.
Option C:
Similar to Option A but includes Amazon Inspector, which is not designed for monitoring WAF configurations.
AWS Security Hub is appropriate for monitoring but is redundant when Firewall Manager and Config are already in use.
Incorrect Approach: Adds unnecessary complexity and does not focus on monitoring WAF specifically.
Option D:
AWS Shield Advanced: Focuses on mitigating large-scale DDoS attacks but does not provide the fine-grained web application protection offered by WAF.
AWS Config: Can monitor Shield Advanced configurations but does not fulfill the WAF monitoring requirements.
Incorrect Approach: Does not address the need for WAF or centralized rule management.
Why Option A is Correct
Protection:
AWS WAF provides fine-grained filtering and protection against SQL injection, cross-site scripting, and other web vulnerabilities.
Rules can be applied at both ALBs and CloudFront distributions, covering all public endpoints.
Centralized Management:
AWS Firewall Manager enables security teams to centrally define and manage WAF rules across all accounts in the organization.
Monitoring:
AWS Config ensures compliance with WAF configurations by checking rules and generating alerts for misconfigurations.
Operational Overhead:
Centralized management via Firewall Manager and automated compliance monitoring via AWS Config greatly reduce manual effort.
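To make the centralized-management point concrete, the following is a minimal boto3 sketch of pushing a WAF policy from a Firewall Manager administrator account. The policy name, rule group choice, and abbreviated ManagedServiceData JSON are illustrative assumptions, not details taken from the question; the full JSON schema is described in the Firewall Manager API reference.

```python
# Hypothetical sketch: centrally deploying a WAFv2 policy for ALBs across the
# organization from the Firewall Manager administrator (security) account.
import json
import boto3

fms = boto3.client("fms")  # must run in the designated FMS administrator account

# Abbreviated ManagedServiceData payload (illustrative): one AWS managed rule group.
waf_policy = {
    "type": "WAFV2",
    "defaultAction": {"type": "ALLOW"},
    "preProcessRuleGroups": [
        {
            "managedRuleGroupIdentifier": {
                "vendorName": "AWS",
                "managedRuleGroupName": "AWSManagedRulesCommonRuleSet",
            },
            "overrideAction": {"type": "NONE"},
            "ruleGroupType": "ManagedRuleGroup",
        }
    ],
    "postProcessRuleGroups": [],
    "overrideCustomerWebACLAssociation": False,
}

fms.put_policy(
    Policy={
        "PolicyName": "org-alb-waf-baseline",  # hypothetical policy name
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            "ManagedServiceData": json.dumps(waf_policy),
        },
        "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,  # auto-associates the web ACL with in-scope ALBs
    }
)
```

A similar policy with a global scope (created in us-east-1) would cover the CloudFront distributions.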
AWS Solution Architect Reference
AWS WAF Documentation
AWS Firewall Manager Documentation
AWS Config Best Practices
AWS Organizations Documentation
A company is migrating its online shopping platform to AWS and wants to adopt a serverless architecture.
The platform has a user profile and preference service that does not have a defined schema. The platform allows user-defined fields.
Profile information is updated several times daily. The company must store profile information in a durable and highly available solution. The solution must capture modifications to profile data for future processing.
Which solution will meet these requirements?
- A . Use an Amazon RDS for PostgreSQL instance to store profile data. Use a log stream in Amazon CloudWatch Logs to capture modifications.
- B . Use an Amazon DynamoDB table to store profile data. Use Amazon DynamoDB Streams to capture modifications.
- C . Use an Amazon ElastiCache (Redis OSS) cluster to store profile data. Use Amazon Data Firehose to capture modifications.
- D . Use an Amazon Aurora Serverless v2 cluster to store the profile data. Use a log stream in Amazon CloudWatch Logs to capture modifications.
B
Explanation:
Amazon DynamoDB is a serverless, NoSQL database that is fully managed, highly available, and scales automatically. It is ideal for data without a fixed schema and for use cases where fields can
vary by user. DynamoDB Streams enables the capture of changes to table items in real time, which is ideal for triggering additional processing or workflows on data modifications.
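As a minimal sketch of this pattern (table name, key, and handler are hypothetical, not from the question), the snippet below creates a profile table with DynamoDB Streams enabled and shows a Lambda handler that receives the change records for future processing.

```python
# Sketch: schemaless profile table with a change stream, plus a stream consumer.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="UserProfiles",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "userId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "userId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # serverless, on-demand capacity
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # capture before/after images of each item
    },
)

def handler(event, context):
    """Lambda handler attached to the table's stream via an event source mapping."""
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            # Hand the modification off for downstream processing (placeholder).
            print("Profile changed:", new_image.get("userId"))
```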
Reference Extract from AWS Documentation / Study Guide:
"DynamoDB provides a scalable, highly available NoSQL database service for applications requiring flexible schema. DynamoDB Streams captures table activity for processing changes in real time."
Source: AWS Certified Solutions Architect ― Official Study Guide, DynamoDB and Serverless section.
A company is moving a legacy data processing application to the AWS Cloud. The application needs to run on Amazon EC2 instances behind an Application Load Balancer (ALB).
The application must handle incoming traffic spikes and continue to work in the event of an application fault in one Availability Zone. The company requires that a Web Application Firewall (WAF) must be attached to the ALB.
Which solution will meet these requirements?
- A . Deploy the application to EC2 instances in an Auto Scaling group that is in a single Availability Zone. Use an ALB to distribute traffic. Use AWS WAF.
- B . Deploy the application to EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an ALB to distribute traffic. Use AWS WAF.
- C . Deploy the application to EC2 instances in Auto Scaling groups across multiple AWS Regions. Use Route 53 latency routing. Attach AWS WAF to Route 53.
- D . Deploy the application to EC2 instances in an Auto Scaling group across multiple Availability Zones. Use a Network Load Balancer (NLB). Use AWS WAF.
B
Explanation:
This design includes:
ALB: Supports AWS WAF integration.
Auto Scaling Group: Automatically scales based on load.
Multi-AZ Deployment: Increases resiliency and availability.
AWS WAF: Can be attached to ALB for application-layer protection.
“ALB is integrated with AWS WAF. You can deploy your EC2 instances in an Auto Scaling group across multiple Availability Zones to ensure high availability.”
― High Availability with Auto Scaling and ALB
Why not others?
A: Single AZ = not resilient
C: AWS WAF cannot attach to Route 53
D: NLB is not supported by AWS WAF
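For reference, a minimal boto3 sketch of the multi-AZ part of this design is shown below; the launch template ID, subnet IDs, and target group ARN are placeholders, not values from the question.

```python
# Sketch: an Auto Scaling group spread across two AZ subnets and registered
# with an ALB target group, so AWS WAF can be attached to the ALB itself.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="legacy-app-asg",  # hypothetical name
    LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0", "Version": "$Latest"},
    MinSize=2,
    MaxSize=8,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # subnets in two Availability Zones
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/abc123"
    ],
    HealthCheckType="ELB",          # replace instances the ALB marks unhealthy
    HealthCheckGracePeriod=300,
)
```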
Reference: AWS WAF Supported Services; Auto Scaling with Load Balancers
A company runs an application on a group of Amazon EC2 instances behind an Application Load Balancer (ALB). The company wants to protect the application against layer 7 DDoS attacks.
Which solution will meet this requirement?
- A . Associate AWS Shield Standard with the ALB.
- B . Create an AWS WAF web ACL and add a custom rule. Associate the web ACL with the ALB.
- C . Create an AWS WAF web ACL and add an AWS managed rule. Associate the web ACL with the ALB.
- D . Create an Amazon CloudFront distribution and set the ALB as the origin. Configure the application DNS record to point to the CloudFront distribution instead of the ALB.
C
Explanation:
Protecting an application from layer 7 (application layer) DDoS attacks is best achieved by using AWS WAF (Web Application Firewall), which provides customizable protection against common web exploits including DDoS attacks at the application layer. AWS WAF supports managed rule groups maintained by AWS, which offer robust, tested protections against OWASP top 10 vulnerabilities and common attack patterns without requiring extensive manual rule creation.
While AWS Shield Standard provides basic network-layer DDoS protection automatically at no additional charge, it does not offer application-layer filtering capabilities.
Therefore, option A alone is insufficient.
Option B, involving only custom rules, requires significant operational overhead and expertise, whereas AWS managed rules offer a turnkey solution with ongoing updates from AWS security teams.
Option D, using CloudFront in front of the ALB, can provide additional protection benefits such as caching and geographic restrictions, but the question specifically asks for protecting against layer 7 DDoS on the ALB directly. CloudFront plus WAF is a valid enhanced solution, but the direct and recommended answer in AWS official documents is to use AWS WAF managed rules directly with ALB for application-level protection.
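The sketch below illustrates option C with boto3; the web ACL name, metric names, and ALB ARN are placeholders rather than values from the question.

```python
# Sketch: a regional web ACL using an AWS managed rule group, associated with the ALB.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="app-layer7-protection",  # hypothetical name
    Scope="REGIONAL",              # REGIONAL scope is required for ALBs
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},  # rule group actions are used as-is
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "awsCommonRules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "appLayer7Protection",
    },
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
)
```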
Reference: AWS Well-Architected Framework ― Security Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
AWS WAF Overview (https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html)
AWS Shield Overview (https://aws.amazon.com/shield/)
Protecting Web Applications with AWS WAF (https://aws.amazon.com/blogs/security/how-to-protect-your-web-application-from-dos-and-ddos-attacks-using-aws-waf/)
A company wants to migrate an application to AWS. The application runs on Docker containers behind an Application Load Balancer (ALB). The application stores data in a PostgreSQL database. The cloud-based solution must use AWS WAF to inspect all application traffic. The application experiences most traffic on weekdays. There is significantly less traffic on weekends.
Which solution will meet these requirements in the MOST cost-effective way?
- A . Use a Network Load Balancer (NLB). Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the NLB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon RDS for PostgreSQL as the database.
- B . Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the ALB. Run the application on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon RDS for PostgreSQL as the database.
- C . Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the ALB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon Aurora Serverless as the database.
- D . Use a Network Load Balancer (NLB). Create a web access control list (web ACL) in AWS WAF that has the necessary rules. Attach the web ACL to the NLB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon Aurora Serverless as the database.
C
Explanation:
Using an Application Load Balancer (ALB) allows for integration with AWS WAF to inspect all incoming traffic. Running the application on Amazon ECS provides a scalable and managed container orchestration service. Utilizing Amazon Aurora Serverless for the PostgreSQL database offers automatic scaling based on application demand, which is cost-effective for workloads with variable traffic patterns, such as higher traffic on weekdays and lower traffic on weekends.
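As a hedged sketch of the database side of option C (identifiers are placeholders; this assumes Aurora Serverless v2, which requires at least one db.serverless instance in the cluster):

```python
# Sketch: an Aurora PostgreSQL cluster with Serverless v2 scaling so capacity
# follows the weekday/weekend traffic pattern.
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="shop-db",          # hypothetical identifier
    Engine="aurora-postgresql",
    MasterUsername="appadmin",
    ManageMasterUserPassword=True,          # let RDS store the password in Secrets Manager
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,                 # ACUs during weekend lows
        "MaxCapacity": 16,                  # ACUs for weekday peaks
    },
)

# Serverless v2 clusters still need at least one instance of class db.serverless.
rds.create_db_instance(
    DBInstanceIdentifier="shop-db-writer",
    DBClusterIdentifier="shop-db",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```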
Reference: Optimizing cost savings: The advantage of Amazon Aurora over self-managed open-source databases
A company receives data transfers from a small number of external clients that use SFTP software on an Amazon EC2 instance. The clients use an SFTP client to upload data. The clients use SSH keys for authentication. Every hour, an automated script transfers new uploads to an Amazon S3 bucket for processing.
The company wants to move the transfer process to an AWS managed service and to reduce the time required to start data processing. The company wants to retain the existing user management and SSH key generation process. The solution must not require clients to make significant changes to their existing processes.
Which solution will meet these requirements?
- A . Reconfigure the script that runs on the EC2 instance to run every 15 minutes. Create an S3 Event Notifications rule for all new object creation events. Set an Amazon Simple Notification Service (Amazon SNS) topic as the destination.
- B . Create an AWS Transfer Family SFTP server that uses the existing S3 bucket as a target. Use service-managed users to enable authentication.
- C . Require clients to add the AWS DataSync agent into their local environments. Create an IAM user for each client that has permission to upload data to the target S3 bucket.
- D . Create an AWS Transfer Family SFTP connector that has permission to access the target S3 bucket for each client. Store credentials in AWS Systems Manager. Create an IAM role to allow the SFTP connector to securely use the credentials.
B
Explanation:
AWS Transfer Family (SFTP) allows clients to use standard SFTP clients and SSH keys without changes.
By enabling service-managed users, clients can continue uploading files with their existing tools.
The service delivers the files directly into S3, reducing latency between upload and processing. This removes the need for EC2, custom scripts, and periodic transfers. It fully meets the requirement for a managed solution with minimal disruption to client processes.
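A minimal sketch of this setup follows; the bucket path, IAM role ARN, user name, and SSH key are placeholders, not values from the question.

```python
# Sketch: an AWS Transfer Family SFTP endpoint backed by the existing S3 bucket,
# with a service-managed user that reuses the client's existing SSH public key.
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",                              # files land directly in S3
    IdentityProviderType="SERVICE_MANAGED",   # keep the existing SSH-key-based user model
)

transfer.create_user(
    ServerId=server["ServerId"],
    UserName="client-a",                                            # hypothetical client
    Role="arn:aws:iam::111122223333:role/transfer-s3-access",       # grants access to the bucket
    HomeDirectory="/uploads-bucket/client-a",                       # existing target bucket/prefix
    SshPublicKeyBody="ssh-rsa AAAA... client-a-key",                # key from the current process
)
```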
A company is implementing a new application on AWS. The company will run the application on multiple Amazon EC2 instances across multiple Availability Zones within multiple AWS Regions. The application will be available through the internet. Users will access the application from around the world.
The company wants to ensure that each user who accesses the application is sent to the EC2 instances that are closest to the user’s location.
Which solution will meet these requirements?
- A . Implement an Amazon Route 53 geolocation routing policy. Use an internet-facing Application Load Balancer to distribute the traffic across all Availability Zones within the same Region.
- B . Implement an Amazon Route 53 geoproximity routing policy. Use an internet-facing Network Load Balancer to distribute the traffic across all Availability Zones within the same Region.
- C . Implement an Amazon Route 53 multivalue answer routing policy Use an internet-facing Application Load Balancer to distribute the traffic across all Availability Zones within the same Region.
- D . Implement an Amazon Route 53 weighted routing policy. Use an internet-facing Network Load Balancer to distribute the traffic across all Availability Zones within the same Region.
A
Explanation:
The requirement is to route users to the nearest AWS Region where the application is deployed. The best solution is to use Amazon Route 53 with a geolocation routing policy, which routes traffic based on the geographic location of the user making the request.
Geolocation Routing: This routing policy ensures that users are directed to the resources (in this case, EC2 instances) that are geographically closest to them, thereby reducing latency and improving the user experience.
Application Load Balancer (ALB): Within each Region, an internet-facing Application Load Balancer (ALB) is used to distribute incoming traffic across multiple EC2 instances in different Availability Zones. ALBs are designed to handle HTTP/HTTPS traffic and provide advanced features like content-based routing, SSL termination, and user authentication.
Why Not Other Options?
Option B (Geoproximity + NLB): Geoproximity routing is similar but more complex as it requires fine-tuning the proximity settings. A Network Load Balancer (NLB) is better suited for TCP/UDP traffic rather than HTTP/HTTPS.
Option C (Multivalue Answer Routing + ALB): Multivalue answer routing does not direct traffic based on user location but rather returns multiple values and lets the client choose. This does not meet the requirement for geographically routing users.
Option D (Weighted Routing + NLB): Weighted routing splits traffic based on predefined weights and does not consider the user’s geographic location. NLB is not ideal for this scenario due to its focus on lower-level protocols.
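To illustrate geolocation routing, the sketch below creates one record for European users and a catch-all default; the hosted zone ID, domain, ALB DNS names, and ALB hosted zone IDs are placeholders for illustration only.

```python
# Sketch: geolocation alias records that send EU users to an ALB in eu-west-1
# and all other users to a default record in us-east-1.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z123456ABCDEFG",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "europe",
                    "GeoLocation": {"ContinentCode": "EU"},
                    "AliasTarget": {
                        "HostedZoneId": "Z32O12XQLNTSW2",  # illustrative ALB zone ID
                        "DNSName": "eu-alb-123.eu-west-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "default",
                    "GeoLocation": {"CountryCode": "*"},  # default location for unmatched users
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # illustrative ALB zone ID
                        "DNSName": "us-alb-456.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
        ]
    },
)
```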
AWS Reference: Amazon Route 53 Routing Policies ― Detailed explanation of the various routing policies available in Route 53, including geolocation.
Elastic Load Balancing ― Information on the different types of load balancers in AWS and when to use them.
A retail company is building an order fulfillment system using a microservices architecture on AWS. The system must store incoming orders durably until processing completes successfully. Multiple teams’ services process orders according to a defined workflow. Services must be scalable, loosely coupled, and able to handle sudden surges in order volume. The processing steps of each order must be centrally tracked.
Which solution will meet these requirements?
- A . Send incoming orders to an Amazon Simple Notification Service (Amazon SNS) topic. Start an AWS Step Functions workflow for each order that orchestrates the microservices. Use AWS Lambda functions for each microservice.
- B . Send incoming orders to an Amazon Simple Queue Service (Amazon SQS) queue. Start an AWS Step Functions workflow for each order that orchestrates the microservices. Use AWS Lambda functions for each microservice.
- C . Send incoming orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EventBridge to distribute events among the microservices. Use AWS Lambda functions for each microservice.
- D . Send incoming orders to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon EventBridge to the topic to distribute events among the microservices. Use AWS Lambda functions for each microservice.
B
Explanation:
Durable storage of incoming orders with buffering and ability to handle surges is exactly what Amazon SQS is designed for. SQS provides highly durable, scalable queues that decouple producers from consumers.
Centrally tracking workflow steps is a core use case of AWS Step Functions, which gives a visual workflow and state machine, tracks the state of each order, and can orchestrate calls to multiple microservices (in this case, Lambda functions).
Combining SQS + Step Functions + Lambda (sketched after this list) gives:
Durable queueing for orders (SQS).
Loose coupling and surge handling (SQS decoupling + auto-scaling Lambda).
Central orchestration and tracking of order-processing steps (Step Functions).
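The sketch below shows the glue between the pieces: a Lambda function consuming order messages from the SQS queue and starting one Step Functions execution per order. The state machine ARN environment variable and the orderId field are assumptions for illustration.

```python
# Sketch: SQS-triggered Lambda that starts a Step Functions execution per order,
# so each order's processing steps are tracked centrally by the state machine.
import json
import os

import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]  # e.g. arn:aws:states:...:stateMachine:order-fulfillment

def handler(event, context):
    """Invoked by the SQS event source mapping with a batch of order messages."""
    for record in event["Records"]:
        order = json.loads(record["body"])
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            name=f"order-{order['orderId']}",  # assumes each order carries an orderId field
            input=json.dumps(order),
        )
```

Successfully processed messages are deleted from the queue by the event source mapping; failures remain in the queue (or move to a dead-letter queue), which preserves the durable, store-and-retry behavior.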
Why the other options are not correct:
A: SNS is a pub/sub service, not a durable work queue, and is not designed for “store-and-retry until processed” workloads in the same way SQS is.
C: SQS + EventBridge provides decoupling but no central, stateful workflow tracking; EventBridge is event routing, not workflow orchestration.
D: SNS + EventBridge still lacks durable order storage and explicit centralized workflow/state tracking.
A company has set up hybrid connectivity between an on-premises data center and AWS by using AWS Site-to-Site VPN. The company is migrating a workload to AWS.
The company sets up a VPC that has two public subnets and two private subnets. The company wants to monitor the total packet loss and round-trip-time (RTT) between the data center and AWS.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon CloudWatch Network Monitor to set up Internet Control Message Protocol (ICMP) probe monitoring from each subnet to the on-premises destination.
- B . Create an Amazon EC2 instance in each subnet. Create a scheduled job to send Internet Control Message Protocol (ICMP) packets to the on-premises destination.
- C . Create an AWS Lambda function in each subnet. Write a script to perform Internet Control Message Protocol (ICMP) connectivity checks.
- D . Create an AWS Batch job in each subnet. Write a script to perform Internet Control Message Protocol (ICMP) connectivity checks.
A
Explanation:
The requirement is to monitor network metrics such as total packet loss and round-trip time (RTT) between on-premises and AWS over Site-to-Site VPN with minimal operational overhead. Amazon CloudWatch Network Monitor provides a managed solution to monitor connectivity, including packet loss and latency, between AWS and on-premises networks. This solution does not require managing any additional infrastructure such as EC2 instances or Lambda functions and thus reduces operational overhead significantly.
CloudWatch Network Monitor leverages AWS-managed probes and integrates natively with CloudWatch dashboards and alarms, enabling automated, centralized monitoring of network health. This aligns with the AWS Well-Architected Framework’s operational excellence pillar by minimizing manual intervention and enabling proactive detection of network issues.
Options B, C, and D involve creating custom probes with EC2 instances, Lambda functions, or AWS Batch jobs, which increases complexity, cost, and maintenance effort. They also require scheduling, script management, and additional monitoring infrastructure.
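A hedged sketch of option A follows. The subnet ARN and destination IP are placeholders, and the exact parameter shape of the boto3 networkmonitor client should be verified against the current API reference; this is an assumption-based illustration, not a definitive implementation.

```python
# Sketch: a CloudWatch Network Monitor monitor with an ICMP probe from one
# subnet toward an on-premises address reachable over the Site-to-Site VPN.
import boto3

nm = boto3.client("networkmonitor")

nm.create_monitor(
    monitorName="vpn-health",      # hypothetical monitor name
    aggregationPeriod=30,          # seconds between aggregated metric datapoints
    probes=[
        {
            "sourceArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-aaaa1111",
            "destination": "10.20.0.10",   # on-premises host (placeholder)
            "protocol": "ICMP",
        }
    ],
)
```

Packet loss and RTT metrics for each probe then appear in CloudWatch, where alarms can be configured without any self-managed infrastructure.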
Reference: AWS Well-Architected Framework ― Operational Excellence Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
Amazon CloudWatch Network Monitor (https://docs.aws.amazon.com/vpc/latest/networkmanager/monitor.html)
AWS Site-to-Site VPN Monitoring (https://docs.aws.amazon.com/vpn/latest/s2svpn/monitoring-cloudwatch.html)
