Practice Free SAA-C03 Exam Online Questions
A company is developing a social media application. The company anticipates rapid and unpredictable growth in users and data volume. The application needs to handle a continuous high volume of user requests. User requests include long-running processes that store large amounts of user-generated content and user profiles in a relational format. The processes must run in a specific order. The company requires an architecture that can scale resources to meet demand spikes without downtime or performance degradation. The company must ensure that the components of the application can evolve independently without affecting other parts of the system.
Which combination of AWS services will meet these requirements?
- A . Deploy the application on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Use Amazon RDS as the database. Use Amazon Simple Queue Service (Amazon SQS) to decouple message processing between components.
- B . Deploy the application on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Use Amazon RDS as the database. Use Amazon Simple Notification Service (Amazon SNS) to decouple message processing between components.
- C . Use Amazon DynamoDB as the database. Use AWS Lambda functions to implement the application. Configure Amazon DynamoDB Streams to invoke the Lambda functions. Use AWS Step Functions to manage workflows between services.
- D . Use an AWS Elastic Beanstalk environment with auto scaling to deploy the application. Use Amazon RDS as the database. Use Amazon Simple Notification Service (Amazon SNS) to decouple message processing between components.
A
Explanation:
ECS with Fargate: Allows containerized workloads to scale rapidly without managing underlying servers, handling unpredictable growth effectively.
RDS for Relational Data: Manages large relational datasets efficiently while supporting high availability.
SQS for Decoupling: Decouples application components so they can evolve independently. With an SQS FIFO queue, messages that share a message group are delivered in order, which supports the requirement that processes run in a specific sequence.
Reference: Amazon ECS with AWS Fargate documentation; Amazon SQS documentation
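The ordering guarantee above comes from SQS FIFO queues, where messages that share a MessageGroupId are delivered in the order they were sent. The sketch below builds the batch entries a producer could pass to `send_message_batch`; the queue name, user ID, and step names are illustrative, and no AWS call is made here.

```python
# Sketch: preserving per-user processing order with an SQS FIFO queue.
# Entries sharing one MessageGroupId are delivered strictly in order.

def build_fifo_entries(user_id, steps):
    """Build send_message_batch entries for one user's ordered steps."""
    return [
        {
            "Id": str(i),
            "MessageBody": step,
            "MessageGroupId": f"user-{user_id}",        # same group => strict ordering
            "MessageDeduplicationId": f"{user_id}-{i}",  # needed unless content-based dedup is on
        }
        for i, step in enumerate(steps)
    ]

entries = build_fifo_entries("42", ["store-content", "update-profile", "notify-followers"])
```

A consumer reading this group receives the three steps in exactly this order, while messages for other users (other group IDs) can be processed in parallel.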
A company currently runs an on-premises stock trading application by using Microsoft Windows Server. The company wants to migrate the application to the AWS Cloud. The company needs to design a highly available solution that provides low-latency access to block storage across multiple Availability Zones.
Which solution will meet these requirements with the LEAST implementation effort?
- A . Configure a Windows Server cluster that spans two Availability Zones on Amazon EC2 instances. Install the application on both cluster nodes. Use Amazon FSx for Windows File Server as shared storage between the two cluster nodes.
- B . Configure a Windows Server cluster that spans two Availability Zones on Amazon EC2 instances. Install the application on both cluster nodes. Use Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp3) volumes as storage attached to the EC2 instances. Set up application-level replication to sync data from one EBS volume in one Availability Zone to another EBS volume in the second Availability Zone.
- C . Deploy the application on Amazon EC2 instances in two Availability Zones. Configure one EC2 instance as active and the second EC2 instance in standby mode. Use an Amazon FSx for NetApp ONTAP Multi-AZ file system to access the data by using the Internet Small Computer Systems Interface (iSCSI) protocol.
- D . Deploy the application on Amazon EC2 instances in two Availability Zones. Configure one EC2 instance as active and the second EC2 instance in standby mode. Use Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io2) volumes as storage attached to the EC2 instances. Set up Amazon EBS level replication to sync data from one io2 volume in one Availability Zone to another io2 volume in the second Availability Zone.
A
Explanation:
This solution is designed to provide high availability and low-latency access to block storage across multiple Availability Zones with minimal implementation effort.
Windows Server Cluster Across AZs: Configuring a Windows Server Failover Cluster (WSFC) that spans two Availability Zones ensures that the application can failover from one instance to another in case of a failure, meeting the high availability requirement.
Amazon FSx for Windows File Server: FSx for Windows File Server provides fully managed Windows file storage that is accessible via the SMB protocol, which is suitable for Windows-based applications. It offers high availability and can be used as shared storage between the cluster nodes, ensuring that both nodes have access to the same data with low latency.
Why Not Other Options?
Option B (EBS with application-level replication): This requires complex configuration and management, as EBS volumes cannot be directly shared across AZs. Application-level replication is more complex and prone to errors.
Option C (FSx for NetApp ONTAP with iSCSI): While this is a viable option, it introduces additional complexity with iSCSI and requires more specialized knowledge for setup and management.
Option D (EBS with EBS-level replication): EBS-level replication is not natively supported across AZs, and setting up a custom replication solution would increase the implementation effort.
Reference:
Amazon FSx for Windows File Server: overview and benefits of using FSx for Windows File Server.
Windows Server Failover Clustering on AWS: guide on setting up a Windows Server cluster on AWS.
A company is building a mobile gaming app. The company wants to serve users from around the world with low latency. The company needs a scalable solution to host the application and to route user requests to the location that is nearest to each user.
Which solution will meet these requirements?
- A . Use an Application Load Balancer to route requests to Amazon EC2 instances that are deployed across multiple Availability Zones.
- B . Use a Regional Amazon API Gateway REST API to route requests to AWS Lambda functions.
- C . Use an edge-optimized Amazon API Gateway REST API to route requests to AWS Lambda functions.
- D . Use an Application Load Balancer to route requests to containers in an Amazon ECS cluster.
C
Explanation:
Edge-optimized API Gateway endpoints utilize the Amazon CloudFront global network to decrease latency for clients globally. This setup ensures that the request is routed to the closest edge location, significantly reducing response time and improving performance for worldwide users.
Reference: AWS Documentation – Amazon API Gateway Endpoint Types
A company has an ordering application that stores customer information in Amazon RDS for MySQL. During regular business hours, employees run one-time queries for reporting purposes. Timeouts are occurring during order processing because the reporting queries are taking a long time to run. The company needs to eliminate the timeouts without preventing employees from performing queries.
Which solution will meet these requirements?
- A . Create a read replica. Move reporting queries to the read replica.
- B . Create a read replica. Distribute the ordering application to the primary DB instance and the read replica.
- C . Migrate the ordering application to Amazon DynamoDB with on-demand capacity.
- D . Schedule the reporting queries for non-peak hours.
A
Explanation:
Amazon RDS for MySQL supports the creation of read replicas, which are read-only copies of the primary database instance. By offloading read-heavy operations, such as reporting queries, to a read replica:
Performance Improvement: The primary DB instance is relieved from the additional load, reducing the likelihood of timeouts during order processing.
Data Consistency: Read replicas use asynchronous replication, so reporting data is near real time. Brief replica lag is possible, which is acceptable for one-time reporting queries.
Scalability: Multiple read replicas can be created to handle increased read traffic.
This approach allows employees to continue running necessary reports without impacting the performance of the ordering application.
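One common way to apply this split in application code is to route by query type: writes and order transactions go to the primary endpoint, while reporting reads go to the replica endpoint. The endpoint hostnames below are illustrative placeholders, not real instances.

```python
# Sketch (hypothetical endpoints): route order processing to the primary
# and long-running reporting queries to the read replica.

PRIMARY = "orders.cluster-abc.us-east-1.rds.amazonaws.com"
REPLICA = "orders-replica.cluster-abc.us-east-1.rds.amazonaws.com"

def pick_endpoint(query_kind):
    """Reporting reads go to the replica so long queries cannot slow
    down (or time out) order-processing transactions on the primary."""
    return REPLICA if query_kind == "reporting" else PRIMARY
```

Because replicas are read-only, any code path that writes must keep using the primary endpoint.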
A security team needs to enforce rotation of all IAM users’ access keys every 90 days. Keys older than 90 days must be automatically deactivated and removed. A solutions architect must create a remediation solution with minimal operational effort.
Which solution meets these requirements?
- A . Create an AWS Config rule to check key age. Configure the rule to run an AWS Batch job to remove the key.
- B . Create an Amazon EventBridge rule to check key age. Configure it to run an AWS Batch job to remove the key.
- C . Create an AWS Config rule to check key age. Define an EventBridge rule that schedules an AWS Lambda function to remove the key.
- D . Create an EventBridge rule to check key age. Define a second EventBridge rule to run an AWS Batch job to remove the key.
C
Explanation:
AWS Config has a built-in managed rule (access-keys-rotated) that evaluates IAM access key age.
Config rules detect non-compliant resources automatically, without building custom logic.
After Config identifies old keys, EventBridge can trigger an AWS Lambda function to disable and delete the keys. Lambda provides a fully managed compute layer requiring no servers or batch environments.
Using AWS Batch (Options A, B, and D) adds unnecessary operational overhead for a simple automation task.
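The core of such a remediation Lambda is a simple age check against the 90-day threshold; the boto3 calls to list, deactivate, and delete keys are omitted here, so this sketch shows only the decision logic.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the age check a remediation Lambda could apply.
# The 90-day threshold matches the stated requirement.

MAX_KEY_AGE_DAYS = 90

def key_is_expired(create_date, now=None):
    """Return True when an access key is older than 90 days and should
    be deactivated and removed by the remediation function."""
    now = now or datetime.now(timezone.utc)
    return (now - create_date) > timedelta(days=MAX_KEY_AGE_DAYS)
```

In a real Lambda, this check would run over the `CreateDate` of each key returned by the IAM `list_access_keys` API for every user.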
A company runs an application on Amazon EC2 instances. The application is deployed in private subnets in three Availability Zones of the us-east-1 Region. The instances must be able to connect to the internet to download files. The company wants a design that is highly available across the Region.
Which solution should be implemented to ensure that there are no disruptions to internet connectivity?
- A . Deploy a NAT instance in a private subnet of each Availability Zone.
- B . Deploy a NAT gateway in a public subnet of each Availability Zone.
- C . Deploy a transit gateway in a private subnet of each Availability Zone.
- D . Deploy an internet gateway in a public subnet of each Availability Zone.
B
Explanation:
To allow private subnets to access the internet, deploy NAT gateways in a public subnet in each AZ for high availability. NAT instances are less scalable and less fault-tolerant.
“To create a highly available architecture, create a NAT gateway in each Availability Zone and configure your routing to use it.”
― NAT Gateway Overview
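The per-AZ design above means each private subnet's route table should point its default route at the NAT gateway in the same Availability Zone, so losing one AZ does not break internet access in the others. The subnet AZ names and NAT gateway IDs below are illustrative.

```python
# Sketch: map each private subnet's default route to the NAT gateway
# in the SAME Availability Zone (IDs are illustrative placeholders).

nat_by_az = {
    "us-east-1a": "nat-0aaa",
    "us-east-1b": "nat-0bbb",
    "us-east-1c": "nat-0ccc",
}

def default_route(az):
    """Return the 0.0.0.0/0 route entry for a private subnet in this AZ."""
    return {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": nat_by_az[az]}
```

If all three subnets instead shared one NAT gateway, an outage in that gateway's AZ would cut internet access for every AZ, which is exactly what this design avoids.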
A company is designing a microservice-based architecture for a new application on AWS. Each microservice will run on its own set of Amazon EC2 instances. Each microservice will need to interact with multiple AWS services.
The company wants to manage permissions for each EC2 instance according to the principle of least privilege.
Which solution will meet this requirement with the LEAST administrative overhead?
- A . Assign an IAM user to each microservice. Use access keys that are stored within the application code to authenticate AWS service requests.
- B . Create a single IAM role that has permission to access all AWS services. Add the IAM role to an instance profile that is associated with the EC2 instances.
- C . Use AWS Organizations to create a separate account for each microservice. Manage permissions at the account level.
- D . Create individual IAM roles based on the specific needs of each microservice. Add each IAM role to an instance profile that is associated with the appropriate EC2 instance.
D
Explanation:
AWS best practice is to use IAM roles with instance profiles for EC2 instances so that applications obtain temporary credentials automatically and do not need to store access keys.
To honor the principle of least privilege, each microservice should have an IAM role that grants only the specific permissions it needs.
Therefore, creating individual IAM roles per microservice and attaching them via instance profiles (Option D) both minimizes long-term credential management and applies least privilege cleanly.
Why others are not correct:
A: Using IAM users with access keys in code is insecure and high-overhead (key rotation, secret management).
B: A single broad role violates least privilege because every microservice gets more permissions than it needs.
C: Separate accounts per microservice is extreme over-segmentation and significantly increases operational complexity.
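Option D in practice means one narrowly scoped policy document per microservice role. The helper below builds such a document as a plain dict; the service name, actions, and table ARN are illustrative examples, not a prescribed setup.

```python
# Sketch: one least-privilege policy document per microservice role.
# Actions and the ARN are illustrative placeholders.

def microservice_policy(actions, resources):
    """Build an IAM policy document granting only the listed actions
    on the listed resources (least privilege for one microservice)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": actions, "Resource": resources}],
    }

# e.g. an "orders" service that only needs two operations on one table
orders_policy = microservice_policy(
    ["dynamodb:GetItem", "dynamodb:PutItem"],
    ["arn:aws:dynamodb:us-east-1:123456789012:table/orders"],
)
```

Each such policy is attached to its own role, and the role is attached to the matching instances through an instance profile, so no access keys ever appear in code.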
A company hosts an application on AWS that gives users the ability to download photos. The company stores all photos in an Amazon S3 bucket that is located in the us-east-1 Region. The company wants to provide the photo download application to global customers with low latency.
Which solution will meet these requirements?
- A . Find the public IP addresses that Amazon S3 uses in us-east-1. Configure an Amazon Route 53 latency-based routing policy that routes to all the public IP addresses.
- B . Configure an Amazon CloudFront distribution in front of the S3 bucket. Use the distribution endpoint to access the photos that are in the S3 bucket.
- C . Configure an Amazon Route 53 geoproximity routing policy to route the traffic to the S3 bucket that is closest to each customer’s location.
- D . Create a new S3 bucket in the us-west-1 Region. Configure an S3 Cross-Region Replication rule to copy the photos to the new S3 bucket.
B
Explanation:
Amazon CloudFront is a content delivery network (CDN) service that distributes content with low latency and high transfer speeds. Placing CloudFront in front of the S3 bucket ensures global users download content from the nearest edge location, reducing latency significantly.
Reference: AWS Documentation – Amazon CloudFront with S3 Origin
As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.
Which solution meets these requirements?
- A . Run a query with Amazon Athena to generate the report.
- B . Create a report in Cost Explorer and download the report.
- C . Access the bill details from the billing dashboard and download the bill.
- D . Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
B
Explanation:
The most efficient way for management-style reporting is AWS Cost Explorer, because it is designed for interactive cost analysis, filtering, grouping, and exporting reports without building a data pipeline. If the organization has activated cost allocation capabilities (for example, by tagging resources or using other allocation methods), Cost Explorer can present costs in a way that supports chargeback/showback and budgeting. From an operational standpoint, generating and downloading a report from Cost Explorer is quick, requires minimal setup, and is repeatable for periodic budget planning.
Option A (Athena) is powerful but typically requires setting up the Cost and Usage Report (CUR) delivery to Amazon S3, defining schemas, and writing/maintaining SQL queries. That is more operational overhead than needed when the ask is “most efficient way to obtain this report information.” Option C (billing dashboard bill download) provides invoice-style line items, but it is not optimized for slicing/grouping “by user” and may not align with departmental budgeting workflows.
Option D is unrelated: AWS Budgets alerts on thresholds and forecasts; it does not generate a billed-items-by-user report.
Important nuance: “by user” reporting in AWS is usually achieved through cost allocation tags, account structure, or other allocation dimensions rather than literal per-person IAM identity billing. In practice, departments/teams/users are mapped via tagging and chargeback structures, which Cost Explorer is intended to analyze and export. As a result, Cost Explorer is the most operationally efficient tool to produce a downloadable report for budget planning.
Therefore, B best meets the requirement with the least effort and fastest path to a usable report.
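The "by user" grouping Cost Explorer performs can also be expressed as parameters for the underlying Cost Explorer GetCostAndUsage API; the sketch below shows that shape. The tag key "Owner" is an illustrative cost allocation tag, and no API call is made here.

```python
# Sketch: parameters mirroring a Cost Explorer report grouped by a
# cost allocation tag. The "Owner" tag key is an assumed example.

params = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-02-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "TAG", "Key": "Owner"}],  # "by user" via tagging
}
```

This illustrates the nuance above: the per-user breakdown comes from the tag dimension, not from literal per-IAM-identity billing.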
A company needs to ensure that an IAM group that contains database administrators can perform operations only within Amazon RDS. The company must ensure that the members of the IAM group cannot access any other AWS services.
Which solution will meet these requirements?
- A . Create an IAM policy that includes a statement that has the Effect "Allow" and the Action "rds:*". Attach the IAM policy to the IAM group.
- B . Create an IAM policy that includes two statements. Configure the first statement to have the Effect "Allow" and the Action "rds:*". Configure the second statement to have the Effect "Deny" and the Action "*". Attach the IAM policy to the IAM group.
- C . Create an IAM policy that includes a statement that has the Effect "Deny" and the NotAction "rds:*". Attach the IAM policy to the IAM group.
- D . Create an IAM policy with a statement that includes the Effect "Allow" and the Action "rds:*". Include a permissions boundary that has the Effect "Allow" and the Action "rds:*". Attach the IAM policy to the IAM group.
C
Explanation:
To enforce that IAM users can only access Amazon RDS and no other AWS services, the recommended approach is to use a Deny statement with NotAction. This ensures that all actions are denied except RDS actions.
Options A and B do not fully achieve the restriction: A only allows RDS but does not explicitly deny access to other services if another policy grants access; B’s explicit Deny for “*” would override all other permissions, including the intended RDS Allow, which would result in no access at all.
Option D with permissions boundaries still allows other attached policies to grant access outside RDS. Therefore, C is the correct approach to enforce RDS-only access.
Reference:
• IAM JSON Policy Elements ― Effect, Action, NotAction, and Deny
• AWS Well-Architected Framework ― Security Pillar: Least privilege
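The Option C policy can be sketched as a plain dict, together with a tiny function mimicking how the single Deny/NotAction statement evaluates: every action that is not an RDS action is explicitly denied, overriding any other Allow. (RDS access itself still requires some Allow statement elsewhere; the Deny only restricts.)

```python
# Sketch of the Option C policy: one Deny with NotAction blocks every
# action EXCEPT rds:* actions, no matter what other policies allow.

rds_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "NotAction": "rds:*", "Resource": "*"}
    ],
}

def is_blocked(action):
    """Mimic this one statement's effect: anything outside the rds:
    namespace is explicitly denied (explicit Deny always wins)."""
    return not action.startswith("rds:")
```

This is why C succeeds where A fails: an Allow for "rds:*" alone cannot prevent another attached policy from granting, say, s3:GetObject, but the explicit Deny here would still block it.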
