Practice Free SAA-C03 Exam Online Questions
A solutions architect is designing a multi-Region disaster recovery (DR) strategy for a company. The company runs an application on Amazon EC2 instances in Auto Scaling groups that are behind an Application Load Balancer (ALB). The company hosts the application in the company’s primary and secondary AWS Regions.
The application must respond to DNS queries from the secondary Region if the primary Region fails.
Only one Region must serve traffic at a time.
Which solution will meet these requirements?
- A . Create an outbound endpoint in Amazon Route 53 Resolver. Create forwarding rules that determine how queries will be forwarded to DNS resolvers on the network. Associate the rules with VPCs in each Region.
- B . Create primary and secondary DNS records in Amazon Route 53. Configure health checks and a failover routing policy.
- C . Create a traffic policy in Amazon Route 53. Use a geolocation routing policy and a value type of ELB Application Load Balancer.
- D . Create an Amazon Route 53 profile. Associate DNS resources to the profile. Associate the profile with VPCs in each Region.
B
Explanation:
Amazon Route 53 supports failover routing policies, which use health checks to route DNS queries to a secondary Region only if the primary endpoint fails. This design ensures only one Region is active for traffic at any given time. This is the recommended architecture for active-passive, multi-Region DR strategies.
AWS Documentation Extract:
“Failover routing lets you route traffic to a primary resource, such as a web server in one Region, and a secondary resource in another Region. If the primary fails, Route 53 can route traffic to the secondary resource automatically.”
(Source: Amazon Route 53 documentation, Routing Policy Types)
A, D: These options manage DNS resolution for VPCs and on-premises networks; they do not configure public DNS failover for external users.
C: Geolocation routing is for regional distribution, not DR failover.
Reference: AWS Certified Solutions Architect – Official Study Guide, Multi-Region DR and Route 53.
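For illustration, a minimal boto3 sketch of the failover configuration described above: a health check on the primary Region's ALB plus PRIMARY and SECONDARY alias records in the hosted zone. The hosted zone ID, record name, ALB DNS names, and ALB hosted zone IDs are placeholders, not values from the question.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values -- replace with your hosted zone, domain, and ALB details.
HOSTED_ZONE_ID = "Z0EXAMPLE"
RECORD_NAME = "app.example.com"
PRIMARY_ALB = {"HostedZoneId": "Z35SXDOTRQ7X7K", "DNSName": "primary-alb-123.us-east-1.elb.amazonaws.com"}
SECONDARY_ALB = {"HostedZoneId": "ZHURV8PSTC4K8", "DNSName": "secondary-alb-456.us-west-2.elb.amazonaws.com"}

# Health check against the primary Region's endpoint.
health_check = route53.create_health_check(
    CallerReference="primary-region-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": PRIMARY_ALB["DNSName"],
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Primary and secondary failover alias records pointing at each Region's ALB.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": health_check["HealthCheck"]["Id"],
                    "AliasTarget": {**PRIMARY_ALB, "EvaluateTargetHealth": True},
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "AliasTarget": {**SECONDARY_ALB, "EvaluateTargetHealth": True},
                },
            },
        ]
    },
)
```

With failover routing, Route 53 answers with the secondary record only while the primary is reported unhealthy, so only one Region serves traffic at a time.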
A law firm needs to make hundreds of files readable for the general public. The law firm must prevent members of the public from modifying or deleting the files before a specified future date.
Which solution will meet these requirements MOST securely?
- A . Upload the files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the specified date.
- B . Create a new Amazon S3 bucket. Enable S3 Versioning. Use S3 Object Lock and set a retention period based on the specified date. Create an Amazon CloudFront distribution to serve content from the bucket. Use an S3 bucket policy to restrict access to the CloudFront origin access control (OAC).
- C . Create a new Amazon S3 bucket. Enable S3 Versioning. Configure an event trigger to run an AWS Lambda function if a user modifies or deletes an object. Configure the Lambda function to replace the modified or deleted objects with the original versions of the objects from a private S3 bucket.
- D . Upload the files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period based on the specified date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
B
Explanation:
S3 Object Lock: Enables Write Once Read Many (WORM) protection for data, preventing objects from being deleted or modified for a set retention period.
S3 Versioning: Helps maintain object versions and ensures a recovery path for accidental overwrites.
CloudFront Distribution: Ensures secure and efficient public access by serving content through an edge-optimized delivery network while protecting S3 data with OAC.
Bucket Policy for OAC: Restricts public access to only the CloudFront origin, ensuring maximum security.
Reference: Amazon S3 Object Lock documentation.
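For illustration, a minimal boto3 sketch of the Object Lock portion of this design; the CloudFront distribution and the OAC bucket policy are omitted. The bucket name, object key, and retention date are hypothetical placeholders.

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

BUCKET = "law-firm-public-files"                          # hypothetical bucket name
RETAIN_UNTIL = datetime(2030, 1, 1, tzinfo=timezone.utc)  # the specified future date

# Object Lock must be enabled at bucket creation time and implies S3 Versioning.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Upload a file and lock it in COMPLIANCE mode until the retention date,
# so the object cannot be overwritten or deleted before that date.
s3.put_object(Bucket=BUCKET, Key="case-files/brief-001.pdf", Body=b"...")
s3.put_object_retention(
    Bucket=BUCKET,
    Key="case-files/brief-001.pdf",
    Retention={"Mode": "COMPLIANCE", "RetainUntilDate": RETAIN_UNTIL},
)
```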
A company is planning to deploy a managed MySQL database solution for its non-production applications. The company plans to run the system for several years on AWS.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an Amazon RDS for MySQL instance. Purchase a Reserved Instance.
- B . Create an Amazon RDS for MySQL instance. Use the instance on an on-demand basis.
- C . Create an Amazon Aurora MySQL cluster with writer and reader nodes. Use the cluster on an on-demand basis.
- D . Create an Amazon EC2 instance. Manually install and configure MySQL Server on the instance.
A
Explanation:
Amazon RDS for MySQL Reserved Instances provide significant savings over on-demand pricing when you plan to run the database for long periods. This is the most cost-effective option for non-production, long-running managed MySQL workloads.
Reference Extract:
"Reserved Instances provide a significant discount compared to On-Demand pricing and are recommended for steady-state workloads that run for an extended period."
Source: AWS Certified Solutions Architect – Official Study Guide, RDS Cost Optimization section.
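For illustration, a minimal boto3 sketch of locating and purchasing a matching Reserved Instance offering. The instance class, term, and offering type are example values, not requirements from the question.

```python
import boto3

rds = boto3.client("rds")

# Find a 1-year, no-upfront Reserved Instance offering that matches the
# instance class used for the non-production MySQL database (example values).
offerings = rds.describe_reserved_db_instances_offerings(
    ProductDescription="mysql",
    DBInstanceClass="db.t3.medium",
    Duration="31536000",          # 1 year, in seconds
    OfferingType="No Upfront",
    MultiAZ=False,
)
offering_id = offerings["ReservedDBInstancesOfferings"][0]["ReservedDBInstancesOfferingId"]

# Purchase the reservation; a running instance that matches the offering's
# attributes automatically receives the discounted rate.
rds.purchase_reserved_db_instances_offering(
    ReservedDBInstancesOfferingId=offering_id,
    DBInstanceCount=1,
)
```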
A media streaming company is redesigning its infrastructure to accommodate increasing demand for video content that users consume daily. The company needs to process terabyte-sized videos to block some content in the videos. Video processing can take up to 20 minutes.
The company needs a solution that is cost-effective, highly available, and scalable.
Which solution will meet these requirements?
- A . Use AWS Lambda functions to process the videos. Store video metadata in Amazon DynamoDB. Store video content in Amazon S3 Intelligent-Tiering.
- B . Use Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type to implement microservices to process videos. Store video metadata in Amazon Aurora. Store video content in Amazon S3 Intelligent-Tiering.
- C . Use Amazon EMR to process the videos with Apache Spark. Store video content in Amazon FSx for Lustre. Use Amazon Kinesis Data Streams to ingest videos in real time.
- D . Deploy a containerized video processing application on Amazon Elastic Kubernetes Service (Amazon EKS) with the Amazon EC2 launch type. Store video metadata in Amazon RDS in a single Availability Zone. Store video content in Amazon S3 Glacier Deep Archive.
B
Explanation:
AWS Lambda is not suitable for long-running jobs that can take up to 20 minutes, as Lambda has a maximum execution duration of 15 minutes.
Amazon ECS with AWS Fargate allows you to run containers without managing EC2 instances, providing a scalable and highly available environment. You can scale Fargate tasks to handle large and parallel video processing jobs. Amazon Aurora is a highly available, managed relational database. S3 Intelligent-Tiering is cost-effective for storing large video files with variable access patterns.
AWS Documentation Extract:
"AWS Fargate lets you run containers without managing servers or clusters, providing a highly available and scalable compute environment. Fargate is suitable for running data processing workloads that require long-running compute tasks."
(Source: AWS Fargate documentation, Use Cases)
A: Lambda is not suitable for long-running (over 15 min) or heavy compute jobs.
C: Amazon EMR is optimized for big data analytics, not for this type of video processing workload. FSx for Lustre and Kinesis Data Streams are not a good fit for this use case.
D: EKS with EC2 adds operational overhead and RDS in a single AZ is not highly available; Glacier Deep Archive is not suitable for frequently accessed video files.
Reference: AWS Certified Solutions Architect – Official Study Guide, Containerized and Serverless Processing.
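For illustration, a minimal boto3 sketch that launches one Fargate task per video. The cluster, task definition, container name, network settings, and S3 URI are hypothetical placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Launch one Fargate task for a video to process (cluster, task definition,
# subnets, and security group are placeholders).
response = ecs.run_task(
    cluster="video-processing-cluster",
    launchType="FARGATE",
    taskDefinition="video-processor:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234", "subnet-0def5678"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    overrides={
        "containerOverrides": [
            {
                "name": "processor",
                # Pass the S3 location of the video to the container at run time.
                "environment": [{"name": "VIDEO_S3_URI", "value": "s3://media-bucket/input/video-001.mp4"}],
            }
        ]
    },
)
print(response["tasks"][0]["taskArn"])
```

Because tasks run until they exit, Fargate has no 15-minute limit, and the service can scale out to process many videos in parallel.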
A company is using AWS DataSync to migrate millions of files from an on-premises system to AWS.
The files are 10 KB in size on average.
The company wants to use Amazon S3 for file storage. For the first year after the migration the files will be accessed once or twice and must be immediately available. After 1 year the files must be archived for at least 7 years.
Which solution will meet these requirements MOST cost-effectively?
- A . Use an archive tool to group the files into large objects. Use DataSync to migrate the objects. Store the objects in S3 Glacier Instant Retrieval for the first year. Use a lifecycle configuration to transition the files to S3 Glacier Deep Archive after 1 year with a retention period of 7 years.
- B . Use an archive tool to group the files into large objects. Use DataSync to copy the objects to S3 Standard-Infrequent Access (S3 Standard-IA). Use a lifecycle configuration to transition the files to S3 Glacier Instant Retrieval after 1 year with a retention period of 7 years.
- C . Configure the destination storage class for the files as S3 Glacier Instant Retrieval. Use a lifecycle policy to transition the files to S3 Glacier Flexible Retrieval after 1 year with a retention period of 7 years.
- D . Configure a DataSync task to transfer the files to S3 Standard-Infrequent Access (S3 Standard-IA). Use a lifecycle configuration to transition the files to S3 Glacier Deep Archive after 1 year with a retention period of 7 years.
A company recently migrated a data warehouse to AWS. The company has an AWS Direct Connect connection to AWS. Company users query the data warehouse by using a visualization tool. The average size of the queries that the data warehouse returns is 50 MB. The average visualization that the visualization tool produces is 500 KB in size. The result sets that the data warehouse returns are not cached.
The company wants to optimize costs for data transfers between the data warehouse and the company.
Which solution will meet this requirement?
- A . Host the visualization tool on premises. Connect to the data warehouse directly through the internet.
- B . Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the internet.
- C . Host the visualization tool on premises. Connect to the data warehouse through the Direct Connect connection.
- D . Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the Direct Connect connection.
D
Explanation:
Hosting the visualization tool in the same AWS Region as the data warehouse keeps the large 50 MB result sets inside the Region, where transfer between services is inexpensive. Only the much smaller visualizations (about 500 KB each, roughly one hundredth of the result-set size) travel back to the company, and sending that traffic over the existing Direct Connect connection is billed at a lower data transfer out rate than transfer over the internet. Options A and C move the full 50 MB result sets out of AWS for every query, and option B sends the visualizations over the more expensive internet path.
A company is using microservices to build an ecommerce application on AWS. The company wants to preserve customer transaction information after customers submit orders. The company wants to store transaction data in an Amazon Aurora database. The company expects sales volumes to vary throughout each year.
Which solution will meet these requirements?
- A . Use an Amazon API Gateway REST API to invoke an AWS Lambda function to send transaction data to the Aurora database. Send transaction data to an Amazon Simple Queue Service (Amazon SQS) queue that has a dead-letter queue. Use a second Lambda function to read from the SQS queue and to update the Aurora database.
- B . Use an Amazon API Gateway HTTP API to send transaction data to an Application Load Balancer (ALB). Use the ALB to send the transaction data to Amazon Elastic Container Service (Amazon ECS) on Amazon EC2. Use ECS tasks to store the data in Aurora database.
- C . Use an Application Load Balancer (ALB) to route transaction data to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon EKS to send the data to the Aurora database.
- D . Use Amazon Data Firehose to send transaction data to Amazon S3. Use AWS Database Migration Service (AWS DMS) to migrate the data from Amazon S3 to the Aurora database.
A
Explanation:
Analysis:
The solution must handle variable sales volumes, preserve transaction information, and store data in an Amazon Aurora database with minimal operational overhead. Using API Gateway, AWS Lambda, and Amazon SQS is the best option because it provides scalability, reliability, and resilience.
Why Option A is Correct:
API Gateway: Serves as an entry point for transaction data in a serverless, scalable manner.
AWS Lambda: Processes the transactions and sends them to Amazon SQS for queuing.
Amazon SQS: Buffers the transaction data, ensuring durability and resilience against spikes in transaction volume.
Second Lambda Function: Processes messages from the SQS queue and updates the Aurora database, decoupling the workflow for better scalability.
Dead-Letter Queue (DLQ): Ensures failed transactions are logged for later debugging or reprocessing.
Why Other Options Are Not Ideal:
Option B:
Using an ALB with ECS on EC2 introduces operational overhead, such as managing EC2 instances and scaling ECS tasks. Not cost-effective.
Option C:
EKS is highly operationally intensive and requires Kubernetes cluster management, which is unnecessary for this use case. Too complex.
Option D:
Amazon Data Firehose and DMS are not designed for real-time transactional workflows. They are better suited for data analytics pipelines. Not suitable.
AWS Reference: AWS Documentation for Amazon API Gateway, AWS Lambda, Amazon SQS, and Amazon Aurora.
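For illustration, a minimal sketch of the first Lambda function in option A: it accepts the order from API Gateway and enqueues it in Amazon SQS. The environment variable name is a hypothetical placeholder; a second function would consume the queue (with the dead-letter queue catching failed messages) and write to Aurora.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
# Queue URL supplied through a Lambda environment variable (hypothetical name).
QUEUE_URL = os.environ["TRANSACTIONS_QUEUE_URL"]


def handler(event, context):
    """Receives an order from API Gateway and enqueues it for the second Lambda function."""
    order = json.loads(event["body"])

    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(order),
    )

    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
```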
A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances. The EC2 instances are in an Auto Scaling group. The number of transactions can vary, but the baseline CPU utilization that is noted on each run is at least 60%. The company needs to provision the capacity 30 minutes before the jobs run.
Currently, engineers complete this task by manually modifying the Auto Scaling group parameters. The company does not have the resources to analyze the required capacity trends for the Auto Scaling group counts. The company needs an automated way to modify the Auto Scaling group’s desired capacity.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on the CPU utilization metric. Set the target value for the metric to 60%.
- B . Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.
- C . Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.
- D . Create an Amazon EventBridge event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group reaches 60%. Configure the Lambda function to increase the Auto Scaling group’s desired capacity and maximum capacity by 20%.
C
Explanation:
Predictive scaling in EC2 Auto Scaling uses machine learning to analyze historical load and forecast future capacity needs, then pre-scales capacity before the demand occurs. It is specifically designed for recurring, cyclical workloads such as weekly batch jobs.
In this scenario:
The workload pattern is known and recurring (weekly scripted jobs).
The company explicitly does not want to manually analyze trends.
They need capacity to be ready 30 minutes before jobs start.
Predictive scaling lets you:
Set the metric (CPU utilization) and target (60%).
Configure the policy to pre-launch instances 30 minutes in advance, satisfying the requirement automatically with minimal operational overhead.
Dynamic scaling (Option A) reacts after CPU increases, not before.
Scheduled scaling (Option B) requires manual analysis and ongoing adjustment of capacity values.
EventBridge + Lambda (Option D) introduces unnecessary custom logic and still reacts after CPU reaches 60%, not 30 minutes before.
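For illustration, a minimal boto3 sketch of such a predictive scaling policy; the Auto Scaling group and policy names are placeholders. A SchedulingBufferTime of 1800 seconds tells the policy to launch instances 30 minutes ahead of the forecasted demand.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling policy that targets 60% CPU utilization and pre-launches
# capacity 30 minutes (1800 seconds) before the forecasted demand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-jobs-asg",
    PolicyName="weekly-batch-predictive-scaling",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 60.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
        "SchedulingBufferTime": 1800,
    },
)
```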
A company sets up an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect must design a solution to provide access to the accounts for several thousand employees. The company has an existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS.
Which solution will meet these requirements?
- A . Create IAM users for the employees in the required AWS accounts. Connect IAM users to the existing IdP. Configure federated authentication for the IAM users.
- B . Set up AWS account root users with user email addresses and passwords that are synchronized from the existing IdP.
- C . Configure AWS IAM Identity Center. Connect IAM Identity Center to the existing IdP. Provision users and groups from the existing IdP.
- D . Use AWS Resource Access Manager (AWS RAM) to share access to the AWS accounts with the users in the existing IdP.
C
Explanation:
AWS IAM Identity Center:
IAM Identity Center provides centralized access management for multiple AWS accounts within an organization and integrates seamlessly with existing identity providers (IdPs) through SAML 2.0 federation.
It allows users to authenticate using their existing IdP credentials and gain access to AWS resources without the need to create and manage separate IAM users in each account.
IAM Identity Center also simplifies provisioning and de-provisioning users, as it can automatically synchronize users and groups from the external IdP to AWS, ensuring secure and managed access.
Integration with Existing IdP:
The solution involves configuring IAM Identity Center to connect to the company’s IdP using SAML. This setup allows employees to log in with their existing credentials, reducing the complexity of managing separate AWS credentials.
Once connected, IAM Identity Center handles authentication and authorization, granting users access to the AWS accounts based on their assigned roles and permissions.
Why the Other Options Are Incorrect:
Option A: Creating separate IAM users for each employee is not scalable or efficient. Managing thousands of IAM users across multiple AWS accounts introduces unnecessary complexity and operational overhead.
Option B: Using AWS root users with synchronized passwords is a security risk and goes against AWS best practices. Root accounts should never be used for day-to-day operations.
Option D: AWS Resource Access Manager (AWS RAM) is used for sharing AWS resources between accounts, not for federating access for users across accounts. It doesn’t provide a solution for authentication via an external IdP.
AWS Reference: AWS IAM Identity Center; SAML 2.0 Integration with AWS IAM Identity Center.
By setting up IAM Identity Center and connecting it to the existing IdP, the company can efficiently manage access for thousands of employees across multiple AWS accounts with a high degree of operational efficiency and security.
Therefore, Option C is the best solution.
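For illustration, a minimal boto3 sketch of granting a group (synchronized from the IdP) access to one member account through a permission set, after IAM Identity Center has been connected to the IdP. All ARNs and IDs are placeholders.

```python
import boto3

sso_admin = boto3.client("sso-admin")

# Assign a permission set to an IdP-synchronized group for one of the
# organization's member accounts. All identifiers below are placeholders.
sso_admin.create_account_assignment(
    InstanceArn="arn:aws:sso:::instance/ssoins-1234567890abcdef",
    TargetId="111122223333",          # one of the 10 member accounts
    TargetType="AWS_ACCOUNT",
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-1234567890abcdef/ps-0123456789abcdef",
    PrincipalType="GROUP",
    PrincipalId="a1b2c3d4-e5f6-7890-abcd-ef1234567890",  # group ID synced from the IdP
)
```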
A company runs a web application on Amazon EC2 instances. The application also uses an Amazon DynamoDB table. The application generates sporadic HTTP 500 errors. The DynamoDB table is operating in on-demand mode, and other applications use the table without any issues.
A solutions architect wants to resolve the HTTP 500 errors without disrupting the web application.
Which solution will meet these requirements?
- A . Configure DynamoDB to support larger write requests for increased throughput.
- B . Enable DynamoDB Streams to monitor changes in the table.
- C . Configure the application to use exponential backoff and retries to query the table.
- D . Configure the application to use strongly consistent reads.
C
Explanation:
Sporadic HTTP 500 errors in an application that depends on DynamoDB often indicate intermittent downstream failures (for example, transient throttling, temporary network issues, request timeouts, or service-side retryable exceptions). The prompt notes that the DynamoDB table is in on-demand mode and other applications use it without problems, which suggests the issue is not a sustained capacity shortfall or a systemic DynamoDB outage. Instead, the web application likely needs to handle occasional retryable failures more gracefully.
Option C is the best practice for improving reliability when calling AWS services: implement exponential backoff with retries. AWS SDKs commonly include retry behavior, but many systems still require tuning (for example, increasing maximum retries, adding jitter, ensuring idempotent operations where required, and retrying only on retryable errors). Exponential backoff reduces the chance of overwhelming a service during brief contention periods and helps the application recover quickly from transient errors without user-visible failures. This approach is also minimally disruptive because it is a configuration and logic improvement rather than an architectural migration.
Option A is not a standard DynamoDB configuration; DynamoDB has item size limits and throughput is managed via on-demand or provisioned capacity, not by “supporting larger write requests.”
Option B (Streams) provides a change data capture feed but does not prevent or mitigate request failures; it is for event processing, replication, or auditing.
Option D (strongly consistent reads) addresses stale reads, not HTTP 500 errors, and could increase read cost/RCU consumption without solving the root cause.
Therefore, adding exponential backoff and retries is the most appropriate and operationally sound way to reduce sporadic failures without disrupting the application’s architecture or performance.
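For illustration, a minimal sketch of option C in Python with boto3: the SDK's built-in retry configuration plus an explicit exponential backoff loop with jitter. The table name, key shape, and retry limits are hypothetical placeholders.

```python
import random
import time

import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

# Configure the SDK's built-in retry behavior (standard mode retries throttling
# and other transient errors with exponential backoff).
dynamodb = boto3.client(
    "dynamodb",
    config=Config(retries={"max_attempts": 10, "mode": "standard"}),
)


def query_with_backoff(table_name, key, max_retries=5):
    """Explicit exponential backoff with jitter around a GetItem call."""
    for attempt in range(max_retries):
        try:
            return dynamodb.get_item(TableName=table_name, Key=key)
        except ClientError as error:
            code = error.response["Error"]["Code"]
            retryable = code in ("ProvisionedThroughputExceededException",
                                 "ThrottlingException", "InternalServerError")
            if not retryable or attempt == max_retries - 1:
                raise
            # Sleep 2^attempt * 100 ms plus random jitter before retrying.
            time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))


# Example call; the table name and key are placeholders for the application's table.
result = query_with_backoff("orders", {"orderId": {"S": "12345"}})
```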
