Practice Free SAA-C03 Exam Online Questions
A company uses an Amazon CloudFront distribution to serve content pages for its website. The company needs to ensure that clients use a TLS certificate when accessing the company’s website. The company wants to automate the creation and renewal of the TLS certificates.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Use a CloudFront security policy to create a certificate.
- B . Use a CloudFront origin access control (OAC) to create a certificate.
- C . Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
- D . Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.
C
Explanation:
AWS Certificate Manager (ACM) issues and automatically renews public TLS certificates used by Amazon CloudFront. With DNS validation, ACM creates CNAME records that prove domain control; once validated, renewals occur automatically without manual approval, providing the lowest operational effort. For CloudFront, ACM certificates must be in the US East (N. Virginia) Region. CloudFront security policies (A) configure protocol/cipher requirements, not certificate issuance. OAC (B) secures origin access and is unrelated to certificate management. Email validation (D) requires mailbox approvals and operational handling at issuance/renewal, which is less efficient than DNS validation. Thus, ACM with DNS validation delivers automated creation and renewal with minimal overhead.
Reference: AWS Certificate Manager ― public certificates, DNS validation, automatic renewal; CloudFront ― using ACM certificates (us-east-1 requirement), TLS configuration.
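As an illustrative sketch (the domain names are hypothetical, not from the question), these are the kinds of parameters you would pass to ACM's RequestCertificate API, for example via the boto3 acm client in us-east-1, to obtain a DNS-validated, auto-renewing certificate:

```python
# Hypothetical sketch: parameters for ACM's RequestCertificate API
# (e.g., boto3 acm client, which must be created in us-east-1 for CloudFront).
# The domain names are placeholders.
request_params = {
    "DomainName": "www.example.com",             # hypothetical domain
    "SubjectAlternativeNames": ["example.com"],  # optional extra names
    "ValidationMethod": "DNS",                   # enables hands-off renewal
}

# ACM responds with a CNAME record to create in the domain's DNS zone.
# Once the record resolves, the certificate is issued, and ACM renews it
# automatically as long as the record stays in place.
```

Calling the API is only the creation step; the operational win is that no further action is needed at renewal time.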
A company is developing a containerized web application that needs to be highly available and scalable. The application requires access to GPU resources.
Which solution will meet these requirements?
- A . Package the application as an AWS Lambda function in a container image. Use Lambda to run the containerized application on a runtime with GPU access.
- B . Deploy the application container to Amazon Elastic Kubernetes Service (Amazon EKS). Use AWS Fargate to manage compute resources and access to GPU resources.
- C . Deploy the application container to Amazon Elastic Container Registry (Amazon ECR). Use Amazon ECR to run the containerized application with an attached GPU.
- D . Run the application on Amazon EC2 instances from a GPU instance family by using Amazon Elastic Container Service (Amazon ECS) for orchestration.
D
Explanation:
Why Option D is Correct:
GPU Access: Only EC2 instances from GPU instance families (for example, P3, P4, or G5) provide GPU resources.
ECS Orchestration: Amazon ECS simplifies container deployment and management on those instances.
Why Other Options Are Not Ideal:
Option A: Lambda does not support GPU-based runtimes.
Option B: AWS Fargate does not support GPU-based workloads.
Option C: ECR is a container registry, not an orchestration or execution service.
Reference: AWS Documentation ― Amazon ECS with GPU instances.
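To make the GPU reservation concrete, here is a hedged sketch of an ECS task definition fragment (the name, image URI, and sizes are placeholders) that requests one GPU for a container on the EC2 launch type:

```python
# Hypothetical sketch of an ECS task definition that reserves one GPU
# for a container. All names and the image URI are placeholders.
container_definition = {
    "name": "gpu-web-app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
    "memory": 8192,
    "resourceRequirements": [
        {"type": "GPU", "value": "1"}  # ECS pins the task to a GPU instance
    ],
}

task_definition = {
    "family": "gpu-app",
    "requiresCompatibilities": ["EC2"],  # Fargate does not offer GPUs
    "containerDefinitions": [container_definition],
}
```

ECS uses the GPU resource requirement to place the task only on container instances from a GPU instance family that have unreserved GPUs.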
A company has a production Amazon RDS for MySQL database. The company needs to create a new application that will read frequently changing data from the database with minimal impact on the database’s overall performance. The application will rarely perform the same query more than once.
What should a solutions architect do to meet these requirements?
- A . Set up an Amazon ElastiCache cluster. Query the results in the cluster.
- B . Set up an Application Load Balancer (ALB). Query the results in the ALB.
- C . Set up a read replica for the database. Query the read replica.
- D . Set up querying of database snapshots. Query the database snapshots.
C
Explanation:
Amazon RDS read replicas provide a way to offload read traffic from the primary database, allowing read-intensive applications to query the replica without impacting the performance of the production (write) database. This is especially effective for workloads that involve frequently changing data but do not benefit from caching, since queries are rarely repeated.
Reference Extract from AWS Documentation / Study Guide:
"Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads."
Source: AWS Certified Solutions Architect ― Official Study Guide, RDS Read Replica section.
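As a hedged sketch (identifiers and the instance class are placeholders), these are the kinds of parameters you would pass to the RDS CreateDBInstanceReadReplica API, for example via the boto3 rds client:

```python
# Hypothetical sketch: parameters for RDS CreateDBInstanceReadReplica.
# All identifiers are placeholders, not values from the question.
replica_params = {
    "DBInstanceIdentifier": "prod-mysql-replica-1",  # name for the new replica
    "SourceDBInstanceIdentifier": "prod-mysql",      # the existing primary
    "DBInstanceClass": "db.r6g.large",               # may differ from primary
}

# The new application then points its read-only connection string at the
# replica's endpoint, keeping its query load off the primary database.
```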
An advertising company stores terabytes of data in an Amazon S3 data lake. The company wants to build its own foundation model (FM) and has deployed a training cluster on AWS. The company loads file-based data from Amazon S3 to the training cluster to train the FM. The company wants to reduce data loading time to optimize the overall deployment cycle.
The company needs a storage solution that is natively integrated with Amazon S3. The solution must be scalable and provide high throughput.
Which storage solution will meet these requirements?
- A . Mount an Amazon Elastic File System (Amazon EFS) file system to the training cluster. Use AWS DataSync to migrate data from Amazon S3 to the EFS file system to train the FM.
- B . Use an Amazon FSx for Lustre file system and Amazon S3 with a Data Repository Association (DRA). Preload the data from Amazon S3 to the Lustre file system to train the FM.
- C . Attach Amazon Elastic Block Store (Amazon EBS) volumes to the training cluster. Load the data from Amazon S3 to the EBS volumes to train the FM.
- D . Use AWS DataSync to migrate the data from Amazon S3 to the training cluster as files. Train the FM on the local file-based data.
B
Explanation:
Amazon FSx for Lustre is a high-performance parallel file system designed for machine learning and HPC that can be linked to Amazon S3 via Data Repository Associations (DRA). With DRA, you can import S3 objects as files (lazy or preloaded) and export results back to S3, providing very high throughput and low-latency POSIX access on the training cluster. This S3-native integration minimizes data loading overhead and accelerates training cycles at scale. EFS (A) is general-purpose and lower throughput per TB than Lustre. EBS (C) requires manual copy and does not scale as a shared parallel filesystem. DataSync alone (D) is a transfer tool, not a high-throughput training filesystem. FSx for Lustre with S3 DRA best satisfies scalability, throughput, and native S3 integration for rapid ML training.
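A minimal sketch of the S3 linkage, assuming hypothetical file system and bucket names: these parameters mirror the FSx CreateDataRepositoryAssociation API call that links a Lustre path to an S3 prefix:

```python
# Hypothetical sketch: parameters for FSx CreateDataRepositoryAssociation,
# linking a path in the Lustre file system to an S3 prefix.
# The file system ID and bucket name are placeholders.
dra_params = {
    "FileSystemId": "fs-0123456789abcdef0",   # placeholder FSx for Lustre ID
    "FileSystemPath": "/training-data",       # mount path inside Lustre
    "DataRepositoryPath": "s3://example-datalake/training/",  # hypothetical bucket
    "S3": {
        # Keep the file system in sync with new, changed, and deleted objects
        "AutoImportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
    },
}
```

After the association is created, the training cluster sees S3 objects as POSIX files under /training-data and can preload them for full-throughput access before training starts.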
A company runs an application on Amazon EC2 instances that have instance store volumes attached. The application uses Amazon Elastic File System (Amazon EFS) to store files that are shared across a cluster of Linux servers. The shared files are at least 1 GB in size.
The company accesses the files often for the first 7 days after creation. The files must remain readily available after the first 7 days.
The company wants to optimize costs for the application.
Which solution will meet these requirements?
- A . Configure an AWS Storage Gateway Amazon S3 File Gateway to cache frequently accessed files locally. Store older files in Amazon S3.
- B . Move the files from Amazon EFS, and store the files locally on each EC2 instance.
- C . Configure a lifecycle policy to move the files to the EFS Infrequent Access (IA) storage class after 7 days.
- D . Deploy AWS DataSync to automatically move files older than 7 days to Amazon S3 Glacier Deep Archive.
C
Explanation:
Amazon EFS Lifecycle Management enables automatic cost optimization by transitioning files that haven’t been accessed for a defined period (e.g., 7 days) from EFS Standard to EFS Infrequent Access (IA).
“Amazon EFS Lifecycle Management automatically moves files that haven’t been accessed for a set period to the EFS Infrequent Access storage class, reducing storage costs for infrequently accessed files.”
― Amazon EFS Documentation
Key Points:
EFS IA is ideal for files larger than 128 KB that are accessed less frequently. The transition is seamless, requires no application changes or extra tools, and files in EFS IA remain readily available, which satisfies both the cost-optimization and availability requirements.
Incorrect Options:
A: File Gateway adds unnecessary complexity and does not use EFS.
B: Storing files locally breaks shared access and resiliency.
D: Glacier Deep Archive is cold storage ― not "readily available."
Reference: Amazon EFS Lifecycle Management; EFS Infrequent Access (IA) storage class.
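As an illustrative sketch (the file system ID is a placeholder), these parameters mirror the EFS PutLifecycleConfiguration API call that implements the 7-day transition:

```python
# Hypothetical sketch: parameters for EFS PutLifecycleConfiguration.
# The file system ID is a placeholder.
lifecycle_params = {
    "FileSystemId": "fs-0123456789abcdef0",
    "LifecyclePolicies": [
        {"TransitionToIA": "AFTER_7_DAYS"},  # move cold files to EFS IA
        # Optionally move a file back to Standard the first time it is read:
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
}
```

With this policy in place, no application changes are needed: files transparently move to the cheaper IA class after 7 idle days while staying on the same EFS mount paths.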
A company wants to migrate an application to AWS. The application runs on Docker containers behind an Application Load Balancer (ALB). The application stores data in a PostgreSQL database. The cloud-based solution must use AWS WAF to inspect all application traffic. The application experiences most traffic on weekdays. There is significantly less traffic on weekends.
Which solution will meet these requirements in the MOST cost-effective way?
- A . Use a Network Load Balancer (NLB). Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the NLB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon RDS for PostgreSQL as the database.
- B . Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the ALB. Run the application on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon RDS for PostgreSQL as the database.
- C . Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the ALB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon Aurora Serverless as the database.
- D . Use a Network Load Balancer (NLB). Create a web access control list (web ACL) in AWS WAF that has the necessary rules. Attach the web ACL to the NLB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon Aurora Serverless as the database.
C
Explanation:
Using an Application Load Balancer (ALB) allows for integration with AWS WAF to inspect all incoming traffic. Running the application on Amazon ECS provides a scalable and managed container orchestration service. Utilizing Amazon Aurora Serverless for the PostgreSQL database offers automatic scaling based on application demand, which is cost-effective for workloads with variable traffic patterns, such as higher traffic on weekdays and lower traffic on weekends.
Reference: Optimizing cost savings: The advantage of Amazon Aurora over self-managed open-source databases
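To illustrate the scaling behavior, here is a hedged sketch of the Serverless v2 portion of an RDS CreateDBCluster call (the identifier and capacity values are assumed examples chosen for this weekday/weekend pattern, not values from the question):

```python
# Hypothetical sketch: the scaling portion of an RDS CreateDBCluster call
# for Aurora PostgreSQL with Serverless v2 capacity. Values are assumptions.
cluster_params = {
    "DBClusterIdentifier": "app-postgres",  # placeholder
    "Engine": "aurora-postgresql",
    "ServerlessV2ScalingConfiguration": {
        "MinCapacity": 0.5,  # ACUs retained during quiet weekends
        "MaxCapacity": 8.0,  # ACUs available for weekday peaks
    },
}
```

Because the cluster bills for the Aurora capacity units (ACUs) actually consumed, the database cost tracks the weekday/weekend traffic curve instead of a fixed instance size.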
A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon EKS). The application includes microservices that manage customers and place orders. The company needs to route incoming requests to the appropriate microservices.
Which solution will meet this requirement MOST cost-effectively?
- A . Use the AWS Load Balancer Controller to provision a Network Load Balancer.
- B . Use the AWS Load Balancer Controller to provision an Application Load Balancer.
- C . Use an AWS Lambda function to connect the requests to Amazon EKS.
- D . Use Amazon API Gateway to connect the requests to Amazon EKS.
B
Explanation:
Routing requests to the “appropriate microservices” implies Layer 7 (HTTP/HTTPS) routing such as host-based or path-based rules (for example, /customers to one service and /orders to another). In EKS, the most direct and cost-efficient way to provide this style of routing is to use Kubernetes Ingress with the AWS Load Balancer Controller to provision an Application Load Balancer (ALB). An ALB natively supports HTTP/HTTPS request routing features (paths, hosts, headers), which map cleanly to microservices patterns and reduce the need for additional infrastructure components.
A Network Load Balancer (NLB) (Option A) is primarily Layer 4 and is best for TCP/UDP/TLS pass-through and ultra-high performance. It does not provide the native HTTP path-based routing behavior required to steer requests between microservices based on URL structure, so it often forces you to add another routing layer (such as an in-cluster ingress gateway), which increases cost and operational complexity.
Option C is not an appropriate request routing mechanism; Lambda is compute, not an ingress/routing layer for EKS.
Option D (API Gateway) can front services, but for simple microservice routing into EKS it typically adds per-request costs and additional configuration compared with using an ALB that is purpose-built for HTTP load balancing and routing. API Gateway is a great fit for API management features (throttling, usage plans, keys, transformations), but those are not requirements here.
Therefore, B is the most cost-effective and architecturally aligned solution for HTTP routing across EKS-hosted microservices using managed AWS-native ingress integration.
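For illustration, the path-based routing described above maps to a Kubernetes Ingress like the one below, expressed here as a Python dict for consistency (service names are hypothetical). The AWS Load Balancer Controller reconciles this object into an ALB with one rule per path:

```python
# Hypothetical sketch of a Kubernetes Ingress (as a Python dict) that the
# AWS Load Balancer Controller turns into an ALB with path-based rules.
# Service names and ports are placeholders.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {
        "name": "microservices",
        "annotations": {
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
            "alb.ingress.kubernetes.io/target-type": "ip",
        },
    },
    "spec": {
        "ingressClassName": "alb",
        "rules": [{
            "http": {
                "paths": [
                    {"path": "/customers", "pathType": "Prefix",
                     "backend": {"service": {"name": "customers-svc",   # hypothetical
                                             "port": {"number": 80}}}},
                    {"path": "/orders", "pathType": "Prefix",
                     "backend": {"service": {"name": "orders-svc",      # hypothetical
                                             "port": {"number": 80}}}},
                ]
            }
        }],
    },
}
```

One ALB serves both microservices, so there is a single load balancer to pay for rather than one per service.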
A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in Amazon S3. This content is the same for all users.
The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users while reducing the load on the origin.
Which solution meets these requirements MOST cost-effectively?
- A . Deploy an AWS Global Accelerator accelerator in front of the web servers.
- B . Deploy an Amazon CloudFront web distribution in front of the S3 bucket.
- C . Deploy an Amazon ElastiCache (Redis OSS) instance in front of the web servers.
- D . Deploy an Amazon ElastiCache (Memcached) instance in front of the web servers.
B
Explanation:
Amazon CloudFront is a highly cost-effective CDN that caches content like images and videos at edge locations globally. This reduces latency and the load on the origin S3 bucket. It is ideal for static content that is accessed by many users.
Reference: AWS Documentation ― Amazon CloudFront with S3 Integration
A company wants to migrate an on-premises video processing application to AWS. Processing times range from 5–30 minutes. The application must run multiple jobs in parallel. The application processes videos that users upload to an Amazon S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure the S3 bucket to send S3 event notifications to an Amazon SQS standard queue. Deploy the application on an Amazon ECS cluster. Configure automatic scaling for AWS Fargate tasks based on the SQS queue size.
- B . Configure the S3 bucket to send S3 event notifications to an Amazon SQS FIFO queue. Deploy the application on Amazon EC2 instances. Create an Auto Scaling group to scale based on the SQS queue size.
- C . Configure the S3 bucket to send S3 event notifications to an Amazon SQS standard queue. Deploy the application as an AWS Lambda function. Configure the Lambda function to poll the SQS queue.
- D . Configure the S3 bucket to send S3 event notifications to an Amazon SNS topic. Deploy the application as an AWS Lambda function. Configure the SNS topic to invoke the Lambda function.
A
Explanation:
The correct answer is A because the application processes videos for 5–30 minutes, must run multiple jobs in parallel, and should have the least operational overhead. Amazon ECS with AWS Fargate is a managed container solution that removes the need to provision and manage EC2 instances while still supporting longer-running parallel jobs. Using Amazon SQS standard queues decouples the upload event from processing and provides a scalable buffer for incoming work.
With this design, Amazon S3 sends object-created notifications to the SQS queue whenever users upload videos. ECS services or tasks on Fargate can then consume messages from the queue and process videos independently. Auto scaling based on SQS queue depth ensures the system increases task count when more videos arrive and decreases capacity when demand drops. This provides elasticity and efficient parallel processing with minimal infrastructure management.
Option B is incorrect because EC2-based scaling introduces more operational overhead than Fargate.
Option C is incorrect because AWS Lambda has a maximum execution duration of 15 minutes, which is not appropriate for jobs that can run up to 30 minutes.
Option D is also incorrect for the same reason; Lambda is not the best fit for this processing duration, and SNS does not provide the same durable queued work pattern as SQS for controlled parallel processing.
AWS best practices recommend using event-driven queues with containerized workers for medium-duration processing jobs that need concurrency and minimal management.
Therefore, S3 + SQS + ECS on Fargate with queue-based scaling is the best solution.
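The queue-based scaling step can be sketched as backlog-per-task target tracking. The function below is a simplified illustration; the target of 10 messages per task and the min/max bounds are assumptions, not values from the question:

```python
import math

# Illustrative sketch of queue-depth-based scaling: the desired Fargate
# task count is the SQS backlog divided by the backlog each task can
# work through, clamped to configured minimum and maximum task counts.
def desired_task_count(queue_depth: int, backlog_per_task: int,
                       min_tasks: int = 1, max_tasks: int = 50) -> int:
    wanted = math.ceil(queue_depth / backlog_per_task)
    return max(min_tasks, min(wanted, max_tasks))

# For example, 120 queued videos with each task expected to handle
# a backlog of 10 yields 12 tasks.
```

In practice, Application Auto Scaling performs this calculation for the ECS service from the SQS ApproximateNumberOfMessagesVisible metric, so no custom code is required; the sketch only shows the arithmetic behind the policy.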
A machine learning (ML) team is building an application that uses data that is in an Amazon S3 bucket. The ML team needs a storage solution for its model training workflow on AWS. The ML team requires high-performance storage that supports frequent access to training datasets. The storage solution must integrate natively with Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon Elastic Block Store (Amazon EBS) volumes to provide high-performance storage. Use AWS DataSync to migrate data from the S3 bucket to EBS volumes.
- B . Use Amazon EC2 ML instances to provide high-performance storage. Store training data on Amazon EBS volumes. Use the S3 Copy API to copy data from the S3 bucket to EBS volumes.
- C . Use Amazon FSx for Lustre to provide high-performance storage. Store training datasets in Amazon S3 Standard storage.
- D . Use Amazon EMR to provide high-performance storage. Store training datasets in Amazon S3 Glacier Instant Retrieval storage.
C
Explanation:
Amazon FSx for Lustre is a high-performance file system optimized for fast processing of workloads such as machine learning, high-performance computing (HPC), and video processing. It integrates natively with Amazon S3, allowing you to:
Access S3 Data: FSx for Lustre can be linked to an S3 bucket, presenting S3 objects as files in the file system.
High Performance: It provides sub-millisecond latencies, high throughput, and millions of IOPS, which are ideal for ML workloads.
Minimal Operational Overhead: Being a fully managed service, it reduces the complexity of setting up and managing high-performance file systems.
Reference: Amazon FSx for Lustre ― High-Performance File System Integrated with S3.
What is Amazon FSx for Lustre?
