Practice Free SAA-C03 Exam Online Questions
A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.
The company wants to optimize costs to archive data.
Which solution will meet this requirement?
- A . Create an AWS Glue crawler to export data to Amazon S3. Create an AWS Lambda function to compress the data.
- B . Use the SELECT INTO OUTFILE S3 query on the Aurora database to export the data to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.
- C . Create an AWS Glue DataBrew job to migrate data from Aurora to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.
- D . Use the AWS Schema Conversion Tool (AWS SCT) to replicate data from Aurora to Amazon S3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
B
Explanation:
Amazon Aurora MySQL supports the SELECT INTO OUTFILE S3 SQL syntax to export query results directly to Amazon S3. This is an efficient and low-overhead method for archiving data.
Once data is in S3, Lifecycle rules can be configured to automatically transition older data to lower-cost storage classes (such as S3 Glacier) or delete it after a defined period, providing a cost-optimized and automated archive solution.
The Glue-based options involve more services and operational overhead. SCT is intended for database migrations, not for periodic data archival.
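For illustration, the export itself can be run as a regular SQL statement against the Aurora MySQL cluster. The sketch below uses the pymysql library with placeholder endpoint, credential, table, and bucket names; it assumes the cluster already has an IAM role that allows writing to the target S3 bucket (set through the aurora_select_into_s3_role or aws_default_s3_role cluster parameter).

```python
# Hedged sketch: export rows older than five years from Aurora MySQL to Amazon S3
# using SELECT ... INTO OUTFILE S3. All names and credentials are placeholders.
import pymysql

connection = pymysql.connect(
    host="my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="example-password",
    database="orders_db",
)

export_sql = """
    SELECT *
    FROM inventory
    WHERE updated_at < DATE_SUB(CURDATE(), INTERVAL 5 YEAR)
    INTO OUTFILE S3 's3://example-archive-bucket/inventory-archive/export'
    FORMAT CSV HEADER;
"""

with connection.cursor() as cursor:
    # Aurora writes the result set directly to S3; no rows are returned to the client.
    cursor.execute(export_sql)

connection.close()
```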
A company wants to deploy an AWS Lambda function that will read and write objects in an Amazon S3 bucket. The Lambda function must be connected to the company’s VPC. The company must deploy the Lambda function only to private subnets in the VPC. The Lambda function must not be allowed to access the internet.
Which solutions will meet these requirements? (Select TWO.)
- A . Create a private NAT gateway to access the S3 bucket.
- B . Attach an Elastic IP address to the NAT gateway.
- C . Create a gateway VPC endpoint for the S3 bucket.
- D . Create an interface VPC endpoint for the S3 bucket.
- E . Create a public NAT gateway to access the S3 bucket.
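For context on options C and D: a gateway VPC endpoint keeps S3 traffic from private subnets on the AWS network with no internet path, and an interface VPC endpoint for S3 provides similar private access through elastic network interfaces. The sketch below shows the gateway variant with boto3; the VPC ID, route table ID, and Region are placeholders.

```python
# Illustrative sketch: create a gateway VPC endpoint for Amazon S3 so that resources
# in private subnets, such as a VPC-connected Lambda function, can reach S3 without
# any internet path. IDs and Region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    # The endpoint adds an S3 prefix-list route to the listed route tables, so traffic
    # from the private subnets stays on the AWS network.
    RouteTableIds=["rtb-0123456789abcdef0"],
)

print(response["VpcEndpoint"]["VpcEndpointId"])
```

An interface endpoint would instead use VpcEndpointType="Interface" together with SubnetIds and security groups.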
A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.
What is the MOST cost-effective method to establish this type of connection?
- A . Implement a client VPN
- B . Implement AWS Direct Connect.
- C . Implement a bastion host on Amazon EC2.
- D . Implement an AWS Site-to-Site VPN connection.
D
Explanation:
AWS Site-to-Site VPN: This provides a secure and encrypted connection between an on-premises environment and AWS. It is a cost-effective solution suitable for low bandwidth and small traffic needs.
Quick Setup:
Site-to-Site VPN can be quickly set up by configuring a virtual private gateway on the AWS side and a customer gateway on the on-premises side.
It uses the standard IPsec protocol to establish the VPN tunnels.
Cost-Effectiveness: Compared to AWS Direct Connect, which requires dedicated physical connections and higher setup costs, a Site-to-Site VPN is less expensive and easier to implement for smaller traffic requirements.
Reference: AWS Site-to-Site VPN
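A minimal sketch of these building blocks with boto3 is shown below; the on-premises public IP address, BGP ASN, and VPC ID are placeholder values.

```python
# Hedged sketch of the Site-to-Site VPN components described above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway represents the on-premises VPN device.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",
    BgpAsn=65000,
)["CustomerGateway"]

# Virtual private gateway on the AWS side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw["VpnGatewayId"])

# The VPN connection establishes two redundant IPsec tunnels over the internet.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": False},
)["VpnConnection"]

print(vpn["VpnConnectionId"])
```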
A company is developing a content sharing platform that currently handles 500 GB of user-generated media files. The company expects the amount of content to grow significantly in the future. The company needs a storage solution that can automatically scale, provide high durability, and allow direct user uploads from web browsers.
- A . Store the data in an Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled.
- B . Store the data in an Amazon Elastic File System (Amazon EFS) Standard file system.
- C . Store the data in an Amazon S3 Standard bucket.
- D . Store the data in an Amazon S3 Express One Zone bucket.
C
Explanation:
Amazon S3 Standard provides virtually unlimited scalability, high durability (11 nines), and millisecond latency. It is designed for storing large volumes of unstructured content such as media files. S3 also supports pre-signed URLs and direct browser uploads, enabling users to upload files securely without passing through backend servers. EBS volumes (A) are block storage, limited to a single AZ, and not suitable for web-scale storage. EFS (B) is a shared file system for POSIX workloads, not for direct browser uploads. S3 Express One Zone (D) offers higher performance for small objects but does not provide cross-AZ durability, making it unsuitable for growing global content.
Therefore, option C is the most scalable, durable, and cost-effective solution.
Reference:
• Amazon S3 User Guide ― Direct browser uploads and durability
• AWS Well-Architected Framework ― Performance Efficiency Pillar
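As a hedged sketch of the direct-browser-upload pattern, the backend can issue a pre-signed POST so the browser uploads straight to S3 without the file passing through application servers. The bucket name, object key, and size limit below are placeholders.

```python
# Illustrative sketch: generate a pre-signed POST that a web client can use to upload
# a media file directly to Amazon S3. Names and limits are placeholders.
import boto3

s3 = boto3.client("s3")

presigned = s3.generate_presigned_post(
    Bucket="example-media-bucket",
    Key="uploads/user-123/photo.jpg",
    Conditions=[
        ["content-length-range", 0, 100 * 1024 * 1024],  # cap uploads at 100 MB
    ],
    ExpiresIn=3600,  # the browser must submit the form fields within one hour
)

# 'url' and 'fields' are returned to the web client, which submits a multipart/form-data
# POST directly to S3 with these values.
print(presigned["url"], presigned["fields"])
```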
A company is migrating a Linux-based web server group to AWS. The web servers must access shared files by using the NFS protocol. The company must not make any changes to the web server application.
Which solution will meet these requirements?
- A . Create an Amazon S3 bucket to store the shared files in S3 Standard. Grant the S3 bucket access to the web servers.
- B . Configure an Amazon CloudFront distribution. Set an Amazon S3 bucket as the origin. Store the shared files in the S3 bucket.
- C . Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the web servers.
- D . Create an Amazon FSx for Windows File Server file system. Configure SMB protocol access for the web servers.
C
Explanation:
The key requirements are shared file access, NFS protocol compatibility, and no application changes. Amazon Elastic File System (EFS) is a fully managed, scalable file system that natively supports the NFS protocol, making it an ideal drop-in replacement for on-premises shared file systems used by Linux applications.
Option C allows the existing web servers to mount the EFS file system using standard NFS mount commands, preserving application behavior and avoiding code changes. EFS is designed to be accessed concurrently by multiple EC2 instances across Availability Zones, providing high availability and elasticity without manual capacity management. This aligns well with typical web server architectures that rely on shared content or assets.
Options A and B use Amazon S3, which is an object storage service and does not support NFS semantics. Migrating to S3 would require application changes to use object-based APIs instead of file system operations.
Option D uses Amazon FSx for Windows File Server, which supports SMB, not NFS, and is intended for Windows-based workloads.
Therefore, C is the correct solution because Amazon EFS provides NFS compatibility, shared access, high availability, and minimal operational overhead while requiring no changes to the existing Linux-based web server applications.
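For illustration, the sketch below provisions the shared file system with boto3 and notes the standard NFS mount command; the subnet and security group IDs are placeholders, and one mount target is typically created in each Availability Zone that hosts web servers.

```python
# Hedged sketch: create an EFS file system and a mount target for the web servers.
# All IDs and names are placeholders.
import boto3

efs = boto3.client("efs", region_name="us-east-1")

file_system = efs.create_file_system(
    CreationToken="web-shared-content",
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "web-shared-content"}],
)

efs.create_mount_target(
    FileSystemId=file_system["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# On each web server, the file system is then mounted with a standard NFSv4.1 command,
# for example:
#   sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/shared
# so no application code changes are required.
print(file_system["FileSystemId"])
```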
A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.
The company wants to optimize costs to archive data.
- A . Create an AWS Glue crawler to export data to Amazon S3. Create an AWS Lambda function to compress the data.
- B . Use the SELECT INTO OUTFILE S3 query on the Aurora database to export the data to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.
- C . Create an AWS Glue DataBrew Job to migrate data from Aurora to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.
- D . Use the AWS Schema Conversion Tool (AWS SCT) to replicate data from Aurora to Amazon S3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
B
Explanation:
The SELECT INTO OUTFILE S3 feature allows you to export Amazon Aurora MySQL data directly to Amazon S3 with minimal operational overhead. This method is efficient and cost-effective for archiving historical data.
You can configure S3 Lifecycle rules to transition the exported data to lower-cost storage (e.g., S3 Glacier or S3 Standard-IA) and eventually delete it after 5 years.
No need for additional ETL tools like Glue or DataBrew unless complex transformations are required.
Reference: Exporting data from Aurora MySQL to S3
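A minimal sketch of such a Lifecycle configuration with boto3 follows, assuming the Aurora export lands under a placeholder prefix: objects transition to S3 Glacier after 90 days and expire after roughly 5 years.

```python
# Illustrative sketch of the S3 Lifecycle rule described above. Bucket name and prefix
# are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "inventory-archive/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 1825},  # roughly 5 years
            }
        ]
    },
)
```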
A company is building a serverless application to process orders from an e-commerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.
- A . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.
- B . Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.
- C . Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.
- D . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.
B
Explanation:
Key Requirements:
Serverless architecture.
Handle traffic bursts with high availability.
Process orders asynchronously in the order they are received.
Option A: Amazon SNS delivers messages to subscribers. However, SNS does not ensure ordering, making it unsuitable for FIFO (First In, First Out) requirements.
Option B: Amazon SQS FIFO queues support ordering and ensure messages are delivered exactly once. AWS Lambda functions can be triggered by SQS to process messages asynchronously and efficiently. This satisfies all requirements.
Option C: Amazon SQS standard queues do not guarantee message order and have "at-least-once" delivery, making them unsuitable for the FIFO requirement.
Option D: Similar to Option A, SNS does not ensure message ordering, and using AWS Batch adds complexity without directly addressing the requirements.
Reference: Amazon SQS FIFO Queues
AWS Lambda and SQS Integration
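For illustration, the sketch below creates the FIFO queue, sends an order message, and wires the queue to a Lambda function with boto3; the queue name, function name, and account ID are placeholders.

```python
# Hedged sketch of the queue and trigger described above. All names are placeholders.
import json
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# FIFO queue names must end in ".fifo". Content-based deduplication lets SQS derive
# the deduplication ID from a hash of the message body.
queue = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)

# Messages that share a MessageGroupId are delivered in order.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody=json.dumps({"orderId": "1001", "items": 3}),
    MessageGroupId="orders",
)

# Event source mapping triggers the Lambda function from the queue.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:orders.fifo",
    FunctionName="process-orders",
    BatchSize=10,
)
```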
A company is migrating some workloads to AWS. However, many workloads will remain on premises. The on-premises workloads require secure and reliable connectivity to AWS with consistent, low-latency performance.
The company has deployed the AWS workloads across multiple AWS accounts and multiple VPCs.
The company plans to scale to hundreds of VPCs within the next year.
The company must establish connectivity between each of the VPCs and from the on-premises environment to each VPC.
Which solution will meet these requirements?
- A . Use an AWS Direct Connect connection to connect the on-premises environment to AWS. Configure VPC peering to establish connectivity between VPCs.
- B . Use multiple AWS Site-to-Site VPN connections to connect the on-premises environment to AWS. Create a transit gateway to establish connectivity between VPCs.
- C . Use an AWS Direct Connect connection with a Direct Connect gateway to connect the on-premises environment to AWS. Create a transit gateway to establish connectivity between VPCs. Associate the transit gateway with the Direct Connect gateway.
- D . Use an AWS Site-to-Site VPN connection to connect the on-premises environment to AWS. Configure VPC peering to establish connectivity between VPCs.
C
Explanation:
The optimal solution for scalable and resilient hybrid networking is to use AWS Direct Connect with a Direct Connect gateway for secure, low-latency access to AWS, and an AWS Transit Gateway to manage connectivity among hundreds of VPCs.
By associating the Transit Gateway with the Direct Connect gateway, you enable transitive routing between on-premises and all VPCs, while minimizing network complexity and maintaining high performance.
VPC peering does not scale well, and VPNs don’t offer the same performance or consistency.
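The sketch below outlines these hub components with boto3; the ASNs and names are placeholders, and the physical Direct Connect connection and transit virtual interface are assumed to already exist.

```python
# Illustrative sketch of the transit gateway and Direct Connect gateway association.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
dx = boto3.client("directconnect", region_name="us-east-1")

# Transit gateway acts as the regional hub that the VPCs attach to.
tgw = ec2.create_transit_gateway(
    Description="hub for hundreds of VPCs",
    Options={"AmazonSideAsn": 64512},
)["TransitGateway"]

# Direct Connect gateway terminates the on-premises side.
dx_gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="on-prem-dx-gateway",
    amazonSideAsn=64513,
)["directConnectGateway"]

# Associating the two lets on-premises prefixes reach every VPC attached to the
# transit gateway.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dx_gateway["directConnectGatewayId"],
    gatewayId=tgw["TransitGatewayId"],
)
```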
A solutions architect needs to build a log storage solution for a client. The client has an application that produces user activity logs that track user API calls to the application. The application typically produces 50 GB of logs each day. The client needs a storage solution that makes the logs available for occasional querying and analytics.
- A . Store user activity logs in an Amazon S3 bucket. Use Amazon Athena to perform queries and analytics.
- B . Store user activity logs in an Amazon OpenSearch Service cluster. Use OpenSearch Dashboards to perform queries and analytics.
- C . Store user activity logs in an Amazon RDS instance. Use an Open Database Connectivity (ODBC) connector to perform queries and analytics.
- D . Store user activity logs in an Amazon CloudWatch Logs log group. Use CloudWatch Logs Insights to perform queries and analytics.
A
Explanation:
For infrequent or ad hoc querying of log data, Amazon S3 + Amazon Athena provides the most cost-effective, serverless, and scalable analytics solution.
From AWS Documentation:
“Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.”
(Source: Amazon Athena User Guide)
Why A is correct:
Amazon S3 offers durable, scalable, and cost-efficient storage.
Athena allows SQL-based querying on structured or semi-structured data like logs.
No need to provision or manage infrastructure.
Ideal for occasional querying at low cost.
Why the others are not optimal:
Option B: OpenSearch adds cost and is best for frequent, low-latency log querying.
Option C: RDS is not optimized for large-scale write-heavy log ingestion and costs more.
Option D: CloudWatch Logs is suitable for real-time monitoring, not for long-term storage and analytics of large log volumes.
Reference: Amazon Athena User Guide
AWS Well-Architected Framework ― Cost Optimization Pillar
Amazon S3 Storage Classes and Pricing Guide
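For illustration, an occasional ad hoc query could be submitted with boto3 as sketched below, assuming a Glue Data Catalog table named user_activity_logs has already been defined over the log prefix; the database, table, column, and result bucket names are placeholders.

```python
# Hedged sketch: run an ad hoc Athena query over log data stored in S3.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
    SELECT api_action, COUNT(*) AS calls
    FROM user_activity_logs
    WHERE log_date = DATE '2025-01-15'
    GROUP BY api_action
    ORDER BY calls DESC
    LIMIT 20
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "logs_db"},
    # Athena is serverless; results are written to the S3 output location and billing
    # is based on the amount of data each query scans.
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

print(execution["QueryExecutionId"])
```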
A healthcare company is designing a system to store and manage logs in the AWS Cloud. The system ingests and stores logs in JSON format that contain sensitive patient information. The company must identify any sensitive data and must be able to search the log data by using SQL queries.
Which solution will meet these requirements?
- A . Store the logs in an Amazon S3 bucket. Configure Amazon Macie to discover sensitive data. Use Amazon Athena to query the logs.
- B . Store the logs in an Amazon EBS volume. Create an application that uses Amazon SageMaker AI to detect sensitive data. Use Amazon RDS to query the logs.
- C . Store the logs in Amazon DynamoDB. Use AWS KMS to discover sensitive data. Use Amazon Redshift Spectrum to query the logs.
- D . Store the logs in an Amazon S3 bucket. Use Amazon Inspector to discover sensitive data. Use Amazon Athena to query the logs.
A
Explanation:
AWS documentation states that Amazon Macie is the managed service designed to automatically identify and classify sensitive data stored in Amazon S3, including PII and healthcare-related identifiers.
Storing logs in Amazon S3 provides scalable, durable storage, and Amazon Athena can directly query JSON data stored in S3 using SQL.
Amazon Inspector (Option D) is for vulnerability scanning and does not identify sensitive data. DynamoDB with KMS (Option C) cannot detect sensitive information. EBS (Option B) requires custom tooling and does not support serverless SQL querying.
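As a hedged sketch, Macie can be pointed at the log bucket with a one-time classification job as shown below; the account ID and bucket name are placeholders, and Macie is assumed to already be enabled in the account.

```python
# Illustrative sketch: create a one-time Macie classification job over the log bucket.
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

job = macie.create_classification_job(
    jobType="ONE_TIME",
    name="patient-log-sensitive-data-scan",
    s3JobDefinition={
        "bucketDefinitions": [
            {
                "accountId": "111122223333",
                "buckets": ["example-patient-logs"],
            }
        ]
    },
)

# Macie findings identify objects that contain PII or other sensitive data; the JSON
# logs themselves remain in S3, where Amazon Athena can query them with standard SQL.
print(job["jobId"])
```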
