Practice Free SAA-C03 Exam Online Questions
A company has a large amount of data in an Amazon DynamoDB table. A large batch of data is appended to the table once each day. The company wants a solution that will make all the existing and future data in DynamoDB available for analytics on a long-term basis.
Which solution meets these requirements with the LEAST operational overhead?
- A . Configure DynamoDB incremental exports to Amazon S3.
- B . Configure Amazon DynamoDB Streams to write records to Amazon S3.
- C . Configure Amazon EMR to copy DynamoDB data to Amazon S3.
- D . Configure Amazon EMR to copy DynamoDB data to Hadoop Distributed File System (HDFS).
A
Explanation:
Incremental Exports: DynamoDB export to Amazon S3 writes table data directly to S3, and incremental exports capture only the changes within a chosen time window, providing an automated, serverless way to keep the analytics copy current with minimal operational overhead.
Analytics-Friendly Storage: Amazon S3 supports long-term analytics workloads and can integrate with tools like Athena or QuickSight.
Reference: DynamoDB Export to S3 Documentation
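For illustration, the daily incremental export could be triggered with a short boto3 call similar to the sketch below. The table ARN, bucket, prefix, and 24-hour window are placeholder assumptions, and point-in-time recovery (PITR) must already be enabled on the table.

```python
import boto3
from datetime import datetime, timedelta, timezone

dynamodb = boto3.client("dynamodb")

# Incremental exports require point-in-time recovery (PITR) on the table.
# Export the last 24 hours of changes to S3 (placeholder ARN and bucket).
now = datetime.now(timezone.utc)
response = dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/SensorData",
    S3Bucket="example-analytics-bucket",
    S3Prefix="dynamodb-exports/",
    ExportFormat="DYNAMODB_JSON",
    ExportType="INCREMENTAL_EXPORT",
    IncrementalExportSpecification={
        "ExportFromTime": now - timedelta(hours=24),
        "ExportToTime": now,
        "ExportViewType": "NEW_AND_OLD_IMAGES",
    },
)
print(response["ExportDescription"]["ExportArn"])
```

The exported files in S3 can then be queried directly with Athena or other analytics tools.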
An ecommerce company is redesigning a product catalog system to handle millions of products and provide fast access to product information. The system needs to store structured product data such as product name, price, description, and category. The system also needs to store unstructured data such as high-resolution product videos and user manuals. The architecture must be highly available and must be able to handle sudden spikes in traffic during large-scale sales events.
Which solution will meet these requirements?
- A . Use an Amazon RDS Multi-AZ deployment to store product information. Store product videos and user manuals in Amazon S3.
- B . Use Amazon DynamoDB to store product information. Store product videos and user manuals in Amazon S3.
- C . Store all product information, including product videos and user manuals, in Amazon DynamoDB.
- D . Deploy an Amazon DocumentDB (with MongoDB compatibility) cluster to store all product information, product videos, and user manuals.
B
Explanation:
Amazon DynamoDB provides single-digit millisecond performance at any scale and is fully managed to handle millions of catalog records. It is ideal for structured catalog data such as product metadata and scales seamlessly during high-traffic events like sales. Amazon S3 is optimized for storing unstructured large objects such as videos and manuals, with virtually unlimited scalability and high durability.
Option A (RDS) would not handle massive scale or traffic spikes as efficiently.
Option C overloads DynamoDB by forcing it to store large binary data; DynamoDB items are limited to 400 KB, so it cannot reasonably hold high-resolution videos or user manuals.
Option D (DocumentDB) is suitable for JSON-like documents but not optimal for storing large media files and would add operational complexity.
Therefore, option B represents the best separation of structured and unstructured data storage.
Reference:
• DynamoDB Developer Guide ― Millisecond performance at scale
• Amazon S3 User Guide ― Storage for unstructured data
• AWS Well-Architected Framework ― Performance Efficiency Pillar
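As a rough sketch of this split (the table name, bucket, keys, and attributes are hypothetical), structured metadata is written to a DynamoDB item while the large media object is uploaded to S3 and referenced by key:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

# Structured product metadata in DynamoDB (hypothetical table and attributes).
table = dynamodb.Table("ProductCatalog")
table.put_item(
    Item={
        "ProductId": "prod-12345",
        "Name": "Wireless Headphones",
        "Price": "129.99",
        "Category": "Audio",
        "VideoKey": "videos/prod-12345/demo.mp4",  # pointer to the S3 object
    }
)

# Unstructured media in S3 (hypothetical bucket and key).
s3.upload_file(
    Filename="demo.mp4",
    Bucket="example-product-media",
    Key="videos/prod-12345/demo.mp4",
)
```

Keeping only a pointer to the S3 object in the DynamoDB item keeps item sizes small while S3 serves the heavy media.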
A company hosts a public web application on AWS. The website has a three-tier architecture. The frontend web tier is comprised of Amazon EC2 instances in an Auto Scaling group. The application tier is a second Auto Scaling group. The database tier is an Amazon RDS database.
The company has configured the Auto Scaling groups to handle the application’s normal level of demand. During an unexpected spike in demand, the company notices a long delay in the startup time when the frontend and application layers scale out. The company needs to improve the scaling performance of the application without negatively affecting the user experience.
Which solution will meet these requirements MOST cost-effectively?
- A . Decrease the minimum number of EC2 instances for both Auto Scaling groups. Increase the desired number of instances to meet the peak demand requirement.
- B . Configure the maximum number of instances for both Auto Scaling groups to be the number required to meet the peak demand. Create a warm pool.
- C . Increase the maximum number of EC2 instances for both Auto Scaling groups to meet the normal demand requirement. Create a warm pool.
- D . Reconfigure both Auto Scaling groups to use a scheduled scaling policy. Increase the size of the EC2 instance types and the RDS instance types.
B
Explanation:
EC2 Auto Scaling warm pools allow you to pre-initialize instances, reducing the delay in scale-out events. This results in significantly faster response times during demand surges while remaining cost-effective compared to always running at peak capacity.
Reference: AWS Documentation ― EC2 Auto Scaling Warm Pools for Faster Scaling
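A minimal boto3 sketch of this configuration might look like the following; the Auto Scaling group name and sizes are placeholders, not values from the question:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Raise the maximum to the peak-demand requirement (placeholder values).
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="frontend-asg",
    MaxSize=20,
)

# Keep pre-initialized instances stopped until a scale-out event needs them,
# so they skip lengthy startup work when demand spikes.
autoscaling.put_warm_pool(
    AutoScalingGroupName="frontend-asg",
    PoolState="Stopped",
    MinSize=4,
    InstanceReusePolicy={"ReuseOnScaleIn": True},
)
```

Stopped warm-pool instances incur only EBS and EIP charges, which keeps this approach cheaper than running at peak capacity continuously.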
A company is building an application in the AWS Cloud. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 for the DNS.
The company needs a managed solution with proactive engagement to detect against DDoS attacks.
Which solution will meet these requirements?
- A . Enable AWS Config. Configure an AWS Config managed rule that detects DDoS attacks.
- B . Enable AWS WAF on the ALB. Create an AWS WAF web ACL with rules to detect and prevent DDoS attacks. Associate the web ACL with the ALB.
- C . Store the ALB access logs in an Amazon S3 bucket. Configure Amazon GuardDuty to detect and take automated preventative actions for DDoS attacks.
- D . Subscribe to AWS Shield Advanced. Configure hosted zones in Route 53. Add ALB resources as protected resources.
D
Explanation:
AWS Shield Advanced is designed to provide enhanced protection against DDoS attacks with proactive engagement and response capabilities, making it the best solution for this scenario.
AWS Shield Advanced: This service provides advanced protection against DDoS attacks. It includes detailed attack diagnostics, 24/7 access to the AWS DDoS Response Team (DRT), and financial protection against DDoS-related scaling charges. Shield Advanced also integrates with Route 53 and the Application Load Balancer (ALB) to ensure comprehensive protection for your web applications.
Route 53 and ALB Protection: By adding your Route 53 hosted zones and ALB resources to AWS Shield Advanced, you ensure that these components are covered under the enhanced protection plan. Shield Advanced actively monitors traffic and provides real-time attack mitigation, minimizing the impact of DDoS attacks on your application.
Why Not Other Options?
Option A (AWS Config): AWS Config is a configuration management service and does not provide DDoS protection or detection capabilities.
Option B (AWS WAF): While AWS WAF can help mitigate some types of attacks, it does not provide the comprehensive DDoS protection and proactive engagement offered by Shield Advanced.
Option C (GuardDuty): GuardDuty is a threat detection service that identifies potentially malicious activity within your AWS environment, but it is not specifically designed to provide DDoS protection.
Reference:
AWS Shield Advanced ― Overview of AWS Shield Advanced and its DDoS protection capabilities.
Integrating AWS Shield Advanced with Route 53 and ALB ― Detailed guidance on how to protect Route 53 and ALB with AWS Shield Advanced.
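Assuming a Shield Advanced subscription is already in place, the protections and proactive engagement could be wired up roughly as in the sketch below; the ARNs and contact details are placeholders:

```python
import boto3

shield = boto3.client("shield")

# Requires an active Shield Advanced subscription (create_subscription()).
# Protect the ALB and the Route 53 hosted zone (placeholder ARNs).
protected_resources = [
    ("alb-protection",
     "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc123"),
    ("zone-protection",
     "arn:aws:route53:::hostedzone/Z1234567890ABC"),
]
for name, arn in protected_resources:
    shield.create_protection(Name=name, ResourceArn=arn)

# Proactive engagement needs emergency contacts on file before it is enabled.
shield.associate_proactive_engagement_details(
    EmergencyContactList=[
        {"EmailAddress": "secops@example.com", "PhoneNumber": "+15555550100"}
    ]
)
shield.enable_proactive_engagement()
```

With proactive engagement enabled, the AWS DDoS Response Team contacts the listed contacts when an attack affecting availability is detected.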
A solutions architect is building a static website hosted on Amazon S3. The website uses an Amazon Aurora PostgreSQL database accessed through an AWS Lambda function. The production website uses a Lambda alias that points to a specific version of the Lambda function.
Database credentials must rotate every 2 weeks. Previously deployed Lambda versions must always use the most recent credentials.
Which solution will meet these requirements?
- A . Store credentials in AWS Secrets Manager. Turn on rotation. Write code in the Lambda function to retrieve credentials from Secrets Manager.
- B . Include the credentials in the Lambda function code and update the function regularly.
- C . Use Lambda environment variables and update them when new credentials are available.
- D . Store credentials in AWS Systems Manager Parameter Store. Turn on rotation. Write code to retrieve credentials from Parameter Store.
A
Explanation:
AWS Secrets Manager is the recommended service for storing database credentials and performing automated rotation. Any Lambda function version or alias can fetch the latest secret value at runtime, ensuring no outdated credentials exist in deployed versions.
Environment variables (Option C) are static per version. Embedding credentials in code (Option B) is insecure and requires redeployment. Parameter Store (Option D) supports rotation but requires more configuration and is not as seamless as Secrets Manager for database credential rotation.
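A minimal sketch of the Lambda-side retrieval, assuming a hypothetical secret name and the standard Secrets Manager API:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")


def lambda_handler(event, context):
    # Fetch the current credentials at runtime so every published version
    # and alias always sees the latest rotated secret (placeholder secret name).
    secret = secrets.get_secret_value(SecretId="prod/aurora-postgres")
    credentials = json.loads(secret["SecretString"])
    # ... connect to Aurora PostgreSQL using credentials["username"] and
    # credentials["password"] ...
    return {"statusCode": 200}
```

Because the secret is resolved on each invocation rather than baked into the deployment package, older Lambda versions behind the alias automatically pick up rotated credentials.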
A company hosts an application on AWS that stores files that users need to access. The application uses two Amazon EC2 instances. One instance is in Availability Zone A, and the second instance is in Availability Zone B. Both instances use Amazon Elastic Block Store (Amazon EBS) volumes. Users must be able to access the files at any time without delay. Users report that the two instances occasionally contain different versions of the same file. Users occasionally receive HTTP 404 errors when they try to download files. The company must address the customer issues. The company cannot make changes to the application code.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Run the robocopy command on one of the EC2 instances on a schedule to copy files from the Availability Zone A instance to the Availability Zone B instance.
- B . Configure the application to store the files on both EBS volumes each time a user writes or updates a file.
- C . Mount an Amazon Elastic File System (Amazon EFS) file system to the EC2 instances. Copy the files from the EBS volumes to the EFS file system. Configure the application to store files in the EFS file system.
- D . Create an EC2 instance profile that allows the instance in Availability Zone A to access the S3 bucket. Re-associate the instance profile to the instance in Availability Zone B when needed.
C
Explanation:
Amazon EFS provides a fully managed, highly available, and shared file system that can be mounted by instances across multiple Availability Zones. This ensures consistency of files between EC2 instances and avoids replication issues. EBS volumes, in contrast, are AZ-scoped and not designed for multi-instance sharing.
Options A and B rely on custom replication or manual file handling, which increases operational overhead and risks inconsistencies.
Option D does not solve the shared access and consistency requirement. By migrating storage to EFS, both EC2 instances will read and write to the same storage system, ensuring that users always access the latest files without 404 errors.
Reference:
• Amazon EFS User Guide ― Accessing data across multiple Availability Zones
• AWS Well-Architected Framework ― Reliability Pillar: Data durability and consistency
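One possible provisioning sketch with boto3, using placeholder subnet and security group IDs for the two Availability Zones; the instances would then mount the same file system via NFS or the EFS mount helper:

```python
import boto3

efs = boto3.client("efs")

# One regional file system, reachable from both Availability Zones.
fs = efs.create_file_system(CreationToken="shared-app-files", Encrypted=True)
fs_id = fs["FileSystemId"]

# In practice, wait until the file system state is "available" before
# creating mount targets; omitted here for brevity.

# A mount target per AZ so each EC2 instance mounts the same data locally.
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:  # AZ A and AZ B (placeholders)
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```

Both instances then read and write the same files through their local mount target, which removes the version drift and 404 errors caused by per-instance EBS copies.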
A finance company is migrating its trading platform to AWS. The trading platform processes a high volume of market data and processes stock trades. The company needs to establish a consistent, low-latency network connection from its on-premises data center to AWS.
The company will host resources in a VPC. The solution must not use the public internet.
Which solution will meet these requirements?
- A . Use AWS Client VPN to connect the on-premises data center to AWS.
- B . Use AWS Direct Connect to set up a connection from the on-premises data center to AWS.
- C . Use AWS PrivateLink to set up a connection from the on-premises data center to AWS.
- D . Use AWS Site-to-Site VPN to connect the on-premises data center to AWS.
B
Explanation:
AWS Direct Connect is the best solution for establishing a consistent, low-latency connection from an on-premises data center to AWS without using the public internet. Direct Connect offers dedicated, high-throughput, and low-latency network connections, which are ideal for performance-sensitive applications like a trading platform that processes high volumes of market data and stock trades.
Direct Connect provides a private connection to your AWS VPC, ensuring that data doesn’t traverse the public internet, which enhances both security and performance consistency.
Reference:
AWS Direct Connect ― provides a dedicated network connection to AWS services with consistent, low-latency performance.
Best Practices for High Performance on AWS ― for performance-sensitive workloads like trading platforms.
Why the other options are incorrect:
Option A (Client VPN) and Option D (Site-to-Site VPN) send encrypted traffic over the public internet, so they cannot guarantee consistent, low-latency performance.
Option C (PrivateLink) provides private connectivity to services through VPC interface endpoints; it is not a mechanism for connecting an on-premises data center to AWS.
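For context, provisioning the Direct Connect side could start with a sketch like the one below; the location code, bandwidth, VLAN, BGP ASN, and gateway ID are placeholder assumptions, and the physical cross-connect is arranged with the colocation provider outside of these API calls:

```python
import boto3

dx = boto3.client("directconnect")

# Order a dedicated connection at a Direct Connect location (placeholders).
connection = dx.create_connection(
    location="EqDC2",
    bandwidth="10Gbps",
    connectionName="trading-platform-dx",
)

# Attach a private virtual interface so the VPC is reachable privately
# through a virtual private gateway (placeholder IDs and private ASN).
dx.create_private_virtual_interface(
    connectionId=connection["connectionId"],
    newPrivateVirtualInterface={
        "virtualInterfaceName": "trading-vpc-vif",
        "vlan": 101,
        "asn": 65000,
        "virtualGatewayId": "vgw-0123456789abcdef0",
    },
)
```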
A company needs a solution to ingest streaming sensor data from 100,000 devices, transform the data in near real time, and load the data into Amazon S3 for analysis. The solution must be fully managed, scalable, and maintain sub-second ingestion latency.
Which solution will meet these requirements?
- A . Use Amazon Kinesis Data Streams to ingest the data. Use Amazon Managed Service for Apache Flink to process the data in near real time. Use an Amazon Data Firehose stream to send processed data to Amazon S3.
- B . Use Amazon Simple Queue Service (Amazon SQS) standard queues to collect the sensor data. Invoke AWS Lambda functions to transform and process SQS messages in batches. Configure the Lambda functions to use an AWS SDK to write transformed data to Amazon S3.
- C . Deploy a fleet of Amazon EC2 instances that run Apache Kafka to ingest the data. Run Apache Spark on Amazon EMR clusters to process the data. Configure Spark to write processed data directly to Amazon S3.
- D . Implement Amazon EventBridge to capture all sensor data. Use AWS Batch to run containerized transformation jobs on a schedule. Configure AWS Batch jobs to process data in chunks. Save results to Amazon S3.
A
Explanation:
The most scalable and managed solution for streaming ingestion, real-time transformation, and delivery to Amazon S3 is Amazon Kinesis Data Streams, Amazon Managed Service for Apache Flink, and Amazon Kinesis Data Firehose.
From AWS Documentation:
“Amazon Kinesis Data Streams enables real-time processing of streaming data at massive scale. With Apache Flink on Kinesis Data Analytics, you can process data streams in near real-time, then use Amazon Kinesis Data Firehose to reliably deliver that data to S3.”
(Source: Amazon Kinesis Developer Guide)
Why A is correct:
Fully managed: All services involved are serverless and managed.
Real-time ingestion: Kinesis Data Streams supports sub-second latency and can handle high-throughput workloads like 100,000+ devices.
Near real-time processing: Apache Flink is designed for continuous stream processing with complex event handling.
Efficient delivery: Kinesis Firehose delivers processed data directly to S3 with retry and backup capability.
Why other options are incorrect:
Option B: SQS is not optimized for real-time streaming at high volume.
Option C: EC2 + Kafka + EMR adds high operational overhead and cost.
Option D: EventBridge is event-driven, not designed for high-throughput streaming; AWS Batch is unsuitable for near real-time processing.
Reference: Amazon Kinesis Developer Guide
AWS Well-Architected Framework ― Performance Efficiency Pillar
Amazon Managed Service for Apache Flink (formerly Amazon Kinesis Data Analytics)
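A device-side producer could be as simple as the following sketch; the stream name and payload fields are placeholders:

```python
import json
import boto3

kinesis = boto3.client("kinesis")


def publish_reading(device_id: str, reading: dict) -> None:
    # One record per sensor reading, partitioned by device ID so the
    # workload spreads evenly across shards (placeholder stream name).
    kinesis.put_record(
        StreamName="sensor-ingest",
        Data=json.dumps({"device_id": device_id, **reading}).encode("utf-8"),
        PartitionKey=device_id,
    )


publish_reading("device-000042", {"temperature_c": 21.7, "ts": "2024-01-01T00:00:00Z"})
```

The Managed Service for Apache Flink application would then consume this stream, transform the records continuously, and hand the results to a Firehose stream for delivery to S3.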
A company hosts an end-user application on Amazon EC2 instances behind an Application Load Balancer (ALB). The company needs to configure end-to-end encryption between the ALB and the EC2 instances.
Which solution will meet this requirement with the LEAST operational effort?
- A . Deploy AWS CloudHSM. Import a third-party certificate into CloudHSM. Configure the EC2 instances and the ALB to use the CloudHSM imported certificate.
- B . Import a third-party certificate bundle into AWS Certificate Manager (ACM). Generate a self-signed certificate on the EC2 instances. Associate the ACM imported third-party certificate with the ALB.
- C . Import a third-party SSL certificate into AWS Certificate Manager (ACM). Install the third-party certificate on the EC2 instances. Associate the ACM imported third-party certificate with the ALB.
- D . Use Amazon-issued AWS Certificate Manager (ACM) certificates on the EC2 instances and the ALB.
D
Explanation:
End-to-end encryption means TLS is used not only from the client to the ALB, but also from the ALB to the EC2 targets. The least operational effort is achieved by using AWS Certificate Manager (ACM) Amazon-issued certificates where possible, because ACM automates key management, certificate provisioning, and renewal for supported use cases. Associating an ACM certificate with the ALB is a standard managed approach for TLS termination at the load balancer, and using managed certificates reduces the overhead of tracking expiration, renewing, and distributing certificates.
For encryption on the backend connection (ALB to instances), the instances must present a certificate and complete TLS. Using Amazon-issued ACM certificates (via supported mechanisms) reduces manual certificate lifecycle work compared with importing and managing third-party certificates. By avoiding third-party cert procurement and manual renewals, the solution minimizes ongoing operations and reduces the risk of outages due to expired certificates. This aligns with the requirement for the least operational effort while meeting the security requirement.
Option A is far more operationally heavy: CloudHSM is intended for scenarios requiring customer-managed HSMs and adds complexity in deployment, scaling, integration, and operations.
Option B introduces self-signed certificates on instances, which increases operational friction (distribution, trust, rotation) and is not as clean for managed operations.
Option C works but requires managing third-party certificate lifecycle (renewals, re-import, redeploy to instances), which is more overhead than Amazon-issued managed certificates.
Therefore, D best meets end-to-end encryption needs with the lowest ongoing operational burden by using AWS-managed certificate issuance and lifecycle management capabilities.
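A partial sketch of the ALB side of this setup is shown below; the domain name and ARNs are placeholders, and how the instances obtain and serve their own certificates is a separate step not shown here:

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Request an Amazon-issued certificate; ACM handles renewal automatically.
# (The certificate must finish DNS validation before a listener can use it.)
cert = acm.request_certificate(
    DomainName="shop.example.com",
    ValidationMethod="DNS",
)

# Terminate TLS on the ALB with the ACM certificate and forward over HTTPS
# to a target group whose targets listen on port 443 (placeholder ARNs).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert["CertificateArn"]}],
    DefaultActions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app-https/def456",
        }
    ],
)
```

Using an HTTPS target group means the ALB re-encrypts traffic to the instances, completing the end-to-end encryption path.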
A company uses AWS to run its workloads. The company uses AWS Organizations to manage its accounts. The company needs to identify which departments are responsible for specific costs.
New accounts are constantly created in the Organizations account structure. The company's continuous integration and continuous delivery (CI/CD) framework already adds a populated department tag to the AWS resources. The company wants to use an AWS Cost Explorer report to identify the service costs by department from all AWS accounts.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Select TWO.)
- A . Activate the aws:createdBy cost allocation tag and the department cost allocation tag in the management account.
- B . Create a new cost and usage report in Cost Explorer. Group by the department cost allocation tag. Apply a filter to see all linked accounts and services.
- C . Activate only the department cost allocation tag in the management account.
- D . Create a new cost and usage report in Cost Explorer. Group by the department cost allocation tag without any other filters.
- E . Activate only the aws:createdBy cost allocation tag in the management account.
C,D
Explanation:
To track costs by department, you must activate the custom department tag as a cost allocation tag in the AWS Organizations management account. Once activated, Cost Explorer and cost and usage reports can group costs by this tag for all linked accounts. The most operationally efficient way is to activate only the relevant department tag and create a cost and usage report grouped by that tag.
AWS Documentation Extract:
“To use a tag for cost allocation, you must activate it in the AWS Billing and Cost Management console. After activation, you can use the tag to group costs in Cost Explorer and reports.”
(Source: AWS Cost Management documentation)
A, E: aws:createdBy is not related to department cost grouping and is unnecessary.
B: Applying extra filters is optional; D is more direct and operationally efficient.
Reference: AWS Certified Solutions Architect ― Official Study Guide, Cost Allocation and Tagging.
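A rough boto3 sketch of both steps, run from the management account and assuming the tag key is literally "department" (match whatever key the CI/CD framework actually applies):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer APIs, called from the management account

# Activate the department tag as a cost allocation tag (assumed tag key).
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "department", "Status": "Active"}]
)

# Group costs by the department tag and by service across all linked accounts.
report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[
        {"Type": "TAG", "Key": "department"},
        {"Type": "DIMENSION", "Key": "SERVICE"},
    ],
)
for group in report["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```

Note that newly activated cost allocation tags only apply to usage recorded after activation, so the grouped report fills in over time.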
