Practice Free SAA-C03 Exam Online Questions
A company provides a trading platform to customers. The platform uses an Amazon API Gateway REST API, AWS Lambda functions, and an Amazon DynamoDB table. Each trade that the platform processes invokes a Lambda function that stores the trade data in Amazon DynamoDB. The company
wants to ingest trade data into a data lake in Amazon S3 for near real-time analysis.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon S3.
- B . Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.
- C . Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes.
Configure Kinesis Data Streams to invoke a Lambda function that writes the data to Amazon S3.
- D . Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure a data stream to be the input for Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.
A
Explanation:
DynamoDB Streams: Captures real-time changes in DynamoDB tables and allows integration with Lambda for processing the changes.
Minimal Operational Overhead: Using a Lambda function directly to write data to S3 ensures simplicity and reduces the complexity of the pipeline.
Reference: Amazon DynamoDB Streams Documentation
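For illustration, a minimal sketch of the Lambda handler in option A follows. The bucket name, environment variable, and S3 key layout are assumptions, not part of the question.

```python
import json
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
# Hypothetical destination bucket for the data lake, supplied via an environment variable.
DATA_LAKE_BUCKET = os.environ.get("DATA_LAKE_BUCKET", "example-trade-data-lake")


def handler(event, context):
    """Invoked by a DynamoDB Streams event source mapping.

    Each record carries a change captured from the trades table; the new item
    image is written to S3 as a small JSON object for near real-time analysis.
    """
    records = event.get("Records", [])
    for record in records:
        if record.get("eventName") not in ("INSERT", "MODIFY"):
            continue  # only persist new or updated trades

        new_image = record["dynamodb"].get("NewImage", {})
        # Key layout (date prefix plus stream sequence number) is an assumption.
        key = (
            f"trades/{datetime.now(timezone.utc):%Y/%m/%d}/"
            f"{record['dynamodb']['SequenceNumber']}.json"
        )
        s3.put_object(
            Bucket=DATA_LAKE_BUCKET,
            Key=key,
            Body=json.dumps(new_image).encode("utf-8"),
        )

    return {"processed": len(records)}
```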
A company runs production workloads in its AWS account. Multiple teams create and maintain the workloads.
The company needs to be able to detect changes in resource configurations. The company needs to capture changes as configuration items without changing or modifying the existing resources.
Which solution will meet these requirements?
- A . Use AWS Config. Start the configuration recorder for AWS resources to detect changes in resource configurations.
- B . Use AWS CloudFormation. Initiate drift detection to capture changes in resource configurations.
- C . Use Amazon Detective to detect, analyze, and investigate changes in resource configurations.
- D . Use AWS Audit Manager to capture management events and global service events for resource configurations.
A
Explanation:
AWS Config is a service designed to assess, audit, and evaluate the configurations of AWS resources. It continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. By starting a configuration recorder, AWS Config captures changes to supported resource types as configuration items, without the need to modify any of the existing resources. This provides a full history of configuration changes and is exactly what this use case requires.
AWS Documentation Extract:
“AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so you can see how the configurations and relationships change over time.”
“You can start the configuration recorder, which will record the configuration changes of the supported resources in your AWS account.”
(Source: AWS Config documentation, What is AWS Config?)
Other options:
B: CloudFormation drift detection only works for resources created and managed by CloudFormation and requires stacks.
C: Amazon Detective is used for analyzing and investigating security findings, not for resource configuration tracking.
D: AWS Audit Manager is used for automating evidence collection to help with audits, not for tracking resource configurations.
Reference: AWS Certified Solutions Architect Official Study Guide, Chapter on Monitoring and Auditing.
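As a rough sketch of option A with boto3, the calls below create and start a configuration recorder. The IAM role ARN and delivery bucket are placeholder values that would need to exist in the account.

```python
import boto3

config = boto3.client("config")

# Placeholder values: an IAM role that AWS Config can assume and an S3 bucket
# for configuration snapshots. Both must already exist.
RECORDER_ROLE_ARN = "arn:aws:iam::123456789012:role/aws-config-recorder-role"
DELIVERY_BUCKET = "example-config-delivery-bucket"

# Record configuration changes for all supported resource types.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": RECORDER_ROLE_ARN,
        "recordingGroup": {
            "allSupported": True,
            "includeGlobalResourceTypes": True,
        },
    }
)

# A delivery channel is required before the recorder can start.
config.put_delivery_channel(
    DeliveryChannel={"name": "default", "s3BucketName": DELIVERY_BUCKET}
)

config.start_configuration_recorder(ConfigurationRecorderName="default")
```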
A manufacturing company runs an order processing application in its VPC. The company wants to securely send messages from the application to an external Salesforce system that uses Open Authorization (OAuth).
A solutions architect needs to integrate the company’s order processing application with the external Salesforce system.
Which solution will meet these requirements?
- A . Create an Amazon Simple Notification Service (Amazon SNS) topic in a fanout configuration that pushes data to an HTTPS endpoint. Configure the order processing application to publish messages to the SNS topic.
- B . Create an Amazon Simple Notification Service (Amazon SNS) topic in a fanout configuration that pushes data to an Amazon Data Firehose delivery stream that has an HTTP destination. Configure the order processing application to publish messages to the SNS topic.
- C . Create an Amazon EventBridge rule and configure an Amazon EventBridge API destination partner. Configure the order processing application to publish messages to Amazon EventBridge.
- D . Create an Amazon Managed Streaming for Apache Kafka (Amazon MSK) topic that has an outbound MSK Connect connector. Configure the order processing application to publish messages to the MSK topic.
C
Explanation:
Amazon EventBridge API destinations allow you to send data from AWS to external systems, like Salesforce, using HTTP APIs, including those secured with OAuth. This provides a secure and scalable solution for sending messages from the order processing application to Salesforce.
Options A and B (SNS): SNS is not ideal for OAuth-secured external APIs and lacks the necessary OAuth integration.
Option D (MSK): Amazon MSK is a Kafka-based streaming solution, which is overkill for simple message forwarding to Salesforce.
Reference: Amazon EventBridge API Destinations
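A rough boto3 sketch of option C is shown below. The OAuth client details, Salesforce endpoint URL, event source name, and IAM role ARN are all placeholders for illustration only.

```python
import boto3

events = boto3.client("events")

# Connection that stores the OAuth client credentials for the external system.
connection = events.create_connection(
    Name="salesforce-oauth-connection",
    AuthorizationType="OAUTH_CLIENT_CREDENTIALS",
    AuthParameters={
        "OAuthParameters": {
            "ClientParameters": {
                "ClientID": "example-client-id",
                "ClientSecret": "example-client-secret",
            },
            "AuthorizationEndpoint": "https://login.salesforce.com/services/oauth2/token",
            "HttpMethod": "POST",
        }
    },
)

# API destination that targets the external endpoint (placeholder URL).
destination = events.create_api_destination(
    Name="salesforce-orders",
    ConnectionArn=connection["ConnectionArn"],
    InvocationEndpoint="https://example.my.salesforce.com/services/data/v58.0/sobjects/Order__c",
    HttpMethod="POST",
)

# Rule that matches events published by the order processing application.
events.put_rule(
    Name="order-events-to-salesforce",
    EventPattern='{"source": ["orders.app"]}',
)

# Route matching events to the API destination; the role ARN is a placeholder
# for a role that allows EventBridge to invoke the destination.
events.put_targets(
    Rule="order-events-to-salesforce",
    Targets=[
        {
            "Id": "salesforce-api-destination",
            "Arn": destination["ApiDestinationArn"],
            "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-api-destination-role",
        }
    ],
)
```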
A finance company hosts a data lake in Amazon S3. The company receives financial data records over SFTP each night from several third parties. The company runs its own SFTP server on an Amazon EC2 instance in a public subnet of a VPC. After the files are uploaded, they are moved to the data lake by a cron job that runs on the same instance. The SFTP server is reachable on DNS sftp.example.com through the use of Amazon Route 53.
What should a solutions architect do to improve the reliability and scalability of the SFTP solution?
- A . Move the EC2 instance into an Auto Scaling group. Place the EC2 instance behind an Application Load Balancer (ALB). Update the DNS record sftp.example.com in Route 53 to point to the ALB.
- B . Migrate the SFTP server to AWS Transfer for SFTP. Update the DNS record sftp.example.com in Route 53 to point to the server endpoint hostname.
- C . Migrate the SFTP server to a file gateway in AWS Storage Gateway. Update the DNS record sftp.example.com in Route 53 to point to the file gateway endpoint.
- D . Place the EC2 instance behind a Network Load Balancer (NLB). Update the DNS record sftp.example.com in Route 53 to point to the NLB.
B
Explanation:
The optimal way to improve reliability and scalability of SFTP on AWS is to use AWS Transfer Family (for SFTP). It provides a fully managed SFTP server integrated with Amazon S3. No EC2 instances or infrastructure management is required.
AWS Transfer Family supports custom DNS domains (e.g., sftp.example.com) and allows integration with existing authentication mechanisms like LDAP, AD, or custom identity providers.
Files are uploaded directly to S3, eliminating the need for cron jobs to move data from EC2 to S3.
Built-in high availability and scalability remove the burden of managing infrastructure.
Other options:
A and D still require manual scaling, server maintenance, and cron jobs.
C (Storage Gateway) is used for hybrid file access, not for replacing an SFTP server.
Reference: AWS Transfer Family for SFTP
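The sketch below shows one way option B could be provisioned with boto3. The hosted zone ID, Region in the endpoint hostname, and DNS record details are assumptions for illustration.

```python
import boto3

transfer = boto3.client("transfer")
route53 = boto3.client("route53")

# Fully managed SFTP endpoint backed by Amazon S3 with service-managed users.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
)
server_id = server["ServerId"]

# Transfer Family public endpoints follow this hostname pattern (Region assumed).
endpoint = f"{server_id}.server.transfer.us-east-1.amazonaws.com"

# Point sftp.example.com at the managed endpoint (hosted zone ID is a placeholder).
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "sftp.example.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": endpoint}],
                },
            }
        ]
    },
)
```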
A company runs HPC workloads requiring high IOPS.
Which combination of steps will meet these requirements? (Select TWO)
- A . Use Amazon EFS as a high-performance file system.
- B . Use Amazon FSx for Lustre as a high-performance file system.
- C . Create an Auto Scaling group of EC2 instances. Use Reserved Instances. Configure a spread placement group. Use AWS Batch for analytics.
- D . Use Mountpoint for Amazon S3 as a high-performance file system.
- E . Create an Auto Scaling group of EC2 instances. Use mixed instance types and a cluster placement group. Use Amazon EMR for analytics.
B, E
Explanation:
Option B: FSx for Lustre is designed for HPC workloads with high IOPS.
Option E: A cluster placement group ensures low-latency networking for HPC analytics workloads.
Option A: Amazon EFS is not optimized for HPC.
Option D: Mountpoint for S3 does not meet high IOPS needs.
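As a rough boto3 sketch of the two selected building blocks (FSx for Lustre plus a cluster placement group), with the subnet ID, storage capacity, and group name as placeholder values:

```python
import boto3

fsx = boto3.client("fsx")
ec2 = boto3.client("ec2")

# High-performance scratch file system for the HPC job data (placeholder subnet).
file_system = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB; Lustre requires supported capacity increments
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)

# Cluster placement group keeps instances close together for low-latency networking.
ec2.create_placement_group(GroupName="hpc-analytics-cluster", Strategy="cluster")

print(file_system["FileSystem"]["FileSystemId"])
```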
A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed internet connection.
The company wants to aggregate the data from all these global sites as quickly as possible in a single
Amazon S3 bucket. The solution must minimize operational complexity.
Which solution meets these requirements?
- A . Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
- B . Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
- C . Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
- D . Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.
A
Explanation:
The goal is to transfer 500 GB daily from multiple global locations quickly into a single S3 bucket while keeping operational complexity low.
Option A: ✅
Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
S3 Transfer Acceleration (S3-TA) allows faster global uploads by routing traffic through Amazon CloudFront’s globally distributed edge locations.
Multipart uploads improve efficiency by breaking large files into smaller parts and transferring them in parallel.
Low operational complexity: No need for additional resources or manual replication.
Why is this best? It ensures high-speed transfers while minimizing complexity.
Reference: Amazon S3 Transfer Acceleration
Option B: ❌
Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
While S3 Cross-Region Replication (CRR) can copy objects, it adds latency due to sequential replication rather than a direct, fast transfer.
Why not? S3 Transfer Acceleration is faster and avoids extra steps.
Reference: Cross-Region Replication
Option C: ❌
Use AWS Snowball Edge for daily transfers.
AWS Snowball is for bulk offline transfers, not daily high-speed internet transfers.
Why not? Unnecessary physical devices add operational overhead.
Reference: AWS Snowball Edge
Option D: ❌
Upload to EC2, store in EBS, snapshot, and restore in the destination Region.
This approach is overly complex and not optimized for direct S3 ingestion.
Why not? Too many steps and higher costs.
Reference: Amazon EBS Snapshots
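A minimal sketch of option A with boto3 is shown below: enable Transfer Acceleration once on the destination bucket, then upload through the accelerated endpoint with multipart settings. The bucket name, file names, and part size are placeholder choices.

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

DEST_BUCKET = "example-global-weather-data"  # placeholder destination bucket

# Enable Transfer Acceleration on the destination bucket (one-time setup).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=DEST_BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that routes uploads through the accelerated (edge-location) endpoint.
s3_accelerated = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)

# Multipart settings: split large files into 64 MiB parts uploaded in parallel.
transfer_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)

s3_accelerated.upload_file(
    Filename="site-readings-2024-01-01.parquet",  # placeholder local file
    Bucket=DEST_BUCKET,
    Key="raw/site-a/2024-01-01/readings.parquet",
    Config=transfer_config,
)
```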
An ecommerce company experiences a surge in mobile application traffic every Monday at 8 AM during the company’s weekly sales events. The application’s backend uses an Amazon API Gateway HTTP API and AWS Lambda functions to process user requests. During peak sales periods, users report encountering Too Many Requests Exception errors from the Lambda functions. The errors result in a degraded user experience. A solutions architect needs to design a scalable and resilient solution that minimizes the errors and ensures that the application’s overall functionality remains unaffected.
Which solution will meet these requirements?
- A . Create an Amazon Simple Queue Service (Amazon SQS) queue. Send user requests to the SQS queue. Configure the Lambda function with provisioned concurrency. Set the SQS queue as the event source trigger.
- B . Use AWS Step Functions to orchestrate and process user requests. Configure Step Functions to invoke the Lambda functions and to manage the request flow.
- C . Create an Amazon Simple Notification Service (Amazon SNS) topic. Send user requests to the SNS topic. Configure the Lambda functions with provisioned concurrency. Subscribe the functions to the SNS topic.
- D . Create an Amazon Simple Queue Service (Amazon SQS) queue. Send user requests to the SQS queue. Configure the Lambda functions with reserved concurrency. Set the SQS queue as the event source trigger for the functions.
A
Explanation:
Too Many Requests Exception errors occur when Lambda exceeds concurrency limits. The recommended pattern is to use Amazon SQS with Lambda to decouple and buffer traffic, ensuring that bursts of requests are queued and processed smoothly. Enabling provisioned concurrency for Lambda ensures that functions are pre-initialized and ready to handle spikes in load with low latency. Step Functions (B) is designed for workflow orchestration, not high-throughput request buffering. SNS with Lambda (C) does not provide buffering and may overwhelm Lambda during bursts. Reserved concurrency (D) limits function scaling instead of improving resilience. Therefore, option A provides a scalable and resilient solution, minimizing errors during traffic surges.
Reference:
• AWS Lambda Developer Guide ― Provisioned concurrency and scaling with SQS
• Amazon SQS User Guide ― Using Lambda with Amazon SQS
• AWS Well-Architected Framework ― Reliability Pillar
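For illustration, the boto3 sketch below wires a buffering queue to a Lambda function and pre-warms it with provisioned concurrency. The queue name, function name, alias, batch size, and concurrency value are placeholders, and the function is assumed to already exist.

```python
import boto3

lambda_client = boto3.client("lambda")
sqs = boto3.client("sqs")

# Create the buffering queue and look up its ARN (placeholder name).
queue = sqs.create_queue(QueueName="order-requests-buffer")
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

FUNCTION_NAME = "process-user-requests"  # assumed existing Lambda function

# The queue becomes the event source: Lambda polls it and processes batches,
# smoothing out the Monday 8 AM burst instead of throttling requests.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName=FUNCTION_NAME,
    BatchSize=10,
)

# Pre-initialize execution environments on a published alias so the function
# absorbs the spike with low latency.
lambda_client.put_provisioned_concurrency_config(
    FunctionName=FUNCTION_NAME,
    Qualifier="live",  # placeholder alias
    ProvisionedConcurrentExecutions=100,
)
```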
A company runs an application on several Amazon EC2 instances that store persistent data on an Amazon Elastic File System (Amazon EFS) file system. The company needs to replicate the data to another AWS Region by using an AWS managed service solution.
Which solution will meet these requirements MOST cost-effectively?
- A . Use the EFS-to-EFS backup solution to replicate the data to an EFS file system in another Region.
- B . Run a nightly script to copy data from the EFS file system to an Amazon S3 bucket. Enable S3 Cross-Region Replication on the S3 bucket.
- C . Create a VPC in another Region. Establish a cross-Region VPC peer. Run a nightly rsync to copy data from the original Region to the new Region.
- D . Use AWS Backup to create a backup plan with a rule that takes a daily backup and replicates it to another Region. Assign the EFS file system resource to the backup plan.
D
Explanation:
AWS Backup supports cross-Region backup for Amazon EFS, allowing automated, scheduled backups and replication to another Region. This managed service simplifies backup management and ensures data resilience without the need for custom scripts or manual processes.
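As a rough boto3 sketch of option D, the plan below takes a daily backup and copies each recovery point to a vault in another Region. The vault, file system, and role ARNs, schedule, and retention values are placeholders.

```python
import boto3

backup = boto3.client("backup")

# Placeholder ARNs: destination vault in the other Region, the EFS file system,
# and an IAM role that AWS Backup can assume.
DEST_VAULT_ARN = "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault"
EFS_ARN = "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
BACKUP_ROLE_ARN = "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole"

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "efs-daily-cross-region",
        "Rules": [
            {
                "RuleName": "daily-with-copy",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 35},
                # Copy each recovery point to the vault in the other Region.
                "CopyActions": [{"DestinationBackupVaultArn": DEST_VAULT_ARN}],
            }
        ],
    }
)

# Assign the EFS file system to the backup plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "efs-file-system",
        "IamRoleArn": BACKUP_ROLE_ARN,
        "Resources": [EFS_ARN],
    },
)
```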
A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a solution that centrally manages networking components for the workloads. The solution also must create accounts with automatic security controls (guardrails).
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
- B . Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
- C . Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
- D . Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
A
Explanation:
AWS Control Tower: Provides a managed service to set up and govern a secure, multi-account AWS environment based on AWS best practices. It automates the setup of AWS Organizations and applies security controls (guardrails).
Networking Account:
Create a centralized networking account that includes a VPC with both private and public subnets.
This centralized VPC will manage and control the networking resources.
AWS Resource Access Manager (AWS RAM):
Use AWS RAM to share the subnets from the networking account with the other workload accounts. This allows different workload accounts to utilize the shared networking resources without the need to manage their own VPCs.
Operational Efficiency: Using AWS Control Tower simplifies the setup and governance of multiple AWS accounts, while AWS RAM facilitates centralized management of networking resources, reducing operational overhead and ensuring consistent security and compliance.
Reference: AWS Control Tower
AWS Resource Access Manager
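For illustration, the subnet-sharing step from the networking account could look like the boto3 sketch below. The subnet ARNs and workload account IDs are placeholders; an Organizations OU ARN could be used as the principal instead.

```python
import boto3

ram = boto3.client("ram")

# Placeholder values: subnets in the central networking account and the
# workload accounts that should receive them.
SUBNET_ARNS = [
    "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0aaaabbbbccccdddd",
    "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0eeeeffff00001111",
]
WORKLOAD_PRINCIPALS = ["222222222222", "333333333333"]

share = ram.create_resource_share(
    name="shared-workload-subnets",
    resourceArns=SUBNET_ARNS,
    principals=WORKLOAD_PRINCIPALS,
    # Sharing within the same AWS Organization does not require external principals.
    allowExternalPrincipals=False,
)

print(share["resourceShare"]["resourceShareArn"])
```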
A company is designing a new internal web application in the AWS Cloud. The new application must securely retrieve and store multiple employee usernames and passwords from an AWS managed service.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Store the employee credentials in AWS Systems Manager Parameter Store. Use AWS CloudFormation and the BatchGetSecretValue API to retrieve usernames and passwords from Parameter Store.
- B . Store the employee credentials in AWS Secrets Manager. Use AWS CloudFormation and AWS Batch with the BatchGetSecretValue API to retrieve the usernames and passwords from Secrets Manager.
- C . Store the employee credentials in AWS Systems Manager Parameter Store. Use AWS CloudFormation and AWS Batch with the BatchGetSecretValue API to retrieve the usernames and passwords from Parameter Store.
- D . Store the employee credentials in AWS Secrets Manager. Use AWS CloudFormation and the BatchGetSecretValue API to retrieve the usernames and passwords from Secrets Manager.
D
Explanation:
AWS Secrets Manager is the best solution for securely storing and managing sensitive information, such as usernames and passwords. Secrets Manager provides automatic rotation, fine-grained access control, and encryption of credentials. It is designed to integrate easily with other AWS services, such as CloudFormation, to automate the retrieval of secrets via the BatchGetSecretValue API.
Secrets Manager has a lower operational overhead than manually managing credentials, and it offers features like automatic secret rotation that reduce the need for human intervention.
Options A and C (Parameter Store): While Systems Manager Parameter Store can store secrets, Secrets Manager provides more specialized capabilities for securely managing and rotating credentials with less operational overhead.
Options B and C (AWS Batch): Introducing AWS Batch unnecessarily complicates the solution. Secrets Manager already provides simple API calls for retrieving secrets without needing an additional service.
Reference: AWS Secrets Manager
Secrets Manager with CloudFormation
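A minimal sketch of retrieving several secrets in one call, assuming a boto3 version recent enough to include the BatchGetSecretValue operation; the secret names and the JSON format of each secret value are placeholders.

```python
import boto3

secrets = boto3.client("secretsmanager")

# Placeholder secret names for the stored employee credentials.
SECRET_IDS = ["employees/alice", "employees/bob", "employees/carol"]

response = secrets.batch_get_secret_value(SecretIdList=SECRET_IDS)

credentials = {}
for secret in response["SecretValues"]:
    # Each SecretString is assumed to hold a JSON document with the username
    # and password for one employee.
    credentials[secret["Name"]] = secret["SecretString"]

# Secrets that could not be retrieved are reported separately.
for error in response.get("Errors", []):
    print(f"Could not retrieve {error['SecretId']}: {error['ErrorCode']}")
```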
