Practice Free SAA-C03 Exam Online Questions
A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.
Which solution will meet these requirements?
- A . Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.
- B . Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS Cloud.
- C . Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.
- D . Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS Cloud, then perform analytics on this data in the cloud.
A
Explanation:
AWS Storage Gateway file gateway presents a file interface backed by Amazon S3 and supports NFS. This allows local applications to access data via NFS while also enabling cloud applications to use the data stored in S3 for analytics and processing, fulfilling both hybrid and cloud-native requirements.
Reference Extract:
"AWS Storage Gateway file gateway offers NFS and SMB access to data stored in Amazon S3, supporting hybrid workloads for local and cloud access."
Source: AWS Certified Solutions Architect ― Official Study Guide, Hybrid and Storage Gateway section.
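As a rough illustration of option A, the sketch below assembles the parameters for a Storage Gateway `CreateNFSFileShare` call with boto3. The ARNs, CIDR range, and token are placeholders, not values from the question; a real deployment needs an activated file gateway, a backing S3 bucket, and an IAM role the gateway can assume.

```python
def build_nfs_share_request(gateway_arn, bucket_arn, role_arn, client_token):
    """Assemble parameters for StorageGateway CreateNFSFileShare.

    All ARNs here are hypothetical placeholders.
    """
    return {
        "ClientToken": client_token,          # idempotency token
        "GatewayARN": gateway_arn,
        "Role": role_arn,                     # IAM role the gateway assumes to write to S3
        "LocationARN": bucket_arn,            # backing S3 bucket
        "DefaultStorageClass": "S3_STANDARD",
        "ClientList": ["10.0.0.0/16"],        # on-premises CIDR allowed to mount the share
        "Squash": "RootSquash",
    }

def create_nfs_share(params):
    # Live call; requires AWS credentials and an activated file gateway.
    import boto3
    return boto3.client("storagegateway").create_nfs_file_share(**params)
```

On-premises servers would then mount the resulting NFS export, while cloud analytics jobs read the same objects directly from the S3 bucket.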
A company has AWS Lambda functions that use environment variables. The company does not want its developers to see environment variables in plaintext.
Which solution will meet these requirements?
- A . Deploy code to Amazon EC2 instances instead of using Lambda functions.
- B . Configure SSL encryption on the Lambda functions to use AWS CloudHSM to store and encrypt the environment variables.
- C . Create a certificate in AWS Certificate Manager (ACM). Configure the Lambda functions to use the certificate to encrypt the environment variables.
- D . Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda functions to use the KMS key to store and encrypt the environment variables.
D
Explanation:
AWS Lambda supports encrypting environment variables at rest using AWS KMS. You can use encryption helpers (or Lambda’s built-in support) to encrypt sensitive environment variable values using a KMS key. These encrypted variables are not visible in plaintext to developers, either in the console or when running the code.
AWS Documentation Extract:
"AWS Lambda automatically encrypts environment variables at rest. For additional security, you can use AWS KMS keys and encryption helpers to encrypt environment variables, ensuring they are never exposed in plaintext."
(Source: AWS Lambda documentation, Environment Variables Security)
A: Does not address the issue (and adds more management overhead).
B, C: There is no native support for environment variable encryption via CloudHSM or ACM.
Reference: AWS Certified Solutions Architect ― Official Study Guide, Lambda Security Best Practices.
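A minimal sketch of the decrypt-at-runtime pattern the encryption helpers produce: the environment variable holds base64-encoded ciphertext, and the function decrypts it once per execution environment and caches the plaintext. The `decrypt` parameter is injectable here purely so the caching logic can be shown without live KMS access; the encryption context key is the one the Lambda console-generated snippet typically uses.

```python
import base64
import os

# Cache so the KMS Decrypt call happens once per execution environment,
# not on every invocation.
_plaintext_cache = {}

def kms_decrypt(ciphertext_blob):
    # Live KMS call; requires credentials and kms:Decrypt on the key.
    import boto3
    resp = boto3.client("kms").decrypt(
        CiphertextBlob=ciphertext_blob,
        EncryptionContext={
            "LambdaFunctionName": os.environ.get("AWS_LAMBDA_FUNCTION_NAME", "")
        },
    )
    return resp["Plaintext"]

def get_secret_env(name, decrypt=kms_decrypt):
    """Return the decrypted value of an encryption-helper-protected variable."""
    if name not in _plaintext_cache:
        ciphertext = base64.b64decode(os.environ[name])
        _plaintext_cache[name] = decrypt(ciphertext).decode("utf-8")
    return _plaintext_cache[name]
```

Because developers without `kms:Decrypt` permission on the key only ever see the ciphertext, the plaintext stays hidden in the console and in deployment packages.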
A security team needs to enforce the rotation of all IAM users’ access keys every 90 days. If an access key is found to be older, the key must be made inactive and removed. A solutions architect must create a solution that will check for and remediate any keys older than 90 days.
Which solution meets these requirements with the LEAST operational effort?
- A . Create an AWS Config rule to check for the key age. Configure the AWS Config rule to run an AWS Batch job to remove the key.
- B . Create an Amazon EventBridge rule to check for the key age. Configure the rule to run an AWS Batch job to remove the key.
- C . Create an AWS Config rule to check for the key age. Define an Amazon EventBridge rule to schedule an AWS Lambda function to remove the key.
- D . Create an Amazon EventBridge rule to check for the key age. Define an EventBridge rule to run an AWS Batch job to remove the key.
C
Explanation:
The requirement has two parts: (1) continuously or regularly evaluate IAM access key age against a 90-day policy, and (2) remediate noncompliant keys by disabling and deleting them. The least operational effort generally comes from using managed compliance evaluation and lightweight serverless remediation.
AWS Config is designed to assess resource configuration compliance over time. Using an AWS Config rule to check access key age fits the governance model because Config maintains a compliance history, supports central reporting, and can evaluate keys across an account (and commonly across an organization with aggregation patterns). After detection, remediation should be automated with minimal infrastructure, and AWS Lambda is the lightest operational tool for directly calling IAM APIs (UpdateAccessKey to Inactive, then DeleteAccessKey) on a schedule.
Option C pairs these appropriately: Config performs compliance evaluation, and an EventBridge scheduled rule triggers a Lambda function to remediate keys older than 90 days. This avoids running and maintaining compute fleets or batch infrastructure. It also provides clear separation of duties: Config for detection and evidence, Lambda for corrective action.
Options A, B, and D rely on AWS Batch, which introduces additional operational overhead (compute environments, job definitions, queues, scaling, and monitoring) that is unnecessary for a simple IAM housekeeping task. Also, EventBridge by itself is not a compliance evaluation service; it can schedule a job, but it does not inherently track “key age” state or compliance posture the way Config does.
Therefore, C best meets the requirement with the least operational effort by using Config for continuous compliance visibility and a scheduled Lambda for automated key deactivation and deletion.
A company runs a three-tier web application in a VPC on AWS. The company deployed an Application Load Balancer (ALB) in a public subnet. The web tier and application tier Amazon EC2 instances are deployed in a private subnet. The company uses a self-managed MySQL database that runs on EC2 instances in an isolated private subnet for the database tier.
The company wants a mechanism that will give a DevOps team the ability to use SSH to access all the servers. The company also wants to have a centrally managed log of all connections made to the servers.
Which combination of solutions will meet these requirements with the MOST operational efficiency? (Select TWO.)
- A . Create a bastion host in the public subnet. Configure security groups in the public, private, and isolated subnets to allow SSH access.
- B . Create an interface VPC endpoint for AWS Systems Manager Session Manager. Attach the endpoint to the VPC.
- C . Create an IAM policy that grants access to AWS Systems Manager Session Manager. Attach the IAM policy to the EC2 instances.
- D . Create a gateway VPC endpoint for AWS Systems Manager Session Manager. Attach the endpoint to the VPC.
- E . Attach an AmazonSSMManagedInstanceCore AWS managed IAM policy to all the EC2 instance roles.
B, E
Explanation:
AWS Systems Manager Session Manager allows secure, auditable SSH-like access to EC2 instances without the need to open SSH ports or manage bastion hosts. For this to work in a private subnet, an interface VPC endpoint is required (not a gateway endpoint).
The EC2 instances must have the AmazonSSMManagedInstanceCore policy attached to their IAM roles to allow Systems Manager operations.
With Session Manager, all session activity can be logged centrally to Amazon CloudWatch Logs or S3, satisfying the audit requirement and improving operational efficiency over manual SSH and bastion configurations.
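For option B, Session Manager in a subnet with no internet path generally needs three interface endpoints (`ssm`, `ssmmessages`, `ec2messages`). The sketch below builds those service names and creates the endpoints; the VPC, subnet, and security group IDs are placeholders.

```python
SESSION_MANAGER_SERVICES = ("ssm", "ssmmessages", "ec2messages")

def endpoint_service_names(region):
    """Interface endpoint service names Session Manager needs for instances
    without internet access (session logging to CloudWatch Logs or S3 may
    need additional endpoints)."""
    return [f"com.amazonaws.{region}.{svc}" for svc in SESSION_MANAGER_SERVICES]

def create_endpoints(vpc_id, subnet_ids, sg_id, region):
    # Live EC2 calls; one interface endpoint per required service.
    import boto3
    ec2 = boto3.client("ec2", region_name=region)
    for service in endpoint_service_names(region):
        ec2.create_vpc_endpoint(
            VpcEndpointType="Interface",
            VpcId=vpc_id,
            ServiceName=service,
            SubnetIds=subnet_ids,
            SecurityGroupIds=[sg_id],
            PrivateDnsEnabled=True,
        )
```

With the endpoints in place and AmazonSSMManagedInstanceCore on the instance roles (option E), no inbound SSH ports or bastion hosts are needed.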
A company processes streaming data by using Amazon Kinesis Data Streams and an AWS Lambda function. The streaming data comes from devices that are connected to the internet. The company is experiencing scaling problems and needs to implement shard-level control and custom checkpointing.
Which solution will meet these requirements with the LEAST latency?
- A . Connect Kinesis Data Streams to Amazon Data Firehose to ingest incoming data to an Amazon S3 bucket. Configure S3 Event Notifications to invoke the Lambda function.
- B . Increase the provisioned concurrency settings for the Lambda function. Stream the data from Kinesis Data Streams to an Amazon Simple Queue Service (Amazon SQS) standard queue. Invoke the Lambda function to process the messages.
- C . Run the Lambda function code in an Amazon Elastic Container Service (Amazon ECS) container that runs on AWS Fargate. Change the code to use the Kinesis Client Library (KCL).
- D . Increase the memory and provisioned concurrency settings for the Lambda function. Stream the data from Kinesis Data Streams to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure the Lambda function to be invoked by the SQS queue.
C
Explanation:
The requirements “shard-level control” and “custom checkpointing” point directly to the Kinesis Client Library (KCL), which is designed for building consumer applications that coordinate shard processing, perform load balancing across workers, and manage checkpoints in a durable store (commonly DynamoDB) with fine-grained control. While Lambda can consume from Kinesis Data Streams, its event source mapping abstracts shard coordination and checkpoint behavior; it is not the best fit when you explicitly need custom checkpointing logic and shard-level control beyond the managed integration.
Running the existing Lambda processing logic as a long-running consumer in Amazon ECS on AWS Fargate allows the company to operate a scalable KCL-based consumer fleet without managing servers. With Fargate, AWS manages the underlying compute while the application maintains direct control over shard assignment and checkpoint timing and frequency through KCL, which is exactly what the requirement calls for. Latency is minimized because the consumer reads directly from Kinesis Data Streams and processes records continuously, avoiding additional buffering layers or store-and-forward patterns.
Option A adds significant latency by delivering to S3 through Firehose and then triggering processing from S3 object notifications; this is optimized for delivery and batch-style processing, not low-latency streaming with shard-level control.
Options B and D introduce SQS between Kinesis and processing, which adds another hop and does not inherently provide shard-level control or KCL-style checkpointing; FIFO also limits throughput and is not intended for high-scale streaming fan-in. Increasing Lambda provisioned concurrency can reduce cold starts, but it does not solve the need for custom checkpointing at the shard level.
Therefore, C best meets shard-level control and custom checkpointing requirements with the least latency by using a direct KCL consumer on a managed compute platform (Fargate).
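To make "custom checkpointing" concrete, here is a simplified, KCL-style record processor that checkpoints every N records rather than after each one. This is a sketch of the policy only; a real consumer (the Java KCL or `amazon_kclpy`) supplies the checkpointer and manages shard leases in DynamoDB, and the class and method names here are illustrative.

```python
class ShardProcessor:
    """Simplified sketch of a KCL-style per-shard record processor."""

    def __init__(self, checkpoint_every=100):
        self.checkpoint_every = checkpoint_every
        self.since_checkpoint = 0
        self.last_sequence = None

    def process_records(self, records, checkpointer):
        for record in records:
            self.handle(record)
            self.last_sequence = record["sequenceNumber"]
            self.since_checkpoint += 1
            # Custom checkpoint policy: persist progress every N records
            # instead of after every record, trading a larger replay
            # window on failure for higher throughput.
            if self.since_checkpoint >= self.checkpoint_every:
                checkpointer.checkpoint(self.last_sequence)
                self.since_checkpoint = 0

    def handle(self, record):
        pass  # application-specific processing of one Kinesis record
```

Lambda's event source mapping manages checkpoints for you; owning this loop yourself is precisely the shard-level control the question asks for.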
A company uses an Amazon EC2 instance to run a script to poll for and process messages in an Amazon Simple Queue Service (Amazon SQS) queue. The company wants to reduce operational overhead while maintaining its ability to process an increasing number of messages that are added to the queue.
Which solution will meet these requirements?
- A . Increase the size of the EC2 instance to process messages in the SQS queue faster.
- B . Configure an Amazon EventBridge rule to turn off the EC2 instance when the SQS queue is empty.
- C . Migrate the script on the EC2 instance to an AWS Lambda function with an event source of the SQS queue.
- D . Configure an AWS Systems Manager Run Command to run the script on demand.
C
Explanation:
AWS advises using serverless event-driven processing to minimize operational burden and scale automatically with demand. Amazon SQS can be directly configured as an event source for AWS Lambda. With this setup, Lambda polls the queue, invokes the function, and processes messages without requiring the customer to manage infrastructure. Scaling is automatic, based on the volume of messages in the queue.
Option A (increasing instance size) still leaves scaling and management challenges.
Option B reduces cost when idle but does not address scaling.
Option D requires manual triggering and does not meet continuous processing requirements.
Migrating the script to Lambda (C) fully eliminates the need for instance management, providing scalability, reliability, and reduced operational overhead.
Reference:
• Amazon SQS Developer Guide ― Using AWS Lambda with Amazon SQS
• AWS Well-Architected Framework ― Operational Excellence Pillar
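The migrated script would typically become a handler like the sketch below, which also returns `batchItemFailures` so only failed messages are retried (this requires enabling `ReportBatchItemFailures` on the event source mapping). The `process` body is a stand-in for the original script's work.

```python
import json

def handler(event, context=None):
    """Lambda handler for an SQS event source mapping."""
    failures = []
    for record in event["Records"]:
        try:
            body = json.loads(record["body"])
            process(body)
        except Exception:
            # Report only this message as failed; the rest of the
            # batch is deleted from the queue as successful.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

def process(message):
    # Application-specific work previously done by the EC2 polling script.
    if "orderId" not in message:
        raise ValueError("malformed message")
```

Lambda then polls the queue and scales concurrent invocations with queue depth, with no instance to patch or size.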
A company has an application that runs on Amazon EC2 instances in an Auto Scaling group. The application uses hardcoded credentials to access an Amazon RDS database.
To comply with new regulations, the company needs to automatically rotate the database password for the application service account every 90 days.
Which solution will meet these requirements?
- A . Create an AWS Lambda function to generate new passwords and upload them to EC2 instances by using SSH.
- B . Create a secret for the database credentials in AWS Secrets Manager. Enable rotation every 90 days. Modify the application to retrieve credentials from Secrets Manager.
- C . Create an Amazon ECS task to rotate passwords and upload them to EC2 instances.
- D . Create a new EC2 instance that runs a cron job to rotate passwords.
B
Explanation:
Hardcoded database credentials create security risks and operational challenges, especially when compliance requires regular credential rotation. AWS Secrets Manager is the AWS-recommended service for securely storing, managing, and rotating secrets such as database credentials.
Option B is the correct solution because Secrets Manager natively supports automated credential rotation using AWS-managed or custom Lambda functions. By enabling rotation on a 90-day schedule, Secrets Manager automatically updates the credentials in the RDS database and stores the new values securely. The application retrieves credentials dynamically at runtime, eliminating the need for storing passwords on EC2 instances.
Options A, C, and D all rely on custom scripts, SSH access, and manual distribution of secrets, which significantly increases operational overhead, security risk, and failure potential. These approaches also violate AWS best practices by spreading sensitive credentials across multiple hosts.
Secrets Manager integrates with IAM for fine-grained access control, supports auditing through AWS CloudTrail, and improves overall security posture while reducing operational complexity.
Therefore, B best meets the requirements in a secure, scalable, and compliant manner.
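On the application side, replacing the hardcoded credentials usually reduces to a helper like this sketch. The secret layout assumes the JSON shape that RDS rotation functions commonly store (`username`, `password`, `host`); the `client` parameter is injectable only so the parsing can be demonstrated without AWS access.

```python
import json

def get_db_credentials(secret_id, client=None):
    """Fetch and parse RDS credentials from AWS Secrets Manager.

    Called at connection time (not cached indefinitely) so the
    application picks up rotated values automatically.
    """
    if client is None:
        import boto3
        client = boto3.client("secretsmanager")
    secret = client.get_secret_value(SecretId=secret_id)["SecretString"]
    data = json.loads(secret)
    return data["username"], data["password"], data["host"]
```

Fetching on each new connection (or with a short-lived cache) means a 90-day rotation never requires redeploying the application.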
A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The company also wants to minimize the cost and configuration effort required to operate the volume encryption check.
Which solution will meet these requirements?
- A . Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use Amazon EventBridge to schedule an AWS Lambda function to run the API calls.
- B . Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run the API calls on an AWS Fargate task.
- C . Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes. Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt the untagged resources manually.
- D . Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the volume if it is not encrypted.
D
Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. By creating a Config rule, you can automatically check whether your Amazon EBS volumes are encrypted and flag those that are not, with minimal cost and configuration effort.
AWS Config Rule: AWS Config provides managed rules that you can use to automatically check the compliance of your resources against predefined or custom criteria. In this case, you would create a rule to evaluate EBS volumes and determine if they are encrypted. If a volume is not encrypted, the rule will flag it, allowing you to take corrective action.
Operational Overhead: This approach significantly reduces operational overhead because once the rule is in place, it continuously monitors your EBS volumes for compliance, and there’s no need for manual checks or custom scripting.
Why Not Other Options?
Option A (Lambda with API calls and EventBridge): While this can work, it involves writing and maintaining custom code, which increases operational overhead compared to using a managed AWS Config rule.
Option B (API calls on Fargate): Running API calls on Fargate is more complex and costly compared to using AWS Config, which provides a simpler, managed solution.
Option C (IAM policy with Cost Explorer): This option does not directly enforce encryption compliance and involves manual intervention, making it less efficient and more prone to errors.
Reference:
• AWS Config Rules ― Overview of AWS Config rules and how they can be used to evaluate resource configurations.
• Amazon EBS Encryption ― Information on how to manage and enforce encryption for EBS volumes.
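The check itself maps to the AWS managed rule with source identifier `ENCRYPTED_VOLUMES`. The sketch below shows the equivalent compliance logic as a pure function, plus a hedged deployment call; the rule name is a placeholder.

```python
def evaluate_volume(configuration_item):
    """Compliance logic equivalent to the encrypted-volumes managed rule:
    a volume is COMPLIANT only when its Encrypted flag is true."""
    encrypted = configuration_item.get("configuration", {}).get("encrypted", False)
    return "COMPLIANT" if encrypted else "NON_COMPLIANT"

def deploy_managed_rule():
    # Live call: deploy the AWS managed rule, so there is no custom
    # evaluation code to write or maintain.
    import boto3
    boto3.client("config").put_config_rule(ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",   # illustrative name
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    })
```

Once deployed, Config evaluates every volume on configuration change with no schedulers, Lambda functions, or Fargate tasks to operate.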
An ecommerce company is redesigning a product catalog system to handle millions of products and provide fast access to product information. The system needs to store structured product data such as product name, price, description, and category. The system also needs to store unstructured data such as high-resolution product videos and user manuals. The architecture must be highly available and must be able to handle sudden spikes in traffic during large-scale sales events.
Which solution will meet these requirements?
- A . Use an Amazon RDS Multi-AZ deployment to store product information. Store product videos and user manuals in Amazon S3.
- B . Use Amazon DynamoDB to store product information. Store product videos and user manuals in Amazon S3.
- C . Store all product information, including product videos and user manuals, in Amazon DynamoDB.
- D . Deploy an Amazon DocumentDB (with MongoDB compatibility) cluster to store all product information, product videos, and user manuals.
B
Explanation:
Amazon DynamoDB provides single-digit millisecond performance at any scale and is fully managed to handle millions of catalog records. It is ideal for structured catalog data such as product metadata and scales seamlessly during high-traffic events like sales. Amazon S3 is optimized for storing unstructured large objects such as videos and manuals, with virtually unlimited scalability and high durability.
Option A (RDS) would not handle massive scale or traffic spikes as efficiently.
Option C overloads DynamoDB by forcing it to store large binary data, which is not its purpose.
Option D (DocumentDB) is suitable for JSON-like documents but not optimal for storing large media files and would add operational complexity.
Therefore, option B represents the best separation of structured and unstructured data storage.
Reference:
• DynamoDB Developer Guide ― Millisecond performance at scale
• Amazon S3 User Guide ― Storage for unstructured data
• AWS Well-Architected Framework ― Performance Efficiency Pillar
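The separation in option B usually looks like the item shape below: structured attributes in the DynamoDB item, with the large media referenced by S3 keys rather than embedded. Attribute names and the key scheme here are illustrative assumptions.

```python
def build_product_item(product_id, name, price_cents, category, media_bucket):
    """Catalog item sketch: metadata in DynamoDB, media pointers into S3."""
    return {
        "PK": f"PRODUCT#{product_id}",   # partition key (illustrative scheme)
        "name": name,
        "price_cents": price_cents,      # store money as integer cents
        "category": category,
        # Pointers only; the video and manual objects themselves live in S3.
        "video_key": f"s3://{media_bucket}/videos/{product_id}.mp4",
        "manual_key": f"s3://{media_bucket}/manuals/{product_id}.pdf",
    }
```

Keeping items small this way also stays well under DynamoDB's 400 KB item size limit, which option C would quickly exceed with video content.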
A company provides a trading platform to customers. The platform uses an Amazon API Gateway REST API, AWS Lambda functions, and an Amazon DynamoDB table. Each trade that the platform processes invokes a Lambda function that stores the trade data in Amazon DynamoDB. The company wants to ingest trade data into a data lake in Amazon S3 for near real-time analysis.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon S3.
- B . Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.
- C . Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure Kinesis Data Streams to invoke a Lambda function that writes the data to Amazon S3.
- D . Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure a data stream to be the input for Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.
A
Explanation:
DynamoDB Streams: Captures real-time changes in DynamoDB tables and allows integration with Lambda for processing the changes.
Minimal Operational Overhead: Using a Lambda function directly to write data to S3 ensures simplicity and reduces the complexity of the pipeline.
Amazon DynamoDB Streams Documentation
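A stream-triggered Lambda for option A might look like the sketch below: flatten DynamoDB's typed attribute format from each `INSERT` record's `NewImage` and write the batch to S3 as JSON. The bucket name and key scheme are hypothetical, `s3_put` is injectable only for local exercise, and only simple scalar types are handled.

```python
import json

def deserialize(image):
    """Flatten DynamoDB's typed attribute format ({'S': ...}, {'N': ...})
    for the simple scalar types a trade record typically uses."""
    out = {}
    for key, typed in image.items():
        (dynamo_type, value), = typed.items()
        out[key] = float(value) if dynamo_type == "N" else value
    return out

def handler(event, context=None, s3_put=None):
    # One S3 object per batch of INSERT events.
    if s3_put is None:
        import boto3
        s3 = boto3.client("s3")
        s3_put = lambda key, body: s3.put_object(
            Bucket="trade-data-lake", Key=key, Body=body)  # hypothetical bucket
    trades = [deserialize(r["dynamodb"]["NewImage"])
              for r in event["Records"] if r["eventName"] == "INSERT"]
    if trades:
        key = f"trades/{event['Records'][0]['eventID']}.json"
        s3_put(key, json.dumps(trades))
    return len(trades)
```

Note the stream view type must include new images (e.g. `NEW_IMAGE`) for `NewImage` to be present on the records.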
