Practice Free SAA-C03 Exam Online Questions
A media company hosts a web application on AWS. The application gives users the ability to upload and view videos. The application stores the videos in an Amazon S3 bucket. The company wants to ensure that only authenticated users can upload videos. Authenticated users must have the ability to upload videos only within a specified time frame after authentication.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure the application to generate IAM temporary security credentials for authenticated users.
- B . Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.
- C . Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.
- D . Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
B
Explanation:
Pre-Signed URLs: Allow temporary access to S3 buckets, making it easy to manage time-limited upload permissions without complex operational overhead.
Lambda for Automation: Automatically generates and provides pre-signed URLs when users authenticate, minimizing manual steps and code complexity.
Least Operational Overhead: Requires no custom authentication service or deep integration with STS or Cognito.
Reference: Amazon S3 Pre-Signed URLs Documentation
A company hosts an application in an Amazon EC2 Auto Scaling group. The company has observed that during periods of high demand, new instances take too long to join the Auto Scaling group and serve the increased demand. The company determines that the root cause of the issue is the long boot time of the instances in the Auto Scaling group. The company needs to reduce the time required to launch new instances to respond to demand.
Which solution will meet this requirement?
- A . Increase the maximum capacity of the Auto Scaling group by 50%.
- B . Create a warm pool for the Auto Scaling group. Use the default specification for the warm pool size.
- C . Increase the health check grace period for the Auto Scaling group by 50%.
- D . Create a scheduled scaling action. Set the desired capacity equal to the maximum capacity of the Auto Scaling group.
B
Explanation:
A warm pool is an Auto Scaling feature that keeps instances in a pre-initialized state so they can quickly join the active group when scaling is required. This reduces the time needed for instance bootstrapping and makes new capacity available almost instantly.
Option A only increases capacity limits but does not address slow boot times.
Option C merely extends grace periods without solving the delay.
Option D forces overprovisioning, which is wasteful and not aligned with cost optimization. Using a warm pool (B) directly addresses the problem by reducing response time to scaling events.
Reference:
• Amazon EC2 Auto Scaling User Guide ― Warm pools for scaling faster
• AWS Well-Architected Framework ― Performance Efficiency Pillar: Optimizing responsiveness
A multinational company operates in multiple AWS Regions. The company must ensure that its developers and administrators have secure, role-based access to AWS resources.
The roles must be specific to each user’s geographic location and job responsibilities.
The company wants to implement a solution to ensure that each team can access only resources within the team’s Region. The company wants to use its existing directory service to manage user access. The existing directory service organizes users into roles based on location. The system must be capable of integrating seamlessly with multi-factor authentication (MFA).
Which solution will meet these requirements?
- A . Use AWS Security Token Service (AWS STS) to generate temporary access tokens. Integrate STS with the directory service. Assign Region-specific roles.
- B . Configure AWS IAM Identity Center with federated access. Integrate IAM Identity Center with the directory service to set up Region-specific IAM roles.
- C . Create IAM managed policies that restrict access by location. Apply policies based on group membership in the directory.
- D . Use custom Lambda functions to dynamically assign IAM policies based on login location and job function.
B
Explanation:
IAM Identity Center (formerly AWS SSO) is designed for:
Federated access from external directories (e.g., Active Directory, Okta)
Centralized permission management
Support for MFA
Granular control via Attribute-based access control (ABAC)
“IAM Identity Center allows you to manage SSO access to AWS accounts and business applications centrally. You can assign users and groups permissions based on directory attributes such as Region and job role.”
― IAM Identity Center Docs
This option ensures:
Federated, centralized access
Region-specific permissions
MFA and role mapping via the existing directory service
Reference: IAM Identity Center (SSO) Overview
Set Up Attribute-Based Access Control
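To make the ABAC idea concrete, here is a sketch of a permission-set policy that restricts actions to the Region carried in the user's directory attribute. The tag name `Region` and the `ec2:*` action scope are assumptions for illustration.

```python
import json

# ABAC policy sketch: allow actions only in the Region stored in the
# principal's "Region" attribute, which IAM Identity Center can pass
# through from the directory as a session tag.
region_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    # Request is allowed only when its target Region
                    # matches the user's directory-sourced Region tag.
                    "aws:RequestedRegion": "${aws:PrincipalTag/Region}"
                }
            },
        }
    ],
}

policy_json = json.dumps(region_policy, indent=2)
```

One policy document then serves every team: the directory attribute, not a per-team policy, determines which Region a user can touch.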
An insurance company runs an application on premises to process contracts. The application processes jobs that are comprised of many tasks. The individual tasks run for up to 5 minutes. Some jobs can take up to 24 hours in total to finish. If a task fails, the task must be reprocessed.
The company wants to migrate the application to AWS. The company will use Amazon S3 as part of the solution. The company wants to configure jobs to start automatically when a contract is uploaded to an S3 bucket.
Which solution will meet these requirements?
- A . Use AWS Lambda functions to process individual tasks. Create a primary Lambda function to handle the overall job processing by calling individual Lambda functions in sequence. Configure the S3 bucket to send an event notification to invoke the primary Lambda function to begin processing.
- B . Use a state machine in AWS Step Functions to handle the overall contract processing job. Configure the S3 bucket to send an event notification to Amazon EventBridge. Create a rule in Amazon EventBridge to target the state machine.
- C . Use an AWS Batch job to handle the overall contract processing job. Configure the S3 bucket to send an event notification to initiate the Batch job.
- D . Use an S3 event notification to notify an Amazon Simple Queue Service (Amazon SQS) queue when a contract is uploaded. Configure an AWS Lambda function to read messages from the queue and to run the contract processing job.
B
Explanation:
AWS Step Functions supports long-running workflows and error retries, making it ideal for a job composed of many tasks. Integration with EventBridge allows automatic triggering from S3 events. This setup is resilient and supports up to 1-year execution duration.
Reference: AWS Documentation ― AWS Step Functions with Amazon EventBridge for Long-Running Workflows
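Two configuration fragments sketch the moving parts of option B: the EventBridge pattern that fires on uploads to the contracts bucket (bucket name is an assumption, and EventBridge notifications must be enabled on the bucket), and a Step Functions task state whose `Retry` block reprocesses a failed task automatically.

```python
# EventBridge rule pattern: match S3 "Object Created" events for the
# (hypothetical) contracts bucket and route them to the state machine.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["contracts-bucket"]}},
}

# Step Functions task-state sketch: retry a failed 5-minute task with
# exponential backoff instead of failing the 24-hour job outright.
task_state = {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke",
    "Retry": [
        {
            "ErrorEquals": ["States.TaskFailed"],
            "IntervalSeconds": 30,
            "MaxAttempts": 3,
            "BackoffRate": 2.0,
        }
    ],
    "End": True,
}
```

A standard state machine can run for up to a year, so the 24-hour job fits comfortably, and each task's retries are declarative rather than hand-coded.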
A company runs a Node.js function on a server in its on-premises data center. The data center stores data in a PostgreSQL database. The company stores the credentials in a connection string in an environment variable on the server. The company wants to migrate its application to AWS and to replace the Node.js application server with AWS Lambda. The company also wants to migrate to Amazon RDS for PostgreSQL and to ensure that the database credentials are securely managed.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Store the database credentials as a parameter in AWS Systems Manager Parameter Store. Configure Parameter Store to automatically rotate the secrets every 30 days. Update the Lambda function to retrieve the credentials from the parameter.
- B . Store the database credentials as a secret in AWS Secrets Manager. Configure Secrets Manager to automatically rotate the credentials every 30 days. Update the Lambda function to retrieve the credentials from the secret.
- C . Store the database credentials as an encrypted Lambda environment variable. Write a custom Lambda function to rotate the credentials. Schedule the Lambda function to run every 30 days.
- D . Store the database credentials as a key in AWS Key Management Service (AWS KMS). Configure automatic rotation for the key. Update the Lambda function to retrieve the credentials from the KMS key.
B
Explanation:
AWS Secrets Manager is designed specifically to securely store and manage sensitive information such as database credentials. It integrates seamlessly with AWS services like Lambda and RDS, and it provides automatic credential rotation with minimal operational overhead.
AWS Secrets Manager: By storing the database credentials in Secrets Manager, you ensure that the credentials are securely stored, encrypted, and managed. Secrets Manager provides a built-in mechanism to automatically rotate credentials at regular intervals (e.g., every 30 days), which helps in maintaining security best practices without requiring additional manual intervention.
Lambda Integration: The Lambda function can be easily configured to retrieve the credentials from Secrets Manager using the AWS SDK, ensuring that the credentials are accessed securely at runtime.
Why Not Other Options?
Option A (Parameter Store with Rotation): While Parameter Store can store parameters securely, Secrets Manager is more tailored for secrets management and automatic rotation, offering more features and less operational overhead.
Option C (Encrypted Lambda environment variable): Storing credentials directly in Lambda environment variables, even when encrypted, requires custom code to manage rotation, which increases operational complexity.
Option D (KMS with automatic rotation): KMS is for managing encryption keys, not for storing and rotating secrets like database credentials. This option would require more custom implementation to manage credentials securely.
Reference:
AWS Secrets Manager ― Detailed documentation on how to store, manage, and rotate secrets using AWS Secrets Manager.
Using Secrets Manager with AWS Lambda ― Guidance on integrating Secrets Manager with Lambda for secure credential management.
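A sketch of the Lambda side of option B. Secrets Manager stores RDS credentials as a JSON document with standard keys (`username`, `password`, `host`, `port`, `dbname`); the retrieval call is shown but not executed here, since it needs real credentials, while the parsing step runs offline.

```python
import json

def connection_string(secret_string):
    """Build a PostgreSQL connection string from a Secrets Manager
    secret, replacing the on-premises environment variable. Key names
    follow the standard RDS secret structure."""
    s = json.loads(secret_string)
    return (
        f"postgresql://{s['username']}:{s['password']}"
        f"@{s['host']}:{s['port']}/{s['dbname']}"
    )

def get_secret(secret_id):
    # Not executed in this sketch; a real Lambda function would call
    # this with its execution-role credentials at invocation time.
    import boto3
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_id)["SecretString"]

# Offline demonstration with a sample secret in the standard format.
sample = json.dumps({
    "username": "app", "password": "p",
    "host": "db.example.internal", "port": 5432, "dbname": "contracts",
})
conn = connection_string(sample)
```

Because the function reads the secret at runtime, a 30-day rotation never requires redeploying the Lambda function.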
A transaction-processing company has weekly batch jobs that run on Amazon EC2 instances in an Auto Scaling group. Transaction volume varies, but CPU utilization is always at least 60% during the batch runs. Capacity must be provisioned 30 minutes before the jobs begin.
Engineers currently scale the Auto Scaling group manually. The company needs an automated solution but cannot allocate time to analyze scaling trends.
Which solution will meet these requirements with the least operational overhead?
- A . Create a dynamic scaling policy based on CPU utilization at 60%.
- B . Create a scheduled scaling policy. Set desired, minimum, and maximum capacity. Set recurrence weekly. Set the start time to 30 minutes before the jobs run.
- C . Create a predictive scaling policy that forecasts CPU usage and pre-launches instances 30 minutes before the jobs run.
- D . Create an EventBridge rule that invokes a Lambda function when CPU reaches 60%. The Lambda function increases the Auto Scaling group size by 20%.
C
Explanation:
Predictive scaling automatically analyzes historical workload patterns, forecasts future capacity needs, and launches instances ahead of time. AWS documentation states that predictive scaling is designed for workloads with recurring, cyclical patterns―such as scheduled weekly batch jobs.
It also supports "pre-launching" capacity before peak demand. This eliminates manual trend analysis and delivers the lowest operational overhead.
Scheduled scaling (Option B) works but requires manual calculation of capacity numbers and updating if patterns change. Predictive scaling removes this burden entirely.
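A sketch of the `PutScalingPolicy` parameters for option C. The group name, policy name, and 60% CPU target are assumptions taken from the scenario; `SchedulingBufferTime` of 1800 seconds launches instances 30 minutes ahead of the forecast.

```python
# Parameters for EC2 Auto Scaling's PutScalingPolicy API configuring
# predictive scaling. Not sent to AWS here; shown as a config sketch.
policy_params = {
    "AutoScalingGroupName": "batch-asg",           # hypothetical name
    "PolicyName": "weekly-batch-predictive",
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [
            {
                # Keep forecasted CPU at the 60% the jobs sustain.
                "TargetValue": 60.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        # Forecast AND act on it (vs. ForecastOnly for evaluation).
        "Mode": "ForecastAndScale",
        # Pre-launch instances 30 minutes (1800 s) before forecast need.
        "SchedulingBufferTime": 1800,
    },
}
```

The forecasting itself is driven by historical CloudWatch data, which is why no manual trend analysis is needed.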
An ecommerce company runs a multi-tier application on AWS. The frontend and backend tiers run on Amazon EC2 instances. The database tier runs on an Amazon RDS for MySQL DB instance.
The application makes frequent calls to return identical datasets from the database. These frequent calls cause performance slowdowns. A solutions architect must improve the performance of the application backend.
Which solution will meet this requirement?
- A . Configure an Amazon Simple Notification Service (Amazon SNS) topic between the EC2 instances and the RDS DB instance.
- B . Configure an Amazon ElastiCache (Redis OSS) cache. Configure the backend EC2 instances to read from the cache.
- C . Configure an Amazon DynamoDB Accelerator (DAX) cluster. Configure the backend EC2 instances to read from the cluster.
- D . Configure Amazon Data Firehose to stream the calls to the database.
B
Explanation:
The key issue is repeated reads of identical data from an RDS MySQL database, which leads to unnecessary database load and degraded performance. The AWS-recommended solution for this pattern is to introduce an in-memory cache between the application and the database.
Amazon ElastiCache (Redis OSS) is purpose-built for caching frequently accessed data with microsecond-level latency. By caching identical query results, the backend EC2 instances can serve responses directly from memory instead of repeatedly querying the database. This significantly reduces read pressure on the RDS instance and improves overall application performance and scalability.
Option B is correct because Redis integrates cleanly with EC2-based applications and supports advanced caching patterns such as key expiration, eviction policies, and fine-grained control over cached objects. This approach also improves resilience during traffic spikes by offloading work from the database.
Option A is incorrect because SNS is a messaging service and does not cache or accelerate database queries.
Option C is incorrect because DynamoDB Accelerator (DAX) works only with DynamoDB tables, not Amazon RDS.
Option D is designed for streaming data ingestion and analytics, not for optimizing synchronous database access.
Therefore, B is the correct solution because it addresses the root cause of the performance problem by caching repeated database reads, following AWS best practices for high-performing application architectures.
A company hosts a web application in a VPC on AWS. A public Application Load Balancer (ALB) forwards connections from the internet to an Auto Scaling group of Amazon EC2 instances. The Auto Scaling group runs in private subnets across four Availability Zones.
The company stores data in an Amazon S3 bucket in the same Region. The EC2 instances use NAT gateways in each Availability Zone for outbound internet connectivity.
The company wants to optimize costs for its AWS architecture.
Which solution will meet this requirement?
- A . Reconfigure the Auto Scaling group and the ALB to use two Availability Zones instead of four. Do not change the desired count or scaling metrics for the Auto Scaling group to maintain application availability.
- B . Create a new, smaller VPC that still has sufficient IP address availability to run the application. Redeploy the application stack in the new VPC. Delete the existing VPC and its resources.
- C . Deploy an S3 gateway endpoint to the VPC. Configure the EC2 instances to access the S3 bucket through the S3 gateway endpoint.
- D . Deploy an S3 interface endpoint to the VPC. Configure the EC2 instances to access the S3 bucket through the S3 interface endpoint.
C
Explanation:
Using S3 gateway endpoints allows private and cost-free access to S3 without routing traffic through a NAT gateway. NAT gateway traffic incurs charges, especially when used across multiple Availability Zones.
By using an S3 gateway endpoint, EC2 instances in private subnets can access S3 directly without needing internet access, reducing both data transfer and NAT gateway costs.
Interface endpoints are more expensive and typically used for services like API Gateway or Systems Manager.
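A sketch of the `CreateVpcEndpoint` parameters for option C. The VPC, route table IDs, and Region are placeholders; associating the private subnets' route tables is what steers S3 traffic away from the NAT gateways.

```python
# Parameters for EC2's CreateVpcEndpoint API (not sent to AWS here).
# A gateway endpoint adds an S3 prefix-list route to the associated
# route tables, so instance-to-S3 traffic stays on the AWS network
# and bypasses the per-GB NAT gateway data-processing charges.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0abc1234",                        # placeholder ID
    "ServiceName": "com.amazonaws.us-east-1.s3",    # assumed Region
    "RouteTableIds": ["rtb-0aaa1111", "rtb-0bbb2222"],  # private subnets
}
```

Unlike an interface endpoint, the gateway endpoint itself carries no hourly or per-GB charge, which is why it is the cost-optimal choice here.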
A company stores data in Amazon S3. According to regulations, the data must not contain personally identifiable information (PII). The company recently discovered that S3 buckets have some objects that contain PII. The company needs to automatically detect PII in S3 buckets and to notify the company’s security team.
Which solution will meet these requirements?
- A . Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type from Macie findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
- B . Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
- C . Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData:S3Object/Personal event type from Macie findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the security team.
- D . Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the security team.
A
Explanation:
Amazon Macie: Detects sensitive data such as PII in S3 buckets using machine learning.
EventBridge Rule: Filters Macie findings for specific sensitive data events (e.g., SensitiveData).
SNS Notification: Provides real-time alerts to the security team for immediate action.
Reference: Amazon Macie Documentation, Amazon EventBridge Documentation
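An EventBridge rule pattern for option A might look like the following sketch. Macie publishes findings to EventBridge with source `aws.macie`; the prefix filter on the finding type is an assumption chosen to match the whole SensitiveData family.

```python
# EventBridge rule pattern matching Macie findings whose type begins
# with "SensitiveData" (PII-in-S3 findings). The rule's target would
# be the security team's SNS topic.
event_pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {
        # Content filtering: match any SensitiveData finding type.
        "type": [{"prefix": "SensitiveData"}],
    },
}
```

Pairing the rule with an SNS topic gives the security team push notifications without any polling code.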
A company currently stores 5 TB of data in on-premises block storage systems. The company’s current storage solution provides limited space for additional data. The company runs applications on premises that must be able to retrieve frequently accessed data with low latency. The company requires a cloud-based storage solution.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Use Amazon S3 File Gateway. Integrate S3 File Gateway with the on-premises applications to store and directly retrieve files by using the SMB file system.
- B . Use an AWS Storage Gateway Volume Gateway with cached volumes as iSCSI targets.
- C . Use an AWS Storage Gateway Volume Gateway with stored volumes as iSCSI targets.
- D . Use an AWS Storage Gateway Tape Gateway. Integrate Tape Gateway with the on-premises applications to store virtual tapes in Amazon S3.
B
Explanation:
The company needs a cloud-based storage solution for frequently accessed data with low latency, while retaining their current on-premises infrastructure for some data storage. AWS Storage Gateway’s Volume Gateway with cached volumes is the most appropriate solution for this scenario.
AWS Storage Gateway – Volume Gateway (Cached Volumes):
Volume Gateway with cached volumes allows you to store frequently accessed data in the AWS Cloud while keeping the most recently accessed data cached locally on-premises. This ensures low-latency access to active data while providing scalability for the rest of the data in the cloud.
The cached volume option stores the primary data in Amazon S3 but caches frequently accessed data locally, ensuring fast access. This configuration is well-suited for applications that require fast access to frequently used data but can tolerate cloud-based storage for the rest.
Since the company is facing limited on-premises storage, cached volumes provide an ideal solution, as they reduce the need for additional on-premises storage infrastructure.
Why Not the Other Options?
Option A (S3 File Gateway): S3 File Gateway provides a file-based interface (SMB/NFS) for storing data directly in S3. While it is great for file storage, the company’s need for block-level storage with iSCSI targets makes Volume Gateway a better fit.
Option C (Volume Gateway – Stored Volumes): Stored volumes keep all the data on-premises and asynchronously back up to AWS. This would not address the company’s storage limitations since they would still need substantial on-premises storage.
Option D (Tape Gateway): Tape Gateway is designed for archiving and backup, not for frequently accessed low-latency data.
Reference: AWS Storage Gateway ― Volume Gateway
