Practice Free SAA-C03 Exam Online Questions
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database’s performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?
- A . Use Amazon Redshift with a single node for leader and compute functionality.
- B . Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
- C . Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
- D . Use Amazon ElastiCache (Memcached) with EC2 Spot Instances.
C
Explanation:
Amazon Aurora MySQL-compatible offers a distributed, fault-tolerant storage system with Multi-AZ high availability and supports Aurora Replicas that “share the same underlying storage” for low-latency reads. Aurora provides Aurora Auto Scaling to “automatically add or remove Aurora Replicas based on load,” ideal for unpredictable, read-heavy workloads. This architecture offloads reads from the writer and maintains HA through automatic failover. Amazon RDS Single-AZ (B) lacks HA. Redshift
(A) is a data warehouse, not a transactional DB. ElastiCache (D) can reduce read pressure but does not provide durable read replicas or automatic HA scaling at the database tier. Aurora’s design directly addresses the requirement to automatically scale reads while maintaining availability, matching Well-Architected guidance to use managed, elastic services for variable demand.
Reference: Amazon Aurora User Guide ― “Aurora Replicas,” “Aurora Auto Scaling,” “High availability and durability”; AWS Well-Architected Framework ― Performance Efficiency, Reliability (managed, elastic databases).
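A minimal sketch of how Aurora Auto Scaling for Aurora Replicas can be configured through the Application Auto Scaling API, assuming a hypothetical cluster identifier (my-aurora-cluster) and illustrative capacity limits and CPU target:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the cluster's Aurora Replica count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",        # hypothetical cluster ID
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Target-tracking policy: add or remove Aurora Replicas to keep the
# average reader CPU utilization near 60%.
autoscaling.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```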
A company has applications that run in an organization in AWS Organizations. The company outsources operational support of the applications. The company needs to provide access for the external support engineers without compromising security.
The external support engineers need access to the AWS Management Console. The external support engineers also need operating system access to the company’s fleet of Amazon EC2 instances that run Amazon Linux in private subnets.
Which solution will meet these requirements MOST securely?
- A . Confirm that AWS Systems Manager Agent (SSM Agent) is installed on all instances. Assign an instance profile with the necessary policy to connect to Systems Manager. Use AWS IAM Identity Center to provide the external support engineers console access. Use Systems Manager Session Manager to assign the required permissions.
- B . Confirm that AWS Systems Manager Agent (SSM Agent) is installed on all instances. Assign an instance profile with the necessary policy to connect to Systems Manager. Use Systems Manager Session Manager to provide local IAM user credentials in each AWS account to the external support engineers for console access.
- C . Confirm that all instances have a security group that allows SSH access only from the external support engineers’ source IP address ranges. Provide local IAM user credentials in each AWS account to the external support engineers for console access. Provide each external support engineer an SSH key pair to log in to the application instances.
- D . Create a bastion host in a public subnet. Set up the bastion host security group to allow access from only the external engineers’ IP address ranges. Ensure that all instances have a security group that allows SSH access from the bastion host. Provide each external support engineer an SSH key pair to log in to the application instances. Provide local account IAM user credentials to the engineers for console access.
A
Explanation:
This solution provides the most secure access for external support engineers with the least exposure to potential security risks.
AWS Systems Manager (SSM) and Session Manager: Systems Manager Session Manager allows secure and auditable access to EC2 instances without the need to open inbound SSH ports or manage SSH keys. This reduces the attack surface significantly. The SSM Agent must be installed and configured on all instances, and the instances must have an instance profile with the necessary IAM permissions to connect to Systems Manager.
IAM Identity Center: IAM Identity Center provides centralized management of access to the AWS Management Console for external support engineers. By using IAM Identity Center, you can control console access securely and ensure that external engineers have the appropriate permissions based on their roles.
Why Not Other Options?
Option B (Local IAM user credentials): This approach is less secure because it involves managing local IAM user credentials and does not leverage the centralized management and security benefits of IAM Identity Center.
Option C (Security group with SSH access): Allowing SSH access opens up the infrastructure to potential security risks, even when restricted by IP addresses. It also requires managing SSH keys, which can be cumbersome and less secure.
Option D (Bastion host): While a bastion host can secure SSH access, it still requires managing SSH keys and opening ports. This approach is less secure and more operationally intensive compared to using Session Manager.
Reference: AWS Systems Manager Session Manager ― documentation on using Session Manager for secure instance access.
AWS IAM Identity Center ― overview of IAM Identity Center and its capabilities for managing user access.
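A minimal sketch of how Session Manager access looks once the SSM Agent and instance profile are in place, using boto3; the instance ID is hypothetical, and the console or AWS CLI ultimately calls the same StartSession API:

```python
import boto3

ssm = boto3.client("ssm")

# Instances appear here only if the SSM Agent is running and the
# instance profile grants the Systems Manager permissions.
managed = ssm.describe_instance_information()
for info in managed["InstanceInformationList"]:
    print(info["InstanceId"], info["PingStatus"], info["PlatformName"])

# Open a session on a managed instance without SSH keys or open inbound ports.
# The console and `aws ssm start-session` use this same API.
session = ssm.start_session(Target="i-0123456789abcdef0")  # hypothetical instance ID
print(session["SessionId"])
```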
A company has deployed a multi-tier web application to support a website. The architecture includes an Application Load Balancer (ALB) in public subnets, two Amazon Elastic Container Service (Amazon ECS) tasks in the public subnets, and a PostgreSQL cluster that runs on Amazon EC2 instances in private subnets.
The EC2 instances that host the PostgreSQL database run shell scripts that need to access an external API to retrieve product information. A solutions architect must design a solution to allow the EC2 instances to securely communicate with the external API without increasing operational overhead.
Which solution will meet these requirements?
- A . Assign public IP addresses to the EC2 instances in the private subnets. Configure security groups to allow outbound internet access.
- B . Configure a NAT gateway in the public subnets. Update the route table for the private subnets to route traffic to the NAT gateway.
- C . Configure a VPC peering connection between the private subnets and a public subnet that has access to the external API.
- D . Deploy an interface VPC endpoint to securely connect to the external API.
B
Explanation:
EC2 instances in private subnets cannot access the internet unless there is a NAT gateway or a NAT instance configured.
“To enable instances in a private subnet to connect to the internet or other AWS services, you can use a NAT gateway or NAT instance.”
― NAT gateways, Amazon VPC documentation.
In this use case:
EC2 instances are in private subnets
They need to call external APIs (internet access)
The most operationally efficient and secure method is to place a NAT Gateway in a public subnet and update the route table for private subnets to route internet-bound traffic through it.
Incorrect options:
A: Assigning public IP addresses to instances in private subnets does not provide internet access because the private route table has no route to an internet gateway.
C: VPC peering does not provide a path to the public internet.
D: Interface VPC endpoints provide private connectivity to AWS services, not to external third-party APIs.
Reference: NAT gateway documentation; VPC best practices.
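A minimal boto3 sketch of option B, assuming hypothetical subnet and route table IDs; it allocates an Elastic IP, creates the NAT gateway in a public subnet, and adds a default route for the private subnet:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1111111111111",       # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route internet-bound traffic from the private subnet through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0priv222222222222",      # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```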
A company is developing a rating system for its ecommerce web application. The company needs a solution to save ratings that users submit in an Amazon DynamoDB table.
The company wants to ensure that developers do not need to interact directly with the DynamoDB table. The solution must be scalable and reusable.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an Application Load Balancer (ALB). Create an AWS Lambda function, and set the function as a target group in the ALB. Invoke the Lambda function by using the put_item method through the ALB.
- B . Create an AWS Lambda function. Configure the Lambda function to interact with the DynamoDB table by using the put-item method from Boto3. Invoke the Lambda function from the web application.
- C . Create an Amazon Simple Queue Service (Amazon SQS) queue and an AWS Lambda function that has an SQS trigger type. Instruct the developers to add customer ratings to the SQS queue as JSON messages. Configure the Lambda function to fetch the ratings from the queue and store the ratings in DynamoDB.
- D . Create an Amazon API Gateway REST API. Define a resource and create a new POST method. Choose AWS as the integration type, and select DynamoDB as the service. Set the action to PutItem.
D
Explanation:
Amazon API Gateway provides a scalable and reusable solution for interacting with DynamoDB without requiring direct access by developers. By setting up a REST API with a POST method that integrates with DynamoDB’s PutItem action, developers can submit data (such as user ratings) to the DynamoDB table through API Gateway, without having to directly interact with the database. This solution is serverless and minimizes operational overhead.
Option A: Using ALB with Lambda adds complexity and is less efficient for this use case.
Option B: While using Lambda is possible, API Gateway provides a more scalable, reusable interface.
Option C: SQS with Lambda introduces unnecessary components for a simple put operation.
Reference: Amazon API Gateway with DynamoDB.
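A minimal boto3 sketch of the REST API with a direct AWS service integration to the DynamoDB PutItem action; the API name, table name, resource path, Region, and IAM role ARN are hypothetical, and method/integration responses and deployment are omitted for brevity:

```python
import boto3

apigw = boto3.client("apigateway")

# REST API with a /ratings resource.
api = apigw.create_rest_api(name="ratings-api")
root_id = apigw.get_resources(restApiId=api["id"])["items"][0]["id"]
ratings = apigw.create_resource(
    restApiId=api["id"], parentId=root_id, pathPart="ratings"
)

# POST /ratings maps straight to the DynamoDB PutItem action.
apigw.put_method(
    restApiId=api["id"], resourceId=ratings["id"],
    httpMethod="POST", authorizationType="NONE",
)
apigw.put_integration(
    restApiId=api["id"], resourceId=ratings["id"], httpMethod="POST",
    type="AWS",
    integrationHttpMethod="POST",
    uri="arn:aws:apigateway:us-east-1:dynamodb:action/PutItem",
    credentials="arn:aws:iam::123456789012:role/ApiGatewayDynamoDBRole",
    requestTemplates={
        "application/json": """{
            "TableName": "Ratings",
            "Item": {
                "ratingId": {"S": "$context.requestId"},
                "score": {"N": "$input.path('$.score')"}
            }
        }"""
    },
)
```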
An ecommerce company hosts an application on AWS across multiple Availability Zones. The application experiences uniform load throughout most days.
The company hosts some components of the application in private subnets. The components need to access the internet to install and update patches.
A solutions architect needs to design a cost-effective solution that provides secure outbound internet connectivity for private subnets across multiple Availability Zones. The solution must maintain high availability.
Which solution will meet these requirements?
- A . Deploy one NAT gateway in each Availability Zone. Configure the route table for each pri-vate subnet within an Availability Zone to route outbound traffic through the NAT gateway in the same Availability Zone.
- B . Place one NAT gateway in a designated Availability Zone within the VPC. Configure the route tables of the private subnets in each Availability Zone to direct outbound traffic specifi-cally through the NAT gateway for internet access.
- C . Deploy an Amazon EC2 instance in a public subnet. Configure the EC2 instance as a NAT instance. Set up the instance with security groups that allow inbound traffic from private sub-nets and outbound internet access. Configure route tables to direct traffic from the private sub-nets through the NAT instance.
- D . Use one NAT Gateway in a Network Load Balancer (NLB) target group. Configure private subnets in each Availability Zone to route traffic to the NLB for outbound internet access.
A
Explanation:
AWS guidance for NAT Gateway recommends deploying “a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.” This provides “zone-independent architecture” and avoids cross-AZ data processing charges and single-AZ failures.
Option B creates a single point of failure and incurs cross-AZ egress charges when private subnets in other AZs traverse a centralized NAT. NAT instances (C) are legacy, require manual scaling/failover/patching, and are not recommended for production HA.
Option D is not supported (NLB cannot front a NAT Gateway as a target). With steady, uniform load, per-AZ NAT Gateways deliver high availability with predictable cost; routing each private subnet to its local NAT Gateway maintains security (no inbound initiated connections) and resilience. This meets the requirement for cost-effective, secure outbound connectivity across multiple AZs while preserving availability.
Reference: VPC NAT Gateway documentation ― Multi-AZ best practices and same-AZ routing; AWS Well-Architected Framework ― Reliability and Cost Optimization (avoid single points of failure; minimize cross-AZ data transfer).
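A minimal sketch of the same-AZ routing piece, assuming the NAT gateways already exist in each Availability Zone; all route table and NAT gateway IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Each private route table points at the NAT gateway in its own AZ.
same_az_routes = {
    "rtb-0priv1a111111111": "nat-0aaa1111111111111",  # us-east-1a
    "rtb-0priv1b222222222": "nat-0bbb2222222222222",  # us-east-1b
    "rtb-0priv1c333333333": "nat-0ccc3333333333333",  # us-east-1c
}

for route_table_id, nat_gateway_id in same_az_routes.items():
    # Keeping the default route inside its own AZ avoids cross-AZ data
    # processing charges and prevents one AZ's failure from affecting others.
    ec2.create_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_gateway_id,
    )
```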
A company is developing a highly available natural language processing (NLP) application. The application handles large volumes of concurrent requests. The application performs NLP tasks such as entity recognition, sentiment analysis, and key phrase extraction on text data.
The company needs to store data that the application processes in a highly available and scalable database.
Which solution will meet these requirements?
- A . Create an Amazon API Gateway REST API endpoint to handle incoming requests. Configure the REST API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Comprehend to perform NLP tasks on the text data. Store the processed data in Amazon DynamoDB.
- B . Create an Amazon API Gateway HTTP API endpoint to handle incoming requests. Configure the HTTP API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Translate to perform NLP tasks on the text data. Store the processed data in Amazon ElastiCache.
- C . Create an Amazon SQS queue to buffer incoming requests. Deploy the NLP application on Amazon EC2 instances in an Auto Scaling group. Use Amazon Comprehend to perform NLP tasks. Store the processed data in an Amazon RDS database.
- D . Create an Amazon API Gateway WebSocket API endpoint to handle incoming requests. Configure the WebSocket API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Textract to perform NLP tasks on the text data. Store the processed data in Amazon ElastiCache.
A
Explanation:
Amazon Comprehend is the managed NLP service that performs entity recognition, sentiment analysis, and key phrase extraction, and Amazon DynamoDB is a fully managed, highly available database that scales to handle large volumes of concurrent requests. API Gateway with Lambda provides a serverless front end for those requests. Option B uses Amazon Translate, which performs translation rather than the required NLP tasks, and ElastiCache is an in-memory cache rather than a durable data store. Option C adds an EC2 fleet to manage and relies on Amazon RDS, which scales less readily for highly concurrent workloads. Option D uses Amazon Textract, which extracts text from documents rather than performing the required NLP tasks, and also stores data in ElastiCache.
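A minimal sketch of the Lambda function in option A, assuming hypothetical event fields (text, documentId) and a hypothetical table name (NlpResults):

```python
import boto3

comprehend = boto3.client("comprehend")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("NlpResults")  # hypothetical table name


def handler(event, context):
    """Invoked by API Gateway; event fields are assumptions."""
    text = event["text"]

    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    entities = comprehend.detect_entities(Text=text, LanguageCode="en")
    phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")

    table.put_item(
        Item={
            "documentId": event["documentId"],  # assumed partition key
            "sentiment": sentiment["Sentiment"],
            "entities": [e["Text"] for e in entities["Entities"]],
            "keyPhrases": [p["Text"] for p in phrases["KeyPhrases"]],
        }
    )
    return {"statusCode": 200}
```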
A gaming company is building an application that uses a database to store user data. The company wants the database to have an active-active configuration that allows data writes to a secondary AWS Region. The database must achieve a sub-second recovery point objective (RPO).
Which solution will meet these requirements?
- A . Deploy an Amazon ElastiCache (Redis OSS) cluster. Configure a global data store for disaster recovery. Configure the ElastiCache cluster to cache data from an Amazon RDS database that is deployed in the primary Region.
- B . Deploy an Amazon DynamoDB table in the primary Region and the secondary Region. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function to write changes from the table in the primary Region to the table in the secondary Region.
- C . Deploy an Amazon Aurora MySQL database in the primary Region. Configure a global database for the secondary Region.
- D . Deploy an Amazon DynamoDB table in the primary Region. Configure global tables for the secondary Region.
D
Explanation:
Amazon DynamoDB global tables provide a multi-Region, multi-active (active-active) database: the application can write to the replica table in either Region, and changes typically replicate within a second, which satisfies the sub-second RPO. An Aurora global database (C) replicates quickly across Regions, but the secondary Region is read-only, so it does not provide an active-active write configuration. Option B rebuilds cross-Region replication with DynamoDB Streams and Lambda, adding operational overhead without the RPO guarantees of global tables. Option A is a caching layer for an RDS database, not an active-active database.
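A minimal sketch of adding a global table replica to an existing DynamoDB table (global tables version 2019.11.21); the table name and Regions are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in the secondary Region; the table then accepts writes
# in both Regions (multi-active), with changes typically replicating
# in under a second.
dynamodb.update_table(
    TableName="UserData",                      # hypothetical table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
```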
A media company hosts a mobile app backend in the AWS Cloud. The company is releasing a new feature to allow users to upload short videos and apply special effects by using the mobile app. The company uses AWS Amplify to store the videos that customers upload in an Amazon S3 bucket.
The videos must be processed immediately. Users must receive a notification when processing is finished.
Which solution will meet these requirements?
- A . Use Amazon EventBridge Scheduler to schedule an AWS Lambda function to process the videos. Save the processed videos to the S3 bucket. Use Amazon Simple Notification Service (Amazon SNS) to send push notifications to customers when processing is finished.
- B . Use Amazon EventBridge Scheduler to schedule AWS Fargate to process the videos. Save the processed videos to the S3 bucket. Use Amazon Simple Notification Service (Amazon SNS) to send push notifications to customers when processing is finished.
- C . Use an S3 trigger to invoke an AWS Lambda function to process the videos. Save the processed videos to the S3 bucket. Use Amazon Simple Notification Service (Amazon SNS) to send push notifications to customers when processing is finished.
- D . Use an S3 trigger to invoke an AWS Lambda function to process the videos. Save the processed videos to the S3 bucket. Use AWS Amplify to send push notifications to customers when processing is finished.
C
Explanation:
The requirement is for immediate processing of uploaded videos and prompt notification to users. According to AWS best practices for event-driven architectures, using S3 event notifications to trigger a Lambda function upon an object creation (upload) is the optimal solution for real-time processing. Lambda can process the file as soon as it is uploaded, ensuring low latency. Once processing is complete, Lambda can save the processed file back to S3 and use Amazon SNS to notify the user.
This approach uses managed services with minimal operational overhead, is scalable, and ensures event-driven processing with instant user feedback. AWS Amplify primarily facilitates application development and hosting but does not natively provide direct push notification support for this backend workflow; instead, SNS is designed for such notification scenarios.
Reference Extract from AWS Documentation / Study Guide:
"Amazon S3 can publish events to AWS Lambda when objects are created. AWS Lambda runs code in response to events and can interact with other AWS services. Amazon SNS is a flexible, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients."
Source: AWS Certified Solutions Architect Official Study Guide, Event-driven Architectures section; AWS Lambda Developer Guide (S3 triggers); Amazon SNS User Guide.
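A minimal sketch of the S3-triggered Lambda function from option C; the SNS topic ARN is hypothetical, and the copy_object call stands in for the real video-effects processing step:

```python
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:video-processing-done"  # hypothetical


def handler(event, context):
    """Invoked by an S3 ObjectCreated event notification."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Stand-in for the real video-effects processing step.
        s3.copy_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )

        # Notify subscribers (for example, a mobile push endpoint).
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Video processing complete",
            Message=f"Your video {key} has been processed.",
        )
```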
A company has a web application that uses Amazon API Gateway to route HTTPS requests to AWS Lambda functions. The application uses an Amazon Aurora MySQL database for its data storage. The application has experienced unpredictable surges in traffic that overwhelm the database with too many connection requests. The company wants to implement a scalable solution that is more resilient to database failures.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an Amazon RDS proxy for the database. Replace the database endpoint with the proxy endpoint in the Lambda functions.
- B . Migrate the database to Amazon DynamoDB tables by using AWS Database Migration Service (AWS DMS).
- C . Review the existing connections. Call MySQL queries to end any connections in the sleep state.
- D . Increase the instance class of the database with more memory. Set a larger value for the max_connections parameter.
A
Explanation:
Amazon RDS Proxy helps manage and pool database connections from serverless compute like AWS Lambda, significantly reducing the stress on the database during unpredictable traffic surges. It improves scalability and resiliency by efficiently managing connections, protecting the database from being overwhelmed, and enabling failover handling.
Option A is the most cost-effective and operationally efficient approach to handling unpredictable surges and improving resilience without requiring major application changes.
Option B involves a migration to DynamoDB, which is a significant architectural change and costlier initially.
Option C is manual connection cleanup, insufficient for unpredictable surges.
Option D increases resources but does not solve connection storm problems efficiently and is more costly.
Reference: Amazon RDS Proxy (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html)
AWS Well-Architected Framework ― Reliability Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
AWS Lambda Best Practices (https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html)
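A minimal sketch of the only application change option A requires: the Lambda function connects to the RDS Proxy endpoint instead of the Aurora cluster endpoint. The endpoint value, database name, and query are hypothetical, and PyMySQL is assumed to be packaged with the function or supplied through a layer:

```python
import os

import pymysql  # third-party MySQL driver, packaged with the function or in a layer

# Point the function at the RDS Proxy endpoint instead of the Aurora
# cluster endpoint; the proxy pools and reuses connections.
PROXY_HOST = os.environ.get(
    "DB_HOST", "app-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com"
)


def handler(event, context):
    connection = pymysql.connect(
        host=PROXY_HOST,
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database="app",
        connect_timeout=5,
    )
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT id, status FROM orders LIMIT 10")
            return cursor.fetchall()
    finally:
        connection.close()
```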
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads.
All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.
Which solution will meet these requirements?
- A . Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.
- B . Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
- C . Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
- D . Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
B
Explanation:
In Amazon EKS, Kubernetes stores objects such as Secrets in the cluster’s etcd key-value store. By default, Kubernetes Secrets are base64-encoded and are not automatically encrypted at the application object level unless encryption is configured. Amazon EKS provides a managed capability to encrypt Kubernetes Secrets at rest in etcd using AWS Key Management Service (KMS). The requirement explicitly states that “all secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store,” which maps directly to enabling EKS secrets encryption with a customer-managed KMS key.
Option B is exactly this: create a KMS key and enable EKS KMS secrets encryption on the cluster. With this enabled, EKS uses envelope encryption so that Secrets are encrypted when stored in etcd, and decrypt operations are controlled through KMS permissions. This is the standard, AWS-native method that fulfills the requirement without requiring application changes or external secret stores for the encryption-at-rest requirement in etcd.
Option A (Secrets Manager) is a strong service for secret lifecycle management and rotation, but it does not by itself guarantee that Kubernetes Secrets stored in etcd are encrypted unless EKS secrets encryption is also enabled or the cluster avoids storing secrets in etcd entirely. The question specifically targets etcd encryption.
Option C is unrelated: the EBS CSI driver concerns persistent volumes, not etcd secret encryption.
Option D concerns EBS volume encryption and a specific AWS-managed key alias used for EBS; it also does not address etcd encryption for Kubernetes Secrets.
Therefore, B is the correct solution because it directly enables encryption of Kubernetes Secrets within etcd using KMS.
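A minimal boto3 sketch of enabling secrets encryption on an existing cluster; the cluster name and KMS key ARN are hypothetical, and the same encryptionConfig block can be supplied at cluster creation time:

```python
import boto3

eks = boto3.client("eks")

# Turn on envelope encryption of Kubernetes Secrets in etcd for an
# existing cluster using a customer-managed KMS key.
eks.associate_encryption_config(
    clusterName="workloads-cluster",           # hypothetical cluster name
    encryptionConfig=[
        {
            "resources": ["secrets"],
            "provider": {
                "keyArn": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            },
        }
    ],
)
```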
