Practice Free SAA-C03 Exam Online Questions
A company needs to store confidential files on AWS. The company accesses the files every week. The company must encrypt the files by using envelope encryption, and the encryption keys must be rotated automatically. The company must have an audit trail to monitor encryption key usage.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Store the confidential files in Amazon S3.
- B . Store the confidential files in Amazon S3 Glacier Deep Archive.
- C . Use server-side encryption with customer-provided keys (SSE-C).
- D . Use server-side encryption with Amazon S3 managed keys (SSE-S3).
- E . Use server-side encryption with AWS KMS managed keys (SSE-KMS).
A, E
Explanation:
Amazon S3 is suitable for storing data that needs to be accessed weekly and integrates with AWS Key Management Service (KMS) to provide encryption at rest with server-side encryption using KMS-managed keys (SSE-KMS).
SSE-KMS uses envelope encryption and allows automatic key rotation and logging through AWS CloudTrail, satisfying the requirements for audit trails and compliance.
S3 Glacier Deep Archive is unsuitable because its high retrieval latency does not fit weekly access. SSE-C requires the customer to manage and supply the encryption keys, with no support for automatic rotation or key-usage auditing. SSE-S3 uses keys that Amazon S3 manages entirely, so it offers neither customer-controlled rotation nor the key-usage audit trail that AWS KMS provides through CloudTrail.
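The envelope-encryption pattern that SSE-KMS implements can be illustrated with a short sketch. This is a conceptual toy, not real cryptography: a repeating-key XOR stands in for AES, and a random byte string stands in for the KMS key material, purely to show the two-layer key structure.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stream 'cipher': XOR with a repeating key (stand-in for AES)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Envelope encryption, conceptually:
# 1. Generate a one-time data key.
# 2. Encrypt the object with the data key.
# 3. Encrypt the data key with the KMS key and store it beside the ciphertext.
kms_master_key = os.urandom(32)   # stands in for the KMS key material
data_key = os.urandom(32)         # plaintext data key
plaintext = b"confidential file contents"

ciphertext = xor_bytes(plaintext, data_key)
encrypted_data_key = xor_bytes(data_key, kms_master_key)  # stored with the object

# To decrypt: recover the data key via KMS, then decrypt the object.
recovered_key = xor_bytes(encrypted_data_key, kms_master_key)
assert xor_bytes(ciphertext, recovered_key) == plaintext
```

Because only the small data key is ever sent to KMS, each decryption is a KMS API call that CloudTrail can record, which is what produces the audit trail the question asks for.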
A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2 cluster. The application will receive large amounts of traffic. The company wants to optimize the storage performance of the cluster as the load on the application increases.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure the cluster to use the Aurora Standard storage configuration.
- B . Configure the cluster storage type as Provisioned IOPS.
- C . Configure the cluster storage type as General Purpose.
- D . Configure the cluster to use the Aurora I/O-Optimized storage configuration.
D
Explanation:
Aurora I/O-Optimized: This storage configuration is designed to provide consistent high performance for Aurora databases. It automatically scales IOPS as the workload increases, without needing to provision IOPS separately.
Cost-Effectiveness: With Aurora I/O-Optimized, you only pay for the storage and I/O you use, making it a cost-effective solution for applications with varying and unpredictable I/O demands.
Implementation:
During the creation of the Aurora PostgreSQL Serverless v2 cluster, select the I/O-Optimized storage configuration.
The storage system will automatically handle scaling and performance optimization based on the application load.
Operational Efficiency: This configuration reduces the need for manual tuning and ensures optimal performance without additional administrative overhead.
Reference: Amazon Aurora I/O-Optimized
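The trade-off between the two billing models can be sketched with a small cost comparison. All rates below are illustrative placeholders, not actual AWS prices: Standard charges separately for storage and I/O requests, while I/O-Optimized folds I/O into a higher storage rate.

```python
def monthly_cost_standard(storage_gb: float, io_requests_millions: float,
                          storage_rate: float = 0.10, io_rate: float = 0.20) -> float:
    """Aurora Standard model: pay per GB-month of storage plus per million I/Os.
    Rates are illustrative placeholders, not actual AWS prices."""
    return storage_gb * storage_rate + io_requests_millions * io_rate

def monthly_cost_io_optimized(storage_gb: float,
                              storage_rate: float = 0.225) -> float:
    """Aurora I/O-Optimized model: higher storage rate, no per-request I/O charge."""
    return storage_gb * storage_rate

# With heavy, growing I/O the flat I/O-Optimized price wins despite the
# pricier storage; with light I/O the Standard model is cheaper.
heavy_io = monthly_cost_standard(500, io_requests_millions=1000)
flat = monthly_cost_io_optimized(500)
```

This is why I/O-Optimized is the cost-effective choice specifically for I/O-intensive, scaling workloads like the one in the question.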
A company needs a solution to back up and protect critical AWS resources. The company needs to regularly take backups of several Amazon EC2 instances and Amazon RDS for PostgreSQL databases. To ensure high resiliency, the company must have the ability to validate and restore backups.
Which solution meets these requirements with the LEAST operational overhead?
- A . Use AWS Backup to create a backup schedule for the resources. Use AWS Backup to create a restoration testing plan for the required resources.
- B . Take snapshots of the EC2 instances and RDS DB instances. Create AWS Batch jobs to validate and restore the snapshots.
- C . Create a custom AWS Lambda function to take snapshots of the EC2 instances and RDS DB instances. Create a second Lambda function to restore the snapshots periodically to validate the backups.
- D . Take snapshots of the EC2 instances and RDS DB instances. Create an AWS Lambda function to restore the snapshots periodically to validate the backups.
A
Explanation:
AWS Backup is a fully managed backup service designed to centralize and automate data protection across AWS services including EC2 and RDS. It allows users to define backup schedules (backup plans) and automatically create and retain backups. AWS Backup also offers restore testing plans, allowing users to automate the validation of backups by restoring them in a controlled manner. This service is built to minimize operational overhead by removing the need to manage custom scripts, manual processes, or additional orchestration services. This aligns with AWS best practices for resilience, automation, and operational excellence.
Reference Extract from AWS Documentation / Study Guide:
"AWS Backup enables you to centralize and automate data protection across AWS services. You can create backup plans, schedule backups, and set lifecycle policies. AWS Backup also enables restore testing to verify your backup integrity, with minimal manual intervention."
Source: AWS Certified Solutions Architect – Official Study Guide, Resiliency and Disaster Recovery section; AWS Backup User Guide.
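A backup plan in AWS Backup is expressed as a declarative document. The sketch below shows the general shape of such a plan (field names follow the CreateBackupPlan API; the vault name, schedule, and retention values are placeholders):

```python
# Sketch of a backup plan document for AWS Backup's CreateBackupPlan API.
# Vault name, schedule expression, and retention below are placeholders.
backup_plan = {
    "BackupPlanName": "critical-resources-daily",
    "Rules": [
        {
            "RuleName": "daily-0500-utc",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",   # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 35},        # retention policy
        }
    ],
}
```

Resources (the EC2 instances and RDS databases) are then attached to the plan with resource assignments, and a separate restore testing plan automates the validation requirement.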
A company plans to rehost an application to Amazon EC2 instances that use Amazon Elastic Block Store (Amazon EBS) as the attached storage.
A solutions architect must design a solution to ensure that all newly created Amazon EBS volumes are encrypted by default. The solution must also prevent the creation of unencrypted EBS volumes.
Which solution will meet these requirements?
- A . Configure the EC2 account attributes to always encrypt new EBS volumes.
- B . Use AWS Config with the encrypted-volumes managed rule. Apply the default AWS Key Management Service (AWS KMS) key.
- C . Configure AWS Systems Manager to create encrypted copies of the EBS volumes. Reconfigure the EC2 instances to use the encrypted volumes.
- D . Create a customer managed key in AWS Key Management Service (AWS KMS). Configure AWS Migration Hub to use the key when the company migrates workloads.
A
Explanation:
EC2 Account Attributes: Amazon EC2 allows you to set account attributes to automatically encrypt new EBS volumes. This ensures that all new volumes created in your account are encrypted by default.
Configuration Steps:
Go to the EC2 Dashboard.
Select "Account Attributes" and then "EBS encryption".
Enable default EBS encryption and select the default AWS KMS key or a customer-managed key.
Prevention of Unencrypted Volumes: By setting this account attribute, you ensure that it is not possible to create unencrypted EBS volumes, thereby enforcing compliance with security requirements.
Operational Efficiency: This solution requires minimal configuration changes and provides automatic enforcement of encryption policies, reducing operational overhead.
Reference: Amazon EC2 Default EBS Encryption
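Beyond the account attribute, some teams add an explicit policy guardrail. The sketch below shows an IAM or SCP statement (rendered as a Python dict, purely illustrative) that denies ec2:CreateVolume whenever the ec2:Encrypted condition key is false:

```python
# Sketch: a complementary IAM/SCP guardrail that denies creation of
# unencrypted EBS volumes via the ec2:Encrypted condition key.
deny_unencrypted_volumes = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedEBS",
            "Effect": "Deny",
            "Action": "ec2:CreateVolume",
            "Resource": "*",
            # The request must carry Encrypted=true, or it is denied.
            "Condition": {"Bool": {"ec2:Encrypted": "false"}},
        }
    ],
}
```

The account attribute alone satisfies the question; a deny statement like this is a defense-in-depth option when the requirement must also hold across member accounts in an organization.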
A company needs to create an AWS Lambda function that will run in a VPC in the company’s primary AWS account. The Lambda function needs to access files that the company stores in an Amazon Elastic File System (Amazon EFS) file system. The EFS file system is located in a secondary AWS account. As the company adds files to the file system, the solution must scale to meet the demand.
Which solution will meet these requirements MOST cost-effectively?
- A . Create a new EFS file system in the primary account. Use AWS DataSync to copy the contents of the original EFS file system to the new EFS file system.
- B . Create a VPC peering connection between the VPCs that are in the primary account and the secondary account.
- C . Create a second Lambda function in the secondary account that has a mount that is configured for the file system. Use the primary account’s Lambda function to invoke the secondary account’s Lambda function.
- D . Move the contents of the file system to a Lambda layer. Configure the Lambda layer’s permissions to allow the company’s secondary account to use the Lambda layer.
B
Explanation:
Amazon EFS is a regional, elastic file system that “scales to petabytes” and can be accessed from “thousands of compute instances” concurrently. You can mount EFS across VPCs and across AWS accounts by providing network connectivity (e.g., VPC peering) to the EFS mount targets and allowing NFS (TCP 2049) in the mount target security group, optionally using EFS access points and a file system policy for cross-account access control. VPC peering has no hourly charge and minimal operational overhead, and EFS automatically scales as files are added, meeting the scalability and cost goals.
Option A duplicates data, incurs DataSync and extra storage costs, and adds sync lag.
Option C adds an extra Lambda hop and complexity without exposing a shared filesystem to the primary function.
Option D is impractical: Lambda layers are immutable artifacts with tight size limits (up to 50 MB compressed/250 MB uncompressed) and are not suited for dynamic, growing file sets.
Reference: Amazon EFS User Guide – “Accessing EFS across VPCs and accounts,” “Mount targets and security groups,” “EFS access points,” and “EFS automatically scales to petabytes.”
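The network prerequisite mentioned above, allowing NFS from the peered VPC, boils down to one security-group ingress rule on the EFS mount targets. A sketch of that rule's parameters (the CIDR is a placeholder for the primary account's VPC range):

```python
# Sketch: ingress rule for the EFS mount-target security group, allowing
# NFS from the peered (primary-account) VPC. The CIDR is a placeholder.
nfs_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 2049,   # NFS
    "ToPort": 2049,
    "IpRanges": [
        {"CidrIp": "10.0.0.0/16",
         "Description": "Lambda VPC in primary account"}
    ],
}
```

With this rule and the peering route in place, the Lambda function mounts the file system exactly as it would an in-account one.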
A solutions architect needs to design a solution for a high performance computing (HPC) workload. The solution must include multiple Amazon EC2 instances. Each EC2 instance requires 10 Gbps of bandwidth individually for single-flow traffic. The EC2 instances require an aggregate throughput of 100 Gbps of bandwidth across all EC2 instances. Communication between the EC2 instances must have low latency.
Which solution will meet these requirements?
- A . Place the EC2 instances in a single subnet of a VPC. Configure a cluster placement group. Ensure that the latest Elastic Fabric Adapter (EFA) drivers are installed on the EC2 instances with a supported operating system.
- B . Place the EC2 instances in multiple subnets in a single VPC. Configure a spread placement group. Ensure that the EC2 instances support Elastic Network Adapters (ENAs) and that the drivers are updated on each instance operating system.
- C . Place the EC2 instances in multiple VPCs. Use AWS Transit Gateway to route traffic between the VPCs. Ensure that the latest Elastic Fabric Adapter (EFA) drivers are installed on the EC2 instances with a supported operating system.
- D . Place the EC2 instances in multiple subnets across multiple Availability Zones. Configure a cluster placement group. Ensure that the EC2 instances support Elastic Network Adapters (ENAs) and that the drivers are updated on each instance operating system.
A
Explanation:
HPC workloads require high-throughput, low-latency networking, especially for tightly-coupled applications like weather modeling, genomics, or real-time rendering.
A cluster placement group places instances in the same Availability Zone and on physically connected hardware, reducing network latency and increasing throughput.
Elastic Fabric Adapter (EFA) is a network device for EC2 instances that enables low-latency, high-throughput networking using OS-bypass technology, ideal for tightly-coupled HPC applications.
Each instance can support single-flow 10 Gbps bandwidth using EFA, and collectively, the cluster can achieve up to 100 Gbps aggregate throughput when properly configured.
This solution supports the Performance Efficiency and Resilience design principles and is a standard AWS-recommended pattern for HPC.
Reference: EC2 Placement Groups
Elastic Fabric Adapter Overview
Best Practices for HPC on AWS
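In API terms, the key launch settings are an EFA network interface and a cluster placement group. A sketch of the relevant RunInstances parameters (the placement-group name is a placeholder; c5n.18xlarge is one example of an EFA-capable, 100 Gbps instance type):

```python
# Sketch: launch parameters for an HPC node with EFA in a cluster placement
# group. The group name is a placeholder.
hpc_launch_params = {
    "InstanceType": "c5n.18xlarge",              # an EFA-capable instance type
    "Placement": {"GroupName": "hpc-cluster"},   # cluster placement group
    "NetworkInterfaces": [
        {"DeviceIndex": 0, "InterfaceType": "efa"}  # attach an EFA, not a plain ENI
    ],
}
```

The EFA driver and libfabric stack still need to be installed in the AMI, which is why the correct option also calls out driver installation on a supported operating system.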
A company uses AWS to host a public website. The load on the webservers recently increased.
The company wants to learn more about the traffic flow and traffic sources. The company also wants to increase the overall security of the website.
Which solution will meet these requirements?
- A . Deploy AWS WAF and set up logging. Use Amazon Data Firehose to deliver the log files to an Amazon S3 bucket for analysis.
- B . Deploy Amazon API Gateway and set up logging. Use Amazon Kinesis Data Streams to deliver the log files to an Amazon S3 bucket for analysis.
- C . Deploy a Network Load Balancer and set up logging. Use Amazon Data Firehose to deliver the log files to an Amazon S3 bucket for analysis.
- D . Deploy an Application Load Balancer and set up logging. Use Amazon Kinesis Data Streams to deliver the log files to an Amazon S3 bucket for analysis.
A
Explanation:
AWS WAF (Web Application Firewall) is designed to protect public-facing web applications from common web exploits and allows for deep inspection of HTTP and HTTPS requests. By enabling logging on AWS WAF, you can gain insights into traffic flow, request sources, and blocked requests, which improves both security visibility and posture. Integrating AWS WAF logging with Amazon Kinesis Data Firehose allows automatic, near-real-time delivery of log data to Amazon S3 for analysis, reporting, or integration with analytics tools. This solution not only secures the website but also enables comprehensive traffic analysis, satisfying both requirements efficiently.
Reference Extract from AWS Documentation / Study Guide:
"AWS WAF provides detailed logs of web requests, which can be delivered to Amazon S3 via Amazon Kinesis Data Firehose. This logging enables analysis of web traffic and sources, supporting both security and operational monitoring."
Source: AWS Certified Solutions Architect – Official Study Guide, Security and Compliance section; AWS WAF Developer Guide (Logging and Monitoring).
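One practical detail when wiring up WAF logging: the Firehose delivery stream used as a log destination must have a name that starts with the aws-waf-logs- prefix, or WAF rejects it. A small helper illustrates the check:

```python
def is_valid_waf_log_destination(stream_name: str) -> bool:
    """AWS WAF only accepts Firehose delivery streams whose names start
    with the required 'aws-waf-logs-' prefix as logging destinations."""
    return stream_name.startswith("aws-waf-logs-")

assert is_valid_waf_log_destination("aws-waf-logs-website-traffic")
assert not is_valid_waf_log_destination("website-traffic-logs")
```

The stream then delivers the JSON log records to the S3 bucket, where they can be queried with tools such as Athena for the traffic-source analysis the company wants.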
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
- A . Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
- B . Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
- C . Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
- D . Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
B
Explanation:
To decouple distributed workloads and improve scalability and resiliency, Amazon SQS should be used as a reliable job queue. The compute nodes (workers) can poll the SQS queue for tasks, and EC2 Auto Scaling can dynamically scale instances based on the ApproximateNumberOfMessagesVisible CloudWatch metric (the queue depth).
From AWS Documentation:
“Amazon SQS enables you to decouple application components, allowing each part to scale independently. You can scale EC2 instances automatically based on the number of messages in the queue.”
(Source: Amazon SQS Developer Guide – Scaling Consumers with Auto Scaling)
Why B is correct:
Eliminates the single point of failure (the primary coordinator).
Enables event-driven scaling based on queue depth.
Provides durability and resiliency since messages are stored redundantly.
Fully managed and integrates seamlessly with Auto Scaling policies.
Why other options are incorrect:
A: Scheduled scaling does not respond to variable workloads.
C & D: CloudTrail records API activity and EventBridge routes events; neither is a job queue, and both designs keep the primary server as a single point of failure.
Reference: Amazon SQS Developer Guide – “Integrating SQS with Auto Scaling”; AWS Well-Architected Framework – Reliability Pillar
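The “backlog per instance” target-tracking pattern described above can be sketched as a simple capacity calculation (the messages-per-instance target and the fleet bounds are illustrative values):

```python
import math

def desired_capacity(queue_depth: int, msgs_per_instance: int,
                     min_size: int = 1, max_size: int = 20) -> int:
    """Target-tracking on 'backlog per instance': size the worker fleet so
    each instance handles roughly msgs_per_instance queued messages,
    clamped to the Auto Scaling group's min/max bounds."""
    desired = math.ceil(queue_depth / msgs_per_instance)
    return max(min_size, min(max_size, desired))

# An empty queue keeps the fleet at its floor; a deep backlog scales it out.
assert desired_capacity(0, 100) == 1
assert desired_capacity(950, 100) == 10
assert desired_capacity(5000, 100) == 20   # capped at max_size
```

In practice the queue depth comes from the ApproximateNumberOfMessagesVisible CloudWatch metric, and the calculation is expressed as a target-tracking scaling policy rather than custom code.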
A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default route to the internet through an Amazon EC2 NAT instance. The Lambda function processes input data and saves its output as an object to Amazon S3.
Intermittently, the Lambda function times out while trying to upload the object because of saturated traffic on the NAT instance’s network. The company wants to access Amazon S3 without traversing the internet.
Which solution will meet these requirements?
- A . Replace the EC2 NAT instance with an AWS managed NAT gateway.
- B . Increase the size of the EC2 NAT instance in the VPC to a network-optimized instance type.
- C . Provision a gateway endpoint for Amazon S3 in the VPC. Update the route tables of the subnets accordingly.
- D . Provision a transit gateway. Place transit gateway attachments in the private subnets where the Lambda function is running.
C
Explanation:
Gateway Endpoint for Amazon S3: A VPC endpoint for Amazon S3 allows you to privately connect your VPC to Amazon S3 without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
Provisioning the Endpoint:
Navigate to the VPC Dashboard.
Select "Endpoints" and create a new endpoint.
Choose the service name for S3 (com.amazonaws.region.s3).
Select the appropriate VPC and the route tables to associate (a gateway endpoint is attached to route tables, not to subnets).
Adjust the route tables of the subnets to include the new endpoint.
Update Route Tables: Modify the route tables of the subnets to direct traffic destined for S3 to the newly created endpoint. This ensures that traffic to S3 does not go through the NAT instance, avoiding the saturated network and eliminating timeouts.
Operational Efficiency: This solution minimizes operational overhead by removing dependency on the NAT instance and avoiding internet traffic, leading to more stable and secure S3 interactions.
Reference: VPC Endpoints for Amazon S3
Creating a Gateway Endpoint
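Unlike an interface endpoint, a gateway endpoint creates no ENI; it shows up in each associated route table as a route whose destination is the S3 service prefix list and whose target is the endpoint itself. A sketch of the resulting route entry (both IDs are placeholders):

```python
# Sketch of the route a gateway endpoint adds to each associated route table.
# Both IDs below are placeholders.
s3_endpoint_route = {
    "DestinationPrefixListId": "pl-0123456789abcdef0",  # S3 service prefix list
    "GatewayId": "vpce-0123456789abcdef0",              # the S3 gateway endpoint
    "State": "active",
}
```

Because this route is more specific than the default route, the Lambda function's S3 traffic bypasses the NAT instance entirely, which removes the saturation-related timeouts.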
A company is developing a social media application that must scale to meet demand spikes and handle ordered processes.
Which AWS services meet these requirements?
- A . ECS with Fargate, RDS, and SQS for decoupling.
- B . ECS with Fargate, RDS, and SNS for decoupling.
- C . DynamoDB, Lambda, DynamoDB Streams, and Step Functions.
- D . Elastic Beanstalk, RDS, and SNS for decoupling.
A
Explanation:
Option A combines ECS with Fargate for scalability, RDS for relational data, and SQS for decoupling with message ordering (FIFO queues).
Option B uses SNS, which does not maintain message order.
Option C is suitable for serverless workflows but not relational data.
Option D relies on Elastic Beanstalk, which offers less flexibility for scaling.
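The ordered-processing requirement that makes SQS FIFO queues (option A) the right fit can be sketched in a few lines: messages that share a MessageGroupId are delivered in order, while separate groups proceed independently.

```python
from collections import defaultdict, deque

# Sketch of SQS FIFO semantics: per-group ordering, parallelism across groups.
queue = defaultdict(deque)

def send(group_id: str, body: str) -> None:
    """Enqueue a message; ordering is preserved within its message group."""
    queue[group_id].append(body)

def receive(group_id: str) -> str:
    """Dequeue the oldest message in the group (FIFO within the group)."""
    return queue[group_id].popleft()

send("user-42", "post created")
send("user-42", "post edited")
send("user-7", "comment added")

assert receive("user-42") == "post created"   # order preserved within a group
assert receive("user-7") == "comment added"   # other groups are independent
assert receive("user-42") == "post edited"
```

SNS (options B and D) is a fan-out publish/subscribe service and, in its standard form, offers no ordering guarantee, which is why it fails the requirement.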
