Practice Free SAA-C03 Exam Online Questions
A company is designing a serverless application to process a large number of events within an AWS account. The application saves the events to a data warehouse for further analysis. The application sends incoming events to an Amazon SQS queue. Traffic between the application and the SQS queue must not use public IP addresses.
Which solution will meet these requirements?
- A . Create a VPC endpoint for Amazon SQS. Set the queue policy to deny all access except from the VPC endpoint.
- B . Configure server-side encryption with SQS-managed keys (SSE-SQS).
- C . Configure AWS Security Token Service (AWS STS) to generate temporary credentials for resources that access the queue.
- D . Configure VPC Flow Logs to detect SQS traffic that leaves the VPC.
A
Explanation:
Amazon SQS supports Interface VPC endpoints (AWS PrivateLink), enabling private connectivity from your VPC to SQS without using public IPs, traversing the public Internet, or requiring NAT/IGW. You can restrict access by attaching a queue resource policy that allows only the specific VPC endpoint and denies all other principals/paths, enforcing that all traffic stays on the AWS network. SSE-SQS (B) encrypts data at rest but does not influence network pathing. STS temporary credentials (C) handle authentication/authorization, not routing. VPC Flow Logs (D) are monitoring/visibility and do not prevent public egress. Creating an SQS VPC endpoint and tightening the queue policy satisfies the requirement of no public IP usage while maintaining secure, private access from serverless components in VPC subnets.
Reference: Amazon SQS ― VPC endpoints (PrivateLink) and endpoint policies; Amazon SQS queue policies and condition keys; Security best practices for private access.
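A minimal boto3 sketch of this pattern (the VPC, subnet, security group, account, and queue identifiers are illustrative placeholders, not values from the question): create the SQS interface endpoint in the application VPC, then attach a queue policy that denies any request that does not arrive through that endpoint.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
sqs = boto3.client("sqs", region_name="us-east-1")

# Interface endpoint (AWS PrivateLink) for SQS inside the application VPC.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0aaa1111aaaa1111a", "subnet-0bbb2222bbbb2222b"],
    SecurityGroupIds=["sg-0ccc3333cccc3333c"],
    PrivateDnsEnabled=True,
)
vpce_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Queue policy: deny every SQS action unless the request arrives through the endpoint.
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/events-queue"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessExceptViaVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "sqs:*",
            "Resource": "arn:aws:sqs:us-east-1:111122223333:events-queue",
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }
    ],
}
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})
```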
A company stores sensitive financial reports in an Amazon S3 bucket. To comply with auditing requirements, the company must encrypt the data at rest. Users must not have the ability to change the encryption method or remove encryption when the users upload data. The company must be able to audit all encryption and storage actions.
Which solution will meet these requirements and provide the MOST granular control?
- A . Enable default server-side encryption with Amazon S3 managed keys (SSE-S3) for the S3 bucket. Apply a bucket policy that denies any upload requests that do not include the x-amz-server-side-encryption header.
- B . Configure server-side encryption with AWS KMS (SSE-KMS) keys. Use an S3 bucket policy to reject any data that is not encrypted by the designated key.
- C . Use client-side encryption before uploading the reports. Store the encryption keys in AWS Secrets Manager.
- D . Enable default server-side encryption with Amazon S3 managed keys (SSE-S3). Use AWS Identity and Access Management (IAM) to prevent users from changing S3 bucket settings.
B
Explanation:
AWS KMS with SSE-KMS provides granular key management and auditability. All use of KMS keys is logged in AWS CloudTrail, which allows compliance teams to monitor encryption and decryption operations. A bucket policy can be configured to enforce uploads only with the designated KMS key, ensuring that users cannot bypass encryption or change methods.
Option A (SSE-S3 with bucket policy) enforces encryption but does not provide the same level of control or auditable key usage.
Option C (client-side encryption) increases complexity and key management burden.
Option D prevents bucket setting changes but does not prevent unencrypted uploads.
Therefore, B ensures the most granular control, auditability, and compliance with financial data requirements.
Reference:
• Amazon S3 User Guide ― Using SSE-KMS for encryption
• AWS KMS Developer Guide ― Key management and auditing with CloudTrail
• AWS Well-Architected Framework ― Security Pillar
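A hedged sketch of the enforcing bucket policy applied with boto3 (the bucket name and KMS key ARN are placeholders): one statement denies uploads that do not request SSE-KMS, and a second denies uploads that do not name the designated key, so users cannot change or remove encryption.

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "example-financial-reports"  # placeholder
kms_key_arn = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny any upload that does not request SSE-KMS.
            "Sid": "DenyNonKmsUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
        {
            # Deny any upload that names a key other than the designated KMS key
            # (this also forces callers to specify the key explicitly).
            "Sid": "DenyWrongKmsKey",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": kms_key_arn
                }
            },
        },
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```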
A company needs a cloud-based solution for backup, recovery, and archiving while retaining encryption key material control.
Which combination of solutions will meet these requirements? (Select TWO)
- A . Create an AWS Key Management Service (AWS KMS) key without key material. Import the company’s key material into the KMS key.
- B . Create an AWS KMS encryption key that contains key material generated by AWS KMS.
- C . Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Bucket Keys with AWS KMS keys.
- D . Store the data in an Amazon S3 Glacier storage class. Use server-side encryption with customer-provided keys (SSE-C).
- E . Store the data in AWS Snowball devices. Use server-side encryption with AWS KMS keys (SSE-KMS).
A,D
Explanation:
Option A allows importing your own key material into AWS KMS, ensuring that the company retains control over the key material.
Option D uses S3 Glacier with SSE-C, where the customer supplies and controls the encryption keys, meeting the compliance need.
Option B uses key material generated by AWS KMS, violating the requirement for key material control.
Options C and E do not give the company control of the key material and are not fully compliant with the requirement.
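A rough boto3 sketch of both mechanisms, under the assumption that the company wraps its own key material offline (bucket names, key names, and object keys are placeholders): create a KMS key with Origin=EXTERNAL and import the company's material, and archive objects to the S3 Glacier storage class with SSE-C.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Option A: create a KMS key with no key material (Origin=EXTERNAL), then import your own.
key = kms.create_key(Description="backup-archive-key", Origin="EXTERNAL")
key_id = key["KeyMetadata"]["KeyId"]

params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)
# The company's own key material must be wrapped offline with params["PublicKey"]
# before import (wrapping step omitted here).
# kms.import_key_material(
#     KeyId=key_id,
#     ImportToken=params["ImportToken"],
#     EncryptedKeyMaterial=wrapped_key_material,
#     ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
# )

# Option D: archive an object in the S3 Glacier storage class with SSE-C,
# supplying a key that only the company holds (boto3 sends the required headers).
customer_key = b"0" * 32  # placeholder 256-bit key; the real key never leaves the company
s3.put_object(
    Bucket="example-backup-bucket",
    Key="archives/backup-2024-01.tar.gz",
    Body=b"example archive contents",
    StorageClass="GLACIER",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```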
A company is migrating a new application from an on-premises data center to a new VPC in the AWS Cloud. The company has multiple AWS accounts and VPCs that share many subnets and applications.
The company wants to have fine-grained access control for the new application. The company wants to ensure that all network resources across accounts and VPCs that are granted permission to access the new application can access the application.
Which solution will meet these requirements?
- A . Set up a VPC peering connection for each VPC that needs access to the new application VPC. Update route tables in each VPC to enable connectivity.
- B . Deploy a transit gateway in the account that hosts the new application. Share the transit gateway with each account that needs to connect to the application. Update route tables in the VPC that hosts the new application and in the transit gateway to enable connectivity.
- C . Use an AWS PrivateLink endpoint service to make the new application accessible to other VPCs. Control access to the application by using an endpoint policy.
- D . Use an Application Load Balancer (ALB) to expose the new application to the internet. Configure authentication and authorization processes to ensure that only specified VPCs can access the application.
C
Explanation:
AWS PrivateLink is the most suitable solution for providing fine-grained access control while allowing multiple VPCs, potentially across multiple accounts, to access the new application. This approach offers the following advantages:
Fine-grained control: Endpoint policies can restrict access to specific services or principals.
No need for route table updates: Unlike VPC peering or transit gateways, AWS PrivateLink does not require complex route table management.
Scalable architecture: PrivateLink scales to support traffic from multiple VPCs.
Secure connectivity: Ensures private connectivity over the AWS network, without exposing resources to the internet.
Why Other Options Are Not Ideal:
Option A:
VPC peering is not scalable when connecting multiple VPCs or accounts, and route table management becomes increasingly complex as the number of VPCs grows.
Option B:
While transit gateways provide scalable VPC connectivity, they allow broad network connectivity and do not inherently restrict access to specific applications, so they are not ideal for fine-grained access control.
Option D:
Exposing the application through an ALB over the internet is a security risk and does not align with the requirement to use private network resources.
Reference: AWS PrivateLink: AWS Documentation – PrivateLink
AWS Networking Services Comparison: AWS Whitepaper – Networking Services
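A hedged boto3 sketch of the provider and consumer sides (the NLB ARN, account IDs, and VPC/subnet/security-group IDs are placeholders): the provider publishes the application behind a Network Load Balancer as an endpoint service and allowlists specific principals; the consumer creates an interface endpoint, optionally attaching an endpoint policy where the service supports one.

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Provider account: publish the application (fronted by a Network Load Balancer)
# as a PrivateLink endpoint service.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/new-app/0123456789abcdef"
    ],
    AcceptanceRequired=True,
)
service_id = service["ServiceConfiguration"]["ServiceId"]
service_name = service["ServiceConfiguration"]["ServiceName"]

# Fine-grained control: only allowlisted principals may connect to the service.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service_id,
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],  # consumer account
)

# Consumer account: create an interface endpoint to the service. An endpoint
# policy (where the service supports one) can further restrict allowed actions.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Principal": "*", "Action": "*", "Resource": "*"}],
}
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0aaaabbbbcccc1111",
    ServiceName=service_name,
    SubnetIds=["subnet-0aaa11112222bbbb"],
    SecurityGroupIds=["sg-0ccc33334444dddd"],
    PolicyDocument=json.dumps(endpoint_policy),
)
```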
A logistics company is creating a data exchange platform to share shipment status information with shippers. The logistics company can see all shipment information and metadata. The company distributes shipment data updates to shippers.
Each shipper should see only shipment updates that are relevant to their company. Shippers should not see the full detail that is visible to the logistics company. The company creates an Amazon Simple Notification Service (Amazon SNS) topic for each shipper to share data. Some shippers use a mobile app to submit shipment status updates.
The company needs to create a data exchange platform that provides each shipper specific access to the data that is relevant to their company.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Ingest the shipment updates from the mobile app into Amazon Simple Queue Service (Amazon SQS). Publish the updates to the SNS topic. Apply a filter policy to rewrite the body of each message.
- B . Ingest the shipment updates from the mobile app into Amazon Simple Queue Service (Amazon SQS). Use an AWS Lambda function to consume the updates from Amazon SQS and rewrite the body of each message. Publish the updates to the SNS topic.
- C . Ingest the shipment updates from the mobile app into a second SNS topic. Publish the updates to the shipper SNS topic. Apply a filter policy to rewrite the body of each message.
- D . Ingest the shipment updates from the mobile app into Amazon Simple Queue Service (Amazon SQS). Filter and rewrite the messages in Amazon EventBridge Pipes. Publish the updates to the SNS topic.
B
Explanation:
The best solution is to use Amazon SQS to receive updates from the mobile app and process them with an AWS Lambda function. The Lambda function can rewrite the message body as necessary for each shipper and then publish the updates to the appropriate SNS topic for distribution. This setup ensures that each shipper receives only the relevant data and minimizes operational overhead by using managed services.
Option A (SNS filter policy): SNS does not have the capability to rewrite message bodies before forwarding.
Option C (Second SNS topic): Using an additional SNS topic adds unnecessary complexity without solving the message rewriting requirement.
Option D (EventBridge Pipes): EventBridge Pipes is more complex than necessary for this use case, and Lambda can handle the logic more efficiently.
Reference: Amazon SQS
Amazon SNS with Lambda
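A minimal sketch of the Lambda handler, assuming an SQS event source mapping and an environment variable that maps shipper IDs to their SNS topic ARNs (the variable name and message fields are illustrative assumptions):

```python
import json
import os
import boto3

sns = boto3.client("sns")

# Assumed environment variable, for example:
# {"shipper-a": "arn:aws:sns:us-east-1:111122223333:shipper-a-updates"}
TOPIC_ARNS = json.loads(os.environ.get("SHIPPER_TOPIC_ARNS", "{}"))

def handler(event, context):
    """SQS event source -> Lambda: rewrite each update and fan it out per shipper."""
    for record in event["Records"]:
        update = json.loads(record["body"])
        shipper_id = update["shipper_id"]

        # Rewrite the body: keep only the fields the shipper is allowed to see,
        # dropping the full detail that is visible to the logistics company.
        trimmed = {
            "shipment_id": update["shipment_id"],
            "status": update["status"],
            "updated_at": update["updated_at"],
        }
        sns.publish(TopicArn=TOPIC_ARNS[shipper_id], Message=json.dumps(trimmed))
```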
A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The company also wants to minimize the cost and configuration effort required to operate the volume encryption check.
Which solution will meet these requirements?
- A . Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use Amazon EventBridge to schedule an AWS Lambda function to run the API calls.
- B . Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run the API calls on an AWS Fargate task.
- C . Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes. Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt the untagged resources manually.
- D . Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the volume if it is not encrypted.
D
Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. By creating a Config rule, you can automatically check whether your Amazon EBS volumes are encrypted and flag those that are not, with minimal cost and configuration effort.
AWS Config Rule: AWS Config provides managed rules that you can use to automatically check the compliance of your resources against predefined or custom criteria. In this case, you would create a rule to evaluate EBS volumes and determine if they are encrypted. If a volume is not encrypted, the rule will flag it, allowing you to take corrective action.
Operational Overhead: This approach significantly reduces operational overhead because once the rule is in place, it continuously monitors your EBS volumes for compliance, and there’s no need for manual checks or custom scripting.
Why Not Other Options?
Option A (Lambda with API calls and EventBridge): While this can work, it involves writing and maintaining custom code, which increases operational overhead compared to using a managed AWS Config rule.
Option B (API calls on Fargate): Running API calls on Fargate is more complex and costly compared to using AWS Config, which provides a simpler, managed solution.
Option C (IAM policy with Cost Explorer): This option does not directly enforce encryption compliance and involves manual intervention, making it less efficient and more prone to errors.
Reference: AWS Config Rules ― Overview of AWS Config rules and how they can be used to evaluate resource configurations.
Amazon EBS Encryption ― Information on how to manage and enforce encryption for EBS volumes.
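A minimal boto3 sketch that enables the AWS managed rule for EBS encryption (this assumes an AWS Config configuration recorder is already enabled in the account; the rule name is an illustrative choice):

```python
import boto3

config = boto3.client("config")

# AWS managed rule that evaluates EBS volumes and flags any that are not encrypted.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Description": "Flags Amazon EBS volumes that are not encrypted.",
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
    }
)
```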
A company has a social media application that is experiencing rapid user growth. The current architecture uses t-family Amazon EC2 instances. The current architecture struggles to handle the increasing number of user posts and images. The application experiences performance slowdowns during peak usage times.
A solutions architect needs to design an updated architecture that will resolve the performance issues and scale as usage increases.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use the largest Amazon EC2 instance in the same family to host the application. Install a relational database on the instance to store all account information and to store posts and images.
- B . Use Amazon Simple Queue Service (Amazon SQS) to buffer incoming posts. Use a larger EC2 instance in the same family to host the application. Store account information in Amazon DynamoDB. Store posts and images in the local EC2 instance file system.
- C . Use an Amazon API Gateway REST API and AWS Lambda functions to process requests. Store account information in Amazon DynamoDB. Use Amazon S3 to store posts and images.
- D . Deploy multiple EC2 instances in the same family. Use an Application Load Balancer to distribute traffic. Use a shared file system to store account information and to store posts and images.
C
Explanation:
This question focuses on scalability, operational overhead, and performance during unpredictable workloads.
API Gateway + AWS Lambda enables serverless compute, which scales automatically based on the number of requests. It requires no provisioning, maintenance, or patching of servers ― eliminating operational overhead.
Amazon DynamoDB is a fully managed NoSQL database optimized for high-throughput workloads with single-digit millisecond latency.
Amazon S3 is designed for high availability and durability, and is ideal for storing unstructured content such as user-uploaded images.
By leveraging these fully managed and scalable services, the architecture meets the requirement of supporting rapid user growth while minimizing operational complexity. This solution aligns with the Performance Efficiency and Operational Excellence pillars in the AWS Well-Architected Framework.
Reference: Serverless Web Application Architecture
Using DynamoDB with Lambda
Best Practices for API Gateway
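A hypothetical Lambda handler for the API Gateway proxy integration (the table name, bucket name, and request fields are assumptions for illustration): the uploaded image bytes go to S3 and the post metadata goes to DynamoDB.

```python
import base64
import json
import os
import uuid
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

POSTS_TABLE = os.environ.get("POSTS_TABLE", "posts")                  # assumed table name
IMAGE_BUCKET = os.environ.get("IMAGE_BUCKET", "example-user-images")  # assumed bucket name

def handler(event, context):
    """API Gateway (proxy integration) -> Lambda: create a post."""
    body = json.loads(event["body"])
    post_id = str(uuid.uuid4())

    # Unstructured content (the uploaded image) goes to S3 ...
    image_key = f"images/{post_id}.jpg"
    s3.put_object(
        Bucket=IMAGE_BUCKET,
        Key=image_key,
        Body=base64.b64decode(body["image_base64"]),
    )

    # ... and the account/post metadata goes to DynamoDB.
    dynamodb.Table(POSTS_TABLE).put_item(
        Item={
            "post_id": post_id,
            "account_id": body["account_id"],
            "text": body.get("text", ""),
            "image_key": image_key,
        }
    )
    return {"statusCode": 201, "body": json.dumps({"post_id": post_id})}
```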
A company is launching a new application that will be hosted on Amazon EC2 instances. A solutions architect needs to design a solution that does not allow public IPv4 access that originates from the internet. However, the solution must allow the EC2 instances to make outbound IPv4 internet requests.
Which solution will meet these requirements?
- A . Deploy a NAT gateway in public subnets in both Availability Zones. Create and configure one route table for each private subnet.
- B . Deploy an internet gateway in public subnets in both Availability Zones. Create and configure a shared route table for the private subnets.
- C . Deploy a NAT gateway in public subnets in both Availability Zones. Create and configure a shared route table for the private subnets.
- D . Deploy an egress-only internet gateway in public subnets in both Availability Zones. Create and configure one route table for each private subnet.
C
Explanation:
Why Option C is Correct:
NAT Gateway: Allows private subnets to access the internet for outbound requests while preventing inbound connections.
High Availability: Deploying NAT gateways in both Availability Zones ensures fault tolerance.
Shared Route Table: A single route table for the private subnets simplifies routing configuration.
Why Other Options Are Not Ideal:
Option A: Creating separate route tables for each subnet adds unnecessary complexity.
Option B: Internet gateways allow inbound access, violating the requirement to block public IPv4 access.
Option D: Egress-only internet gateways are designed for IPv6, not IPv4.
Reference: Amazon VPC NAT Gateway: AWS Documentation – NAT Gateway
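A minimal boto3 sketch for one Availability Zone (the VPC and subnet IDs are placeholders; repeat the NAT gateway in the second AZ for fault tolerance):

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"
PUBLIC_SUBNET_AZ_A = "subnet-0aaa1111aaaa1111a"  # public subnet in the first AZ
PRIVATE_SUBNETS = ["subnet-0bbb2222bbbb2222b", "subnet-0ccc3333cccc3333c"]

# NAT gateway in a public subnet (repeat in the second AZ for high availability).
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=PUBLIC_SUBNET_AZ_A, AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Shared route table for the private subnets: the default route goes to the NAT
# gateway and there is no internet gateway route, so no inbound public IPv4
# traffic can reach the instances while outbound requests still succeed.
rtb = ec2.create_route_table(VpcId=VPC_ID)
rtb_id = rtb["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", NatGatewayId=nat_id)
for subnet_id in PRIVATE_SUBNETS:
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)
```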
A company’s packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution, but wants to further reduce data transfer costs. The company cannot modify the application’s source code.
What should a solutions architect do to reduce costs?
- A . Use Lambda@Edge to compress the files as they are sent to users.
- B . Enable Amazon S3 Transfer Acceleration to reduce the response times.
- C . Enable caching on the CloudFront distribution to store generated files at the edge.
- D . Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.
A
Explanation:
Lambda@Edge allows you to run functions at CloudFront edge locations, enabling modifications to content before it’s delivered to users, including compression ― which directly reduces data transfer cost.
“Lambda@Edge can be used to compress or modify your content before delivering it to end users, which reduces the amount of data transferred.”
― Lambda@Edge Use Cases
Since code modifications aren’t allowed, using Lambda@Edge is a non-invasive way to:
Compress responses at the edge.
Reduce the amount of data transferred, which lowers CloudFront data transfer costs.
Incorrect options:
B: S3 Transfer Acceleration improves response times, not cost.
C: Caching does not help because the content is always unique (single-use).
D: Multipart uploads help with large file uploads, not with data transfer to users.
Reference: Lambda@Edge for Content Compression; CloudFront Pricing and Cost Optimization
A company is building a web application that serves a content management system. The content management system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and other website assets in the content management system.
A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content with the least possible lag time.
Which solution will meet these requirements?
- A . Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from the EC2 instance that was launched most recently. Configure the ALB to make changes to the website assets only in the newest EC2 instance.
- B . Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
- C . Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 instance downloads the website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3 sync command once each hour to keep files up to date.
- D . Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting application to reference the website assets that are stored in the secondary EBS volume.
B
Explanation:
Amazon EFS provides a shared, elastic, low-latency file system that can be mounted concurrently by many EC2 instances across multiple Availability Zones, delivering strong read-after-write consistency so all instances see updates almost immediately. This is the standard pattern for CMS-style workloads that require shared, up-to-date assets with minimal lag. Syncing local copies from S3 (C) introduces polling windows and eventual consistency delays; hourly sync is not near-real time. Copying from a “newest instance” (A) is brittle and not scalable. EBS volumes/snapshots (D) are single-instance, single-AZ block devices and not designed for multi-writer sharing across instances/AZs. EFS’s multi-AZ design and POSIX semantics provide the simplest, most reliable solution with the least operational overhead.
Reference: Amazon EFS ― Use cases and benefits; Performance and consistency model; Mount targets across multiple AZs; Shared file storage for web content and CMS.
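A minimal boto3 sketch of the shared file system (the subnet and security group IDs are placeholders): create the EFS file system, add a mount target in each Availability Zone used by the Auto Scaling group, and mount it from every instance.

```python
import boto3

efs = boto3.client("efs")

# Shared, multi-AZ file system for the CMS website assets.
fs = efs.create_file_system(
    CreationToken="cms-shared-assets",
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "cms-shared-assets"}],
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone used by the Auto Scaling group.
# (The file system must reach the "available" state before this call succeeds.)
for subnet_id in ["subnet-0aaa1111aaaa1111a", "subnet-0bbb2222bbbb2222b"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ccc3333cccc3333c"],  # must allow NFS (TCP 2049) from the instances
    )

# Each instance then mounts the file system, for example from EC2 user data using
# the EFS mount helper (amazon-efs-utils):
#   mount -t efs <fs_id>:/ /var/www/html/assets
# and the CMS is configured to read and write its assets under that mount point.
```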
