Practice Free SAA-C03 Exam Online Questions
A company has primary and secondary data centers that are 500 miles (804.7 km) apart and interconnected with high-speed fiber-optic cable. The company needs a highly available and secure network connection between its data centers and a VPC on AWS for a mission-critical workload.
A solutions architect must choose a connection solution that provides maximum resiliency.
Which solution meets these requirements?
- A . Two AWS Direct Connect connections from the primary data center terminating at two Direct Connect locations on two separate devices
- B . A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on the same device
- C . Two AWS Direct Connect connections from each of the primary and secondary data centers terminating at two Direct Connect locations on two separate devices
- D . A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on two separate devices
C
Explanation:
For maximum resiliency and fault tolerance in a mission-critical scenario, AWS recommends redundant Direct Connect connections from multiple data centers to multiple AWS Direct Connect locations.
This protects against:
Data center failure
Device failure
Location outages
“For workloads that require high availability, we recommend that you use multiple Direct Connect connections at multiple Direct Connect locations.”
― AWS Direct Connect Resiliency Recommendations
Option C follows the AWS maximum resiliency model.
Reference: AWS Direct Connect Resiliency Models; High Availability Using AWS Direct Connect
A company deployed an application in two AWS Regions. If the application fails in one Region, traffic
must fail over to the second Region. The failover must avoid stale DNS client caches, and the company requires one endpoint for both Regions.
Which solution meets these requirements?
- A . Use a CloudFront distribution with multiple origins.
- B . Use Route 53 weighted routing with equal weights.
- C . Use AWS Global Accelerator and assign static anycast IPs to the application.
- D . Use Route 53 IP-based routing to switch Regions.
C
Explanation:
AWS Global Accelerator provides static anycast IP addresses that remain constant regardless of Regional failover. AWS directs traffic to the optimal healthy Region without relying on DNS TTL values, eliminating the risk of stale DNS caches.
Route 53 routing (Options B and D) still depends on DNS caching behavior. CloudFront (Option A) is for content delivery, not Regional failover for applications.
A company uses a single Amazon S3 bucket to store data that multiple business applications must access. The company hosts the applications on Amazon EC2 Windows instances that are in a VPC. The company configured a bucket policy for the S3 bucket to grant the applications access to the bucket.
The company continually adds more business applications to the environment. As the number of business applications increases, the policy document becomes more difficult to manage. The S3 bucket policy document will soon reach its policy size quota. The company needs a solution to scale its architecture to handle more business applications.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Migrate the data from the S3 bucket to an Amazon Elastic File System (Amazon EFS) volume. Ensure that all application owners configure their applications to use the EFS volume.
- B . Deploy an AWS Storage Gateway appliance for each application. Reconfigure the applications to use a dedicated Storage Gateway appliance to access the S3 objects instead of accessing the objects directly.
- C . Create a new S3 bucket for each application. Configure S3 replication to keep the new buckets synchronized with the original S3 bucket. Instruct application owners to use their respective S3 buckets.
- D . Create an S3 access point for each application. Instruct application owners to use their respective S3 access points.
D
Explanation:
Amazon S3 Access Points simplify managing data access for shared datasets in S3 by allowing the creation of distinct access policies for different applications or users. Each access point has its own policy and can be managed independently. This method avoids overloading a single bucket policy and helps remain within policy size limits.
Option D provides a scalable and operationally efficient solution by offloading individual access controls from a central bucket policy to individually managed access points, which is ideal for environments with many consuming applications.
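The access point pattern is straightforward to sketch. The helper below builds a per-application access point policy; the account ID, Region, access point name, and role ARN are all hypothetical, and in practice you would pass the resulting document to the S3 Control `PutAccessPointPolicy` API:

```python
import json

def access_point_policy(account_id, region, access_point_name, app_role_arn):
    """Build an access point policy that grants one application's IAM role
    read/write access through its own access point (hypothetical names).

    Because each application gets a dedicated access point, the shared
    bucket policy stays small and below the policy size quota."""
    ap_arn = f"arn:aws:s3:{region}:{account_id}:accesspoint/{access_point_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": app_role_arn},
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"{ap_arn}/object/*",
            }
        ],
    }

policy = access_point_policy(
    "111122223333", "us-east-1", "billing-app-ap",
    "arn:aws:iam::111122223333:role/billing-app-role")
print(json.dumps(policy, indent=2))
```

Adding another application then means creating one more access point with its own small policy, leaving the bucket policy untouched.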
A company has a serverless web application composed of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application’s ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.
Which solution will meet these requirements?
- A . Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.
- B . Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.
- C . Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.
- D . Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.
A
Explanation:
Provisioned Concurrency:
AWS Lambda’s provisioned concurrency ensures that a predefined number of execution environments are pre-warmed and ready to handle requests, reducing latency during traffic spikes.
This solution optimizes costs during low-traffic periods when combined with AWS Application Auto Scaling to dynamically adjust the provisioned concurrency based on demand.
Incorrect Options Analysis:
Option B: Switching to EC2 would increase complexity and cost for a serverless application.
Option C: A fixed concurrency level may result in over-provisioning during low-traffic periods, leading to higher costs.
Option D: Periodically warming functions does not effectively handle sudden spikes in traffic.
Reference: AWS Lambda Provisioned Concurrency
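As a sketch of how Option A wires together (the function name and alias are hypothetical), the dictionaries below mirror the Application Auto Scaling `RegisterScalableTarget` and `PutScalingPolicy` request shapes for Lambda provisioned concurrency:

```python
def provisioned_concurrency_scaling(function_name, alias,
                                    min_capacity, max_capacity,
                                    target_utilization=0.7):
    """Request payloads (as plain dicts) for scaling a Lambda alias's
    provisioned concurrency with Application Auto Scaling.

    Target tracking on LambdaProvisionedConcurrencyUtilization adds
    pre-warmed environments for spikes and removes them when traffic
    is low, which is what keeps costs down in quiet periods."""
    resource_id = f"function:{function_name}:{alias}"
    dimension = "lambda:function:ProvisionedConcurrency"
    return {
        "RegisterScalableTarget": {
            "ServiceNamespace": "lambda",
            "ResourceId": resource_id,
            "ScalableDimension": dimension,
            "MinCapacity": min_capacity,
            "MaxCapacity": max_capacity,
        },
        "PutScalingPolicy": {
            "PolicyName": f"{function_name}-pc-target-tracking",
            "ServiceNamespace": "lambda",
            "ResourceId": resource_id,
            "ScalableDimension": dimension,
            "PolicyType": "TargetTrackingScaling",
            "TargetTrackingScalingPolicyConfiguration": {
                "TargetValue": target_utilization,
                "PredefinedMetricSpecification": {
                    "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
                },
            },
        },
    }

requests = provisioned_concurrency_scaling("checkout-api", "live", 5, 100)
```

Note that the scalable resource is the alias, not `$LATEST`; provisioned concurrency can only be configured on a published version or alias.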
A company’s solutions architect is building a static website to be deployed in Amazon S3 for a production environment. The website integrates with an Amazon Aurora PostgreSQL database by using an AWS Lambda function. The website that is deployed to production will use a Lambda alias that points to a specific version of the Lambda function.
The company must rotate the database credentials every 2 weeks. Lambda functions that the company deployed previously must be able to use the most recent credentials.
Which solution will meet these requirements?
- A . Store the database credentials in AWS Secrets Manager. Turn on rotation. Write code in the Lambda function to retrieve the credentials from Secrets Manager.
- B . Include the database credentials as part of the Lambda function code. Update the credentials periodically and deploy the new Lambda function.
- C . Use Lambda environment variables. Update the environment variables when new credentials are available.
- D . Store the database credentials in AWS Systems Manager Parameter Store. Turn on rotation. Write code in the Lambda function to retrieve the credentials from Systems Manager Parameter Store.
A
Explanation:
AWS Secrets Manager is the managed service for securely storing, rotating, and retrieving database credentials. When you store Aurora credentials in Secrets Manager and enable automatic rotation, Secrets Manager updates the credentials in both the database and the stored secret.
Each Lambda function version or alias can call Secrets Manager at runtime to retrieve the current secret value, so even older deployed Lambda versions that use an alias will always obtain the most recent credentials without redeployment.
Why others are not suitable:
B: Embeds credentials in code, requiring redeployment on every rotation and violating security best practices.
C: Environment variables are version-specific; old aliases would continue using outdated values unless you redeploy or change them.
D: Parameter Store can store and rotate secrets but is less integrated for database credential rotation than Secrets Manager; Secrets Manager is the purpose-built minimal-overhead choice here.
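Inside the Lambda function, Option A amounts to fetching the secret at runtime instead of baking credentials into code or environment variables. The parser below handles the JSON `SecretString` that Secrets Manager stores for an RDS/Aurora rotation secret; in a real function you would obtain `secret_string` from `boto3.client("secretsmanager").get_secret_value(...)` (omitted here so the sketch stays self-contained and runnable), ideally with a short-lived cache so rotated values are picked up promptly:

```python
import json

def parse_aurora_secret(secret_string):
    """Turn the SecretString of an Aurora rotation secret into connection
    parameters. Standard keys for RDS/Aurora secrets: username, password,
    host, port, dbname (an engine key is present too but unused here)."""
    secret = json.loads(secret_string)
    return {
        "user": secret["username"],
        "password": secret["password"],
        "host": secret["host"],
        "port": int(secret.get("port", 5432)),
        "dbname": secret.get("dbname", "postgres"),
    }

# Example SecretString as Secrets Manager would return it (made-up values):
sample = json.dumps({
    "engine": "postgres", "username": "app_user", "password": "s3cr3t",
    "host": "mycluster.cluster-abc.us-east-1.rds.amazonaws.com",
    "port": 5432, "dbname": "appdb",
})
conn = parse_aurora_secret(sample)
```

Because the lookup happens at invocation time, every deployed version behind the alias sees the current credentials with no redeployment.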
A company has developed an API by using an Amazon API Gateway REST API and AWS Lambda functions. The API serves static content and dynamic content to users worldwide. The company wants to decrease the latency of transferring the content for API requests.
Which solution will meet these requirements?
- A . Deploy the REST API as an edge-optimized API endpoint. Enable caching. Enable content encoding in the API definition to compress the application data in transit.
- B . Deploy the REST API as a Regional API endpoint. Enable caching. Enable content encoding in the API definition to compress the application data in transit.
- C . Deploy the REST API as an edge-optimized API endpoint. Enable caching. Configure reserved concurrency for the Lambda functions.
- D . Deploy the REST API as a Regional API endpoint. Enable caching. Configure reserved concurrency for the Lambda functions.
A
Explanation:
Edge-Optimized API: Designed for global users by routing requests through CloudFront’s edge locations, reducing latency.
Content Encoding: Enabling content encoding compresses data, further optimizing performance by decreasing payload size.
Caching: Adding API Gateway caching reduces the number of calls to Lambda and database backends, improving latency.
Reserved Concurrency: Although useful, this does not directly affect latency for transferring static and dynamic content.
Reference: AWS API Gateway Edge-Optimized APIs Documentation
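Roughly, Option A corresponds to the following API Gateway settings, expressed here as request parameters for `create_rest_api` and `create_deployment` (the API name, stage name, compression threshold, and cache size are hypothetical choices):

```python
def edge_optimized_api_settings(api_name, min_compression_bytes=1024):
    """Settings implementing Option A: an edge-optimized endpoint,
    payload compression above a size threshold, and a stage cache."""
    return {
        "create_rest_api": {
            "name": api_name,
            # EDGE routes requests through CloudFront edge locations.
            "endpointConfiguration": {"types": ["EDGE"]},
            # Compress responses larger than this many bytes.
            "minimumCompressionSize": min_compression_bytes,
        },
        "create_deployment": {
            "stageName": "prod",
            "cacheClusterEnabled": True,
            "cacheClusterSize": "0.5",  # smallest cache cluster, in GB
        },
    }

settings = edge_optimized_api_settings("content-api")
```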
A company has a VPC with multiple private subnets that host multiple applications. The applications must not be accessible to the internet. However, the applications need to access multiple AWS services. The applications must not use public IP addresses to access the AWS services.
Which solution will meet these requirements?
- A . Configure interface VPC endpoints for the required AWS services. Route traffic from the private subnets through the interface VPC endpoints.
- B . Deploy a NAT gateway in each private subnet. Route traffic from the private subnets through the NAT gateways.
- C . Deploy internet gateways in each private subnet. Route traffic from the private subnets through the internet gateways.
- D . Set up an AWS Direct Connect connection between the private subnets. Route traffic from the private subnets through the Direct Connect connection.
A
Explanation:
AWS VPC endpoints (interface and gateway) allow private connectivity from VPC resources to AWS services without requiring public IP addresses or internet gateways. This ensures applications remain isolated in private subnets while securely accessing AWS services. NAT gateways (B) would allow internet access, which does not meet the security requirement. Internet gateways (C) directly expose traffic to the internet, which violates the isolation requirement. Direct Connect (D) connects on-premises environments to AWS but does not provide service access from private subnets.
Therefore, option A ― using interface VPC endpoints ― is the correct solution.
Reference:
• Amazon VPC User Guide ― VPC endpoints (interface and gateway)
• AWS Well-Architected Framework ― Security Pillar: Network isolation and private connectivity
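As a sketch, the request parameters for one interface endpoint (EC2 `create_vpc_endpoint`) look like this; the VPC, subnet, and security group IDs are hypothetical, and one endpoint is created per AWS service the applications need:

```python
def interface_endpoint_params(vpc_id, region, service,
                              subnet_ids, security_group_id):
    """Parameters for an interface VPC endpoint to one AWS service.

    PrivateDnsEnabled lets applications keep using the service's regular
    DNS name while traffic resolves to private IPs inside the VPC, so no
    public IP addresses or internet path are involved."""
    return {
        "VpcId": vpc_id,
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.{service}",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": [security_group_id],
        "PrivateDnsEnabled": True,
    }

params = interface_endpoint_params(
    "vpc-0abc123", "us-east-1", "sqs",
    ["subnet-0aaa111", "subnet-0bbb222"], "sg-0ccc333")
```

For Amazon S3 and DynamoDB specifically, a gateway endpoint (which has no hourly charge) is also an option alongside the interface type.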
A company has an online gaming application that has TCP and UDP multiplayer gaming capabilities. The company uses Amazon Route 53 to point the application traffic to multiple Network Load Balancers (NLBs) in different AWS Regions. The company needs to improve application performance and decrease latency for the online game in preparation for user growth.
Which solution will meet these requirements?
- A . Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control: max-age parameter.
- B . Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-based routing.
- C . Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use the correct listener ports.
- D . Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method caching for the different stages.
C
Explanation:
AWS Global Accelerator is designed to improve the availability and performance of applications with global users by using the AWS global network. It provides static anycast IP addresses and routes user traffic over the AWS edge network to the optimal AWS Region and endpoint based on health, geography, and routing policies. Global Accelerator supports both TCP and UDP traffic and can have Network Load Balancers as endpoints.
For latency-sensitive workloads such as multiplayer gaming, Global Accelerator reduces latency and jitter compared to internet-based routing and handles Regional failover quickly.
CloudFront (Option A) is optimized for HTTP/HTTPS content caching and is not appropriate for arbitrary TCP/UDP gaming traffic. Application Load Balancers (Option B) do not support UDP traffic. API Gateway (Option D) is for HTTP APIs and is not suitable for raw TCP/UDP game traffic.
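Because the game uses both TCP and UDP, the accelerator needs one listener per protocol. A sketch of the Global Accelerator `create_listener` payloads (the accelerator ARN and game port are hypothetical):

```python
def accelerator_listeners(accelerator_arn, game_port):
    """Listener payloads for AWS Global Accelerator covering TCP and UDP
    on the same game port. Each listener is then given endpoint groups
    whose endpoints are the Regional Network Load Balancers."""
    return [
        {
            "AcceleratorArn": accelerator_arn,
            "Protocol": protocol,
            "PortRanges": [{"FromPort": game_port, "ToPort": game_port}],
        }
        for protocol in ("TCP", "UDP")
    ]

listeners = accelerator_listeners(
    "arn:aws:globalaccelerator::111122223333:accelerator/example", 7777)
```

Clients connect to the accelerator's two static anycast IPs, so Route 53 records never need to change during a Regional failover.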
A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3.
What should a solutions architect recommend to satisfy these requirements?
- A . Server-side encryption with customer-provided encryption keys
- B . Client-side encryption with Amazon S3 managed encryption keys
- C . Server-side encryption with keys stored in AWS Key Management Service (AWS KMS)
- D . Client-side encryption with a key stored in AWS Key Management Service (AWS KMS)
C
Explanation:
Although the question says “before sending it,” AWS best practice for sensitive data is SSE-KMS (Server-side encryption with AWS KMS keys), which gives full key usage auditing. It integrates with AWS KMS and provides compliance-friendly encryption at rest automatically.
“SSE-KMS uses AWS Key Management Service to manage encryption keys. SSE-KMS also provides an audit trail of key usage.”
― Protecting Data Using Server-Side Encryption
Why not D?
Client-side encryption requires custom key management and adds operational overhead. C is simpler and compliant.
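With SSE-KMS the client simply tags each upload with the encryption settings and S3 and KMS handle the rest. A sketch of the S3 `put_object` parameters (the bucket name, object key, and KMS key ARN are hypothetical):

```python
def sse_kms_put_object_params(bucket, key, body, kms_key_id):
    """put_object parameters for SSE-KMS: S3 encrypts the object at rest
    with the given KMS key, and every key use is auditable in CloudTrail."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": kms_key_id,
    }

params = sse_kms_put_object_params(
    "sensitive-user-data", "users/123.json", b"{}",
    "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab")
```

A bucket policy can additionally deny any `PutObject` that lacks the `s3:x-amz-server-side-encryption: aws:kms` condition, enforcing the compliance requirement.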
A solutions architect is designing the architecture for a company website that is composed of static content. The company’s target customers are located in the United States and Europe.
Which architecture should the solutions architect recommend to MINIMIZE cost?
- A . Store the website files on Amazon S3 in the us-east-2 Region. Use an Amazon CloudFront distribution with the price class configured to limit the edge locations in use.
- B . Store the website files on Amazon S3 in the us-east-2 Region. Use an Amazon CloudFront distribution with the price class configured to maximize the use of edge locations.
- C . Store the website files on Amazon S3 in the us-east-2 Region and the eu-west-1 Region. Use an Amazon CloudFront geolocation routing policy to route requests to the closest Region to the user.
- D . Store the website files on Amazon S3 in the us-east-2 Region and the eu-west-1 Region. Use an Amazon CloudFront distribution with an Amazon Route 53 latency routing policy to route requests to the closest Region to the user.
A
Explanation:
The question focuses on minimizing costs while serving static content to users in the US and Europe.
Option A uses a single S3 bucket and configures CloudFront to limit edge locations, reducing costs by using fewer edge locations while still improving performance.
Option B maximizes edge locations, which increases costs unnecessarily.
Options C and D involve storing data in multiple Regions, which increases storage and operational costs. Thus, Option A is the most cost-effective solution.
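The price-class choice can be reduced to a small lookup. This is a simplified sketch (real coverage is defined per edge location, not per market, so treat the market names as illustrative):

```python
def choose_price_class(markets):
    """Pick the cheapest CloudFront price class covering the target markets.

    PriceClass_100 covers North America and Europe; PriceClass_200 adds
    most of Asia, the Middle East, and Africa; PriceClass_All uses every
    edge location."""
    cheap = {"north-america", "europe"}
    mid = cheap | {"asia", "middle-east", "africa"}
    wanted = set(markets)
    if wanted <= cheap:
        return "PriceClass_100"
    if wanted <= mid:
        return "PriceClass_200"
    return "PriceClass_All"

# US and Europe only, as in the question, lands on the cheapest class:
print(choose_price_class(["north-america", "europe"]))  # PriceClass_100
```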
