Practice Free SAA-C03 Exam Online Questions
A company is moving data from an on-premises data center to the AWS Cloud. The company must store all its data in an Amazon S3 bucket. To comply with regulations, the company must also ensure that the data will be protected against overwriting indefinitely.
Which solution will ensure that the data in the S3 bucket cannot be overwritten?
- A . Enable versioning for the S3 bucket. Use server-side encryption with Amazon S3 managed keys (SSE-S3) to protect the data.
- B . Disable versioning for the S3 bucket. Configure S3 Object Lock for the S3 bucket with a retention period of 1 year.
- C . Enable versioning for the S3 bucket. Configure S3 Object Lock for the S3 bucket with a legal hold.
- D . Configure S3 Storage Lens for the S3 bucket. Use server-side encryption with customer-provided keys (SSE-C) to protect the data.
C
Explanation:
S3 Object Lock provides write-once-read-many (WORM) protection: it prevents object versions from being overwritten or deleted. A legal hold protects an object version indefinitely, until the hold is explicitly removed, which matches the requirement to protect the data against overwriting indefinitely. Object Lock also requires that versioning be enabled on the bucket.
“With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.”
― S3 Object Lock
Incorrect Options:
A: Versioning preserves prior versions, but those versions can still be deleted; versioning alone does not enforce WORM protection, and SSE-S3 addresses encryption, not overwrite protection.
B: Object Lock requires versioning to be enabled, and a 1-year retention period is time-bound, not indefinite.
D: Storage Lens is an analytics feature; SSE-C provides encryption, not overwrite protection.
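The two mechanisms discussed above, bucket versioning and an Object Lock legal hold, can be sketched as the S3 API request shapes they correspond to (a minimal sketch; the bucket and key names are placeholders):

```python
import json

# Sketch of the S3 API parameters involved; these dicts mirror what would be
# passed to boto3's s3 client calls. Names below are placeholders.
BUCKET = "example-compliance-bucket"

# 1. Enable versioning (s3.put_bucket_versioning). Object Lock can only be
#    used on versioned buckets.
versioning_request = {
    "Bucket": BUCKET,
    "VersioningConfiguration": {"Status": "Enabled"},
}

# 2. Apply a legal hold to an object version (s3.put_object_legal_hold).
#    A legal hold has no expiry: it protects the version until it is removed.
legal_hold_request = {
    "Bucket": BUCKET,
    "Key": "records/report.csv",  # placeholder key
    "LegalHold": {"Status": "ON"},
}

print(json.dumps(legal_hold_request, indent=2))
```

At runtime these dicts would be passed to boto3's `put_bucket_versioning` and `put_object_legal_hold` calls.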
A company wants to publish a private website for its on-premises employees. The website consists of several HTML pages and image files. The website must be available only through HTTPS and must be available only to on-premises employees. A solutions architect plans to store the website files in an Amazon S3 bucket.
Which solution will meet these requirements?
- A . Create an S3 bucket policy to deny access when the source IP address is not the public IP address of the on-premises environment. Set up an Amazon Route 53 alias record to point to the S3 bucket. Provide the alias record to the on-premises employees to grant the employees access to the website.
- B . Create an S3 access point to provide website access. Attach an access point policy to deny access when the source IP address is not the public IP address of the on-premises environment. Provide the S3 access point alias to the on-premises employees to grant the employees access to the website.
- C . Create an Amazon CloudFront distribution that includes an origin access control (OAC) that is configured for the S3 bucket. Use AWS Certificate Manager for SSL. Use AWS WAF with an IP set rule that allows access for the on-premises IP address. Set up an Amazon Route 53 alias record to point to the CloudFront distribution.
- D . Create an Amazon CloudFront distribution that includes an origin access control (OAC) that is configured for the S3 bucket. Create a CloudFront signed URL for the objects in the bucket. Set up an Amazon Route 53 alias record to point to the CloudFront distribution. Provide the signed URL to the on-premises employees to grant the employees access to the website.
C
Explanation:
This solution uses CloudFront to serve the website securely over HTTPS, with AWS Certificate Manager (ACM) providing the SSL/TLS certificate.
Origin access control (OAC) ensures that only CloudFront can access the S3 bucket directly. AWS WAF with an IP set rule restricts access to the website, allowing only the on-premises IP address. Route 53 is used to create an alias record pointing to the CloudFront distribution. This setup ensures secure, private access to the website with low administrative overhead.
Options A and B: S3 static website endpoints do not support HTTPS, so IP-restricted bucket policies or access points alone cannot meet the HTTPS requirement, and they offer less protection than CloudFront with WAF.
Option D: Signed URLs are more suitable for temporary, expiring access rather than a permanent solution for on-premises employees.
AWS Reference: Amazon CloudFront with Origin Access Control
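The OAC piece of this design hinges on a bucket policy that admits only the CloudFront service principal for the specific distribution. A minimal sketch of that policy follows; the account ID, distribution ID, and bucket name are placeholders:

```python
import json

# Sketch of the S3 bucket policy commonly used with CloudFront origin access
# control (OAC): only requests made by CloudFront on behalf of this one
# distribution may read objects. All ARNs below are placeholders.
DISTRIBUTION_ARN = "arn:aws:cloudfront::111122223333:distribution/EXAMPLEID"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-website-bucket/*",
            # Restrict the service principal to this specific distribution.
            "Condition": {"StringEquals": {"AWS:SourceArn": DISTRIBUTION_ARN}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

The `AWS:SourceArn` condition is what prevents any other CloudFront distribution (or direct S3 access) from reading the bucket.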
A company hosts multiple applications on AWS for different product lines. The applications use different compute resources, including Amazon EC2 instances and Application Load Balancers. The applications run in different AWS accounts under the same organization in AWS Organizations across multiple AWS Regions. Teams for each product line have tagged each compute resource in the individual accounts.
The company wants more details about the cost for each product line from the consolidated billing feature in Organizations.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Select a specific AWS generated tag in the AWS Billing console.
- B . Select a specific user-defined tag in the AWS Billing console.
- C . Select a specific user-defined tag in the AWS Resource Groups console.
- D . Activate the selected tag from each AWS account.
- E . Activate the selected tag from the Organizations management account.
B, E
Explanation:
To break down consolidated billing costs by “product line” using tags, the company must use cost allocation tags and ensure those tags are activated for billing at the payer/management level. Because the teams have already applied user-defined tags to the compute resources across accounts, the correct approach is to (1) use those user-defined tags in the billing tools and (2) activate them so they appear in cost and usage reporting for consolidated billing.
B is required because cost reporting by product line depends on selecting the user-defined cost allocation tag key (for example, ProductLine=…) in AWS billing and cost management tooling (such as Cost Explorer reports). AWS-generated tags are not what teams typically use for chargeback by business dimension, and the scenario explicitly says teams tagged resources (i.e., user-defined).
E is required because in an AWS Organizations consolidated billing setup, cost allocation tag activation is handled centrally from the Organizations management account so that the tags can be used consistently in consolidated cost views and reporting. Activating tags centrally ensures the billing system recognizes and includes those tag dimensions in cost allocation data.
Option C is not relevant to billing breakdown; Resource Groups is for organizing and operating on resources, not for consolidated billing allocation.
Option D is unnecessary in this scenario’s intended best practice because activation for consolidated billing reporting is done from the management account to standardize reporting. In short, you tag resources (already done), then activate the tag for billing and use it in billing reports to see cost by product line across accounts and Regions.
Therefore, B and E together meet the requirement with the correct billing-focused steps.
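Once the tag is activated from the management account, costs can be grouped by it in billing tooling. A sketch of the Cost Explorer request shape (boto3 `ce.get_cost_and_usage`), assuming the teams used a tag key such as ProductLine (the key and dates are assumptions):

```python
# Sketch of a Cost Explorer query that breaks down cost by a user-defined
# cost allocation tag. The tag key "ProductLine" and the dates are placeholders.
cost_by_product_line = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-02-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    # Grouping by a TAG dimension only works after the tag has been activated
    # as a cost allocation tag in the management account.
    "GroupBy": [{"Type": "TAG", "Key": "ProductLine"}],
}
```

Run from the management account, this returns one cost group per ProductLine tag value across all member accounts in the organization.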
A news company that has reporters all over the world is hosting its broadcast system on AWS. The reporters send live broadcasts to the broadcast system. The reporters use software on their phones to send live streams through the Real Time Messaging Protocol (RTMP).
A solutions architect must design a solution that gives the reporters the ability to send the highest quality streams. The solution must provide accelerated TCP connections back to the broadcast system.
What should the solutions architect use to meet these requirements?
- A . Amazon CloudFront
- B . AWS Global Accelerator
- C . AWS Client VPN
- D . Amazon EC2 instances and AWS Elastic IP addresses
B
Explanation:
AWS Global Accelerator: This service provides a global fixed entry point to your applications and optimizes the path to your application through the AWS global network, reducing latency and improving performance.
Accelerated TCP Connections:
Global Accelerator uses the AWS global network to route traffic to the nearest edge location, improving the performance and reliability of your live streams.
It provides static IP addresses that act as a fixed entry point to your application, simplifying DNS management.
High-Quality Streams:
By leveraging Global Accelerator, reporters can send live streams with the highest quality and low latency.
This service automatically reroutes traffic to the nearest available AWS Region, ensuring consistent performance even during traffic spikes or failures.
Operational Efficiency: Using Global Accelerator simplifies the network setup and provides an optimized path for live streams without the need for complex configurations, making it an efficient solution for real-time streaming applications.
Reference: AWS Global Accelerator
How Global Accelerator Works
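As a sketch, the Global Accelerator setup described above amounts to an accelerator plus a TCP listener on the RTMP port (boto3 `globalaccelerator` request shapes; the name and ARN are placeholders, and 1935 is RTMP's default port):

```python
# Sketch of the two Global Accelerator requests involved; values are placeholders.

# 1. Create the accelerator (globalaccelerator.create_accelerator). This
#    yields two static anycast IP addresses as the fixed entry point.
accelerator_request = {
    "Name": "broadcast-ingest",
    "IpAddressType": "IPV4",
    "Enabled": True,
}

# 2. Add a TCP listener for RTMP (globalaccelerator.create_listener).
#    RTMP runs over TCP, which is what Global Accelerator accelerates here.
listener_request = {
    "AcceleratorArn": "arn:aws:globalaccelerator::111122223333:accelerator/EXAMPLE",
    "Protocol": "TCP",
    "PortRanges": [{"FromPort": 1935, "ToPort": 1935}],
}
```

An endpoint group pointing at the broadcast system's load balancer or instances would complete the setup.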
A company runs Amazon EC2 instances as web servers. Peak traffic occurs at two predictable times each day. The web servers remain mostly idle during the rest of the day.
A solutions architect must manage the web servers while maintaining fault tolerance in the most cost-effective way.
Which solution will meet these requirements?
- A . Use an EC2 Auto Scaling group to scale the instances based on demand.
- B . Purchase Reserved Instances to ensure peak capacity at all times.
- C . Use a cron job to stop the EC2 instances when traffic demand is low.
- D . Use a script to vertically scale the EC2 instances during peak demand.
A
Explanation:
AWS documentation states that EC2 Auto Scaling is the recommended, cost-effective, and fault-tolerant method to manage workloads with predictable or varying demand. Auto Scaling automatically adds instances during peak traffic and terminates them when demand is low, reducing compute cost while maintaining availability across multiple Availability Zones.
Reserved Instances (Option B) would force the company to pay for peak capacity all day, which is not cost-effective.
Stopping instances manually (Option C) or vertically scaling instances (Option D) reduces fault tolerance and increases operational overhead.
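A demand-based setup like the one in option A is typically expressed as a target tracking policy on the Auto Scaling group. A minimal sketch of the request shape (boto3 `autoscaling.put_scaling_policy`); the group name and the 50% CPU target are assumptions:

```python
# Sketch of a target-tracking scaling policy for the web servers' Auto Scaling
# group. The group name and target value are placeholders.
scaling_policy_request = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        # Scale out when average CPU across the group exceeds the target,
        # scale in when it drops below, so idle periods cost less.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
}
```

Because the peaks are predictable, scheduled actions could be layered on top, but target tracking alone already handles both peaks and idle periods.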
A company is migrating a daily Microsoft Windows batch job from the company’s on-premises environment to AWS. The current batch job runs for up to 1 hour. The company wants to modernize the batch job process for the cloud environment.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create a fleet of Amazon EC2 instances in an Auto Scaling group to handle the Windows batch job processing.
- B . Implement an AWS Lambda function to process the Windows batch job. Use an Amazon EventBridge rule to invoke the Lambda function.
- C . Use AWS Fargate to deploy the Windows batch job as a container. Use AWS Batch to manage the batch job processing.
- D . Use Amazon Elastic Kubernetes Service (Amazon EKS) on Amazon EC2 instances to orchestrate Windows containers for the batch job processing.
C
Explanation:
AWS Batch supports Windows-based jobs and automates provisioning and scaling of compute environments. Paired with AWS Fargate, it removes the need to manage infrastructure. This solution requires the least operational overhead and is cloud-native, providing flexibility and scalability.
Reference: AWS Documentation: AWS Batch with Fargate for Windows Workloads
A developer creates a web application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The developer reviews the deployment and notices some suspicious traffic to the application. The traffic is malicious and is coming from a single public IP address. A solutions architect must block the public IP address.
Which solution will meet this requirement?
- A . Create a security group rule to deny all inbound traffic from the suspicious IP address. Associate the security group with the ALB.
- B . Implement Amazon Detective to monitor traffic and to block malicious activity from the internet. Configure Detective to integrate with the ALB.
- C . Implement AWS Resource Access Manager (AWS RAM) to manage traffic rules and to block malicious activity from the internet. Associate AWS RAM with the ALB.
- D . Add the malicious IP address to an IP set in AWS WAF. Create a web ACL. Include an IP set rule with the action set to BLOCK. Associate the web ACL with the ALB.
D
Explanation:
When an application is fronted by an Application Load Balancer (ALB) and malicious traffic is detected from a specific IP, the correct way to block the IP is by using AWS WAF (Web Application Firewall).
With AWS WAF, you can create an IP Set to include the offending IP address or range.
Then create a Web ACL (Access Control List) with a rule set to BLOCK requests from that IP set.
Finally, associate the Web ACL with the ALB.
Security groups (Option A) cannot deny specific IPs: security group rules are allow-only, so there is no way to express an explicit deny.
Amazon Detective (Option B) is a security analysis and investigation tool; it doesn’t block traffic.
AWS RAM (Option C) is for resource sharing across accounts, not for blocking IPs.
This approach aligns with AWS’s Security Pillar of the Well-Architected Framework and is fully managed, with minimal operational effort.
Reference: Using AWS WAF with an Application Load Balancer
Block IPs with AWS WAF
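The blocking rule described above can be sketched as the rule entry passed to `wafv2.create_web_acl` (the IP set ARN and metric name are placeholders):

```python
# Sketch of a web ACL rule that references an IP set and blocks matching
# requests (one entry of the "Rules" list in wafv2.create_web_acl).
# The IP set ARN is a placeholder; the IP set itself would be created first
# with wafv2.create_ip_set containing the malicious address in CIDR form.
block_rule = {
    "Name": "block-malicious-ip",
    "Priority": 0,
    "Statement": {
        "IPSetReferenceStatement": {
            "ARN": "arn:aws:wafv2:us-east-1:111122223333:regional/ipset/malicious/EXAMPLE"
        }
    },
    # Action BLOCK rejects matching requests before they reach the ALB targets.
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockMaliciousIP",
    },
}
```

The web ACL containing this rule is then associated with the ALB's ARN via `wafv2.associate_web_acl`.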
A company runs a Node.js function on a server in its on-premises data center. The data center stores data in a PostgreSQL database. The company stores the credentials in a connection string in an environment variable on the server. The company wants to migrate its application to AWS and to replace the Node.js application server with AWS Lambda. The company also wants to migrate to Amazon RDS for PostgreSQL and to ensure that the database credentials are securely managed.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Store the database credentials as a parameter in AWS Systems Manager Parameter Store. Configure Parameter Store to automatically rotate the secrets every 30 days. Update the Lambda function to retrieve the credentials from the parameter.
- B . Store the database credentials as a secret in AWS Secrets Manager. Configure Secrets Manager to automatically rotate the credentials every 30 days Update the Lambda function to retrieve the credentials from the secret.
- C . Store the database credentials as an encrypted Lambda environment variable. Write a custom Lambda function to rotate the credentials. Schedule the Lambda function to run every 30 days.
- D . Store the database credentials as a key in AWS Key Management Service (AWS KMS). Configure automatic rotation for the key. Update the Lambda function to retrieve the credentials from the KMS key.
B
Explanation:
AWS Secrets Manager is designed specifically to securely store and manage sensitive information such as database credentials. It integrates seamlessly with AWS services like Lambda and RDS, and it provides automatic credential rotation with minimal operational overhead.
AWS Secrets Manager: By storing the database credentials in Secrets Manager, you ensure that the credentials are securely stored, encrypted, and managed. Secrets Manager provides a built-in mechanism to automatically rotate credentials at regular intervals (e.g., every 30 days), which helps in maintaining security best practices without requiring additional manual intervention.
Lambda Integration: The Lambda function can be easily configured to retrieve the credentials from Secrets Manager using the AWS SDK, ensuring that the credentials are accessed securely at runtime.
Why Not Other Options?
Option A (Parameter Store with Rotation): While Parameter Store can store parameters securely, Secrets Manager is more tailored for secrets management and automatic rotation, offering more features and less operational overhead.
Option C (Encrypted Lambda environment variable): Storing credentials directly in Lambda environment variables, even when encrypted, requires custom code to manage rotation, which increases operational complexity.
Option D (KMS with automatic rotation): KMS is for managing encryption keys, not for storing and rotating secrets like database credentials. This option would require more custom implementation to manage credentials securely.
AWS Reference: AWS Secrets Manager: detailed documentation on how to store, manage, and rotate secrets using AWS Secrets Manager.
Using Secrets Manager with AWS Lambda: guidance on integrating Secrets Manager with Lambda for secure credential management.
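A sketch of how the Lambda function might turn the secret into connection parameters. The field names follow the JSON shape Secrets Manager uses for RDS credentials; in the real function the string would come from `secretsmanager.get_secret_value(SecretId=...)["SecretString"]`:

```python
import json

def connection_params(secret_string: str) -> dict:
    """Parse an RDS-style Secrets Manager payload into connection parameters."""
    secret = json.loads(secret_string)
    return {
        "host": secret["host"],
        "port": secret.get("port", 5432),  # default PostgreSQL port
        "dbname": secret["dbname"],
        "user": secret["username"],
        "password": secret["password"],
    }

# Example payload with placeholder values, in the shape Secrets Manager
# stores for RDS PostgreSQL credentials.
example = json.dumps({
    "username": "appuser", "password": "example",
    "host": "db.example.internal", "port": 5432, "dbname": "orders",
})
print(connection_params(example)["host"])
```

Because Secrets Manager rotates the credentials, the function should fetch the secret at invocation time (optionally with a short-lived cache) rather than baking it into an environment variable.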
A company has a serverless web application that is comprised of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application’s ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.
Which solution will meet these requirements?
- A . Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.
- B . Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.
- C . Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.
- D . Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.
A
Explanation:
Key Requirements:
Handle traffic spikes efficiently and reduce latency caused by cold starts.
Optimize costs during low traffic periods.
Analysis of Options:
Option A:
Provisioned Concurrency: Reduces cold start latency by pre-warming Lambda environments for the required number of concurrent executions.
AWS Application Auto Scaling: Automatically adjusts provisioned concurrency based on demand, ensuring cost optimization by scaling down during low traffic.
Correct Approach: Provides a balance between performance during traffic spikes and cost optimization during idle periods.
Option B:
Using EC2 instances with Auto Scaling introduces unnecessary complexity for a serverless architecture. It requires additional management and does not address the issue of cold starts for Lambda.
Incorrect Approach: Contradicts the serverless design philosophy and increases operational overhead.
Option C:
Setting a fixed concurrency level ensures performance during spikes but does not optimize costs during low traffic. This approach would maintain provisioned instances unnecessarily.
Incorrect Approach: Lacks cost optimization.
Option D:
Using EventBridge Scheduler for periodic invocations may reduce cold starts but does not dynamically scale based on traffic demand. It also leads to unnecessary invocations during idle times.
Incorrect Approach: Suboptimal for high traffic fluctuations and cost control.
AWS Reference: AWS Lambda Provisioned Concurrency
AWS Application Auto Scaling with Lambda
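The combination in option A can be sketched as the two Application Auto Scaling request shapes involved (boto3 `application-autoscaling` client; the function name, alias, capacity bounds, and 70% utilization target are assumptions):

```python
# Sketch of the two Application Auto Scaling requests that manage Lambda
# provisioned concurrency. Function name, alias, and numbers are placeholders.

# 1. Register the function alias as a scalable target
#    (application-autoscaling.register_scalable_target).
register_target = {
    "ServiceNamespace": "lambda",
    "ResourceId": "function:my-function:live",  # function:name:alias
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "MinCapacity": 1,
    "MaxCapacity": 100,
}

# 2. Attach a target-tracking policy (application-autoscaling.put_scaling_policy)
#    so provisioned concurrency follows demand and scales down when idle.
scaling_policy = {
    "PolicyName": "pc-utilization",
    "ServiceNamespace": "lambda",
    "ResourceId": "function:my-function:live",
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 0.7,  # keep ~70% of provisioned concurrency in use
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
}
```

Scaling applies to a published version or alias, not to $LATEST, which is why the resource ID includes the alias.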
A company runs a Windows-based ecommerce application on Amazon EC2 instances. The application has a very high transaction rate. The company requires a durable storage solution that can deliver 200,000 IOPS for each EC2 instance.
Which solution will meet these requirements?
- A . Host the application on EC2 instances that have Provisioned IOPS SSD (io2) Block Express Amazon Elastic Block Store (Amazon EBS) volumes attached.
- B . Install the application on an Amazon EMR cluster. Use Hadoop Distributed File System (HDFS) with General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volumes.
- C . Use Amazon FSx for Lustre as shared storage across the EC2 instances that run the application.
- D . Host the application on EC2 instances that have SSD instance store volumes and General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volumes attached.
A
Explanation:
Amazon EBS io2 Block Express volumes are designed to deliver sub-millisecond latency and up to 256,000 IOPS per volume, with durability and high availability. This makes io2 Block Express the recommended choice for workloads requiring very high and predictable IOPS, such as enterprise databases and high-transaction-rate applications.
Reference Extract:
"EBS io2 Block Express volumes deliver up to 256,000 IOPS and sub-millisecond latency, supporting high-performance, high-durability workloads."
Source: AWS Certified Solutions Architect Official Study Guide, EBS Performance section.
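As a sketch, provisioning such a volume comes down to an `ec2.create_volume` request like the following (the Availability Zone is a placeholder, and the size is chosen to support the requested IOPS-to-size ratio):

```python
# Sketch of the EBS volume request (ec2.create_volume) for an io2 volume
# provisioned at 200,000 IOPS. AZ and size are placeholders; the size must
# be large enough for the requested IOPS under io2's IOPS:GiB ratio.
create_volume_request = {
    "AvailabilityZone": "us-east-1a",
    "VolumeType": "io2",
    "Size": 500,        # GiB
    "Iops": 200_000,    # within io2 Block Express's 256,000 IOPS ceiling
}
```

The volume is then attached to the EC2 instance with `ec2.attach_volume`; the instance type must also support the target IOPS and throughput.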
