Practice Free SAA-C03 Exam Online Questions
A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
- B . Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
- C . Delete all expired and unused snapshots to reduce snapshot costs.
- D . Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.
D
Explanation:
Amazon Data Lifecycle Manager (DLM) automates the creation, retention, and deletion of EBS snapshots. This allows organizations to define policies that ensure snapshots are kept only as long as needed, reducing costs automatically and minimizing manual effort. AWS recommends using DLM for optimizing storage and managing the backup lifecycle with minimal overhead.
Reference: AWS Documentation - Amazon Data Lifecycle Manager
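As an illustration, a snapshot lifecycle policy of this kind could be created with boto3 roughly as follows; the execution role ARN, target tag, schedule, and retention count are assumed values, not part of the question.

```python
import boto3

dlm = boto3.client("dlm")

# Hypothetical execution role; DLM needs permissions to create and delete snapshots.
ROLE_ARN = "arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole"

# Snapshot tagged volumes daily and keep only the 7 most recent snapshots.
response = dlm.create_lifecycle_policy(
    ExecutionRoleArn=ROLE_ARN,
    Description="Daily EBS snapshots, retain 7",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],  # assumed tag
        "Schedules": [
            {
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 7},  # older snapshots are deleted automatically
            }
        ],
    },
)
print("Created policy:", response["PolicyId"])
```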
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL
database engine. The company wants the data in the database to be highly available. The company also needs increased capacity for read workloads.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Create an Amazon DynamoDB database table configured with global tables.
- B . Create an Amazon RDS database with Multi-AZ deployment.
- C . Create an Amazon RDS database with Multi-AZ DB cluster deployment.
- D . Create an Amazon RDS database configured with cross-Region read replicas.
C
Explanation:
Amazon RDS Multi-AZ DB cluster deployment ensures high availability by automatically replicating data across multiple Availability Zones (AZs), and it supports failover in case of a failure in one AZ. This setup also provides increased capacity for read workloads by allowing read scaling with reader instances in different AZs. This solution offers the most operational efficiency with minimal manual intervention.
Option A (DynamoDB): DynamoDB is not suitable for a relational database workload, which requires a PostgreSQL engine.
Option B (RDS with Multi-AZ): While this provides high availability, it doesn’t offer read scaling capabilities.
Option D (Cross-Region Read Replicas): This adds complexity and is not necessary if the requirement is high availability within a single region.
AWS Reference: Amazon RDS Multi-AZ DB Cluster
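As a rough sketch, a Multi-AZ DB cluster could be provisioned with boto3 as shown below; the identifiers, instance class, engine version, and storage figures are all assumptions.

```python
import boto3

rds = boto3.client("rds")

# A Multi-AZ DB cluster creates one writer and two readable standby instances
# across three AZs. All names and sizes below are illustrative assumptions.
rds.create_db_cluster(
    DBClusterIdentifier="app-postgres-cluster",
    Engine="postgres",
    EngineVersion="15.4",                    # assumed supported version
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,           # store the password in Secrets Manager
    DBClusterInstanceClass="db.m6gd.large",  # required for Multi-AZ DB clusters
    AllocatedStorage=100,
    StorageType="io1",
    Iops=3000,
)
```

The cluster's reader endpoint distributes connections across the two readable standbys, which is what provides the extra read capacity that a Multi-AZ DB instance deployment (option B) lacks.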
A company hosts a two-tier website that runs on Amazon EC2 instances. The website has a database that runs on Amazon RDS for MySQL. All users are required to log in to the website to see their own customized pages.
The website typically experiences low traffic. Occasionally, the website experiences sudden increases in traffic and becomes unresponsive. During these increases in traffic, the database experiences a heavy write load. A solutions architect must improve the website’s availability without changing the application code.
What should the solutions architect do to meet these requirements?
- A . Create an Amazon ElastiCache (Redis OSS) cluster. Configure the application to cache common database queries in the ElastiCache cluster.
- B . Create an Auto Scaling group. Configure Amazon CloudWatch alarms to scale the number of EC2 instances based on the percentage of CPU in use during the traffic increases.
- C . Create an Amazon CloudFront distribution that points to the EC2 instances as the origin. Enable caching of dynamic content, and configure a write throttle from the EC2 instances to the RDS database.
- D . Migrate the database to an Amazon Aurora Serverless cluster. Set the maximum Aurora capacity units (ACUs) to a value high enough to respond to the traffic increases. Configure the EC2 instances to connect to the Aurora database.
D
Explanation:
The key constraint is no application code changes, while the observed failure mode during spikes is heavy write load on the database that makes the site unresponsive. Scaling the web tier alone (Option B) may not help if the database is the bottleneck; in fact, adding more EC2 instances can increase concurrent write pressure and worsen the database overload. Caching (Option A) would require modifying the application to use Redis for query caching, which violates the no-code-change requirement.
Option C is not a fit: CloudFront caching is effective for cacheable content, but authenticated, personalized pages are typically not cacheable without application changes, and “write throttle” is not a standard CloudFront feature to protect the database.
Migrating from RDS for MySQL to Amazon Aurora Serverless is a managed approach designed to automatically adjust database capacity to match workload demand, which is well suited to “low baseline traffic with occasional sudden spikes.” By setting an appropriate maximum capacity, Aurora Serverless can scale to handle bursts more gracefully than a fixed-size RDS instance, improving availability during unpredictable demand. This reduces operational overhead because capacity management is largely handled by the service rather than by manual instance resizing or complex scaling scripts.
The final step, configuring EC2 instances to connect to the new Aurora endpoint, is an infrastructure configuration change, not an application code rewrite. The application continues to speak the same MySQL-compatible protocol (Aurora MySQL-compatible), preserving compatibility while improving resilience to spikes.
Therefore, D best meets the requirement to improve availability during sudden traffic increases driven by heavy database writes, with minimal operational effort and no application code changes.
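A minimal boto3 sketch of the migration target is below, using the Aurora Serverless v2 style of configuration; the cluster identifier and the 0.5-64 ACU range are assumptions.

```python
import boto3

rds = boto3.client("rds")

# Aurora MySQL-compatible cluster that scales between assumed ACU bounds.
rds.create_db_cluster(
    DBClusterIdentifier="web-db",
    Engine="aurora-mysql",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,  # store the password in Secrets Manager
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,   # low baseline for quiet periods
        "MaxCapacity": 64.0,  # assumed ceiling high enough for traffic spikes
    },
)

# Serverless v2 clusters still need at least one instance of class db.serverless.
rds.create_db_instance(
    DBInstanceIdentifier="web-db-writer",
    DBClusterIdentifier="web-db",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```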
A security team needs to enforce rotation of all IAM users’ access keys every 90 days. Keys older than 90 days must be automatically deactivated and removed. A solutions architect must create a remediation solution with minimal operational effort.
Which solution meets these requirements?
- A . Create an AWS Config rule to check key age. Configure the rule to run an AWS Batch job to remove the key.
- B . Create an Amazon EventBridge rule to check key age. Configure it to run an AWS Batch job to remove the key.
- C . Create an AWS Config rule to check key age. Define an EventBridge rule that schedules an AWS Lambda function to remove the key.
- D . Create an EventBridge rule to check key age. Define a second EventBridge rule to run an AWS Batch job to remove the key.
C
Explanation:
AWS Config has a built-in managed rule (access-keys-rotated) that evaluates IAM access key age.
Config rules detect non-compliant resources automatically, without building custom logic.
After Config identifies old keys, EventBridge can trigger an AWS Lambda function to disable and delete the keys. Lambda provides a fully managed compute layer requiring no servers or batch environments.
Using AWS Batch (Options A, B, and D) adds unnecessary operational overhead for a simple automation task.
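As a minimal sketch of the remediation function (this version scans all users on each run; a production build might instead act only on the keys that the Config rule flags as non-compliant):

```python
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90
iam = boto3.client("iam")

def handler(event, context):
    """Deactivate and delete IAM access keys older than MAX_AGE_DAYS."""
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
                age = (datetime.now(timezone.utc) - key["CreateDate"]).days
                if age > MAX_AGE_DAYS:
                    # Deactivate first, then remove the key entirely.
                    iam.update_access_key(
                        UserName=name, AccessKeyId=key["AccessKeyId"], Status="Inactive"
                    )
                    iam.delete_access_key(
                        UserName=name, AccessKeyId=key["AccessKeyId"]
                    )
```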
A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
- B . Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
- C . Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
- D . Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data.
B
Explanation:
AWS Glue Crawler: AWS Glue is a fully managed ETL (Extract, Transform, Load) service that makes it easy to prepare and load data for analytics. A Glue crawler can automatically discover new data and schema in Amazon S3, making it easy to keep the data catalog up-to-date.
Crawling the Data:
Set up an AWS Glue crawler to scan the S3 bucket containing the clickstream data.
The crawler will automatically detect the schema and create/update the tables in the AWS Glue Data Catalog.
Amazon Athena:
Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
Once the data catalog is updated by the Glue crawler, use Athena to query the clickstream data directly in S3.
Operational Efficiency: This solution leverages fully managed services, reducing operational overhead. Glue crawlers automate data cataloging, and Athena provides a serverless, pay-per-query model for quick data analysis without the need to set up or manage infrastructure.
Reference: AWS Glue; Amazon Athena
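A rough boto3 sketch of the crawl-then-query flow is below; the bucket paths, role ARN, database name, and the table name the crawler derives from the S3 prefix are all assumptions.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream prefix so the schema lands in the Glue Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # assumed role
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://marketing-clickstream/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the crawler finishes, query the cataloged table directly with Athena.
athena.start_query_execution(
    QueryString=(
        "SELECT page, COUNT(*) AS hits FROM raw "
        "GROUP BY page ORDER BY hits DESC LIMIT 10"
    ),
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://marketing-clickstream/athena-results/"},
)
```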
A company runs an application on Amazon EC2 instances. The application is deployed in private subnets in three Availability Zones of the us-east-1 Region. The instances must be able to connect to the internet to download files. The company wants a design that is highly available across the Region.
Which solution should be implemented to ensure that there are no disruptions to internet connectivity?
- A . Deploy a NAT instance in a private subnet of each Availability Zone.
- B . Deploy a NAT gateway in a public subnet of each Availability Zone.
- C . Deploy a transit gateway in a private subnet of each Availability Zone.
- D . Deploy an internet gateway in a public subnet of each Availability Zone.
B
Explanation:
To allow private subnets to access the internet, deploy NAT gateways in a public subnet in each AZ for high availability. NAT instances are less scalable and less fault-tolerant.
“To create a highly available architecture, create a NAT gateway in each Availability Zone and configure your routing to use it.”
Source: NAT Gateway Overview (AWS documentation)
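For illustration, provisioning one NAT gateway per AZ and pointing each private route table at its local gateway might look roughly like this in boto3 (all subnet and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs: one public subnet and one private route table per AZ.
az_config = [
    {"public_subnet": "subnet-pub-az1", "private_rtb": "rtb-priv-az1"},
    {"public_subnet": "subnet-pub-az2", "private_rtb": "rtb-priv-az2"},
    {"public_subnet": "subnet-pub-az3", "private_rtb": "rtb-priv-az3"},
]

for az in az_config:
    # Each NAT gateway needs its own Elastic IP.
    eip = ec2.allocate_address(Domain="vpc")
    nat_id = ec2.create_nat_gateway(
        SubnetId=az["public_subnet"], AllocationId=eip["AllocationId"]
    )["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Keep traffic zonal: each private subnet uses the NAT gateway in its own AZ.
    ec2.create_route(
        RouteTableId=az["private_rtb"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```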
A company is designing a microservice-based architecture for a new application on AWS. Each microservice will run on its own set of Amazon EC2 instances. Each microservice will need to interact with multiple AWS services such as Amazon S3 and Amazon Simple Queue Service (Amazon SQS).
The company wants to manage permissions for each EC2 instance based on the principle of least privilege.
Which solution will meet this requirement?
- A . Assign an IAM user to each microservice. Use access keys stored within the application code to authenticate AWS service requests.
- B . Create a single IAM role that has permission to access all AWS services. Associate the IAM role with all EC2 instances that run the microservices
- C . Use AWS Organizations to create a separate account for each microservice. Manage permissions at the account level.
- D . Create individual IAM roles based on the specific needs of each microservice. Associate the IAM roles with the appropriate EC2 instances.
D
Explanation:
When designing a microservice architecture where each microservice interacts with different AWS services, it’s essential to follow the principle of least privilege. This means granting each microservice only the permissions it needs to perform its tasks, reducing the risk of unauthorized access or accidental actions.
The recommended approach is to create individual IAM roles with policies that grant each microservice the specific permissions it requires. Then, these roles should be associated with the EC2 instances that run the corresponding microservice. By doing so, each EC2 instance will assume its specific IAM role, and permissions will be automatically managed by AWS.
IAM roles provide temporary credentials via the instance metadata service, eliminating the need to hard-code credentials in your application code, which enhances security.
AWS Reference: IAM Roles for Amazon EC2 explains how EC2 instances can use IAM roles to securely access AWS services without managing long-term credentials.
Best Practices for IAM includes recommendations for implementing the least-privilege principle and using IAM roles effectively.
Why the other options are incorrect:
Option A: IAM users with access keys embedded in application code create long-term credentials that are difficult to rotate and easy to leak.
Option B: A single role with access to all AWS services grants every microservice far more permissions than it needs, violating least privilege.
Option C: A separate account per microservice adds significant management overhead and still does not scope permissions to individual EC2 instances.
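A minimal sketch of one such role follows, assuming a hypothetical "orders" microservice that needs one S3 bucket and one SQS queue; every name and ARN here is illustrative.

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 instances assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Least-privilege permissions scoped to exactly what this microservice uses.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::orders-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["sqs:SendMessage", "sqs:ReceiveMessage", "sqs:DeleteMessage"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:orders-queue",
        },
    ],
}

iam.create_role(
    RoleName="orders-service-role", AssumeRolePolicyDocument=json.dumps(trust)
)
iam.put_role_policy(
    RoleName="orders-service-role",
    PolicyName="orders-least-privilege",
    PolicyDocument=json.dumps(policy),
)

# An instance profile wraps the role so it can be attached to the EC2 instances.
iam.create_instance_profile(InstanceProfileName="orders-service-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="orders-service-profile", RoleName="orders-service-role"
)
```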
A company runs an application that stores and shares photos. Users upload the photos to an Amazon S3 bucket. Every day, users upload approximately 150 photos. The company wants to design a solution that creates a thumbnail of each new photo and stores the thumbnail in a second S3 bucket.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a long-running Amazon EMR cluster. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.
- B . Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a memory-optimized Amazon EC2 instance that is always on. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.
- C . Configure an S3 event notification to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to the second S3 bucket.
- D . Configure S3 Storage Lens to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to a second S3 bucket.
C
Explanation:
The most cost-effective and scalable solution for generating thumbnails when photos are uploaded to an S3 bucket is to use S3 event notifications to trigger an AWS Lambda function. This approach avoids the need for a long-running EC2 instance or EMR cluster, making it highly cost-effective because Lambda only charges for the time it takes to process each event.
S3 Event Notifications: Automatically triggers the Lambda function when a new photo is uploaded to the S3 bucket.
AWS Lambda: A serverless compute service that scales automatically and only charges for execution time, which makes it the most economical choice when dealing with periodic events like photo uploads.
The Lambda function can generate the thumbnail and upload it to a second S3 bucket, fulfilling the requirement efficiently.
Option A and Option B (EMR or EC2 with scheduled scripts): These are less cost-effective as they involve continuously running infrastructure, which incurs unnecessary costs.
Option D (S3 Storage Lens): S3 Storage Lens is a tool for storage analytics and is not designed for event-based photo processing.
AWS Reference: Amazon S3 Event Notifications; AWS Lambda Pricing
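A minimal handler sketch is below; it assumes the function has a Pillow layer available and that the destination bucket name is configured separately (both are assumptions).

```python
import io

import boto3
from PIL import Image  # assumes a Pillow layer is attached to the function

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = "photo-thumbnails-bucket"  # assumed destination bucket

def handler(event, context):
    """Invoked by s3:ObjectCreated:* notifications on the upload bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original photo and shrink it in memory.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(body))
        image.thumbnail((128, 128))  # preserves aspect ratio

        out = io.BytesIO()
        image.save(out, format=image.format or "JPEG")
        s3.put_object(Bucket=THUMBNAIL_BUCKET, Key=key, Body=out.getvalue())
```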
A company runs multiple workloads in separate AWS environments. The company wants to optimize its AWS costs but must maintain the same level of performance for the environments.
The company’s production environment requires resources to be highly available. The other environments do not require highly available resources.
Each environment has the same set of networking components, including the following:
• 1 VPC
• 1 Application Load Balancer
• 4 subnets distributed across 2 Availability Zones (2 public subnets and 2 private subnets)
• 2 NAT gateways (1 in each public subnet)
• 1 internet gateway
Which solution will meet these requirements?
- A . Do not change the production environment workload. For each non-production workload, remove one NAT gateway and update the route tables for private subnets to target the remaining NAT gateway for the destination 0.0.0.0/0.
- B . Reduce the number of Availability Zones that all workloads in all environments use.
- C . Replace every NAT gateway with a t4g.large NAT instance. Update the route tables for each private subnet to target the NAT instance that is in the same Availability Zone for the destination 0.0.0.0/0.
- D . In each environment, create one transit gateway and remove one NAT gateway. Configure routing on the transit gateway to forward traffic for the destination 0.0.0.0/0 to the remaining NAT gateway. Update private subnet route tables to target the transit gateway for the destination 0.0.0.0/0.
A
Explanation:
Maintaining two NAT gateways for production ensures high availability. Reducing to one NAT gateway in non-production environments lowers cost while maintaining necessary connectivity. This approach is recommended by AWS for cost optimization in non-critical environments.
Reference Extract:
"For environments that do not require high availability, you can reduce costs by using a single NAT gateway and updating route tables accordingly."
Source: AWS Certified Solutions Architect - Official Study Guide, Cost Optimization and NAT Gateway section.
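In a non-production environment, the change could be scripted roughly as follows with boto3; the gateway and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs for the non-production environment being trimmed.
REMAINING_NAT_GW = "nat-0aaaaaaaaaaaaaaaa"
REDUNDANT_NAT_GW = "nat-0bbbbbbbbbbbbbbbb"
PRIVATE_ROUTE_TABLES = ["rtb-priv-az1", "rtb-priv-az2"]

# Point every private subnet's default route at the surviving NAT gateway.
for rtb in PRIVATE_ROUTE_TABLES:
    ec2.replace_route(
        RouteTableId=rtb,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=REMAINING_NAT_GW,
    )

# Then delete the redundant gateway; its Elastic IP can be released afterward.
ec2.delete_nat_gateway(NatGatewayId=REDUNDANT_NAT_GW)
```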
How can a company detect and notify security teams about PII in S3 buckets?
- A . Use Amazon Macie. Create an EventBridge rule for SensitiveData findings and send an SNS notification.
- B . Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SNS notification.
- C . Use Amazon Macie. Create an EventBridge rule for SensitiveData:S3Object/Personal findings and send an SQS notification.
- D . Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SQS notification.
A
Explanation:
Amazon Macie is purpose-built for detecting PII in S3.
Option A uses EventBridge to filter SensitiveData findings and notify via SNS, meeting the requirements.
Options B and D involve GuardDuty, which is not designed for PII detection.
Option C uses SQS, which is less suitable for immediate notifications.
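A sketch of the EventBridge wiring with boto3 is below; the rule name and topic ARN are assumptions, and the pattern relies on Macie publishing findings with the "Macie Finding" detail type.

```python
import json

import boto3

events = boto3.client("events")

# Match Macie sensitive-data findings; prefix matching covers all
# SensitiveData finding types, including SensitiveData:S3Object/Personal.
pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},
}

events.put_rule(Name="macie-pii-findings", EventPattern=json.dumps(pattern))

# The SNS topic's access policy must allow events.amazonaws.com to publish.
events.put_targets(
    Rule="macie-pii-findings",
    Targets=[{
        "Id": "security-team-sns",
        "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts",
    }],
)
```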
