Practice Free SAA-C03 Exam Online Questions
A company is hosting multiple websites for several lines of business under its registered parent domain. Users accessing these websites will be routed to appropriate backend Amazon EC2 instances
based on the subdomain. The websites host static webpages, images, and server-side scripts like PHP and JavaScript.
Some of the websites experience peak access during the first two hours of business with constant usage throughout the rest of the day. A solutions architect needs to design a solution that will automatically adjust capacity to these traffic patterns while keeping costs low.
Which combination of AWS services or features will meet these requirements? (Select TWO.)
- A . AWS Batch
- B . Network Load Balancer
- C . Application Load Balancer
- D . Amazon EC2 Auto Scaling
- E . Amazon S3 website hosting
C, D
Explanation:
An Application Load Balancer supports host-based and path-based routing; host-based rules make it ideal for routing requests based on subdomains. EC2 Auto Scaling ensures that the number of instances adjusts dynamically based on traffic, which helps manage cost and performance during predictable peak hours.
Reference: AWS Documentation - ALB with Auto Scaling for Web Applications
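As a sketch of the host-based routing half of this answer, the dict below shows the request parameters one might pass to boto3's `elbv2.create_rule` to forward one subdomain to its own target group. The ARNs and hostname are placeholders, not real resources.

```python
# Sketch only (no live API call): build an ALB host-header rule that
# forwards traffic for one subdomain to a dedicated target group.
def host_rule(listener_arn, hostname, target_group_arn, priority):
    """Parameters for elbv2.create_rule: match on Host header, forward."""
    return {
        "ListenerArn": listener_arn,
        "Priority": priority,
        "Conditions": [
            {"Field": "host-header",
             "HostHeaderConfig": {"Values": [hostname]}}
        ],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }

# Placeholder ARNs for illustration.
rule = host_rule(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo/abc",
    "sales.example.com",
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/sales/def",
    10,
)
```

One rule per subdomain (sales, hr, support, and so on) is enough; Auto Scaling then sizes each target group independently.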
A consulting company provides professional services to customers worldwide. The company provides solutions and tools for customers to expedite gathering and analyzing data on AWS. The company needs to centrally manage and deploy a common set of solutions and tools for customers to use for self-service purposes.
Which solution will meet these requirements?
- A . Create AWS Cloud Formation templates for the customers.
- B . Create AWS Service Catalog products for the customers.
- C . Create AWS Systems Manager templates for the customers.
- D . Create AWS Config items for the customers.
B
Explanation:
AWS Service Catalog allows organizations to centrally manage commonly deployed IT services and offers self-service deployment capabilities to customers. By creating Service Catalog products, the consulting company can package its solutions and tools for easy reuse by customers while maintaining central control over configuration and access. This provides a standardized and automated solution with the least operational overhead for managing and deploying solutions across different customers.
Option A (CloudFormation): CloudFormation templates are useful but don’t provide the same level of management and user-friendly self-service capabilities as Service Catalog.
Option C (Systems Manager): Systems Manager is more focused on managing infrastructure and doesn’t offer the same self-service capabilities.
Option D (AWS Config): AWS Config is used for tracking resource configurations, not for deploying solutions.
Reference: AWS Service Catalog
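To make the product idea concrete, here is a hedged sketch of the parameters one might pass to boto3's `servicecatalog.create_product` to package a CloudFormation template as a self-service product. The product name, owner, and template URL are illustrative assumptions.

```python
# Sketch only: request shape for servicecatalog.create_product.
# The toolkit name and template URL are hypothetical placeholders.
product_params = {
    "Name": "data-gathering-toolkit",
    "Owner": "Consulting Co.",
    "ProductType": "CLOUD_FORMATION_TEMPLATE",
    "ProvisioningArtifactParameters": {
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        # Version 1 of the product provisions from this template.
        "Info": {"LoadTemplateFromURL":
                 "https://example-bucket.s3.amazonaws.com/toolkit.yaml"},
    },
}
```

Customers then launch the product from their Service Catalog portfolio without needing direct access to the underlying template.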
A healthcare company stores personally identifiable information (PII) data in an Amazon RDS for Oracle database. The company must encrypt the PII data at rest. The company must use dedicated hardware modules to store and manage the encryption keys.
Which solution will meet these requirements?
- A . Use AWS Key Management Service (AWS KMS) to configure encryption for the RDS database. Store and manage keys in AWS CloudHSM.
- B . Use AWS CloudHSM backed AWS KMS keys to configure transparent encryption for the RDS database.
- C . Use Amazon EC2 instance store encryption to encrypt database volumes by using AWS CloudHSM backed keys.
- D . Configure RDS snapshots and use server-side encryption with Amazon S3 managed keys (SSE-S3). Store the keys in AWS CloudHSM.
B
Explanation:
Amazon RDS supports encryption at rest by using AWS KMS keys backed by AWS CloudHSM. This allows use of dedicated FIPS 140-2 Level 3 validated hardware modules to manage encryption keys, meeting compliance for sensitive data such as PII.
From AWS Documentation:
“You can use AWS KMS with keys that are backed by AWS CloudHSM to control the encryption of RDS databases. This provides dedicated HSM-backed key storage and management.”
(Source: Amazon RDS User Guide - Encrypting Amazon RDS Resources)
Why B is correct:
Meets the requirement for dedicated HSM hardware.
Fully integrates with RDS for transparent encryption at rest.
Satisfies compliance standards for healthcare and regulated data.
Why others are incorrect:
A: Keys in CloudHSM directly are not used by RDS; they must be managed through KMS integration.
C: EC2 instance stores are ephemeral, not suitable for RDS databases.
D: SSE-S3 applies to S3 objects, not databases.
Reference: Amazon RDS User Guide - "Encryption at Rest with AWS KMS and CloudHSM"
AWS CloudHSM User Guide
AWS Well-Architected Framework - Security Pillar
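The two request shapes involved in option B can be sketched as plain dicts: first a CloudHSM-backed KMS custom key store, then an RDS instance encrypted with a key created in that store. All identifiers, the certificate, and the passwords below are placeholders, not working values.

```python
# Sketch only: kms.create_custom_key_store request shape.
# Cluster ID, certificate, and password are hypothetical placeholders.
key_store_params = {
    "CustomKeyStoreName": "pii-key-store",
    "CloudHsmClusterId": "cluster-EXAMPLE",
    "TrustAnchorCertificate": "<customerCA.crt contents>",
    "KeyStorePassword": "<kmsuser password>",
}

# Sketch only: rds.create_db_instance request shape. The KmsKeyId would
# reference a KMS key created inside the custom key store above.
rds_params = {
    "DBInstanceIdentifier": "pii-oracle-db",
    "Engine": "oracle-ee",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "StorageEncrypted": True,  # encryption at rest for data, logs, snapshots
    "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
    "MasterUsername": "admin",
    "MasterUserPassword": "<secret>",
}
```

The key material never leaves the CloudHSM cluster; RDS only ever sees the KMS key reference.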
A company uses Amazon Redshift to store structured data and Amazon S3 to store unstructured data. The company wants to analyze the stored data and create business intelligence reports. The company needs a data visualization solution that is compatible with Amazon Redshift and Amazon S3.
Which solution will meet these requirements?
- A . Use Amazon Redshift query editor v2 to analyze data stored in Amazon Redshift. Use Amazon Athena to analyze data stored in Amazon S3. Use Amazon QuickSight to access Amazon Redshift and Athena, visualize the data analyses, and create business intelligence reports.
- B . Use Amazon Redshift Serverless to analyze data stored in Amazon Redshift. Use Amazon S3 Object Lambda to analyze data stored in Amazon S3. Use Amazon Managed Grafana to access Amazon Redshift and Object Lambda, visualize the data analyses, and create business intelligence reports.
- C . Use Amazon Redshift Spectrum to analyze data stored in Amazon Redshift. Use Amazon Athena to analyze data stored in Amazon S3. Use Amazon QuickSight to access Amazon Redshift and Athena, visualize the data analyses, and create business intelligence reports.
- D . Use Amazon OpenSearch Service to analyze data stored in Amazon Redshift and Amazon S3. Use Amazon Managed Grafana to access OpenSearch Service, visualize the data analyses, and create business intelligence reports.
C
Explanation:
This solution leverages:
Amazon Redshift Spectrum to query S3 data directly from Redshift.
Amazon Athena for ad-hoc analysis of S3 data.
Amazon QuickSight for unified visualization from multiple data sources.
“Redshift Spectrum enables you to run queries against exabytes of data in Amazon S3 without having to load or transform the data.”
“QuickSight supports both Amazon Redshift and Amazon Athena as data sources.”
― Redshift Spectrum
― Amazon QuickSight Supported Data Sources
This architecture allows scalable querying and visualization with minimum ETL overhead, ideal for BI dashboards.
Incorrect Options:
A: The query editor is not a BI tool.
B, D: Grafana is better for time-series data, not structured analytics or BI reports.
Reference: Redshift Spectrum
Amazon QuickSight Integration
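The Spectrum piece of this architecture hinges on one DDL statement run inside Redshift, sketched below. The schema name, Glue database, and IAM role ARN are illustrative placeholders.

```python
# Sketch only: the Redshift Spectrum DDL that exposes S3 data to Redshift
# through the AWS Glue Data Catalog, held here as a string for reference.
external_schema_sql = """
CREATE EXTERNAL SCHEMA spectrum_demo
FROM DATA CATALOG
DATABASE 'research_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""
# After this runs, S3-backed tables under spectrum_demo can be joined with
# local Redshift tables in one query, and QuickSight can visualize the result.
```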
A company runs an application on several Amazon EC2 instances. Multiple Amazon Elastic Block Store (Amazon EBS) volumes are attached to each EC2 instance. The company needs to back up the configurations and the data of the EC2 instances every night. The application must be recoverable in a secondary AWS Region.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Configure an AWS Lambda function to take nightly snapshots of the application’s EBS volumes and to copy the snapshots to a secondary Region.
- B . Create a backup plan in AWS Backup to take nightly backups. Copy the backups to a secondary Region. Add the EC2 instances to a resource assignment as part of the backup plan.
- C . Create a backup plan in AWS Backup to take nightly backups. Copy the backups to a secondary Region. Add the EBS volumes to a resource assignment as part of the backup plan.
- D . Configure an AWS Lambda function to take nightly snapshots of the application’s EBS volumes and to copy the snapshots to a secondary Availability Zone.
B
Explanation:
AWS Backup is a fully managed backup service that can create backup plans for EC2 instances, including both instance configurations and attached EBS volumes, with scheduled and cross-Region copy capabilities. By adding the EC2 instances to the resource assignment in the backup plan, AWS Backup automatically backs up all configurations and attached EBS volumes, and can copy backups to a secondary Region for disaster recovery, providing the highest operational efficiency with the least manual effort.
AWS Documentation Extract:
“AWS Backup provides fully managed backup for EC2 instances and attached EBS volumes, with scheduling, retention, and cross-Region copy built in. By adding the EC2 instance as a resource, the backup includes both configuration and attached volumes.”
(Source: AWS Backup documentation)
A, D: Custom Lambda scripts increase operational overhead and are not as integrated or robust as AWS Backup.
C: Assigning only EBS volumes does not include the EC2 instance configuration, which is needed for full recovery.
Reference: AWS Certified Solutions Architect - Official Study Guide, Disaster Recovery and Backup.
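The backup plan described above can be sketched as the two request bodies one might pass to boto3's `backup.create_backup_plan` and `backup.create_backup_selection`. The vault names, ARNs, and instance ID are placeholders.

```python
# Sketch only: nightly backup rule with a cross-Region copy action.
backup_plan = {
    "BackupPlanName": "nightly-ec2",
    "Rules": [{
        "RuleName": "nightly",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 3 * * ? *)",  # every night, 03:00 UTC
        "CopyActions": [{
            # Copy each recovery point to a vault in the secondary Region.
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:123456789012:backup-vault:Default",
        }],
    }],
}

# Sketch only: resource assignment. Selecting the EC2 instances (not just
# the volumes) is what makes AWS Backup capture the instance configuration
# along with every attached EBS volume.
selection = {
    "SelectionName": "app-instances",
    "IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole",
    "Resources": ["arn:aws:ec2:us-east-1:123456789012:instance/i-EXAMPLE"],
}
```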
A company is deploying an application in three AWS Regions using an Application Load Balancer.
Amazon Route 53 will be used to distribute traffic between these Regions.
Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
- A . Create an A record with a latency policy.
- B . Create an A record with a geolocation policy.
- C . Create a CNAME record with a failover policy.
- D . Create a CNAME record with a geoproximity policy.
A
Explanation:
Latency-based routing in Amazon Route 53 is designed to route users to the Region that provides the lowest network latency, based on Amazon’s measurements of latency between AWS Regions and users’ networks. For applications deployed in multiple Regions, this provides the highest performance experience for global users.
Therefore, creating an A record with a latency routing policy is the correct choice.
Geolocation (Option B) routes based on user location, which may not always correspond to the lowest latency.
Failover (Option C) is for active-passive architectures, not performance optimization.
Geoproximity (Option D) is more complex and focused on directing traffic based on geographic bias rather than measured latency.
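A latency-routed setup needs one record set per Region, each sharing the same name but carrying its own `SetIdentifier` and `Region`. The sketch below builds the change entries one might pass to boto3's `route53.change_resource_record_sets`; the domain, hosted zone ID, and ALB DNS names are placeholders.

```python
# Sketch only: one latency-routed alias A record per Region.
def latency_record(region, alb_dns, alb_zone_id):
    """One UPSERT change entry for a latency routing policy record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": f"app-{region}",
            "Region": region,  # this field selects the latency routing policy
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

changes = [latency_record(r, f"alb-{r}.elb.amazonaws.com", "Z0000EXAMPLE")
           for r in ("us-east-1", "eu-west-1", "ap-southeast-1")]
```

Route 53 answers each query with the record whose Region has the lowest measured latency to the resolver.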
A multinational company operates in multiple AWS Regions. The company must ensure that its developers and administrators have secure, role-based access to AWS resources.
The roles must be specific to each user’s geographic location and job responsibilities.
The company wants to implement a solution to ensure that each team can access only resources within the team’s Region. The company wants to use its existing directory service to manage user
access. The existing directory service organizes users into roles based on location. The system must be capable of integrating seamlessly with multi-factor authentication (MFA).
Which solution will meet these requirements?
- A . Use AWS Security Token Service (AWS STS) to generate temporary access tokens. Integrate STS with the directory service. Assign Region-specific roles.
- B . Configure AWS IAM Identity Center with federated access. Integrate IAM Identity Center with the directory service to set up Region-specific IAM roles.
- C . Create IAM managed policies that restrict access by location. Apply policies based on group membership in the directory.
- D . Use custom Lambda functions to dynamically assign IAM policies based on login location and job function.
B
Explanation:
IAM Identity Center (formerly AWS SSO) is designed for:
Federated access from external directories (e.g., Active Directory, Okta)
Centralized permission management
Support for MFA
Granular control via Attribute-based access control (ABAC)
“IAM Identity Center allows you to manage SSO access to AWS accounts and business applications centrally. You can assign users and groups permissions based on directory attributes such as Region and job role.”
― IAM Identity Center Docs
This option ensures:
Federated, centralized access
Region-specific permissions
MFA and role mapping via the existing directory service
Reference: IAM Identity Center (SSO) Overview
Set Up Attribute-Based Access Control
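The ABAC idea behind this answer can be sketched as a permissions policy in which access is allowed only when the requested Region matches a `Region` attribute passed through from the directory. The action scope and attribute name below are illustrative assumptions, not the company's actual policy.

```python
import json

# Sketch only: an ABAC-style policy for an Identity Center permission set.
# The user's directory "Region" attribute arrives as a principal tag, and
# the condition restricts API calls to that one Region.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:*",   # illustrative scope
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "aws:RequestedRegion": "${aws:PrincipalTag/Region}"
            }
        },
    }],
}
policy_json = json.dumps(abac_policy)
```

One policy then serves every team; the directory attribute, not a per-team policy, decides which Region each user can reach.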
A company runs a production application on a fleet of Amazon EC2 instances. The application reads messages from an Amazon Simple Queue Service (Amazon SQS) queue and processes the messages in parallel. The message volume is unpredictable and highly variable.
The company must ensure that the application continually processes messages without any downtime.
Which solution will meet these requirements MOST cost-effectively?
- A . Use only Spot Instances to handle the maximum capacity required.
- B . Use only Reserved Instances to handle the maximum capacity required.
- C . Use Reserved Instances to handle the baseline capacity. Use Spot Instances to provide additional capacity when required.
- D . Use Reserved Instances in an EC2 Auto Scaling group to handle the minimum capacity. Configure an auto scaling policy that is based on the SQS queue backlog.
C
Explanation:
AWS guidance is to cover steady baseline with commitments (Reserved Instances or Savings Plans) and use EC2 Spot Instances for burst capacity to minimize cost. Spot provides up to 90% discounts and is well-suited to fault-tolerant, queue-based workloads; interruptions are handled by replacing capacity automatically while messages remain durably in SQS. Using only Spot (A) risks capacity gaps; only RIs sized for peak (B) wastes cost during low demand.
Option D scales on backlog but uses On-Demand for bursts, which is more expensive than Spot. With C, baseline capacity (RIs) keeps processing continuously (no downtime), and Spot adds cost-efficient throughput during spikes, aligning with Well-Architected cost and reliability patterns for queue workers.
Reference: EC2 Spot Best Practices ― burst with Spot for interruptible/queued workloads; SQS ― durable buffering; Well-Architected Cost Optimization ― cover steady state with commitments, scale bursts with discounted capacity.
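One common way to express "RI-covered baseline plus Spot for bursts" in practice is an Auto Scaling group mixed-instances policy, sketched below as the request shape for boto3's `autoscaling.create_auto_scaling_group`. The group name, launch template, and sizes are placeholders; the Reserved Instances themselves are a billing construct that discounts the On-Demand base.

```python
# Sketch only: On-Demand base capacity (covered by RI billing) plus Spot
# for everything above the base.
asg_params = {
    "AutoScalingGroupName": "sqs-workers",
    "MinSize": 2,
    "MaxSize": 20,
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {"LaunchTemplateName": "worker-lt"}
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                 # steady baseline
            "OnDemandPercentageAboveBaseCapacity": 0,  # all burst capacity is Spot
            "SpotAllocationStrategy": "price-capacity-optimized",
        },
    },
}
```

Because SQS buffers messages durably, a Spot interruption only delays processing until replacement capacity launches; no messages are lost.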
A company is building a new application that uses multiple serverless architecture components. The application architecture includes an Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.
The company needs a service to send messages that the REST API receives to multiple target Lambda functions for processing. The service must filter messages so each target Lambda function receives only the messages the function needs.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Send the requests from the REST API to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe multiple Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target Lambda functions to poll the SQS queues.
- B . Send the requests from the REST API to a set of Amazon EC2 instances that are configured to process messages. Configure the instances to filter messages and to invoke the target Lambda functions.
- C . Send the requests from the REST API to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.
- D . Send the requests from the REST API to multiple Amazon Simple Queue Service (Amazon SQS) queues. Configure the target Lambda functions to poll the SQS queues.
A
Explanation:
The SNS fan-out pattern delivers each message that the REST API receives to multiple SQS queues, and SNS subscription filter policies ensure that each queue, and therefore each target Lambda function, receives only the messages it needs. Because every component is fully managed, this carries the least operational overhead. Sending directly to multiple SQS queues (D) forces the API layer to duplicate and filter messages itself, EC2 instances (B) add server management, and Amazon MSK (C) requires cluster administration.
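The SNS fan-out with message filtering (option A) can be sketched as the subscription parameters one might pass to boto3's `sns.subscribe` for one of the queues. The topic and queue ARNs and the `event_type` attribute are illustrative placeholders.

```python
import json

# Sketch only: subscribe one SQS queue to the SNS topic with a filter
# policy, so its Lambda consumer sees only the messages it needs.
subscribe_params = {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:requests",
    "Protocol": "sqs",
    "Endpoint": "arn:aws:sqs:us-east-1:123456789012:orders-queue",
    "Attributes": {
        # Only messages whose event_type attribute is "order" reach this queue.
        "FilterPolicy": json.dumps({"event_type": ["order"]}),
        # Deliver the raw message body rather than the SNS envelope.
        "RawMessageDelivery": "true",
    },
}
```

Repeating this per queue with a different filter policy gives each Lambda function its own pre-filtered stream, with no filtering code to maintain.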
A company recently migrated a large amount of research data to an Amazon S3 bucket. The company needs an automated solution to identify sensitive data in the bucket. A security team also needs to monitor access patterns for the data 24 hours a day, 7 days a week to identify suspicious activities or evidence of tampering with security controls.
Which solution will meet these requirements?
- A . Set up AWS CloudTrail reporting, and grant the security team read-only access to the CloudTrail reports. Set up an Amazon S3 Inventory report to identify sensitive data. Review the findings with the security team.
- B . Enable Amazon Macie and Amazon GuardDuty on the account. Grant the security team access to Macie and GuardDuty. Review the findings with the security team.
- C . Set up an Amazon S3 Inventory report. Use Amazon Athena and Amazon QuickSight to identify sensitive data. Create a dashboard for the security team to review findings.
- D . Use AWS Identity and Access Management (IAM) Access Advisor to monitor for suspicious activity and tampering. Create a dashboard for the security team. Set up an Amazon S3 Inventory report to identify sensitive data. Review the findings with the security team.
B
Explanation:
To automatically identify sensitive data in Amazon S3 and monitor access patterns for suspicious activities:
Amazon Macie uses machine learning and pattern matching to discover and protect sensitive data in S3. It provides visibility into data security risks and enables automated protection against those risks.
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect AWS accounts and workloads. It analyzes events from AWS CloudTrail, VPC Flow Logs, and DNS logs.
By enabling both services, the company can automate the discovery of sensitive data and continuously monitor access patterns for potential security threats.
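The Macie half of this answer can be sketched as a scheduled sensitive-data discovery job over the research bucket, expressed as the request shape for boto3's `macie2.create_classification_job`. The account ID, bucket name, and job name are placeholders.

```python
# Sketch only: a daily Macie classification job scanning one S3 bucket
# for sensitive data such as PII.
macie_job = {
    "jobType": "SCHEDULED",
    "name": "pii-scan-research-bucket",
    "s3JobDefinition": {
        "bucketDefinitions": [{
            "accountId": "123456789012",
            "buckets": ["research-data-bucket"],
        }]
    },
    "scheduleFrequency": {"dailySchedule": {}},  # run once per day
}
```

GuardDuty needs no comparable job definition: once its detector is enabled, it continuously analyzes CloudTrail, VPC Flow Logs, and DNS logs on its own.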
