Practice Free SAA-C03 Exam Online Questions
An ecommerce company runs a multi-tier application on AWS. The frontend and backend tiers both run on Amazon EC2 instances. The database tier runs on an Amazon RDS for MySQL DB instance. The backend tier communicates with the RDS DB instance.
The application makes frequent calls to return identical datasets from the database. The frequent calls on the database cause performance slowdowns. A solutions architect must improve the performance of the application backend.
Which solution will meet this requirement?
- A . Configure an Amazon Simple Notification Service (Amazon SNS) topic between the EC2 instances and the RDS DB instance.
- B . Configure an Amazon ElastiCache (Redis OSS) cache. Configure the backend EC2 instances to read from the cache.
- C . Configure an Amazon DynamoDB Accelerator (DAX) cluster. Configure the backend EC2 instances to read from the cluster.
- D . Configure Amazon Data Firehose to stream the calls to the database.
B
Explanation:
Caching frequently accessed, identical datasets is a well-established way to improve backend application performance by reducing load on the database. Amazon ElastiCache with Redis (open source) offers a fast, in-memory data store to cache query results, reducing latency and database requests.
Option B directly addresses the problem by offloading repeated read requests from the database to the cache.
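The cache-aside pattern that ElastiCache enables can be sketched in plain Python. This is an illustration only: a dict stands in for Redis and a function stands in for the RDS query; the class and names are hypothetical, not an AWS API.

```python
import time

class CacheAsideReader:
    """Illustrative cache-aside pattern: check the cache first, fall back
    to the database on a miss, then populate the cache with a TTL."""

    def __init__(self, db_query, ttl_seconds=300):
        self._db_query = db_query  # stand-in for the RDS MySQL call
        self._cache = {}           # stand-in for Redis: key -> (value, expiry)
        self._ttl = ttl_seconds
        self.db_hits = 0           # counts how often the database is actually reached

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]        # cache hit: the database is not touched
        value = self._db_query(key)  # cache miss: query the database once
        self.db_hits += 1
        self._cache[key] = (value, time.time() + self._ttl)
        return value

def fake_db(key):
    return f"row-for-{key}"

reader = CacheAsideReader(fake_db)
print(reader.get("product:42"))  # miss: goes to the "database"
print(reader.get("product:42"))  # hit: served from the cache
print(reader.db_hits)            # -> 1: repeated identical reads hit the DB only once
```

The second identical read never reaches the database, which is exactly the load reduction the question is after.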
Option A (SNS) is a messaging service and is unrelated to caching or improving database performance.
Option C (DAX) accelerates DynamoDB but the backend uses RDS MySQL, so DAX is inapplicable.
Option D (Data Firehose) is a data streaming service and does not optimize database read performance.
Reference: Caching Best Practices (https://aws.amazon.com/caching/)
Amazon ElastiCache for Redis (https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html)
AWS Well-Architected Framework, Performance Efficiency Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
A company tracks customer satisfaction by using surveys that the company hosts on its website. The surveys sometimes reach thousands of customers every hour. Survey results are currently sent in email messages to the company so company employees can manually review results and assess customer sentiment.
The company wants to automate the customer survey process. Survey results must be available for the previous 12 months.
Which solution will meet these requirements in the MOST scalable way?
- A . Send the survey results data to an Amazon API Gateway endpoint that is connected to an Amazon Simple Queue Service (Amazon SQS) queue. Create an AWS Lambda function to poll the SQS queue, call Amazon Comprehend for sentiment analysis, and save the results to an Amazon DynamoDB table. Set the TTL for all records to 365 days in the future.
- B . Send the survey results data to an API that is running on an Amazon EC2 instance. Configure the API to store the survey results as a new record in an Amazon DynamoDB table, call Amazon Comprehend for sentiment analysis, and save the results in a second DynamoDB table. Set the TTL for all records to 365 days in the future.
- C . Write the survey results data to an Amazon S3 bucket. Use S3 Event Notifications to invoke an AWS Lambda function to read the data and call Amazon Rekognition for sentiment analysis. Store the sentiment analysis results in a second S3 bucket. Use S3 Lifecycle policies on each bucket to expire objects after 365 days.
- D . Send the survey results data to an Amazon API Gateway endpoint that is connected to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the SQS queue to invoke an AWS Lambda function that calls Amazon Lex for sentiment analysis and saves the results to an Amazon DynamoDB table. Set the TTL for all records to 365 days in the future.
A
Explanation:
This solution is the most scalable and efficient way to handle large volumes of survey data while automating sentiment analysis:
API Gateway and SQS: The survey results are sent to API Gateway, which forwards the data to an SQS queue. SQS can handle large volumes of messages and ensures that messages are not lost.
AWS Lambda: Lambda is triggered by polling the SQS queue, where it processes the survey data.
Amazon Comprehend: Comprehend is used for sentiment analysis, providing insights into customer satisfaction.
DynamoDB with TTL: Results are stored in DynamoDB with a Time to Live (TTL) attribute set to expire after 365 days, automatically removing old data and reducing storage costs.
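DynamoDB TTL expects an attribute holding an expiry time in epoch seconds. A minimal sketch of computing a "365 days from now" value (the item shape and attribute names here are hypothetical, not from the question):

```python
import time

SECONDS_PER_DAY = 86_400

def ttl_epoch(days_from_now, now=None):
    """Return the epoch-seconds value DynamoDB's TTL feature expects.
    Items whose TTL attribute is in the past become eligible for deletion."""
    now = time.time() if now is None else now
    return int(now + days_from_now * SECONDS_PER_DAY)

# Hypothetical survey item; attribute names are illustrative only.
item = {
    "survey_id": "s-123",
    "sentiment": "POSITIVE",
    "expires_at": ttl_epoch(365),  # TTL attribute, 365 days in the future
}

# Deterministic check with a fixed clock:
print(ttl_epoch(365, now=1_700_000_000))  # -> 1731536000
```

Note that TTL deletion is a background process, so items may persist briefly past expiry; that is acceptable for this retention requirement.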
Option B (EC2 API): Running an API on EC2 requires more maintenance and scalability management compared to API Gateway.
Option C (S3 and Rekognition): Amazon Rekognition is for image and video analysis, not sentiment analysis.
Option D (Amazon Lex): Amazon Lex is used for building conversational interfaces, not sentiment analysis.
Reference: Amazon Comprehend for Sentiment Analysis
Amazon SQS
DynamoDB TTL
A company has a transaction-processing application that is backed by an Amazon RDS MySQL database. When the load on the application increases, a large number of database connections are opened and closed frequently, which causes latency for the database transactions.
A solutions architect determines that the root cause of the latency is poor connection handling by the application. The solutions architect cannot modify the application code. The solutions architect needs to manage database connections to improve the database performance during periods of high load.
Which solution will meet these requirements?
- A . Upgrade the database instance to a larger instance type to handle a large number of database connections.
- B . Configure Amazon RDS storage autoscaling to dynamically increase the provisioned IOPS.
- C . Use Amazon RDS Proxy to pool and share database connections.
- D . Convert the database instance to a Multi-AZ deployment.
C
Explanation:
Amazon RDS Proxy is a fully managed database proxy for RDS that makes applications more scalable, more resilient to database failures, and more secure. RDS Proxy pools and shares database connections, allowing applications to open and close connections as needed without overwhelming the database. This is the recommended solution when the application cannot be modified to use connection pooling itself.
AWS Documentation Extract:
"Amazon RDS Proxy helps manage database connections to improve application scalability and performance. It pools connections and shares them among application clients, which can mitigate issues caused by opening and closing many database connections."
(Source: Amazon RDS Proxy documentation)
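What RDS Proxy does on the application's behalf can be pictured with a toy pool: a small, bounded set of real connections is shared among many short-lived requests. This is a conceptual sketch (stand-in objects, not a MySQL client or the RDS Proxy API).

```python
import queue

class MiniPool:
    """Toy connection pool: many client requests reuse a few long-lived
    database connections instead of each opening and closing its own."""

    def __init__(self, connect, size=2):
        self._connect = connect
        self.opened = 0                 # real connections ever opened
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(self._open())

    def _open(self):
        self.opened += 1
        return self._connect()          # stand-in for a MySQL connection

    def acquire(self):
        return self._idle.get()         # blocks until a connection is free

    def release(self, conn):
        self._idle.put(conn)            # returned to the pool, not closed

pool = MiniPool(connect=lambda: object(), size=2)
for _ in range(100):                    # 100 "requests" share 2 connections
    c = pool.acquire()
    pool.release(c)
print(pool.opened)                      # -> 2
```

RDS Proxy provides this pooling transparently at the network level, which is why it works even when the application code cannot be changed.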
A: Upgrading the instance does not solve connection inefficiency and can be cost-ineffective.
B: Increasing IOPS only helps if storage is a bottleneck, not if connections are the issue.
D: Multi-AZ improves availability, not connection management.
Reference: AWS Certified Solutions Architect Official Study Guide, RDS Performance and Proxy.
A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to enforce column-level authorization so that the company’s marketing team can access only a subset of columns in the database.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.
- B . Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce column-level access control. Use Amazon S3 as the data source in QuickSight.
- C . Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.
- D . Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
D
Explanation:
AWS Lake Formation provides centralized data access control, including fine-grained (column-level) permissions for data stored in S3 and accessed through services like Amazon Athena.
Using a Lake Formation blueprint to ingest data from Aurora MySQL into the data lake keeps ingestion and governance integrated. When QuickSight uses Athena as the data source, Athena enforces Lake Formation’s column-level permissions automatically. This allows the marketing team to see only the authorized subset of columns without custom access-control logic.
Options A, B, and C rely on manually limiting columns at ingestion time or using IAM or S3 bucket policies, which do not provide true column-level authorization for SQL queries and require significantly more manual work and maintenance.
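The effect of a column-level grant can be illustrated in plain Python: the principal's queries return only the permitted columns. This is a conceptual stand-in, not the Lake Formation API; the row shape and the marketing grant are hypothetical.

```python
def filter_columns(rows, allowed):
    """Return rows containing only the columns the principal may see,
    mimicking a Lake Formation column-level grant enforced by Athena."""
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

rows = [{"customer_id": 1, "email": "a@example.com", "segment": "loyal"}]
marketing_allowed = {"customer_id", "segment"}  # hypothetical grant for marketing

print(filter_columns(rows, marketing_allowed))
# -> [{'customer_id': 1, 'segment': 'loyal'}]
```

The key operational point is that this filtering happens inside the query engine, so no per-user ETL jobs or bucket policies need to be maintained.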
A financial company is migrating banking applications to AWS accounts managed through AWS Organizations. The applications store sensitive customer data on Amazon EBS volumes, and the company takes regular snapshots for backups.
The company must implement controls across all accounts to prevent sharing EBS snapshots publicly, with the least operational overhead.
Which solution will meet these requirements?
- A . Enable AWS Config rules for each OU to monitor EBS snapshot permissions.
- B . Enable block public access for EBS snapshots at the organization level.
- C . Create an IAM policy in the root account that prevents users from modifying snapshot permissions.
- D . Use AWS CloudTrail to track snapshot permission changes.
B
Explanation:
AWS provides EBS Block Public Access at the organization level in AWS Organizations. When enabled, it prevents any EBS snapshot, across all member accounts, from being shared publicly.
This is an organization-wide control implemented centrally, with no need for per-account configuration, monitoring rules, or custom IAM policy enforcement.
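As a sketch, the underlying EC2 setting can be enabled with the AWS CLI (this command operates per account and per Region; the organization-wide rollout described above applies the same control centrally to member accounts):

```shell
# Block all public sharing of EBS snapshots in this account/Region.
aws ec2 enable-snapshot-block-public-access --state block-all-sharing

# Verify the setting; expected state: block-all-sharing
aws ec2 get-snapshot-block-public-access-state
```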
AWS Config (Option A) would only detect issues after they occur. IAM restrictions (Option C) are less effective because snapshot permission changes can occur through multiple paths. CloudTrail (Option D) only logs events and does not block public sharing.
A company has a large fleet of vehicles that are equipped with internet connectivity to send telemetry to the company. The company receives over 1 million data points every 5 minutes from the vehicles. The company uses the data in machine learning (ML) applications to predict vehicle maintenance needs and to preorder parts. The company produces visual reports based on the captured data. The company wants to migrate the telemetry ingestion, processing, and visualization workloads to AWS.
Which solution will meet these requirements?
- A . Use Amazon Timestream for LiveAnalytics to store the data points. Grant Amazon SageMaker permission to access the data for processing. Use Amazon QuickSight to visualize the data.
- B . Use Amazon DynamoDB to store the data points. Use DynamoDB Connector to ingest data from DynamoDB into Amazon EMR for processing. Use Amazon QuickSight to visualize the data.
- C . Use Amazon Neptune to store the data points. Use Amazon Kinesis Data Streams to ingest data from Neptune into an AWS Lambda function for processing. Use Amazon QuickSight to visualize the data.
- D . Use Amazon Timestream for LiveAnalytics to store the data points. Grant Amazon SageMaker permission to access the data for processing. Use Amazon Athena to visualize the data.
A
Explanation:
Amazon Timestream: Purpose-built time series database optimized for telemetry and IoT data ingestion and analytics.
Amazon SageMaker: Provides ML capabilities for predictive maintenance workflows.
Amazon QuickSight: Efficiently generates interactive, real-time visual reports from Timestream data.
Optimized for Scale: Timestream efficiently handles large-scale telemetry data with time-series indexing and queries.
Amazon Timestream Documentation
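The kind of query Timestream optimizes can be pictured as time-bucketed aggregation over telemetry. A pure-Python sketch with made-up engine-temperature readings (the data and window size are illustrative only):

```python
from collections import defaultdict

WINDOW = 300  # seconds: 5-minute buckets, matching the ingestion interval

def rollup(points):
    """Average telemetry readings per 5-minute window, the kind of
    time-bucketed aggregation a time-series database runs natively at scale."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % WINDOW].append(value)  # floor each timestamp to its window
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

# Hypothetical readings: (epoch seconds, degrees C)
points = [(0, 80.0), (60, 82.0), (310, 90.0), (420, 94.0)]
print(rollup(points))  # -> {0: 81.0, 300: 92.0}
```

At over 1 million points per 5 minutes, doing this in application code becomes impractical, which is why a purpose-built time-series store is the better fit.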
An ecommerce company runs its application on AWS. The application uses an Amazon Aurora PostgreSQL cluster in Multi-AZ mode for the underlying database. During a recent promotional campaign, the application experienced heavy read and write load. Users experienced timeout issues when they attempted to access the application.
A solutions architect needs to make the application architecture more scalable and highly available.
Which solution will meet these requirements with the LEAST downtime?
- A . Create an Amazon EventBridge rule that has the Aurora cluster as a source. Create an AWS Lambda function to log the state change events of the Aurora cluster. Add the Lambda function as a target for the EventBridge rule. Add additional reader nodes to fail over to.
- B . Modify the Aurora cluster and activate the zero-downtime restart (ZDR) feature. Use Database Activity Streams on the cluster to track the cluster status.
- C . Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the Aurora cluster.
- D . Create an Amazon ElastiCache for Redis cache. Replicate data from the Aurora cluster to Redis by using AWS Database Migration Service (AWS DMS) with a write-around approach.
C
Explanation:
This solution directly addresses the scalability and high availability requirements with minimal downtime.
Additional Reader Instances: Adding more reader instances to the Aurora cluster will distribute the read load, improving the performance of the application under heavy read traffic. Aurora reader instances automatically replicate the data from the writer instance, enabling you to scale out read operations.
Amazon RDS Proxy: RDS Proxy improves database availability by managing database connections more efficiently and providing a connection pool. This reduces the overhead on the Aurora cluster during peak loads, further enhancing performance and availability without requiring changes to the application code.
Why Not Other Options?
Option A (EventBridge and Lambda): This doesn’t directly address the performance and availability issues. Logging state changes and adding reader nodes on failure events doesn’t provide proactive scalability.
Option B (Zero-Downtime Restart and Activity Streams): Zero-Downtime Restart (ZDR) is useful for minimizing downtime during maintenance but doesn’t directly improve scalability. Database Activity Streams are more for security monitoring than for performance enhancement.
Option D (ElastiCache for Redis): While adding a caching layer can help with read performance, it introduces complexity and may not be necessary if additional reader instances can handle the load.
Reference: Amazon Aurora Scaling (information on scaling Aurora clusters with reader instances)
Amazon RDS Proxy (details on how RDS Proxy can improve database performance and availability)
A company runs multiple workloads on virtual machines (VMs) in an on-premises data center. The company is expanding rapidly. The on-premises data center is not able to scale fast enough to meet business needs. The company wants to migrate the workloads to AWS.
The migration is time sensitive. The company wants to use a lift-and-shift strategy for non-critical workloads.
Which combination of steps will meet these requirements? (Select THREE.)
- A . Use the AWS Schema Conversion Tool (AWS SCT) to collect data about the VMs.
- B . Use AWS Application Migration Service. Install the AWS Replication Agent on the VMs.
- C . Complete the initial replication of the VMs. Launch test instances to perform acceptance tests on the VMs.
- D . Stop all operations on the VMs. Launch a cutover instance.
- E . Use AWS App2Container (A2C) to collect data about the VMs.
- F . Use AWS Database Migration Service (AWS DMS) to migrate the VMs.
B, C, D
Explanation:
AWS Application Migration Service (AWS MGN) is the recommended tool for a lift-and-shift strategy, especially for time-sensitive migrations. It automates the replication of on-premises VMs to AWS, minimizing the effort required for migration and testing.
Key steps:
Replication with AWS MGN: The AWS Replication Agent is installed on the VMs to continuously replicate data to AWS, allowing you to manage migration easily.
Testing and Cutover: Initial replication allows for testing in AWS before performing the final cutover, ensuring that the migration process is smooth and data integrity is maintained.
AWS Documentation: AWS MGN is recommended for migrating virtual machines to the cloud with minimal downtime and disruption.
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.
Which solution will meet these requirements?
- A . Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
- B . Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
- C . Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
- D . Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
B
Explanation:
Amazon GuardDuty includes RDS Protection that “monitors and profiles access to your Amazon Aurora and Amazon RDS databases” to detect threats such as suspicious login attempts, brute-force activity, and anomalous authentication patterns. GuardDuty can be enabled organization-wide in AWS Organizations with a delegated administrator to centralize findings for all member accounts, minimizing operational overhead. Findings include context like source IP, user, and DB instance, and integrate with Amazon EventBridge for alerting and automated response. SCPs (A) enforce or deny API permissions but do not provide detection/analytics. Exporting general logs (C) requires building and maintaining custom parsing/analytics pipelines. CloudTrail (D) records AWS control-plane API calls and does not log database-level login attempts. Therefore, enabling GuardDuty RDS Protection across the org provides the most operationally efficient, managed detection of abnormal failed and incomplete login attempts.
Reference: Amazon GuardDuty RDS Protection findings; GuardDuty organizational administration; Amazon RDS/Aurora integration with GuardDuty; AWS Security best practices (centralized threat detection).
A company is building a gaming application that needs to send unique events to multiple leaderboards, player matchmaking systems, and authentication services concurrently. The company requires an AWS-based event-driven system that delivers events in order and supports a publish-subscribe model. The gaming application must be the publisher, and the leaderboards, matchmaking systems, and authentication services must be the subscribers.
Which solution will meet these requirements?
- A . Amazon EventBridge event buses
- B . Amazon Simple Notification Service (Amazon SNS) FIFO topics
- C . Amazon Simple Notification Service (Amazon SNS) standard topics
- D . Amazon Simple Queue Service (Amazon SQS) FIFO queues
B
Explanation:
The requirement is an event-driven pub/sub system that guarantees ordered delivery of events.
Amazon SNS FIFO topics provide the publish-subscribe model along with FIFO (First-In-First-Out) delivery and exactly-once message processing, ensuring ordered delivery to multiple subscribers.
Option A, EventBridge, provides event buses but does not guarantee event ordering across multiple subscribers.
Option C (SNS standard topics) provides pub/sub but without ordering guarantees.
Option D (SQS FIFO queues) guarantees ordering, but SQS queues are point-to-point, not pub/sub.
Thus, Amazon SNS FIFO topics meet the requirements for ordered pub/sub messaging.
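SNS FIFO ordering is scoped to a message group: every subscriber receives every message, in the same order within each group. A toy fan-out sketch (plain Python stand-in, not the SNS API; the group and subscriber names are hypothetical):

```python
from collections import defaultdict

class FifoTopic:
    """Toy SNS-FIFO-style topic: every subscriber receives every message,
    and ordering is preserved within each message group."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self):
        inbox = defaultdict(list)       # group_id -> messages in arrival order
        self._subscribers.append(inbox)
        return inbox

    def publish(self, group_id, message):
        for inbox in self._subscribers:  # fan-out: pub/sub, not point-to-point
            inbox[group_id].append(message)

topic = FifoTopic()
leaderboard = topic.subscribe()
matchmaking = topic.subscribe()

topic.publish("player-7", "joined")
topic.publish("player-7", "scored")

print(leaderboard["player-7"])  # -> ['joined', 'scored'] (same order everywhere)
print(matchmaking["player-7"])  # -> ['joined', 'scored']
```

This is the combination SQS FIFO alone cannot provide: a single SQS FIFO queue delivers each message to only one consumer.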
Reference: Amazon SNS FIFO Topics (https://docs.aws.amazon.com/sns/latest/dg/fifo-topics.html)
Amazon EventBridge (https://docs.aws.amazon.com/eventbridge/latest/userguide/what-is-amazon-eventbridge.html)
AWS Well-Architected Framework, Performance Efficiency Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
