Practice Free SAA-C03 Exam Online Questions
A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the same tables. The applications need to be migrated one by one with a month in between each migration. Management has expressed concerns that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration.
What should a solutions architect recommend?
- A . Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC) replication task and a table mapping to select all tables.
- B . Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
- C . Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
- D . Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
C
Explanation:
https://aws.amazon.com/ko/premiumsupport/knowledge-center/dms-memory-optimization/
The AWS Schema Conversion Tool converts the Oracle schema to Aurora PostgreSQL (a heterogeneous migration), and a full load plus change data capture (CDC) replication task keeps the source and target in sync while the applications are migrated one by one. AWS recommends memory-optimized replication instances for high-throughput CDC workloads with many reads and writes.
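For illustration, a minimal boto3 sketch of the kind of full load plus CDC task the answer describes. The Region, endpoint ARNs, replication instance ARN, and task identifier are placeholders; the table mapping selects all schemas and tables.

```python
import json

import boto3

dms = boto3.client("dms", region_name="us-east-1")  # assumed Region

# Table mapping that selects every table in every schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# Full load plus ongoing replication (CDC) so both databases stay in sync.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",   # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:MEMOPT",   # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```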
A company migrated a MySQL database from the company’s on-premises data center to an Amazon RDS for MySQL DB instance. The company sized the RDS DB instance to meet the company’s average daily workload. Once a month, the database performs slowly when the company runs queries for a report. The company wants to have the ability to run reports and maintain the performance of the daily workloads.
Which solution will meet these requirements?
- A . Create a read replica of the database. Direct the queries to the read replica.
- B . Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the new database.
- C . Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
- D . Resize the DB instance to accommodate the additional workload.
C
Explanation:
Amazon Athena is a service that allows you to run SQL queries on data stored in Amazon S3. It is serverless, meaning you do not need to provision or manage any infrastructure. You only pay for the queries you run and the amount of data scanned.
By using Amazon Athena to query your data in Amazon S3, you can achieve the following benefits: You can run queries for your report without affecting the performance of your Amazon RDS for MySQL DB instance. You can export your data from your DB instance to an S3 bucket and use Athena to query the data in the bucket. This way, you can avoid the overhead and contention of running queries on your DB instance.
You can reduce the cost and complexity of running queries for your report. You do not need to create a read replica or a backup of your DB instance, which would incur additional charges and require maintenance. You also do not need to resize your DB instance to accommodate the additional workload, which would increase your operational overhead.
You can leverage the scalability and flexibility of Amazon S3 and Athena. You can store large amounts of data in S3 and query them with Athena without worrying about capacity or performance limitations. You can also use different formats, compression methods, and partitioning schemes to optimize your data storage and query performance.
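As a sketch of how the monthly report could be run once the data has been exported to S3 and cataloged: the database name, table, query, and results bucket below are assumed placeholders.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # assumed Region

# Run the monthly report against the exported data, without touching the RDS instance.
response = athena.start_query_execution(
    QueryString=(
        "SELECT customer_id, SUM(order_total) AS monthly_total "
        "FROM orders WHERE order_month = '2024-06' "
        "GROUP BY customer_id"
    ),
    QueryExecutionContext={"Database": "sales_export"},  # placeholder Glue/Athena database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/reports/"},  # placeholder bucket
)
print(response["QueryExecutionId"])
```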
A company wants to run big data workloads on Amazon EMR. The workloads need to process terabytes of data in memory.
A solutions architect needs to identify the appropriate EMR cluster instance configuration for the workloads.
Which solution will meet these requirements?
- A . Use a storage optimized instance for the primary node. Use compute optimized instances for core nodes and task nodes.
- B . Use a memory optimized instance for the primary node. Use storage optimized instances for core nodes and task nodes.
- C . Use a general purpose instance for the primary node. Use memory optimized instances for core nodes and task nodes.
- D . Use general purpose instances for the primary, core, and task nodes.
C
Explanation:
Big data workloads that need to process terabytes of data in memory require memory-optimized instances for the core and task nodes to ensure sufficient memory for processing data efficiently.
Primary Node: A general purpose instance is suitable because it manages cluster operations, including coordination and monitoring, and does not process data directly.
Core and Task Nodes: These nodes handle data storage and processing. Memory-optimized instances are ideal because they provide high memory-to-CPU ratios, which is critical for in-memory big data workloads.
Why Other Options Are Incorrect:
Option A: Storage optimized and compute optimized instances are not suitable for workloads that rely heavily on in-memory processing.
Option B: A memory-optimized primary node is unnecessary because the primary node does not process data.
Option D: General purpose instances for all nodes will not provide sufficient memory for processing terabytes of data in memory.
Reference: Amazon EMR Instance Types
Memory-Optimized Instances
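A minimal boto3 sketch of such a cluster, assuming Spark on an arbitrary EMR release; the subnet ID, instance counts, and instance types are placeholders chosen to illustrate the general purpose primary node and memory-optimized core/task nodes.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # assumed Region

# Primary (master) node: general purpose; core and task nodes: memory optimized.
response = emr.run_job_flow(
    Name="in-memory-bigdata-cluster",
    ReleaseLabel="emr-6.15.0",                       # assumed EMR release
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "r5.4xlarge", "InstanceCount": 4},
            {"Name": "task", "InstanceRole": "TASK",
             "InstanceType": "r5.4xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
        "Ec2SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet
    },
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
)
print(response["JobFlowId"])
```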
A company needs a solution to automate email ingestion. The company needs to automatically parse email messages, look for email attachments, and save any attachments to an Amazon S3 bucket in near real time. Email volume varies significantly from day to day.
Which solution will meet these requirements?
- A . Set up email receiving in Amazon Simple Email Service (Amazon SES). Create a rule set and a receipt rule. Create an AWS Lambda function that Amazon SES can invoke to process the email bodies and attachments.
- B . Set up email content filtering in Amazon Simple Email Service (Amazon SES). Create a content filtering rule based on sender, recipient, message body, and attachments.
- C . Set up email receiving in Amazon Simple Email Service (Amazon SES). Configure Amazon SES and S3 Event Notifications to process the email bodies and attachments.
- D . Create an AWS Lambda function to process the email bodies and attachments. Use Amazon EventBridge to invoke the Lambda function. Configure an EventBridge rule to listen for incoming emails.
A
Explanation:
Amazon SES (Simple Email Service) allows for the automatic ingestion of incoming emails. By setting up email receiving in SES and creating a rule set with a receipt rule, you can configure SES to invoke an AWS Lambda function whenever an email is received. The Lambda function can then process the email body and attachments, saving any attachments to an Amazon S3 bucket. This solution is highly scalable, cost-effective, and provides near real-time processing of emails with minimal operational overhead.
Option B (Content filtering): This only filters emails based on content and does not provide the functionality to save attachments to S3.
Option C (S3 Event Notifications): While SES can store emails in S3, SES with Lambda offers more flexibility for processing attachments in real-time.
Option D (EventBridge rule): EventBridge cannot directly listen for incoming emails, making this solution incorrect.
Reference: Receiving Email with Amazon SES
Invoking Lambda from SES
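A minimal sketch of the SES receiving setup the answer describes, using boto3. The rule set name, recipient address, and Lambda function ARN are placeholders. In a real deployment the receipt rule typically also stores the raw message (for example, with an S3 action) so the function can parse the full MIME content and attachments.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")  # email receiving is Region-specific; assumed Region

ses.create_receipt_rule_set(RuleSetName="inbound-email-rules")  # placeholder rule set name
ses.create_receipt_rule(
    RuleSetName="inbound-email-rules",
    Rule={
        "Name": "process-attachments",
        "Enabled": True,
        "Recipients": ["inbox@example.com"],  # placeholder recipient
        "Actions": [
            {
                "LambdaAction": {
                    "FunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:parse-email",  # placeholder
                    "InvocationType": "Event",  # asynchronous, near-real-time invocation
                }
            }
        ],
    },
)
ses.set_active_receipt_rule_set(RuleSetName="inbound-email-rules")
```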
A company is building a web application that serves a content management system. The content management system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and other website assets in the content management system.
A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content with the least possible lag time.
Which solution meets these requirements?
- A . Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from the EC2 instance that was launched most recently. Configure the ALB to make changes to the website assets only in the newest EC2 instance.
- B . Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
- C . Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 Instance downloads the website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3 sync command once each hour to keep files up to date.
- D . Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting application to reference the website assets that are stored in the secondary EBS volume.
B
Explanation:
Understanding the Requirement: The company needs all EC2 instances to share up-to-date website content with minimal lag time, running behind an Application Load Balancer.
Analysis of Options:
EC2 User Data with ALB: Complex and not scalable, as it requires updating each instance manually.
Amazon EFS: Provides a scalable, shared file storage solution that can be mounted by multiple EC2 instances, ensuring all instances have access to the same up-to-date content.
Amazon S3 with EC2 Sync: Involves periodic synchronization which introduces lag and complexity.
Amazon EBS Snapshots: Not suitable for dynamic and frequent updates required by a content management system.
Best Solution:
Amazon EFS: Ensures all EC2 instances have access to a consistent and up-to-date set of website assets with minimal lag time, meeting the requirements effectively.
Reference: Amazon Elastic File System (EFS)
Mounting EFS File Systems on EC2 Instances
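A minimal boto3 sketch of provisioning the shared file system; the subnet and security group IDs are placeholders. Each EC2 instance would then mount the file system (for example, with the amazon-efs-utils mount helper in the instance user data) and the CMS would read and write its assets on the shared mount.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")  # assumed Region

# Create the shared file system that every web server will mount.
fs = efs.create_file_system(
    CreationToken="cms-shared-assets",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone used by the Auto Scaling group.
for subnet_id in ["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"]:  # placeholder subnets
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder: allows NFS (TCP 2049) from the instances
    )
```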
A company has a three-tier application for image sharing. The application uses an Amazon EC2 instance for the front-end layer, another EC2 instance for the application layer, and a third EC2 instance for a MySQL database. A solutions architect must design a scalable and highly available solution that requires the least amount of change to the application.
Which solution meets these requirements?
- A . Use Amazon S3 to host the front-end layer. Use AWS Lambda functions for the application layer. Move the database to an Amazon DynamoDB table. Use Amazon S3 to store and serve users’ images.
- B . Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an Amazon RDS DB instance with multiple read replicas to serve users’ images.
- C . Use Amazon S3 to host the front-end layer. Use a fleet of EC2 instances in an Auto Scaling group for the application layer. Move the database to a memory optimized instance type to store and serve users’ images.
- D . Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an Amazon RDS Multi-AZ DB instance. Use Amazon S3 to store and serve users’ images.
D
Explanation:
for "Highly available": Multi-AZ & for "least amount of changes to the application": Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring
A company runs a web application on Amazon EC2 instances in multiple Availability Zones. The EC2 instances are in private subnets. A solutions architect implements an internet-facing Application Load Balancer (ALB) and specifies the EC2 instances as the target group. However, the internet traffic is not reaching the EC2 instances.
How should the solutions architect reconfigure the architecture to resolve this issue?
- A . Replace the ALB with a Network Load Balancer. Configure a NAT gateway in a public subnet to allow internet traffic.
- B . Move the EC2 instances to public subnets. Add a rule to the EC2 instances’ security groups to allow outbound traffic to 0.0.0.0/0.
- C . Update the route tables for the EC2 instances’ subnets to send 0.0.0.0/0 traffic through the internet gateway route. Add a rule to the EC2 instances’ security groups to allow outbound traffic to 0.0.0.0/0.
- D . Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update the route tables for the public subnets with a route to the private subnets.
D
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
An internet-facing ALB must be associated with public subnets (subnets whose route table has a route to the internet gateway) in each Availability Zone. The ALB then forwards traffic to the targets in the private subnets over the VPC's local routes, so the EC2 instances can stay private.
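A minimal boto3 sketch of creating the internet-facing ALB in the public subnets; the name, subnet IDs, and security group are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # assumed Region

# Internet-facing ALB associated with one public subnet per Availability Zone.
lb = elbv2.create_load_balancer(
    Name="web-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=[
        "subnet-0aaaa1111bbbb2222c",  # placeholder public subnet, AZ a
        "subnet-0cccc3333dddd4444e",  # placeholder public subnet, AZ b
    ],
    SecurityGroups=["sg-0123456789abcdef0"],  # placeholder: allows HTTP/HTTPS from the internet
)
print(lb["LoadBalancers"][0]["DNSName"])
```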
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week.
What should the company do to guarantee the EC2 capacity?
- A . Purchase Reserved Instances that specify the Region needed.
- B . Create an On-Demand Capacity Reservation that specifies the Region needed.
- C . Purchase Reserved Instances that specify the Region and three Availability Zones needed.
- D . Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
D
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
Reserved Instances require a 1-year or 3-year commitment, which is not cost-effective for a 1-week event, and a Regional reservation does not guarantee capacity in specific Availability Zones. An On-Demand Capacity Reservation can be created for exactly the instance types, Availability Zones, and duration needed.
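A minimal boto3 sketch of reserving capacity in three Availability Zones; the Region, AZs, instance type, counts, and end date are placeholders for the event in question.

```python
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# One reservation per Availability Zone; capacity is guaranteed while the reservation is active.
for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:  # placeholder AZs
    ec2.create_capacity_reservation(
        InstanceType="m5.large",                             # assumed instance type
        InstancePlatform="Linux/UNIX",
        AvailabilityZone=az,
        InstanceCount=20,                                    # assumed capacity per AZ
        EndDateType="limited",
        EndDate=datetime(2025, 7, 8, tzinfo=timezone.utc),   # placeholder: end of the 1-week event
    )
```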
A company hosts an Amazon EC2 instance in a private subnet in a new VPC. The VPC also has a public subnet that has the default route set to an internet gateway. The private subnet does not have outbound internet access.
The EC2 instance needs to have the ability to download monthly security updates from an outside vendor. However, the company must block any connections that are initiated from the internet.
Which solution will meet these requirements?
- A . Configure the private subnet route table to use the internet gateway as the default route.
- B . Create a NAT gateway in the public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
- C . Create a NAT instance in the private subnet. Configure the private subnet route table to use the NAT instance as the default route.
- D . Create a NAT instance in the private subnet. Configure the private subnet route table to use the internet gateway as the default route.
B
Explanation:
A NAT gateway in the public subnet lets the instance in the private subnet initiate outbound connections to the internet, such as downloading the monthly security updates, while connections initiated from the internet are blocked because the NAT gateway allows only return traffic for outbound requests. Routing the private subnet directly through the internet gateway (options A and D) would expose the instance, and a NAT instance must be placed in a public subnet, not the private subnet (option C).
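A minimal boto3 sketch of this setup; the public subnet ID and private route table ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# Allocate an Elastic IP and create the NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaaa1111bbbb2222c",            # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat["NatGateway"]["NatGatewayId"]]
)

# Point the private subnet's default route at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",            # placeholder private subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```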
A company is designing an event-driven order processing system. Each order requires multiple validation steps after the order is created. An independent AWS Lambda function performs each validation step. Each validation step is independent from the other validation steps. Individual validation steps need only a subset of the order event information.
The company wants to ensure that each validation step Lambda function has access to only the information from the order event that the function requires. The components of the order processing system should be loosely coupled to accommodate future business changes.
Which solution will meet these requirements?
- A . Create an Amazon Simple Queue Service (Amazon SQS) queue for each validation step. Create a new Lambda function to transform the order data to the format that each validation step requires and to publish the messages to the appropriate SQS queues. Subscribe each validation step Lambda function to its corresponding SQS queue.
- B . Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the validation step Lambda functions to the SNS topic. Use message body filtering to send only the required data to each subscribed Lambda function.
- C . Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.
- D . Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a new Lambda function to subscribe to the SQS queue and to transform the order data to the format that each validation step requires. Use the new Lambda function to perform synchronous invocations of the validation step Lambda functions in parallel on separate threads.
C
Explanation:
Understanding the Requirement: The order processing system requires multiple independent validation steps, each handled by separate Lambda functions, with each function accessing only the subset of order information it needs. The system should be loosely coupled to accommodate future changes.
Analysis of Options:
Amazon SQS with a new Lambda function for transformation: This involves additional complexity in creating and managing multiple SQS queues and an extra Lambda function for data transformation.
Amazon SNS with message filtering: SNS filter policies control which messages are delivered to a subscriber but do not remove fields from the payload, so each subscribed Lambda function would still receive the full order event rather than only the subset it needs.
Amazon EventBridge with input transformers: EventBridge is designed for event-driven architectures, allowing for fine-grained control with input transformers that can modify and filter the event data sent to each target Lambda function, ensuring each function receives only the necessary information.
SQS with synchronous Lambda invocations: This approach adds unnecessary complexity with synchronous invocations and is not ideal for an event-driven, loosely coupled architecture.
Best Solution:
Amazon EventBridge with input transformers: This option provides the most flexible, scalable, and loosely coupled architecture, enabling each Lambda function to receive only the required subset of data.
Reference: Amazon EventBridge
EventBridge Input Transformer
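A minimal boto3 sketch of one such rule and target with an input transformer; the event bus name, event pattern source and detail-type, field paths, and Lambda ARN are placeholders for one hypothetical validation step.

```python
import boto3

events = boto3.client("events", region_name="us-east-1")  # assumed Region

BUS = "orders-bus"  # placeholder event bus name

# One rule per validation step; this one matches newly created orders.
events.put_rule(
    Name="validate-payment",
    EventBusName=BUS,
    EventPattern='{"source": ["com.example.orders"], "detail-type": ["OrderCreated"]}',
)

# The input transformer forwards only the fields this validation step needs.
events.put_targets(
    Rule="validate-payment",
    EventBusName=BUS,
    Targets=[
        {
            "Id": "payment-validator",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:validate-payment",  # placeholder
            "InputTransformer": {
                "InputPathsMap": {
                    "orderId": "$.detail.orderId",
                    "amount": "$.detail.paymentAmount",
                },
                "InputTemplate": '{"orderId": <orderId>, "amount": <amount>}',
            },
        }
    ],
)
```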