Practice Free SAA-C03 Exam Online Questions
A company’s website handles millions of requests each day, and the number of requests continues to increase. A solutions architect needs to improve the response time of the web application. The solutions architect determines that the application needs to decrease latency when retrieving product details from the Amazon DynamoDB table.
Which solution will meet these requirements with the LEAST amount of operational overhead?
- A . Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
- B . Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read requests through Redis.
- C . Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all read requests through Memcached.
- D . Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all read requests through ElastiCache.
A
Explanation:
Setting up a DynamoDB Accelerator (DAX) cluster allows the company to improve the response time of the web application and decrease latency when retrieving product details from the Amazon DynamoDB table. DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x read performance improvement. By routing all read requests through DAX, the company can reduce the number of read operations on the DynamoDB table and improve the user experience with the least operational overhead, because DAX is API-compatible with DynamoDB and requires no separate cache-management code.
Reference: Amazon DynamoDB Accelerator (DAX)
Using DAX with DynamoDB
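The caching behavior DAX provides transparently can be pictured with a toy read-through cache. This sketch is not the DAX client API; with real DAX you point the drop-in-compatible DAX client at the cluster endpoint and keep calling `get_item` as usual. The product data here is invented for illustration.

```python
# Toy illustration of the read-through caching pattern that DAX applies
# transparently to DynamoDB reads. Not the real DAX client: with DAX you
# simply point the DynamoDB-compatible client at the cluster endpoint.

class ReadThroughCache:
    def __init__(self, backing_store):
        self._store = backing_store      # stands in for the DynamoDB table
        self._cache = {}                 # stands in for the in-memory DAX item cache
        self.backend_reads = 0           # how many reads actually hit the table

    def get_item(self, key):
        if key in self._cache:           # cache hit: microsecond-scale in DAX
            return self._cache[key]
        self.backend_reads += 1          # cache miss: read from the table once
        value = self._store[key]
        self._cache[key] = value         # populate the cache for later readers
        return value

products = {"sku-1": {"name": "widget", "price": 9}}
cache = ReadThroughCache(products)
cache.get_item("sku-1")                  # miss: reads the table
cache.get_item("sku-1")                  # hit: served from memory
print(cache.backend_reads)               # -> 1
```

Repeated reads cost the table nothing, which is exactly why routing all reads through DAX reduces the load on the DynamoDB table.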
A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions across all the accounts.
The permissions will be used by multiple IAM users and must be split between the developer and administrator teams. Each team requires different permissions. The company wants a solution that includes new users that are hired on both teams.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create individual users in IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Create a custom IAM policy for each group to set fine-grained permissions.
- B . Create individual users in IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Attach AWS managed IAM policies to each user as needed for fine-grained permissions.
- C . Create individual users in IAM Identity Center. Create new developer and administrator groups in IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.
- D . Create individual users in IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each user. Assign the users to the appropriate accounts. Grant additional IAM permissions to the users from within specific accounts. When new users are hired, add them to IAM Identity Center and assign them to the accounts.
C
Explanation:
This solution meets the requirements with the least operational overhead because it leverages the features of IAM Identity Center and AWS Control Tower to centrally manage user permissions across all the accounts. By creating new groups and permission sets, the company can assign fine-grained permissions to the developer and administrator teams based on their roles and responsibilities. The permission sets are assigned to the groups for the appropriate accounts, and IAM Identity Center provisions the corresponding roles in each of those accounts automatically. When new users are hired, the company only needs to add them to the appropriate group in IAM Identity Center, and they automatically receive the permissions assigned to that group. This simplifies user management and avoids the manual effort of assigning permissions to each user individually.
Reference: Managing access to AWS accounts and applications
Managing permission sets
Managing groups
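As a hypothetical sketch of the group/permission-set layout described above, a small mapping from team to permission-set definition shows how little per-user work remains; the team names and managed-policy ARNs are illustrative choices, and the commented calls indicate where the real IAM Identity Center (sso-admin) API would be invoked.

```python
# Hypothetical layout: one permission set per team, attached managed policies
# chosen for illustration only (a real deployment would pick its own policies).

def permission_set_spec(team):
    """Map a team to the managed policies its permission set would attach."""
    specs = {
        "developers": {
            "Name": "DeveloperAccess",
            "ManagedPolicies": ["arn:aws:iam::aws:policy/PowerUserAccess"],
        },
        "administrators": {
            "Name": "AdministratorAccess",
            "ManagedPolicies": ["arn:aws:iam::aws:policy/AdministratorAccess"],
        },
    }
    return specs[team]

# With boto3 this would look roughly like (not executed here):
#   sso = boto3.client("sso-admin")
#   ps = sso.create_permission_set(InstanceArn=instance_arn, Name=spec["Name"])
#   sso.attach_managed_policy_to_permission_set(...)
#   sso.create_account_assignment(..., PrincipalType="GROUP", PrincipalId=group_id)
# New hires are then just added to the group -- no per-user policy work.

print(permission_set_spec("developers")["Name"])  # -> DeveloperAccess
```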
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
- A . Copy the data so both EBS volumes contain all the documents.
- B . Configure the Application Load Balancer to direct a user to the server with the documents.
- C . Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
- D . Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.
C
Explanation:
Amazon EFS provides a shared file system that can be mounted concurrently by EC2 instances in multiple Availability Zones, so both instances read and write the same set of documents.
https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html#how-it-works-ec2
A financial services company has a two-tier consumer banking application. The frontend serves static web content. The backend consists of APIs. The company needs to migrate the frontend component to AWS. The backend of the application will remain on-premises. The company must protect the application from common web vulnerabilities and attacks.
Which solution will meet these requirements?
- A . Migrate the frontend to Amazon EC2 instances. Deploy an Application Load Balancer (ALB) in front of the instances. Use the instances to invoke the on-premises APIs. Associate AWS WAF rules with the instances.
- B . Deploy the frontend as an Amazon CloudFront distribution that has multiple origins. Configure one origin to be an Amazon S3 bucket that serves the static web content. Configure a second origin to route traffic to the on-premises APIs based on the URL pattern. Associate AWS WAF rules with the distribution.
- C . Migrate the frontend to Amazon EC2 instances. Deploy a Network Load Balancer (NLB) in front of the instances. Use the instances to invoke the on-premises APIs. Create an AWS Network Firewall instance. Route all traffic through the Network Firewall instance.
- D . Deploy the frontend as a static website based on an Amazon S3 bucket. Use an Amazon API Gateway REST API and a set of Amazon EC2 instances to invoke the on-premises APIs. Associate AWS WAF rules with the REST API and the S3 bucket.
B
Explanation:
Key Requirements:
Host the frontend on AWS as a static website.
Protect the application from common web vulnerabilities.
Minimal operational overhead.
Analysis of Options:
Option A:
Hosting the frontend on EC2 with an ALB introduces unnecessary complexity for serving static content.
AWS WAF rules can protect the ALB, but managing EC2 instances adds operational overhead.
Incorrect Approach: High operational complexity for a simple static website.
Option B:
Amazon CloudFront: Acts as a global CDN, reducing latency and protecting against DDoS attacks.
Multiple Origins: Allows static content to be served from S3 while routing API traffic to the on-premises backend.
AWS WAF: Integrates with CloudFront to provide web application protection.
Correct Approach: Offers low operational overhead with optimal security and performance.
Option C:
Using NLB and Network Firewall is unnecessary for a static website. This approach increases cost and complexity without addressing the frontend requirements effectively.
Incorrect Approach: Over-engineered solution.
Option D:
Hosting the frontend on S3 and using API Gateway is a viable option, but managing AWS WAF rules separately for both the S3 bucket and the REST API increases complexity.
Incorrect Approach: Less efficient than using CloudFront with multiple origins.
Reference: Amazon CloudFront Overview
AWS WAF with CloudFront
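The two-origin routing in option B can be sketched as a simplified distribution configuration. The field names mirror a trimmed-down CloudFront DistributionConfig, but the bucket, API hostname, and web ACL ARN are placeholders, and a real configuration requires more fields than shown here.

```python
# Simplified, illustrative shape of the CloudFront distribution in option B.
# All names are placeholders; a real DistributionConfig has more required fields.
distribution = {
    "Origins": [
        {"Id": "static-site", "DomainName": "example-bucket.s3.amazonaws.com"},
        {"Id": "onprem-apis", "DomainName": "api.corp.example.com",
         "CustomOriginConfig": {"OriginProtocolPolicy": "https-only"}},
    ],
    # Everything not matched by a more specific behavior goes to S3.
    "DefaultCacheBehavior": {"TargetOriginId": "static-site"},
    # URL-pattern routing: API calls are forwarded to the on-premises origin.
    "CacheBehaviors": [{"PathPattern": "/api/*", "TargetOriginId": "onprem-apis"}],
    # AWS WAF attaches at the distribution, protecting both origins at the edge.
    "WebACLId": "<your-web-acl-arn>",
}
print(distribution["CacheBehaviors"][0]["PathPattern"])  # -> /api/*
```

Because AWS WAF attaches once at the distribution, both the static content and the API path are protected without managing separate rule sets.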
A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed.
Which AWS solution meets these requirements?
- A . Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server to the file share.
- B . Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.
- C . Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
- D . Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
D
Explanation:
Amazon FSx for Windows File Server has native support for Windows file system features and for the industry-standard Server Message Block (SMB) protocol to access file storage over a network.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
A company is migrating a document management application to AWS. The application runs on Linux servers. The company will migrate the application to Amazon EC2 instances in an Auto Scaling group. The company stores 7 TiB of documents in a shared storage file system. An external relational database tracks the documents.
Documents are stored once and can be retrieved multiple times for reference at any time. The company cannot modify the application during the migration. The storage solution must be highly available and must support scaling over time.
Which solution will meet these requirements MOST cost-effectively?
- A . Deploy an EC2 instance with enhanced networking as a shared NFS storage system. Export the NFS share. Mount the NFS share on the EC2 instances in the Auto Scaling group.
- B . Create an Amazon S3 bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Mount the S3 bucket on the EC2 instances in the Auto Scaling group.
- C . Deploy an SFTP server endpoint by using AWS Transfer for SFTP and an Amazon S3 bucket. Configure the EC2 instances in the Auto Scaling group to connect to the SFTP server.
- D . Create an Amazon Elastic File System (Amazon EFS) file system with mount points in multiple Availability Zones. Use the EFS Standard-Infrequent Access (Standard-IA) storage class. Mount the NFS share on the EC2 instances in the Auto Scaling group.
D
Explanation:
Requirement Analysis: The company needs highly available, scalable storage for a document management application without modifying the application during migration.
EFS Overview: Amazon EFS provides scalable file storage that can be mounted concurrently on multiple EC2 instances across different Availability Zones.
EFS Standard-IA: Using the Standard-IA storage class helps reduce costs for infrequently accessed data while maintaining high availability and scalability.
Implementation:
Create an EFS file system.
Configure mount targets in multiple Availability Zones to ensure high availability.
Mount the EFS file system on EC2 instances in the Auto Scaling group.
Conclusion: This solution meets the high availability, scalability, and cost-effectiveness requirements without needing application modifications.
Reference
Amazon EFS: Amazon EFS Documentation
EFS Storage Classes: Amazon EFS Storage Classes
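As a sketch of the mount step (the file system ID and mount point are placeholders), each instance in the Auto Scaling group could mount the same EFS file system through an /etc/fstab entry that uses the amazon-efs-utils mount helper:

```
# Illustrative /etc/fstab entry; fs-12345678 and /mnt/docs are placeholders.
fs-12345678:/ /mnt/docs efs _netdev,tls 0 0
```

Because every instance mounts the same file system, documents written by one instance are immediately visible to the others, with no application changes.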
A company wants to relocate its on-premises MySQL database to AWS. The database accepts regular imports from a client-facing application, which causes a high volume of write operations. The company is concerned that the amount of traffic might be causing performance issues within the application.
Which solution will meet these requirements?
- A . Provision an Amazon RDS for MySQL DB instance with Provisioned IOPS SSD storage. Monitor write operation metrics by using Amazon CloudWatch. Adjust the provisioned IOPS if necessary.
- B . Provision an Amazon RDS for MySQL DB instance with General Purpose SSD storage. Place an Amazon ElastiCache cluster in front of the DB instance. Configure the application to query ElastiCache instead.
- C . Provision an Amazon DocumentDB (with MongoDB compatibility) instance with a memory-optimized instance type. Monitor Amazon CloudWatch for performance-related issues. Change the instance class if necessary.
- D . Provision an Amazon Elastic File System (Amazon EFS) file system in General Purpose performance mode. Monitor Amazon CloudWatch for IOPS bottlenecks. Change to Provisioned Throughput performance mode if necessary.
A
Explanation:
Provisioned IOPS SSD storage is designed for I/O-intensive database workloads that need consistent, low-latency performance, which suits the high volume of write operations described. Monitoring write metrics in CloudWatch and adjusting the provisioned IOPS keeps performance predictable as import traffic grows. A cache in front of the database helps reads rather than writes, DocumentDB is a different database engine, and EFS is file storage, not a database storage option.
A company has 15 employees. The company stores employee start dates in an Amazon DynamoDB table. The company wants to send an email message to each employee on the day of the employee’s work anniversary.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Create a script that scans the DynamoDB table and uses Amazon Simple Notification Service (Amazon SNS) to send email messages to employees when necessary. Use a cron job to run this script every day on an Amazon EC2 instance.
- B . Create a script that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon SQS) to send email messages to employees when necessary. Use a cron job to run this script every day on an Amazon EC2 instance.
- C . Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Notification Service (Amazon SNS) to send email messages to employees when necessary. Schedule this Lambda function to run every day.
- D . Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon SQS) to send email messages to employees when necessary. Schedule this Lambda function to run every day.
C
Explanation:
AWS Lambda for Operational Efficiency:
AWS Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. It automatically scales based on the number of invocations and eliminates the need to maintain and monitor EC2 instances, making it far more operationally efficient compared to running a cron job on EC2.
By using Lambda, you pay only for the compute time that your function uses. This is especially
beneficial when dealing with lightweight tasks, such as scanning a DynamoDB table and sending
email messages once a day.
Amazon DynamoDB:
DynamoDB is a highly scalable, fully managed NoSQL database. The table stores employee start dates, and scanning the table to find the employees who have a work anniversary on the current day is a lightweight operation. Lambda can easily perform this operation using the DynamoDB Scan API or queries, depending on how the data is structured.
Amazon SNS for Email Notifications:
Amazon Simple Notification Service (SNS) is a fully managed messaging service that supports sending notifications to a variety of endpoints, including email. SNS is well-suited for sending out email messages to employees, as it can handle the fan-out messaging pattern (sending the same message to multiple recipients).
In this scenario, once Lambda identifies employees who have their work anniversaries, it can use SNS to send the email notifications efficiently. SNS integrates seamlessly with Lambda, and sending emails via SNS is a common pattern for this type of use case.
Event Scheduling:
To automate this daily task, you can schedule the Lambda function using Amazon EventBridge (formerly CloudWatch Events). EventBridge can trigger the Lambda function on a daily schedule (cron-like scheduling). This avoids the complexity and operational overhead of manually setting up cron jobs on EC2 instances.
Why Not EC2 or SQS?
Options A and B suggest running a cron job on an Amazon EC2 instance. This approach requires you to manage, scale, and patch the EC2 instance, which increases operational overhead. Lambda is a better choice because it automatically scales and doesn’t require server management.
Amazon Simple Queue Service (SQS) is ideal for decoupling distributed systems but isn’t necessary in this context because the goal is to send notifications to employees on their work anniversaries. SQS adds unnecessary complexity for this straightforward use case, where SNS is the simpler and more efficient solution.
Reference: AWS Lambda
Amazon SNS
Amazon DynamoDB
Amazon EventBridge
Summary:
Using AWS Lambda combined with Amazon SNS to send notifications, and scheduling the function with Amazon EventBridge to run daily, is the most operationally efficient solution. It leverages AWS serverless technologies, which reduce the need for infrastructure management and provide automatic scaling. Therefore, option C is the correct and optimal choice.
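The core of the Lambda function can be sketched as a pure filtering step, assuming start dates are stored as ISO "YYYY-MM-DD" strings in an attribute named `start_date` (the attribute names and sample data are assumptions for illustration, not details from the question).

```python
# Minimal sketch of the Lambda handler's core logic; attribute names and
# sample data are hypothetical. In the real function, `items` would come
# from a DynamoDB scan and each hit would be published to SNS.
import datetime

def anniversaries(items, today):
    """Return employees whose work anniversary (month/day) falls on `today`."""
    hits = []
    for item in items:
        start = datetime.date.fromisoformat(item["start_date"])
        # Same month and day, and at least one full year of service.
        if (start.month, start.day) == (today.month, today.day) and start < today:
            hits.append(item["email"])
    return hits

# Publishing step, roughly (not executed here):
#   sns.publish(TopicArn=topic_arn, Message="Happy work anniversary!")

employees = [
    {"email": "ana@example.com", "start_date": "2020-06-15"},
    {"email": "bo@example.com", "start_date": "2021-11-02"},
]
print(anniversaries(employees, datetime.date(2024, 6, 15)))  # -> ['ana@example.com']
```

Scheduling this function with a daily EventBridge rule completes the design with no servers to manage.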
A company is designing an application to connect AWS Lambda functions to an Amazon RDS for MySQL DB instance. The DB instance manages many connections. The company needs to modify the application to improve connectivity and recovery.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon RDS Proxy for connection pooling. Modify the application to use the RDS Proxy for connections to the DB instance.
- B . Create a new RDS instance for connection pooling. Modify the application to use the new RDS instance for connectivity.
- C . Create read replicas to distribute the load of the DB instance. Create a Network Load Balancer to distribute the load across the read replicas.
- D . Migrate the RDS for MySQL DB instance to Amazon Aurora MySQL to increase DB instance performance.
A
Explanation:
Amazon RDS Proxy helps manage thousands of concurrent database connections by pooling and reusing them efficiently. It is especially useful for serverless applications like AWS Lambda that can open numerous connections quickly, potentially overwhelming the database. Using RDS Proxy reduces connection management overhead and improves fault tolerance.
Reference: AWS Documentation - Amazon RDS Proxy
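The benefit of pooling can be pictured with a toy pool that reuses connections instead of opening new ones. RDS Proxy does all of this as a managed service: the Lambda function simply connects to the proxy endpoint instead of the DB endpoint, and the proxy multiplexes many Lambda connections over a small set of database connections.

```python
# Toy illustration of connection reuse -- the idea behind RDS Proxy.
# With RDS Proxy you write none of this; it is shown only to make the
# saving concrete.

class FakeConnection:
    opened = 0                      # counts "real" database connections
    def __init__(self):
        FakeConnection.opened += 1

class TinyPool:
    def __init__(self):
        self._idle = []
    def acquire(self):
        # Reuse an idle connection when possible instead of opening a new one,
        # which is the key saving when many short-lived Lambdas connect.
        return self._idle.pop() if self._idle else FakeConnection()
    def release(self, conn):
        self._idle.append(conn)

pool = TinyPool()
for _ in range(100):                # 100 sequential "Lambda invocations"
    conn = pool.acquire()
    pool.release(conn)
print(FakeConnection.opened)        # -> 1 (one backend connection, reused)
```

Without pooling, the same 100 invocations would have opened 100 database connections, which is exactly the pressure RDS Proxy relieves.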
A company is developing a serverless web application that gives users the ability to interact with real-time analytics from online games. The data from the games must be streamed in real time. The company needs a durable, low-latency database option for user data. The company does not know how many users will use the application. Any design considerations must provide response times of single-digit milliseconds as the application scales.
Which combination of AWS services will meet these requirements? (Select TWO.)
- A . Amazon CloudFront
- B . Amazon DynamoDB
- C . Amazon Kinesis
- D . Amazon RDS
- E . AWS Global Accelerator
B, C
Explanation:
Amazon Kinesis allows real-time ingestion of game events at scale, while Amazon DynamoDB provides millisecond-latency access to user data, automatically scaling with demand. This combination ensures real-time processing and fast data retrieval without managing infrastructure.
Reference: AWS Documentation - Real-Time Processing with Kinesis and Low-Latency Databases with DynamoDB
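A producer-side sketch shows how a game event might be shaped for Kinesis ingestion; the stream name, field names, and the choice of `user_id` as the partition key are illustrative assumptions, not details from the question.

```python
# Hypothetical record shaping for Kinesis ingestion; all names are assumed.
import json

def make_kinesis_record(event):
    """Build the dict that kinesis.put_record(**record) would accept."""
    return {
        "StreamName": "game-events",            # hypothetical stream name
        "Data": json.dumps(event).encode(),     # Kinesis payloads are bytes
        "PartitionKey": event["user_id"],       # spreads users across shards
    }

# A consumer would then write per-user aggregates to DynamoDB, whose
# single-digit-millisecond reads serve the analytics UI, e.g. (not executed):
#   table.put_item(Item={"user_id": ..., "score": ...})

record = make_kinesis_record({"user_id": "u-42", "event": "level_up"})
print(record["PartitionKey"])  # -> u-42
```

Kinesis handles the real-time ingestion and DynamoDB the low-latency reads, which is why options B and C together satisfy the requirements.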