Practice Free SAA-C03 Exam Online Questions
A company stores text files in Amazon S3. The text files include customer chat messages, date and time information, and customer personally identifiable information (PII).
The company needs a solution to provide samples of the conversations to an external service provider for quality control. The external service provider needs to randomly pick sample conversations up to the most recent conversation. The company must not share the customer PII with the external service provider. The solution must scale when the number of customer conversations increases.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when the function reads the file. Instruct the external service provider to access the Object Lambda Access Point.
- B . Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII from the files, and writes the redacted files to a different S3 bucket. Instruct the external service provider to access the bucket that does not contain the PII.
- C . Create a web application on an Amazon EC2 instance that presents a list of the files, redacts the PII from the files, and allows the external service provider to download new versions of the files that have the PII redacted.
- D . Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only the data in the files that does not contain PII. Configure the Lambda function to store the non-PII data in the DynamoDB table when a new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.
A
Explanation:
The correct solution is to create an Object Lambda Access Point and an AWS Lambda function that redacts the PII when the function reads the file. This way, the company can use the S3 Object Lambda feature to modify the S3 object content on the fly, without creating a copy or changing the original object. The external service provider can access the Object Lambda Access Point and get the redacted version of the file. This solution has the least operational overhead because it does not require any additional storage, processing, or synchronization. The solution also scales automatically with the number of customer conversations and the demand from the external service provider.
The other options are incorrect because:
Option B is using a batch process on an EC2 instance to read, redact, and write the files to a different S3 bucket. This solution has more operational overhead because it requires managing the EC2 instance, the batch process, and the additional S3 bucket. It also introduces latency and inconsistency between the original and the redacted files.
Option C is using a web application on an EC2 instance to present, redact, and download the files. This solution has more operational overhead because it requires managing the EC2 instance, the web application, and the download process. It also exposes the original files to the web application, which increases the risk of leaking the PII.
Option D is using a DynamoDB table and a Lambda function to store the non-PII data from the files. This solution has more operational overhead because it requires managing the DynamoDB table, the Lambda function, and the data transformation. It also changes the format and the structure of the original files, which may affect the quality control process.
Reference: S3 Object Lambda
Object Lambda Access Point
Lambda function
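As a sketch of how option A works: the Object Lambda Access Point routes each GET request through a Lambda function that rewrites the object body before it reaches the caller. The handler below is a minimal illustration, assuming simple regex patterns for emails and phone numbers; a production redactor would need broader PII coverage (for example, Amazon Comprehend's PII detection).

```python
import re
import urllib.request

# Illustrative PII patterns (assumptions; real chat logs need broader coverage).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")


def redact_pii(text: str) -> str:
    """Replace email addresses and phone numbers with a placeholder."""
    return PHONE.sub("[REDACTED]", EMAIL.sub("[REDACTED]", text))


def lambda_handler(event, context):
    import boto3  # imported here so redact_pii stays usable without boto3

    # S3 Object Lambda passes a presigned URL to the original object plus
    # a route/token used to return the transformed content to the caller.
    ctx = event["getObjectContext"]
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    boto3.client("s3").write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redact_pii(original).encode("utf-8"),
    )
    return {"statusCode": 200}
```

Because the redaction happens at read time, the original objects never change and no second copy needs to be kept in sync.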
A company uses AWS Organizations to create dedicated AWS accounts for each business unit so that the company can manage each business unit's account independently upon request. The root email recipient missed a notification that was sent to the root user email address of one account. The company wants to ensure that all future notifications are not missed. Future notifications must be limited to account administrators.
Which solution will meet these requirements?
- A . Configure the company’s email server to forward notification email messages that are sent to the AWS account root user email address to all users in the organization.
- B . Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
- C . Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and forwarding those alerts to the appropriate groups.
- D . Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
B
Explanation:
Use a group email address for the management account's root user.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address
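Option B's second step, configuring alternate contacts programmatically, can be sketched with the Account Management API's `PutAlternateContact` action. The names, phone number, and addresses below are placeholder assumptions.

```python
def alternate_contact_params(account_id: str, email: str) -> dict:
    """Build the request for account:PutAlternateContact (placeholder values)."""
    return {
        "AccountId": account_id,
        "AlternateContactType": "OPERATIONS",
        "EmailAddress": email,
        "Name": "Cloud Operations",
        "PhoneNumber": "+1-555-0100",
        "Title": "Ops Distribution List",
    }


def set_operations_contacts(account_ids, email):
    import boto3  # deferred so the helper above runs without AWS access

    # Must be called from the management account (or a delegated admin)
    # with trusted access enabled for the Account Management service.
    client = boto3.client("account")
    for account_id in account_ids:
        client.put_alternate_contact(**alternate_contact_params(account_id, email))
```

Pointing the OPERATIONS contact at a monitored distribution list keeps alerts limited to administrators without routing them through a single person's inbox.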
A company recently launched a new product that is highly available in one AWS Region. The product consists of an application that runs on Amazon Elastic Container Service (Amazon ECS), a public Application Load Balancer (ALB), and an Amazon DynamoDB table. The company wants a solution that will make the application highly available across Regions.
Which combination of steps will meet these requirements? (Select THREE.)
- A . In a different Region, deploy the application to a new ECS cluster that is accessible through a new ALB.
- B . Create an Amazon Route 53 failover record.
- C . Modify the DynamoDB table to create a DynamoDB global table.
- D . In the same Region, deploy the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is accessible through a new ALB.
- E . Modify the DynamoDB table to create global secondary indexes (GSIs).
- F . Create an AWS PrivateLink endpoint for the application.
A, B, C
Explanation:
To make the application highly available across regions:
Deploy the application in a different Region using a new ECS cluster and ALB to ensure regional redundancy.
Use Route 53 failover routing to automatically direct traffic to the healthy Region in case of failure.
Use DynamoDB global tables to ensure the database is replicated and available across multiple Regions, supporting read and write operations in each Region.
Option D (EKS cluster in the same region): This does not provide regional redundancy.
Option E (Global Secondary Indexes): GSIs improve query performance but do not provide multi-region availability.
Option F (PrivateLink): PrivateLink is for secure communication, not for cross-region high availability.
Reference: DynamoDB Global Tables
Amazon ECS with ALB
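Step C (converting the table to a global table) amounts to adding a replica Region with `UpdateTable`. A minimal sketch, assuming global tables version 2019.11.21 and a placeholder Region name:

```python
def replica_create_update(region: str) -> dict:
    """UpdateTable payload fragment that adds a replica in another Region."""
    return {"ReplicaUpdates": [{"Create": {"RegionName": region}}]}


def add_global_table_replica(table_name: str, region: str):
    import boto3  # deferred so the payload helper runs without AWS access

    # Works with global tables version 2019.11.21; the table should use
    # on-demand capacity or have auto scaling enabled before replicating.
    boto3.client("dynamodb").update_table(
        TableName=table_name, **replica_create_update(region)
    )
```

Once the replica is active, the application stack in the second Region reads and writes locally, and the Route 53 failover record decides which Region receives traffic.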
An ecommerce company hosts an API that handles sales requests. The company hosts the API frontend on Amazon EC2 instances that run behind an Application Load Balancer (ALB). The company hosts the API backend on EC2 instances that perform the transactions. The backend tiers are loosely coupled by an Amazon Simple Queue Service (Amazon SQS) queue.
The company anticipates a significant increase in request volume during a new product launch event.
The company wants to ensure that the API can handle increased loads successfully.
Which solution will meet these requirements?
- A . Double the number of frontend and backend EC2 instances to handle the increased traffic during the product launch event. Create a dead-letter queue to retain unprocessed sales requests when the demand exceeds the system capacity.
- B . Place the frontend EC2 instances into an Auto Scaling group. Create an Auto Scaling policy to launch new instances to handle the incoming network traffic.
- C . Place the frontend EC2 instances into an Auto Scaling group. Add an Amazon ElastiCache cluster in front of the ALB to reduce the amount of traffic the API needs to handle.
- D . Place the frontend and backend EC2 instances into separate Auto Scaling groups. Create a policy for the frontend Auto Scaling group to launch instances based on incoming network traffic. Create a policy for the backend Auto Scaling group to launch instances based on the SQS queue backlog.
D
Explanation:
Scaling the frontend on incoming network traffic and the backend on the SQS queue backlog lets each tier scale independently with actual demand. Doubling the instance counts manually (option A) does not adapt to load, option B scales only the frontend, and ElastiCache (option C) cannot sit in front of an ALB and would not reduce transactional sales requests.
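The backend policy described in option D is typically a target-tracking policy on a backlog-per-instance metric: divide `ApproximateNumberOfMessagesVisible` from the queue by the number of in-service instances, publish the result as a custom CloudWatch metric, and track a target. A minimal sketch of the arithmetic, with the target value as an assumption:

```python
import math


def backlog_per_instance(visible_messages: int, running_instances: int) -> float:
    """ApproximateNumberOfMessagesVisible divided by in-service instances."""
    return visible_messages / max(running_instances, 1)


def desired_capacity(visible_messages: int, target_per_instance: int) -> int:
    """Instances needed to keep the backlog at the per-instance target."""
    return math.ceil(visible_messages / target_per_instance)
```

For example, with 1,200 visible messages and a target of 100 messages per instance, the backend group scales toward 12 instances; the frontend group scales on its own network-traffic metric.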
A company has a dynamic web application hosted on two Amazon EC2 instances. The company has its own SSL certificate, which is installed on each instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL encryption and decryption is causing the compute capacity of the web servers to reach their maximum limit.
What should a solutions architect do to increase the application’s performance?
- A . Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance.
- B . Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL termination.
- C . Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the existing EC2 instances.
- D . Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.
D
Explanation:
https://aws.amazon.com/certificate-manager/:
"With AWS Certificate Manager, you can quickly request a certificate, deploy it on ACM-integrated AWS resources, such as Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway, and let AWS Certificate Manager handle certificate renewals. It also enables you to create private certificates for your internal resources and manage the certificate lifecycle centrally."