Practice Free SAA-C03 Exam Online Questions
A media company hosts a web application on AWS for uploading videos. Only authenticated users should be able to upload, and only within a specified time frame after authentication.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure the application to generate IAM temporary security credentials for authenticated users.
- B . Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.
- C . Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.
- D . Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
B
Explanation:
Option B: Pre-signed URLs provide temporary, authenticated access to S3, limiting uploads to the time frame specified. This solution is lightweight, efficient, and easy to implement.
Option A requires the management of IAM temporary credentials, adding complexity.
Option C involves unnecessary development effort.
Option D introduces more complexity with STS and roles than pre-signed URLs.
A company has a three-tier web application. An Application Load Balancer (ALB) is in front of Amazon EC2 instances that are in the ALB target group. An Amazon S3 bucket stores documents.
The company requires the application to meet a recovery time objective (RTO) of 60 seconds.
Which solution will meet this requirement?
- A . Replicate S3 objects to a second AWS Region. Create a second ALB and a minimum set of EC2 instances in the second Region. Ensure that the EC2 instances are shut down until they are needed. Configure Amazon Route 53 to fail over to the second Region by using an IP-based routing policy.
- B . Use AWS Backup to take hourly backups of the EC2 instances. Back up the S3 data to a second AWS Region. Use AWS CloudFormation to deploy the entire infrastructure in the second Region when needed.
- C . Create daily snapshots of the EC2 instances in a second AWS Region. Use the snapshots to recreate the instances in the second Region. Back up the S3 data to the second Region. Perform a failover by modifying the application DNS record when needed.
- D . Replicate S3 objects to a second AWS Region. Create a second ALB and a minimum set of EC2 instances in the second Region. Ensure that the EC2 instances in the second Region are running. Configure Amazon Route 53 to fail over to the secondary Region based on health checks.
D
Explanation:
To achieve a 60-second RTO, pre-warming the DR environment (including running EC2 instances and Route 53 health checks) is essential. Active/passive failover using Route 53 with health checks ensures fast redirection when the primary Region becomes unavailable. S3 cross-region replication ensures document availability.
Reference: AWS Disaster Recovery - Active-Passive Strategy with Route 53 and Health Checks
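The active-passive failover setup in option D boils down to a pair of Route 53 record sets. The sketch below builds the ChangeBatch payload that `route53.change_resource_record_sets` would accept; the domain, ALB DNS names, and health check ID are illustrative.

```python
# Sketch: ChangeBatch for an active-passive Route 53 failover pair. The
# PRIMARY record carries a health check; when it fails, Route 53 answers
# queries with the SECONDARY record instead.

def failover_change_batch(name, primary_alb, secondary_alb, health_check_id):
    def record(role, alb_dns, extra=None):
        rec = {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": role.lower(),
            "Failover": role,                    # "PRIMARY" or "SECONDARY"
            "TTL": 60,                           # low TTL keeps failover fast
            "ResourceRecords": [{"Value": alb_dns}],
        }
        if extra:
            rec.update(extra)
        return rec

    return {
        "Changes": [
            {"Action": "UPSERT",
             "ResourceRecordSet": record("PRIMARY", primary_alb,
                                         {"HealthCheckId": health_check_id})},
            {"Action": "UPSERT",
             "ResourceRecordSet": record("SECONDARY", secondary_alb)},
        ]
    }

batch = failover_change_batch(
    "app.example.com.",
    "primary-alb-123.us-east-1.elb.amazonaws.com",
    "secondary-alb-456.us-west-2.elb.amazonaws.com",
    "hc-1234",
)
```

Because the secondary Region's instances are already running, the only delay at failover time is health-check detection plus DNS propagation, which is what makes the 60-second RTO achievable.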
A company uses AWS Lake Formation to govern its S3 data lake. It wants to visualize data in QuickSight by joining S3 data with Aurora MySQL operational data. The marketing team must see only specific columns.
Which solution provides column-level authorization with the least operational overhead?
- A . Use EMR to ingest database data into SPICE with only required columns.
- B . Use AWS Glue Studio to ingest database data into S3 and use IAM policies for column control.
- C . Use AWS Glue Elastic Views to create materialized S3 views with column restrictions.
- D . Use a Lake Formation blueprint to ingest database data to S3. Use Lake Formation for column-level access control. Use Athena as the QuickSight data source.
D
Explanation:
AWS Lake Formation provides fine-grained (column-level) access control for data stored in S3. Using a Lake Formation blueprint ensures database ingestion is automated and governed.
QuickSight can query Athena, and Athena honors Lake Formation permissions, enforcing column-level controls automatically.
Options A, B, and C rely on manual filtering or IAM policies, which cannot enforce column-level authorization for SQL queries.
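Lake Formation's column-level grant can be expressed as a `TableWithColumns` resource. The sketch below builds the request body that `lakeformation.grant_permissions` would take; the database, table, column, and role names are illustrative.

```python
# Sketch: grant the marketing team SELECT on only two columns of a table.
# Athena (and therefore QuickSight) will then only return these columns to
# principals assuming this role.

def column_level_grant(principal_arn, database, table, columns):
    return {
        "Principal": {"DataLakePrincipalIdentifier": principal_arn},
        "Resource": {
            "TableWithColumns": {
                "DatabaseName": database,
                "Name": table,
                "ColumnNames": columns,   # only these columns are visible
            }
        },
        "Permissions": ["SELECT"],
    }

grant = column_level_grant(
    "arn:aws:iam::111122223333:role/marketing-analysts",
    "sales_db",
    "orders",
    ["order_id", "campaign"],
)
```

Because enforcement happens inside Lake Formation, no query rewriting or per-dataset IAM policy maintenance is needed, which is the "least operational overhead" aspect of option D.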
A global company is migrating its workloads from an on-premises data center to AWS. The AWS environment includes multiple AWS accounts, IAM roles, AWS Config rules, and a VPC.
The company wants an automated process to provision new accounts on demand when the company’s business units require new accounts.
Which solution will meet these requirements with the LEAST effort?
- A . Use AWS Control Tower to set up an organization in AWS Organizations. Use AWS Control Tower Account Factory for Terraform (AFT) to provision new AWS accounts.
- B . Create an organization in AWS Organizations. Use the AWS CLI CreateAccount API action to provision new AWS accounts. Organize the business units with organizational units (OUs).
- C . Create an AWS Lambda function that uses the AWS Organizations API to create new accounts. Invoke the Lambda function from an AWS CloudFormation template in AWS Service Catalog.
- D . Create an organization in AWS Organizations. Use AWS Step Functions to orchestrate the account creation process. Send account creation requests to an Amazon API Gateway API endpoint to invoke an AWS Lambda function that creates new accounts.
A
Explanation:
AWS Control Tower with Account Factory for Terraform (AFT) provides a managed, automated account-vending pipeline, so business units can request new accounts on demand with guardrails, IAM roles, and AWS Config rules applied automatically. Options B, C, and D all require building and maintaining custom provisioning logic, which is more effort.
A company wants to restrict access to the content of its web application. The company needs to protect the content by using authorization techniques that are available on AWS. The company also wants to implement a serverless architecture for authorization and authentication that has low login latency.
The solution must integrate with the web application and serve web content globally. The application currently has a small user base, but the company expects the application’s user base to increase.
Which solution will meet these requirements?
- A . Configure Amazon Cognito for authentication. Implement Lambda@Edge for authorization. Configure Amazon CloudFront to serve the web application globally.
- B . Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement AWS Lambda for authorization. Use an Application Load Balancer to serve the web application globally.
- C . Configure Amazon Cognito for authentication. Implement AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web application globally.
- D . Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement Lambda@Edge for authorization. Use AWS Elastic Beanstalk to serve the web application globally.
A
Explanation:
Amazon Cognito provides scalable, serverless authentication, and Lambda@Edge is used for authorization, providing low-latency access control at the edge. Amazon CloudFront serves the web application globally with reduced latency and ensures secure access for users around the world. This solution minimizes operational overhead while providing scalability and security.
Option B (Directory Service): Directory Service is more suitable for enterprise use cases involving Active Directory, not for web-based applications.
Option C (S3 Transfer Acceleration): S3 Transfer Acceleration helps with file transfers but does not provide authorization features.
Option D (Elastic Beanstalk): Elastic Beanstalk adds unnecessary overhead when CloudFront can handle global delivery efficiently.
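A Lambda@Edge viewer-request function is just a handler over the CloudFront event shape. The minimal sketch below rejects requests that lack an auth cookie; a real deployment would validate a Cognito-issued JWT, and the cookie name here is illustrative.

```python
# Sketch: Lambda@Edge viewer-request handler. Returning a response object
# short-circuits CloudFront before the origin fetch; returning the request
# lets it continue to the origin.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    cookies = request.get("headers", {}).get("cookie", [])
    authenticated = any("session-token=" in c.get("value", "") for c in cookies)
    if not authenticated:
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "body": "Authentication required",
        }
    return request  # pass the request through to the origin

event = {"Records": [{"cf": {"request": {"uri": "/index.html", "headers": {}}}}]}
denied = handler(event, None)
```

Because the function runs at CloudFront edge locations, the authorization check adds minimal latency for users anywhere in the world, which is the point of pairing Lambda@Edge with CloudFront in option A.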
Reference: Amazon Cognito, Lambda@Edge
A company needs a solution to process customer orders from a global ecommerce platform. The solution must automatically start processing new orders immediately and must maintain a history of all order processing attempts.
Which solution will meet these requirements in the MOST cost-effective way?
- A . Create an Amazon EventBridge rule that invokes an AWS Lambda function once every minute to check for new orders. Configure the Lambda function to process orders and store results in Amazon Aurora.
- B . Create an Amazon EventBridge event pattern that monitors the ecommerce platform’s order events. Configure an EventBridge rule to invoke an AWS Lambda function when the platform receives a new order. Configure the function to store the results in Amazon DynamoDB.
- C . Use an Amazon EC2 instance to poll the ecommerce platform for new orders. Configure the instance to invoke an AWS Lambda function to process new orders. Configure the function to log results to Amazon CloudWatch.
- D . Use an Amazon SQS queue to invoke an AWS Lambda function when the platform receives a new order. Configure the function to process batches of orders and to store results in an Amazon EFS file system.
B
Explanation:
The correct answer is B because the company needs order processing to begin immediately when new orders arrive and also needs a history of all order processing attempts in the most cost-effective way. Amazon EventBridge is a serverless event bus that can respond to events as they occur, which makes it a strong fit for near real-time order processing. By using an EventBridge event pattern and a rule that invokes AWS Lambda when a new order event is received, the solution avoids polling and processes orders only when there is actual work to perform.
This event-driven design is more cost-effective than running scheduled checks or maintaining EC2 instances. AWS Lambda provides fully managed compute with automatic scaling and charges based on actual execution time. Amazon DynamoDB is a highly scalable and fully managed NoSQL database that is well suited for storing order-processing results and maintaining a durable history of processing attempts with low operational overhead.
Option A is incorrect because invoking Lambda every minute to check for new orders is a polling model, which introduces delay and unnecessary invocations.
Option C is incorrect because using an EC2 instance to poll for events creates higher operational cost and overhead.
Option D is less appropriate because Amazon EFS is not the best storage choice for durable event history or structured tracking of order-processing attempts.
AWS architectural guidance favors event-driven, serverless patterns for asynchronous order processing when immediate response, scalability, and low cost are required. Therefore, EventBridge + Lambda + DynamoDB is the best solution.
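The EventBridge + Lambda + DynamoDB design can be sketched in two pieces: the event pattern the rule matches, and the item shape the Lambda function writes for each processing attempt. The source string, detail-type, and attribute names below are illustrative.

```python
# Sketch: an EventBridge event pattern for new-order events, plus the DynamoDB
# item a Lambda handler might write per processing attempt. In a real handler,
# build_attempt_item's result would be passed to table.put_item(Item=...).
import json
from datetime import datetime, timezone

ORDER_EVENT_PATTERN = {
    "source": ["ecommerce.platform"],
    "detail-type": ["OrderPlaced"],
}

def build_attempt_item(event):
    """Turn an incoming order event into a DynamoDB item recording the attempt."""
    detail = event["detail"]
    return {
        "order_id": detail["orderId"],                         # partition key
        "attempt_at": datetime.now(timezone.utc).isoformat(),  # sort key
        "status": "PROCESSED",
        "payload": json.dumps(detail),
    }

item = build_attempt_item(
    {"source": "ecommerce.platform", "detail-type": "OrderPlaced",
     "detail": {"orderId": "ord-42", "total": 19.99}}
)
```

Keying the table on `order_id` plus an attempt timestamp means every processing attempt, including retries, is preserved as its own item, satisfying the history requirement without any polling infrastructure.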
An ecommerce company wants to collect user clickstream data from the company’s website for real-time analysis. The website experiences fluctuating traffic patterns throughout the day. The company needs a scalable solution that can adapt to varying levels of traffic.
Which solution will meet these requirements?
- A . Use a data stream in Amazon Kinesis Data Streams in on-demand mode to capture the clickstream data. Use AWS Lambda to process the data in real time.
- B . Use Amazon Data Firehose to capture the clickstream data. Use AWS Glue to process the data in real time.
- C . Use Amazon Kinesis Video Streams to capture the clickstream data. Use AWS Glue to process the data in real time.
- D . Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to capture the clickstream data. Use AWS Lambda to process the data in real time.
A
Explanation:
Amazon Kinesis Data Streams in on-demand mode is purpose-built for real-time ingestion of streaming data and scales automatically with traffic, which is ideal for fluctuating workloads like clickstream data.
Combined with AWS Lambda, which scales concurrently based on incoming events, this setup ensures efficient, real-time processing with minimal configuration and cost overhead.
Option B involves Firehose, which is not optimal for low-latency processing.
Option C is incorrect because Kinesis Video Streams is designed for video data, not clickstream data.
Option D introduces Flink, which adds unnecessary complexity when Lambda suffices.
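The producer side of option A amounts to batching click events into the entry format that `kinesis.put_records` expects. The field names below are illustrative; partitioning by user ID keeps each user's clicks ordered within a shard.

```python
# Sketch: convert clickstream events into PutRecords entries. With the stream
# in on-demand mode, no shard capacity planning is needed as traffic
# fluctuates; in a real producer, the result would be passed to
# kinesis.put_records(StreamName=..., Records=entries).
import json

def to_put_records_entries(events):
    return [
        {
            "Data": json.dumps(e).encode("utf-8"),
            "PartitionKey": e["user_id"],  # same user -> same shard -> ordered
        }
        for e in events
    ]

entries = to_put_records_entries([
    {"user_id": "u1", "page": "/home", "ts": 1700000000},
    {"user_id": "u2", "page": "/cart", "ts": 1700000001},
])
```

On the consumer side, Lambda's Kinesis event source mapping scales per shard automatically, so both ends of the pipeline adapt to traffic without manual tuning.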
A company is designing a new multi-tier web application that consists of the following components:
• Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
• An Amazon RDS DB instance for data storage
A solutions architect needs to limit access to the application servers so that only the web servers can access them.
Which solution will meet these requirements?
- A . Deploy AWS PrivateLink in front of the application servers. Configure the network ACL to allow only the web servers to access the application servers.
- B . Deploy a VPC endpoint in front of the application servers. Configure the security group to allow only the web servers to access the application servers.
- C . Deploy a Network Load Balancer with a target group that contains the application servers’ Auto Scaling group. Configure the network ACL to allow only the web servers to access the application servers.
- D . Deploy an Application Load Balancer with a target group that contains the application servers’ Auto Scaling group. Configure the security group to allow only the web servers to access the application servers.
D
Explanation:
Application Load Balancer (ALB): ALB is suitable for routing HTTP/HTTPS traffic to the application servers. It provides advanced routing features and integrates well with Auto Scaling groups.
Target Group Configuration:
Create a target group for the application servers and register the Auto Scaling group with this target group.
Configure the ALB to forward requests from the web servers to the application servers.
Security Group Setup:
Configure the security group of the application servers to only allow traffic from the web servers’ security group.
This ensures that only the web servers can access the application servers, meeting the requirement to limit access.
Benefits:
Security: Using security groups to restrict access ensures a secure environment where only intended traffic is allowed.
Scalability: ALB works seamlessly with Auto Scaling groups, ensuring the application can handle varying loads efficiently.
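The key security-group rule references the web tier's security group rather than a CIDR range. The sketch below builds the IpPermissions payload that `ec2.authorize_security_group_ingress` would take; the group ID and port are illustrative.

```python
# Sketch: allow the application tier to accept TCP traffic only from members
# of the web tier's security group. Referencing a security group (not a CIDR)
# means group membership, not IP address, decides who can connect, so Auto
# Scaling can add or remove web servers freely.

def allow_from_sg(source_sg_id, port):
    return [{
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": source_sg_id}],
    }]

perms = allow_from_sg("sg-0123456789abcdef0", 8080)
```

This rule would be attached to the application servers' security group with the web servers' group ID as the source, which is exactly the restriction option D describes.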
Reference: Application Load Balancer
Security Groups for Your VPC
