Practice Free SAA-C03 Exam Online Questions
A company uses Amazon API Gateway to manage its REST APIs that third-party service providers access. The company must protect the REST APIs from SQL injection and cross-site scripting attacks.
What is the MOST operationally efficient solution that meets these requirements?
- A . Configure AWS Shield.
- B . Configure AWS WAF.
- C . Set up API Gateway with an Amazon CloudFront distribution. Configure AWS Shield in CloudFront.
- D . Set up API Gateway with an Amazon CloudFront distribution. Configure AWS WAF in CloudFront.
D
Explanation:
Amazon API Gateway with CloudFront: API Gateway allows you to create, deploy, and manage APIs, while CloudFront provides a CDN to deliver content with low latency and high transfer speeds.
AWS WAF (Web Application Firewall):
AWS WAF can be configured in CloudFront to protect against common web exploits, including SQL injection and cross-site scripting (XSS).
WAF allows you to create custom rules to block specific attack patterns and can be managed centrally.
Configuration:
Deploy your APIs using Amazon API Gateway.
Set up an Amazon CloudFront distribution in front of the API Gateway.
Configure AWS WAF on the CloudFront distribution to apply security rules.
Operational Efficiency: This solution provides robust protection with minimal operational overhead by leveraging managed AWS services, ensuring that your APIs are secure without extensive custom implementation.
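As a minimal boto3 sketch of step 3, assuming hypothetical resource names: the call below creates a web ACL in the CLOUDFRONT scope with the AWS managed SQL injection rule group and the common rule group (which includes XSS protections). The returned ARN is then attached to the distribution's web ACL setting.

```python
import boto3

# Hypothetical names throughout; a web ACL for CloudFront must be created
# in the us-east-1 Region with Scope="CLOUDFRONT".
wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.create_web_acl(
    Name="api-protection-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "sqli-protection",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "SqliRuleSet",
            },
        },
        {
            # The common rule set includes cross-site scripting protections.
            "Name": "common-protection",
            "Priority": 1,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "CommonRuleSet",
            },
        },
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ApiProtectionAcl",
    },
)

# Attach the returned ARN to the CloudFront distribution's web ACL setting
# (for example, via update_distribution or the console).
print(response["Summary"]["ARN"])
```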
Reference: Using AWS WAF to Protect Your APIs
How CloudFront Works with AWS WAF
A company has an application that runs on Amazon EC2 instances in an Auto Scaling group. The application uses hardcoded credentials to access an Amazon RDS database.
To comply with new regulations, the company needs to automatically rotate the database password for the application service account every 90 days.
Which solution will meet these requirements?
- A . Create an AWS Lambda function to generate new passwords and upload them to EC2 instances by using SSH.
- B . Create a secret for the database credentials in AWS Secrets Manager. Enable rotation every 90 days. Modify the application to retrieve credentials from Secrets Manager.
- C . Create an Amazon ECS task to rotate passwords and upload them to EC2 instances.
- D . Create a new EC2 instance that runs a cron job to rotate passwords.
B
Explanation:
Hardcoded database credentials create security risks and operational challenges, especially when compliance requires regular credential rotation. AWS Secrets Manager is the AWS-recommended service for securely storing, managing, and rotating secrets such as database credentials.
Option B is the correct solution because Secrets Manager natively supports automated credential rotation using AWS-managed or custom Lambda functions. By enabling rotation on a 90-day schedule, Secrets Manager automatically updates the credentials in the RDS database and stores the new values securely. The application retrieves credentials dynamically at runtime, eliminating the need for storing passwords on EC2 instances.
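As a rough illustration of this flow, the boto3 sketch below enables a 90-day rotation schedule on an existing secret and shows how the application could fetch credentials at runtime. The secret name and rotation Lambda ARN are placeholders; for RDS, AWS provides rotation function templates.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Enable automatic rotation every 90 days (placeholder secret name and
# rotation Lambda ARN).
secrets.rotate_secret(
    SecretId="prod/app/rds-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-rotation",
    RotationRules={"AutomaticallyAfterDays": 90},
)

# Application side: fetch credentials at runtime instead of hardcoding them.
def get_db_credentials():
    value = secrets.get_secret_value(SecretId="prod/app/rds-credentials")
    secret = json.loads(value["SecretString"])
    return secret["username"], secret["password"]
```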
Options A, C, and D all rely on custom scripts, SSH access, and manual distribution of secrets, which significantly increases operational overhead, security risk, and failure potential. These approaches also violate AWS best practices by spreading sensitive credentials across multiple hosts.
Secrets Manager integrates with IAM for fine-grained access control, supports auditing through AWS CloudTrail, and improves overall security posture while reducing operational complexity.
Therefore, B best meets the requirements in a secure, scalable, and compliant manner.
A company wants DevOps teams to create IAM roles, but no role may have administrative permissions.
Which solution will meet these requirements?
- A . Use SCPs to deny Administrator Access policy usage.
- B . Use SCPs to require a permissions boundary when creating IAM roles.
- C . Allow all permissions and auto-delete noncompliant roles.
- D . Attach restrictive permissions boundaries directly to IAM users.
B
Explanation:
Using Service Control Policies (SCPs) to enforce permissions boundaries ensures that all created roles are constrained, regardless of attached policies. This is the most secure, scalable, and AWS-recommended preventive control.
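A hedged sketch of such an SCP follows; the boundary policy ARN and names are placeholders. The policy denies iam:CreateRole unless the request attaches the required permissions boundary.

```python
import json
import boto3

# Placeholder account ID and boundary policy name.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBoundaryOnRoleCreation",
            "Effect": "Deny",
            "Action": ["iam:CreateRole"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "iam:PermissionsBoundary":
                        "arn:aws:iam::123456789012:policy/DevOpsBoundary"
                }
            },
        }
    ],
}

orgs = boto3.client("organizations")
orgs.create_policy(
    Content=json.dumps(scp),
    Description="Require a permissions boundary when creating IAM roles",
    Name="RequirePermissionsBoundary",
    Type="SERVICE_CONTROL_POLICY",
)
```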
An ecommerce company hosts an API that handles sales requests. The company hosts the API frontend on Amazon EC2 instances that run behind an Application Load Balancer (ALB). The company hosts the API backend on EC2 instances that perform the transactions. The two tiers are loosely coupled by an Amazon Simple Queue Service (Amazon SQS) queue.
The company anticipates a significant increase in request volume during a new product launch event.
The company wants to ensure that the API can handle increased loads successfully.
Which solution will meet these requirements?
- A . Double the number of frontend and backend EC2 instances to handle the increased traffic during the product launch event. Create a dead-letter queue to retain unprocessed sales requests when the demand exceeds the system capacity.
- B . Place the frontend EC2 instances into an Auto Scaling group. Create an Auto Scaling policy to launch new instances to handle the incoming network traffic.
- C . Place the frontend EC2 instances into an Auto Scaling group. Add an Amazon ElastiCache cluster in front of the ALB to reduce the amount of traffic the API needs to handle.
- D . Place the frontend and backend EC2 instances into separate Auto Scaling groups. Create a policy for the frontend Auto Scaling group to launch instances based on incoming network traffic. Create a policy for the backend Auto Scaling group to launch instances based on the SQS queue backlog.
D
Explanation:
To handle increased loads effectively, it’s essential to implement Auto Scaling for both frontend and backend tiers:
Frontend Auto Scaling Group: Scaling based on incoming network traffic ensures that the application can handle increased user requests.
Backend Auto Scaling Group: Scaling based on the Amazon SQS queue backlog ensures that the backend can process messages as they arrive, preventing delays.
This approach allows each tier to scale independently based on its specific load, ensuring optimal resource utilization and performance.
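As an illustrative sketch of the backend policy, the boto3 call below configures target tracking on the queue's visible-message count. The group, policy, and queue names are placeholders; for production, AWS documentation recommends a custom backlog-per-instance metric.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Simplified: scale the backend group to keep roughly 100 visible messages
# in the queue (placeholder names and target value).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="backend-asg",
    PolicyName="scale-on-sqs-backlog",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "sales-orders"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,
    },
)
```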
Reference: Tutorial: Set up a scaled and load-balanced application; Scaling policy based on Amazon SQS (AWS Documentation)
A solutions architect has an application container, an AWS Lambda function, and an Amazon Simple Queue Service (Amazon SQS) queue. The Lambda function uses the SQS queue as an event source. The Lambda function makes a call to a third-party machine learning (ML) API when the function is invoked. The response from the third-party API can take up to 60 seconds to return.
The Lambda function’s timeout value is currently 65 seconds. The solutions architect has noticed that the Lambda function sometimes processes duplicate messages from the SQS queue.
What should the solutions architect do to ensure that the Lambda function does not process duplicate messages?
- A . Configure the Lambda function with a larger amount of memory.
- B . Configure an increase in the Lambda function’s timeout value.
- C . Configure the SQS queue’s delivery delay value to be greater than the maximum time it takes to call the third-party API.
- D . Configure the SQS queue’s visibility timeout value to be greater than the maximum time it takes to call the third-party API.
D
Explanation:
When using an SQS queue as an event source for AWS Lambda, the visibility timeout of the SQS queue plays a critical role in preventing duplicate message processing.
"If your Lambda function doesn’t process the message and delete it from the queue within the visibility timeout period, the message becomes visible again and can be processed again by the same or another function instance."
― AWS Lambda with SQS
In this scenario, the third-party API may take up to 60 seconds to respond. Since the Lambda function is configured with a 65-second timeout, the visibility timeout of the queue must be greater than or equal to the maximum function execution time to avoid the same message being reprocessed.
Incorrect options:
A: Memory allocation does not affect duplicate message handling.
B: The timeout is already sufficient; increasing it further does not solve the core issue.
C: Delivery delay only postpones when a message first becomes available; it does not affect reprocessing.
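A minimal sketch of the fix, with a placeholder queue URL: raise the queue's visibility timeout well above the function's 65-second timeout. The Lambda documentation recommends setting it to at least six times the function timeout for queues used as event sources.

```python
import boto3

sqs = boto3.client("sqs")

# With a 65-second function timeout, 6 x 65 = 390 seconds is a safe value.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/ml-requests",
    Attributes={"VisibilityTimeout": "390"},  # seconds, passed as a string
)
```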
Reference: AWS Lambda and Amazon SQS Developer Guide
SQS Visibility Timeout Documentation
A company is using an Amazon Redshift cluster to run analytics queries for multiple sales teams. In addition to the typical workload, on the last Monday morning of each month, thousands of users run reports. Users have reported slow response times during the monthly surge.
The company must improve query performance without impacting the availability of the Redshift cluster.
Which solution will meet these requirements?
- A . Resize the Redshift cluster by using the classic resize capability of Amazon Redshift before every monthly surge. Reduce the cluster to its original size after each surge.
- B . Resize the Redshift cluster by using the elastic resize capability of Amazon Redshift before every monthly surge. Reduce the cluster to its original size after each surge.
- C . Enable the concurrency scaling feature for the Redshift cluster for specific workload management (WLM) queues.
- D . Enable Amazon Redshift Spectrum for the Redshift cluster before every monthly surge.
C
Explanation:
Concurrency Scaling allows Amazon Redshift to add transient capacity automatically to handle bursts in concurrent queries. This is ideal for scenarios with predictable surge periods, such as end-of-month reporting.
It improves performance without manual resizing or cluster modification and without affecting availability. Resizing requires manual intervention and potential downtime, and Redshift Spectrum is designed for querying data in S3, not for solving concurrency issues.
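As a rough sketch, concurrency scaling can be enabled per WLM queue through the wlm_json_configuration cluster parameter; the parameter group name and queue definitions below are placeholders.

```python
import json
import boto3

redshift = boto3.client("redshift")

# Placeholder WLM configuration: allow burst capacity for the reporting queue.
wlm_config = [
    {
        "query_group": ["reporting"],
        "query_concurrency": 5,
        "concurrency_scaling": "auto",  # enable concurrency scaling here
    },
    {"query_concurrency": 5},  # default queue
]

redshift.modify_cluster_parameter_group(
    ParameterGroupName="analytics-param-group",
    Parameters=[
        {
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
        }
    ],
)
```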
A company stores sensitive customer data in an Amazon DynamoDB table. The company frequently updates the data. The company wants to use the data to personalize offers for customers.
The company’s analytics team has its own AWS account. The analytics team runs an application on Amazon EC2 instances that needs to process data from the DynamoDB tables. The company needs to follow security best practices to create a process to regularly share data from DynamoDB to the analytics team.
Which solution will meet these requirements?
- A . Export the required data from the DynamoDB table to an Amazon S3 bucket as multiple JSON files.
Provide the analytics team with the necessary IAM permissions to access the S3 bucket.
- B . Allow public access to the DynamoDB table. Create an IAM user that has permission to access DynamoDB. Share the IAM user with the analytics team.
- C . Allow public access to the DynamoDB table. Create an IAM user that has read-only permission for DynamoDB. Share the IAM user with the analytics team.
- D . Create a cross-account IAM role. Create an IAM policy that allows the AWS account ID of the analytics team to access the DynamoDB table. Attach the IAM policy to the IAM role. Establish a trust relationship between accounts.
D
Explanation:
Using cross-account IAM roles is the most secure and scalable way to share data between AWS accounts.
A trust relationship allows the analytics team’s account to assume the role in the main account and access the DynamoDB table directly.
A is feasible but involves data duplication and additional costs for storing the JSON files in S3.
B and C violate security best practices by allowing public access to sensitive data and sharing credentials, which is highly discouraged.
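A minimal sketch of the analytics-account side, assuming a hypothetical role ARN and table name: the application assumes the cross-account role and reads the table with the temporary credentials.

```python
import boto3

# Assume the cross-account role (placeholder ARN) in the data-owning account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/AnalyticsDynamoDBReadRole",
    RoleSessionName="analytics-batch",
)["Credentials"]

# Use the temporary credentials to read the shared table (placeholder name).
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
response = dynamodb.scan(TableName="CustomerOffers", Limit=50)
```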
Reference: Cross-Account Access with Roles (AWS Documentation)
Best Practices for Amazon DynamoDB Security
A company uses Amazon S3 to store customer data that contains personally identifiable information (PII) attributes. The company needs to make the customer information available to company resources through an AWS Glue Catalog. The company needs to have fine-grained access control for the data so that only specific IAM roles can access the PII data.
Which solution will meet these requirements?
- A . Create one IAM policy that grants access to PII. Create a second IAM policy that grants access to non-PII data. Assign the PII policy to the specified IAM roles.
- B . Create one IAM role that grants access to PII. Create a second IAM role that grants access to non-PII data. Assign the PII policy to the specified IAM roles.
- C . Use AWS Lake Formation to provide the specified IAM roles access to the PII data.
- D . Use AWS Glue to create one view for PII data. Create a second view for non-PII data. Provide the specified IAM roles access to the PII view.
C
Explanation:
AWS Lake Formation is designed for managing fine-grained access control to data in an efficient manner (a short grant sketch follows the list below):
Granular Permissions: Lake Formation allows column-level, row-level, and table-level access controls, which can precisely define access to PII data.
Integration with AWS Glue Catalog: Lake Formation natively integrates with AWS Glue for seamless data cataloging and access control.
Operational Efficiency: Centralized access control policies minimize the need for separate IAM roles or policies.
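As the list above notes, a single Lake Formation grant can scope access down to specific columns. The sketch below uses placeholder database, table, column, and role names.

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Grant SELECT on only the PII columns to the approved role (all names
# are placeholders).
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier":
            "arn:aws:iam::123456789012:role/PiiAnalystsRole"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "customers_db",
            "Name": "customer_records",
            "ColumnNames": ["email", "ssn", "phone_number"],
        }
    },
    Permissions=["SELECT"],
)
```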
Why Other Options Are Not Ideal:
Option A: Creating multiple IAM policies introduces complexity and lacks column-level access control. Not efficient.
Option B: Managing multiple IAM roles for granular access is operationally complex. Not efficient.
Option D: Creating views in Glue adds unnecessary complexity and may not provide the level of granularity that Lake Formation offers. Not the best choice.
Reference: AWS Lake Formation (AWS Documentation – Lake Formation)
Fine-Grained Permissions with Lake Formation (AWS Documentation – Fine-Grained Permissions)
An application has performance issues due to increased demand for read-only historical records that are stored in Amazon RDS and accessed through custom queries. The company wants to improve performance without changing the database structure and with minimal management overhead.
Which approach meets the requirement?
- A . Deploy DynamoDB and move all data.
- B . Deploy Amazon ElastiCache (Redis OSS) and cache application data.
- C . Deploy Memcached on EC2 and cache data.
- D . Deploy DynamoDB Accelerator (DAX) on Amazon RDS.
B
Explanation:
Amazon ElastiCache (Redis OSS) provides an in-memory cache that drastically improves read performance with minimal operational overhead.
It integrates easily with RDS and does not require schema or database changes.
EC2-based caching (Option C) increases management overhead.
DAX (Option D) accelerates DynamoDB only, not RDS.
Moving to DynamoDB (Option A) requires complete application and schema redesign.
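A hedged cache-aside sketch of this pattern, with placeholder endpoints, credentials, and query: the application checks Redis first and falls back to the RDS query, caching the result with a TTL.

```python
import json
import redis
import pymysql

# Placeholder ElastiCache endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_history(customer_id: str) -> list:
    key = f"history:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely

    # Cache miss: run the custom query against RDS (placeholder connection).
    conn = pymysql.connect(
        host="mydb.abc123.us-east-1.rds.amazonaws.com",
        user="app", password="placeholder", database="sales",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT order_id, total FROM orders WHERE customer_id = %s",
                (customer_id,),
            )
            rows = cur.fetchall()
    finally:
        conn.close()

    # Historical records rarely change, so a long TTL is reasonable.
    cache.set(key, json.dumps(rows), ex=3600)
    return list(rows)
```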
A media publishing company is building an application on AWS to give users the ability to print their own books. The application frontend runs on a Docker container.
The amount of incoming orders varies significantly. The incoming orders can temporarily exceed the throughput of the company’s book printing machines. Order-processing payloads are up to 4 MB in size.
The company needs to develop a solution that can scale to handle incoming orders.
Which solution will meet this requirement?
- A . Use Amazon Simple Queue Service (Amazon SQS) to queue incoming orders. Create an AWS Lambda@Edge function to process orders. Deploy the frontend application on Amazon Elastic Kubernetes Service (Amazon EKS).
- B . Use Amazon Simple Notification Service (Amazon SNS) to queue incoming orders. Create an AWS Lambda function to process orders. Deploy the frontend application on AWS Fargate.
- C . Use Amazon Simple Queue Service (Amazon SQS) to queue incoming orders. Create an AWS Lambda function to process orders. Deploy the frontend application on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type.
- D . Use Amazon Simple Notification Service (Amazon SNS) to queue incoming orders. Create an AWS Lambda@Edge function to process orders. Deploy the frontend application on Amazon EC2 instances.
C
Explanation:
The workload is bursty, and the system must be able to buffer more orders than the printing machines can process. The best practice for this is a durable, scalable queue: Amazon SQS.
SQS decouples the frontend from the backend and can handle large, spiky volumes of messages.
AWS Lambda is a good fit for processing messages from SQS because it scales automatically with queue depth and requires no server management.
Deploying the frontend in Docker on Amazon ECS with Fargate provides a fully managed, auto-scaling container environment without managing EC2 instances.
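For illustration, a minimal order-processing Lambda handler for SQS batches might look like the sketch below; the payload fields are placeholders.

```python
import json

def handler(event, context):
    """Process a batch of order messages delivered by the SQS event source."""
    for record in event["Records"]:
        order = json.loads(record["body"])
        # Placeholder for the real work: submit the order to a printing machine.
        print(f"Processing order {order.get('order_id')} "
              f"({len(record['body'])} bytes)")
    # Raising an exception here would return the batch to the queue for retry.
    return {"processed": len(event["Records"])}
```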
Why others are not correct:
A and D: SNS is a pub/sub service, not a queue; it does not provide durable back-pressure buffering when the printers are slower than incoming requests. Lambda@Edge is for CloudFront edge processing, not back-end order processing.
B: The same SNS issue applies: no durable queue semantics, and it is not ideal for a workload that must be buffered and retried.
