Practice Free SAA-C03 Exam Online Questions
A company uses an Amazon EC2 Auto Scaling group to host an API. The EC2 instances are in a target group that is associated with an Application Load Balancer (ALB). The company stores data in an Amazon Aurora PostgreSQL database.
The API has a weekly maintenance window. The company must ensure that the API returns a static maintenance response during the weekly maintenance window.
Which solution will meet this requirement with the LEAST operational overhead?
- A . Create a table in Aurora PostgreSQL that has fields to contain keys and values. Create a key for a maintenance flag. Set the flag when the maintenance window starts. Configure the API to query the table for the maintenance flag and to return a maintenance response if the flag is set. Reset the flag when the maintenance window is finished.
- B . Create an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the EC2 instances to the queue. Publish a message to the queue when the maintenance window starts. Configure the API to return a maintenance message if the instances receive a maintenance start message from the queue. Publish another message to the queue when the maintenance window is finished to restore normal operation.
- C . Create a listener rule on the ALB to return a maintenance response when the path on a request matches a wildcard. Set the rule priority to one. Perform the maintenance. When the maintenance window is finished, delete the listener rule.
- D . Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the EC2 instances to the topic. Publish a message to the topic when the maintenance window starts. Configure the API to return a maintenance response if the instances receive the maintenance start message from the topic. Publish another message to the topic when the maintenance window finishes to restore normal operation.
C
Explanation:
Creating a listener rule on the Application Load Balancer (ALB) to return a maintenance response during the maintenance window is the most straightforward solution with the least operational overhead. The rule can be configured to match all incoming requests and return a custom response, and it can be easily removed once maintenance is complete.
Option A (Aurora table flag): This adds unnecessary complexity for a temporary maintenance response.
Option B and D (SQS or SNS): These options introduce more components than needed for a simple maintenance message.
AWS Reference: ALB Listener Rules
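For illustration, the following is a minimal boto3 sketch of option C, assuming a hypothetical listener ARN. It adds a priority-1 wildcard rule that returns a fixed 503 maintenance response at the start of the window and deletes the rule when maintenance finishes.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical listener ARN for the ALB that fronts the API.
LISTENER_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
    "listener/app/api-alb/0123456789abcdef/0123456789abcdef"
)

# Maintenance window start: add a priority-1 rule that matches every path
# and returns a static 503 maintenance response instead of forwarding to the API.
rule = elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=1,
    Conditions=[{"Field": "path-pattern", "Values": ["*"]}],
    Actions=[
        {
            "Type": "fixed-response",
            "FixedResponseConfig": {
                "StatusCode": "503",
                "ContentType": "application/json",
                "MessageBody": '{"message": "The API is under maintenance"}',
            },
        }
    ],
)

# Maintenance window end: delete the rule so traffic flows to the target group again.
elbv2.delete_rule(RuleArn=rule["Rules"][0]["RuleArn"])
```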
A company uses an organization in AWS Organizations to manage a multi-account landing zone. The company requires all users who access AWS accounts in the organization to use a centralized identity system that follows the principle of least privilege for operational tasks. The company currently uses an external identity provider (IdP).
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Use AWS Identity and Access Management (IAM) to create IAM users and IAM user groups in each AWS account.
- B . Create permission sets in AWS IAM Identity Center. Assign the appropriate permission sets to the IAM users and IAM user groups in the accounts.
- C . Assign each IAM user to an IAM role by using an inline IAM policy based on operational duties. Assign each role to the appropriate AWS account in the organization.
- D . Configure a SAML identity provider in AWS Identity and Access Management (IAM) in each AWS account to establish a trust relationship with the company’s external IdP.
- E . Enable AWS IAM Identity Center in the organization management account. Create user accounts and user groups.
B,E
Explanation:
AWS recommends using AWS IAM Identity Center (formerly AWS SSO) for centralized authentication and access control across multiple accounts in an AWS Organization, especially when integrating with an external IdP.
From AWS Documentation:
“Use IAM Identity Center to provide centralized access to multiple AWS accounts or applications. You can integrate with an external IdP via SAML 2.0. Assign users permissions through permission sets that define the roles users can assume.”
(Source: AWS IAM Identity Center User Guide)
Why B and E are correct:
E enables centralized identity federation using IAM Identity Center with your external IdP.
B uses permission sets to apply least-privilege access roles to users and groups across accounts, in alignment with the principle of least privilege.
Why others are incorrect:
Option A: IAM users in each account break the centralized access model and are hard to manage at scale.
Option C: Managing individual IAM roles and inline policies across accounts is not scalable.
Option D: Per-account SAML providers are redundant when using IAM Identity Center, which provides centralized federation.
Reference: AWS IAM Identity Center User Guide
AWS Well-Architected Framework, Security Pillar
AWS Organizations and Identity Center Integration Docs
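As a rough sketch of the B and E workflow, the following boto3 calls create a permission set, attach a managed policy, and assign the set to a group in a member account. The instance ARN, account ID, and group ID are hypothetical placeholders.

```python
import boto3

sso_admin = boto3.client("sso-admin")

# Hypothetical identifiers for the Identity Center instance, target account, and group.
INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"
ACCOUNT_ID = "111122223333"
GROUP_ID = "example-group-id"

# Create a least-privilege permission set for an operational role.
permission_set = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="ReadOnlyOperations",
    Description="Read-only operational access",
    SessionDuration="PT4H",
)["PermissionSet"]

# Attach an AWS managed policy that matches the role's operational duties.
sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=permission_set["PermissionSetArn"],
    ManagedPolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Assign the permission set to a group (synced from the external IdP) for a member account.
sso_admin.create_account_assignment(
    InstanceArn=INSTANCE_ARN,
    TargetId=ACCOUNT_ID,
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set["PermissionSetArn"],
    PrincipalType="GROUP",
    PrincipalId=GROUP_ID,
)
```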
A company is designing a new application that uploads files to an Amazon S3 bucket. The uploaded files are processed to extract metadata.
Processing must take less than 5 seconds. The volume and frequency of the uploads vary from a few files each hour to hundreds of concurrent uploads.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure AWS CloudTrail trails to log Amazon S3 API calls. Use AWS AppSync to process the files.
- B . Configure a new object created S3 event notification within the bucket to invoke an AWS Lambda function to process the files.
- C . Configure Amazon Kinesis Data Streams to deliver the files to the S3 bucket. Invoke an AWS Lambda function to process the files.
- D . Deploy an Amazon EC2 instance. Create a script that lists all files in the S3 bucket and processes new files. Use a cron job that runs every minute to run the script.
B
Explanation:
Using S3 event notifications to trigger AWS Lambda for file processing is a cost-effective and serverless solution. Lambda scales automatically with the upload volume, and per-file processing of under 5 seconds fits well within Lambda’s 15-minute maximum execution time.
Option A: AWS AppSync is designed for GraphQL APIs and is not suitable for file processing.
Option C: Kinesis is overkill and more expensive for this use case.
Option D: Running an EC2 instance incurs ongoing costs and is less flexible compared to Lambda.
AWS Documentation Reference: Amazon S3 Event Notifications, AWS Lambda Overview
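A minimal sketch of option B is shown below, assuming a hypothetical bucket and function ARN and that the Lambda function’s resource-based policy already allows invocation by Amazon S3. The first part wires up the event notification; the second is a bare-bones handler that reads object metadata.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")


def configure_notification(bucket_name: str, function_arn: str) -> None:
    """Send 'object created' events from the bucket to the Lambda function."""
    s3.put_bucket_notification_configuration(
        Bucket=bucket_name,
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": function_arn,
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )


def lambda_handler(event, context):
    """Extract basic metadata from each newly uploaded object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        head = s3.head_object(Bucket=bucket, Key=key)
        metadata = {
            "bucket": bucket,
            "key": key,
            "size_bytes": head["ContentLength"],
            "content_type": head.get("ContentType"),
        }
        print(json.dumps(metadata))  # placeholder for storing the metadata downstream
    return {"statusCode": 200}
```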
A company is setting up a development environment on AWS for a team of developers. The team needs to access multiple Amazon S3 buckets to store project data. The team also needs to use Amazon EC2 to run development instances.
The company needs to ensure that the developers have access only to specific Amazon S3 buckets and EC2 instances. Access permissions must be assigned according to each developer’s role on the team. The company wants to minimize the use of permanent credentials and to ensure access is securely managed according to the principle of least privilege.
Which solution will meet these requirements?
- A . Create IAM roles that have administrative-level permissions for Amazon S3 and Amazon EC2.
Require developers to sign in by using Amazon Cognito to access Amazon S3 and Amazon EC2.
- B . Create IAM roles that have fine-grained permissions for Amazon S3 and Amazon EC2. Configure AWS IAM Identity Center to manage credentials for the developers.
- C . Create IAM users that have programmatic access to Amazon S3 and Amazon EC2. Generate individual access keys for each developer to access Amazon S3 and Amazon EC2.
- D . Create a VPC endpoint for Amazon S3. Require developers to access Amazon EC2 instances and Amazon S3 buckets through a bastion host.
B
Explanation:
The most secure and manageable way to provide developers with temporary, least-privilege access is by using AWS IAM Identity Center (formerly AWS SSO). IAM Identity Center allows assigning IAM roles with scoped permissions based on the developer’s team role. This ensures no permanent credentials are required and minimizes risk.
Option B enables role-based access with centralized identity and access management, making it the most secure and scalable solution for managing developer permissions.
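To illustrate the least-privilege aspect of option B, here is a hypothetical inline policy attached to an Identity Center permission set. It scopes access to one example project bucket and to EC2 start/stop actions on instances tagged for the developer’s team; the ARNs, bucket name, and tag values are placeholders.

```python
import json

import boto3

sso_admin = boto3.client("sso-admin")

# Hypothetical identifiers; replace with values from your Identity Center instance.
INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"
PERMISSION_SET_ARN = "arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE"

# Least-privilege inline policy: one project bucket plus EC2 actions limited by team tag.
developer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-project-bucket",
                "arn:aws:s3:::example-project-bucket/*",
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringEquals": {"aws:ResourceTag/team": "dev"}},
        },
    ],
}

sso_admin.put_inline_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=PERMISSION_SET_ARN,
    InlinePolicy=json.dumps(developer_policy),
)
```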
A company has deployed resources in the us-east-1 Region. The company also uses thousands of AWS Outposts servers deployed at remote locations around the world. These Outposts servers regularly download new software versions from us-east-1 that consist of hundreds of files. The company wants to improve the latency of the software download process.
Which solution will meet these requirements?
- A . Create an Amazon S3 bucket in us-east-1. Configure the bucket for static website hosting. Use bucket policies and ACLs to provide read access to the Outposts servers.
- B . Create an Amazon S3 bucket in us-east-1 and a second bucket in us-west-2. Configure replication. Set up a CloudFront distribution with origin failover between the buckets. Download by using signed URLs.
- C . Create an Amazon S3 bucket in us-east-1. Configure S3 Transfer Acceleration. Configure the Outposts servers to download by using the acceleration endpoint.
- D . Create an Amazon S3 bucket in us-east-1. Set up a CloudFront distribution using all edge locations with caching enabled. Configure the bucket as the origin. Download the software by using signed URLs.
D
Explanation:
Amazon CloudFront uses a globally distributed network of edge locations that cache content close to users. When Outposts servers around the world download large software packages, CloudFront provides significantly reduced latency due to edge caching. This is the AWS-recommended solution for accelerating downloads of static files at scale and across global locations.
S3 Transfer Acceleration optimizes uploads and downloads to a single Region but does not provide edge caching. Multi-Region replication with failover does not reduce latency globally because requests still must reach the regional origins.
Therefore, CloudFront with caching enabled is the correct design for improving download speed worldwide.
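As a sketch of the download path in option D, the following generates a CloudFront signed URL for one software file. The distribution domain, key pair ID, and local private key file are hypothetical, and the cryptography package supplies the RSA signer.

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical values: the CloudFront public key pair ID and the distribution's domain name.
KEY_PAIR_ID = "K2JCJMDEHXQW5F"
DISTRIBUTION_DOMAIN = "d111111abcdef8.cloudfront.net"


def rsa_signer(message: bytes) -> bytes:
    """Sign the CloudFront policy with the private key that matches KEY_PAIR_ID."""
    with open("cloudfront_private_key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# Signed URL for one software file, valid for one hour; the Outposts servers download
# through the nearest CloudFront edge location, which caches the object.
signed_url = signer.generate_presigned_url(
    f"https://{DISTRIBUTION_DOMAIN}/releases/v2.4/package-001.bin",
    date_less_than=datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
)
print(signed_url)
```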
A media company is using video conversion tools that run on Amazon EC2 instances. The video conversion tools run on a combination of Windows EC2 instances and Linux EC2 instances. Each video file is tens of gigabytes in size. The video conversion tools must process the video files in the shortest possible amount of time. The company needs a single, centralized file storage solution that can be mounted on all the EC2 instances that host the video conversion tools.
Which solution will meet these requirements?
- A . Deploy Amazon FSx for Windows File Server with hard disk drive (HDD) storage.
- B . Deploy Amazon FSx for Windows File Server with solid state drive (SSD) storage.
- C . Deploy Amazon Elastic File System (Amazon EFS) with Max I/O performance mode.
- D . Deploy Amazon Elastic File System (Amazon EFS) with General Purpose performance mode.
C
Explanation:
Amazon EFS with Max I/O performance mode is designed for workloads that require high levels of parallelism, such as video processing across multiple EC2 instances. EFS provides shared file storage that can be mounted on both Windows and Linux EC2 instances, and the Max I/O mode ensures the best performance for handling large files and concurrent access across multiple instances.
Option A and B (FSx for Windows File Server): FSx for Windows File Server is optimized for Windows workloads and would not be ideal for Linux instances or high-throughput, parallel workloads.
Option D (EFS General Purpose mode): General Purpose mode offers lower latency but doesn’t support the high throughput needed for large, concurrent workloads.
AWS Reference: Amazon EFS Performance Modes
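A minimal boto3 sketch of option C follows, assuming hypothetical subnet and security group IDs. It creates the file system in Max I/O performance mode and adds one mount target; in practice you would create a mount target in every Availability Zone that hosts conversion instances.

```python
import boto3

efs = boto3.client("efs")

# Create a shared file system with the Max I/O performance mode for highly
# parallel access from many conversion instances. Elastic Throughput is not
# available with Max I/O, so Bursting Throughput is used here.
file_system = efs.create_file_system(
    CreationToken="video-conversion-shared-storage",
    PerformanceMode="maxIO",
    ThroughputMode="bursting",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "video-conversion"}],
)

# Hypothetical subnet and security group; repeat per Availability Zone used
# by the conversion instances.
efs.create_mount_target(
    FileSystemId=file_system["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)
```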
A company runs an internet-facing web application on AWS and uses Amazon Route 53 with a public hosted zone.
The company wants to log DNS response codes to support future root cause analysis.
Which solution will meet these requirements?
- A . Use Route 53 to configure query logging.
- B . Use AWS CloudTrail to record all Route 53 queries.
- C . Use Amazon CloudWatch metrics for Route 53.
- D . Use AWS Trusted Advisor for root cause analysis.
A
Explanation:
To capture DNS query and response data, including response codes, Amazon Route 53 provides query logging, which is the most precise and AWS-supported solution for this requirement.
Option A enables Route 53 query logging, which records detailed information about DNS queries, such as the queried domain name, record type, resolver IP address, and DNS response code. These logs are delivered to Amazon CloudWatch Logs, where administrators can search, analyze, and retain them for forensic investigation and root cause analysis.
Option B is incorrect because AWS CloudTrail records API calls to AWS services, not DNS query traffic.
Option C provides aggregated metrics (such as query counts and health checks) but does not include per-query response codes.
Option D offers best-practice recommendations but does not collect or analyze DNS query data.
Therefore, A is the correct solution because Route 53 query logging provides the detailed, low-level DNS visibility required for troubleshooting and operational analysis.
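A brief boto3 sketch of option A, with a hypothetical hosted zone ID and account ID. The log group must live in us-east-1, and a CloudWatch Logs resource policy that allows route53.amazonaws.com to write to it is assumed to already exist (omitted here).

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")
route53 = boto3.client("route53")

# Hypothetical hosted zone ID and account ID.
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
ACCOUNT_ID = "111122223333"

# Query logs for public hosted zones must go to a CloudWatch Logs log group in
# us-east-1; the /aws/route53/ prefix matches the console's naming convention.
log_group_name = "/aws/route53/example.com"
logs.create_log_group(logGroupName=log_group_name)
log_group_arn = f"arn:aws:logs:us-east-1:{ACCOUNT_ID}:log-group:{log_group_name}"

# Turn on query logging for the hosted zone; each DNS query and its response
# code will then appear in the log group.
route53.create_query_logging_config(
    HostedZoneId=HOSTED_ZONE_ID,
    CloudWatchLogsLogGroupArn=log_group_arn,
)
```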
An ecommerce company runs several internal applications in multiple AWS accounts. The company uses AWS Organizations to manage its AWS accounts.
A security appliance in the company’s networking account must inspect interactions between applications across AWS accounts.
Which solution will meet these requirements?
- A . Deploy a Network Load Balancer (NLB) in the networking account to send traffic to the security appliance. Configure the application accounts to send traffic to the NLB by using an interface VPC endpoint in the application accounts.
- B . Deploy an Application Load Balancer (ALB) in the application accounts to send traffic directly to the security appliance.
- C . Deploy a Gateway Load Balancer (GWLB) in the networking account to send traffic to the security appliance. Configure the application accounts to send traffic to the GWLB by using an interface GWLB endpoint in the application accounts.
- D . Deploy an interface VPC endpoint in the application accounts to send traffic directly to the security appliance.
C
Explanation:
The Gateway Load Balancer (GWLB) is specifically designed to route traffic through a security appliance in a hub-and-spoke model, making it the ideal solution for inspecting traffic between multiple AWS accounts. GWLB enables you to simplify, scale, and deploy third-party virtual appliances transparently, and it can work across multiple VPCs or accounts by using interface endpoints (Gateway Load Balancer endpoints).
Key AWS features:
Traffic Inspection: The GWLB allows the centralized security appliance to inspect traffic between different VPCs, making it suitable for inspecting inter-account interactions.
Interface VPC Endpoints: By using interface endpoints in the application accounts, traffic can securely and efficiently be routed to the security appliance in the networking account.
AWS Documentation: The use of GWLB aligns with AWS’s best practices for centralized network security, simplifying architecture and reducing operational complexity.
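The following boto3 sketch outlines option C’s plumbing with hypothetical subnet and VPC IDs. It creates the Gateway Load Balancer in the networking account, exposes it as an endpoint service, and creates a Gateway Load Balancer endpoint on the application side; the GENEVE target group, listener, route table updates, and the application account’s separate credentials are omitted for brevity.

```python
import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Networking account: Gateway Load Balancer in front of the security appliance.
gwlb = elbv2.create_load_balancer(
    Name="central-inspection-gwlb",
    Type="gateway",
    Subnets=["subnet-aaaa1111bbbb2222c"],  # hypothetical appliance subnet
)["LoadBalancers"][0]

# Expose the GWLB as a VPC endpoint service that other accounts can connect to.
service = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[gwlb["LoadBalancerArn"]],
    AcceptanceRequired=False,
)["ServiceConfiguration"]

# Application account (would normally run with that account's credentials):
# create a Gateway Load Balancer endpoint that routes traffic to the inspection service.
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName=service["ServiceName"],
    VpcId="vpc-0123456789abcdef0",           # hypothetical application VPC
    SubnetIds=["subnet-dddd3333eeee4444f"],  # hypothetical endpoint subnet
)
```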
