Practice Free SAA-C03 Exam Online Questions
A company is using AWS Identity and Access Management (IAM) Access Analyzer to refine IAM permissions for employee users. The company uses an organization in AWS Organizations and AWS Control Tower to manage its AWS accounts. The company has designated a specific member account as an audit account.
A solutions architect needs to set up IAM Access Analyzer to aggregate findings from all member accounts in the audit account.
What is the first step the solutions architect should take?
- A . Use AWS CloudTrail to configure one trail for all accounts. Create an Amazon S3 bucket in the audit account. Configure the trail to send logs related to access activity to the new S3 bucket in the audit account.
- B . Configure a delegated administrator account for IAM Access Analyzer in the AWS Control Tower management account. In the delegated administrator account for IAM Access Analyzer, specify the AWS account ID of the audit account.
- C . Create an Amazon S3 bucket in the audit account. Generate a new permissions policy, and add a service role to the policy to give IAM Access Analyzer access to AWS CloudTrail and the S3 bucket in the audit account.
- D . Add a new trust policy that includes permissions to allow IAM Access Analyzer to perform sts: AssumeRole actions. Modify the permissions policy to allow IAM Access Analyzer to generate policies.
B
Explanation:
The first step is to configure a delegated administrator account for IAM Access Analyzer at the organization level. Only after delegating the administrator account can you aggregate Access Analyzer findings from all member accounts into a designated audit account. This must be set up in the AWS Organizations management account.
AWS Documentation Extract:
“You must designate a delegated administrator for IAM Access Analyzer at the organization level. The delegated administrator account aggregates findings from all member accounts.”
(Source: IAM Access Analyzer documentation)
A, C, D: These steps do not establish the organization-wide aggregation required for Access Analyzer.
Reference: AWS Certified Solutions Architect – Official Study Guide, Access Analyzer Delegation.
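As an illustration, the delegated-administrator registration described above could be sketched with boto3. The account ID below is a placeholder, and the call must be made from the Organizations management account; this sketch only builds the request parameters rather than calling AWS:

```python
import json

# Hypothetical audit account ID -- replace with the real one.
AUDIT_ACCOUNT_ID = "111122223333"

def delegated_admin_request(account_id: str) -> dict:
    """Build the parameters that would be passed to boto3's
    organizations.register_delegated_administrator, which must be
    called from the Organizations management account."""
    return {
        "AccountId": account_id,
        "ServicePrincipal": "access-analyzer.amazonaws.com",
    }

request = delegated_admin_request(AUDIT_ACCOUNT_ID)
print(json.dumps(request))
```

After delegation, an organization-level analyzer created in the audit account aggregates findings from every member account.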
A shipping company wants to run a Kubernetes container-based web application in disconnected mode while the company’s ships are in transit at sea. The application must provide local users with high availability.
Which solution will meet these requirements?
- A . Use AWS Snowball Edge as the primary and secondary sites.
- B . Use AWS Snowball Edge as the primary site, and use an AWS Local Zone as the secondary site.
- C . Use AWS Snowball Edge as the primary site, and use an AWS Outposts server as the secondary site.
- D . Use AWS Snowball Edge as the primary site, and use an AWS Wavelength Zone as the secondary site.
A
Explanation:
When operating in disconnected or limited-connectivity environments, such as ships at sea, AWS recommends using AWS Snowball Edge devices to host local compute and storage workloads. Snowball Edge supports Amazon EC2 instances and AWS IoT Greengrass, and can run Amazon EKS Anywhere for local Kubernetes cluster deployments.
From AWS Documentation:
“You can use AWS Snowball Edge devices to run compute-intensive applications in remote or disconnected locations. Snowball Edge devices support running Amazon EC2 instances and Amazon EKS Anywhere clusters.”
(Source: AWS Snow Family – Developer Guide)
Why A is correct:
Snowball Edge provides local compute and storage even without internet connectivity.
Multiple Snowball Edge devices can be clustered together for high availability and failover.
Fully self-contained environment suitable for ships or field operations.
Why the others are incorrect:
B, C, D require network connectivity to AWS Regions or Zones (Local Zone, Outposts, Wavelength), which is not available at sea. These options cannot operate fully disconnected.
Reference: AWS Snow Family Developer Guide – “Running Compute Applications on AWS Snowball Edge”
AWS Well-Architected Framework – Resilience Pillar
AWS Architecture Blog – “Edge Computing with Snow Family”
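A minimal sketch of the client-side failover logic that a clustered Snowball Edge deployment enables; the device hostnames and the health-check results are hypothetical:

```python
# Pick the first healthy node from a clustered Snowball Edge
# deployment (endpoint names below are placeholders).
def first_healthy(endpoints, is_healthy):
    """Return the first endpoint whose health check passes, else None."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    return None

nodes = ["snowball-edge-1.local", "snowball-edge-2.local"]
# Simulate the primary node being down so traffic fails over.
health = {"snowball-edge-1.local": False, "snowball-edge-2.local": True}
active = first_healthy(nodes, lambda ep: health[ep])
```

In practice the health check would probe the local Kubernetes ingress on each device, but the failover decision itself is this simple.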
A company is migrating a new application from an on-premises data center to a new VPC in the AWS Cloud. The company has multiple AWS accounts and VPCs that share many subnets and applications.
The company wants to have fine-grained access control for the new application. The company wants to ensure that all network resources across accounts and VPCs that are granted permission to access the new application can access the application.
Which solution will meet these requirements?
- A . Set up a VPC peering connection for each VPC that needs access to the new application VPC. Update route tables in each VPC to enable connectivity.
- B . Deploy a transit gateway in the account that hosts the new application. Share the transit gateway with each account that needs to connect to the application. Update route tables in the VPC that hosts the new application and in the transit gateway to enable connectivity.
- C . Use an AWS PrivateLink endpoint service to make the new application accessible to other VPCs. Control access to the application by using an endpoint policy.
- D . Use an Application Load Balancer (ALB) to expose the new application to the internet. Configure authentication and authorization processes to ensure that only specified VPCs can access the application.
C
Explanation:
AWS PrivateLink is the most suitable solution for providing fine-grained access control while allowing multiple VPCs, potentially across multiple accounts, to access the new application. This approach offers the following advantages:
Fine-grained control: Endpoint policies can restrict access to specific services or principals.
No need for route table updates: Unlike VPC peering or transit gateways, AWS PrivateLink does not require complex route table management.
Scalable architecture: PrivateLink scales to support traffic from multiple VPCs.
Secure connectivity: Ensures private connectivity over the AWS network, without exposing resources to the internet.
Why Other Options Are Not Ideal:
Option A:
VPC peering is not scalable when connecting multiple VPCs or accounts.
Route table management becomes complex as the number of VPCs increases. Not scalable.
Option B:
While transit gateways provide scalable VPC connectivity, they are not ideal for fine-grained access control.
Transit gateways allow connectivity but do not inherently restrict access to specific applications. Not ideal for fine-grained access control.
Option D:
Exposing the application through an ALB over the internet is not secure and does not align with the requirement to use private network resources. Security risk.
Reference: AWS PrivateLink: AWS Documentation – PrivateLink
AWS Networking Services Comparison: AWS Whitepaper – Networking Services
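For illustration, an endpoint policy that limits the endpoint service to specific consumer accounts could be built like this; the account IDs are placeholders:

```python
import json

# Placeholder consumer account IDs that should reach the application.
ALLOWED_ACCOUNTS = ["111122223333", "444455556666"]

# Endpoint policies use standard IAM policy syntax; this one allows
# only the listed accounts to use the interface endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": [f"arn:aws:iam::{a}:root" for a in ALLOWED_ACCOUNTS]},
            "Action": "*",
            "Resource": "*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

The policy is attached to the interface endpoint, so no route table changes are needed to enforce it.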
A company runs all its business applications in the AWS Cloud. The company uses AWS Organizations to manage multiple AWS accounts.
A solutions architect needs to review all permissions granted to IAM users to determine which users have more permissions than required.
Which solution will meet these requirements with the LEAST administrative overhead?
- A . Use Network Access Analyzer to review all access permissions in the company’s AWS accounts.
- B . Create an AWS CloudWatch alarm that activates when an IAM user creates or modifies resources in an AWS account.
- C . Use AWS Identity and Access Management (IAM) Access Analyzer to review all the company’s resources and accounts.
- D . Use Amazon Inspector to find vulnerabilities in existing IAM policies.
C
Explanation:
IAM Access Analyzer analyzes the policies attached to identities and resources across all accounts in an AWS Organization. Its findings, including unused access findings, help identify users who hold more permissions than they actually use, which supports implementing least privilege. It is specifically designed for reviewing and refining IAM permissions with minimal administrative effort.
AWS Documentation Extract:
“IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. You can also use Access Analyzer policy checks to refine permissions and implement least privilege.”
(Source: IAM Access Analyzer documentation)
A: Network Access Analyzer is for VPC network access analysis, not IAM permissions.
B: CloudWatch alarms are not suitable for detailed permission analysis.
D: Amazon Inspector is for security vulnerability assessment, not IAM policy review.
Reference: AWS Certified Solutions Architect – Official Study Guide, IAM Security Analysis.
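As a sketch, the request body for creating an organization-wide unused-access analyzer might look like the following; the analyzer name and age threshold are placeholders, and the field names assume the IAM Access Analyzer CreateAnalyzer API:

```python
# Sketch of a CreateAnalyzer request for organization-wide
# unused-access findings (name and threshold are placeholders).
create_analyzer_request = {
    "analyzerName": "org-unused-access",        # placeholder name
    "type": "ORGANIZATION_UNUSED_ACCESS",       # scan the whole organization
    "configuration": {
        "unusedAccess": {"unusedAccessAge": 90}  # flag access unused for 90+ days
    },
}
```

The resulting findings list users and roles whose granted permissions have gone unused, which is exactly the over-permissioning review the question asks for.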
A company’s software development team needs an Amazon RDS Multi-AZ cluster. The RDS cluster will serve as a backend for a desktop client that is deployed on premises. The desktop client requires direct connectivity to the RDS cluster.
The company must give the development team the ability to connect to the cluster by using the client when the team is in the office.
Which solution provides the required connectivity MOST securely?
- A . Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Use AWS Site-to-Site VPN with a customer gateway in the company’s office.
- B . Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use AWS Site-to-Site VPN with a customer gateway in the company’s office.
- C . Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use RDS security groups to allow the company’s office IP ranges to access the cluster.
- D . Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Create a cluster user for each developer. Use RDS security groups to allow the users to access the cluster.
B
Explanation:
Requirement Analysis: Need secure, direct connectivity from an on-premises client to an RDS cluster, accessible only when in the office.
VPC with Private Subnets: Ensures the RDS cluster is not publicly accessible, enhancing security.
Site-to-Site VPN: Provides secure, encrypted connection between on-premises office and AWS VPC.
Implementation:
Create a VPC with two private subnets.
Launch the RDS cluster in the private subnets.
Set up a Site-to-Site VPN connection with a customer gateway in the office.
Conclusion: This setup ensures secure and direct connectivity with minimal exposure, meeting the requirement for secure access from the office.
Reference
AWS Site-to-Site VPN: AWS Site-to-Site VPN Documentation
Amazon RDS: Amazon RDS Documentation
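To tighten access further, the cluster's security group can admit only the office range that arrives over the VPN. A sketch of such an ingress rule, where the CIDR is a placeholder for the on-premises network:

```python
# Illustrative security-group ingress rule: allow only the office
# network (reached via the Site-to-Site VPN) to the MySQL port.
OFFICE_CIDR = "10.20.0.0/16"  # placeholder on-premises range

ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3306,   # default MySQL/Aurora port
    "ToPort": 3306,
    "IpRanges": [{"CidrIp": OFFICE_CIDR, "Description": "office via VPN"}],
}
```

Combined with private subnets, this means the cluster is reachable only from the office network and only over the encrypted tunnel.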
A company is deploying a critical application by using Amazon RDS for MySQL. The application must be highly available and must recover automatically. The company needs to support interactive users (transactional queries) and batch reporting (analytical queries) with no more than a 4-hour lag. The analytical queries must not affect the performance of the transactional queries.
Which solution will meet these requirements?
- A . Configure Amazon RDS for MySQL in a Multi-AZ DB instance deployment with one standby instance. Point the transactional queries to the primary DB instance. Point the analytical queries to a secondary DB instance that runs in a different Availability Zone.
- B . Configure Amazon RDS for MySQL in a Multi-AZ DB cluster deployment with two standby instances. Point the transactional queries to the primary DB instance. Point the analytical queries to the reader endpoint.
- C . Configure Amazon RDS for MySQL to use multiple read replicas across multiple Availability Zones. Point the transactional queries to the primary DB instance. Point the analytical queries to one of the replicas in a different Availability Zone.
- D . Configure Amazon RDS for MySQL as the primary database for the transactional queries with automated backups enabled. Each night, create a read-only database from the most recent snapshot to support the analytical queries. Terminate the previously created database.
C
Explanation:
The requirement has three key elements: high availability with automatic recovery, separation of transactional and analytical workloads, and acceptable reporting lag up to 4 hours. The clean AWS-native pattern for isolating heavy read/reporting traffic from transactional writes in RDS for MySQL is to use read replicas and direct reporting queries to the replica endpoint(s).
Option C meets these requirements well. The primary RDS for MySQL instance handles transactional traffic (reads/writes). One or more RDS read replicas asynchronously replicate data from the primary. Because replication is asynchronous, some lag is expected; the requirement explicitly tolerates up to a 4-hour lag, which fits the read replica model. Directing batch reporting and analytical queries to a replica prevents those expensive queries from consuming CPU, memory, and I/O on the primary, thereby protecting interactive user performance. Deploying replicas across multiple Availability Zones also improves availability of the reporting tier and reduces the risk that an AZ issue prevents reporting access.
Option A is incorrect because a standard Multi-AZ DB instance uses a synchronous standby that is not readable and is intended for failover, not for serving analytical queries.
Option B describes a Multi-AZ DB cluster deployment, which does provide readable standbys in some RDS engines; however, for the classic RDS MySQL exam pattern, the most direct and widely used method to isolate analytics is read replicas.
Option D creates a nightly snapshot-based read-only copy; this increases operational complexity, can result in stale data for most of the day, and introduces provisioning delays, which is unnecessary when replicas provide continuous near-real-time copies.
Therefore, C is the best fit because it provides HA for the primary, isolates reporting reads to replicas, and meets the allowed replication lag requirement.
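The 4-hour tolerance can be expressed as a simple check against a replica's reported lag, for example the ReplicaLag CloudWatch metric, which is reported in seconds:

```python
# Decide whether a replica's reported lag (in seconds) is within
# the 4-hour tolerance stated in the requirement.
MAX_LAG_SECONDS = 4 * 60 * 60  # 4 hours

def within_reporting_sla(replica_lag_seconds: float) -> bool:
    """True if the replica is fresh enough for batch reporting."""
    return replica_lag_seconds <= MAX_LAG_SECONDS

assert within_reporting_sla(120)           # 2 minutes of lag: fine
assert not within_reporting_sla(5 * 3600)  # 5 hours of lag: violates the SLA
```

In practice replica lag is usually seconds to minutes, so the 4-hour budget leaves ample headroom.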
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.
The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create a public Network Load Balancer. Specify the application target group.
- B . Create a Gateway Load Balancer. Specify the application target group.
- C . Create a public Application Load Balancer. Specify the application target group.
- D . Create a second target group. Add Elastic IP addresses to the EC2 instances.
- E . Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.
C, E
Explanation:
The Application Load Balancer (ALB) supports sticky sessions (session affinity) using application cookies. AWS WAF integrates natively with ALB to provide Layer 7 protection at the same endpoint.
From AWS Documentation:
“You can enable sticky sessions for your Application Load Balancer target groups to ensure that a user’s requests are consistently routed to the same target. AWS WAF integrates with Application Load Balancer to protect your web applications from common exploits.”
(Source: Elastic Load Balancing User Guide & AWS WAF Developer Guide)
Why C and E are correct:
C: ALB operates at Layer 7 (HTTP/HTTPS), supports sticky sessions, and can serve as a public endpoint.
E: AWS WAF can be directly associated with the ALB to inspect traffic and enforce rules. Together, they fulfill both the security and session affinity requirements.
Why others are incorrect:
A: A Network Load Balancer operates at Layer 4, cannot be associated with AWS WAF, and does not support cookie-based sticky sessions.
B: Gateway Load Balancer is used for virtual appliances, not web applications.
D: Using EIPs bypasses load balancing and WAF integration.
Reference: Elastic Load Balancing User Guide – “Sticky Sessions for Application Load Balancers”
AWS WAF Developer Guide – “Associating a Web ACL with an ALB”
AWS Well-Architected Framework – Security and Performance Pillars
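As an illustration, duration-based stickiness is enabled through target-group attributes. A sketch of the attribute list in the form the elbv2 ModifyTargetGroupAttributes API expects; the duration is an example value:

```python
# Target-group attributes enabling load-balancer-generated cookie
# stickiness (duration value is an example, not a recommendation).
stickiness_attributes = [
    {"Key": "stickiness.enabled", "Value": "true"},
    {"Key": "stickiness.type", "Value": "lb_cookie"},
    {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},  # 1 day
]
```

Switching `stickiness.type` to `app_cookie` lets the application supply its own session cookie instead.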
An ecommerce company is migrating its on-premises workload to the AWS Cloud. The workload currently consists of a web application and a backend Microsoft SQL database for storage.
The company expects a high volume of customers during a promotional event. The new infrastructure in the AWS Cloud must be highly available and scalable.
Which solution will meet these requirements with the LEAST administrative overhead?
- A . Migrate the web application to two Amazon EC2 instances across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS for Microsoft SQL Server with read replicas in both Availability Zones.
- B . Migrate the web application to an Amazon EC2 instance that runs in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to two EC2 instances across separate AWS Regions with database replication.
- C . Migrate the web application to Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS with Multi-AZ deployment.
- D . Migrate the web application to three Amazon EC2 instances across three Availability Zones behind an Application Load Balancer. Migrate the database to three EC2 instances across three Availability Zones.
C
Explanation:
To ensure high availability and scalability, the web application should run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer (ALB). The database should be migrated to Amazon RDS with Multi-AZ deployment, which ensures fault tolerance and automatic failover in case of an AZ failure. This setup minimizes administrative overhead while meeting the company’s requirements for high availability and scalability.
Option A: Read replicas are designed for scaling read traffic and do not provide automatic failover; Multi-AZ provides better availability for a transactional database.
Option B: Replicating across AWS Regions adds unnecessary complexity for a single web application.
Option D: Self-managing the database on EC2 instances across three Availability Zones adds unnecessary administrative overhead for this scenario.
Reference: Auto Scaling Groups
Amazon RDS Multi-AZ
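A sketch of the key parameters for provisioning the Multi-AZ database; the identifier, engine edition, and instance class are placeholders:

```python
# Key parameters that would be passed to RDS CreateDBInstance
# (identifier, edition, and class are placeholders).
db_params = {
    "DBInstanceIdentifier": "ecommerce-db",  # placeholder
    "Engine": "sqlserver-se",                # SQL Server Standard edition
    "DBInstanceClass": "db.m5.large",        # placeholder sizing
    "MultiAZ": True,   # synchronous standby with automatic failover
}
```

Setting `MultiAZ` is the only change needed to get the standby and automatic failover; RDS manages replication and the failover DNS switch itself, which is what keeps administrative overhead low.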
A company hosts its applications in multiple private and public subnets in a VPC. The applications in the private subnets need to access an API. The API is available on the internet and is hosted in the company’s on-premises data center. A solutions architect needs to establish connectivity for applications in the private subnets.
Which solution will meet these requirements MOST cost-effectively?
- A . Create a transit gateway to connect the VPC to the on-premises network. Use the transit gateway to route API calls from the private subnets to the on-premises data center.
- B . Create a NAT gateway in the public subnet of the VPC. Use the NAT gateway to allow the private subnets to access the API over the internet.
- C . Establish an AWS PrivateLink connection to connect the VPC to the on-premises network. Use PrivateLink to make API calls from the private subnets to the on-premises data center.
- D . Implement an AWS Site-to-Site VPN connection between the VPC and the on-premises data center. Use the VPN connection to make API calls from the private subnets to the on-premises data center.
D
Explanation:
AWS Site-to-Site VPN is a cost-effective way to securely connect your on-premises data center with AWS resources. In this scenario:
Applications in private subnets require access to the API hosted in the on-premises data center.
A Site-to-Site VPN connection is a secure and cost-efficient option to route traffic between the VPC and on-premises resources.
Transit Gateway and PrivateLink are not cost-effective for this use case.
NAT Gateway only provides internet access for private subnets, which is not suitable for reaching an on-premises resource.
Reference: AWS Site-to-Site VPN Documentation
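The private subnets reach the data center through a route that targets the virtual private gateway terminating the VPN. A sketch of that route entry, where the CIDR and gateway ID are placeholders:

```python
# Route-table entry sending on-premises-bound traffic from the
# private subnets into the VPN (values are placeholders).
route = {
    "DestinationCidrBlock": "172.16.0.0/16",  # on-premises network
    "GatewayId": "vgw-0123456789abcdef0",     # virtual private gateway
}
```

This entry goes in the route tables of the private subnets; return traffic follows the on-premises router's route back through the customer gateway.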
A company runs a custom application on Amazon EC2 On-Demand Instances. The application has frontend nodes that must run 24/7. The backend nodes only need to run for short periods depending on the workload.
Frontend nodes accept jobs and place them in queues. Backend nodes asynchronously process jobs from the queues, and jobs can be restarted. The company wants to scale infrastructure based on workload, using the most cost-effective option.
Which solution meets these requirements MOST cost-effectively?
- A . Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
- B . Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.
- C . Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.
- D . Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
B
Explanation:
AWS documentation states that workloads running 24/7 should use Reserved Instances or Savings Plans for the lowest cost. Therefore, the frontend nodes, which always run, should use Reserved Instances.
The backend nodes process asynchronous, restartable jobs, which makes them ideal for EC2 Spot Instances, the most cost-effective compute option for interruption-tolerant workloads.
Fargate (Options A and D) is significantly more expensive for large, steady compute usage. Spot Instances are unsuitable for the always-on frontend nodes because they can be interrupted at any time (Option C).
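A back-of-the-envelope comparison illustrates why option B wins; all hourly prices below are assumed for illustration only, since real prices vary by Region, instance type, and Spot market conditions:

```python
# ASSUMED hourly prices for illustration -- not real AWS pricing.
ON_DEMAND = 0.10  # $/hr baseline
RESERVED = 0.06   # $/hr effective, assuming ~40% Reserved discount
SPOT = 0.03       # $/hr, assuming ~70% Spot discount

frontend_hours = 730  # one instance running 24/7 for a month
backend_hours = 200   # intermittent batch processing

# Option B: Reserved frontend + Spot backend.
option_b = frontend_hours * RESERVED + backend_hours * SPOT
# Naive alternative: everything On-Demand.
all_on_demand = (frontend_hours + backend_hours) * ON_DEMAND
```

Under these assumptions option B costs roughly half of running everything On-Demand, and the gap widens as backend usage grows.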
