Practice Free SAA-C03 Exam Online Questions
A company has an organization in AWS Organizations that has all features enabled. The company has multiple Amazon S3 buckets in multiple AWS Regions around the world. The S3 buckets contain sensitive data.
The company needs to ensure that no personally identifiable information (PII) is stored in the S3 buckets. The company also needs a scalable solution to identify PII.
Which solution will meet these requirements?
- A . In the Organizations management account, configure an Amazon Macie administrator IAM user as the delegated administrator for the global organization. Use the Macie administrator user to configure Macie settings to scan for PII.
- B . For each Region in the Organizations management account, designate a delegated Amazon Macie administrator account. In the Macie administrator account, add all accounts in the organization. Use the Macie administrator account to enable Macie. Configure automated sensitive data discovery for all accounts in the organization.
- C . For each Region in the Organizations management account, configure a service control policy (SCP) to identify PII. Apply the SCP to the organization root.
- D . In the Organizations management account, configure AWS Lambda functions to scan for PII in each Region.
B
Explanation:
Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data in AWS. To scale across Regions and accounts in AWS Organizations, Macie supports delegated administration, automated sensitive data discovery, and multi-account aggregation through a centralized admin account.
Reference: AWS Documentation - Amazon Macie Multi-Account Configuration
A company wants to use AWS Direct Connect to connect on-premises networks to AWS. The company runs many VPCs in a single Region and plans to scale to hundreds of VPCs.
Which service will simplify and scale the network architecture?
- A . VPC endpoints
- B . AWS Transit Gateway
- C . Amazon Route 53
- D . AWS Secrets Manager
B
Explanation:
As the number of VPCs grows, managing individual VPC connections becomes complex and unscalable. AWS Transit Gateway is specifically designed to act as a central hub that connects multiple VPCs and on-premises networks through Direct Connect.
Option B simplifies network architecture by allowing hundreds or thousands of VPCs to connect through a single gateway, reducing routing complexity and operational overhead. Transit Gateway supports scalable routing, centralized inspection, and simplified expansion.
The other options do not address network connectivity or scaling challenges. Therefore, B is the correct solution.
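As a hedged illustration of the hub-and-spoke pattern, a minimal CloudFormation sketch might look like the following; the VPC and subnet IDs are placeholders, and the Direct Connect gateway association would be defined separately:

```yaml
# Illustrative sketch only: a Transit Gateway hub plus one VPC attachment.
Resources:
  HubTransitGateway:
    Type: AWS::EC2::TransitGateway
    Properties:
      Description: Central hub for VPCs and on-premises connectivity
      DefaultRouteTableAssociation: enable
      DefaultRouteTablePropagation: enable

  AppVpcAttachment:
    Type: AWS::EC2::TransitGatewayAttachment
    Properties:
      TransitGatewayId: !Ref HubTransitGateway
      VpcId: vpc-0abc123example            # placeholder VPC ID
      SubnetIds:
        - subnet-0abc123example            # placeholder subnet ID
```

Each additional VPC needs only one more attachment, rather than a mesh of pairwise peering connections.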
A company uses AWS to run its workloads. The company uses AWS Organizations to manage its accounts. The company needs to identify which departments are responsible for specific costs.
New accounts are constantly created in the Organizations account structure. The organization's continuous integration and continuous delivery (CI/CD) framework already adds a populated department tag to the AWS resources. The company wants to use an AWS Cost Explorer report to identify the service costs by department from all AWS accounts.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Select TWO.)
- A . Activate the aws:createdBy cost allocation tag and the department cost allocation tag in the management account.
- B . Create a new cost and usage report in Cost Explorer. Group by the department cost allocation tag. Apply a filter to see all linked accounts and services.
- C . Activate only the department cost allocation tag in the management account.
- D . Create a new cost and usage report in Cost Explorer. Group by the department cost allocation tag without any other filters.
- E . Activate only the aws:createdBy cost allocation tag in the management account.
C, D
Explanation:
To track costs by department, you must activate the custom department tag as a cost allocation tag in the AWS Organizations management account. Once activated, Cost Explorer and cost and usage reports can group costs by this tag for all linked accounts. The most operationally efficient way is to activate only the relevant department tag and create a cost and usage report grouped by that tag.
AWS Documentation Extract:
“To use a tag for cost allocation, you must activate it in the AWS Billing and Cost Management console. After activation, you can use the tag to group costs in Cost Explorer and reports.”
(Source: AWS Cost Management documentation)
A, E: aws:createdBy is not related to department cost grouping and is unnecessary.
B: Applying extra filters is optional; D is more direct and operationally efficient.
Reference: AWS Certified Solutions Architect - Official Study Guide, Cost Allocation and Tagging.
A company is using an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The company must ensure that Kubernetes service accounts in the EKS cluster have secure and granular access to specific AWS resources by using IAM roles for service accounts (IRSA).
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Create an IAM policy that defines the required permissions. Attach the policy directly to the IAM role of the EKS nodes.
- B . Implement network policies within the EKS cluster to prevent Kubernetes service accounts from accessing specific AWS services.
- C . Modify the EKS cluster's IAM role to include permissions for each Kubernetes service account. Ensure a one-to-one mapping between IAM roles and Kubernetes roles.
- D . Define an IAM role that includes the necessary permissions. Annotate the Kubernetes service accounts with the Amazon Resource Name (ARN) of the IAM role.
- E . Set up a trust relationship between the IAM roles for the service accounts and an OpenID Connect (OIDC) identity provider.
D, E
Explanation:
IAM Roles for Service Accounts (IRSA) is the AWS-recommended mechanism to grant fine-grained, pod-level AWS permissions in Amazon EKS. IRSA avoids the security risks of granting permissions to the entire node and aligns with the principle of least privilege.
To implement IRSA, two core components are required. First, an IAM role must be created with a trust policy that references an OIDC identity provider associated with the EKS cluster. This is why Option E is required. The OIDC provider allows AWS STS to authenticate Kubernetes service accounts and issue temporary credentials. Without this trust relationship, service accounts cannot assume IAM roles securely.
Second, the Kubernetes service account must be explicitly annotated with the ARN of the IAM role it is allowed to assume. This is provided by Option D. The annotation creates a direct mapping between a specific service account and a specific IAM role, ensuring granular access control at the pod level.
Option A is incorrect because attaching policies to the node IAM role grants permissions to all pods on the node, which violates least-privilege principles.
Option B addresses network-level access but does not control AWS API authorization.
Option C is incorrect because IAM roles should not be overloaded with permissions for multiple service accounts, and there is no direct one-to-one mapping between IAM roles and Kubernetes roles in this way.
Therefore, D and E together form the correct and secure IRSA configuration, enabling granular, auditable, and secure access to AWS resources from EKS workloads.
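As a hedged sketch of Option D's half of the configuration, the annotated service account might look like the following; the account ID, role name, and namespace are illustrative placeholders:

```yaml
# Illustrative Kubernetes ServiceAccount annotated with an IAM role ARN (IRSA).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader                 # placeholder service account name
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AppReaderRole
```

The matching IAM role (Option E's half) carries a trust policy whose principal is the cluster's OIDC provider, allows sts:AssumeRoleWithWebIdentity, and conditions the token's sub claim on system:serviceaccount:default:app-reader so that only this service account can assume the role.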
A company hosts a public web application on AWS with a three-tier architecture: a frontend Auto Scaling group, an application Auto Scaling group, and an Amazon RDS database.
During unexpected traffic spikes, the company notices long delays in startup time when the frontend and application tiers scale out. The company needs to improve scaling performance without negatively affecting user experience.
Which solution will meet these requirements MOST cost-effectively?
- A . Decrease the minimum number of EC2 instances for both Auto Scaling groups. Increase the desired number of instances to meet peak demand.
- B . Configure the maximum number of instances for both Auto Scaling groups to the number required for peak demand. Create a warm pool.
- C . Increase the maximum number of EC2 instances for both Auto Scaling groups to meet normal demand. Create a warm pool.
- D . Use scheduled scaling. Increase EC2 and RDS instance sizes.
B
Explanation:
The problem is scale-out latency: when traffic spikes, new instances take time to launch, initialize, and pass health checks. This can temporarily degrade user experience even if Auto Scaling eventually adds enough capacity. The goal is to improve responsiveness to sudden spikes cost-effectively.
A key Auto Scaling feature for this situation is an EC2 Auto Scaling warm pool. A warm pool maintains a set of pre-initialized instances (stopped or running, depending on configuration) that Auto Scaling can rapidly transition into service. This reduces the time needed for instance launch and initialization, enabling faster scale-out during unexpected surges.
Option B is best because it supports the application’s peak needs by ensuring Auto Scaling can scale up to the required maximum capacity and keeps warm capacity ready. While a warm pool has some cost (especially if instances are kept running), it is typically more cost-effective than permanently running peak capacity all the time. It improves user experience by minimizing cold start delays during spikes.
Option A is counterproductive: lowering minimum capacity increases the likelihood of scale-out delays.
Option C is illogical as written (“increase maximum to meet normal demand”); maximum should reflect peak potential, and a warm pool without sufficient max headroom will not solve the spike problem.
Option D uses scheduled scaling, but the spike is unexpected, and increasing instance
sizes increases cost and does not address launch delay; it may also reduce horizontal scaling flexibility.
Therefore, B is the most cost-effective way to improve scale-out performance and protect user experience during unexpected bursts.
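A hedged CloudFormation sketch of Option B's warm pool follows; the Auto Scaling group name and capacity figures are illustrative placeholders:

```yaml
# Illustrative warm pool attached to an existing Auto Scaling group.
Resources:
  FrontendWarmPool:
    Type: AWS::AutoScaling::WarmPool
    Properties:
      AutoScalingGroupName: frontend-asg   # placeholder ASG name
      PoolState: Stopped                   # stopped instances incur mainly EBS cost
      MinSize: 4                           # pre-initialized instances kept ready
      MaxGroupPreparedCapacity: 20         # align with the group's peak maximum
```

Keeping the pool in the Stopped state is the usual cost/latency trade-off: instances skip launch and initialization when promoted into service, but do not accrue compute charges while waiting.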
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job.
Which solution will meet these requirements MOST cost-effectively?
- A . Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
- B . Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each hour.
- C . Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task finishes.
- D . Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.
B
Explanation:
AWS Lambda is event-driven and "scales automatically with the number of requests," charging per request and for compute duration in millisecond increments; runs can be scheduled with Amazon EventBridge rules. Allocating 1 GB of memory proportionally increases the available CPU, which suits short CPU bursts. For a 10-second, once-per-hour job, Lambda eliminates instance idle time, providing the lowest cost and no servers to manage. Fargate tasks (A) bill per second with a 1-minute minimum and require container packaging; for a 10-second job, costs are higher than Lambda.
Options C and D keep EC2 management overhead and either maintain instances or add start/stop orchestration complexity; EC2 costs dominate because the instance sits idle for 59+ minutes each hour. Therefore, Lambda + EventBridge is the most cost-effective and operationally simple solution.
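To make the cost gap concrete, the following back-of-the-envelope sketch compares the two pricing models. The price constants are assumptions (approximate us-east-1 list prices); check current AWS pricing before relying on them:

```python
# Rough monthly cost comparison: hourly 10-second Lambda run vs. an idle EC2 instance.
# All prices are ASSUMED approximations, not authoritative AWS pricing.
LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667   # assumed Lambda compute price
LAMBDA_PRICE_PER_REQUEST = 0.0000002        # assumed Lambda request price
EC2_HOURLY = 0.0208                         # assumed On-Demand price, small instance

invocations_per_month = 24 * 30             # one run per hour
duration_seconds = 10
memory_gb = 1.0

lambda_monthly = (invocations_per_month * duration_seconds * memory_gb
                  * LAMBDA_PRICE_PER_GB_SECOND
                  + invocations_per_month * LAMBDA_PRICE_PER_REQUEST)
ec2_monthly = EC2_HOURLY * 24 * 30          # instance runs (mostly idle) all month

print(f"Lambda: ~${lambda_monthly:.2f}/month, EC2: ~${ec2_monthly:.2f}/month")
```

Under these assumed prices the Lambda cost is on the order of cents per month while the always-on instance costs dollars; exact figures vary by Region and instance type, but the shape of the comparison holds.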
Reference: AWS Lambda Developer Guide (per-millisecond pricing, automatic scaling, EventBridge scheduling); Well-Architected Cost Optimization (match the pricing model to the workload and remove idle capacity).
A company plans to run a high performance computing (HPC) workload on Amazon EC2 Instances. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication.
Which solution will meet these requirements?
- A . Configure the EC2 instances to be part of a cluster placement group
- B . Launch the EC2 instances with Dedicated Instance tenancy.
- C . Launch the EC2 instances as Spot Instances.
- D . Configure an On-Demand Capacity Reservation when the EC2 instances are launched.
A
Explanation:
Cluster Placement Group: This type of placement group is designed to provide low-latency network performance and high throughput by grouping instances within a single Availability Zone. It is ideal for applications that require tightly coupled node-to-node communication.
Configuration:
When launching EC2 instances, specify the option to launch them in a cluster placement group.
This ensures that the instances are physically located close to each other, reducing latency and increasing network throughput.
Benefits:
Low-Latency Communication: Instances in a cluster placement group benefit from enhanced networking capabilities, enabling low-latency communication.
High Network Throughput: The network performance within a cluster placement group is optimized for high throughput, which is essential for HPC workloads.
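A hedged CloudFormation sketch of the configuration described above; the AMI ID is a placeholder and the instance type is only one example of an HPC-suited choice:

```yaml
# Illustrative cluster placement group with one instance launched into it.
Resources:
  HpcPlacementGroup:
    Type: AWS::EC2::PlacementGroup
    Properties:
      Strategy: cluster                 # packs instances close together in one AZ
  HpcNode:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abc123example       # placeholder AMI ID
      InstanceType: c5n.18xlarge        # illustrative network-optimized instance type
      PlacementGroupName: !Ref HpcPlacementGroup
```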
Reference: Placement Groups
High Performance Computing on AWS
A company has a social media application that is experiencing rapid user growth. The current architecture uses t-family Amazon EC2 instances. The current architecture struggles to handle the increasing number of user posts and images. The application experiences performance slowdowns during peak usage times.
A solutions architect needs to design an updated architecture that will resolve the performance issues and scale as usage increases.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use the largest Amazon EC2 instance in the same family to host the application. Install a relational database on the instance to store all account information and to store posts and images.
- B . Use Amazon Simple Queue Service (Amazon SQS) to buffer incoming posts. Use a larger EC2 instance in the same family to host the application. Store account information in Amazon DynamoDB. Store posts and images in the local EC2 instance file system.
- C . Use an Amazon API Gateway REST API and AWS Lambda functions to process requests. Store account information in Amazon DynamoDB. Use Amazon S3 to store posts and images.
- D . Deploy multiple EC2 instances in the same family. Use an Application Load Balancer to distribute traffic. Use a shared file system to store account information and to store posts and images.
C
Explanation:
This question focuses on scalability, operational overhead, and performance during unpredictable workloads.
API Gateway + AWS Lambda enables serverless compute, which scales automatically based on the number of requests. It requires no provisioning, maintenance, or patching of servers, eliminating operational overhead.
Amazon DynamoDB is a fully managed NoSQL database optimized for high-throughput workloads with single-digit millisecond latency.
Amazon S3 is designed for high availability and durability, and is ideal for storing unstructured content such as user-uploaded images.
By leveraging these fully managed and scalable services, the architecture meets the requirement of supporting rapid user growth while minimizing operational complexity. This solution aligns with the Performance Efficiency and Operational Excellence pillars in the AWS Well-Architected Framework.
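As a hedged sketch of how a request handler in this architecture might look, the following minimal Lambda function (API Gateway proxy integration) validates a post and returns a response. The DynamoDB and S3 writes are deliberately left as comments so the skeleton stays self-contained; all names are illustrative:

```python
import json

def handler(event, context):
    """Minimal API Gateway proxy-integration handler for creating a post (sketch)."""
    body = json.loads(event.get("body") or "{}")
    post_text = body.get("text", "")
    if not post_text:
        return {"statusCode": 400, "body": json.dumps({"error": "text is required"})}
    # A real implementation would write post metadata to DynamoDB (table.put_item)
    # and upload any attached image to S3 (s3.put_object) here.
    return {
        "statusCode": 201,
        "body": json.dumps({"message": "post accepted", "length": len(post_text)}),
    }
```

Because each invocation is stateless and short-lived, concurrency scales with request volume and no capacity planning is needed for traffic spikes.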
Reference: Serverless Web Application Architecture
Using DynamoDB with Lambda
Best Practices for API Gateway
A company uses two AWS accounts named Account A and Account B. Account A hosts a data analytics application. Account B hosts a data lake in an Amazon S3 bucket. Data analysts in Account A need to access the data lake in Account B. The access solution must be secure, use temporary credentials, enforce the principle of least privilege, and avoid long-term access keys.
Which solution will meet these requirements?
- A . Create IAM users in Account B and share the access keys for the users with analysts in Account A.
- B . Use an S3 bucket policy to configure the S3 bucket in Account B to be publicly accessible.
- C . Configure a resource-based policy for the S3 bucket in Account B to allow access from an IAM role in Account A.
- D . Use a bastion host in Account B to proxy analyst requests from Account A through an Amazon EC2 instance.
C
Explanation:
The correct answer is C because the company needs a secure cross-account access solution that uses temporary credentials, follows least privilege, and avoids long-term access keys. The best practice for this design is to allow an IAM role in Account A to access the Amazon S3 bucket in Account B through a resource-based bucket policy. Analysts or applications in Account A can assume the IAM role and receive temporary credentials through AWS Security Token Service (AWS STS), which satisfies the requirement to avoid permanent access keys.
This approach is secure because permissions can be scoped precisely to the required bucket and prefixes, supporting least-privilege access. It also avoids creating separate IAM users in Account B and eliminates the operational and security risks of sharing static credentials across accounts. Cross-account access through IAM roles and S3 bucket policies is the standard AWS pattern for securely granting one account access to resources in another account.
Option A is incorrect because creating IAM users and sharing access keys introduces long-term credentials, which the company explicitly wants to avoid.
Option B is incorrect because making the S3 bucket public violates security requirements and does not enforce least privilege.
Option D is incorrect because using a bastion host adds unnecessary infrastructure and operational overhead and is not the recommended approach for S3 access.
AWS security best practices favor role-based access with temporary credentials and resource policies for cross-account resource sharing. Therefore, configuring the S3 bucket policy in Account B to allow access from an IAM role in Account A is the most secure and appropriate solution.
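A hedged sketch of the resource-based policy on the Account B bucket; the account ID, role name, and bucket name are placeholders, and the action list should be narrowed to exactly what the analysts need:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAnalyticsRoleFromAccountA",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/AnalyticsRole" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-data-lake",
        "arn:aws:s3:::example-data-lake/*"
      ]
    }
  ]
}
```

The role in Account A also needs an identity policy allowing the same S3 actions; analysts then assume that role through AWS STS and operate on temporary credentials only.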
A company hosts dozens of multi-tier applications on AWS. The presentation layer and logic layer are Amazon EC2 Linux instances that use Amazon EBS volumes.
The company needs a solution to ensure that operating system vulnerabilities are not introduced to the EC2 instances when the company deploys new features. The company uses custom AMIs to deploy EC2 instances in an Auto Scaling group. The solution must scale to handle all applications that the company hosts.
Which solution will meet these requirements?
- A . Use Amazon Inspector to patch operating system vulnerabilities. Invoke Amazon Inspector when a new AMI is deployed.
- B . Use AWS Backup to back up the EBS volume of each updated instance. Use the EBS backup volumes to create new AMIs. Use the existing Auto Scaling group to deploy the new AMIs.
- C . Use AWS Systems Manager Patch Manager to patch operating system vulnerabilities in the custom AMIs.
- D . Use EC2 Image Builder to create new AMIs when the company deploys new features. Include the update-linux component in the build components of the new AMIs. Use the existing Auto Scaling group to deploy the new AMIs.
D
Explanation:
EC2 Image Builder is the AWS-managed service specifically designed to automate the creation, patching, hardening, and testing of AMIs.
For this scenario, best practice is to:
Use EC2 Image Builder pipelines to generate new golden AMIs whenever features are deployed or on a schedule.
Include build components such as update-linux to automatically apply OS security patches and updates during image creation.
Deploy the new AMIs through existing Auto Scaling groups, ensuring that every newly launched instance is already patched and compliant.
This approach:
Prevents OS vulnerabilities from being introduced at deployment time.
Scales across dozens of applications because AMI pipelines are reusable and automated.
Minimizes manual effort and reduces drift between instances.
Amazon Inspector (Option A) identifies vulnerabilities but does not itself patch AMIs.
AWS Backup (Option B) is for data protection, not vulnerability management.
Patch Manager (Option C) patches running instances, but the requirement is to ensure vulnerabilities are not introduced with new AMIs; golden image pipelines with EC2 Image Builder are the recommended solution.
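A hedged sketch of an EC2 Image Builder (AWSTOE) component document that applies OS updates during the build phase, similar in effect to the managed update-linux component; the name and description are illustrative:

```yaml
# Illustrative AWSTOE component document that patches the OS during the AMI build.
name: apply-os-updates
description: Apply Linux security updates while baking the golden AMI
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UpdatePackages
        action: UpdateOS        # built-in AWSTOE action that applies OS updates
```

Baking patches into the image at build time, rather than patching instances after launch, keeps every instance launched by the Auto Scaling group identical and already compliant.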
