Practice Free SAA-C03 Exam Online Questions
A company is designing a new Amazon Elastic Kubernetes Service (Amazon EKS) deployment to host multi-tenant applications that use a single cluster. The company wants to ensure that each pod has its own hosted environment. The environments must not share CPU, memory, storage, or elastic network interfaces.
Which solution will meet these requirements?
- A . Use Amazon EC2 instances to host self-managed Kubernetes clusters. Use taints and tolerations to enforce isolation boundaries.
- B . Use Amazon EKS with AWS Fargate. Use Fargate to manage resources and to enforce isolation boundaries.
- C . Use Amazon EKS and self-managed node groups. Use taints and tolerations to enforce isolation boundaries.
- D . Use Amazon EKS and managed node groups. Use taints and tolerations to enforce isolation boundaries.
B
Explanation:
AWS Fargate provides per-pod isolation for CPU, memory, storage, and networking, making it ideal for multi-tenant use cases.
AWS Documentation
Reference: EKS with Fargate
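As a minimal sketch of how pods get routed onto Fargate, the request below is shaped like the parameters that boto3's `eks.create_fargate_profile()` accepts; the cluster name, role ARN, subnet IDs, and namespace are hypothetical placeholders, not values from the question.

```python
# Sketch: parameters for an EKS Fargate profile, in the shape accepted by
# boto3's eks.create_fargate_profile(). All names/IDs are hypothetical.
def fargate_profile_params(cluster, tenant_namespace):
    return {
        "fargateProfileName": f"{tenant_namespace}-profile",
        "clusterName": cluster,
        "podExecutionRoleArn": "arn:aws:iam::123456789012:role/eks-fargate-pod-role",
        "subnets": ["subnet-aaa111", "subnet-bbb222"],  # private subnets only
        # Pods in this namespace are scheduled onto Fargate, where each pod
        # runs with its own dedicated CPU, memory, storage, and ENI.
        "selectors": [{"namespace": tenant_namespace}],
    }

params = fargate_profile_params("multi-tenant-cluster", "tenant-a")
```

One profile per tenant namespace keeps the isolation boundary at the pod level, which is what the question requires.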
A company is migrating a data processing application to AWS. The application processes several short-lived batch jobs that cannot be disrupted. The process generates data after each batch job finishes running. The company accesses the data for 30 days following data generation. After 30 days, the company stores the data for 2 years.
The company wants to optimize costs for the application and data storage.
Which solution will meet these requirements?
- A . Use Amazon EC2 Spot Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Instant Retrieval after 30 days. Configure a bucket policy to delete the data after 2 years.
- B . Use Amazon EC2 On-Demand Instances to run the application. Store the data in Amazon S3 Glacier Instant Retrieval. Move the data to S3 Glacier Deep Archive after 30 days. Configure an S3 Lifecycle configuration to delete the data after 2 years.
- C . Use Amazon EC2 Spot Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Flexible Retrieval after 30 days. Configure a bucket policy to delete the data after 2 years.
- D . Use Amazon EC2 On-Demand Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Deep Archive after 30 days. Configure an S3 Lifecycle configuration to delete the data after 2 years.
D
Explanation:
Amazon EC2 On-Demand Instances: Since the batch jobs cannot be disrupted, On-Demand Instances provide the necessary reliability and availability.
Amazon S3 Standard: Storing data in S3 Standard for the first 30 days ensures quick and frequent access.
S3 Glacier Deep Archive: After 30 days, moving data to S3 Glacier Deep Archive significantly reduces storage costs for data that is rarely accessed.
S3 Lifecycle Configuration: Automating the transition and deletion of objects using lifecycle policies ensures cost optimization and compliance with data retention requirements.
Reference: Amazon S3 Storage Classes
Managing your storage lifecycle (AWS Documentation)
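The transition and expiration described above can be sketched as a single Lifecycle rule, in the shape accepted by boto3's `s3.put_bucket_lifecycle_configuration()`; the prefix is a hypothetical example.

```python
# Sketch: an S3 Lifecycle configuration matching option D, shaped like the
# LifecycleConfiguration argument of put_bucket_lifecycle_configuration().
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "batch-output-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": "batch-output/"},  # hypothetical prefix
            # After 30 days of frequent access in S3 Standard, move objects
            # to the lowest-cost archive tier.
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            # Delete objects after the 2-year retention period.
            "Expiration": {"Days": 730},
        }
    ]
}
```

Note that lifecycle rules, not bucket policies, are the mechanism for automated deletion, which is part of why options A and C are wrong.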
A solutions architect needs to build a log storage solution for a client. The client has an application that produces user activity logs that track user API calls to the application. The application typically produces 50 GB of logs each day. The client needs a storage solution that makes the logs available for occasional querying and analytics.
Which solution will meet these requirements MOST cost-effectively?
- A . Store user activity logs in an Amazon S3 bucket. Use Amazon Athena to perform queries and analytics.
- B . Store user activity logs in an Amazon OpenSearch Service cluster. Use OpenSearch Dashboards to perform queries and analytics.
- C . Store user activity logs in an Amazon RDS instance. Use an Open Database Connectivity (ODBC) connector to perform queries and analytics.
- D . Store user activity logs in an Amazon CloudWatch Logs log group. Use CloudWatch Logs Insights to perform queries and analytics.
A
Explanation:
For infrequent or ad hoc querying of log data, Amazon S3 + Amazon Athena provides the most cost-effective, serverless, and scalable analytics solution.
From AWS Documentation:
“Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.”
(Source: Amazon Athena User Guide)
Why A is correct:
Amazon S3 offers durable, scalable, and cost-efficient storage.
Athena allows SQL-based querying on structured or semi-structured data like logs.
No need to provision or manage infrastructure.
Ideal for occasional querying at low cost.
Why the others are not optimal:
Option B: OpenSearch adds cost and is best for frequent, low-latency log querying.
Option C: RDS is not optimized for large-scale write-heavy log ingestion and costs more.
Option D: CloudWatch Logs is suitable for real-time monitoring, not for long-term storage and analytics of large log volumes.
Reference: Amazon Athena User Guide
AWS Well-Architected Framework: Cost Optimization Pillar
Amazon S3 Storage Classes and Pricing Guide
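To illustrate the S3 + Athena pattern, the sketch below shows a DDL statement defining an external table over JSON logs in S3, plus an ad hoc query; the table name, bucket path, and field names are hypothetical.

```python
# Sketch: Athena DDL over hypothetical JSON activity logs stored in S3.
create_table_sql = """
CREATE EXTERNAL TABLE IF NOT EXISTS user_activity_logs (
    user_id   string,
    api_call  string,
    event_time string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://example-log-bucket/activity/'
"""

# Occasional analytics query; Athena is serverless and charges only for the
# data each query scans, which suits infrequent access.
query_sql = (
    "SELECT api_call, COUNT(*) AS calls "
    "FROM user_activity_logs GROUP BY api_call"
)
```

Partitioning the logs by date under the S3 prefix would further reduce the data scanned per query.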
A company wants DevOps teams to create IAM roles, but no role may have administrative permissions.
Which solution will meet these requirements?
- A . Use SCPs to deny AdministratorAccess policy usage.
- B . Use SCPs to require a permissions boundary when creating IAM roles.
- C . Allow all permissions and auto-delete noncompliant roles.
- D . Attach restrictive permissions boundaries directly to IAM users.
B
Explanation:
Using Service Control Policies (SCPs) to enforce permissions boundaries ensures that all created roles are constrained, regardless of attached policies. This is the most secure, scalable, and AWS-recommended preventive control.
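A minimal sketch of such an SCP follows: it denies `iam:CreateRole` unless the request attaches an approved permissions boundary. The boundary policy ARN is a hypothetical placeholder; the boundary policy itself would exclude administrative permissions.

```python
# Sketch: an SCP that blocks role creation without the approved permissions
# boundary. The boundary ARN is hypothetical.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBoundaryOnRoleCreation",
            "Effect": "Deny",
            "Action": ["iam:CreateRole"],
            "Resource": "*",
            # Deny unless the new role carries the approved boundary.
            "Condition": {
                "StringNotEquals": {
                    "iam:PermissionsBoundary":
                        "arn:aws:iam::123456789012:policy/DevOpsBoundary"
                }
            },
        }
    ],
}
```

Because the boundary caps a role's effective permissions regardless of its attached policies, this is a preventive control rather than a detect-and-remediate one.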
A company has Amazon EC2 instances in multiple AWS Regions. The instances all store and retrieve confidential data from the same Amazon S3 bucket. The company wants to improve the security of its current architecture.
The company wants to ensure that only the Amazon EC2 instances within its VPC can access the S3 bucket. The company must block all other access to the bucket.
Which solution will meet this requirement?
- A . Use IAM policies to restrict access to the S3 bucket.
- B . Use server-side encryption (SSE) to encrypt data in the S3 bucket at rest. Store the encryption key on the EC2 instances.
- C . Create a VPC endpoint for Amazon S3. Configure an S3 bucket policy to allow connections only from the endpoint.
- D . Use AWS Key Management Service (AWS KMS) with customer-managed keys to encrypt the data before sending the data to the S3 bucket.
C
Explanation:
Creating a VPC endpoint for S3 and configuring a bucket policy to allow connections only from the endpoint ensures that only EC2 instances within the VPC can access the S3 bucket. This solution improves security by restricting access at the network level without the need for public internet access.
Option A (IAM policies): IAM policies alone cannot restrict access based on the network location.
Option B and D (Encryption): Encryption secures data at rest but does not restrict network access to the bucket.
AWS Documentation
Reference: Amazon S3 VPC Endpoints
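The bucket policy half of this solution can be sketched as a deny-by-default statement keyed on the `aws:SourceVpce` condition; the bucket name and endpoint ID below are hypothetical.

```python
# Sketch: an S3 bucket policy that blocks every request not arriving through
# a specific VPC endpoint. Bucket name and endpoint ID are hypothetical.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllExceptVPCEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-confidential-bucket",
                "arn:aws:s3:::example-confidential-bucket/*",
            ],
            # Requests that do not traverse the gateway endpoint attached to
            # the company's VPC are rejected.
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc123"}},
        }
    ],
}
```

An explicit Deny with `StringNotEquals` is the usual pattern here because a Deny overrides any Allow granted elsewhere.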
A company wants to create a payment processing application. The application must run when a payment record arrives in an existing Amazon S3 bucket. The application must process each payment record exactly once. The company wants to use an AWS Lambda function to process the payments.
Which solution will meet these requirements?
- A . Configure the existing S3 bucket to send object creation events to Amazon EventBridge. Configure EventBridge to route events to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure the Lambda function to run when a new event arrives in the SQS queue.
- B . Configure the existing S3 bucket to send object creation events to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the Lambda function to run when a new event arrives in the SNS topic.
- C . Configure the existing S3 bucket to send object creation events to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda function to run when a new event arrives in the SQS queue.
- D . Configure the existing S3 bucket to send object creation events directly to the Lambda function. Configure the Lambda function to handle object creation events and to process the payments.
A
Explanation:
Exactly-once processing requires an Amazon SQS FIFO queue, which deduplicates messages. Amazon S3 event notifications cannot target a FIFO queue directly, so the events must be routed through Amazon EventBridge. A standard SQS queue (option C) and SNS (option B) provide at-least-once delivery, and invoking Lambda directly from S3 (option D) can likewise produce duplicate invocations.
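Option A's routing can be sketched as an EventBridge event pattern plus the FIFO queue attributes it targets; the bucket and queue names are hypothetical.

```python
# Sketch: the EventBridge event pattern and SQS FIFO queue attributes that
# option A relies on. Bucket name is hypothetical.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["payment-records-bucket"]}},
}

# FIFO queues deduplicate messages, supporting exactly-once processing.
# S3 event notifications cannot target a FIFO queue directly, which is why
# EventBridge sits in between.
queue_attributes = {
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true",
}
```

The S3 bucket must also have EventBridge notifications enabled for these events to flow.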
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day.
The company needs to prevent users from accidentally deleting the EBS volume snapshots. The solution must not change the administrative rights of a storage administrator user.
Which solution will meet these requirements with the LEAST administrative effort?
- A . Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
- B . Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
- C . Add tags to the snapshots. Create tag-level retention rules in the Recycle Bin for EBS snapshots. Configure rule lock settings for the retention rules.
- D . Take EBS snapshots by using the EBS direct APIs. Copy the snapshots to an Amazon S3 bucket. Configure S3 Versioning and Object Lock on the bucket.
C
Explanation:
Amazon EBS Snapshots Recycle Bin enables you to specify retention rules for EBS snapshots based on tags. When snapshots are deleted, they are retained in the Recycle Bin for a specified duration, preventing accidental deletion. Tag-level rules allow selective protection without changing IAM roles or user permissions.
Reference: Amazon EBS Snapshots and Recycle Bin (AWS Documentation)
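A tag-level retention rule with a rule lock can be sketched as below, shaped like the request accepted by boto3's `rbin.create_rule()`; the tag key/value and the retention and unlock-delay durations are hypothetical choices.

```python
# Sketch: a locked, tag-level Recycle Bin retention rule, in the shape
# accepted by boto3's rbin.create_rule(). Tag and durations are hypothetical.
retention_rule = {
    "ResourceType": "EBS_SNAPSHOT",
    # Only snapshots carrying this tag are covered by the rule.
    "ResourceTags": [{"ResourceTagKey": "backup", "ResourceTagValue": "daily"}],
    # Deleted snapshots stay recoverable in the Recycle Bin for 14 days.
    "RetentionPeriod": {"RetentionPeriodValue": 14,
                        "RetentionPeriodUnit": "DAYS"},
    # Lock the rule so it cannot be modified or deleted without first waiting
    # out an unlock delay.
    "LockConfiguration": {"UnlockDelay": {"UnlockDelayValue": 7,
                                          "UnlockDelayUnit": "DAYS"}},
}
```

Because this works at the service level, no IAM permissions change for the storage administrator, matching the question's constraint.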
A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible.
Which solutions meet these requirements? (Select TWO.)
- A . Create an Amazon RDS DB instance in Multi-AZ mode.
- B . Create an Amazon RDS DB instance and one or more replicas in another Availability Zone.
- C . Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load.
- D . Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.
- E . Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.
A, D
Explanation:
For the relational database tier, Amazon RDS Multi-AZ provides high availability with minimal manual intervention. In Multi-AZ mode, RDS maintains a synchronous standby in a different Availability Zone and provides automatic failover during certain failure scenarios. This reduces operational burden because the service handles replication, health monitoring, and failover orchestration.
For the container-based application tier, Amazon ECS with the Fargate launch type minimizes operational overhead because it removes the need to provision, patch, and scale the underlying EC2 instances that host containers. With Fargate, AWS manages the compute infrastructure, and the team focuses on task definitions, scaling policies, and application configuration. This supports a highly available, load-balanced architecture because ECS services can run tasks across multiple Availability Zones behind an Application Load Balancer, and scaling is managed via ECS Service Auto Scaling.
Option B describes read replicas, which are primarily used for scaling reads and, depending on
configuration, may not provide the same automated failover characteristics as Multi-AZ for high availability. Read replicas are not a direct substitute for Multi-AZ HA.
Option C (self-managed Docker cluster on EC2) is operationally heavy: you must manage node capacity, patching, cluster lifecycle, and scheduling.
Option E (ECS on EC2) is a valid container platform but still requires managing EC2 instances, AMIs, and scaling/patching, which increases manual intervention compared to Fargate.
Therefore, combining RDS Multi-AZ (A) for database resilience and ECS Fargate (D) for serverless container operations meets the HA requirement with the least manual effort.
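The Fargate half of this answer can be sketched as an ECS service spread across two Availability Zones behind a load balancer, in the shape accepted by boto3's `ecs.create_service()`; all names, IDs, and ARNs are hypothetical.

```python
# Sketch: an ECS service running Fargate tasks in two AZs behind an ALB,
# shaped like boto3's ecs.create_service() request. Names are hypothetical.
service_params = {
    "cluster": "ecommerce-cluster",
    "serviceName": "web-app",
    "taskDefinition": "web-app:1",
    "desiredCount": 2,
    "launchType": "FARGATE",  # AWS manages the underlying compute
    "networkConfiguration": {
        "awsvpcConfiguration": {
            # Subnets in two AZs give the service zonal redundancy.
            "subnets": ["subnet-az1", "subnet-az2"],
            "securityGroups": ["sg-app"],
        }
    },
    "loadBalancers": [
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "123456789012:targetgroup/web/abc123",
            "containerName": "web",
            "containerPort": 80,
        }
    ],
}
```

ECS Service Auto Scaling would then adjust `desiredCount` with demand, keeping manual intervention minimal.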
A company recently launched a new application for its customers. The application runs on multiple Amazon EC2 instances across two Availability Zones. End users use TCP to communicate with the application.
The application must be highly available and must automatically scale as the number of users increases.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
- A . Add a Network Load Balancer in front of the EC2 instances.
- B . Configure an Auto Scaling group for the EC2 instances.
- C . Add an Application Load Balancer in front of the EC2 instances.
- D . Manually add more EC2 instances for the application.
- E . Add a Gateway Load Balancer in front of the EC2 instances.
A, B
Explanation:
For an application requiring TCP communication and high availability:
Network Load Balancer (NLB) is the best choice for load balancing TCP traffic because it is designed for handling high-throughput, low-latency connections.
An Auto Scaling group ensures that the application can automatically scale based on demand, adding or removing EC2 instances as needed, which is crucial for handling user growth.
Option C (Application Load Balancer): An ALB operates at layer 7 for HTTP/HTTPS traffic and is not suited to raw TCP workloads.
Option D (Manual scaling): Manually adding instances does not provide the automation or scalability required.
Option E (Gateway Load Balancer): Gateway Load Balancers are used to deploy third-party virtual appliances, not to load balance application traffic directly.
AWS Documentation
Reference: Network Load Balancer
Auto Scaling Group
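The two chosen steps can be sketched as boto3-shaped requests for `elbv2.create_load_balancer` and `autoscaling.create_auto_scaling_group`; the names, subnet IDs, and target group ARN are hypothetical.

```python
# Sketch: option A (NLB) and option B (Auto Scaling group) as request
# parameters. All names and IDs are hypothetical.
nlb_params = {
    "Name": "app-nlb",
    "Type": "network",          # layer 4: load-balances raw TCP traffic
    "Scheme": "internet-facing",
    "Subnets": ["subnet-az1", "subnet-az2"],  # one per Availability Zone
}

asg_params = {
    "AutoScalingGroupName": "app-asg",
    "MinSize": 2,
    "MaxSize": 10,
    "DesiredCapacity": 2,
    "VPCZoneIdentifier": "subnet-az1,subnet-az2",
    # Registering the group with the NLB's target group lets new instances
    # receive traffic automatically as the group scales out.
    "TargetGroupARNs": ["arn:aws:elasticloadbalancing:us-east-1:"
                        "123456789012:targetgroup/app-tcp/abc123"],
}
```

A target-tracking scaling policy on the group would complete the automation, scaling on a metric such as CPU utilization.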
A company is deploying a new application to a VPC on existing Amazon EC2 instances. The application has a presentation tier that uses an Auto Scaling group of EC2 instances. The application also has a database tier that uses an Amazon RDS Multi-AZ database.
The VPC has two public subnets that are split between two Availability Zones. A solutions architect adds one private subnet to each Availability Zone for the RDS database. The solutions architect wants to restrict network access to the RDS database to block access from EC2 instances that do not host the new application.
Which solution will meet this requirement?
- A . Modify the RDS database security group to allow traffic from a CIDR range that includes IP addresses of the EC2 instances that host the new application.
- B . Associate a new ACL with the private subnets. Deny all incoming traffic from IP addresses that belong to any EC2 instance that does not host the new application.
- C . Modify the RDS database security group to allow traffic from the security group that is associated with the EC2 instances that host the new application.
- D . Associate a new ACL with the private subnets. Deny all incoming traffic except for traffic from a CIDR range that includes IP addresses of the EC2 instances that host the new application.
C
Explanation:
Correct Approach:
AWS Security Groups:
Security groups operate at the instance level, making them the ideal tool for controlling access to specific resources such as an Amazon RDS database.
By default, security groups deny all incoming traffic. You can allow access by explicitly specifying another security group.
Associating an RDS database security group with the EC2 instances’ security group ensures only the specified EC2 instances can access the RDS database.
Incorrect Options Analysis:
Option A: Using CIDR blocks for IP-based access is less secure and more difficult to manage. Additionally, Auto Scaling groups dynamically allocate IP addresses, making this approach impractical.
Option B: Network ACLs (NACLs) operate at the subnet level and are stateless. While NACLs can deny or allow traffic, they are not suited to application-specific access control.
Option D: Similar to Option B, using a NACL with CIDR ranges for EC2 IPs is difficult to manage and not application-specific.
Reference: Amazon RDS Security Groups
Security Group Best Practices
Differences Between Security Groups and NACLs
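The group-to-group rule described in option C can be sketched in the shape of boto3's `ec2.authorize_security_group_ingress()` request; the group IDs and database port are hypothetical.

```python
# Sketch: an ingress rule admitting database traffic only from members of
# the application tier's security group. Group IDs are hypothetical.
ingress_params = {
    "GroupId": "sg-rds-database",
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,  # MySQL port; adjust for the engine in use
            "ToPort": 3306,
            # Referencing the app tier's security group instead of CIDR
            # ranges keeps the rule valid as Auto Scaling replaces
            # instances and their IP addresses change.
            "UserIdGroupPairs": [{"GroupId": "sg-app-tier"}],
        }
    ],
}
```

This is why option C beats option A: the rule follows group membership, not ephemeral IP addresses.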
