Practice Free SAA-C03 Exam Online Questions
A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in their VPC.
A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for some requests.
What should the solutions architect do to resolve this issue?
- A . Disable session affinity (sticky sessions) on the ALB.
- B . Replace the ALB with a Network Load Balancer.
- C . Increase the number of EC2 instances in each Availability Zone.
- D . Adjust the frequency of the health checks on the ALB’s target group.
A
Explanation:
The issue described in the question, where incoming traffic seems to favor one EC2 instance, is often caused by session affinity (also known as sticky sessions) being enabled on the Application Load Balancer (ALB). When session affinity is enabled, the ALB routes requests from the same client to the same EC2 instance. This can cause an imbalance in traffic distribution, leading to performance bottlenecks on certain instances while others remain underutilized.
To resolve this issue, disabling session affinity ensures that the ALB distributes incoming traffic evenly across all EC2 instances, allowing better load distribution and reducing latency. The ALB will rely on its round-robin or least outstanding requests algorithm (depending on the configuration) to distribute traffic more evenly across instances.
Option B (Network Load Balancer): The NLB is designed for Layer 4 (TCP) traffic and low latency use cases, but it is not needed here as the problem is with load balancing logic at the application layer (Layer 7). The ALB is more appropriate for HTTP/HTTPS traffic.
Option C (Increase EC2 Instances): Adding more EC2 instances does not solve the root issue of uneven traffic distribution.
Option D (Health Check Frequency): Adjusting health check frequency won’t address the imbalance caused by session affinity.
Reference: AWS Documentation – Application Load Balancer Sticky Sessions
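For illustration, a minimal boto3 sketch of disabling stickiness; the target group ARN is a placeholder, and note that stickiness is configured on the ALB's target group rather than on the load balancer itself:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Disable session affinity (sticky sessions) on the ALB's target group so
# requests are distributed across all healthy targets.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app-tg/abc123",  # placeholder ARN
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "false"},
    ],
)
```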
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?
- A . Configure a CloudFront signed URL.
- B . Configure a CloudFront signed cookie.
- C . Configure a CloudFront field-level encryption profile.
- D . Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
C
Explanation:
Field-level encryption in Amazon CloudFront provides end-to-end encryption for specific data fields (e.g., credit card numbers, social security numbers). It ensures that sensitive fields are encrypted at the edge before being forwarded to the origin, and only the authorized application with the private key can decrypt them.
This adds a layer of protection beyond HTTPS, which encrypts the whole payload but not individual fields. Signed URLs and cookies are for access control, not encryption. Setting HTTPS Only is a good practice but does not satisfy the field-specific encryption requirement.
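As a rough illustration, the sketch below uses boto3 to register a public key and create a field-level encryption profile; the key material, names, and field patterns are placeholders, and the profile would still need to be attached to a field-level encryption configuration and associated with the distribution's cache behavior:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder RSA public key (PEM). The matching private key stays with the
# authorized backend application that is allowed to decrypt the fields.
encoded_key = "-----BEGIN PUBLIC KEY-----\nREPLACE_WITH_PUBLIC_KEY\n-----END PUBLIC KEY-----"

public_key = cloudfront.create_public_key(
    PublicKeyConfig={
        "CallerReference": "fle-key-001",
        "Name": "sensitive-fields-key",
        "EncodedKey": encoded_key,
        "Comment": "Public key used for CloudFront field-level encryption",
    }
)

# The profile defines which request fields CloudFront encrypts at the edge.
cloudfront.create_field_level_encryption_profile(
    FieldLevelEncryptionProfileConfig={
        "Name": "sensitive-fields-profile",
        "CallerReference": "fle-profile-001",
        "Comment": "Encrypt sensitive fields before forwarding requests to the origin",
        "EncryptionEntities": {
            "Quantity": 1,
            "Items": [
                {
                    "PublicKeyId": public_key["PublicKey"]["Id"],
                    "ProviderId": "example-provider",  # hypothetical provider name
                    "FieldPatterns": {"Quantity": 2, "Items": ["credit_card*", "ssn"]},
                }
            ],
        },
    }
)
```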
A company is building a cloud-based application on AWS that will handle sensitive customer data. The application uses Amazon RDS for the database, Amazon S3 for object storage, and S3 Event Notifications that invoke AWS Lambda for serverless processing.
The company uses AWS IAM Identity Center to manage user credentials. The development, testing, and operations teams need secure access to Amazon RDS and Amazon S3 while ensuring the confidentiality of sensitive customer data. The solution must comply with the principle of least privilege.
Which solution meets these requirements with the LEAST operational overhead?
- A . Use IAM roles with least privilege to grant all the teams access. Assign IAM roles to each team with customized IAM policies defining specific permissions for Amazon RDS and S3 object access based on team responsibilities.
- B . Enable IAM Identity Center with an Identity Center directory. Create and configure permission sets with granular access to Amazon RDS and Amazon S3. Assign all the teams to groups that have specific access with the permission sets.
- C . Create individual IAM users for each member in all the teams with role-based permissions. Assign the IAM roles with predefined policies for RDS and S3 access to each user based on user needs. Implement IAM Access Analyzer for periodic credential evaluation.
- D . Use AWS Organizations to create separate accounts for each team. Implement cross-account IAM roles with least privilege. Grant specific permissions for RDS and S3 access based on team roles and responsibilities.
B
Explanation:
This solution allows for secure and least-privilege access with minimal operational overhead.
IAM Identity Center: AWS IAM Identity Center (formerly AWS SSO) enables you to centrally manage access to multiple AWS accounts and applications. By using IAM Identity Center, you can assign permission sets that define what users or groups can access, ensuring that only necessary permissions are granted.
Permission Sets: Permission sets in IAM Identity Center allow you to define granular access controls for specific services, such as Amazon RDS and S3. You can tailor these permissions to meet the needs of different teams, adhering to the principle of least privilege.
Group Management: By assigning users to groups and associating those groups with specific permission sets, you reduce the complexity and overhead of managing individual IAM roles and policies. This method also simplifies compliance and audit processes.
Why Not Other Options?:
Option A (IAM roles): While IAM roles can provide least-privilege access, managing multiple roles and policies across teams increases operational overhead compared to using IAM Identity Center.
Option C (Individual IAM users): Managing individual IAM users and roles can be cumbersome and does not scale well compared to group-based management in IAM Identity Center.
Option D (AWS Organizations with cross-account roles): Creating separate accounts and cross-account roles adds unnecessary complexity and overhead for this use case, where IAM Identity Center provides a more straightforward solution.
Reference: AWS IAM Identity Center – Overview and best practices for using IAM Identity Center; Managing Access Permissions Using IAM Identity Center – Guide on creating and managing permission sets for secure access.
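A minimal boto3 sketch of the permission-set approach, assuming an existing Identity Center instance; the instance ARN, permission set name, and attached policy are hypothetical (in practice you would scope an inline least-privilege policy to the specific RDS and S3 resources and then assign the permission set to the team's group):

```python
import boto3

sso_admin = boto3.client("sso-admin")

# Placeholder value -- the instance ARN comes from list_instances() in a real setup.
INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"

# Create a permission set scoped to what the development team needs.
response = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="DevTeamRdsS3Access",
    Description="Least-privilege access to specific RDS and S3 resources",
    SessionDuration="PT4H",
)
permission_set_arn = response["PermissionSet"]["PermissionSetArn"]

# Attach a managed policy (or an inline least-privilege policy) to the permission set.
sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=permission_set_arn,
    ManagedPolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```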
A company wants to implement a data lake in the AWS Cloud. The company must ensure that only specific teams have access to sensitive data in the data lake. The company must have row-level access control for the data lake.
Which solution will meet these requirements?
- A . Use Amazon RDS to store the data. Use IAM roles and permissions for data governance and access control.
- B . Use Amazon Redshift to store the data. Use IAM roles and permissions for data governance and access control.
- C . Use Amazon S3 to store the data. Use AWS Lake Formation for data governance and access control.
- D . Use AWS Glue Catalog to store the data. Use AWS Glue DataBrew for data governance and access control.
C
Explanation:
AWS Lake Formation provides centralized governance and fine-grained access control for data lakes built on Amazon S3, including row-level (and cell-level) security through data filters. Storing the data in S3 and managing permissions with Lake Formation lets the company grant specific teams access to only the rows they are authorized to see. Amazon RDS and Amazon Redshift are database services rather than data lake storage, and AWS Glue DataBrew is a data preparation tool, not a governance and access control service.
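A hedged sketch of the row-level piece using the Lake Formation API; the catalog ID, database, table, filter expression, and excluded columns are hypothetical:

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Create a data cells filter that limits a team to rows matching a predicate
# and hides sensitive columns. Names below are placeholders.
lakeformation.create_data_cells_filter(
    TableData={
        "TableCatalogId": "123456789012",  # AWS account ID that owns the Glue Data Catalog
        "DatabaseName": "sales_db",
        "TableName": "customer_orders",
        "Name": "emea_team_rows_only",
        "RowFilter": {"FilterExpression": "region = 'EMEA'"},
        "ColumnWildcard": {"ExcludedColumnNames": ["ssn", "credit_card_number"]},
    }
)
```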
An internal product team is deploying a new application to a private VPC in a company’s AWS account. The application runs on Amazon EC2 instances that are in a security group named App1. The EC2 instances store application data in an Amazon S3 bucket and use AWS Secrets Manager to store application service credentials. The company’s security policy prohibits applications in a private VPC from using public IP addresses to communicate.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Configure gateway endpoints for Amazon S3 and AWS Secrets Manager.
- B . Configure interface VPC endpoints for Amazon S3 and AWS Secrets Manager.
- C . Add routes to the endpoints in the VPC route table.
- D . Associate the App1 security group with the interface VPC endpoints. Configure a self-referencing security group rule to allow inbound traffic.
- E . Associate the App1 security group with the gateway endpoints. Configure a self-referencing security group rule to allow inbound traffic.
B, D
Explanation:
To access Amazon S3 and AWS Secrets Manager from a private VPC without using public IP addresses, configure interface VPC endpoints (AWS PrivateLink) for both services. Interface endpoints are reached through elastic network interfaces with private IP addresses, so no route table changes are required; instead, the endpoint network interfaces must allow inbound HTTPS from the EC2 instances. Associating the App1 security group with the endpoints and adding a self-referencing inbound rule accomplishes this. Gateway endpoints exist only for Amazon S3 and DynamoDB, so they cannot cover Secrets Manager.
Reference: AWS Documentation – Interface VPC Endpoints and AWS PrivateLink
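A rough boto3 sketch of the two selected steps; the VPC, subnet, and security group IDs and the Region in the service names are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs -- substitute the real VPC, private subnets, and App1 security group.
VPC_ID = "vpc-0abc1234"
SUBNET_IDS = ["subnet-0aaa1111", "subnet-0bbb2222"]
APP1_SG_ID = "sg-0a1b2c3d4e5f6a7b8"

# Interface endpoint for Secrets Manager (private DNS lets the SDK use the normal endpoint name).
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.secretsmanager",
    SubnetIds=SUBNET_IDS,
    SecurityGroupIds=[APP1_SG_ID],
    PrivateDnsEnabled=True,
)

# Interface endpoint for Amazon S3 (S3 interface endpoints use endpoint-specific DNS names by default).
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=SUBNET_IDS,
    SecurityGroupIds=[APP1_SG_ID],
)

# Self-referencing rule: allow HTTPS from members of App1 to the endpoint network interfaces.
ec2.authorize_security_group_ingress(
    GroupId=APP1_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "UserIdGroupPairs": [{"GroupId": APP1_SG_ID}],
        }
    ],
)
```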
A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default route to the internet through an Amazon EC2 NAT instance. The Lambda function processes input data and saves its output as an object to Amazon S3.
Intermittently, the Lambda function times out while trying to upload the object because of saturated traffic on the NAT instance’s network. The company wants to access Amazon S3 without traversing the internet.
Which solution will meet these requirements?
- A . Replace the EC2 NAT instance with an AWS managed NAT gateway.
- B . Increase the size of the EC2 NAT instance in the VPC to a network-optimized instance type.
- C . Provision a gateway endpoint for Amazon S3 in the VPC. Update the route tables of the subnets accordingly.
- D . Provision a transit gateway. Place transit gateway attachments in the private subnets where the Lambda function is running.
C
Explanation:
Gateway Endpoint for Amazon S3: A VPC endpoint for Amazon S3 allows you to privately connect your VPC to Amazon S3 without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
Provisioning the Endpoint:
Navigate to the VPC Dashboard.
Select "Endpoints" and create a new endpoint.
Choose the service name for S3 (com.amazonaws.region.s3).
Select the appropriate VPC and subnets.
Adjust the route tables of the subnets to include the new endpoint.
Update Route Tables: Modify the route tables of the subnets to direct traffic destined for S3 to the newly created endpoint. This ensures that traffic to S3 does not go through the NAT instance, avoiding the saturated network and eliminating timeouts.
Operational Efficiency: This solution minimizes operational overhead by removing dependency on the NAT instance and avoiding internet traffic, leading to more stable and secure S3 interactions.
Reference: VPC Endpoints for Amazon S3; Creating a Gateway Endpoint
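For illustration, a minimal boto3 sketch of creating the gateway endpoint; the VPC ID, route table IDs, and Region are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs -- use the Lambda function's VPC and the private subnets' route tables.
VPC_ID = "vpc-0abc1234"
ROUTE_TABLE_IDS = ["rtb-0aaa1111", "rtb-0bbb2222"]

# A gateway endpoint for S3 adds prefix-list routes to the selected route tables,
# so traffic to S3 stays on the AWS network instead of going through the NAT instance.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=ROUTE_TABLE_IDS,
)
```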
A company runs an application on several Amazon EC2 instances. Multiple Amazon Elastic Block Store (Amazon EBS) volumes are attached to each EC2 instance. The company needs to back up the configurations and the data of the EC2 instances every night. The application must be recoverable in a secondary AWS Region.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Configure an AWS Lambda function to take nightly snapshots of the application’s EBS volumes and to copy the snapshots to a secondary Region.
- B . Create a backup plan in AWS Backup to take nightly backups. Copy the backups to a secondary Region. Add the EC2 instances to a resource assignment as part of the backup plan.
- C . Create a backup plan in AWS Backup to take nightly backups. Copy the backups to a secondary Region. Add the EBS volumes to a resource assignment as part of the backup plan.
- D . Configure an AWS Lambda function to take nightly snapshots of the application’s EBS volumes and to copy the snapshots to a secondary Availability Zone.
B
Explanation:
AWS Backup is a fully managed backup service that can create backup plans for EC2 instances, including both instance configurations and attached EBS volumes, with scheduled and cross-Region copy capabilities. By adding the EC2 instances to the resource assignment in the backup plan, AWS Backup automatically backs up all configurations and attached EBS volumes, and can copy backups to a secondary Region for disaster recovery, providing the highest operational efficiency with the least manual effort.
AWS Documentation Extract:
“AWS Backup provides fully managed backup for EC2 instances and attached EBS volumes, with scheduling, retention, and cross-Region copy built in. By adding the EC2 instance as a resource, the backup includes both configuration and attached volumes.”
(Source: AWS Backup documentation)
A, D: Custom Lambda scripts increase operational overhead and are not as integrated or robust as AWS Backup.
C: Assigning only EBS volumes does not include the EC2 instance configuration, which is needed for full recovery.
Reference: AWS Certified Solutions Architect – Official Study Guide, Disaster Recovery and Backup.
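A hedged boto3 sketch of the backup plan with a cross-Region copy and an EC2 resource assignment; the vault names, ARNs, schedule, and retention values are illustrative only:

```python
import boto3

backup = boto3.client("backup")

# Placeholder ARNs -- adjust account ID, Regions, vaults, and IAM role.
DEST_VAULT_ARN = "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault"
BACKUP_ROLE_ARN = "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole"

# Nightly backup rule at 03:00 UTC with a copy action to a vault in the secondary Region.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "nightly-ec2-backup",
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [{"DestinationBackupVaultArn": DEST_VAULT_ARN}],
            }
        ],
    }
)

# Assign the EC2 instances (not just the EBS volumes) so instance configuration is captured too.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "app-ec2-instances",
        "IamRoleArn": BACKUP_ROLE_ARN,
        "Resources": [
            "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0",
        ],
    },
)
```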
A company runs an application on Amazon EC2 instances behind an Application Load Balancer (ALB). The company wants to create a public API for the application that uses JSON Web Tokens (JWT) for authentication. The company wants the API to integrate directly with the ALB.
Which solution will meet these requirements?
- A . Use Amazon API Gateway to create a REST API.
- B . Use Amazon API Gateway to create an HTTP API.
- C . Use Amazon API Gateway to create a WebSocket API.
- D . Use Amazon API Gateway to create a gRPC API.
B
Explanation:
Amazon API Gateway offers REST, HTTP, and WebSocket APIs. HTTP APIs are a newer, lightweight option that support JWT authorizers natively, enabling secure, scalable authentication for APIs with JSON Web Tokens.
HTTP APIs can integrate with an ALB as a backend target (for example, through a private integration over a VPC link), providing direct connectivity and simplified API management with JWT authentication built in.
REST APIs can validate JWTs through Amazon Cognito or Lambda authorizers but are more feature-rich and complex, often used for legacy or more advanced use cases, and carry higher cost and latency. WebSocket APIs are for real-time, bidirectional communication, which is not requested here. API Gateway does not offer a gRPC API type, so option D is not a valid choice.
Therefore, HTTP API with JWT authorizers is the best fit for this use case.
Reference: AWS Well-Architected Framework – Security Pillar
(https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
Amazon API Gateway HTTP APIs
(https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api.html)
Using JWT Authorizers with HTTP APIs
(https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html)
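A minimal boto3 sketch of creating an HTTP API with a JWT authorizer; the issuer and audience values are hypothetical (for example, an Amazon Cognito user pool), and the integration that routes requests to the ALB (typically a private integration over a VPC link) is omitted for brevity:

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Hypothetical JWT issuer and audience -- e.g. an Amazon Cognito user pool and app client.
JWT_ISSUER = "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE"
JWT_AUDIENCE = ["example-app-client-id"]

# Create an HTTP API, which supports JWT authorizers natively.
api = apigw.create_api(Name="furniture-api", ProtocolType="HTTP")

# Attach a JWT authorizer that validates tokens from the Authorization header.
apigw.create_authorizer(
    ApiId=api["ApiId"],
    AuthorizerType="JWT",
    IdentitySource=["$request.header.Authorization"],
    JwtConfiguration={"Issuer": JWT_ISSUER, "Audience": JWT_AUDIENCE},
    Name="jwt-authorizer",
)
```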
A company wants to relocate its on-premises MySQL database to AWS. The database accepts regular imports from a client-facing application, which causes a high volume of write operations. The company is concerned that the amount of traffic might be causing performance issues within the application.
Which solution will meet these requirements?
- A . Provision an Amazon RDS for MySQL DB instance with Provisioned IOPS SSD storage. Monitor write operation metrics by using Amazon CloudWatch. Adjust the provisioned IOPS if necessary.
- B . Provision an Amazon RDS for MySQL DB instance with General Purpose SSD storage. Place an Amazon ElastiCache cluster in front of the DB instance. Configure the application to query ElastiCache instead.
- C . Provision an Amazon DocumentDB (with MongoDB compatibility) instance with a memory-optimized instance type. Monitor Amazon CloudWatch for performance-related issues. Change the instance class if necessary.
- D . Provision an Amazon Elastic File System (Amazon EFS) file system in General Purpose performance mode. Monitor Amazon CloudWatch for IOPS bottlenecks. Change to Provisioned Throughput performance mode if necessary.
A
Explanation:
For a MySQL database experiencing high write operations, using Amazon RDS with Provisioned IOPS (io1 or io2) SSD storage is recommended to achieve consistent and low-latency performance. Provisioned IOPS allows you to specify a desired IOPS rate, which is crucial for write-intensive workloads.
Monitoring write operation metrics through Amazon CloudWatch enables you to observe performance and adjust the provisioned IOPS as needed to meet application demands.
Reference: Modifying settings for Provisioned IOPS SSD storage (AWS Documentation); Amazon CloudWatch metrics for Amazon RDS (AWS Documentation)
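A rough boto3 sketch of option A; the instance identifier, class, storage size, IOPS, and alarm threshold are illustrative, and the master password should come from a secrets store rather than code:

```python
import boto3

rds = boto3.client("rds")
cloudwatch = boto3.client("cloudwatch")

# Provision a MySQL instance with Provisioned IOPS (io1) storage; values are illustrative.
rds.create_db_instance(
    DBInstanceIdentifier="inventory-mysql",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",  # fetch from Secrets Manager in practice
    AllocatedStorage=200,
    StorageType="io1",
    Iops=6000,
)

# Alarm on sustained write latency so the provisioned IOPS can be adjusted if needed.
cloudwatch.put_metric_alarm(
    AlarmName="inventory-mysql-write-latency",
    Namespace="AWS/RDS",
    MetricName="WriteLatency",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "inventory-mysql"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=0.02,  # 20 ms average write latency (metric is reported in seconds)
    ComparisonOperator="GreaterThanThreshold",
)
```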
A company uses AWS Organizations to manage multiple AWS accounts. Each department in the company has its own AWS account. A security team needs to implement centralized governance and control to enforce security best practices across all accounts. The team wants to have control over which AWS services each account can use. The team needs to restrict access to sensitive resources based on IP addresses or geographic regions. The root user must be protected with multi-factor authentication (MFA) across all accounts.
Which solution will meet these requirements?
- A . Use AWS Identity and Access Management (IAM) to manage IAM users and IAM roles in each account. Implement MFA for the root user in each account. Enforce service restrictions by using AWS managed prefix lists.
- B . Use AWS Control Tower to establish a multi-account environment. Use service control policies (SCPs) to enforce service restrictions in AWS Organizations. Configure MFA for the root user across all accounts.
- C . Use AWS Systems Manager to enforce service restrictions across multiple accounts. Use IAM policies to enforce MFA for the root user across all accounts.
- D . Use AWS IAM Identity Center to manage user access and to enforce service restrictions by using permissions boundaries in each account.
B
Explanation:
AWS Control Tower provides a straightforward way to set up and govern a secure, multi-account AWS environment based on AWS best practices. It automates the setup of a baseline environment, or landing zone, that includes:
Service Control Policies (SCPs): These are used to manage permissions across AWS Organizations, allowing you to set permission guardrails. SCPs can restrict access to specific AWS services and actions, helping enforce security best practices.
Multi-Factor Authentication (MFA): AWS Control Tower includes guardrails that check whether MFA is enabled for the root user in each account, helping the security team enforce root user MFA across all accounts.
Centralized Governance: It offers centralized logging and monitoring, making it easier to manage and audit multiple AWS accounts.
Reference: AWS Control Tower User Guide
Service Control Policies
Root user best practices for your AWS account
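AWS Control Tower sets up its guardrails through its own landing zone workflow; as a rough sketch of just the SCP portion using the AWS Organizations API directly, with a hypothetical approved-Region list and OU ID:

```python
import json

import boto3

org = boto3.client("organizations")

# Example SCP: deny actions outside approved Regions, exempting global services.
# The Region list and OU ID below are hypothetical.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
            },
        }
    ],
}

policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Restrict usage to approved Regions",
    Name="approved-regions-only",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach the SCP to an organizational unit that contains the department accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",
)
```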
