Practice Free SAA-C03 Exam Online Questions
A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS Organizations. The company’s security team needs a single sign-on (SSO) solution across all the company’s accounts. The company must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory.
Which solution will meet these requirements?
- A . Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
- B . Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
- C . Use AWS Directory Service. Create a two-way trust relationship with the company’s self-managed Microsoft Active Directory.
- D . Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.
A
Explanation:
To provide single sign-on (SSO) across all the company’s accounts while continuing to manage users and groups in its on-premises self-managed Microsoft Active Directory, the solution is to enable AWS Single Sign-On (AWS SSO) from the AWS SSO console and create a one-way forest trust or a one-way domain trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory. This solution is described in the AWS documentation.
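As a minimal sketch of what option A involves, the parameters below are what AWS Directory Service’s CreateTrust API expects for a one-way outgoing forest trust; the directory ID, domain name, and DNS forwarder addresses are placeholders, not values from the question.

```python
# Sketch (not a drop-in script): request parameters for Directory Service's
# CreateTrust API, establishing a one-way forest trust from the AWS Managed
# Microsoft AD to the on-premises forest. All IDs/addresses are placeholders.
create_trust_params = {
    "DirectoryId": "d-1234567890",            # AWS Managed Microsoft AD (placeholder)
    "RemoteDomainName": "corp.example.com",   # on-premises AD forest (placeholder)
    "TrustPassword": "example-trust-secret",  # must match the password set on-premises
    "TrustDirection": "One-Way: Outgoing",    # AWS directory trusts the on-prem forest
    "TrustType": "Forest",
    "ConditionalForwarderIpAddrs": ["10.0.0.10", "10.0.0.11"],  # on-prem DNS (placeholders)
}
# With credentials configured, this would be passed to:
#   boto3.client("ds").create_trust(**create_trust_params)
```

The one-way trust is sufficient here because AWS SSO only needs to look up users and groups in the on-premises directory; the on-premises forest never needs to trust the AWS directory.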
A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control.
Which solution will satisfy these requirements?
- A . Configure Amazon EFS storage and set the Active Directory domain for authentication
- B . Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones
- C . Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume
- D . Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication
D
Explanation:
Amazon FSx for Windows File Server provides fully managed Windows file shares over SMB, supports Multi-AZ deployment for high availability, and integrates natively with Active Directory for access control, which satisfies all of the stated requirements.
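Option D (Amazon FSx for Windows File Server) can be sketched with the parameters that FSx’s CreateFileSystem API takes for a Multi-AZ, AD-joined file system; the directory ID, subnet IDs, and sizing below are illustrative placeholders.

```python
# Sketch: CreateFileSystem parameters for a highly available FSx for Windows
# File Server file system joined to an AWS Managed Microsoft AD.
# All IDs and capacity values are placeholders.
create_fs_params = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 300,                               # GiB (placeholder sizing)
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # two AZs (placeholders)
    "WindowsConfiguration": {
        "ActiveDirectoryId": "d-1234567890",  # placeholder directory ID
        "DeploymentType": "MULTI_AZ_1",       # standby file server in a second AZ
        "ThroughputCapacity": 32,             # MB/s (placeholder)
        "PreferredSubnetId": "subnet-aaaa1111",
    },
}
# boto3.client("fsx").create_file_system(**create_fs_params)
```

The `MULTI_AZ_1` deployment type is what provides the high availability the question asks for: FSx maintains a standby file server in a second Availability Zone and fails over automatically.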
A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the file system.
Which solution will meet these requirements with the LEAST latency? (Select TWO.)
- A . Deploy compute optimized EC2 instances into a cluster placement group.
- B . Deploy compute optimized EC2 instances into a partition placement group.
- C . Attach the EC2 instances to an Amazon FSx for Lustre file system.
- D . Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
- E . Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
A, E
Explanation:
A cluster placement group is a logical grouping of EC2 instances within a single Availability Zone that are placed close together to minimize network latency. This is suitable for latency-sensitive HPC workloads that require high network performance. A compute optimized EC2 instance is an instance type that has a high ratio of vCPUs to memory, which is ideal for compute-intensive applications. Amazon FSx for NetApp ONTAP is a fully managed service that provides NFS and SMB multi-protocol access from the file system, as well as features such as data deduplication, compression, thin provisioning, and snapshots. This solution will meet the requirements with the least latency, as it leverages the low-latency network and storage performance of AWS.
Reference: AWS documentation on cluster placement groups, compute optimized EC2 instance types, and Amazon FSx for NetApp ONTAP.
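The placement-group half of the answer can be sketched with the EC2 CreatePlacementGroup parameters; the group name and instance type below are placeholder choices, not values from the question.

```python
# Sketch: a cluster placement group packs instances close together in one AZ
# to minimize inter-node network latency for the HPC workload.
create_pg_params = {
    "GroupName": "hpc-cluster-pg",  # placeholder name
    "Strategy": "cluster",          # vs. "partition" or "spread"
}
# boto3.client("ec2").create_placement_group(**create_pg_params)

# Instances launched into the group reference it in their Placement settings:
run_instances_placement = {
    "Placement": {"GroupName": "hpc-cluster-pg"},
    "InstanceType": "c6i.16xlarge",  # a compute optimized type (placeholder choice)
}
```

A partition placement group (option B) spreads instances across logical partitions to reduce correlated failures, which trades away the tight packing that minimizes latency; that is why the cluster strategy is the better fit here.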
A company has hired an external vendor to perform work in the company’s AWS account. The vendor uses an automated tool that is hosted in an AWS account that the vendor owns. The vendor does not have IAM access to the company’s AWS account.
How should a solutions architect grant this access to the vendor?
- A . Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the appropriate IAM policies to the role for the permissions that the vendor requires.
- B . Create an IAM user in the company’s account with a password that meets the password complexity requirements. Attach the appropriate IAM policies to the user for the permissions that the vendor requires.
- C . Create an IAM group in the company’s account. Add the tool’s IAM user from the vendor account to the group. Attach the appropriate IAM policies to the group for the permissions that the vendor requires.
- D . Create a new identity provider by choosing “AWS account” as the provider type in the IAM console. Supply the vendor’s AWS account ID and user name. Attach the appropriate IAM policies to the new provider for the permissions that the vendor requires.
A
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
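The heart of option A is the role’s trust policy, which names the vendor’s account as the principal allowed to assume the role. A minimal sketch follows; the account ID and external ID are placeholders (the external ID is an optional but recommended safeguard for third-party access, not something stated in the question).

```python
import json

VENDOR_ACCOUNT_ID = "111122223333"  # placeholder for the vendor's AWS account ID

# Trust policy attached to the role in the company's account: it delegates
# sts:AssumeRole to principals in the vendor's account, optionally gated on
# an external ID the vendor's tool must supply.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "vendor-tool-id"}},  # placeholder
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```

The permissions the vendor needs go in separate identity policies attached to the role; the trust policy only controls who may assume it.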
A company is migrating a legacy application from an on-premises data center to AWS. The application relies on hundreds of cron jobs that run between 1 and 20 minutes on different recurring schedules throughout the day.
The company wants a solution to schedule and run the cron jobs on AWS with minimal refactoring.
The solution must support running the cron jobs in response to an event in the future.
Which solution will meet these requirements?
- A . Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks as AWS Lambda functions.
- B . Create a container image for the cron jobs. Use AWS Batch on Amazon Elastic Container Service (Amazon ECS) with a scheduling policy to run the cron jobs.
- C . Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks on AWS Fargate.
- D . Create a container image for the cron jobs. Create a workflow in AWS Step Functions that uses a Wait state to run the cron jobs at a specified time. Use the RunTask action to run the cron job tasks on AWS Fargate.
C
Explanation:
This solution is the most suitable for running cron jobs on AWS with minimal refactoring, while also supporting the possibility of running jobs in response to future events.
Container Image for Cron Jobs: By containerizing the cron jobs, you can package the environment and dependencies required to run the jobs, ensuring consistency and ease of deployment across different environments.
Amazon EventBridge Scheduler: EventBridge Scheduler allows you to create a recurring schedule that can trigger tasks (like running your cron jobs) at specific times or intervals. It provides fine-grained control over scheduling and integrates seamlessly with AWS services.
AWS Fargate: Fargate is a serverless compute engine for containers that removes the need to manage EC2 instances. It allows you to run containers without worrying about the underlying infrastructure. Fargate is ideal for running jobs that can vary in duration, like cron jobs, as it scales automatically based on the task’s requirements.
Why Not Other Options?
Option A (Lambda): While AWS Lambda could handle short-running cron jobs, it has limitations in terms of execution duration (maximum of 15 minutes) and might not be suitable for jobs that run up to 20 minutes.
Option B (AWS Batch on ECS): AWS Batch is more suitable for batch processing and workloads that require complex job dependencies or orchestration, which might be more than what is needed for simple cron jobs.
Option D (Step Functions with Wait State): While Step Functions provide orchestration capabilities, this approach would introduce unnecessary complexity and overhead compared to the straightforward scheduling with EventBridge and running on Fargate.
Reference:
Amazon EventBridge Scheduler: details on how to schedule tasks using Amazon EventBridge Scheduler.
AWS Fargate: information on how to run containers in a serverless manner using AWS Fargate.
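The EventBridge Scheduler half of option C can be sketched with the CreateSchedule parameters for a cron-style schedule that launches a Fargate task; the schedule name, ARNs, and cron expression are placeholders standing in for one of the hundreds of migrated jobs.

```python
# Sketch: an EventBridge Scheduler schedule that runs an existing containerized
# cron job on Fargate. Names, ARNs, and the cron expression are placeholders.
create_schedule_params = {
    "Name": "nightly-report-job",                # placeholder schedule name
    "ScheduleExpression": "cron(15 2 * * ? *)",  # 02:15 UTC daily, mirroring the crontab entry
    "FlexibleTimeWindow": {"Mode": "OFF"},       # fire at the exact time
    "Target": {
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/cron-jobs",   # placeholder cluster
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-ecs-role",  # placeholder role
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/report-job:1",
            "LaunchType": "FARGATE",
        },
    },
}
# boto3.client("scheduler").create_schedule(**create_schedule_params)
```

Because the schedule expression uses the same cron syntax the jobs already use, each crontab entry maps to one schedule with minimal refactoring, and one-off future-dated schedules (`at(...)` expressions) cover the event-driven requirement.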
A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.
What should a solutions architect recommend?
- A . Deploy Amazon Inspector and associate it with the ALB.
- B . Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
- C . Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
- D . Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
B
Explanation:
This answer is correct because it meets the requirements of blocking the illegitimate incoming requests in a way that has a minimal impact on legitimate users. AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. You can associate AWS WAF with an ALB to protect the web application from malicious requests. You can configure a rate-limiting rule in AWS WAF to track the rate of requests for each originating IP address and block requests from an IP address that exceeds a certain limit within a five-minute period. This way, you can mitigate potential DDoS attacks and improve the performance of your website.
Reference:
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
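The rate-limiting rule from option B has the following shape in a WAFv2 web ACL; the rule name, priority, and request limit are placeholder choices (the limit is per source IP over a rolling five-minute window).

```python
# Sketch: a WAFv2 rate-based rule that blocks any source IP exceeding the
# request limit within a five-minute window. Name, priority, and the limit
# value are placeholders to be tuned against legitimate traffic levels.
rate_limit_rule = {
    "Name": "rate-limit-per-ip",
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,            # max requests per 5-minute window per IP (placeholder)
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rateLimitPerIp",
    },
}
# Passed inside Rules=[...] to boto3.client("wafv2").create_web_acl(...),
# with the web ACL then associated with the ALB's ARN.
```

Because only IPs that exceed the threshold are blocked, and they are unblocked once their rate drops, the impact on legitimate users stays minimal, which is exactly what the question asks for.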
A company uses AWS to run its e-commerce platform, which is critical to its operations and experiences a high volume of traffic and transactions. The company has configured a multi-factor authentication (MFA) device to secure its AWS account root user credentials. The company wants to ensure that it will not lose access to the root user account if the MFA device is lost.
Which solution will meet these requirements?
- A . Set up a backup administrator account that the company can use to log in if the company loses the MFA device.
- B . Add multiple MFA devices for the root user account to handle the disaster scenario.
- C . Create a new administrator account when the company cannot access the root account.
- D . Attach the administrator policy to another IAM user when the company cannot access the root account.
B
Explanation:
AWS supports registering multiple MFA devices (up to eight) for the root user. Adding a second MFA device preserves root access if one device is lost, without weakening the security of the root account the way a standing backup administrator account would.
A company hosts a frontend application that uses an Amazon API Gateway API backend that is integrated with AWS Lambda. When the API receives requests, the Lambda function loads many libraries. Then the Lambda function connects to an Amazon RDS database, processes the data, and returns the data to the frontend application. The company wants to ensure that response latency is as low as possible for all its users with the fewest number of changes to the company’s operations.
Which solution will meet these requirements?
- A . Establish a connection between the frontend application and the database to make queries faster by bypassing the API
- B . Configure provisioned concurrency for the Lambda function that handles the requests
- C . Cache the results of the queries in Amazon S3 for faster retrieval of similar datasets.
- D . Increase the size of the database to increase the number of connections Lambda can establish at one time
B
Explanation:
Configure provisioned concurrency for the Lambda function that handles the requests. Provisioned concurrency keeps a requested number of execution environments initialized and ready to respond, so the cost of loading the many libraries is paid ahead of time instead of on user requests. Caching query results in Amazon S3 could also reduce latency for repeated queries, but it would require more changes and would not address the initialization latency. Increasing the size of the database would not reduce that latency either, and connecting the frontend directly to the database would bypass the API, which is a much larger operational change.
Using AWS Lambda with Amazon API Gateway – AWS Lambda
https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
AWS Lambda FAQs
https://aws.amazon.com/lambda/faqs/
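Option B amounts to a single configuration call; a sketch of its parameters follows, where the function name, alias, and concurrency value are placeholders.

```python
# Sketch: parameters for Lambda's PutProvisionedConcurrencyConfig API.
# Provisioned concurrency must target a published version or alias
# (the Qualifier), not $LATEST. All values below are placeholders.
pc_params = {
    "FunctionName": "api-backend",           # placeholder function name
    "Qualifier": "live",                     # placeholder alias pointing at a version
    "ProvisionedConcurrentExecutions": 100,  # environments kept warm (placeholder sizing)
}
# boto3.client("lambda").put_provisioned_concurrency_config(**pc_params)
```

Requests beyond the provisioned count still execute, but on on-demand environments that may incur an initialization delay, so the value is typically sized to the expected steady-state request rate.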
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.
Which solution will meet these requirements?
- A . Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
- B . Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
- C . Create an Amazon FSx File Gateway to increase the company’s storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
- D . Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
B
Explanation:
Amazon S3 File Gateway is a service that provides a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols such as SMB. S3 File Gateway can also cache frequently accessed data locally for low-latency access. S3 Lifecycle policy is a feature that allows you to define rules that automate the management of your objects throughout their lifecycle. You can use S3 Lifecycle policy to transition objects to different storage classes based on their age and access patterns. S3 Glacier Deep Archive is a storage class that offers the lowest cost for long-term data archiving, with a retrieval time of 12 hours for standard retrievals or 48 hours for bulk retrievals, which satisfies the 24-hour maximum for standard retrievals. This solution will meet the requirements, as it allows the company to store large files in S3 with SMB file access, and to move the files to S3 Glacier Deep Archive after 7 days for cost savings and compliance.
Reference:
Reference: AWS documentation on Amazon S3 File Gateway, S3 Lifecycle policies, and the S3 Glacier Deep Archive storage class.