Practice Free SAA-C03 Exam Online Questions
A company has a three-tier web application that processes orders from customers. The web tier consists of Amazon EC2 instances behind an Application Load Balancer. The processing tier consists of EC2 instances. The company decoupled the web tier and processing tier by using Amazon Simple Queue Service (Amazon SQS). The storage layer uses Amazon DynamoDB.
At peak times, some users report order processing delays and halts. The company has noticed that during these delays, the EC2 instances are running at 100% CPU usage and the SQS queue fills up. The peak times are variable and unpredictable.
The company needs to improve the performance of the application.
Which solution will meet these requirements?
- A . Use scheduled scaling for Amazon EC2 Auto Scaling to scale out the processing tier instances for the duration of peak usage times. Use the CPU Utilization metric to determine when to scale.
- B . Use Amazon ElastiCache for Redis in front of the DynamoDB backend tier. Use target utilization as a metric to determine when to scale.
- C . Add an Amazon CloudFront distribution to cache the responses for the web tier. Use HTTP latency as a metric to determine when to scale.
- D . Use an Amazon EC2 Auto Scaling target tracking policy to scale out the processing tier instances. Use the ApproximateNumberOfMessages attribute to determine when to scale.
D
Explanation:
The issue in this case is related to the processing tier, where EC2 instances are overwhelmed at peak times, causing delays.
Option D, using an Amazon EC2 Auto Scaling target tracking policy based on the ApproximateNumberOfMessages attribute of the SQS queue, is the best solution.
Auto Scaling with Target Tracking:
Target tracking policies dynamically scale out or in based on a specific metric. For this use case, you can monitor the ApproximateNumberOfMessages attribute of the SQS queue. When the number of messages (orders) in the queue increases, the Auto Scaling group scales out more EC2 instances to handle the additional load, ensuring that the queue doesn't build up and cause delays.
This solution is ideal for handling variable and unpredictable peak times, as Auto Scaling can automatically adjust based on real-time load rather than scheduled times.
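The recommended pattern here is to scale on the backlog per instance, not on the raw queue depth. A minimal sketch, assuming hypothetical names (the "orders-queue" backlog metric namespace and the "processing-tier-asg" Auto Scaling group are placeholders), of how the custom metric and the target tracking policy parameters could be assembled before being passed to the EC2 Auto Scaling `PutScalingPolicy` API:

```python
# Sketch (not a definitive implementation): target tracking for the processing
# tier, driven by SQS backlog per instance. You would publish the computed
# metric to CloudWatch, then attach the policy below via boto3's
# autoscaling.put_scaling_policy. All resource names are hypothetical.

def backlog_per_instance(approximate_number_of_messages: int,
                         in_service_instances: int) -> float:
    """Backlog per instance: queued messages divided by running instances."""
    return approximate_number_of_messages / max(in_service_instances, 1)

# Parameters one might pass to autoscaling.put_scaling_policy (boto3).
scaling_policy = {
    "AutoScalingGroupName": "processing-tier-asg",  # hypothetical ASG name
    "PolicyName": "sqs-backlog-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",  # custom metric we publish
            "Namespace": "OrderProcessing",      # hypothetical namespace
            "Statistic": "Average",
        },
        # e.g. each instance can work off 100 queued orders within the SLA
        "TargetValue": 100.0,
    },
}
```

The target value is the acceptable backlog per instance, derived from how many messages one instance can process within the latency the business tolerates.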
Why Not the Other Options?
Option A (Scheduled Scaling): Scheduled scaling works well for predictable peak times, but this company experiences unpredictable peak usage, making scheduled scaling less effective.
Option B (ElastiCache for Redis): Adding a caching layer would help if DynamoDB were the bottleneck, but in this case, the issue is CPU overload on EC2 instances in the processing tier.
Option C (CloudFront): CloudFront would help cache static content from the web tier, but it wouldn't resolve the issue of the processing tier's overloaded EC2 instances.
Reference: Amazon EC2 Auto Scaling Target Tracking
Amazon SQS ApproximateNumberOfMessages
A company has an AWS Lambda function that needs read access to an Amazon S3 bucket that is located in the same AWS account.
Which solution will meet this requirement in the MOST secure manner?
- A . Apply an S3 bucket policy that grants read access to the S3 bucket.
- B . Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to the S3 bucket.
- C . Embed an access key and a secret key in the Lambda function's code to grant the required IAM permissions for read access to the S3 bucket.
- D . Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to all S3 buckets in the account.
B
Explanation:
This option is the most secure because it follows the principle of least privilege and grants only the necessary permissions to the Lambda function without exposing any credentials in the code. The IAM role can be configured as the Lambda function's execution role, and the IAM policy can specify the S3 bucket ARN and the s3:GetObject action.
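A minimal sketch of such a least-privilege policy document, scoped to a single hypothetical bucket ("example-orders-bucket" is a placeholder), as it would be attached to the function's execution role:

```python
# Sketch: an IAM policy granting read access to exactly one S3 bucket.
# The bucket name is hypothetical; attach this to the Lambda execution role.

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            # Object-level permission: note the /* suffix on the ARN
            "Resource": "arn:aws:s3:::example-orders-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],  # optional: lets the function list keys
            # Bucket-level permission: no /* suffix
            "Resource": "arn:aws:s3:::example-orders-bucket",
        },
    ],
}
```

Note the asymmetry: s3:GetObject applies to object ARNs (with `/*`), while s3:ListBucket applies to the bucket ARN itself, which is a common source of access-denied errors.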
Option A is less secure because it grants read access to any principal that has access to the S3 bucket, which could be more than the Lambda function.
Option C is less secure because it embeds credentials in the code, which could be compromised or exposed.
Option D is less secure because it grants read access to all S3 buckets in the account, which could be more than what the Lambda function needs.
A company has applications that run on Amazon EC2 instances. The EC2 instances connect to Amazon RDS databases by using an IAM role that has associated policies. The company wants to use AWS Systems Manager to patch the EC2 instances without disrupting the running applications.
Which solution will meet these requirements?
- A . Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the new IAM role to the EC2 instances and the existing IAM role.
- B . Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure Systems Manager to use the IAM user to manage the EC2 instances.
- C . Enable Default Host Management Configuration in Systems Manager to manage the EC2 instances.
- D . Remove the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore policy to the existing IAM role.
C
Explanation:
The most suitable solution for the company's requirements is to enable Default Host Management Configuration in Systems Manager to manage the EC2 instances. This solution allows the company to patch the EC2 instances without disrupting the running applications and without manually creating or modifying IAM roles or users.
Default Host Management Configuration is a feature of AWS Systems Manager that enables Systems Manager to manage EC2 instances automatically as managed instances. A managed instance is an EC2 instance that is configured for use with Systems Manager. The benefits of managing instances with Systems Manager include the following:
Connect to EC2 instances securely using Session Manager.
Perform automated patch scans using Patch Manager.
View detailed information about instances using Systems Manager Inventory.
Track and manage instances using Fleet Manager.
Keep SSM Agent up to date automatically.
Default Host Management Configuration makes it possible to manage EC2 instances without having to manually create an IAM instance profile. Instead, Default Host Management Configuration creates and applies a default IAM role to ensure that Systems Manager has permissions to manage all instances in the Region and account where it is activated. If the permissions provided are not sufficient for the use case, the default IAM role can be modified or replaced with a custom role.
The other options are not correct because they either have more operational overhead or do not meet the requirements.
Option A is not correct because creating a new IAM role, attaching the AmazonSSMManagedInstanceCore policy to it, and attaching both the new and existing IAM roles to the EC2 instances requires manual creation and management of IAM roles, which adds complexity and cost to the solution. The AmazonSSMManagedInstanceCore policy is a managed policy that grants permissions for Systems Manager core functionality.
Option B is not correct because creating an IAM user, attaching the AmazonSSMManagedInstanceCore policy to it, and configuring Systems Manager to use the IAM user to manage the EC2 instances requires manual creation and management of IAM users, which adds complexity and cost to the solution. An IAM user is an identity within an AWS account that has specific permissions for a single person or application.
Option D is not correct because removing the existing policies from the existing IAM role may disrupt the running applications that rely on those policies for accessing the RDS databases. An IAM role is an identity within an AWS account that has specific permissions for a service or entity.
Reference: AWS managed policy: AmazonSSMManagedInstanceCore
IAM users
IAM roles
Default Host Management Configuration – AWS Systems Manager
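Default Host Management Configuration is activated per Region through the Systems Manager `UpdateServiceSetting` API. A minimal sketch of the request, assuming the setting ID and default role name documented by AWS at the time of writing (verify both against the current documentation before use):

```python
# Sketch: the parameters one might pass to boto3's ssm.update_service_setting
# to turn on Default Host Management Configuration. The setting ID and role
# name follow AWS docs as of this writing and should be verified; the
# account-specific role could also be a custom role with broader permissions.

dhmc_setting = {
    "SettingId": "/ssm/managed-instance/default-ec2-instance-management-role",
    "SettingValue": (
        "service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole"
    ),
}
```

Once activated, instances in the Region pick up the default role automatically, which is why no instance profile changes are needed and the existing RDS-access role is left untouched.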
A company runs a website that uses a content management system (CMS) on Amazon EC2. The CMS runs on a single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an Amazon Elastic Block Store (Amazon EBS) volume that is mounted inside the EC2 instance.
Which combination of actions should a solutions architect take to improve the performance and resilience of the website? (Select TWO.)
- A . Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance.
- B . Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the other EC2 instances.
- C . Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on every EC2 instance.
- D . Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the website.
- E . Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.
C, E
Explanation:
Option C moves the website images onto an Amazon EFS file system that is mounted on every EC2 instance. Amazon EFS provides a scalable, fully managed file storage solution that can be accessed concurrently from multiple EC2 instances. This ensures that all instances can access the website images efficiently and consistently, improving performance. In Option E, the Auto Scaling group maintains a minimum of two instances, ensuring resilience by automatically replacing any unhealthy instances. Additionally, configuring an Amazon CloudFront distribution for the website further improves performance by caching content at edge locations closer to the end users, reducing latency and improving content delivery. Combining these actions improves the website's performance and resilience through efficient image storage and content delivery.
A company wants to create an Amazon EMR cluster that multiple teams will use. The company wants to ensure that each team’s big data workloads can access only the AWS services that each team needs to interact with. The company does not want the workloads to have access to Instance Metadata Service Version 2 (IMDSv2) on the cluster’s underlying EC2 instances.
Which solution will meet these requirements?
- A . Configure interface VPC endpoints for each AWS service that the teams need. Use the required interface VPC endpoints to submit the big data workloads.
- B . Create EMR runtime roles. Configure the cluster to use the runtime roles. Use the runtime roles to submit the big data workloads.
- C . Create an EC2 IAM instance profile that has the required permissions for each team. Use the instance profile to submit the big data workloads.
- D . Create an EMR security configuration that has the EnableApplicationScopedIAMRole option set to false. Use the security configuration to submit the big data workloads.
B
Explanation:
EMR runtime roles allow fine-grained permissions per job, letting each team access only the services they are authorized to use. This isolates IAM permissions per workload and avoids exposing instance-level credentials through IMDSv2. Runtime roles improve security posture in multi-tenant EMR environments.
Reference: AWS Documentation – EMR Runtime Roles and Access Isolation
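With runtime roles enabled on the cluster, each team passes its own execution role when submitting a step. A sketch of the request, assuming hypothetical cluster ID, role ARN, and job script (these map to boto3's `emr.add_job_flow_steps`, which accepts an `ExecutionRoleArn` parameter when the cluster uses a runtime-role security configuration):

```python
# Sketch: submitting a Spark step under a team-specific EMR runtime role.
# Cluster ID, account ID, role name, and S3 paths are hypothetical.

step_request = {
    "JobFlowId": "j-EXAMPLE12345",  # hypothetical EMR cluster ID
    # The runtime role scopes this job's AWS access to team A's permissions
    "ExecutionRoleArn": "arn:aws:iam::111122223333:role/team-a-emr-runtime-role",
    "Steps": [
        {
            "Name": "team-a-spark-job",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-bucket/jobs/etl.py"],
            },
        }
    ],
}
```

Because the job's credentials come from the runtime role rather than the instance profile, workloads do not need to read credentials from IMDSv2 on the underlying instances.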
A company is building a containerized application on premises and decides to move the application to AWS. The application will have thousands of users soon after it is deployed. The company is unsure how to manage the deployment of containers at scale. The company needs to deploy the containerized application in a highly available architecture that minimizes operational overhead.
Which solution will meet these requirements?
- A . Store container images In an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type to run the containers. Use target tracking to scale automatically based on demand.
- B . Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the Amazon EC2 launch type to run the containers. Use target tracking to scale automatically based on demand.
- C . Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC2 instances that are spread across multiple Availability Zones. Monitor the average CPU utilization in Amazon CloudWatch. Launch new EC2 instances as needed.
- D . Create an Amazon EC2 Amazon Machine Image (AMI) that contains the container image. Launch EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon CloudWatch alarm to scale out EC2 instances when the average CPU utilization threshold is breached.
A
Explanation:
AWS Fargate is a serverless compute engine for containers that lets users concentrate on building applications instead of configuring and managing servers. Fargate also automates resource management, allowing users to easily scale their applications in response to demand.
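The Fargate-plus-target-tracking setup can be sketched as two requests: one creating the ECS service with the Fargate launch type, and one attaching a CPU-based target tracking policy through Application Auto Scaling. Cluster, service, and target values below are hypothetical placeholders:

```python
# Sketch: ECS service on Fargate (boto3: ecs.create_service) plus a target
# tracking policy (boto3: application-autoscaling put_scaling_policy).
# All names and the 60% CPU target are hypothetical.

service_request = {
    "cluster": "web-cluster",
    "serviceName": "web-service",
    "taskDefinition": "web-task:1",
    "launchType": "FARGATE",      # no EC2 instances to manage
    "desiredCount": 2,            # baseline for high availability
}

scaling_policy = {
    "ServiceNamespace": "ecs",
    # ResourceId format for ECS: service/<cluster>/<service>
    "ResourceId": "service/web-cluster/web-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # keep average CPU near 60%
    },
},
```

Before attaching the policy, the service's DesiredCount must be registered as a scalable target (`register_scalable_target` with min/max capacity); that call is omitted here for brevity.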
A company is storing 700 terabytes of data on a large network-attached storage (NAS) system in its corporate data center. The company has a hybrid environment with a 10 Gbps AWS Direct Connect connection.
After an audit from a regulator, the company has 90 days to move the data to the cloud. The company needs to move the data efficiently and without disruption. The company still needs to be able to access and update the data during the transfer window.
Which solution will meet these requirements?
- A . Create an AWS DataSync agent in the corporate data center. Create a data transfer task. Start the transfer to an Amazon S3 bucket.
- B . Back up the data to AWS Snowball Edge Storage Optimized devices. Ship the devices to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
- C . Use rsync to copy the data directly from local storage to a designated Amazon S3 bucket over the Direct Connect connection.
- D . Back up the data on tapes. Ship the tapes to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
A
Explanation:
This answer is correct because it meets the requirements of moving the data efficiently and without disruption, and still being able to access and update the data during the transfer window. AWS DataSync is an online data movement and discovery service that simplifies and accelerates data migrations to AWS and helps you move data quickly and securely between on-premises storage, edge locations, other clouds, and AWS Storage. You can create an AWS DataSync agent in the corporate data center to connect your NAS system to AWS over the Direct Connect connection. You can create a data transfer task to specify the source location, destination location, and options for transferring the data. You can start the transfer to an Amazon S3 bucket and monitor the progress of the task. DataSync automatically encrypts data in transit and verifies data integrity during transfer.
DataSync also supports incremental transfers, which means that only files that have changed since the last transfer are copied. This way, you can ensure that your data is synchronized between your NAS system and S3 bucket, and you can access and update the data during the transfer window.
Reference:
https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html
https://docs.aws.amazon.com/datasync/latest/userguide/how-datasync-works.html
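The DataSync flow described above boils down to three requests: a source location pointing at the NAS (here sketched as an NFS share), a destination location pointing at S3, and a task connecting the two. All ARNs, hostnames, and bucket names below are hypothetical placeholders for boto3's `datasync.create_location_nfs`, `create_location_s3`, and `create_task` calls:

```python
# Sketch: the request bodies for a DataSync NAS-to-S3 migration.
# Every identifier here is hypothetical.

nfs_location = {
    "ServerHostname": "nas.corp.example.com",   # the on-premises NAS
    "Subdirectory": "/export/data",
    "OnPremConfig": {
        "AgentArns": [
            "arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"
        ]
    },
}

s3_location = {
    "S3BucketArn": "arn:aws:s3:::example-migration-bucket",
    "S3Config": {
        "BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-role"
    },
}

task_request = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-dst",
    "Options": {
        "VerifyMode": "ONLY_FILES_TRANSFERRED",  # integrity check on transferred files
        "TransferMode": "CHANGED",               # incremental: copy only changed files
    },
}
```

The `CHANGED` transfer mode is what allows repeated task executions during the 90-day window to pick up files the company updates on the NAS while the migration is in progress.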
A company wants to migrate its on-premises data center to AWS. According to the company’s compliance requirements, the company can use only the ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)
- A . Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-northeast-3.
- B . Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
- C . Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all AWS Regions except ap-northeast-3.
- D . Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the use of any AWS Region other than ap-northeast-3.
- E . Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside of ap-northeast-3.
A, C
Explanation:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_vpc.html#example_vpc_2
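An SCP combining both controls could be sketched as below. This is a simplified illustration, not a production-ready policy: the region-deny statement usually needs a fuller exemption list for global services, and the internet-deny list here names only a few representative EC2 actions:

```python
# Sketch: a service control policy denying (1) all Regions except
# ap-northeast-3 and (2) actions that would give a VPC internet access.
# The NotAction exemptions and the denied action list are illustrative,
# following the pattern in the AWS Organizations SCP examples.

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOtherRegions",
            "Effect": "Deny",
            # Exempt global services that are not Region-scoped
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["ap-northeast-3"]}
            },
        },
        {
            "Sid": "DenyVpcInternetAccess",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:CreateEgressOnlyInternetGateway",
            ],
            "Resource": "*",
        },
    ],
}
```

Because SCPs apply account-wide through Organizations, this enforces the controls even for administrators, unlike per-user IAM policies or network ACLs that can be edited within the account.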
A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
- A . Configure the application to send the data to Amazon Kinesis Data Firehose.
- B . Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
- C . Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application’s API for the data.
- D . Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application’s API for the data.
- E . Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email.
B, D
Explanation:
https://docs.aws.amazon.com/ses/latest/dg/send-email-formatted.html
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application’s API for the data. This step can be done using AWS Lambda to extract the shipping statistics and organize the data into an HTML format.
B. Use Amazon Simple Email Service (Amazon SES) to format the data and send the report by email. This step can be done by using Amazon SES to send the report to multiple email addresses at the same time every morning.
Therefore, options D and B are the correct choices for this question.
Option A is incorrect because Kinesis Data Firehose is not necessary for this use case.
Option C is incorrect because AWS Glue is not required to query the application’s API.
Option E is incorrect because S3 event notifications cannot be used to send the report by email.
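The D-plus-B combination could be sketched as a single Lambda handler that the EventBridge schedule invokes each morning: fetch the statistics, render an HTML table, and pass it to SES. The API URL, sender, and recipient addresses below are hypothetical; the request shape matches boto3's SES `send_email` call:

```python
# Sketch: the body of the scheduled Lambda function. Fetching from the REST
# API is omitted; we assume it returns rows like {"region": ..., "shipped": ...}.
# Addresses and subject line are placeholders.

def build_html_report(rows):
    """Render shipping statistics as a minimal HTML table."""
    body = "".join(
        f"<tr><td>{r['region']}</td><td>{r['shipped']}</td></tr>" for r in rows
    )
    return f"<table><tr><th>Region</th><th>Shipped</th></tr>{body}</table>"

def build_ses_request(html):
    """Parameters for ses.send_email -- one call reaches all recipients."""
    return {
        "Source": "reports@example.com",
        "Destination": {
            "ToAddresses": ["ops@example.com", "mgmt@example.com"]
        },
        "Message": {
            "Subject": {"Data": "Daily shipping report"},
            "Body": {"Html": {"Data": html}},
        },
    }
```

The EventBridge side is just a cron-style rule (for example, a daily schedule expression) with this Lambda function as its target.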
A company is building a mobile app on AWS. The company wants to expand its reach to millions of users. The company needs to build a platform so that authorized users can watch the company’s content on their mobile devices
What should a solutions architect recommend to meet these requirements?
- A . Publish content to a public Amazon S3 bucket. Use AWS Key Management Service (AWS KMS) keys to stream content.
- B . Set up IPsec VPN between the mobile app and the AWS environment to stream content
- C . Use Amazon CloudFront. Provide signed URLs to stream content.
- D . Set up AWS Client VPN between the mobile app and the AWS environment to stream content.
C
Explanation:
Amazon CloudFront is a content delivery network (CDN) that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. CloudFront supports signed URLs that provide authorized access to your content. This feature allows the company to control who can access their content and for how long, providing a secure and scalable solution for millions of users.
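At its core, a signed URL encodes a policy naming the resource and an expiry time, which is then signed with the private key of a trusted key group (for example via `botocore.signers.CloudFrontSigner`). A sketch of the canned policy only, with a hypothetical distribution domain and expiry timestamp; the signing step itself is omitted:

```python
# Sketch: the canned policy that a CloudFront signed URL is built from.
# The distribution domain and epoch expiry are hypothetical.

import json

def canned_policy(url: str, expires_epoch: int) -> str:
    """Canned policy: grant access to one URL until the given epoch time."""
    policy = {
        "Statement": [
            {
                "Resource": url,
                "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
            }
        ]
    }
    # CloudFront expects the compact JSON form (no whitespace)
    return json.dumps(policy, separators=(",", ":"))

policy_json = canned_policy(
    "https://d111111abcdef8.cloudfront.net/video.m3u8", 1735689600
)
```

After this expiry passes, the URL stops working, which is how the company limits each authorized viewer's access window without managing VPNs or per-user infrastructure.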