Practice Free SAA-C03 Exam Online Questions
A solutions architect is creating a new VPC design. There are two public subnets for the load balancer, two private subnets for web servers, and two private subnets for MySQL. The web servers use only HTTPS. The solutions architect has already created a security group for the load balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the least access required to still be able to perform its tasks.
Which additional configuration strategy should the solutions architect use to meet these requirements?
- A . Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
- B . Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
- C . Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
- D . Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
C
Explanation:
This answer is correct because it follows the principle of least privilege. The load balancer's security group already allows port 443 from 0.0.0.0/0, so it is the only internet-facing resource. The web servers should accept port 443 only from the load balancer's security group, not from the entire internet, and the MySQL servers should accept port 3306 only from the web servers' security group. Referencing security groups as traffic sources, instead of opening CIDR ranges, restricts each tier to exactly the access it needs to perform its task.
Options A and B allow port 443 to the web servers from 0.0.0.0/0, which grants more access than required. Options B and D use network ACLs, which are stateless subnet-level controls that cannot reference a security group as a source, so they cannot express these rules.
Reference: Security groups for your VPC
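The least-privilege chain in option C can be sketched as data, with each tier trusting only the security group of the tier in front of it. The group IDs below are illustrative, not real AWS resources:

```python
# Sketch of the least-privilege security group chain from option C.
# Security group IDs are illustrative placeholders.

ALB_SG, WEB_SG, DB_SG = "sg-alb", "sg-web", "sg-mysql"

security_groups = {
    ALB_SG: [  # already created: HTTPS from anywhere
        {"port": 443, "source": "0.0.0.0/0"},
    ],
    WEB_SG: [  # option C: HTTPS only from the load balancer's security group
        {"port": 443, "source": ALB_SG},
    ],
    DB_SG: [   # option C: MySQL only from the web servers' security group
        {"port": 3306, "source": WEB_SG},
    ],
}

def allows(groups, sg, port, source):
    """Return True if security group `sg` permits `port` from `source`."""
    return any(r["port"] == port and r["source"] == source for r in groups[sg])

# Only the load balancer is exposed to the internet; each tier
# trusts just the tier directly in front of it.
assert allows(security_groups, WEB_SG, 443, ALB_SG)
assert not allows(security_groups, WEB_SG, 443, "0.0.0.0/0")
assert allows(security_groups, DB_SG, 3306, WEB_SG)
```

Referencing a security group ID as the source is what makes this pattern possible; network ACLs accept only CIDR ranges, which is why options B and D cannot implement it.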
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
- A . Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
- B . Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
- C . Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
- D . Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
B
Explanation:
AWS Snowball Edge moves the data offline on a physical device, so the 70 TB migration consumes no network bandwidth, which the other options cannot avoid. For capacity: Snowball provides a total of 50 TB or 80 TB, of which 42 TB or 72 TB is usable, while Snowball Edge provides 100 TB, of which 83 TB is usable, which is enough for the 70 TB data set on a single device.
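A quick back-of-envelope calculation shows why shipping a device wins here. The 1 Gbps figure is an assumed sustained link speed for illustration, not a value from the question:

```python
# Back-of-envelope: time to move 70 TB over a network link.
# The 1 Gbps sustained throughput is an assumption for illustration.

total_bytes = 70 * 10**12            # 70 TB (decimal)
link_bps = 1 * 10**9                 # assumed 1 Gbps sustained
seconds = total_bytes * 8 / link_bps
days = seconds / 86_400
print(f"{days:.1f} days at 1 Gbps")  # roughly 6.5 days of saturated bandwidth
```

Nearly a week of fully saturated bandwidth violates the "least possible network bandwidth" requirement, while a Snowball Edge transfers the same data entirely offline.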
A company’s facility has badge readers at every entrance throughout the building. When badges are scanned, the readers send a message over HTTPS to indicate who attempted to access that particular entrance.
A solutions architect must design a system to process these messages from the sensors. The solution must be highly available, and the results must be made available for the company’s security team to analyze.
Which system architecture should the solutions architect recommend?
- A . Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket.
- B . Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table.
- C . Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the messages and save the results to an Amazon DynamoDB table.
- D . Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor data can be written directly to an S3 bucket by way of the VPC endpoint.
B
Explanation:
Deploy Amazon API Gateway as an HTTPS endpoint and AWS Lambda to process and save the messages to an Amazon DynamoDB table. This option provides a highly available and scalable solution that can easily handle large amounts of data. It also integrates with other AWS services, making it easier to analyze and visualize the data for the security team.
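The Lambda side of option B can be sketched as a small handler that parses the badge-scan payload and writes it to a table. The event shape and the in-memory table stand-in are illustrative; in a real deployment the `table` argument would be a boto3 DynamoDB Table resource:

```python
import json

class InMemoryTable:
    """Local stand-in for a DynamoDB table so the sketch runs anywhere."""
    def __init__(self):
        self.items = []

    def put_item(self, Item):
        self.items.append(Item)

def handler(event, context=None, table=None):
    """Process one badge-scan message delivered by API Gateway."""
    table = table or InMemoryTable()
    record = json.loads(event["body"])   # e.g. {"badge_id": ..., "door": ...}
    table.put_item(Item=record)          # durable store for the security team
    return {"statusCode": 200, "body": json.dumps({"stored": True})}

# Simulated badge-scan message as API Gateway would deliver it
tbl = InMemoryTable()
resp = handler({"body": json.dumps({"badge_id": "B123", "door": "north"})},
               table=tbl)
assert resp["statusCode"] == 200
assert tbl.items[0]["badge_id"] == "B123"
```

Because API Gateway, Lambda, and DynamoDB are all managed and multi-AZ by default, the handler itself is the only code the team maintains.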
A company is designing the network for an online multi-player game. The game uses the UDP networking protocol and will be deployed in eight AWS Regions. The network architecture needs to minimize latency and packet loss to give end users a high-quality gaming experience.
Which solution will meet these requirements?
- A . Set up a transit gateway in each Region. Create inter-Region peering attachments between each transit gateway.
- B . Set up AWS Global Accelerator with UDP listeners and endpoint groups in each Region.
- C . Set up Amazon CloudFront with UDP turned on. Configure an origin in each Region.
- D . Set up a VPC peering mesh between each Region. Turn on UDP for each VPC.
B
Explanation:
The best solution for this situation is option B, setting up AWS Global Accelerator with UDP listeners and endpoint groups in each Region. AWS Global Accelerator is a networking service that improves the availability and performance of internet applications by routing user traffic over the AWS global network to the nearest healthy endpoint. Unlike CloudFront, it supports UDP workloads, providing faster, more reliable transfers with lower latency and less packet loss. With UDP listeners and endpoint groups in each Region, Global Accelerator routes each player to the closest Region for faster response times and a better gaming experience.
A company is migrating an application from on-premises servers to Amazon EC2 instances. As part of the migration design requirements, a solutions architect must implement infrastructure metric alarms. The company does not need to take action if CPU utilization increases to more than 50% for a short burst of time. However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time, the company needs to act as soon as possible. The solutions architect also must reduce false alarms.
What should the solutions architect do to meet these requirements?
- A . Create Amazon CloudWatch composite alarms where possible.
- B . Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.
- C . Create Amazon CloudWatch Synthetics canaries to monitor the application and raise an alarm.
- D . Create single Amazon CloudWatch metric alarms with multiple metric thresholds where possible.
A
Explanation:
Composite alarms determine their states by monitoring the states of other alarms. You can **use composite alarms to reduce alarm noise**. For example, you can create metric alarms that go into ALARM when they meet specific conditions, then set up a composite alarm that goes into ALARM and sends notifications only when all of its underlying metric alarms are in ALARM, while configuring the underlying metric alarms themselves to take no actions.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
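The AND semantics that suppress false alarms can be sketched as a tiny state function. The alarm names in the comment are illustrative; CloudWatch's real syntax is the AlarmRule expression, e.g. `ALARM(cpu-above-50) AND ALARM(read-iops-high)`:

```python
# Sketch of the composite alarm from option A: page only when BOTH
# underlying alarms are in ALARM at the same time. Alarm names used in
# the AlarmRule expression are illustrative placeholders.

def composite_state(cpu_alarm: bool, iops_alarm: bool) -> str:
    """Evaluate an AND rule over the two child alarm states."""
    return "ALARM" if (cpu_alarm and iops_alarm) else "OK"

# A short CPU burst alone does not page anyone (fewer false alarms) ...
assert composite_state(cpu_alarm=True, iops_alarm=False) == "OK"
# ... but CPU > 50% together with high read IOPS triggers action.
assert composite_state(cpu_alarm=True, iops_alarm=True) == "ALARM"
```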
A company has hired a solutions architect to design a reliable architecture for its application. The application consists of one Amazon RDS DB instance and two manually provisioned Amazon EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone.
An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned with the overall reliability of its environment.
What should the solutions architect do to maximize reliability of the application’s infrastructure?
- A . Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection.
- B . Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.
- C . Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
- D . Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.
B
Explanation:
This answer is correct because it meets the requirements of maximizing the reliability of the application’s infrastructure. You can update the DB instance to be Multi-AZ, which means that Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy and minimize latency spikes during system backups.
Running a DB instance with high availability can enhance availability during planned system maintenance. It can also help protect your databases against DB instance failure and Availability Zone disruption. You can also enable deletion protection on the DB instance, which prevents the DB instance from being deleted by any user. You can place the EC2 instances behind an Application Load Balancer, which distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This increases the availability and fault tolerance of your applications.
You can run the EC2 instances in an EC2 Auto Scaling group across multiple Availability Zones, which ensures that you have the correct number of EC2 instances available to handle the load for your application. You can use scaling policies to adjust the number of instances in your Auto Scaling group in response to changing demand.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html#USER_DeleteInstance.DeletionProtection
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.
Which steps should the solutions architect take in conjunction to reach this goal? (Select two.)
- A . Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
- B . Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
- C . Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
- D . Create a new IAM User for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
- E . Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
D, E
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html
A company needs to design a resilient web application to process customer orders. The web application must automatically handle increases in web traffic and application usage without affecting the customer experience or losing customer orders.
Which solution will meet these requirements?
- A . Use a NAT gateway to manage web traffic. Use Amazon EC2 Auto Scaling groups to receive, process, and store processed customer orders. Use an AWS Lambda function to capture and store unprocessed orders.
- B . Use a Network Load Balancer (NLB) to manage web traffic. Use an Application Load Balancer to receive customer orders from the NLB. Use Amazon Redshift with a Multi-AZ deployment to store unprocessed and processed customer orders.
- C . Use a Gateway Load Balancer (GWLB) to manage web traffic. Use Amazon Elastic Container Service (Amazon ECS) to receive and process customer orders. Use the GWLB to capture and store unprocessed orders. Use Amazon DynamoDB to store processed customer orders.
- D . Use an Application Load Balancer to manage web traffic. Use Amazon EC2 Auto Scaling groups to receive and process customer orders. Use Amazon Simple Queue Service (Amazon SQS) to store unprocessed orders. Use Amazon RDS with a Multi-AZ deployment to store processed customer orders.
D
Explanation:
This architecture uses ALB for routing, Auto Scaling for elasticity, SQS to buffer unprocessed orders and decouple services, and RDS Multi-AZ for high availability and durability of transactional data. This ensures resilience and fault tolerance even during traffic spikes.
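The buffering role of SQS in option D can be sketched with a local queue as a stand-in: producers enqueue raw orders immediately, and workers drain them at their own pace, so a traffic spike never drops an order. Real code would use boto3 `send_message`/`receive_message` against an SQS queue:

```python
from queue import Queue

# Local stand-in for the SQS buffer: the web tier enqueues, workers drain.
orders = Queue()

def receive_order(order: dict) -> None:
    """Web tier: accept the order and buffer it immediately."""
    orders.put(order)

def process_next() -> dict:
    """Worker tier: consume one order at its own pace."""
    order = orders.get()
    order["status"] = "processed"   # a real worker would then write to RDS
    return order

# A burst of three orders arrives faster than workers can handle
for i in range(3):
    receive_order({"order_id": i})
assert orders.qsize() == 3          # nothing is lost while workers catch up
assert process_next()["status"] == "processed"
assert orders.qsize() == 2
```

This decoupling is why the queue, rather than the web tier, absorbs traffic spikes without affecting the customer experience.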
Reference: AWS Well-Architected Framework: Decoupled Architectures with SQS and High Availability with RDS
A company has multiple VPCs across AWS Regions to support and run workloads that are isolated from workloads in other Regions. Because of a recent application launch requirement, the company's VPCs must communicate with all other VPCs across all Regions.
Which solution will meet these requirements with the LEAST amount of administrative effort?
- A . Use VPC peering to manage VPC communication in a single Region. Use VPC peering across Regions to manage VPC communications.
- B . Use AWS Direct Connect gateways across all Regions to connect VPCs across regions and manage VPC communications.
- C . Use AWS Transit Gateway to manage VPC communication in a single Region and Transit Gateway peering across Regions to manage VPC communications.
- D . Use AWS PrivateLink across all Regions to connect VPCs across Regions and manage VPC communications.
C
Explanation:
Understanding the Requirement: The company needs to enable communication between VPCs across multiple AWS Regions with minimal administrative effort.
Analysis of Options:
VPC Peering: Managing multiple VPC peering connections across regions is complex and difficult to scale, leading to significant administrative overhead.
AWS Direct Connect Gateways: Primarily used for creating private connections between AWS and on-premises environments, not for inter-VPC communication across regions.
AWS Transit Gateway: Simplifies VPC interconnections within a region and supports Transit Gateway peering for cross-region connectivity, reducing administrative complexity.
AWS PrivateLink: Used for accessing AWS services and third-party services over a private connection, not for inter-VPC communication.
Best Solution:
AWS Transit Gateway with Transit Gateway Peering: This option provides a scalable and efficient solution for managing VPC communications both within a single region and across multiple regions with minimal administrative overhead.
Reference: AWS Transit Gateway
Transit Gateway Peering
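The administrative-effort argument against a full peering mesh is simple combinatorics: every pair of VPCs needs its own peering connection, while a transit gateway needs only one attachment per VPC. The counts below are pure arithmetic, not AWS quotas:

```python
# Why full-mesh VPC peering scales poorly: link counts for n VPCs.

def full_mesh_links(n: int) -> int:
    """Every pair of VPCs needs its own peering connection."""
    return n * (n - 1) // 2

def hub_and_spoke_links(n: int) -> int:
    """Each VPC attaches once to a transit gateway hub."""
    return n

# With, say, 8 VPCs (one per Region in the scenario):
assert full_mesh_links(8) == 28      # 28 peerings to create and maintain
assert hub_and_spoke_links(8) == 8   # 8 attachments plus the hub
```

The mesh grows quadratically as VPCs are added, while the transit gateway model grows linearly, which is the "least administrative effort" in option C.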
A company has a popular gaming platform running on AWS. The application is sensitive to latency because latency can impact the user experience and introduce unfair advantages to some players. The application is deployed in every AWS Region. It runs on Amazon EC2 instances that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs). A solutions architect needs to implement a mechanism to monitor the health of the application and redirect traffic to healthy endpoints.
Which solution meets these requirements?
- A . Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.
- B . Create an Amazon CloudFront distribution and specify the ALB as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
- C . Create an Amazon CloudFront distribution and specify Amazon S3 as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
- D . Configure an Amazon DynamoDB database to serve as the data store for the application. Create a DynamoDB Accelerator (DAX) cluster to act as the in-memory cache for DynamoDB hosting the application data.
A
Explanation:
AWS Global Accelerator performs health checks on its endpoints and directs traffic to the optimal healthy endpoint, routing each client to the closest healthy Region based on geographic location. By configuring an accelerator with a listener for the application's port and attaching the ALB in each Region as an endpoint, the solution monitors the health of the application and redirects traffic away from unhealthy endpoints, reducing latency and keeping the experience consistent and fair for all players.