Practice Free SAA-C03 Exam Online Questions
A company is developing a new application that uses a relational database to store user data and application configurations. The company expects the application to have steady user growth. The company expects the database usage to be variable and read-heavy, with occasional writes. The company wants to cost-optimize the database solution. The company wants to use an AWS managed database solution that will provide the necessary performance.
Which solution will meet these requirements MOST cost-effectively?
- A . Deploy the database on Amazon RDS. Use Provisioned IOPS SSD storage to ensure consistent performance for read and write operations.
- B . Deploy the database on Amazon Aurora Serverless to automatically scale the database capacity based on actual usage to accommodate the workload.
- C . Deploy the database on Amazon DynamoDB. Use on-demand capacity mode to automatically scale throughput to accommodate the workload.
- D . Deploy the database on Amazon RDS. Use magnetic storage and use read replicas to accommodate the workload.
B
Explanation:
Amazon Aurora Serverless is a cost-effective, on-demand, autoscaling configuration for Amazon Aurora. It automatically adjusts the database’s capacity based on the current demand, which is ideal for workloads with variable and unpredictable usage patterns. Since the application is expected to be read-heavy with occasional writes and steady growth, Aurora Serverless can provide the necessary performance without requiring the management of database instances.
Cost-Optimization: Aurora Serverless only charges for the database capacity you use, making it a more cost-effective solution compared to always running provisioned database instances, especially for workloads with fluctuating demand.
Scalability: It automatically scales database capacity up or down based on actual usage, ensuring that you always have the right amount of resources available.
Performance: Aurora Serverless is built on the same underlying storage as Amazon Aurora, providing high performance and availability.
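As a rough illustration of how such a database might be provisioned, the following boto3 sketch creates an Aurora MySQL cluster with Serverless v2 scaling; the identifiers, credentials, and capacity range are placeholder assumptions, not values from the question.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Cluster with Serverless v2 scaling: capacity floats between 0.5 and 8 ACUs
# as the variable, read-heavy workload demands (the range is illustrative).
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 8},
)

# A db.serverless instance scales within the capacity range defined on the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-instance-1",
    DBClusterIdentifier="app-aurora-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```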
Why Not Other Options?
Option A (RDS with Provisioned IOPS SSD): While Provisioned IOPS SSD ensures consistent performance, it is generally more expensive and less flexible compared to the autoscaling nature of Aurora Serverless.
Option C (DynamoDB with On-Demand Capacity): DynamoDB is a NoSQL database and may not be the best fit for applications requiring relational database features.
Option D (RDS with Magnetic Storage and Read Replicas): Magnetic storage is a previous-generation option and is generally slower. While read replicas help with read-heavy workloads, magnetic storage does not provide the necessary performance.
Reference: Amazon Aurora Serverless - Information on how Aurora Serverless works and its use cases.
Amazon Aurora Pricing - Details on the cost-effectiveness of Aurora Serverless.
A company wants to migrate 100 GB of historical data from an on-premises location to an Amazon S3 bucket. The company has a 100 megabits per second (Mbps) internet connection on premises. The company needs to encrypt the data in transit to the S3 bucket. The company will store new data directly in Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket.
- B . Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket.
- C . Use AWS Snowball to move the data to an S3 bucket.
- D . Set up an IPsec VPN from the on-premises location to AWS. Use the s3 cp command in the AWS CLI to move the data directly to an S3 bucket.
B
Explanation:
AWS DataSync is a data transfer service that makes it easy for you to move large amounts of data online between on-premises storage and AWS storage services over the internet or AWS Direct Connect. DataSync automatically encrypts your data in transit using TLS encryption, and verifies data integrity during transfer using checksums. DataSync can transfer data up to 10 times faster than open-source tools, and reduces operational overhead by simplifying and automating tasks such as scheduling, monitoring, and resuming transfers.
Reference: https://aws.amazon.com/datasync/
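For context, a minimal boto3 sketch of the DataSync flow might look like the following; it assumes an on-premises DataSync agent is already activated and that the source is an NFS share, and every ARN, hostname, and bucket name is a placeholder.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Source: the on-premises share, reached through the activated DataSync agent.
src = datasync.create_location_nfs(
    ServerHostname="fileserver.onprem.example.com",
    Subdirectory="/export/historical",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-EXAMPLE"]},
)

# Destination: the S3 bucket, written through an IAM role that DataSync assumes.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-migration-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/DataSyncS3Role"},
)

# Create and start the transfer task; data is encrypted in transit with TLS by default.
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="historical-data-migration",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```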
A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified, and the system cannot run on more than one instance. A solutions architect must design a resilient solution that can improve the recovery time for the system.
What should the solutions architect recommend to meet these requirements?
- A . Enable termination protection for the EC2 instance.
- B . Configure the EC2 instance for Multi-AZ deployment.
- C . Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
- D . Launch the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID configurations for storage redundancy.
C
Explanation:
To design a resilient solution that can improve the recovery time for the system, a solutions architect should recommend creating an Amazon CloudWatch alarm to recover the EC2 instance in case of failure. This solution has the following benefits:
It allows the EC2 instance to be automatically recovered when a system status check failure occurs, such as loss of network connectivity, loss of system power, software issues on the physical host, or hardware issues on the physical host that impact network reachability1.
It preserves the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata of the original instance. A recovered instance is identical to the original instance, except for any data that is in-memory, which is lost during the recovery process1.
It does not require any modification of the application code or the EC2 instance configuration. The
solutions architect can create a CloudWatch alarm using the AWS Management Console, the AWS
CLI, or the CloudWatch API2.
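A minimal boto3 sketch of such an alarm is shown below; the instance ID and Region are placeholders, and the alarm action uses the EC2 automatic recovery action.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Recover the instance when the system status check fails for two consecutive minutes.
cloudwatch.put_metric_alarm(
    AlarmName="recover-legacy-instance",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```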
Reference:
1: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
2: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html#ec2-instance-recover-create-alarm
A solutions architect is designing a highly available Amazon ElastiCache for Redis based solution. The solutions architect needs to ensure that failures do not result in performance degradation or loss of data locally and within an AWS Region. The solution needs to provide high availability at the node level and at the Region level.
Which solution will meet these requirements?
- A . Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
- B . Use Redis shards that contain multiple nodes with Redis append-only files (AOF) turned on.
- C . Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
- D . Use Redis shards that contain multiple nodes with Auto Scaling turned on.
A
Explanation:
This answer is correct because it provides high availability at the node level and at the Region level for the ElastiCache for Redis solution. A Multi-AZ Redis replication group consists of a primary cluster and up to five read replica clusters, each in a different Availability Zone. If the primary cluster fails, one of the read replicas is automatically promoted to be the new primary cluster. A Redis replication group with shards enables partitioning of the data across multiple nodes, which increases the scalability and performance of the solution. Each shard can have one or more replicas to provide redundancy and read scaling.
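As an illustration, the following boto3 sketch creates a Multi-AZ replication group with two shards and two replicas per shard; the node type, counts, and identifiers are assumptions chosen for the example.

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Sharded (cluster mode enabled) replication group with automatic failover and Multi-AZ.
elasticache.create_replication_group(
    ReplicationGroupId="app-redis",
    ReplicationGroupDescription="Multi-AZ sharded Redis for the application",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=2,             # number of shards
    ReplicasPerNodeGroup=2,      # replicas per shard, placed in other Availability Zones
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
```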
Reference:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Shards.html
A company hosts a database that runs on an Amazon RDS instance deployed to multiple Availability Zones. A periodic script negatively affects a critical application by querying the database.
How can application performance be improved with minimal costs?
- A . Add functionality to the script to identify the instance with the fewest active connections and query that instance.
- B . Create a read replica of the database. Configure the script to query only the read replica.
- C . Instruct the development team to manually export new entries at the end of the day.
- D . Use Amazon ElastiCache to cache the common queries the script runs.
B
Explanation:
Option A introduces complexity and does not scale well.
Option B creates a read replica, offloading read traffic from the primary RDS instance without impacting the critical application.
Option C is manual and inefficient.
Option D might help for caching frequently queried data but is not ideal for ad-hoc reporting. Therefore, Option B is the best choice.
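A minimal boto3 sketch of this approach is shown below; the instance identifiers are placeholders, and the periodic script would then be pointed at the replica's endpoint instead of the primary.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the primary instance (identifiers are illustrative).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica",
    SourceDBInstanceIdentifier="production-db",
)

# Wait until the replica is available, then look up its endpoint for the script to use.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="reporting-replica")
replica = rds.describe_db_instances(DBInstanceIdentifier="reporting-replica")
print("Point the periodic script at:", replica["DBInstances"][0]["Endpoint"]["Address"])
```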
A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private
subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
- B . Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
- C . Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
- D . Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.
D
Explanation:
A Gateway Load Balancer in the inspection VPC, paired with a Gateway Load Balancer endpoint in the application's VPC, transparently forwards traffic to the firewall appliance for inspection before it reaches the web servers, and it requires the least operational overhead because AWS manages the load balancer and the endpoint.
https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-network-traffic-inspection-using-aws-gateway-load-balancer/
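The following boto3 sketch shows the general shape of that setup: a Gateway Load Balancer in the inspection VPC, an endpoint service for it, and a Gateway Load Balancer endpoint in the application's VPC; every VPC, subnet, and name value is a placeholder assumption.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway Load Balancer in the inspection VPC (in front of the firewall appliance).
gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb",
    Type="gateway",
    Subnets=["subnet-0inspection0000000"],
)

# Expose the GWLB as an endpoint service so other VPCs can consume it.
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[gwlb["LoadBalancers"][0]["LoadBalancerArn"]],
    AcceptanceRequired=False,
)

# Gateway Load Balancer endpoint in the application VPC; route tables then send
# ingress traffic through this endpoint before it reaches the web servers.
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0application000000",
    ServiceName=svc["ServiceConfiguration"]["ServiceName"],
    SubnetIds=["subnet-0apppublic0000000"],
)
```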
A solutions architect is designing a two-tiered architecture that includes a public subnet and a database subnet. The web servers in the public subnet must be open to the internet on port 443. The Amazon RDS for MySQL DB instance in the database subnet must be accessible only to the web servers on port 3306.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
- A . Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on port 3306.
- B . Create a security group for the DB instance. Add a rule to allow traffic from the public subnet CIDR block on port 3306.
- C . Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
- D . Create a security group for the DB instance. Add a rule to allow traffic from the web servers' security group on port 3306.
- E . Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the web servers' security group on port 3306.
B, C
Explanation:
Security groups are virtual firewalls that protect AWS instances and can be applied to EC2, ELB, and RDS. Security groups have rules for inbound and outbound traffic and are stateful, meaning that responses to allowed inbound traffic are allowed to flow out of the instance. Network ACLs are different from security groups in several ways. They cover entire subnets, not individual instances, and are stateless, meaning that they require rules for both inbound and outbound traffic. Network ACLs also support deny rules, while security groups only support allow rules.
To meet the requirements of the scenario, the solutions architect should create two security groups: one for the DB instance and one for the web servers in the public subnet. The security group for the DB instance should allow traffic from the public subnet CIDR block on port 3306, which is the default port for MySQL. This way, only the web servers in the public subnet can access the DB instance on that port. The security group for the web servers should allow traffic from 0.0.0.0/0 on port 443, which is the default port for HTTPS. This way, the web servers can accept secure connections from the internet on that port.
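A boto3 sketch of these two security groups is shown below; the VPC ID and the public subnet CIDR block are assumed placeholder values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

web_sg = ec2.create_security_group(
    GroupName="web-sg", Description="Web servers", VpcId="vpc-0example000000000"
)
db_sg = ec2.create_security_group(
    GroupName="db-sg", Description="MySQL DB instance", VpcId="vpc-0example000000000"
)

# Allow HTTPS from the internet to the web servers (option C).
ec2.authorize_security_group_ingress(
    GroupId=web_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Allow MySQL only from the public subnet CIDR block (option B); 10.0.1.0/24 is assumed.
ec2.authorize_security_group_ingress(
    GroupId=db_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "IpRanges": [{"CidrIp": "10.0.1.0/24"}],
    }],
)
```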
A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The company also wants to minimize the cost and configuration effort required to operate the volume encryption check.
Which solution will meet these requirements?
- A . Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use Amazon EventBridge to schedule an AWS Lambda function to run the API calls.
- B . Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run the API calls on an AWS Fargate task.
- C . Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes. Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt the untagged resources manually.
- D . Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the volume if it is not encrypted.
D
Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. By creating a Config rule, you can automatically check whether your Amazon EBS volumes are encrypted and flag those that are not, with minimal cost and configuration effort.
AWS Config Rule: AWS Config provides managed rules that you can use to automatically check the compliance of your resources against predefined or custom criteria. In this case, you would create a rule to evaluate EBS volumes and determine if they are encrypted. If a volume is not encrypted, the rule will flag it, allowing you to take corrective action.
Operational Overhead: This approach significantly reduces operational overhead because once the rule is in place, it continuously monitors your EBS volumes for compliance, and there’s no need for manual checks or custom scripting.
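A minimal boto3 sketch of this check, using the AWS managed encrypted-volumes rule and assuming AWS Config is already recording resources in the account, follows:

```python
import boto3

config = boto3.client("config", region_name="us-east-1")

# Managed rule that flags any EBS volume that is not encrypted.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```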
Why Not Other Options?
Option A (Lambda with API calls and EventBridge): While this can work, it involves writing and maintaining custom code, which increases operational overhead compared to using a managed AWS Config rule.
Option B (API calls on Fargate): Running API calls on Fargate is more complex and costly compared to using AWS Config, which provides a simpler, managed solution.
Option C (IAM policy with Cost Explorer): This option does not directly enforce encryption compliance and involves manual intervention, making it less efficient and more prone to errors.
Reference: AWS Config Rules - Overview of AWS Config rules and how they can be used to evaluate resource configurations.
Amazon EBS Encryption - Information on how to manage and enforce encryption for EBS volumes.
A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require a basic validation before they are sent for further processing.
The backend processing application is long running and requires compute and memory to be adjusted. The company does not want to manage the infrastructure.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
- B . Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
- C . Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
- D . Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.
D
Explanation:
This option is the best solution because it allows the company to run its payment application on AWS with minimal operational overhead and infrastructure management. By using Amazon API Gateway, the company can create a secure and scalable API to receive payment notifications from mobile devices. By using AWS Lambda, the company can run a serverless function to validate the payment notifications and send them to the backend application. Lambda handles the provisioning, scaling, and security of the function, reducing the operational complexity and cost. By using Amazon ECS with AWS Fargate, the company can run the backend application on a fully managed container service that does not require any EC2 instances to manage. Each Fargate task runs with the CPU and memory specified in its task definition, so the backend's compute and memory can be adjusted simply by updating the task size.
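As a rough sketch of the validation tier, a Lambda function behind API Gateway (proxy integration) might look like the following; the field names, queue URL, and the hand-off through an SQS queue consumed by the ECS on Fargate service are illustrative assumptions rather than details from the question.

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/payments"  # placeholder


def lambda_handler(event, context):
    """Basic validation of a payment notification received through API Gateway."""
    body = json.loads(event.get("body") or "{}")

    # The required fields are assumptions for the sake of the example.
    required = ("payment_id", "amount", "currency")
    if not all(field in body for field in required):
        return {"statusCode": 400, "body": json.dumps({"error": "missing required fields"})}

    # Hand the validated notification to the long-running backend on ECS Fargate.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}
```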
A company is running a critical business application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances run in an Auto Scaling group and access an Amazon RDS DB instance. The design did not pass an operational review because the EC2 instances and the DB instance are all located in a single Availability Zone. A solutions architect must update the design to use a second Availability Zone.
Which solution will make the application highly available?
- A . Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
- B . Provision two subnets that extend across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
- C . Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
- D . Provision a subnet that extends across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
C
Explanation:
Subnets cannot span Availability Zones, so a subnet must be provisioned in each Availability Zone, and configuring the DB instance for Multi-AZ deployment provisions a synchronous standby in the second Availability Zone.
https://aws.amazon.com/vpc/faqs/#:~:text=Can%20a%20subnet%20span%20Availability,within%20a%20single%20Availability%20Zone.
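A minimal boto3 sketch of enabling Multi-AZ on the existing DB instance is shown below; the instance identifier is a placeholder, and ApplyImmediately applies the change without waiting for the next maintenance window.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Convert the single-AZ DB instance to a Multi-AZ deployment with a standby in a second AZ.
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",
    MultiAZ=True,
    ApplyImmediately=True,
)
```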