Practice Free SAA-C03 Exam Online Questions
A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in the AWS Cloud. The application will transmit data by using UDP packets. The company wants to ensure that the application can scale out and in as traffic increases and decreases.
What should a solutions architect do to meet these requirements?
- A . Attach a Network Load Balancer to the Auto Scaling group
- B . Attach an Application Load Balancer to the Auto Scaling group.
- C . Deploy an Amazon Route 53 record set with a weighted policy to route traffic appropriately
- D . Deploy a NAT instance that is configured with port forwarding to the EC2 instances in the Auto Scaling group.
A
Explanation:
This solution meets the requirements of running a gaming application that transmits data by using UDP packets and scaling out and in as traffic increases and decreases. A Network Load Balancer can handle millions of requests per second while maintaining high throughput at ultra low latency, and it supports both TCP and UDP protocols. An Auto Scaling group can automatically adjust the number of EC2 instances based on the demand and the scaling policies.
Option B is incorrect because an Application Load Balancer operates at the application layer and supports only HTTP and HTTPS; it cannot route UDP traffic.
Option C is incorrect because Amazon Route 53 is a DNS service that can route traffic based on different policies, but it does not provide load balancing or scaling capabilities.
Option D is incorrect because a NAT instance is used to enable instances in a private subnet to connect to the internet or other AWS services, but it does not provide load balancing or scaling capabilities.
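To illustrate option A, the following boto3 sketch wires the pieces together: a Network Load Balancer with a UDP listener, a UDP target group, and the target group attached to an existing Auto Scaling group. All resource names, subnet IDs, the VPC ID, and the game port are hypothetical placeholders, not values from the question.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Hypothetical identifiers -- replace with real values.
SUBNET_IDS = ["subnet-0abc1234", "subnet-0def5678"]
VPC_ID = "vpc-0123456789abcdef0"
ASG_NAME = "game-servers-asg"
GAME_PORT = 7777  # UDP port the game clients use

# 1. Create an internet-facing Network Load Balancer.
nlb = elbv2.create_load_balancer(
    Name="game-udp-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=SUBNET_IDS,
)["LoadBalancers"][0]

# 2. Create a UDP target group for the EC2 instances.
tg = elbv2.create_target_group(
    Name="game-udp-targets",
    Protocol="UDP",
    Port=GAME_PORT,
    VpcId=VPC_ID,
    TargetType="instance",
    HealthCheckProtocol="TCP",  # NLB health checks cannot use UDP
)["TargetGroups"][0]

# 3. Forward UDP traffic on the game port to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="UDP",
    Port=GAME_PORT,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# 4. Attach the target group to the Auto Scaling group so instances
#    register and deregister automatically as the group scales.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName=ASG_NAME,
    TargetGroupARNs=[tg["TargetGroupArn"]],
)
```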
Reference:
https://aws.amazon.com/blogs/aws/new-udp-load-balancing-for-network-load-balancer/
https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html
A company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The company needs to retain data for 90 days to meet regulatory requirements. The company must also be able to restore the database to a specific point in time for up to 14 days.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create Amazon RDS automated backups. Set the retention period to 90 days.
- B . Create an Amazon RDS manual snapshot every day. Delete manual snapshots that are older than 90 days.
- C . Use the Amazon Aurora Clone feature for Oracle to create a point-in-time restore. Delete clones that are older than 90 days
- D . Create a backup plan that has a retention period of 90 days by using AWS Backup for Amazon RDS.
D
Explanation:
AWS Backup is the most appropriate solution for managing backups with minimal operational overhead while meeting the regulatory requirement to retain data for 90 days and enabling point-in-time restore for up to 14 days.
AWS Backup: AWS Backup provides a centralized backup management solution that supports automated backup scheduling, retention management, and compliance reporting across AWS services, including Amazon RDS. By creating a backup plan, you can define a retention period (in this case, 90 days) and automate the backup process.
Point-in-Time Restore (PITR): AWS Backup supports continuous backups for Amazon RDS, which enable point-in-time restore for up to 35 days. This covers the requirement to restore the database to a specific point in time within the last 14 days, while the backup plan's 90-day retention rule satisfies the regulatory requirement.
Why Not Other Options?
Option A (RDS Automated Backups): While RDS automated backups support PITR, they do not directly support retention beyond 35 days without manual intervention.
Option B (Manual Snapshots): Manually creating and managing snapshots is operationally intensive and less automated compared to AWS Backup.
Option C (Aurora Clones): Aurora Clone is a feature specific to Amazon Aurora and is not applicable to Amazon RDS for Oracle.
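A minimal boto3 sketch of such a backup plan is shown below. The vault name, schedule, IAM role ARN, and database ARN are hypothetical; the key points are a continuous backup rule for point-in-time restore and a daily snapshot rule with a 90-day lifecycle, both assigned to the RDS for Oracle instance.

```python
import boto3

backup = boto3.client("backup")

# Hypothetical ARNs -- replace with real values.
BACKUP_ROLE_ARN = "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole"
RDS_ARN = "arn:aws:rds:us-east-1:123456789012:db:oracle-prod"

# 1. Backup plan with two rules: continuous backups for 14-day PITR,
#    and daily snapshots retained for 90 days.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-oracle-backup-plan",
        "Rules": [
            {
                "RuleName": "continuous-pitr-14-days",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "EnableContinuousBackup": True,
                "Lifecycle": {"DeleteAfterDays": 14},
            },
            {
                "RuleName": "daily-snapshot-90-days",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC daily
                "Lifecycle": {"DeleteAfterDays": 90},
            },
        ],
    }
)

# 2. Assign the RDS for Oracle instance to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "oracle-db",
        "IamRoleArn": BACKUP_ROLE_ARN,
        "Resources": [RDS_ARN],
    },
)
```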
Reference:
AWS Backup - Overview of AWS Backup and its capabilities.
Amazon RDS Automated Backups - Information on how RDS automated backups work and their limitations.
A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The company’s employees report issues with high latency when they begin using the application each day. The company wants to reduce latency.
Which solution will meet these requirements?
- A . Increase the API Gateway throttling limit.
- B . Set up a scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the application each day.
- C . Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at the beginning of each day.
- D . Increase the Lambda function memory.
B
Explanation:
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda scales automatically based on the incoming requests, but it may take some time to initialize new instances of your function if there is a sudden increase in demand. This may result in high latency or cold starts for your application. To avoid this, you can use provisioned concurrency, which ensures that your function is initialized and ready to respond at any time. You can also set up a scheduled scaling policy that increases the provisioned concurrency before employees begin to use the application each day, and decreases it when the demand is low.
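A sketch of such a scheduled scaling configuration with Application Auto Scaling is shown below. The function name, alias, schedule, and capacity values are hypothetical; provisioned concurrency must be configured on a published version or an alias, not on $LATEST.

```python
import boto3

app_scaling = boto3.client("application-autoscaling")

# Hypothetical function alias -- provisioned concurrency applies to a
# published version or alias, referenced here as "prod".
RESOURCE_ID = "function:internal-app:prod"
DIMENSION = "lambda:function:ProvisionedConcurrency"

# 1. Register the alias as a scalable target.
app_scaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=RESOURCE_ID,
    ScalableDimension=DIMENSION,
    MinCapacity=1,
    MaxCapacity=100,
)

# 2. Warm the function up shortly before the workday starts...
app_scaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="warm-up-before-work",
    ResourceId=RESOURCE_ID,
    ScalableDimension=DIMENSION,
    Schedule="cron(45 7 ? * MON-FRI *)",  # 07:45 UTC, Monday-Friday
    ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 50},
)

# 3. ...and scale back down in the evening so the company does not pay
#    for idle provisioned concurrency overnight.
app_scaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="scale-down-after-work",
    ResourceId=RESOURCE_ID,
    ScalableDimension=DIMENSION,
    Schedule="cron(0 19 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
)
```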
Reference: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
A solutions architect is designing an application that helps users fill out and submit registration forms. The solutions architect plans to use a two-tier architecture that includes a web application server tier and a worker tier.
The application needs to process submitted forms quickly. The application needs to process each form exactly once. The solution must ensure that no data is lost.
Which solution will meet these requirements?
- A . Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue between the web application server tier and the worker tier to store and forward form data.
- B . Use an Amazon API Gateway HTTP API between the web application server tier and the worker tier to store and forward form data.
- C . Use an Amazon Simple Queue Service (Amazon SQS) standard queue between the web application server tier and the worker tier to store and forward form data.
- D . Use an AWS Step Functions workflow. Create a synchronous workflow between the web application server tier and the worker tier that stores and forwards form data.
A
Explanation:
To process each form exactly once and ensure no data is lost, using an Amazon SQS FIFO (First-In-First-Out) queue is the most appropriate solution. SQS FIFO queues guarantee that messages are processed in the exact order they are sent and ensure that each message is processed exactly once. This ensures data consistency and reliability, both of which are crucial for processing user-submitted forms without data loss.
SQS acts as a buffer between the web application server and the worker tier, ensuring that submitted forms are stored reliably and forwarded to the worker tier for processing. This also decouples the application, improving its scalability and resilience.
Option B (API Gateway): API Gateway is better suited for API management rather than acting as a message queue for form processing.
Option C (SQS Standard Queue): While SQS Standard queues offer high throughput, they do not guarantee exactly-once processing or the strict ordering needed for this use case.
Option D (Step Functions): Step Functions are useful for orchestrating workflows but add unnecessary complexity for simple message queuing and form processing.
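As an illustration of the FIFO queue approach, the boto3 sketch below shows the queue configuration, a web-tier producer, and a worker-tier consumer. The queue name, message group, and form payload are hypothetical placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")

# 1. FIFO queue with content-based deduplication, so a form that is
#    accidentally submitted twice with the same body is stored only once.
queue_url = sqs.create_queue(
    QueueName="registration-forms.fifo",  # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# 2. Web tier: store and forward a submitted form.
form = {"form_id": "f-1001", "email": "user@example.com"}
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps(form),
    MessageGroupId="registration-forms",  # messages in a group are delivered in order
)

# 3. Worker tier: process each form, then delete it so it is handled exactly once.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
).get("Messages", [])
for msg in messages:
    form_data = json.loads(msg["Body"])  # application-specific processing goes here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```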
Reference:
Amazon SQS FIFO Queues
Decoupling Application Tiers Using Amazon SQS
A company needs to provide customers with secure access to its data. The company processes customer data and stores the results in an Amazon S3 bucket.
All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each customer must be able to access only their data from their AWS account. Company employees must not be able to access the data.
Which solution will meet these requirements?
- A . Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
- B . Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the customer provides.
- C . Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
- D . Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the public certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
C
Explanation:
The correct solution is to provision a separate AWS KMS key for each customer and encrypt the data server-side. This way, the company can use the S3 encryption feature to protect the data at rest and delegate the control of the encryption keys to the customers. The customers can then use their own IAM roles to access and decrypt their data. The company employees will not be able to access the data because they are not authorized by the KMS key policies.
The other options are incorrect because:
Options A and D use ACM certificates to encrypt the data client-side. ACM certificates are intended for TLS/SSL on AWS services, not for encrypting data at rest, so this adds complexity and overhead to the encryption process. Moreover, the company would have to manage a certificate and its policy for each customer, which is neither scalable nor secure.
Option B uses a separate KMS key for each customer but relies on the S3 bucket policy to control decryption access. This is not a secure solution because the bucket policy does not govern who can use the KMS key: company employees who are granted permission to use the key could still decrypt the data, and a bucket-level policy is harder to scope correctly per customer than a per-key policy. The KMS key policy is the authoritative place to restrict decryption to the IAM role that each customer provides.
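A simplified boto3 sketch of this pattern for one customer is shown below. The account IDs, role ARNs, bucket name, and object key are hypothetical. Because KMS key policies deny by default, granting kms:Decrypt only to the customer's role means no other principal, including company employees and the company's own processing role, can decrypt the objects.

```python
import json
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Hypothetical identifiers -- replace with real values.
COMPANY_ACCOUNT_ID = "111122223333"
CUSTOMER_ROLE_ARN = "arn:aws:iam::444455556666:role/customer-data-access"
PROCESSOR_ROLE_ARN = "arn:aws:iam::111122223333:role/data-processing"  # writes results
BUCKET = "customer-results-bucket"

# 1. Per-customer KMS key. The key policy lets the company administer the
#    key and lets its processing role encrypt (write) data, but grants
#    decryption only to the customer's role.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{COMPANY_ACCOUNT_ID}:root"},
            "Action": [
                "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:List*",
                "kms:Put*", "kms:Update*", "kms:Revoke*", "kms:Disable*",
                "kms:Get*", "kms:Delete*", "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion",
            ],
            "Resource": "*",
        },
        {
            "Sid": "ProcessingWriteOnly",
            "Effect": "Allow",
            "Principal": {"AWS": PROCESSOR_ROLE_ARN},
            "Action": ["kms:GenerateDataKey", "kms:Encrypt"],  # no kms:Decrypt
            "Resource": "*",
        },
        {
            "Sid": "CustomerDecryptOnly",
            "Effect": "Allow",
            "Principal": {"AWS": CUSTOMER_ROLE_ARN},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}
key_id = kms.create_key(
    Description="Per-customer key for account 444455556666",
    Policy=json.dumps(key_policy),
)["KeyMetadata"]["KeyId"]

# 2. Server-side encrypt the customer's objects with their key (SSE-KMS).
s3.put_object(
    Bucket=BUCKET,
    Key="customer-444455556666/results.csv",
    Body=b"processed,data\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)
```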
Reference: S3 encryption
KMS key policies
ACM certificates
A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the company’s account.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
- B . Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
- C . Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
- D . Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
C
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html
"For example, you can create an EventBridge rule that detects when the AMI creation process has completed and then invokes an Amazon SNS topic to send an email notification to you."
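A minimal boto3 sketch of this pattern is shown below. The rule name and SNS topic ARN are hypothetical, the topic's access policy must allow events.amazonaws.com to publish, and the account needs a CloudTrail trail recording management events so the CreateImage call is delivered to EventBridge.

```python
import json
import boto3

events = boto3.client("events")

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ami-creation-alerts"  # hypothetical

# 1. Match CloudTrail records for the EC2 CreateImage API call.
events.put_rule(
    Name="alert-on-create-image",
    EventPattern=json.dumps(
        {
            "source": ["aws.ec2"],
            "detail-type": ["AWS API Call via CloudTrail"],
            "detail": {
                "eventSource": ["ec2.amazonaws.com"],
                "eventName": ["CreateImage"],
            },
        }
    ),
    State="ENABLED",
)

# 2. Send the matched event to the SNS topic that alerts the team.
events.put_targets(
    Rule="alert-on-create-image",
    Targets=[{"Id": "sns-alert", "Arn": SNS_TOPIC_ARN}],
)
```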
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share. Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically throughout the day.
Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)
- A . Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
- B . Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
- C . Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
- D . Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3. Provide access to analysts.
- E . Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
- F . Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
A, C, F
Explanation:
To meet the requirements of the use case in a cost-effective way, the following steps are recommended:
Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode. This will allow the company to write the .csv files generated by the devices to an SMB file share, which will be stored as objects in Amazon S3 buckets. AWS Storage Gateway is a hybrid cloud storage service that integrates on-premises environments with AWS storage. Amazon S3 File Gateway mode provides a seamless way to connect to Amazon S3 and access a virtually unlimited amount of cloud storage.
Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3. This will enable the company to use standard SQL to query the data stored in Amazon S3 buckets. AWS Glue is a serverless data integration service that simplifies data preparation and analysis. AWS Glue crawlers can automatically discover and classify data from various sources, and create metadata tables in the AWS Glue Data Catalog. The Data Catalog is a central repository that stores information about data sources and how to access them.
Set up Amazon Athena to query the data that is in Amazon S3. This will provide the company analysts with a serverless and interactive query service that can analyze data directly in Amazon S3 using standard SQL. Amazon Athena is integrated with the AWS Glue Data Catalog, so users can easily point Athena at the data source tables defined by the crawlers. Amazon Athena charges only for the queries that are run, and offers a pay-per-query pricing model, which makes it a cost-effective option for periodic queries.
The other options are not correct because they are either not cost-effective or not suitable for the use case. Deploying an AWS Storage Gateway on premises in Amazon FSx File Gateway mode is not correct because this mode provides low-latency access to fully managed Windows file shares in AWS, which is not required for the use case. Setting up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3 is not correct because this option involves setting up and managing a cluster of EC2 instances, which adds complexity and cost to the solution. Setting up an Amazon Redshift cluster to query the data that is in Amazon S3 is not correct because this option also involves provisioning and managing a cluster of nodes, which adds overhead and cost to the solution.
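The Glue and Athena pieces of this pipeline could be set up roughly as sketched below with boto3. The bucket paths, database, table, crawler role, and query are hypothetical; the table name is whatever the crawler assigns based on the S3 prefix.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Hypothetical names -- replace with real values.
DATA_PATH = "s3://research-device-data/csv/"       # objects written via S3 File Gateway
RESULTS_PATH = "s3://research-athena-results/"     # Athena query output location
GLUE_ROLE_ARN = "arn:aws:iam::123456789012:role/GlueCrawlerRole"

# 1. Crawler that discovers the .csv files and creates a table in the
#    Glue Data Catalog.
glue.create_crawler(
    Name="device-data-crawler",
    Role=GLUE_ROLE_ARN,
    DatabaseName="device_data",
    Targets={"S3Targets": [{"Path": DATA_PATH}]},
)
glue.start_crawler(Name="device-data-crawler")

# 2. Analysts query the cataloged table with standard SQL in Athena
#    (here "device_readings" is a placeholder for the crawler-created table).
athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) FROM device_readings GROUP BY device_id",
    QueryExecutionContext={"Database": "device_data"},
    ResultConfiguration={"OutputLocation": RESULTS_PATH},
)
```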
Reference:
What is AWS Storage Gateway?
What is AWS Glue?
AWS Glue Data Catalog
What is Amazon Athena?
A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce startup latency for Lambda functions that run on Java 11. The company does not have strict latency requirements for the applications. The company wants to reduce cold starts and outlier latencies when a function scales up.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure Lambda provisioned concurrency.
- B . Increase the timeout of the Lambda functions.
- C . Increase the memory of the Lambda functions.
- D . Configure Lambda SnapStart.
D
Explanation:
To reduce startup latency for Lambda functions that run on Java 11, Lambda SnapStart is a suitable solution. SnapStart initializes the function once when a version is published, takes an encrypted snapshot of the initialized execution environment, and caches it. When the function scales up, new execution environments are resumed from the cached snapshot instead of being initialized from scratch, which significantly reduces cold starts and outlier latencies. Unlike provisioned concurrency, SnapStart for Java incurs no additional charge, making it the most cost-effective option here.
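As an illustration, SnapStart could be enabled on an existing Java 11 function with a couple of API calls, as sketched below. The function name is hypothetical, and SnapStart takes effect only on published versions (or aliases that point to them), not on $LATEST.

```python
import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "order-event-handler"  # hypothetical Java 11 function

# 1. Turn on SnapStart for published versions of the function.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Wait for the configuration update to finish before publishing.
lambda_client.get_waiter("function_updated_v2").wait(FunctionName=FUNCTION_NAME)

# 2. Publish a version. Lambda initializes the function once, snapshots the
#    initialized execution environment, and resumes new environments from
#    that snapshot on subsequent invocations.
version = lambda_client.publish_version(FunctionName=FUNCTION_NAME)
print("Invoke this version (or an alias pointing to it):", version["Version"])
```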
Reference: Lambda SnapStart for Java 11 Functions
Lambda SnapStart FAQs
A company has a multi-tier application deployed on several Amazon EC2 instances in an Auto Scaling group. An Amazon RDS for Oracle instance is the application’s data layer that uses Oracle-specific PL/SQL functions. Traffic to the application has been steadily increasing. This is causing the EC2 instances to become overloaded and the RDS instance to run out of storage. The Auto Scaling group does not have any scaling metrics and defines the minimum healthy instance count only. The company predicts that traffic will continue to increase at a steady but unpredictable rate before levelling off.
What should a solutions architect do to ensure the system can automatically scale for the increased traffic? (Select TWO.)
- A . Configure storage Auto Scaling on the RDS for Oracle Instance.
- B . Migrate the database to Amazon Aurora to use Auto Scaling storage.
- C . Configure an alarm on the RDS for Oracle Instance for low free storage space
- D . Configure the Auto Scaling group to use the average CPU as the scaling metric
- E . Configure the Auto Scaling group to use the average free memory as the scaling metric
A, D
Explanation:
Enabling storage Auto Scaling on the RDS for Oracle instance resolves the storage constraint without a migration; moving the Oracle-specific PL/SQL code to Aurora would be cumbersome, even though Aurora scales storage automatically by default. Configuring the Auto Scaling group to scale on average CPU lets the EC2 tier grow and shrink with the increasing traffic. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling
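A sketch of both changes with boto3, using hypothetical resource names and thresholds, might look like the following: raising MaxAllocatedStorage enables RDS storage Auto Scaling, and a target tracking policy scales the EC2 tier on average CPU.

```python
import boto3

rds = boto3.client("rds")
autoscaling = boto3.client("autoscaling")

# 1. Enable storage Auto Scaling on the RDS for Oracle instance by setting
#    a maximum storage threshold (in GiB) above the current allocation.
rds.modify_db_instance(
    DBInstanceIdentifier="oracle-prod",   # hypothetical instance identifier
    MaxAllocatedStorage=2000,
    ApplyImmediately=True,
)

# 2. Scale the EC2 tier on average CPU with a target tracking policy.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",  # hypothetical Auto Scaling group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add or remove instances to hold ~50% average CPU
    },
)
```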
A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. The copied files should be overwritten only if the source file changes.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
- B . Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
- C . Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
- D . Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the mounted file system.
A
Explanation:
AWS DataSync is a service that makes it easy to move large amounts of data between AWS storage services and on-premises storage systems. AWS DataSync can copy files from an S3 bucket to an EFS file system and another S3 bucket continuously, as well as overwrite only the files that have changed in the source. This solution will meet the requirements with the least operational overhead, as it does not require any code development or manual intervention.
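A sketch of one of the DataSync tasks (source S3 bucket to the EFS file system) is shown below; a second task with the same options would point at the destination S3 bucket. The ARNs, subnet, security group, and schedule are hypothetical. The key setting is the CHANGED transfer mode, which copies only data that differs from the destination, so files are overwritten only when the source changes.

```python
import boto3

datasync = boto3.client("datasync")

# Hypothetical ARNs -- replace with real values.
SOURCE_BUCKET_ARN = "arn:aws:s3:::origin-bucket"
S3_ACCESS_ROLE_ARN = "arn:aws:iam::123456789012:role/DataSyncS3Access"
EFS_ARN = "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0abc1234"
SUBNET_ARN = "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-0abc1234"
SG_ARN = "arn:aws:ec2:us-east-1:123456789012:security-group/sg-0abc1234"

# 1. Source location: the original S3 bucket.
src = datasync.create_location_s3(
    S3BucketArn=SOURCE_BUCKET_ARN,
    S3Config={"BucketAccessRoleArn": S3_ACCESS_ROLE_ARN},
)

# 2. Destination location: the EFS file system.
dst = datasync.create_location_efs(
    EfsFilesystemArn=EFS_ARN,
    Ec2Config={"SubnetArn": SUBNET_ARN, "SecurityGroupArns": [SG_ARN]},
)

# 3. Task that runs on a schedule and transfers only changed data.
datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="s3-to-efs-sync",
    Options={"TransferMode": "CHANGED", "OverwriteMode": "ALWAYS"},
    Schedule={"ScheduleExpression": "rate(1 hour)"},
)
```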
Reference:
Creating AWS DataSync locations for different storage services.
Creating and configuring AWS DataSync tasks for data transfer.
Transfer modes that AWS DataSync supports.