Practice Free SAA-C03 Exam Online Questions
A company is launching a new gaming application. The company will use Amazon EC2 Auto Scaling groups to deploy the application. The application stores user data in a relational database.
The company has office locations around the world that need to run analytics on the user data in the database. The company needs a cost-effective database solution that provides cross-Region disaster recovery with low-latency read performance across AWS Regions.
Which solution will meet these requirements?
- A . Create an Amazon ElastiCache for Redis cluster in the Region where the application is deployed. Create read replicas in Regions where the company offices are located. Ensure the company offices read from the read replica instances.
- B . Create Amazon DynamoDB global tables. Deploy the tables to the Regions where the company offices are located and to the Region where the application is deployed. Ensure that each company office reads from the tables that are in the same Region as the office.
- C . Create an Amazon Aurora global database. Configure the primary cluster to be in the Region where the application is deployed. Configure the secondary Aurora replicas to be in the Regions where the company offices are located. Ensure the company offices read from the Aurora replicas.
- D . Create an Amazon RDS Multi-AZ DB cluster deployment in the Region where the application is deployed. Ensure the company offices read from read replica instances.
C
Explanation:
An Amazon Aurora global database has a primary cluster in one Region and read-only secondary clusters in other Regions, replicated at the storage layer with typically sub-second lag. Offices read from the Aurora replicas in their nearest Region for low-latency analytics, and a secondary Region can be promoted for disaster recovery. ElastiCache (A) is a cache rather than a relational database, DynamoDB (B) is not relational, and a Multi-AZ DB cluster (D) provides high availability only within a single Region.
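As a rough illustration, the boto3 sketch below promotes an existing regional Aurora cluster into a global database and attaches a read-only secondary cluster in another Region. The cluster identifiers, ARN, and Regions are hypothetical placeholders, not values from the question.

```python
import boto3

# Hypothetical identifiers for illustration only
PRIMARY_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:game-db"

rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="game-global-db",
    SourceDBClusterIdentifier=PRIMARY_ARN,  # existing cluster becomes the writer
)

# Attach a secondary (read-only) cluster in a Region near one office
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="game-db-eu",
    Engine="aurora-postgresql",
    GlobalClusterIdentifier="game-global-db",  # joins the global database as a reader
)
```

Each office's analytics tooling would then point at the reader endpoint of the secondary cluster in its own Region.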
A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the AWS Cloud. The data transfer must be complete within 2 weeks. The data is sensitive and must be encrypted in transit. The company’s internet connection can support an upload speed of 100 Mbps.
Which solution meets these requirements MOST cost-effectively?
- A . Use Amazon S3 multi-part upload functionality to transfer the files over HTTPS.
- B . Create a VPN connection between the on-premises NAS system and the nearest AWS Region. Transfer the data over the VPN connection.
- C . Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to Amazon S3.
- D . Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
C
Explanation:
The best option is to use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices and use the devices to transfer the data to Amazon S3. Snowball Edge is a petabyte-scale data transfer appliance that encrypts data at rest, so the sensitive data stays protected while it moves. At 100 Mbps, uploading 600 TB over the internet would take roughly 556 days at full line rate, far beyond the 2-week deadline, so shipping several Snowball Edge devices in parallel is the only listed option that meets the deadline, and it avoids the cost of a Direct Connect circuit.
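That timing constraint can be sanity-checked with quick arithmetic; the figures below come straight from the question.

```python
# How long would 600 TB take over a 100 Mbps uplink?
data_tb = 600
link_mbps = 100

megabits = data_tb * 1_000_000 * 8      # 1 TB = 1,000,000 MB, 8 bits per byte
days = megabits / link_mbps / 86_400    # 86,400 seconds in a day
print(f"{days:,.0f} days at full line rate")  # ~556 days, versus a 2-week deadline
```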
A company has a large fleet of vehicles that are equipped with internet connectivity to send telemetry to the company. The company receives over 1 million data points every 5 minutes from the vehicles. The company uses the data in machine learning (ML) applications to predict vehicle maintenance needs and to preorder parts. The company produces visual reports based on the captured data. The company wants to migrate the telemetry ingestion, processing, and visualization workloads to AWS.
Which solution will meet these requirements?
- A . Use Amazon Timestream for LiveAnalytics to store the data points. Grant Amazon SageMaker permission to access the data for processing. Use Amazon QuickSight to visualize the data.
- B . Use Amazon DynamoDB to store the data points. Use DynamoDB Connector to ingest data from DynamoDB into Amazon EMR for processing. Use Amazon QuickSight to visualize the data.
- C . Use Amazon Neptune to store the data points. Use Amazon Kinesis Data Streams to ingest data from Neptune into an AWS Lambda function for processing. Use Amazon QuickSight to visualize the data.
- D . Use Amazon Timestream for LiveAnalytics to store the data points. Grant Amazon SageMaker permission to access the data for processing. Use Amazon Athena to visualize the data.
A
Explanation:
Amazon Timestream: Purpose-built time series database optimized for telemetry and IoT data ingestion and analytics.
Amazon SageMaker: Provides ML capabilities for predictive maintenance workflows.
Amazon QuickSight: Efficiently generates interactive, real-time visual reports from Timestream data.
Optimized for Scale: Timestream efficiently handles large-scale telemetry data with time-series indexing and queries.
Reference: Amazon Timestream Documentation
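For context, ingesting one telemetry data point into Timestream looks roughly like the boto3 sketch below; the database, table, dimension, and measure names are hypothetical.

```python
import time
import boto3

ts = boto3.client("timestream-write")

# One hypothetical telemetry sample from one vehicle
ts.write_records(
    DatabaseName="fleet_telemetry",   # assumed database name
    TableName="vehicle_metrics",      # assumed table name
    Records=[{
        "Dimensions": [{"Name": "vehicle_id", "Value": "truck-0042"}],
        "MeasureName": "engine_temp_c",
        "MeasureValue": "92.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),  # milliseconds since epoch
    }],
)
```

In practice the fleet's 1 million data points per 5 minutes would be batched (up to 100 records per write_records call) rather than written one at a time.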
A company is redesigning a static website. The company needs a solution to host the new website in the company’s AWS account. The solution must be secure and scalable.
Which combination of solutions will meet these requirements? (Select THREE.)
- A . Configure an Amazon CloudFront distribution. Set the Amazon S3 bucket as the origin.
- B . Associate an AWS Certificate Manager (ACM) TLS certificate to the Amazon CloudFront distribution.
- C . Enable static website hosting for the Amazon S3 bucket.
- D . Create an Amazon S3 bucket to store the static website content.
- E . Export the website’s SSL/TLS certificate from AWS Certificate Manager (ACM) to the root of the Amazon S3 bucket.
- F . Turn off Block Public Access for the Amazon S3 bucket.
A, B, D
Explanation:
Store the static content in an Amazon S3 bucket (D), serve it through an Amazon CloudFront distribution that uses the bucket as its origin (A), and associate an ACM TLS certificate with the distribution for HTTPS (B). Because CloudFront can reach the bucket through origin access control, S3 static website hosting (C) and disabling Block Public Access (F) are unnecessary and would weaken security, and public ACM certificates cannot be exported to an S3 bucket (E).
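A minimal sketch of the S3 and ACM pieces, assuming hypothetical bucket and domain names; note that CloudFront only accepts ACM certificates issued in us-east-1.

```python
import boto3

BUCKET = "example-static-site-content"  # hypothetical bucket name
DOMAIN = "www.example.com"              # hypothetical domain

# Content bucket; Block Public Access stays on because CloudFront,
# not the public internet, reads the objects (via origin access control)
s3 = boto3.client("s3", region_name="us-east-1")
s3.create_bucket(Bucket=BUCKET)

# CloudFront requires the viewer certificate to come from us-east-1
acm = boto3.client("acm", region_name="us-east-1")
cert = acm.request_certificate(DomainName=DOMAIN, ValidationMethod="DNS")
print(cert["CertificateArn"])  # attach this ARN to the distribution's viewer certificate
```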
A company is using AWS DataSync to migrate millions of files from an on-premises system to AWS.
The files are 10 KB in size on average.
The company wants to use Amazon S3 for file storage. For the first year after the migration, the files will be accessed once or twice and must be immediately available. After 1 year, the files must be archived for at least 7 years.
Which solution will meet these requirements MOST cost-effectively?
- A . Use an archive tool to group the files into large objects. Use DataSync to migrate the objects. Store the objects in S3 Glacier Instant Retrieval for the first year. Use a lifecycle configuration to transition the files to S3 Glacier Deep Archive after 1 year with a retention period of 7 years.
- B . Use an archive tool to group the files into large objects. Use DataSync to copy the objects to S3 Standard-Infrequent Access (S3 Standard-IA). Use a lifecycle configuration to transition the files to S3 Glacier Instant Retrieval after 1 year with a retention period of 7 years.
- C . Configure the destination storage class for the files as S3 Glacier Instant Retrieval. Use a lifecycle policy to transition the files to S3 Glacier Flexible Retrieval after 1 year with a retention period of 7 years.
- D . Configure a DataSync task to transfer the files to S3 Standard-Infrequent Access (S3 Standard-IA). Use a lifecycle configuration to transition the files to S3 Glacier Deep Archive after 1 year with a retention period of 7 years.
A
Explanation:
Grouping millions of 10 KB files into large objects avoids per-object request charges and the minimum-object-size overhead of the archival storage classes. S3 Glacier Instant Retrieval costs less to store than S3 Standard-IA while still providing millisecond access, which fits files that are read only once or twice in the first year, and a lifecycle rule can then transition the objects to S3 Glacier Deep Archive, the lowest-cost storage class, for the 7-year retention period.
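The lifecycle portion of the chosen answer could look like this boto3 sketch; the bucket name is hypothetical, and the expiration enforces the 7-year retention after the 1-year transition.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-migrated-files",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-one-year",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Status": "Enabled",
            "Transitions": [{
                "Days": 365,
                "StorageClass": "DEEP_ARCHIVE",  # S3 Glacier Deep Archive
            }],
            "Expiration": {"Days": 365 + 7 * 365},  # delete after the 7-year retention
        }]
    },
)
```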
A company moved its on-premises PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. The company successfully launched a new product. The workload on the database has increased.
The company wants to accommodate the larger workload without adding infrastructure.
Which solution will meet these requirements MOST cost-effectively?
- A . Buy reserved DB instances for the total workload. Make the Amazon RDS for PostgreSQL DB instance larger.
- B . Make the Amazon RDS for PostgreSQL DB instance a Multi-AZ DB instance.
- C . Buy reserved DB instances for the total workload. Add another Amazon RDS for PostgreSQL DB instance.
- D . Make the Amazon RDS for PostgreSQL DB instance an on-demand DB instance.
A
Explanation:
This answer is correct because it accommodates the larger workload without adding infrastructure while minimizing cost. Reserved DB instances are a billing discount applied to the use of matching on-demand DB instances in your account and provide a significant discount compared with on-demand pricing; you can buy them for the total workload with a No Upfront, Partial Upfront, or All Upfront payment option. Making the Amazon RDS for PostgreSQL DB instance larger means modifying it to a bigger DB instance class, which increases the CPU, memory, and network capacity of the single existing instance so it can handle the increased workload.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithReservedDBInstances.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html
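Scaling the instance up is a single modification call; a sketch with a hypothetical identifier and an example target class:

```python
import boto3

rds = boto3.client("rds")

# Move the instance to a larger DB instance class; a brief outage applies
# (or a failover, if the instance is Multi-AZ)
rds.modify_db_instance(
    DBInstanceIdentifier="product-db",  # hypothetical identifier
    DBInstanceClass="db.r6g.2xlarge",   # example larger class
    ApplyImmediately=True,              # don't wait for the maintenance window
)
```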
A company is migrating an old application to AWS. The application runs a batch job every hour and is CPU intensive. The batch job takes 15 minutes on average on an on-premises server. The server has 64 virtual CPUs (vCPUs) and 512 GiB of memory.
Which solution will run the batch job within 15 minutes with the LEAST operational overhead?
- A . Use AWS Lambda with functional scaling
- B . Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate
- C . Use Amazon Lightsail with AWS Auto Scaling
- D . Use AWS Batch on Amazon EC2
D
Explanation:
Use AWS Batch on Amazon EC2. AWS Batch is a fully managed batch processing service that runs jobs on Amazon EC2 instances and scales the compute environment to match the workload, so the hourly job can finish within 15 minutes with minimal operational overhead. AWS Lambda is ruled out because a function can run for at most 15 minutes with up to 10 GB of memory, far short of the 64 vCPUs and 512 GiB the job needs.
AWS Lambda FAQs
https://aws.amazon.com/lambda/faqs/
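Submitting the hourly job to AWS Batch might look like the sketch below; the queue and job definition names are hypothetical, and the resource override mirrors the on-premises server's 64 vCPUs and 512 GiB.

```python
import boto3

batch = boto3.client("batch")

batch.submit_job(
    jobName="hourly-batch-run",
    jobQueue="cpu-intensive-queue",      # hypothetical job queue
    jobDefinition="legacy-batch-job:1",  # hypothetical job definition
    containerOverrides={
        "resourceRequirements": [
            {"type": "VCPU", "value": "64"},
            {"type": "MEMORY", "value": "524288"},  # 512 GiB expressed in MiB
        ]
    },
)
```

An hourly Amazon EventBridge schedule would typically trigger this submission.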
A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload.
What should a solutions architect do to meet those requirements?
- A . Use Amazon EC2 instances, and install Docker on the instances.
- B . Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes
- C . Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate
- D . Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
C
Explanation:
Amazon ECS on AWS Fargate meets the requirements: the workload gets scalability and availability while AWS provisions and manages the underlying infrastructure that runs the containers. With Fargate there are no EC2 worker nodes to patch, scale, or secure.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
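Registering a Fargate task definition shows where the infrastructure responsibility ends; the family name, image URI, and sizes below are illustrative only.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="critical-app",                # hypothetical family name
    requiresCompatibilities=["FARGATE"],  # no EC2 worker nodes to manage
    networkMode="awsvpc",                 # required network mode for Fargate
    cpu="1024",     # 1 vCPU
    memory="2048",  # 2 GiB
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",  # placeholder
        "essential": True,
    }],
)
```

A task execution role would also be attached in practice so the task can pull the private image and ship logs.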
A company is building an application on AWS that connects to an Amazon RDS database. The company wants to manage the application configuration and to securely store and retrieve credentials for the database and other services.
Which solution will meet these requirements with the LEAST administrative overhead?
- A . Use AWS AppConfig to store and manage the application configuration. Use AWS Secrets Manager to store and retrieve the credentials.
- B . Use AWS Lambda to store and manage the application configuration. Use AWS Systems Manager Parameter Store to store and retrieve the credentials.
- C . Use an encrypted application configuration file. Store the file in Amazon S3 for the application configuration. Create another S3 file to store and retrieve the credentials.
- D . Use AWS AppConfig to store and manage the application configuration. Use Amazon RDS to store and retrieve the credentials.
A
Explanation:
This solution meets the company’s requirements with minimal administrative overhead and ensures security and ease of management.
AWS AppConfig: AWS AppConfig is a service designed to manage application configuration in a secure and validated way. It allows you to deploy configurations safely and quickly without affecting the application’s performance or availability.
AWS Secrets Manager: AWS Secrets Manager is specifically designed to manage, retrieve, and rotate credentials for databases and other services. It integrates seamlessly with AWS services like Amazon RDS, making it an ideal solution for securely storing and retrieving database credentials. Secrets Manager also provides automatic rotation of credentials, reducing the operational burden.
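Retrieving the database credentials at runtime is a single call; a sketch assuming a hypothetical secret name that stores a JSON username/password pair:

```python
import json
import boto3

sm = boto3.client("secretsmanager")

# Hypothetical secret created for the RDS database
resp = sm.get_secret_value(SecretId="prod/app/rds-credentials")
creds = json.loads(resp["SecretString"])  # e.g. {"username": "...", "password": "..."}

# creds["username"] and creds["password"] feed the database connection,
# so no credential is ever hard-coded in the application
```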
Why Not Other Options?
Option B (AWS Lambda + Parameter Store): AWS Lambda is a compute service, not a configuration store. AWS Systems Manager Parameter Store can hold credentials, but it requires more manual setup and lacks the built-in automatic rotation that Secrets Manager provides.
Option C (Encrypted S3 Configuration File): Storing configuration and credentials in S3 files involves more manual management and security considerations, increasing the administrative overhead.
Option D (AppConfig + RDS for credentials): RDS is not designed for storing application credentials; it’s better suited for managing database instances and their configurations.
Reference:
AWS AppConfig: Describes how to use AWS AppConfig for managing application configurations.
AWS Secrets Manager: Provides details on securely storing and retrieving credentials using AWS Secrets Manager.
A media company runs an application on multiple Amazon EC2 instances that requires high storage input/output operations per second (IOPS).
To achieve the necessary performance, a solutions architect wants to stripe multiple Amazon EBS volumes together and attach the volumes to EC2 instances. The solutions architect wants to receive a notification when IOPS are over-provisioned.
Which solution will meet these requirements?
- A . Configure auto scaling for the EBS volumes to automatically increase or decrease IOPS based on the EC2 instance CPU utilization metric.
- B . Deploy the application on an EC2 instance type that supports the highest possible IOPS.
- C . Create a custom AWS Config rule to monitor the provisioned IOPS for the EBS volumes that are attached to the EC2 instances and to send notifications.
- D . Adjust the IOPS of each EBS volume daily based on Amazon CloudWatch metrics for IOPS utilization.
C
Explanation:
AWS Config allows for creation of custom rules to monitor EBS configurations. Combined with CloudWatch metrics and Amazon SNS, custom rules can track over-provisioned IOPS and send alerts when thresholds are breached, allowing proactive cost and performance management.
Reference: AWS Documentation - AWS Config with Custom Rules for EBS Monitoring
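A custom Config rule is backed by a Lambda function; below is a minimal sketch, assuming a hypothetical IOPS threshold that defines "over-provisioned" (a production rule would compare provisioned IOPS against the CloudWatch utilization metrics mentioned above).

```python
import json
import boto3

IOPS_THRESHOLD = 16000  # hypothetical limit that counts as over-provisioned

config = boto3.client("config")

def handler(event, context):
    # AWS Config passes the changed resource in invokingEvent
    item = json.loads(event["invokingEvent"])["configurationItem"]
    iops = (item.get("configuration") or {}).get("iops") or 0

    compliance = "NON_COMPLIANT" if iops > IOPS_THRESHOLD else "COMPLIANT"
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```

NON_COMPLIANT evaluations can then drive an Amazon SNS notification through the rule's compliance-change events.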