Practice Free SAA-C03 Exam Online Questions
A company runs its applications on Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS). The EC2 instances run the most recent Amazon Linux release. The applications are experiencing availability issues when the company’s employees store and retrieve files that are 25 GB or larger. The company needs a solution that does not require the company to transfer files between EC2 instances. The files must be available across many EC2 instances and across multiple Availability Zones.
Which solution will meet these requirements?
- A . Migrate all the files to an Amazon S3 bucket. Instruct the employees to access the files from the S3 bucket.
- B . Take a snapshot of the existing EBS volume. Mount the snapshot as an EBS volume across the EC2 instances. Instruct the employees to access the files from the EC2 instances.
- C . Mount an Amazon Elastic File System (Amazon EFS) file system across all the EC2 instances. Instruct the employees to access the files from the EC2 instances.
- D . Create an Amazon Machine Image (AMI) from the EC2 instances. Configure new EC2 instances from the AMI that use an instance store volume. Instruct the employees to access the files from the EC2 instances.
C
Explanation:
To store and access files that are 25 GB or larger across many EC2 instances and across multiple Availability Zones, Amazon Elastic File System (Amazon EFS) is a suitable solution. Amazon EFS provides a simple, scalable, elastic file system that can be mounted on multiple EC2 instances concurrently. Amazon EFS supports high availability and durability by storing data across multiple Availability Zones within a Region.
Reference:
What Is Amazon Elastic File System?
Using EFS with EC2
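As an illustration of the setup the explanation describes, here is a minimal boto3 sketch that provisions an EFS file system and one mount target per Availability Zone; the subnet and security group IDs are hypothetical placeholders.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Create a regional EFS file system; data is stored redundantly across AZs.
fs = efs.create_file_system(
    PerformanceMode="generalPurpose",
    ThroughputMode="elastic",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone lets EC2 instances in any of those AZs
# mount the same file system concurrently over NFS. (In practice, wait until the
# file system's LifeCycleState is "available" before creating mount targets.)
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:  # hypothetical subnets in different AZs
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # hypothetical SG allowing NFS (TCP 2049)
    )
```

Each instance then mounts the file system with the standard NFS client or amazon-efs-utils, so no files need to be copied between instances.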
A company uses Amazon RDS for PostgreSQL to run its applications in the us-east-1 Region. The company also uses machine learning (ML) models to forecast annual revenue based on near real-time reports. The reports are generated by using the same RDS for PostgreSQL database. The database performance slows during business hours. The company needs to improve database performance.
Which solution will meet these requirements MOST cost-effectively?
- A . Create a cross-Region read replica. Configure the reports to be generated from the read replica.
- B . Activate Multi-AZ DB instance deployment for RDS for PostgreSQL. Configure the reports to be generated from the standby database.
- C . Use AWS Database Migration Service (AWS DMS) to logically replicate data to a new database. Configure the reports to be generated from the new database.
- D . Create a read replica in us-east-1. Configure the reports to be generated from the read replica.
D
Explanation:
To improve the performance of the primary RDS for PostgreSQL database during business hours and reduce the load, the best solution is to create a read replica in the same Region (us-east-1). This will offload the read-heavy operations (like generating reports) to the replica, reducing the burden on the primary instance, which improves overall performance. Additionally, read replicas provide near real-time replication, making them ideal for real-time reporting use cases.
Option A (cross-Region read replica): This adds unnecessary latency for real-time reporting and increased costs due to cross-region data transfer.
Option B (Multi-AZ): Multi-AZ deployments are for high availability and disaster recovery but won’t offload the read traffic, as the standby database cannot serve read requests.
Option C (AWS DMS replication): This adds complexity and is not as cost-effective as using an RDS read replica in the same Region.
Reference:
Amazon RDS Read Replicas
Amazon RDS Performance Best Practices
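For illustration, a minimal boto3 sketch of creating the in-Region read replica described above; the instance identifiers and instance class are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the primary RDS for PostgreSQL instance in the same Region.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reports-replica",              # hypothetical replica name
    SourceDBInstanceIdentifier="app-postgres-primary",   # hypothetical primary instance
    DBInstanceClass="db.r6g.large",
)

# Wait until the replica is available, then point the reporting workload
# at the replica's endpoint instead of the primary's endpoint.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="reports-replica")
endpoint = rds.describe_db_instances(DBInstanceIdentifier="reports-replica")[
    "DBInstances"][0]["Endpoint"]["Address"]
print("Reporting endpoint:", endpoint)
```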
A company needs to set up a centralized solution to audit API calls to AWS for workloads that run on AWS services and non-AWS services. The company must store logs of the audits for 7 years.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Set up a data lake in Amazon S3. Incorporate AWS CloudTrail logs and logs from non-AWS services into the data lake. Use CloudTrail to store the logs for 7 years.
- B . Configure custom integrations for AWS CloudTrail Lake to collect and store CloudTrail events from AWS services and non-AWS services. Use CloudTrail to store the logs for 7 years.
- C . Enable AWS CloudTrail for AWS services. Ingest non-AWS services into CloudTrail to store the logs for 7 years.
- D . Create new Amazon CloudWatch Logs groups. Send the audit data from non-AWS services to the CloudWatch Logs groups. Enable AWS CloudTrail for workloads that run on AWS. Use CloudTrail to store the logs for 7 years.
B
Explanation:
AWS CloudTrail Lake is a fully managed service that allows the collection, storage, and querying of CloudTrail events for both AWS and non-AWS services. CloudTrail Lake can be customized to collect logs from various sources, ensuring a centralized audit solution. It also supports long-term storage, so logs can be retained for 7 years, meeting the compliance requirement.
Option A (Data Lake): Setting up a data lake in S3 introduces unnecessary operational complexity compared to CloudTrail Lake.
Option C (Ingest non-AWS services into CloudTrail): CloudTrail Lake is better suited for this task with less operational overhead.
Option D (CloudWatch Logs): While CloudWatch can store logs, CloudTrail Lake is specifically designed for API auditing and storage.
Reference:
AWS CloudTrail Lake
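For illustration, a minimal boto3 sketch that creates a CloudTrail Lake event data store with roughly a seven-year retention period; the store name is hypothetical, and the channel used to ingest non-AWS events is only noted in a comment.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Event data store that retains management events for about 7 years (2,557 days).
store = cloudtrail.create_event_data_store(
    Name="org-audit-store",          # hypothetical name
    RetentionPeriod=2557,            # retention in days, roughly 7 years
    MultiRegionEnabled=True,
    OrganizationEnabled=True,
    AdvancedEventSelectors=[
        {
            "Name": "Management events",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Management"]}
            ],
        }
    ],
)
print(store["EventDataStoreArn"])

# Events from non-AWS sources are ingested through a CloudTrail Lake channel
# (CreateChannel / PutAuditEvents); the payload depends on the external source.
```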
A global company runs its applications in multiple AWS accounts in AWS Organizations. The company’s applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure AWS Config with a rule to report the incomplete multipart upload object count.
- B . Create a service control policy (SCP) to report the incomplete multipart upload object count.
- C . Configure S3 Storage Lens to report the incomplete multipart upload object count.
- D . Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.
C
Explanation:
S3 Storage Lens is a cloud storage analytics feature that provides organization-wide visibility into object storage usage and activity across multiple AWS accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart upload object count as one of the metrics that it collects and displays on an interactive dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet format to an S3 bucket for further analysis. This solution will meet the requirements with the least operational overhead, as it does not require any code development or policy changes.
Reference:
The S3 Storage Lens documentation explains how to gain insights into S3 storage usage and activity.
The multipart upload documentation describes the concept and benefits of multipart uploads.
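Below is a rough boto3 sketch of enabling an organization-level S3 Storage Lens configuration whose metrics include incomplete multipart uploads; the account ID, organization ARN, and configuration ID are hypothetical, and the exact metric blocks you enable may differ.

```python
import boto3

s3control = boto3.client("s3control", region_name="us-east-1")

# Organization-wide Storage Lens dashboard; its metrics include the
# incomplete-multipart-upload object count and bytes.
s3control.put_storage_lens_configuration(
    ConfigId="org-dashboard",                      # hypothetical configuration ID
    AccountId="111122223333",                      # hypothetical management account
    StorageLensConfiguration={
        "Id": "org-dashboard",
        "IsEnabled": True,
        "AwsOrg": {"Arn": "arn:aws:organizations::111122223333:organization/o-exampleorgid"},
        "AccountLevel": {
            "ActivityMetrics": {"IsEnabled": True},
            "AdvancedCostOptimizationMetrics": {"IsEnabled": True},
            "BucketLevel": {
                "ActivityMetrics": {"IsEnabled": True},
                "AdvancedCostOptimizationMetrics": {"IsEnabled": True},
            },
        },
    },
)
```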
A research laboratory needs to process approximately 8 TB of data. The laboratory requires sub-millisecond latencies and a minimum throughput of 6 GBps for the storage subsystem. Hundreds of Amazon EC2 instances that run Amazon Linux will distribute and process the data.
Which solution will meet the performance requirements?
- A . Create an Amazon FSx for NetApp ONTAP file system. Set each volume’s tiering policy to ALL. Import the raw data into the file system. Mount the file system on the EC2 instances.
- B . Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
- C . Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent HDD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
- D . Create an Amazon FSx for NetApp ONTAP file system. Set each volume’s tiering policy to NONE. Import the raw data into the file system. Mount the file system on the EC2 instances.
B
Explanation:
Amazon FSx for Lustre with persistent SSD storage provides sub-millisecond latencies and scales to an aggregate throughput of 6 GBps and beyond, and it can be linked to an Amazon S3 bucket to import the raw data and export results. Mounting the file system on the EC2 instances gives hundreds of instances shared access to the data. Persistent storage (unlike scratch storage) retains data through file server failures, and SSD storage is required because HDD storage cannot deliver sub-millisecond latencies.
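As a rough boto3 sketch of the chosen option; the bucket name, subnet, capacity, and per-unit throughput are illustrative and should be checked against current FSx for Lustre sizing options.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Persistent SSD FSx for Lustre file system linked to the raw-data S3 bucket.
# Roughly 30 TiB at 200 MB/s per TiB of per-unit throughput yields about
# 6 GBps of aggregate throughput; larger capacity raises it further.
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=31200,                 # GiB, a multiple of 2400
    StorageType="SSD",
    SubnetIds=["subnet-aaa111"],           # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",  # persistent SSD deployment
        "PerUnitStorageThroughput": 200,   # MB/s per TiB
        "ImportPath": "s3://example-raw-data-bucket",            # hypothetical bucket
        "ExportPath": "s3://example-raw-data-bucket/results",
        "AutoImportPolicy": "NEW_CHANGED",
    },
)
print(fs["FileSystem"]["DNSName"])
```

The EC2 instances then mount the file system with the Lustre client and process the data in parallel.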
A company wants to implement a backup strategy for Amazon EC2 data and multiple Amazon S3 buckets. Because of regulatory requirements, the company must retain backup files for a specific time period. The company must not alter the files for the duration of the retention period.
Which solution will meet these requirements?
- A . Use AWS Backup to create a backup vault that has a vault lock in governance mode. Create the required backup plan.
- B . Use Amazon Data Lifecycle Manager to create the required automated snapshot policy.
- C . Use Amazon S3 File Gateway to create the backup. Configure the appropriate S3 Lifecycle management.
- D . Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required backup plan.
D
Explanation:
AWS Backup is a fully managed service that allows you to centralize and automate data protection of AWS services across compute, storage, and database. AWS Backup Vault Lock is an optional feature of a backup vault that can help you enhance the security and control over your backup vaults. When a lock is active in Compliance mode and the grace time is over, the vault configuration cannot be altered or deleted by a customer, account/data owner, or AWS. This ensures that your backups are available for you until they reach the expiration of their retention periods and meet the regulatory requirements.
Reference: https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html
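A minimal boto3 sketch of a compliance-mode vault lock plus a backup plan that targets the locked vault; the vault name, schedule, and retention values are hypothetical.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Vault for the regulated backups.
backup.create_backup_vault(BackupVaultName="compliance-vault")

# Compliance-mode lock: after the grace period (ChangeableForDays) elapses,
# neither the customer nor AWS can shorten retention or delete recovery points.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="compliance-vault",
    MinRetentionDays=2555,      # hypothetical ~7-year regulatory period
    MaxRetentionDays=2555,
    ChangeableForDays=3,
)

# Backup plan that stores daily EC2 and S3 backups in the locked vault.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-s3-daily",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "compliance-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 2555},
            }
        ],
    }
)
```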
A company plans to use Amazon ElastiCache for its multi-tier web application. A solutions architect creates a Cache VPC for the ElastiCache cluster and an App VPC for the application’s Amazon EC2 instances. Both VPCs are in the us-east-1 Region.
The solutions architect must implement a solution to provide the application’s EC2 instances with access to the ElastiCache cluster.
Which solution will meet these requirements MOST cost-effectively?
- A . Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the ElastiCache cluster’s security group to allow inbound connections from the application’s security group.
- B . Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the ElastiCache cluster’s security group to allow inbound connections from the application’s security group.
- C . Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the peering connection’s security group to allow inbound connections from the application’s security group.
- D . Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the Transit VPC’s security group to allow inbound connections from the application’s security group.
A
Explanation:
Creating a peering connection between the two VPCs and configuring an inbound rule on the ElastiCache cluster’s security group to allow connections from the application’s security group is the most cost-effective solution. VPC peering has no hourly charge, so the only costs are standard data transfer charges. The Transit VPC options require additional VPCs and associated resources, which would incur additional costs. Security groups are attached to resources such as the ElastiCache nodes, not to peering connections, which rules out option C.
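A condensed boto3 sketch of the peering setup described above; all VPC, route table, CIDR, and security group identifiers are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Peer the App VPC with the Cache VPC (same account, same Region).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-app1111", PeerVpcId="vpc-cache2222"
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other VPC's CIDR through the peering connection.
ec2.create_route(RouteTableId="rtb-app1111", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)   # Cache VPC CIDR
ec2.create_route(RouteTableId="rtb-cache2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)   # App VPC CIDR

# Allow the app tier's security group to reach the cache on the Redis port.
ec2.authorize_security_group_ingress(
    GroupId="sg-cache2222",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 6379, "ToPort": 6379,
        "UserIdGroupPairs": [{"GroupId": "sg-app1111"}],
    }],
)
```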
A company is building a new application that uses multiple serverless architecture components. The application architecture includes an Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.
The company needs a service to send messages that the REST API receives to multiple target Lambda functions for processing. The service must filter messages so each target Lambda function receives only the messages the function needs.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Send the requests from the REST API to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe multiple Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target Lambda functions to poll the SQS queues.
- B . Send the requests from the REST API to a set of Amazon EC2 instances that are configured to process messages. Configure the instances to filter messages and to invoke the target Lambda functions.
- C . Send the requests from the REST API to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.
- D . Send the requests from the REST API to multiple Amazon Simple Queue Service (Amazon SQS) queues. Configure the target Lambda functions to poll the SQS queues.
A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to enforce column-level authorization so that the company’s marketing team can access only a subset of columns in the database.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.
- B . Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce column-level access control. Use Amazon S3 as the data source in QuickSight.
- C . Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.
- D . Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
D
Explanation:
A Lake Formation blueprint ingests the Aurora MySQL data into the S3 data lake with minimal custom work, Lake Formation grants enforce column-level permissions for the marketing team, and Amazon Athena as the QuickSight data source honors those permissions.
Reference: Enforce column-level authorization with Amazon QuickSight and AWS Lake Formation: https://aws.amazon.com/blogs/big-data/enforce-column-level-authorization-with-amazon-quicksight-and-aws-lake-formation/
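To illustrate the column-level control Lake Formation provides, a minimal boto3 sketch granting a marketing role SELECT on only a subset of columns; the database, table, column names, and role ARN are hypothetical.

```python
import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")

# Grant the marketing team's role SELECT access to only two columns of the
# ingested operational table; Athena (and therefore QuickSight) enforces this.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/marketing-analysts"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "operational_db",
            "Name": "customers",
            "ColumnNames": ["customer_id", "region"],
        }
    },
    Permissions=["SELECT"],
)
```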
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
- B . Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests
- C . Use AWS Lambda with new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
- D . Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.
A
Explanation:
AWS Fargate is a serverless compute engine that lets users run containers without having to manage servers or clusters of Amazon EC2 instances. Users can use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Amazon ECS is a fully managed container orchestration service for Docker containers. Service Auto Scaling is a feature that adjusts the desired number of tasks in an ECS service based on CloudWatch metrics, such as CPU utilization or request count. Users can use AWS Fargate on Amazon ECS to migrate the application to AWS with minimum code changes and minimum development effort, because they only need to package the application in containers and specify the CPU and memory requirements.
Users can also use an Application Load Balancer to distribute the incoming requests. An Application Load Balancer is a load balancer that operates at the application layer and routes traffic to targets based on the content of the request. Users can register their ECS tasks as targets for an Application Load Balancer and configure listener rules to route requests to different target groups based on path or host headers. Users can use an Application Load Balancer to improve the availability and performance of their web application.
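A brief boto3 sketch of wiring Service Auto Scaling to an existing Fargate service behind an Application Load Balancer, as the explanation describes; the cluster, service, target group label, and capacity limits are hypothetical.

```python
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Register the ECS service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",   # hypothetical cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Target-tracking policy: add or remove Fargate tasks to keep the average
# request count per ALB target near 1,000.
autoscaling.put_scaling_policy(
    PolicyName="alb-request-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 1000.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # hypothetical ALB/target group resource label
            "ResourceLabel": "app/web-alb/1234567890abcdef/targetgroup/web-tg/abcdef1234567890",
        },
    },
)
```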