Practice Free SAA-C03 Exam Online Questions
A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
- B . Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
- C . Delete all expired and unused snapshots to reduce snapshot costs.
- D . Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.
D
Explanation:
Amazon Data Lifecycle Manager (DLM) automates the creation, retention, and deletion of EBS snapshots. This allows organizations to define policies that ensure snapshots are kept only as long as needed, reducing costs automatically and minimizing manual effort. AWS recommends using DLM to optimize storage costs and manage the backup lifecycle with minimal overhead.
Reference: AWS Documentation - Amazon Data Lifecycle Manager
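For illustration only, the boto3 sketch below creates a DLM lifecycle policy of the kind described above; the IAM role ARN, target tag, schedule, and retention count are placeholder assumptions, not values from the question.

```python
import boto3

dlm = boto3.client("dlm")

# Lifecycle policy that snapshots tagged volumes daily and keeps 7 copies.
# The execution role ARN and target tag are hypothetical placeholders.
response = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots, 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "backup", "Value": "daily"}],
        "Schedules": [
            {
                "Name": "DailySnapshots",
                "CopyTags": True,
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 7},  # older snapshots are deleted automatically
            }
        ],
    },
)
print(response["PolicyId"])
```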
A company uses Amazon RDS for PostgreSQL databases for its data tier. The company must implement password rotation for the databases.
Which solution meets this requirement with the LEAST operational overhead?
- A . Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.
- B . Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter.
- C . Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password.
- D . Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the AWS KMS key.
A
Explanation:
AWS Secrets Manager is the recommended service for managing and automatically rotating database credentials. It integrates natively with Amazon RDS (including PostgreSQL), supports built-in rotation functionality, and requires minimal setup.
Secrets Manager also supports versioning and auditing, which enhances operational excellence and security. Parameter Store does not natively support credential rotation. AWS KMS manages encryption keys, not application secrets, so it is not applicable here.
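As a minimal illustrative sketch, the boto3 call below enables automatic rotation on an existing secret; the secret name, rotation Lambda ARN, and 30-day interval are assumed placeholders (Secrets Manager provides rotation function templates for RDS for PostgreSQL).

```python
import boto3

secrets = boto3.client("secretsmanager")

# Turn on automatic rotation for an existing RDS credential secret.
# The secret ID and rotation function ARN are hypothetical placeholders.
secrets.rotate_secret(
    SecretId="prod/postgres/app-user",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRDSPostgreSQLRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```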
A software company needs to upgrade a critical web application. The application is hosted on an Amazon EC2 instance in a public subnet, and the same instance runs a MySQL database. The application’s DNS records are published in an Amazon Route 53 zone.
A solutions architect must reconfigure the application to be scalable and highly available. The solutions architect must also reduce MySQL read latency.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Launch a second EC2 instance in a second AWS Region. Use a Route 53 failover routing policy to redirect the traffic to the second EC2 instance.
- B . Create and configure an Auto Scaling group to launch private EC2 instances in multiple Availability Zones. Add the instances to a target group behind a new Application Load Balancer.
- C . Migrate the database to an Amazon Aurora MySQL cluster. Create the primary DB instance and reader DB instance in separate Availability Zones.
- D . Create and configure an Auto Scaling group to launch private EC2 instances in multiple AWS Regions. Add the instances to a target group behind a new Application Load Balancer.
- E . Migrate the database to an Amazon Aurora MySQL cluster with cross-Region read replicas.
B, C
Explanation:
To improve scalability and availability, EC2 Auto Scaling across multiple Availability Zones with an Application Load Balancer ensures resilient infrastructure. Migrating to Amazon Aurora MySQL with reader endpoints reduces read latency by offloading read traffic to replicas in other AZs, while also increasing high availability.
Reference: AWS Documentation - Aurora Multi-AZ and EC2 Auto Scaling with ALB
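For illustration, the boto3 sketch below covers the database portion: an Aurora MySQL cluster with a writer and a reader in separate Availability Zones. The identifiers, instance class, AZ names, and credentials are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Aurora MySQL cluster (placeholder identifiers and credentials).
rds.create_db_cluster(
    DBClusterIdentifier="webapp-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
)

# Writer and reader instances in different AZs; the cluster's reader endpoint
# offloads read traffic to the replica and lowers read latency.
for name, az in [("webapp-aurora-writer", "us-east-1a"), ("webapp-aurora-reader", "us-east-1b")]:
    rds.create_db_instance(
        DBInstanceIdentifier=name,
        DBClusterIdentifier="webapp-aurora",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
        AvailabilityZone=az,
    )
```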
A company is launching a new gaming application. The company will use Amazon EC2 Auto Scaling groups to deploy the application. The application stores user data in a relational database.
The company has office locations around the world that need to run analytics on the user data in the database. The company needs a cost-effective database solution that provides cross-Region disaster recovery with low-latency read performance across AWS Regions.
Which solution will meet these requirements?
- A . Create an Amazon ElastiCache for Redis cluster in the Region where the application is deployed. Create read replicas in Regions where the company offices are located. Ensure the company offices read from the read replica instances.
- B . Create Amazon DynamoDB global tables. Deploy the tables to the Regions where the company offices are located and to the Region where the application is deployed. Ensure that each company office reads from the tables that are in the same Region as the office.
- C . Create an Amazon Aurora global database. Configure the primary cluster to be in the Region where the application is deployed. Configure the secondary Aurora replicas to be in the Regions where the company offices are located. Ensure the company offices read from the Aurora replicas.
- D . Create an Amazon RDS Multi-AZ DB cluster deployment in the Region where the application is deployed. Ensure the company offices read from read replica instances.
C
Explanation:
An Amazon Aurora global database keeps the primary cluster in the Region where the application runs and replicates to read-only secondary clusters in other Regions with typically sub-second lag, so each office reads from a nearby replica while the secondary clusters also provide cross-Region disaster recovery. DynamoDB global tables are not relational, ElastiCache is a cache rather than a database of record, and an RDS Multi-AZ DB cluster does not span Regions.
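For illustration only, the boto3 sketch below wraps an existing Aurora cluster in a global database and adds a secondary Region; the cluster ARN, identifiers, and Regions are placeholder assumptions.

```python
import boto3

rds_primary = boto3.client("rds", region_name="us-east-1")
rds_secondary = boto3.client("rds", region_name="eu-west-1")

# Promote an existing primary cluster (placeholder ARN) into a global cluster.
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="gaming-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:gaming-primary",
)

# Add a read-only secondary cluster in another Region for low-latency office reads.
rds_secondary.create_db_cluster(
    DBClusterIdentifier="gaming-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="gaming-global",
)
```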
A company runs an application that uses Docker containers in an on-premises data center. The application runs on a container host that stores persistent data files in a local volume. Container instances use the stored persistent data.
The company wants to migrate the application to fully managed AWS services.
Which solution will meet these requirements?
- A . Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Attach an Amazon Elastic Block Store (Amazon EBS) volume to an Amazon EC2 instance. Mount the EBS volume on the containers to provide persistent storage.
- B . Use Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Mount the EFS volume on the containers to provide persistent storage.
- C . Use Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create an Amazon DynamoDB table. Configure the application to use the DynamoDB table for persistent storage.
- D . Use Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Mount the EFS volume on the containers to provide persistent storage.
B
Explanation:
The company wants to move from an on-premises Docker environment to fully managed AWS services with persistent storage.
The best fit is:
Amazon ECS with AWS Fargate launch type: This is a serverless container orchestration solution where AWS manages the underlying infrastructure, removing the need to manage EC2 or Kubernetes nodes.
Amazon EFS (Elastic File System): This is a fully managed, scalable, and shared file system for use with ECS tasks. It supports persistent storage for containers, replacing the local volumes used on-premises.
This combination (ECS + Fargate + EFS) is fully managed and requires no manual server maintenance.
Option A uses EKS with self-managed nodes, which is not fully managed.
Option C (DynamoDB) is for structured key-value storage, not for persistent file storage.
Option D uses ECS with EC2 launch type, which is not serverless and requires managing instances.
Reference: Using Amazon ECS with AWS Fargate
Mounting EFS volumes in ECS tasks
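As an illustrative sketch, the boto3 call below registers a Fargate task definition that mounts an EFS file system for persistent storage; the family name, container image, file system ID, and mount path are assumed placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Fargate task definition with a persistent EFS mount (placeholder names and IDs).
ecs.register_task_definition(
    family="app-with-efs",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
            "essential": True,
            "mountPoints": [{"sourceVolume": "data", "containerPath": "/data"}],
        }
    ],
    volumes=[
        {
            "name": "data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",
                "transitEncryption": "ENABLED",
            },
        }
    ],
)
```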
A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
- B . Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
- C . Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
- D . Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data.
B
Explanation:
AWS Glue Crawler: AWS Glue is a fully managed ETL (Extract, Transform, Load) service that makes it easy to prepare and load data for analytics. A Glue crawler can automatically discover new data and schema in Amazon S3, making it easy to keep the data catalog up-to-date.
Crawling the Data:
Set up an AWS Glue crawler to scan the S3 bucket containing the clickstream data.
The crawler will automatically detect the schema and create/update the tables in the AWS Glue Data Catalog.
Amazon Athena:
Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
Once the data catalog is updated by the Glue crawler, use Athena to query the clickstream data directly in S3.
Operational Efficiency: This solution leverages fully managed services, reducing operational overhead. Glue crawlers automate data cataloging, and Athena provides a serverless, pay-per-query model for quick data analysis without the need to set up or manage infrastructure.
Reference: AWS Glue
Amazon Athena
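For illustration, a minimal boto3 sketch of this crawler-plus-Athena pattern is shown below; the bucket paths, crawler name, IAM role, and database name are placeholder assumptions.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream data to populate the Glue Data Catalog
# (placeholder role, database, and S3 paths).
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/clickstream/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the crawler has run, query the cataloged table with Athena.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clickstream GROUP BY page ORDER BY hits DESC LIMIT 10",
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
```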
A company uses AWS to run its workloads. The company uses AWS Organizations to manage its accounts. The company needs to identify which departments are responsible for specific costs. New accounts are constantly created in the Organizations account structure. The organization's continuous integration and continuous delivery (CI/CD) framework already adds a populated department tag to the AWS resources. The company wants to use an AWS Cost Explorer report to identify the service costs by department from all AWS accounts.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Select TWO.)
- A . Activate the aws:createdBy cost allocation tag and the department cost allocation tag in the management account.
- B . Create a new cost and usage report in Cost Explorer. Group by the department cost allocation tag. Apply a filter to see all linked accounts and services.
- C . Activate only the department cost allocation tag in the management account.
- D . Create a new cost and usage report in Cost Explorer. Group by the department cost allocation tag without any other filters.
- E . Activate only the aws:createdBy cost allocation tag in the management account.
C, D
Explanation:
To track costs by department, you must activate the custom department tag as a cost allocation tag in the AWS Organizations management account. Once activated, Cost Explorer and cost and usage reports can group costs by this tag for all linked accounts. The most operationally efficient way is to activate only the relevant department tag and create a cost and usage report grouped by that tag.
AWS Documentation Extract:
“To use a tag for cost allocation, you must activate it in the AWS Billing and Cost Management console. After activation, you can use the tag to group costs in Cost Explorer and reports.” (Source: AWS Cost Management documentation)
A, E: aws:createdBy is not related to department cost grouping and is unnecessary.
B: Applying extra filters is optional; D is more direct and operationally efficient.
Reference: AWS Certified Solutions Architect - Official Study Guide, Cost Allocation and Tagging.
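As an illustration, the boto3 sketch below activates the department tag as a cost allocation tag and groups costs by it; the tag key and date range are assumed placeholders, and the calls must run in the Organizations management account.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API; run in the management account

# Activate the custom department tag as a cost allocation tag (placeholder key).
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "department", "Status": "Active"}]
)

# Group monthly costs across all linked accounts by the department tag.
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "department"}],
)
print(result["ResultsByTime"])
```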
An ecommerce company runs a PostgreSQL database on an Amazon EC2 instance. The database stores data in Amazon Elastic Block Store (Amazon EBS) volumes. The daily peak input/output operations per second (IOPS) do not exceed 15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and to provision disk IOPS performance that is independent of disk storage capacity.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure General Purpose SSD (gp2) EBS volumes. Provision a 5 TiB volume.
- B . Configure Provisioned IOPS SSD (io1) EBS volumes. Provision 15,000 IOPS.
- C . Configure General Purpose SSD (gp3) EBS volumes. Provision 15,000 IOPS.
- D . Configure magnetic EBS volumes to achieve maximum IOPS.
C
Explanation:
EBS gp3 volumes allow you to independently configure IOPS and throughput, up to 16,000 IOPS, regardless of the volume size, and at a lower cost compared to io1/io2 provisioned IOPS volumes. This meets the requirement for cost-effective, predictable IOPS performance for Amazon RDS for PostgreSQL.
AWS Documentation Extract:
"With gp3 volumes, you can provision performance independent of storage capacity, up to 16, 000 IOPS. This allows for cost-effective scaling for applications that require high performance at a lower price point compared to io1."
(Source: Amazon EBS documentation, gp3 Volumes)
A: gp2 ties IOPS to volume size; 5 TiB is wasteful if only 15,000 IOPS are needed.
B: io1 works but is significantly more expensive than gp3 for most workloads.
D: Magnetic volumes do not support high IOPS.
Reference: AWS Certified Solutions Architect - Official Study Guide, EBS Storage Options.
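For illustration, the boto3 sketch below provisions an RDS for PostgreSQL instance on gp3 storage with 15,000 IOPS; the identifier, instance class, storage size, and credentials are placeholder assumptions, and the 400 GiB size reflects the assumption that provisioning IOPS above the gp3 baseline requires a larger volume.

```python
import boto3

rds = boto3.client("rds")

# RDS for PostgreSQL on gp3 storage, with IOPS provisioned independently of size
# (placeholder identifier, instance class, storage size, and credentials).
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-postgres",
    DBInstanceClass="db.m6g.xlarge",
    Engine="postgres",
    MasterUsername="appadmin",
    MasterUserPassword="REPLACE_ME",
    AllocatedStorage=400,  # GiB; assumed minimum for provisioning IOPS above the gp3 baseline
    StorageType="gp3",
    Iops=15000,
)
```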
A company has a large fleet of vehicles that are equipped with internet connectivity to send telemetry to the company. The company receives over 1 million data points every 5 minutes from the vehicles. The company uses the data in machine learning (ML) applications to predict vehicle maintenance needs and to preorder parts. The company produces visual reports based on the captured data. The company wants to migrate the telemetry ingestion, processing, and visualization workloads to AWS.
Which solution will meet these requirements?
- A . Use Amazon Timestream for Live Analytics to store the data points. Grant Amazon SageMaker permission to access the data for processing. Use Amazon QuickSight to visualize the data.
- B . Use Amazon DynamoDB to store the data points. Use DynamoDB Connector to ingest data from DynamoDB into Amazon EMR for processing. Use Amazon QuickSight to visualize the data.
- C . Use Amazon Neptune to store the data points. Use Amazon Kinesis Data Streams to ingest data from Neptune into an AWS Lambda function for processing. Use Amazon QuickSight to visualize the data.
- D . Use Amazon Timestream for Live Analytics to store the data points. Grant Amazon SageMaker permission to access the data for processing. Use Amazon Athena to visualize the data.
A
Explanation:
Amazon Timestream: Purpose-built time series database optimized for telemetry and IoT data ingestion and analytics.
Amazon SageMaker: Provides ML capabilities for predictive maintenance workflows.
Amazon QuickSight: Efficiently generates interactive, real-time visual reports from Timestream data.
Optimized for Scale: Timestream efficiently handles large-scale telemetry data with time-series indexing and queries.
Amazon Timestream Documentation
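As an illustrative sketch, the boto3 call below writes a single telemetry data point into Timestream; the database, table, dimension, and measure names are assumed placeholders.

```python
import time

import boto3

tsw = boto3.client("timestream-write")

# Write one vehicle telemetry record (placeholder database, table, and values).
tsw.write_records(
    DatabaseName="vehicle_telemetry",
    TableName="engine_metrics",
    Records=[
        {
            "Dimensions": [{"Name": "vehicle_id", "Value": "v-1001"}],
            "MeasureName": "engine_temp_celsius",
            "MeasureValue": "92.5",
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),
            "TimeUnit": "MILLISECONDS",
        }
    ],
)
```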
A company manages multiple AWS accounts in an organization in AWS Organizations. The company’s applications run on Amazon EC2 instances in multiple AWS Regions. The company needs a solution to simplify the management of security rules across the accounts in its organization. The solution must apply shared security group rules, audit security groups, and detect unused and redundant rules in VPC security groups across all AWS environments.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Use AWS Firewall Manager to create a set of rules based on the security requirements. Replicate the rules to all the AWS accounts and Regions.
- B . Use AWS CloudFormation StackSets to provision VPC security groups based on the specifications across multiple accounts and Regions. Deploy AWS Network Firewall to define the firewall rules to control network traffic across multiple accounts and Regions.
- C . Use AWS CloudFormation StackSets to provision VPC security groups based on the specifications across multiple accounts and Regions. Configure AWS Config and AWS Lambda to evaluate compliance information and to automate enforcement across all accounts and Regions.
- D . Use AWS Network Firewall to build policies based on the security requirements. Centrally apply the new policies to all the VPCs and accounts.
A
Explanation:
AWS Firewall Manager integrates with AWS Organizations to centrally manage and apply security group policies, AWS WAF rules, and AWS Shield Advanced protections. It automates the propagation of rules across accounts and Regions and can also audit and remediate noncompliant configurations.
Reference: AWS Documentation - AWS Firewall Manager for Centralized Security Group Management
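For illustration only, the boto3 sketch below creates a Firewall Manager common security group policy; the security group ID is a placeholder, and the exact ManagedServiceData layout for the SECURITY_GROUPS_COMMON policy type is an assumption rather than a confirmed schema.

```python
import json

import boto3

fms = boto3.client("fms")  # must run in the Firewall Manager administrator account

# Common security group policy replicated across accounts and Regions
# (placeholder security group ID; ManagedServiceData format is an assumption).
fms.put_policy(
    Policy={
        "PolicyName": "org-common-security-groups",
        "SecurityServicePolicyData": {
            "Type": "SECURITY_GROUPS_COMMON",
            "ManagedServiceData": json.dumps(
                {
                    "type": "SECURITY_GROUPS_COMMON",
                    "revertManualSecurityGroupChanges": True,
                    "securityGroups": [{"id": "sg-0123456789abcdef0"}],
                }
            ),
        },
        "ResourceType": "AWS::EC2::SecurityGroup",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,
    }
)
```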
