Practice Free SAA-C03 Exam Online Questions
A company that uses AWS Organizations runs 150 applications across 30 different AWS accounts. The company used AWS Cost and Usage Report to create a new report in the management account. The report is delivered to an Amazon S3 bucket that is replicated to a bucket in the data collection account.
The company’s senior leadership wants to view a custom dashboard that provides NAT gateway costs each day starting at the beginning of the current month.
Which solution will meet these requirements?
- A . Share an Amazon QuickSight dashboard that includes the requested table visual. Configure QuickSight to use AWS DataSync to query the new report.
- B . Share an Amazon QuickSight dashboard that includes the requested table visual. Configure QuickSight to use Amazon Athena to query the new report.
- C . Share an Amazon CloudWatch dashboard that includes the requested table visual. Configure CloudWatch to use AWS DataSync to query the new report.
- D . Share an Amazon CloudWatch dashboard that includes the requested table visual. Configure CloudWatch to use Amazon Athena to query the new report.
B
Explanation:
The AWS Cost and Usage Report (CUR) delivers detailed, line-item billing data to Amazon S3. AWS recommends querying CUR with Amazon Athena by creating external tables over the CUR S3 location (partitioned by time) to produce daily cost aggregations such as NAT Gateway (EC2: NatGateway) usage and cost. Amazon QuickSight natively connects to Athena as a data source to build and share dashboards with visuals (tables, time series) filtered from the start of the current month. DataSync (A, C) is a file transfer service and cannot query data. CloudWatch dashboards (C, D) visualize metrics/logs, not CUR datasets. Therefore, using Athena to query CUR and QuickSight to present a daily NAT gateway cost dashboard is the most direct and operationally efficient approach.
Reference: CUR ― querying with Amazon Athena; QuickSight ― Athena data source; Cost categories and service/usage type fields for NAT Gateway; AWS Cost Management best practices.
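As a sketch of the Athena side of option B, the helper below builds a query that sums NAT gateway cost per day from the start of a month. The table name and the standard CUR column names (line_item_usage_start_date, line_item_usage_type, line_item_unblended_cost) are assumptions about how the report and its Athena table were set up:

```python
from datetime import date

def nat_gateway_cost_query(cur_table: str, month_start: date) -> str:
    """Build an Athena SQL string that aggregates daily NAT gateway cost
    from the start of the given month (CUR column names assumed)."""
    return (
        "SELECT DATE(line_item_usage_start_date) AS usage_day, "
        "SUM(line_item_unblended_cost) AS nat_gateway_cost "
        f"FROM {cur_table} "
        "WHERE line_item_usage_type LIKE '%NatGateway%' "
        f"AND line_item_usage_start_date >= TIMESTAMP '{month_start} 00:00:00' "
        "GROUP BY DATE(line_item_usage_start_date) "
        "ORDER BY usage_day"
    )

query = nat_gateway_cost_query("cur_db.cur_report", date(2025, 6, 1))
```

QuickSight would then point its Athena data source at this query, or at a view defined over it, to feed the daily table visual.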
A company wants to use AWS Direct Connect to connect the company’s on-premises networks to the AWS Cloud. The company runs several VPCs in a single AWS Region. The company plans to expand its VPC fleet to include hundreds of VPCs.
A solutions architect needs to simplify and scale the company’s network infrastructure to accommodate future VPCs.
Which service or resource will meet these requirements?
- A . VPC endpoints
- B . AWS Transit Gateway
- C . Amazon Route 53
- D . AWS Secrets Manager
B
Explanation:
AWS Transit Gateway is purpose-built for large-scale, hub-and-spoke network architectures. It simplifies connectivity between multiple VPCs and on-premises environments, which is ideal for
managing hundreds of VPCs.
“AWS Transit Gateway enables you to connect your VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships.”
― Transit Gateway Documentation
Features:
Scales to thousands of VPCs.
Integrates with AWS Direct Connect via Direct Connect Gateway.
Centralized routing control.
Incorrect Options:
A: VPC endpoints are for private access to AWS services―not VPC-to-VPC connectivity.
C: Route 53 is DNS, not a network transport layer.
D: Secrets Manager is for secret storage, not networking.
Reference: AWS Transit Gateway Overview; Transit Gateway Scaling
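To see why the central hub "puts an end to complex peering relationships," the sketch below compares the number of links needed for a full mesh of VPC peering connections against one Transit Gateway attachment per VPC:

```python
def peering_connections(n_vpcs: int) -> int:
    # Full-mesh VPC peering needs one connection per pair of VPCs.
    return n_vpcs * (n_vpcs - 1) // 2

def transit_gateway_attachments(n_vpcs: int) -> int:
    # Hub-and-spoke: each VPC needs only a single attachment to the hub.
    return n_vpcs
```

At hundreds of VPCs the peering mesh grows quadratically while the attachment count stays linear, which is the scaling argument for option B.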
An ecommerce company experiences a surge in mobile application traffic every Monday at 8 AM during the company’s weekly sales events. The application’s backend uses an Amazon API Gateway HTTP API and AWS Lambda functions to process user requests. During peak sales periods, users report encountering TooManyRequestsException errors from the Lambda functions. The errors result in a degraded user experience. A solutions architect needs to design a scalable and resilient solution that minimizes the errors and ensures that the application’s overall functionality remains unaffected.
Which solution will meet these requirements?
- A . Create an Amazon Simple Queue Service (Amazon SQS) queue. Send user requests to the SQS queue. Configure the Lambda function with provisioned concurrency. Set the SQS queue as the event source trigger.
- B . Use AWS Step Functions to orchestrate and process user requests. Configure Step Functions to invoke the Lambda functions and to manage the request flow.
- C . Create an Amazon Simple Notification Service (Amazon SNS) topic. Send user requests to the SNS topic. Configure the Lambda functions with provisioned concurrency. Subscribe the functions to the SNS topic.
- D . Create an Amazon Simple Queue Service (Amazon SQS) queue. Send user requests to the SQS queue. Configure the Lambda functions with reserved concurrency. Set the SQS queue as the event source trigger for the functions.
A
Explanation:
TooManyRequestsException errors occur when Lambda exceeds concurrency limits. The recommended pattern is to use Amazon SQS with Lambda to decouple and buffer traffic, ensuring that bursts of requests are queued and processed smoothly. Enabling provisioned concurrency for Lambda ensures that functions are pre-initialized and ready to handle spikes in load with low latency. Step Functions (B) is designed for workflow orchestration, not high-throughput request buffering. SNS with Lambda (C) does not provide buffering and may overwhelm Lambda during bursts. Reserved concurrency (D) limits function scaling instead of improving resilience.
Therefore, option A provides a scalable and resilient solution, minimizing errors during traffic surges.
Reference:
• AWS Lambda Developer Guide ― Provisioned concurrency and scaling with SQS
• Amazon SQS User Guide ― Using Lambda with Amazon SQS
• AWS Well-Architected Framework ― Reliability Pillar
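The buffering behavior behind option A can be shown with a toy simulation: requests that arrive in a burst wait in the queue instead of being rejected, and the functions drain them at the provisioned concurrency. The numbers here are illustrative, not Lambda defaults:

```python
from collections import deque

def rounds_to_drain(burst_size: int, provisioned_concurrency: int) -> int:
    """Simulate draining a burst: each round processes at most
    `provisioned_concurrency` queued requests; nothing is dropped."""
    queue = deque(range(burst_size))
    rounds = 0
    while queue:
        for _ in range(min(provisioned_concurrency, len(queue))):
            queue.popleft()
        rounds += 1
    return rounds
```

A burst of 1,000 requests against 100 pre-warmed concurrent executions drains in 10 rounds with zero errors, which is the queue-and-drain pattern the answer relies on.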
A company stores data in an on-premises Oracle relational database. The company needs to make the data available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS Site-to-Site VPN connection to connect its on-premises network to AWS.
The company must capture the changes that occur to the source database during the migration to Aurora PostgreSQL.
Which solution will meet these requirements?
- A . Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full-load migration task to migrate the data.
- B . Use AWS DataSync to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
- C . Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the ongoing changes.
- D . Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
C
Explanation:
For the migration of data from an on-premises Oracle database to Amazon Aurora PostgreSQL, this solution effectively handles schema conversion, data migration, and ongoing data replication.
AWS Schema Conversion Tool (SCT): SCT is used to convert the Oracle database schema to a format compatible with Aurora PostgreSQL. This tool automatically converts the database schema and code objects, like stored procedures, to the target database engine.
AWS Database Migration Service (DMS): DMS is employed to perform the data migration. It supports both full-load migrations (for initial data transfer) and continuous replication of ongoing changes (Change Data Capture, or CDC). This ensures that any updates to the Oracle database during the migration are captured and applied to the Aurora PostgreSQL database, minimizing downtime.
Why Not Other Options?
Option A (SCT + DMS full-load only): This option does not capture ongoing changes, which is crucial for a live database migration to ensure data consistency.
Option B (DataSync + S3): AWS DataSync is more suited for file transfers rather than database migrations, and it doesn’t support ongoing change replication.
Option D (Snowball + S3): Snowball is typically used for large-scale data transfers that don’t require continuous synchronization, making it less suitable for this scenario where ongoing changes must be captured.
Reference:
AWS Schema Conversion Tool: guidance on using SCT for database schema conversions.
AWS Database Migration Service: detailed documentation on using DMS for data migrations and ongoing replication.
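A minimal sketch of the DMS task settings for option C. The key names mirror the CreateReplicationTask API, and "full-load-and-cdc" is the migration type that performs the initial copy and then replicates ongoing changes; the identifiers and ARNs are placeholders:

```python
# Hypothetical DMS task definition; ARNs and identifiers are examples only.
replication_task = {
    "ReplicationTaskIdentifier": "oracle-to-aurora-pg",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:oracle-src",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:aurora-pg",
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:123456789012:rep:instance-1",
    # Full load for the existing data, then CDC for changes made during
    # the migration over the Site-to-Site VPN.
    "MigrationType": "full-load-and-cdc",
    "TableMappings": '{"rules": []}',  # table selection rules would go here
}
```

Option A would be the same task with "full-load" only, which is exactly what fails the change-capture requirement.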
A software company needs to upgrade a critical web application. The application is hosted in a public subnet. The EC2 instance runs a MySQL database. The application’s DNS records are published in an Amazon Route 53 zone.
A solutions architect must reconfigure the application to be scalable and highly available. The solutions architect must also reduce MySQL read latency.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Launch a second EC2 instance in a second AWS Region. Use a Route 53 failover routing policy to redirect the traffic to the second EC2 instance.
- B . Create and configure an Auto Scaling group to launch private EC2 instances in multiple Availability Zones. Add the instances to a target group behind a new Application Load Balancer.
- C . Migrate the database to an Amazon Aurora MySQL cluster. Create the primary DB instance and reader DB instance in separate Availability Zones.
- D . Create and configure an Auto Scaling group to launch private EC2 instances in multiple AWS Regions. Add the instances to a target group behind a new Application Load Balancer.
- E . Migrate the database to an Amazon Aurora MySQL cluster with cross-Region read replicas.
B, C
Explanation:
To improve scalability and availability, EC2 Auto Scaling across multiple Availability Zones with an Application Load Balancer ensures resilient infrastructure. Migrating to Amazon Aurora MySQL with reader endpoints reduces read latency by offloading read traffic to replicas in other AZs, while also increasing high availability.
Reference: AWS Documentation ― Aurora Multi-AZ and EC2 Auto Scaling with ALB
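Getting the read-latency benefit from option C depends on the application sending reads to the cluster's reader endpoint rather than the writer. A sketch of that routing decision, with hypothetical endpoint hostnames:

```python
# Hypothetical Aurora cluster endpoints (the -ro- name is the reader endpoint).
WRITER_ENDPOINT = "appdb.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "appdb.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    # Route read-only statements to the reader replicas; everything else
    # (INSERT/UPDATE/DELETE/DDL) must go to the writer.
    is_read = sql.lstrip().lower().startswith("select")
    return READER_ENDPOINT if is_read else WRITER_ENDPOINT
```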
A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.
What is the MOST cost-effective method to establish this type of connection?
- A . Implement a client VPN
- B . Implement AWS Direct Connect.
- C . Implement a bastion host on Amazon EC2.
- D . Implement an AWS Site-to-Site VPN connection.
D
Explanation:
AWS Site-to-Site VPN: This provides a secure and encrypted connection between an on-premises environment and AWS. It is a cost-effective solution suitable for low bandwidth and small traffic needs.
Quick Setup:
Site-to-Site VPN can be quickly set up by configuring a virtual private gateway on the AWS side and a customer gateway on the on-premises side.
It uses standard IPsec protocol to establish the VPN tunnel.
Cost-Effectiveness: Compared to AWS Direct Connect, which requires dedicated physical connections and higher setup costs, a Site-to-Site VPN is less expensive and easier to implement for smaller traffic requirements.
Reference: AWS Site-to-Site VPN
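The two AWS-side pieces described above can be sketched as plain parameter sets. The field names follow the EC2 VPN APIs; the IP address and ASN are examples, not real values:

```python
# Customer gateway: describes the on-premises VPN device.
customer_gateway = {
    "Type": "ipsec.1",           # Site-to-Site VPN tunnels use IPsec
    "PublicIp": "203.0.113.12",  # example on-premises device address
    "BgpAsn": 65000,             # example private ASN for dynamic routing
}

# The VPN connection ties the customer gateway to a virtual private gateway.
vpn_connection = {
    "Type": "ipsec.1",
    "Options": {"StaticRoutesOnly": False},  # use BGP to exchange routes
}
```

Both resources can be created in minutes, which is what makes this the quick, low-cost option relative to provisioning a Direct Connect circuit.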
A solutions architect needs to save a particular automated database snapshot from an Amazon RDS for Microsoft SQL Server DB instance for longer than the maximum retention period for automated backups.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Create a manual copy of the snapshot.
- B . Export the contents of the snapshot to an Amazon S3 bucket.
- C . Change the retention period of the snapshot to 45 days.
- D . Create a native SQL Server backup. Save the backup to an Amazon S3 bucket.
A
Explanation:
Creating a manual copy of the automated snapshot is the most operationally efficient option because it directly meets the requirement―retain a specific snapshot beyond the automated retention window―with the least added process and tooling. In Amazon RDS, automated backups and their snapshots are retained only for the configured backup retention period (up to the service maximum). When that retention period is exceeded, older automated snapshots are removed automatically. However, manual snapshots are retained until you explicitly delete them, so converting (copying) an automated snapshot to a manual snapshot is the standard operational approach to keep a point-in-time backup for long-term retention.
Option C is incorrect because automated backup retention cannot be extended beyond the RDS maximum of 35 days, so a 45-day retention period is not a valid setting, and a retention period would not preserve one specific snapshot indefinitely in any case.
Option B (export to S3) is not the most operationally efficient for the stated goal: exporting is a different workflow (often used for analytics, archiving in open formats, or cross-tool usage) and introduces extra steps, format considerations, and ongoing management in S3.
Option D is also heavier operationally: native SQL Server backups require managing backup jobs, storage layout, restores, and permissions, and it shifts responsibility to the customer for operational correctness.
Therefore, A is the simplest and most AWS-native way to preserve an RDS snapshot long-term with minimal operational overhead.
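The copy itself is a single operation. The parameters below mirror the RDS CopyDBSnapshot API, with example identifiers; automated snapshot names carry the `rds:` prefix, and the copy becomes a manual snapshot kept until explicitly deleted:

```python
# Hypothetical CopyDBSnapshot parameters; identifiers are examples.
copy_snapshot = {
    "SourceDBSnapshotIdentifier": "rds:sqlserver-db-2025-06-01-06-10",  # automated
    "TargetDBSnapshotIdentifier": "sqlserver-db-keep-long-term",        # manual copy
}
```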
A company is developing a monolithic Microsoft Windows based application that will run on Amazon EC2 instances. The application will run long data-processing jobs that must not be interrupted. The company has modeled expected usage growth for the next 3 years. The company wants to optimize costs for the EC2 instances during the 3-year growth period.
Which solution will meet these requirements?
- A . Purchase a Compute Savings Plan with a 3-year commitment. Adjust the hourly commitment based on the plan recommendations.
- B . Purchase an EC2 Instance Savings Plan with a 3-year commitment. Adjust the hourly commitment based on the plan recommendations.
- C . Purchase a Compute Savings Plan with a 1-year commitment. Renew the purchase and adjust the capacity each year as necessary.
- D . Deploy the application on EC2 Spot Instances. Use an Auto Scaling group with a minimum size of 1 to ensure that the application is always running.
A
Explanation:
For steady, predictable EC2 usage with potential changes in instance families over time, AWS recommends Savings Plans. Compute Savings Plans “apply to any EC2 instance regardless of region, instance family, operating system, or tenancy,” and also apply to AWS Fargate and AWS Lambda, delivering the most flexibility over a multi-year horizon. A 3-year term provides the highest discount
among Savings Plans for long-lived workloads. EC2 Instance Savings Plans are limited to a chosen instance family in a region; as needs evolve (e.g., size or family changes), discounts may not fully apply. Spot Instances are not appropriate for long, interruption-sensitive jobs because Spot capacity can be reclaimed with short notice. Therefore, a Compute Savings Plan (3-year) best matches cost optimization with flexibility for growth and changes.
Reference: AWS Cost Management ― Savings Plans (Compute vs. EC2 Instance), EC2 purchasing options guidance, Well-Architected Cost Optimization (choose pricing models to match workload).
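The cost trade-off can be sketched with a toy model: usage covered by the commitment is billed at the discounted rate, and any overflow is billed on demand. The rates and the 40 percent discount below are illustrative, not published Savings Plans pricing:

```python
def blended_hourly_cost(on_demand_rate: float, usage_hours: float,
                        committed_hours: float, discount: float) -> float:
    """Toy Savings Plan model: covered usage gets the discount,
    anything above the commitment falls back to on-demand pricing."""
    covered = min(usage_hours, committed_hours)
    overflow = usage_hours - covered
    return covered * on_demand_rate * (1 - discount) + overflow * on_demand_rate
```

With a $1.00/hr rate, 10 instance-hours fully covered at a 40% discount cost $6.00/hr, and 2 uncovered hours add $2.00 at on-demand, which is why the plan recommendations matter as usage grows.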
A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate the data to an AWS managed service for development and maintenance of the application data. The solution must require minimal operational support and provide immutable, cryptographically verifiable logs of data changes.
Which solution will meet these requirements MOST cost-effectively?
- A . Copy the records from the application into an Amazon Redshift cluster.
- B . Copy the records from the application into an Amazon Neptune cluster.
- C . Copy the records from the application into an Amazon Timestream database.
- D . Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.
D
Explanation:
Amazon QLDB is the most cost-effective and suitable service for maintaining immutable, cryptographically verifiable logs of data changes. QLDB provides a fully managed ledger database with a built-in cryptographic hash chain, making it ideal for recording changes to accounting records, ensuring data integrity and security.
QLDB reduces operational overhead by offering fully managed services, so there’s no need for server management, and it’s built specifically to ensure immutability and verifiability, making it the best fit for the given requirements.
Option A (Redshift): Redshift is designed for analytics and not for immutable, cryptographically verifiable logs.
Option B (Neptune): Neptune is a graph database, which is not suitable for this use case.
Option C (Timestream): Timestream is a time series database optimized for time-stamped data, but it does not provide immutable or cryptographically verifiable logs.
Reference:
Amazon QLDB
How QLDB Works
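The integrity property QLDB provides can be illustrated with a toy hash chain (this is not QLDB's actual digest algorithm): each digest covers the previous digest plus the new record, so tampering with any earlier entry changes every digest that follows:

```python
import hashlib

def ledger_digest(entries: list) -> str:
    """Chain SHA-256 digests over ledger entries in order."""
    digest = b""
    for entry in entries:
        # Each step binds the new entry to everything before it.
        digest = hashlib.sha256(digest + entry.encode()).digest()
    return digest.hex()
```

Comparing the final digest against a previously published value is what makes the log cryptographically verifiable: any edit to history produces a different digest.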
A company recently launched a new product that is highly available in one AWS Region. The product consists of an application that runs on Amazon Elastic Container Service (Amazon ECS), a public Application Load Balancer (ALB), and an Amazon DynamoDB table. The company wants a solution that will make the application highly available across Regions.
Which combination of steps will meet these requirements? (Select THREE.)
- A . In a different Region, deploy the application to a new ECS cluster that is accessible through a new ALB.
- B . Create an Amazon Route 53 failover record.
- C . Modify the DynamoDB table to create a DynamoDB global table.
- D . In the same Region, deploy the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is accessible through a new ALB.
- E . Modify the DynamoDB table to create global secondary indexes (GSIs).
- F . Create an AWS PrivateLink endpoint for the application.
A, B, C
Explanation:
To make the application highly available across regions:
Deploy the application in a different region using a new ECS cluster and ALB to ensure regional redundancy.
Use Route 53 failover routing to automatically direct traffic to the healthy region in case of failure.
Use DynamoDB global tables to ensure the database is replicated and available across multiple regions, supporting read and write operations in each region.
Option D (EKS cluster in the same region): This does not provide regional redundancy.
Option E (Global Secondary Indexes): GSIs improve query performance but do not provide multi-region availability.
Option F (PrivateLink): PrivateLink is for secure communication, not for cross-region high availability.
Reference:
DynamoDB Global Tables
Amazon ECS with ALB
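The failover record from step B reduces to a health-check driven choice between the two Regions' ALBs. A sketch of that behavior, with hypothetical DNS names:

```python
# Hypothetical ALB DNS names for the primary and secondary Regions.
PRIMARY_ALB = "app-primary.us-east-1.elb.amazonaws.com"
SECONDARY_ALB = "app-secondary.eu-west-1.elb.amazonaws.com"

def resolve(primary_healthy: bool) -> str:
    # Route 53 failover routing answers with the primary record while its
    # health check passes, and with the secondary record when it fails.
    return PRIMARY_ALB if primary_healthy else SECONDARY_ALB
```

Because DynamoDB global tables accept writes in both Regions, the secondary stack can serve traffic immediately after failover without a database promotion step.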
