Practice Free SAA-C03 Exam Online Questions
A company uses Amazon Redshift to store structured data and Amazon S3 to store unstructured data. The company wants to analyze the stored data and create business intelligence reports. The company needs a data visualization solution that is compatible with Amazon Redshift and Amazon S3.
Which solution will meet these requirements?
- A . Use Amazon Redshift query editor v2 to analyze data stored in Amazon Redshift. Use Amazon Athena to analyze data stored in Amazon S3. Use Amazon QuickSight to access Amazon Redshift and Athena, visualize the data analyses, and create business intelligence reports.
- B . Use Amazon Redshift Serverless to analyze data stored in Amazon Redshift. Use Amazon S3 Object Lambda to analyze data stored in Amazon S3. Use Amazon Managed Grafana to access Amazon Redshift and Object Lambda, visualize the data analyses, and create business intelligence reports.
- C . Use Amazon Redshift Spectrum to analyze data stored in Amazon Redshift. Use Amazon Athena to analyze data stored in Amazon S3. Use Amazon QuickSight to access Amazon Redshift and Athena, visualize the data analyses, and create business intelligence reports.
- D . Use Amazon OpenSearch Service to analyze data stored in Amazon Redshift and Amazon S3. Use Amazon Managed Grafana to access OpenSearch Service, visualize the data analyses, and create business intelligence reports.
C
Explanation:
This solution leverages:
Amazon Redshift Spectrum to query S3 data directly from Redshift.
Amazon Athena for ad-hoc analysis of S3 data.
Amazon QuickSight for unified visualization from multiple data sources.
“Redshift Spectrum enables you to run queries against exabytes of data in Amazon S3 without having to load or transform the data.”
“QuickSight supports both Amazon Redshift and Amazon Athena as data sources.”
― Redshift Spectrum
― Amazon QuickSight Supported Data Sources
This architecture allows scalable querying and visualization with minimum ETL overhead, ideal for BI dashboards.
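For readers who want to see how the pieces of option C connect, the sketch below is a minimal, hypothetical boto3 example (not part of the exam material): it runs an ad-hoc Athena query over data in S3 and registers Athena as a QuickSight data source. The bucket, database, table, workgroup, and account ID are all placeholders.

```python
import boto3

# Hypothetical names -- replace with real resources in your account.
ACCOUNT_ID = "111122223333"
ATHENA_DATABASE = "example_s3_db"
RESULT_LOCATION = "s3://example-athena-results/"

athena = boto3.client("athena")
quicksight = boto3.client("quicksight")

# Ad-hoc analysis of data stored in S3 via Athena.
query = athena.start_query_execution(
    QueryString="SELECT * FROM example_table LIMIT 10",
    QueryExecutionContext={"Database": ATHENA_DATABASE},
    ResultConfiguration={"OutputLocation": RESULT_LOCATION},
)
print("Athena query started:", query["QueryExecutionId"])

# Register Athena as a QuickSight data source so analysts can build BI reports.
quicksight.create_data_source(
    AwsAccountId=ACCOUNT_ID,
    DataSourceId="athena-s3-analytics",
    Name="Athena over S3",
    Type="ATHENA",
    DataSourceParameters={"AthenaParameters": {"WorkGroup": "primary"}},
)
```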
Incorrect A: The query editor is not a BI tool.
B, D: Grafana is better for time-series data, not structured analytics or BI reports.
Reference: Redshift Spectrum
Amazon QuickSight Integration
A company runs an application on Amazon EC2 instances behind an Application Load Balancer (ALB). The company wants to create a public API for the application that uses JSON Web Tokens (JWT) for authentication. The company wants the API to integrate directly with the ALB.
Which solution will meet these requirements?
- A . Use Amazon API Gateway to create a REST API.
- B . Use Amazon API Gateway to create an HTTP API.
- C . Use Amazon API Gateway to create a WebSocket API.
- D . Use Amazon API Gateway to create a gRPC API.
B
Explanation:
Amazon API Gateway offers REST, HTTP, and WebSocket APIs. HTTP APIs are the newer, lightweight option and support JWT authorizers natively, enabling secure, scalable authentication for APIs that use JSON Web Tokens.
HTTP APIs can also route requests to an ALB through a private integration, providing direct connectivity and simplified API management with JWT authentication built in.
REST APIs can validate JWTs only through Lambda or Amazon Cognito authorizers; they are more feature-rich and complex, often used for legacy or advanced use cases, and typically carry higher cost and latency. WebSocket APIs are for real-time, bidirectional communication, which is not requested here. API Gateway does not provide a native gRPC API type, so option D is not viable.
Therefore, HTTP API with JWT authorizers is the best fit for this use case.
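As a rough illustration of how an HTTP API with a JWT authorizer can front an ALB, the hedged boto3 sketch below creates the API, a JWT authorizer, and a private integration to an ALB listener through a VPC link. All ARNs, IDs, and identity provider settings are placeholder assumptions; a public ALB could alternatively be proxied by its DNS endpoint.

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Placeholder identifiers -- substitute your own ALB listener ARN, VPC link ID,
# and identity provider settings.
ALB_LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example-alb/abc/def"
VPC_LINK_ID = "vpclink-example"
ISSUER_URL = "https://example-idp.example.com/"

api = apigw.create_api(Name="public-api", ProtocolType="HTTP")

# JWT authorizer: API Gateway validates tokens from the Authorization header.
authorizer = apigw.create_authorizer(
    ApiId=api["ApiId"],
    AuthorizerType="JWT",
    IdentitySource=["$request.header.Authorization"],
    Name="jwt-authorizer",
    JwtConfiguration={"Audience": ["example-audience"], "Issuer": ISSUER_URL},
)

# Private integration that proxies requests to the ALB listener via a VPC link.
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="HTTP_PROXY",
    IntegrationUri=ALB_LISTENER_ARN,
    IntegrationMethod="ANY",
    ConnectionType="VPC_LINK",
    ConnectionId=VPC_LINK_ID,
    PayloadFormatVersion="1.0",
)

# Route that requires a valid JWT before requests reach the ALB-backed application.
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="ANY /app/{proxy+}",
    AuthorizationType="JWT",
    AuthorizerId=authorizer["AuthorizerId"],
    Target=f"integrations/{integration['IntegrationId']}",
)
```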
Reference: AWS Well-Architected Framework - Security Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
Amazon API Gateway HTTP APIs (https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api.html)
Using JWT Authorizers with HTTP APIs (https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html)
A company is redesigning its data intake process. In the existing process, the company receives data transfers and uploads the data to an Amazon S3 bucket every night. The company uses AWS Glue crawlers and jobs to prepare the data for a machine learning (ML) workflow.
The company needs a low-code solution to run multiple AWS Glue jobs in sequence and provide a visual workflow.
Which solution will meet these requirements?
- A . Use an Amazon EC2 instance to run a cron job and a script to check for the S3 files and call the AWS Glue jobs. Create an Amazon CloudWatch dashboard to visualize the workflow.
- B . Use Amazon EventBridge to call an AWS Step Functions workflow for the AWS Glue jobs. Use Step Functions to create a visual workflow.
- C . Use S3 Event Notifications to invoke a series of AWS Lambda functions and AWS Glue jobs in sequence. Use Amazon QuickSight to create a visual workflow.
- D . Create an Amazon Elastic Container Service (Amazon ECS) task that contains a Python script that manages the AWS Glue jobs and creates a visual workflow. Use Amazon EventBridge Scheduler to start the ECS task.
B
Explanation:
AWS Step Functions provides a low-code, fully managed workflow service with a visual interface to orchestrate AWS Glue jobs in sequence. Step Functions integrates natively with AWS Glue and can be triggered by Amazon EventBridge based on events or schedules.
AWS Documentation Extract:
“AWS Step Functions makes it easy to coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Step Functions provides visual workflows and integrates with AWS Glue for ETL orchestration.”
(Source: AWS Step Functions documentation)
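A minimal sketch of such an orchestration, assuming hypothetical Glue job names and an existing Step Functions execution role, might look like the following. The `glue:startJobRun.sync` integration makes each state wait for its Glue job to finish before the next one starts; an EventBridge rule or schedule would then start executions of this state machine.

```python
import boto3
import json

sfn = boto3.client("stepfunctions")

# Two sequential Glue jobs; ".sync" waits for each job to complete before moving on.
# Job names and the execution role ARN are placeholders.
definition = {
    "Comment": "Run AWS Glue jobs in sequence",
    "StartAt": "PrepareRawData",
    "States": {
        "PrepareRawData": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "example-prepare-job"},
            "Next": "TransformForML",
        },
        "TransformForML": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "example-transform-job"},
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="glue-etl-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/example-stepfunctions-role",
)
```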
A: Manual scripting and dashboards do not provide a low-code or integrated visual workflow.
C: QuickSight is for BI, not workflow visualization.
D: ECS adds unnecessary complexity.
Reference: AWS Certified Solutions Architect Official Study Guide, Automation with Step Functions.
A company is migrating a data processing application to AWS. The application processes several short-lived batch jobs that cannot be disrupted. The process generates data after each batch job finishes running. The company accesses the data for 30 days following data generation. After 30 days, the company stores the data for 2 years.
The company wants to optimize costs for the application and data storage.
Which solution will meet these requirements?
- A . Use Amazon EC2 Spot Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Instant Retrieval after 30 days. Configure a bucket policy to delete the data after 2 years.
- B . Use Amazon EC2 On-Demand Instances to run the application. Store the data in Amazon S3 Glacier Instant Retrieval. Move the data to S3 Glacier Deep Archive after 30 days. Configure an S3 Lifecycle configuration to delete the data after 2 years.
- C . Use Amazon EC2 Spot Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Flexible Retrieval after 30 days. Configure a bucket policy to delete the data after 2 years.
- D . Use Amazon EC2 On-Demand Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Deep Archive after 30 days. Configure an S3 Lifecycle configuration to delete the data after 2 years.
D
Explanation:
Amazon EC2 On-Demand Instances: Since the batch jobs cannot be disrupted, On-Demand Instances provide the necessary reliability and availability.
Amazon S3 Standard: Storing data in S3 Standard for the first 30 days ensures quick and frequent access.
S3 Glacier Deep Archive: After 30 days, moving data to S3 Glacier Deep Archive significantly reduces storage costs for data that is rarely accessed.
S3 Lifecycle Configuration: Automating the transition and deletion of objects using lifecycle policies ensures cost optimization and compliance with data retention requirements.
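A minimal sketch of the lifecycle configuration described above, assuming a placeholder bucket name and approximating the 2-year retention as 730 days, could look like this:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name; the empty prefix filter applies the rule to all objects.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-batch-output-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # Rarely accessed after 30 days -> move to the cheapest archive tier.
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
                # Retain for roughly 2 years (730 days), then delete.
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```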
Reference: Amazon S3 Storage Classes
Managing your storage lifecycle (AWS Documentation)
A company uses AWS Organizations to manage multiple AWS accounts. The company needs a secure, event-driven architecture in which specific Amazon SNS topics in Account A can publish messages to specific Amazon SQS queues in Account B.
Which solution meets these requirements while maintaining least privilege?
- A . Create a new IAM role in Account A that can publish to any SQS queue. Share the role ARN with Account B.
- B . Configure the SNS topics to publish to any queue. Encrypt the queue with an AWS KMS key.
- C . Modify the SQS queue policies in Account B to allow only specific SNS topic ARNs from Account A to publish messages. Ensure the SNS topics have publish permissions for the specific queue ARN.
- D . Create a shared IAM role across both accounts with permission to publish to all SQS queues. Enable cross-account access.
C
Explanation:
AWS documentation states that the correct and least-privilege method for cross-account SNS-to-SQS integration is:
• Add specific SNS topic ARNs to the SQS queue policy.
• Allow only those topics to publish messages to the queue.
• Ensure SNS has permission to publish to the specific queue ARN. This ensures strict scoping and adheres to least privilege.
Options A and D grant overly broad permissions.
Option B allows publishing to any queue, which violates least privilege.
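To make the least-privilege pattern concrete, the hedged boto3 sketch below applies a queue policy in Account B that permits only one specific topic in Account A to send messages to one specific queue. The account IDs, ARNs, and queue URL are placeholders.

```python
import boto3
import json

sqs = boto3.client("sqs")

# Placeholder ARNs/URLs -- Account A owns the topic, Account B owns the queue.
TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:example-topic"
QUEUE_ARN = "arn:aws:sqs:us-east-1:222222222222:example-queue"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/222222222222/example-queue"

# Least-privilege queue policy: only the SNS service, and only when acting on
# behalf of this specific topic, may send messages to this specific queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSpecificTopicOnly",
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": QUEUE_ARN,
            "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
        }
    ],
}

sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={"Policy": json.dumps(policy)},
)
```

The subscription itself is then created with `sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=QUEUE_ARN)`.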
A company has stored millions of objects across multiple prefixes in an Amazon S3 bucket by using the Amazon S3 Glacier Deep Archive storage class. The company needs to delete all data older than 3 years except for a subset of data that must be retained. The company has identified the data that must be retained and wants to implement a serverless solution.
Which solution will meet these requirements?
- A . Use S3 Inventory to list all objects. Use the AWS CLI to create a script that runs on an Amazon EC2 instance that deletes objects from the inventory list.
- B . Use AWS Batch to delete objects older than 3 years except for the data that must be retained
- C . Provision an AWS Glue crawler to query objects older than 3 years. Save the manifest file of old objects. Create a script to delete objects in the manifest.
- D . Enable S3 Inventory. Create an AWS Lambda function to filter and delete objects. Invoke the Lambda function with S3 Batch Operations to delete objects by using the inventory reports.
D
Explanation:
To meet the requirement of deleting objects older than 3 years while retaining certain data, this solution leverages serverless technologies to minimize operational overhead.
S3 Inventory: S3 Inventory provides a flat file that lists all the objects in an S3 bucket and their metadata, which can be configured to include data such as the last modified date. This inventory can be generated daily or weekly.
AWS Lambda Function: A Lambda function can be created to process the S3 Inventory report, filtering out the objects that need to be retained and identifying those that should be deleted.
S3 Batch Operations: S3 Batch Operations can execute tasks such as object deletion at scale. By invoking the Lambda function through S3 Batch Operations, you can automate the process of deleting the identified objects, ensuring that the solution is serverless and requires minimal operational management.
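A minimal sketch of such a Lambda handler, assuming the retention rule can be expressed as a key prefix (a simplification of whatever criteria the company actually identified), could look like this. S3 Batch Operations invokes the function once per object listed in the inventory-based manifest.

```python
import boto3
from urllib.parse import unquote_plus

s3 = boto3.client("s3")

# Hypothetical retention rule: anything under this prefix must be kept.
RETAINED_PREFIX = "retain/"


def handler(event, context):
    """Invoked by S3 Batch Operations for the objects in the inventory manifest."""
    results = []
    for task in event["tasks"]:
        bucket = task["s3BucketArn"].split(":::")[-1]
        key = unquote_plus(task["s3Key"])  # manifest keys are URL-encoded

        if key.startswith(RETAINED_PREFIX):
            result_string = "Retained"
        else:
            s3.delete_object(Bucket=bucket, Key=key)
            result_string = "Deleted"

        results.append(
            {"taskId": task["taskId"], "resultCode": "Succeeded", "resultString": result_string}
        )

    # Response format expected by S3 Batch Operations.
    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": results,
    }
```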
Why Not Other Options?
Option A (AWS CLI script on EC2): Running a script on an EC2 instance adds unnecessary operational overhead and is not serverless.
Option B (AWS Batch): AWS Batch is designed for running large-scale batch computing workloads, which is overkill for this scenario.
Option C (AWS Glue + script): AWS Glue is more suited for ETL tasks, and this approach would add unnecessary complexity compared to the serverless Lambda solution.
AWS Reference: Amazon S3 Inventory - Information on how to set up and use S3 Inventory.
S3 Batch Operations - Documentation on how to perform bulk operations on S3 objects using S3 Batch Operations.
An ecommerce company is migrating its on-premises workload to the AWS Cloud. The workload currently consists of a web application and a backend Microsoft SQL database for storage.
The company expects a high volume of customers during a promotional event. The new infrastructure in the AWS Cloud must be highly available and scalable.
Which solution will meet these requirements with the LEAST administrative overhead?
- A . Migrate the web application to two Amazon EC2 instances across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS for Microsoft SQL Server with read replicas in both Availability Zones.
- B . Migrate the web application to an Amazon EC2 instance that runs in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to two EC2 instances across separate AWS Regions with database replication.
- C . Migrate the web application to Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS with Multi-AZ deployment.
- D . Migrate the web application to three Amazon EC2 instances across three Availability Zones behind an Application Load Balancer. Migrate the database to three EC2 instances across three Availability Zones.
C
Explanation:
To ensure high availability and scalability, the web application should run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer (ALB). The database should be migrated to Amazon RDS with a Multi-AZ deployment, which provides fault tolerance and automatic failover if an Availability Zone fails. This setup minimizes administrative overhead while meeting the company's requirements for high availability and scalability.
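As a hedged sketch of the two managed building blocks in option C, the boto3 example below creates a Multi-AZ RDS for SQL Server instance and an Auto Scaling group that registers instances with an ALB target group. Every identifier, subnet, instance class, and credential is a placeholder.

```python
import boto3

rds = boto3.client("rds")
autoscaling = boto3.client("autoscaling")

# Database tier: Multi-AZ keeps a synchronous standby in a second Availability
# Zone with automatic failover. All values are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="example-sqlserver",
    Engine="sqlserver-se",
    LicenseModel="license-included",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,
    MasterUsername="admin",
    MasterUserPassword="example-password-123",  # use Secrets Manager in practice
    MultiAZ=True,
)

# Web tier: the Auto Scaling group spans subnets in two AZs and registers its
# instances with the ALB's target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="example-web-asg",
    LaunchTemplate={"LaunchTemplateName": "example-web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/abc123"
    ],
)
```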
Option A: Read replicas are typically used for scaling read operations, and Multi-AZ provides better availability for a transactional database.
Option B: Replicating across AWS Regions adds unnecessary complexity for a single web application.
Option D: EC2 instances across three Availability Zones add unnecessary complexity for this scenario.
AWS Reference: Auto Scaling Groups
Amazon RDS Multi-AZ
A finance company uses backup software to back up its data to physical tape storage on-premises. To comply with regulations, the company needs to store the data for 7 years. The company must be able to restore archived data within one week when necessary.
The company wants to migrate the backup data to AWS to reduce costs. The company does not want to change the current backup software.
Which solution will meet these requirements MOST cost-effectively?
- A . Use AWS Storage Gateway Tape Gateway to copy the data to virtual tapes. Use AWS DataSync to migrate the virtual tapes to the Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Change the target of the backup software to S3 Standard-IA.
- B . Convert the physical tapes to virtual tapes. Use AWS DataSync to migrate the virtual tapes to Amazon S3 Glacier Flexible Retrieval. Change the target of the backup software to the S3 Glacier Flexible Retrieval.
- C . Use AWS Storage Gateway Tape Gateway to copy the data to virtual tapes. Migrate the virtual tapes to Amazon S3 Glacier Deep Archive. Change the target of the backup software to the virtual tapes.
- D . Convert the physical tapes to virtual tapes. Use AWS Snowball Edge storage-optimized devices to migrate the virtual tapes to Amazon S3 Glacier Flexible Retrieval. Change the target of the backup software to S3 Glacier Flexible Retrieval.
C
Explanation:
AWS Storage Gateway Tape Gateway provides a seamless way to migrate backup data to AWS without requiring changes to the backup software. Migrating to S3 Glacier Deep Archive ensures long-term, cost-effective storage for data that rarely needs retrieval.
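A minimal sketch of provisioning a virtual tape directly into the Deep Archive pool on an existing Tape Gateway, using placeholder gateway and barcode values, could look like this:

```python
import boto3

sgw = boto3.client("storagegateway")

# Placeholder gateway ARN and barcode. PoolId "DEEP_ARCHIVE" stores the virtual
# tape in S3 Glacier Deep Archive once the backup software ejects/archives it.
sgw.create_tape_with_barcode(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    TapeSizeInBytes=2 * 1024**4,  # 2 TiB virtual tape
    TapeBarcode="BACKUP001",
    PoolId="DEEP_ARCHIVE",
)
```

The backup software keeps writing to the gateway's virtual tape library (iSCSI VTL) interface, so no change to the software itself is required.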
Option A: S3 Standard-IA is more expensive than Glacier for long-term storage.
Options B and D: Glacier Flexible Retrieval is costlier than Glacier Deep Archive for archival use cases with low retrieval frequency.
AWS Documentation Reference: AWS Storage Gateway Tape Gateway
S3 Glacier Storage Classes
A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company’s applications that use the database. The migrated database also must scale automatically during periods of increased demand.
Which migration solution will meet these requirements?
- A . Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
- B . Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
- C . Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
- D . Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.
C
Explanation:
The key requirements are (1) MySQL compatibility for existing applications and (2) the ability to scale automatically during demand spikes for a transactional workload. Amazon Aurora (MySQL-compatible) satisfies compatibility while providing managed database capabilities designed for high performance and scalability. Using AWS Database Migration Service (DMS) is operationally efficient for migration because it supports moving data from on-premises databases to AWS with options for continuous replication to minimize downtime, which is commonly required for transactional systems.
Turning on Aurora Auto Scaling (for Aurora Replicas) allows the database tier to automatically add or remove read replicas based on demand, which helps absorb increased read traffic without manual intervention. This meets the “scale automatically” requirement more directly than simply scaling storage. (Write scaling is addressed through Aurora’s architecture and capacity planning; for many workloads, scaling reads is the dominant elasticity need.)
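A hedged sketch of enabling Aurora Auto Scaling for replicas through Application Auto Scaling, with a placeholder cluster name and an assumed CPU target, could look like this:

```python
import boto3

aas = boto3.client("application-autoscaling")

CLUSTER_RESOURCE_ID = "cluster:example-aurora-mysql"  # placeholder cluster name

# Let Application Auto Scaling manage the number of Aurora Replicas.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=CLUSTER_RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Target-tracking policy: add replicas when average reader CPU exceeds ~60%.
aas.put_scaling_policy(
    PolicyName="aurora-reader-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId=CLUSTER_RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```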
Option A only mentions configuring storage scaling, which does not address compute scaling during load increases; RDS storage autoscaling helps prevent running out of space but does not automatically add compute capacity in response to demand.
Option B is incorrect because Amazon Redshift is a data warehouse designed for analytics/OLAP, not a transactional MySQL OLTP replacement, and mysqldump to Redshift does not preserve application compatibility.
Option D changes the data model by migrating to DynamoDB, which is not MySQL-compatible and would require application redesign, violating the compatibility requirement.
Therefore, C best satisfies compatibility and automatic scaling needs while using managed services that reduce operational overhead during and after migration.
A company runs an application on Amazon EC2 instances that have instance store volumes attached. The application uses Amazon Elastic File System (Amazon EFS) to store files that are shared across a cluster of Linux servers. The shared files are at least 1 GB in size.
The company accesses the files often for the first 7 days after creation. The files must remain readily available after the first 7 days.
The company wants to optimize costs for the application.
Which solution will meet these requirements?
- A . Configure an AWS Storage Gateway Amazon S3 File Gateway to cache frequently accessed files locally. Store older files in Amazon S3.
- B . Move the files from Amazon EFS, and store the files locally on each EC2 instance.
- C . Configure a lifecycle policy to move the files to the EFS Infrequent Access (IA) storage class after 7 days.
- D . Deploy AWS DataSync to automatically move files older than 7 days to Amazon S3 Glacier Deep Archive.
C
Explanation:
Amazon EFS Lifecycle Management enables automatic cost optimization by transitioning files that haven’t been accessed for a defined period (e.g., 7 days) from EFS Standard to EFS Infrequent Access (IA).
“Amazon EFS Lifecycle Management automatically moves files that haven’t been accessed for a set period to the EFS Infrequent Access storage class, reducing storage costs for infrequently accessed files.”
― Amazon EFS Documentation
Key Points:
EFS IA is ideal for files larger than 128 KB that are accessed less frequently. The transition is seamless: no code changes or extra tools are needed.
Meets the requirement for cost optimization while keeping files readily available.
Incorrect options:
A: File Gateway adds unnecessary complexity and does not use EFS.
B: Storing files locally breaks shared access and resiliency.
D: Glacier Deep Archive is cold storage, not "readily available."
Reference: EFS Lifecycle Management
EFS IA Storage Class
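A minimal sketch of the lifecycle policy described above, assuming a placeholder file system ID, could look like this:

```python
import boto3

efs = boto3.client("efs")

# Placeholder file system ID. Files not accessed for 7 days move to EFS IA;
# the optional second policy moves a file back to Standard on its next access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_7_DAYS"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```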
