Practice Free SAA-C03 Exam Online Questions
A company has stored millions of objects across multiple prefixes in an Amazon S3 bucket by using the Amazon S3 Glacier Deep Archive storage class. The company needs to delete all data older than 3 years except for a subset of data that must be retained. The company has identified the data that must be retained and wants to implement a serverless solution.
Which solution will meet these requirements?
- A . Use S3 Inventory to list all objects. Use the AWS CLI to create a script that runs on an Amazon EC2 instance that deletes objects from the inventory list.
- B . Use AWS Batch to delete objects older than 3 years except for the data that must be retained
- C . Provision an AWS Glue crawler to query objects older than 3 years. Save the manifest file of old objects. Create a script to delete objects in the manifest.
- D . Enable S3 Inventory. Create an AWS Lambda function to filter and delete objects. Invoke the Lambda function with S3 Batch Operations to delete objects by using the inventory reports.
D
Explanation:
To meet the requirement of deleting objects older than 3 years while retaining certain data, this solution leverages serverless technologies to minimize operational overhead.
S3 Inventory: S3 Inventory provides a flat file that lists all the objects in an S3 bucket and their metadata, which can be configured to include data such as the last modified date. This inventory can be generated daily or weekly.
AWS Lambda Function: A Lambda function can be created to process the S3 Inventory report, filtering out the objects that need to be retained and identifying those that should be deleted.
S3 Batch Operations: S3 Batch Operations can execute tasks such as object deletion at scale. By invoking the Lambda function through S3 Batch Operations, you can automate the process of deleting the identified objects, ensuring that the solution is serverless and requires minimal operational management.
Why Not Other Options?:
Option A (AWS CLI script on EC2): Running a script on an EC2 instance adds unnecessary operational overhead and is not serverless.
Option B (AWS Batch): AWS Batch is designed for running large-scale batch computing workloads, which is overkill for this scenario.
Option C (AWS Glue + script): AWS Glue is more suited for ETL tasks, and this approach would add unnecessary complexity compared to the serverless Lambda solution.
Reference: Amazon S3 Inventory – Information on how to set up and use S3 Inventory.
S3 Batch Operations – Documentation on how to perform bulk operations on S3 objects using S3 Batch Operations.
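To make the mechanism concrete, the following is a minimal sketch (not a definitive implementation) of a Lambda handler invoked by S3 Batch Operations against an inventory-based manifest. It assumes the schema version 1.0 invocation format and a hypothetical RETAIN_KEYS set of keys that must be kept; the 3-year age filter is assumed to be applied when the manifest is prepared from the inventory report, or it could be added in the handler.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical set of object keys that must be retained (for example, loaded
# from a retention list in S3 at cold start). Placeholder values only.
RETAIN_KEYS = {"prefix-a/keep-me.dat", "prefix-b/keep-me-too.dat"}


def handler(event, context):
    """Invoked by S3 Batch Operations once per batch of tasks from the manifest."""
    results = []
    for task in event["tasks"]:
        bucket = task["s3BucketArn"].split(":::")[-1]
        key = task["s3Key"]
        if key in RETAIN_KEYS:
            # Skip objects that must be kept.
            results.append({"taskId": task["taskId"],
                            "resultCode": "Succeeded",
                            "resultString": "Retained"})
        else:
            s3.delete_object(Bucket=bucket, Key=key)
            results.append({"taskId": task["taskId"],
                            "resultCode": "Succeeded",
                            "resultString": "Deleted"})

    return {
        "invocationSchemaVersion": "1.0",
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": results,
    }
```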
A company wants to migrate its on-premises Oracle database to Amazon Aurora. The company wants to use a secure and encrypted network to transfer the data.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Use AWS Application Migration Service to migrate the data.
- B . Use AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to migrate the data.
- C . Use AWS Direct Connect SiteLink to transfer data from the on-premises environment to AWS.
- D . Use AWS Site-to-Site VPN to establish a connection to transfer the data from the on-premises environment to AWS.
- E . Use AWS App2Container to migrate the data.
B, D
Explanation:
To securely migrate an on-premises Oracle database to Amazon Aurora, the following steps are recommended:
Use AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS): AWS SCT helps convert the source database schema to a format compatible with the target database (Aurora). AWS DMS facilitates the actual data migration, ensuring minimal downtime and data integrity.
Use AWS Site-to-Site VPN: Establishing a Site-to-Site VPN connection provides a secure and encrypted tunnel between the on-premises environment and AWS. This ensures that data transferred during the migration is protected against interception and unauthorized access.
Reference: Migrate an Oracle database to Aurora PostgreSQL using AWS DMS and AWS SCT; Step 2: Configure Your Source Database – AWS Documentation
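As an illustration only, a hedged boto3 sketch of the DMS pieces might look like the following. The endpoint identifiers, hostnames, ARNs, and credentials are placeholders, and the replication instance is assumed to already exist and to reach the on-premises Oracle server over the Site-to-Site VPN tunnel.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Source: on-premises Oracle, reachable over the Site-to-Site VPN (placeholder values).
source = dms.create_endpoint(
    EndpointIdentifier="onprem-oracle-source",
    EndpointType="source",
    EngineName="oracle",
    ServerName="10.0.10.25",   # private on-premises IP reached through the VPN
    Port=1521,
    DatabaseName="ORCL",
    Username="dms_user",
    Password="example-password",
)

# Target: Aurora PostgreSQL cluster created from the AWS SCT-converted schema.
target = dms.create_endpoint(
    EndpointIdentifier="aurora-postgres-target",
    EndpointType="target",
    EngineName="aurora-postgresql",
    ServerName="my-aurora.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=5432,
    DatabaseName="appdb",
    Username="dms_user",
    Password="example-password",
)

# Full load plus ongoing replication (CDC) task.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:EXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```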
A company is using Amazon DocumentDB global clusters to support an ecommerce application. The application serves customers across multiple AWS Regions. To ensure business continuity, the company needs a solution to minimize downtime during maintenance windows or other disruptions.
Which solution will meet these requirements?
- A . Regularly create manual snapshots of the DocumentDB instance in the primary Region.
- B . Perform a managed failover to a secondary Region when needed.
- C . Perform a failover to a replica DocumentDB instance within the primary Region.
- D . Configure increased replication lag to manage cross-Region replication.
B
Explanation:
Amazon DocumentDB global clusters support managed cross-Region failover, allowing you to promote a secondary Region to become the new primary with minimal downtime. This ensures business continuity during maintenance or regional disruptions.
Reference Extract:
"Amazon DocumentDB global clusters support managed cross-Region failover, allowing you to recover quickly from regional disruptions with minimal downtime."
Source: AWS Certified Solutions Architect – Official Study Guide, DocumentDB and Resiliency section.
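For illustration, a managed failover could be initiated with a call like the sketch below. This assumes the DocumentDB FailoverGlobalCluster action as exposed through boto3's docdb client; the identifiers are placeholders and the exact parameter names should be verified against the current API reference.

```python
import boto3

# Hypothetical identifiers; DocumentDB cluster ARNs use the rds namespace.
GLOBAL_CLUSTER_ID = "ecommerce-global"
SECONDARY_CLUSTER_ARN = "arn:aws:rds:eu-west-1:111122223333:cluster:ecommerce-secondary"

docdb = boto3.client("docdb", region_name="us-east-1")

# Promote the secondary Region's cluster to primary (managed failover).
response = docdb.failover_global_cluster(
    GlobalClusterIdentifier=GLOBAL_CLUSTER_ID,
    TargetDbClusterIdentifier=SECONDARY_CLUSTER_ARN,
)
print(response)
```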
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day.
The company needs to prevent users from accidentally deleting the EBS volume snapshots. The solution must not change the administrative rights of a storage administrator user.
Which solution will meet these requirements with the LEAST administrative effort?
- A . Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
- B . Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
- C . Add tags to the snapshots. Create tag-level retention rules in the Recycle Bin for EBS snapshots. Configure rule lock settings for the retention rules.
- D . Take EBS snapshots by using the EBS direct APIs. Copy the snapshots to an Amazon S3 bucket. Configure S3 Versioning and Object Lock on the bucket.
C
Explanation:
Recycle Bin for Amazon EBS snapshots lets you define retention rules based on resource tags. When a matching snapshot is deleted, it is retained in the Recycle Bin for the specified period and can be recovered, protecting against accidental deletion. Tag-level rules allow selective protection, and rule lock settings prevent the retention rules themselves from being modified or deleted, all without changing the storage administrator's IAM permissions.
Reference: AWS Documentation – Amazon EBS Snapshots and Recycle Bin
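A hedged sketch of such a rule, using the Recycle Bin (rbin) API through boto3, is shown below. The tag key and value, retention period, and unlock delay are illustrative assumptions.

```python
import boto3

rbin = boto3.client("rbin")

# Tag-level retention rule: deleted EBS snapshots that carry the (hypothetical)
# tag protect=true are kept in the Recycle Bin for 30 days and can be recovered.
rule = rbin.create_rule(
    ResourceType="EBS_SNAPSHOT",
    RetentionPeriod={"RetentionPeriodValue": 30, "RetentionPeriodUnit": "DAYS"},
    ResourceTags=[{"ResourceTagKey": "protect", "ResourceTagValue": "true"}],
    Description="Retain accidentally deleted daily snapshots",
    # Lock the rule so it cannot be modified or deleted without a 7-day unlock delay.
    LockConfiguration={
        "UnlockDelay": {"UnlockDelayValue": 7, "UnlockDelayUnit": "DAYS"}
    },
)
print(rule.get("Identifier"), rule.get("LockState"))
```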
A company is migrating a daily Microsoft Windows batch job from the company’s on-premises environment to AWS. The current batch job runs for up to 1 hour. The company wants to modernize the batch job process for the cloud environment.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create a fleet of Amazon EC2 instances in an Auto Scaling group to handle the Windows batch job processing.
- B . Implement an AWS Lambda function to process the Windows batch job. Use an Amazon EventBridge rule to invoke the Lambda function.
- C . Use AWS Fargate to deploy the Windows batch job as a container. Use AWS Batch to manage the batch job processing.
- D . Use Amazon Elastic Kubernetes Service (Amazon EKS) on Amazon EC2 instances to orchestrate Windows containers for the batch job processing.
C
Explanation:
AWS Batch supports Windows-based jobs and automates provisioning and scaling of compute environments. Paired with AWS Fargate, it removes the need to manage infrastructure. This solution requires the least operational overhead and is cloud-native, providing flexibility and scalability.
Reference: AWS Documentation – AWS Batch with Fargate for Windows Workloads
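The sketch below outlines how such a job definition might be registered with boto3. The image URI and IAM role ARNs are placeholders, and the Windows runtimePlatform settings in particular are assumptions to validate against the current AWS Batch documentation for Fargate Windows support.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Placeholder image and role ARNs for illustration only.
batch.register_job_definition(
    jobDefinitionName="nightly-windows-batch",
    type="container",
    platformCapabilities=["FARGATE"],
    containerProperties={
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/windows-batch:latest",
        "command": ["powershell", "-File", "C:\\app\\run-batch.ps1"],
        "executionRoleArn": "arn:aws:iam::111122223333:role/batch-execution-role",
        "jobRoleArn": "arn:aws:iam::111122223333:role/batch-job-role",
        "resourceRequirements": [
            {"type": "VCPU", "value": "2"},
            {"type": "MEMORY", "value": "4096"},
        ],
        "networkConfiguration": {"assignPublicIp": "ENABLED"},
        # Assumed Windows-on-Fargate settings; verify supported values in the Batch docs.
        "runtimePlatform": {
            "operatingSystemFamily": "WINDOWS_SERVER_2019_CORE",
            "cpuArchitecture": "X86_64",
        },
    },
)
```

A daily Amazon EventBridge schedule can then submit this job definition to a job queue backed by a Fargate compute environment, keeping the nightly run fully managed.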
A company needs to create an AWS Lambda function that will run in a VPC in the company’s primary AWS account. The Lambda function needs to access files that the company stores in an Amazon Elastic File System (Amazon EFS) file system. The EFS file system is located in a secondary AWS account. As the company adds files to the file system, the solution must scale to meet the demand.
Which solution will meet these requirements MOST cost-effectively?
- A . Create a new EFS file system in the primary account. Use AWS DataSync to copy the contents of the original EFS file system to the new EFS file system.
- B . Create a VPC peering connection between the VPCs that are in the primary account and the secondary account.
- C . Create a second Lambda function in the secondary account that has a mount that is configured for the file system. Use the primary account’s Lambda function to invoke the secondary account’s Lambda function.
- D . Move the contents of the file system to a Lambda layer. Configure the Lambda layer’s permissions to allow the company’s secondary account to use the Lambda layer.
B
Explanation:
Amazon EFS is a regional, elastic file system that “scales to petabytes” and can be accessed from “thousands of compute instances” concurrently. You can mount EFS across VPCs and across AWS accounts by providing network connectivity (e.g., VPC peering) to the EFS mount targets and allowing NFS (TCP 2049) in the mount target security group, optionally using EFS access points and a file system policy for cross-account access control. VPC peering has no hourly charge and minimal operational overhead, and EFS automatically scales as files are added, meeting the scalability and cost goals.
Option A duplicates data, incurs DataSync and extra storage costs, and adds sync lag.
Option C adds an extra Lambda hop and complexity without exposing a shared filesystem to the primary function.
Option D is impractical: Lambda layers are immutable artifacts with tight size limits (up to 50 MB compressed/250 MB uncompressed) and are not suited for dynamic, growing file sets.
Reference: Amazon EFS User Guide – “Accessing EFS across VPCs and accounts,” “Mount targets and security groups,” “EFS access points,” and “EFS automatically scales to petabytes.”
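For illustration, the cross-account mount could be wired up when creating the function, as in this hedged boto3 sketch; all account IDs, subnet and security group IDs, and the access point ARN are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# All identifiers below are placeholders for illustration.
response = lambda_client.create_function(
    FunctionName="process-shared-files",
    Runtime="python3.12",
    Role="arn:aws:iam::111122223333:role/lambda-efs-role",
    Handler="app.handler",
    Code={"ZipFile": open("function.zip", "rb").read()},
    Timeout=60,
    # Subnets and security group in the primary account's VPC, which is peered
    # with the secondary account's VPC that hosts the EFS mount targets.
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
    # EFS access point in the secondary account; NFS (TCP 2049) must be allowed
    # and the file system policy must grant cross-account access.
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:us-east-1:222233334444:access-point/fsap-0123456789abcdef0",
            "LocalMountPath": "/mnt/shared",
        }
    ],
)
print(response["FunctionArn"])
```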
A company launches a new web application that uses an Amazon Aurora PostgreSQL database. The company wants to add new features to the application that rely on AI. The company requires vector storage capability to use AI tools.
Which solution will meet this requirement MOST cost-effectively?
- A . Use Amazon OpenSearch Service to create an OpenSearch service. Configure the application to write vector embeddings to a vector index.
- B . Create an Amazon DocumentDB cluster. Configure the application to write vector embeddings to a vector index.
- C . Create an Amazon Neptune ML cluster. Configure the application to write vector embeddings to a vector graph.
- D . Install the pgvector extension on the Aurora PostgreSQL database. Configure the application to write vector embeddings to a vector table.
D
Explanation:
Aurora PostgreSQL supports the pgvector extension, which allows storage and querying of vector embeddings directly inside the database. This eliminates the need for external vector databases and provides cost-effective and performant integration for AI workloads.
Reference: AWS Documentation – Amazon Aurora PostgreSQL and pgvector Support
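A brief sketch of what this looks like from application code (Python with psycopg2; the connection details and the 3-dimensional embeddings are illustrative placeholders, since real models typically use hundreds or thousands of dimensions):

```python
import psycopg2

# Placeholder connection details for the Aurora PostgreSQL writer endpoint.
conn = psycopg2.connect(
    host="my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    dbname="appdb",
    user="app_user",
    password="example-password",
)

with conn, conn.cursor() as cur:
    # Enable pgvector and create a table with a vector column.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute(
        "CREATE TABLE IF NOT EXISTS items ("
        "  id bigserial PRIMARY KEY,"
        "  description text,"
        "  embedding vector(3)"
        ");"
    )
    cur.execute(
        "INSERT INTO items (description, embedding) VALUES (%s, %s::vector);",
        ("sample item", "[0.1, 0.2, 0.3]"),
    )
    # Nearest-neighbor search by Euclidean distance (the <-> operator).
    cur.execute(
        "SELECT id, description FROM items ORDER BY embedding <-> %s::vector LIMIT 5;",
        ("[0.1, 0.2, 0.25]",),
    )
    print(cur.fetchall())

conn.close()
```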
A company needs an automated solution to detect cryptocurrency mining activity on Amazon EC2 instances. The solution must automatically isolate any identified EC2 instances for forensic analysis.
Which solution will meet these requirements?
- A . Create an Amazon EventBridge rule that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.
- B . Create an AWS Security Hub custom action that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the custom action to invoke an AWS Lambda function to isolate the identified EC2 instances.
- C . Create an Amazon Inspector rule that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.
- D . Create an AWS Config custom rule that runs when AWS Config detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.
A
Explanation:
Amazon GuardDuty detects cryptocurrency mining and sends findings to Amazon EventBridge. You can use EventBridge to trigger an automated Lambda function to isolate EC2 instances (such as by removing security group access or stopping/isolating the instance).
AWS Documentation Extract:
"Amazon GuardDuty findings can be sent to Amazon EventBridge, which enables you to trigger an
automated response using AWS Lambda."
(Source: AWS GuardDuty documentation)
B, C, D: Security Hub, Inspector, and Config are not directly used for this detection-to-isolation workflow.
Reference: AWS Certified Solutions Architect – Official Study Guide, Threat Detection and Automated Response.
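For illustration, an isolation Lambda function invoked by the EventBridge rule might look like the sketch below. It assumes the rule matches events with source aws.guardduty and finding types that begin with CryptoCurrency:, and that a dedicated quarantine security group with no rules (placeholder ID) already exists.

```python
import os
import boto3

ec2 = boto3.client("ec2")

# Hypothetical security group with no inbound/outbound rules, used for quarantine.
QUARANTINE_SG_ID = os.environ.get("QUARANTINE_SG_ID", "sg-0quarantine0example")


def handler(event, context):
    """Triggered by an EventBridge rule matching GuardDuty cryptocurrency findings."""
    finding = event["detail"]
    instance_id = finding["resource"]["instanceDetails"]["instanceId"]

    # Replace all security groups so the instance can no longer communicate,
    # while leaving it running for forensic analysis.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG_ID])

    # Tag the instance so responders can find it quickly.
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "Quarantine", "Value": finding["type"]}],
    )
    return {"isolated": instance_id}
```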
A company runs its legacy web application on AWS. The web application server runs on an Amazon EC2 instance in the public subnet of a VPC. The web application server collects images from customers and stores the image files in a locally attached Amazon Elastic Block Store (Amazon EBS) volume. The image files are uploaded every night to an Amazon S3 bucket for backup.
A solutions architect discovers that the image files are being uploaded to Amazon S3 through the public endpoint. The solutions architect needs to ensure that traffic to Amazon S3 does not use the public endpoint.
Which solution will meet this requirement?
- A . Create a gateway VPC endpoint for the S3 bucket that has the necessary permissions for the VPC. Configure the subnet route table to use the gateway VPC endpoint.
- B . Move the S3 bucket inside the VPC. Configure the subnet route table to access the S3 bucket through private IP addresses.
- C . Create an Amazon S3 access point for the Amazon EC2 instance inside the VPC. Configure the web application to upload by using the Amazon S3 access point.
- D . Configure an AWS Direct Connect connection between the VPC that has the Amazon EC2 instance and Amazon S3 to provide a dedicated network path.
A
Explanation:
To route S3 traffic privately from within a VPC, AWS provides Gateway VPC Endpoints for Amazon S3. These allow private connectivity to S3 without traversing the public internet or requiring an Internet Gateway.
From AWS Documentation:
“A gateway endpoint enables you to privately connect your VPC to supported AWS services such as Amazon S3 and DynamoDB without requiring an Internet Gateway, NAT device, or public IP.” (Source: Amazon VPC User Guide – Gateway Endpoints)
Why A is correct:
Gateway VPC endpoints route S3 traffic internally within the AWS network.
Improves security and data privacy while reducing exposure to the public internet.
Requires only a simple route table modification and IAM policy configuration.
Why other options are incorrect:
B: S3 is a regional service; you cannot “move” it inside a VPC.
C: An S3 access point does not change the network path by itself; traffic still reaches Amazon S3 over its public endpoints unless a VPC endpoint is in place.
D: AWS Direct Connect is for hybrid environments, not intra-AWS private connectivity.
Reference: Amazon VPC User Guide – “Gateway Endpoints for Amazon S3”; AWS Well-Architected Framework – Security Pillar; AWS Networking Best Practices
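As a sketch, the gateway endpoint could be created as follows; the VPC ID, route table ID, bucket name, and endpoint policy are placeholders.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Optional endpoint policy restricting access to the backup bucket only (placeholder name).
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-backup-bucket",
            "arn:aws:s3:::example-backup-bucket/*",
        ],
    }],
}

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],   # route table of the web server's subnet
    PolicyDocument=json.dumps(endpoint_policy),
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```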
A company is building a stock trading application in the AWS Cloud. The company requires a highly available solution that provides low-latency access to block storage across multiple Availability Zones.
Which solution will meet these requirements?
- A . Use an Amazon S3 bucket and an S3 File Gateway as shared storage for the application.
- B . Create an Amazon EC2 instance in each Availability Zone. Attach a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume to each EC2 instance. Create a Bash script to sync data between volumes.
- C . Use an Amazon FSx for NetApp ONTAP Multi-AZ file system to access data by using the iSCSI protocol.
- D . Create an Amazon EC2 instance in each Availability Zone. Attach a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume to each EC2 instance. Create a Python script to sync data between volumes.
C
Explanation:
Amazon FSx for NetApp ONTAP supports Multi-AZ deployments, providing automatic failover between Availability Zones for high availability. It exposes ONTAP LUNs over the iSCSI protocol, delivering shared block storage with low latency and consistent performance to EC2 clients across AZs. This meets the requirement for highly available, low-latency block access across multiple Availability Zones.
S3 and S3 File Gateway (A) provide object and file storage, not block storage. EBS (B, D) provides block storage to a single instance in a single AZ; EBS volumes cannot be shared across instances or AZs, and host-side sync scripts add latency and complexity without providing true HA. FSx for ONTAP natively provides synchronous HA pair replication, fast failover, and iSCSI multipathing for resilient, performant access suited to latency-sensitive trading workloads.
Reference: Amazon FSx for NetApp ONTAP – Multi-AZ file systems; iSCSI LUNs and host connectivity; High availability and failover behavior; Performance and client access guidance.
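A minimal provisioning sketch with boto3 is shown below; the subnet, security group, capacity, and password values are placeholders. Creating the SVM, volumes, and iSCSI LUNs and mapping them to the EC2 initiators is then done through the file system's ONTAP management interface.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Placeholder subnet and security group IDs spanning two Availability Zones.
response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,          # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",      # HA pair across two AZs
        "PreferredSubnetId": "subnet-0aaa1111",
        "ThroughputCapacity": 256,           # MB/s
        "FsxAdminPassword": "ExamplePassw0rd!",
    },
)
print(response["FileSystem"]["FileSystemId"])
```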
