Practice Free SAA-C03 Exam Online Questions
A company runs an application on Amazon EC2 instances. The application is deployed in private subnets in three Availability Zones of the us-east-1 Region. The instances must be able to connect to the internet to download files. The company wants a design that is highly available across the Region.
Which solution should be implemented to ensure that there are no disruptions to internet connectivity?
- A . Deploy a NAT instance in a private subnet of each Availability Zone.
- B . Deploy a NAT gateway in a public subnet of each Availability Zone.
- C . Deploy a transit gateway in a private subnet of each Availability Zone.
- D . Deploy an internet gateway in a public subnet of each Availability Zone.
B
Explanation:
To allow private subnets to access the internet, deploy NAT gateways in a public subnet in each AZ for high availability. NAT instances are less scalable and less fault-tolerant.
“To create a highly available architecture, create a NAT gateway in each Availability Zone and configure your routing to use it.”
― NAT Gateway Overview
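A minimal boto3 sketch of this per-AZ pattern (the subnet, route table, and Elastic IP allocation IDs are placeholder assumptions): one NAT gateway in each Availability Zone's public subnet, with that AZ's private route table sending its default route to the local NAT gateway.

```python
# Sketch: one NAT gateway per AZ; each private route table uses its local NAT gateway.
# All resource IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# AZ -> (public subnet for the NAT gateway, private route table, Elastic IP allocation)
az_layout = {
    "us-east-1a": ("subnet-pub-a", "rtb-priv-a", "eipalloc-a"),
    "us-east-1b": ("subnet-pub-b", "rtb-priv-b", "eipalloc-b"),
    "us-east-1c": ("subnet-pub-c", "rtb-priv-c", "eipalloc-c"),
}

for az, (public_subnet, private_rtb, eip_alloc) in az_layout.items():
    # The NAT gateway lives in the public subnet of this AZ
    natgw = ec2.create_nat_gateway(SubnetId=public_subnet, AllocationId=eip_alloc)
    natgw_id = natgw["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

    # The default route for this AZ's private subnets goes through the local NAT gateway,
    # so losing one AZ does not disrupt internet access in the others
    ec2.create_route(
        RouteTableId=private_rtb,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=natgw_id,
    )
```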
A company hosts an application on AWS that uses an Amazon S3 bucket and an Amazon Aurora database. The company wants to implement a multi-Region disaster recovery (DR) strategy that minimizes potential data loss.
Which solution will meet these requirements?
- A . Create an Aurora read replica in a second Availability Zone within the same AWS Region. Enable S3 Versioning for the bucket.
- B . Create an Aurora read replica in a second AWS Region. Configure AWS Backup to create continuous backups of the S3 bucket to a second bucket in a second Availability Zone.
- C . Enable Aurora native database backups across multiple AWS Regions. Use S3 cross-account backups within the company’s local Region.
- D . Migrate the database to an Aurora global database. Create a second S3 bucket in a second Region. Configure Cross-Region Replication.
D
Explanation:
Aurora Global Database: Provides cross-Region disaster recovery with minimal data loss (<1 second replication latency).
S3 Cross-Region Replication (CRR): Automatically replicates data between buckets in different Regions.
“Aurora Global Database replicates your data with typically under one second of latency to secondary Regions.”
“Amazon S3 Cross-Region Replication automatically replicates objects across buckets in different AWS Regions.”
― Aurora Global Database
― S3 Cross-Region Replication
This meets the multi-Region DR requirement with minimal data loss.
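A minimal boto3 sketch of both pieces (cluster identifiers, bucket names, ARNs, and the DR Region are placeholder assumptions): promote the existing cluster into an Aurora global database and configure S3 Cross-Region Replication to a bucket in the second Region.

```python
# Sketch of the two DR building blocks; all identifiers and ARNs are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
s3_primary = boto3.client("s3", region_name="us-east-1")
s3_dr = boto3.client("s3", region_name="us-west-2")

# Make the existing Regional cluster the primary of a new global database.
# A secondary cluster would then be added in the DR Region with create_db_cluster
# and the same GlobalClusterIdentifier.
rds.create_global_cluster(
    GlobalClusterIdentifier="app-global-db",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:app-db",
)

# Versioning must be enabled on both buckets before replication is configured
s3_primary.put_bucket_versioning(
    Bucket="app-bucket-primary", VersioningConfiguration={"Status": "Enabled"}
)
s3_dr.put_bucket_versioning(
    Bucket="app-bucket-dr", VersioningConfiguration={"Status": "Enabled"}
)

# Replicate new objects from the primary bucket to the DR bucket in the second Region
s3_primary.put_bucket_replication(
    Bucket="app-bucket-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::app-bucket-dr"},
            }
        ],
    },
)
```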
A financial services company must retain log data for 1 year. The company stores log files in an Amazon S3 bucket and wants to prevent any user from deleting or overwriting the log files during this period. The data must remain available for read-only requests.
Which solution will meet these requirements?
- A . Enable S3 Versioning on the bucket. Use Object Lock in compliance mode with a 1-year retention period.
- B . Enable S3 Transfer Acceleration on the bucket. Create an S3 Lifecycle Configuration rule to move objects to Amazon S3 Glacier Flexible Retrieval after 1 year.
- C . Enable S3 Versioning on the bucket. Create an S3 Lifecycle Configuration rule to move objects to Amazon S3 Glacier Flexible Retrieval after 1 year.
- D . Create an AWS Lambda function to programmatically check the timestamp of S3 data and to move the data to Amazon S3 Glacier Deep Archive if the data is older than 1 year.
A
Explanation:
To ensure that log files are immutable and cannot be deleted or overwritten for a specified retention period, Amazon S3 offers Object Lock in Compliance Mode. When enabled:
Compliance Mode: Ensures that a protected object version can’t be overwritten or deleted by any user, including the root user in your AWS account, for the duration of the retention period.
S3 Versioning: Must be enabled on the bucket to use Object Lock. This allows multiple versions of an object to be stored, ensuring that previous versions are preserved and protected.
By enabling S3 Versioning and applying Object Lock in Compliance Mode with a 1-year retention period, the company can meet regulatory requirements to retain log data securely and prevent any modifications or deletions during that time.
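A minimal boto3 sketch of this configuration (the bucket name is a placeholder): create the bucket with Object Lock enabled, which also turns on Versioning, and apply a default 1-year compliance-mode retention period.

```python
# Sketch: bucket with Object Lock and a default 1-year COMPLIANCE retention.
# The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled at bucket creation; it implies S3 Versioning
s3.create_bucket(Bucket="example-log-archive", ObjectLockEnabledForBucket=True)

# Every new object version gets a 1-year compliance-mode retention period,
# so no user (including root) can delete or overwrite it during that time
s3.put_object_lock_configuration(
    Bucket="example-log-archive",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 1}},
    },
)
```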
A company uses an AWS Transfer for SFTP public server endpoint and Amazon S3 storage to host large datasets for its customers. The company provides customers SSH private keys to authenticate and download their datasets. The Transfer for SFTP server is configured with structured logging that is saved to an S3 bucket. The company wants to charge customers based on their monthly data download usage.
Which solution will meet these requirements?
- A . Configure VPC Flow Logs to write to a new S3 bucket. Run monthly queries on the flow logs to identify customer usage and calculate cost. Add the charges to the customers’ monthly bills.
- B . Each month, use AWS Cost Explorer to examine the costs for Transfer for SFTP and obtain a breakdown by customer. Add the charges to the customers’ monthly bills.
- C . Enable requester pays on the S3 bucket that hosts the datasets. Allocate the charges to each customer based on the customer’s requests.
- D . Run Amazon Athena queries on the logging S3 bucket monthly to identify customer usage and calculate costs. Add the charges to the customers’ monthly bills.
D
Explanation:
To accurately charge customers based on their monthly data download usage, the following solution is recommended:
Structured Logging Configuration:
Action: Ensure that the AWS Transfer for SFTP server is configured to log user activity, including details about file downloads, to Amazon S3 in a structured format.
Implementation: Use AWS Transfer Family’s structured logging feature to capture detailed information about user sessions, including actions performed and data transferred.
Justification: Structured logs provide comprehensive data necessary for analyzing customer-specific download activities.
Data Analysis with Amazon Athena:
Action: Use Amazon Athena to run SQL queries on the structured log data stored in the S3 bucket to calculate the amount of data each customer has downloaded.
Implementation:
a. Define a Schema: Create a table in Athena that maps to the structure of your log files. This involves specifying the format of the logs and the location in S3.
b. Query Data: Write SQL queries to sum the total bytes downloaded by each customer over the billing period. This can be achieved by filtering logs based on user identifiers and summing the data transfer amounts.
Justification: Athena allows for efficient, serverless SQL querying of the structured logs directly in S3, so each customer’s monthly download volume can be calculated without additional infrastructure.
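A minimal boto3 sketch of the monthly Athena query (the sftp_billing database, transfer_logs table, column names, and results bucket are assumptions; real column names depend on how the structured logs are mapped in the Athena table definition):

```python
# Sketch: sum the bytes each customer downloaded during a billing month.
# Database, table, columns, and the results bucket are assumed names.
import boto3

athena = boto3.client("athena")

query = """
SELECT username,
       SUM(bytes_transferred) AS bytes_downloaded
FROM transfer_logs
WHERE operation = 'READ'
  AND session_start >= TIMESTAMP '2025-01-01 00:00:00'
  AND session_start <  TIMESTAMP '2025-02-01 00:00:00'
GROUP BY username
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "sftp_billing"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/usage/"},
)
print(response["QueryExecutionId"])
```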
A company is using AWS Identity and Access Management (IAM) Access Analyzer to refine IAM permissions for employee users. The company uses an organization in AWS Organizations and AWS Control Tower to manage its AWS accounts. The company has designated a specific member account as an audit account.
A solutions architect needs to set up IAM Access Analyzer to aggregate findings from all member accounts in the audit account.
What is the first step the solutions architect should take?
- A . Use AWS CloudTrail to configure one trail for all accounts. Create an Amazon S3 bucket in the audit account. Configure the trail to send logs related to access activity to the new S3 bucket in the audit account.
- B . Configure a delegated administrator account for IAM Access Analyzer in the AWS Control Tower management account. In the delegated administrator account for IAM Access Analyzer, specify the AWS account ID of the audit account.
- C . Create an Amazon S3 bucket in the audit account. Generate a new permissions policy, and add a service role to the policy to give IAM Access Analyzer access to AWS CloudTrail and the S3 bucket in the audit account.
- D . Add a new trust policy that includes permissions to allow IAM Access Analyzer to perform sts:AssumeRole actions. Modify the permissions policy to allow IAM Access Analyzer to generate policies.
B
Explanation:
The first step is to configure a delegated administrator account for IAM Access Analyzer at the organization level. Only after delegating the administrator account can you aggregate Access Analyzer findings from all member accounts into a designated audit account. This must be set up in the AWS Organizations management account.
AWS Documentation Extract:
“You must designate a delegated administrator for IAM Access Analyzer at the organization level. The delegated administrator account aggregates findings from all member accounts.” (Source: IAM Access Analyzer documentation)
A, C, D: These steps do not establish the organization-wide aggregation required for Access Analyzer.
Reference: AWS Certified Solutions Architect ― Official Study Guide, Access Analyzer Delegation.
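A minimal boto3 sketch of this first step (account IDs and the analyzer name are placeholders): from the management account, register the audit account as the delegated administrator for IAM Access Analyzer, then create an organization-scope analyzer in the audit account.

```python
# Sketch of the delegation and analyzer creation; account IDs and names are placeholders.
import boto3

# Run with management-account credentials
orgs = boto3.client("organizations")
orgs.register_delegated_administrator(
    AccountId="111122223333",  # the designated audit account
    ServicePrincipal="access-analyzer.amazonaws.com",
)

# Run with audit-account credentials: the organization analyzer aggregates
# findings from all member accounts
analyzer = boto3.client("accessanalyzer")
analyzer.create_analyzer(
    analyzerName="org-wide-analyzer",
    type="ORGANIZATION",
)
```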
A company has 5 TB of datasets. The datasets consist of 1 million user profiles and 10 million connections. The user profiles have connections as many-to-many relationships. The company needs a performance-efficient way to find mutual connections up to five levels.
Which solution will meet these requirements?
- A . Use an Amazon S3 bucket to store the datasets. Use Amazon Athena to perform SQL JOIN queries to find connections.
- B . Use Amazon Neptune to store the datasets with edges and vertices. Query the data to find connections.
- C . Use an Amazon S3 bucket to store the datasets. Use Amazon QuickSight to visualize connections.
- D . Use Amazon RDS to store the datasets with multiple tables. Perform SQL JOIN queries to find connections.
B
Explanation:
Amazon Neptune is a fully managed graph database service optimized for storing and navigating complex many-to-many relationships like social graphs or network topologies.
It supports Gremlin and openCypher queries that allow efficient traversal of connections even multiple levels deep. SQL JOINs (Athena or RDS) are inefficient for deep relationship queries due to exponential complexity.
QuickSight is used for data visualization, not graph traversal.
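A minimal gremlinpython sketch of such a traversal (the Neptune endpoint, the 'connection' edge label, the 'name' property, and the vertex ID are assumptions): walk up to five hops out from one user profile and collect the reachable profiles.

```python
# Sketch: profiles reachable from user-1 within five 'connection' hops.
# Endpoint, labels, properties, and IDs are placeholders.
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

conn = DriverRemoteConnection(
    "wss://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)
g = traversal().withRemote(conn)

connections = (
    g.V("user-1")                                # start at one user profile vertex
    .repeat(__.both("connection").simplePath())  # follow connection edges, no cycles
    .emit()                                      # keep every profile seen along the way
    .times(5)                                    # up to five levels deep
    .dedup()
    .values("name")
    .toList()
)
print(connections)
conn.close()
```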
A company wants to use an API to translate text from one language to another. The API must receive an HTTP header value and pass the value to an embedded library. The API takes 6 minutes to translate a document. The API requires a custom authorization mechanism.
Which solution will meet these requirements?
- A . Configure an Amazon API Gateway REST API with AWS_PROXY integration to synchronously call an AWS Lambda function to perform translations.
- B . Configure an AWS Lambda function with a Lambda function URL to synchronously call a second function to perform translations.
- C . Configure an Amazon API Gateway REST API with AWS_PROXY integration to asynchronously call an AWS Lambda function to perform translations.
- D . Configure an Amazon API Gateway REST API with HTTP PROXY integration to synchronously call a web endpoint that is hosted on an EC2 instance.
A
Explanation:
The AWS_PROXY integration with Amazon API Gateway allows the API to invoke a Lambda function synchronously, making it a suitable solution for the custom authorization mechanism and text translation use case.
Synchronous Invocation: The API Gateway REST API with AWS_PROXY integration enables synchronous processing of HTTP requests and responses, which is required for document translation.
Custom Authorization: API Gateway supports custom (Lambda) authorizers for fine-grained access control.
Lambda Function Execution: Although Lambda’s execution time limit is 15 minutes, this is sufficient for the 6-minute document translation requirement.
Why Other Options Are Not Ideal:
Option B:
Introducing a Lambda function URL to invoke another Lambda function unnecessarily complicates the architecture. Not efficient.
Option C:
Asynchronous invocation cannot guarantee real-time response delivery for document translation tasks. Not suitable.
Option D:
Hosting the API on an EC2 instance increases operational overhead. HTTP PROXY integration does not add significant benefits here. Not cost-effective or efficient.
Reference:
• API Gateway Lambda Proxy Integration: AWS Documentation – Proxy Integration
• Custom Authorization in API Gateway: AWS Documentation – Custom Authorization
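A minimal sketch of a Lambda handler behind an AWS_PROXY integration (the X-Target-Language header name and the translate_document helper are hypothetical): with proxy integration the HTTP headers arrive in the event, so the function can read the header value and pass it to the embedded library.

```python
# Sketch of a proxy-integration handler; the header name and translation helper are hypothetical.
import json


def translate_document(text: str, target_language: str) -> str:
    """Placeholder for the embedded translation library."""
    return text  # the real library call would go here


def handler(event, context):
    # AWS_PROXY integration passes the raw HTTP headers in event["headers"]
    headers = event.get("headers") or {}
    target_language = headers.get("X-Target-Language", "en")
    body = json.loads(event.get("body") or "{}")

    translated = translate_document(body.get("text", ""), target_language)

    # Proxy integration expects statusCode and body in the return value
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"translation": translated}),
    }
```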
A company stores sensitive financial reports in an Amazon S3 bucket. To comply with auditing requirements, the company must encrypt the data at rest. Users must not have the ability to change the encryption method or remove encryption when the users upload data. The company must be able to audit all encryption and storage actions.
Which solution will meet these requirements and provide the MOST granular control?
- A . Enable default server-side encryption with Amazon S3 managed keys (SSE-S3) for the S3 bucket. Apply a bucket policy that denies any upload requests that do not include the x-amz-server-side-encryption header.
- B . Configure server-side encryption with AWS KMS (SSE-KMS) keys. Use an S3 bucket policy to reject any data that is not encrypted by the designated key.
- C . Use client-side encryption before uploading the reports. Store the encryption keys in AWS Secrets Manager.
- D . Enable default server-side encryption with Amazon S3 managed keys (SSE-S3). Use AWS Identity and Access Management (IAM) to prevent users from changing S3 bucket settings.
B
Explanation:
AWS KMS with SSE-KMS provides granular key management and auditability. All use of KMS keys is logged in AWS CloudTrail, which allows compliance teams to monitor encryption and decryption operations. A bucket policy can be configured to enforce uploads only with the designated KMS key, ensuring that users cannot bypass encryption or change methods.
Option A (SSE-S3 with bucket policy) enforces encryption but does not provide the same level of control or auditable key usage.
Option C (client-side encryption) increases complexity and key management burden.
Option D prevents bucket setting changes but does not prevent unencrypted uploads.
Therefore, B ensures the most granular control, auditability, and compliance with financial data requirements.
Reference:
• Amazon S3 User Guide ― Using SSE-KMS for encryption
• AWS KMS Developer Guide ― Key management and auditing with CloudTrail
• AWS Well-Architected Framework ― Security Pillar
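A minimal boto3 sketch of option B (the bucket name and KMS key ARN are placeholders): make SSE-KMS with the designated key the default encryption and deny any PutObject request that names a different key.

```python
# Sketch: enforce SSE-KMS with one designated key; bucket name and key ARN are placeholders.
import json
import boto3

s3 = boto3.client("s3")
bucket = "financial-reports-bucket"
key_arn = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# Default encryption: every object is encrypted with the designated KMS key
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_arn,
                }
            }
        ]
    },
)

# Bucket policy: reject uploads unless the request specifies the designated key
# (clients must send the x-amz-server-side-encryption-aws-kms-key-id header with this key)
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyWrongKmsKey",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": key_arn
                }
            },
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```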
A company has an industrial application that controls a process in real time. The company plans to rearchitect the application to distribute jobs across several Amazon EC2 instances in a VPC. The solution needs to maximize the network throughput and minimize the network latency between the instances.
Which solution will meet these requirements?
- A . Place the instances in a host-level partition placement group. Choose instance types that support enhanced networking.
- B . Place the instances in several dedicated hosts in the same partition of a partition placement group. Choose dedicated hosts that support enhanced networking.
- C . Place the instances in several dedicated hosts in the same rack of a rack-level placement group. Choose dedicated hosts that support enhanced networking.
- D . Place the instances in a cluster placement group. Choose instance types that support enhanced networking.
D
Explanation:
For applications requiring high network throughput and low latency between EC2 instances, AWS recommends using a Cluster Placement Group:
Cluster Placement Group: Packs instances close together inside an Availability Zone, enabling workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication.
Enhanced Networking: Selecting instance types that support enhanced networking provides higher bandwidth, higher packet-per-second (PPS) performance, and consistently lower inter-instance latencies.
By combining a Cluster Placement Group with enhanced networking-enabled instance types, the company can meet the stringent performance requirements of its real-time industrial application.
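A minimal boto3 sketch of this combination (AMI, subnet, instance type, and counts are placeholders): create a cluster placement group and launch enhanced-networking-capable instances into it.

```python
# Sketch: cluster placement group plus ENA-capable instances; IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster strategy packs instances close together in one Availability Zone
ec2.create_placement_group(GroupName="realtime-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.9xlarge",  # instance family that supports enhanced networking (ENA)
    MinCount=4,
    MaxCount=4,
    SubnetId="subnet-0123456789abcdef0",
    Placement={"GroupName": "realtime-cluster"},
)
```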
A media company hosts a web application on AWS for uploading videos to an Amazon S3 bucket. Only authenticated users should be able to upload videos, and only within a specified time frame after authentication.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure the application to generate IAM temporary security credentials for authenticated users.
- B . Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.
- C . Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.
- D . Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
B
Explanation:
Option B: Pre-signed URLs provide temporary, authenticated access to S3, limiting uploads to the time frame specified. This solution is lightweight, efficient, and easy to implement.
Option A requires the management of IAM temporary credentials, adding complexity.
Option C involves unnecessary development effort.
Option D introduces more complexity with STS and roles than pre-signed URLs.
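A minimal boto3 sketch of option B's core step (the bucket name, key naming, and the 15-minute window are assumptions): once the user authenticates, the Lambda function returns a pre-signed PUT URL that stops working after the allowed upload window.

```python
# Sketch: pre-signed upload URL issued after authentication; names and window are placeholders.
import boto3

s3 = boto3.client("s3")


def get_upload_url(user_id: str, filename: str, window_seconds: int = 900) -> str:
    # The URL allows a PUT to this one object key and expires after window_seconds
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "video-uploads-bucket", "Key": f"{user_id}/{filename}"},
        ExpiresIn=window_seconds,
    )


# Example: returned to the client by the Lambda function once authentication succeeds
print(get_upload_url("user-42", "clip.mp4"))
```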
