Practice Free SAA-C03 Exam Online Questions
A company is building a data analysis platform on AWS by using AWS Lake Formation. The platform will ingest data from different sources such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent access to portions of the data that contain sensitive information.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an IAM role that includes permissions to access Lake Formation tables.
- B . Create data filters to implement row-level security and cell-level security.
- C . Create an AWS Lambda function that removes sensitive information before Lake Formation ingests the data.
- D . Create an AWS Lambda function that periodically queries and removes sensitive information from Lake Formation tables.
B
Explanation:
AWS Lake Formation natively supports fine-grained access control, including:
Row-level security
Cell/column-level security
These are implemented through data filters and Lake Formation permissions on governed tables and views. This allows you to centrally define which users or groups can access which rows and which columns, including sensitive data fields, with minimal custom code or maintenance.
Why others are incorrect:
A: IAM alone does not provide row-level or cell-level data security inside Lake Formation tables.
C and D: Using Lambda to scrub or periodically remove sensitive data is complex, error-prone, and high overhead compared to built-in Lake Formation data filters.
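As a sketch of option B, the request payload for a Lake Formation data filter can be built as below: the row filter expression provides row-level security, and the column list provides cell-level security. The catalog ID, database, table, and column names are hypothetical.

```python
# Sketch: build the TableData payload for Lake Formation's
# create_data_cells_filter API. All names below are hypothetical.
def build_data_filter(catalog_id, database, table):
    return {
        "TableCatalogId": catalog_id,
        "DatabaseName": database,
        "TableName": table,
        "Name": "hide-sensitive-rows",
        # Row-level security: only rows matching this expression are visible.
        "RowFilter": {"FilterExpression": "country = 'US'"},
        # Cell-level security: expose only these non-sensitive columns.
        "ColumnNames": ["order_id", "order_date", "country"],
    }

payload = build_data_filter("123456789012", "sales_db", "orders")
# In a real account you would pass this to:
# boto3.client("lakeformation").create_data_cells_filter(TableData=payload)
```

Once the filter exists, Lake Formation enforces it centrally for every principal granted access through the filter, with no per-query custom code.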
A company runs an application on Amazon EC2 instances. The instances need to access an Amazon RDS database by using specific credentials. The company uses AWS Secrets Manager to contain the credentials the EC2 instances must use.
Which solution will meet this requirement?
- A . Create an IAM role, and attach the role to each EC2 instance profile. Use an identity-based policy to grant the new IAM role access to the secret that contains the database credentials.
- B . Create an IAM user, and attach the user to each EC2 instance profile. Use a resource-based policy to grant the new IAM user access to the secret that contains the database credentials.
- C . Create a resource-based policy for the secret that contains the database credentials. Use EC2 Instance Connect to access the secret.
- D . Create an identity-based policy for the secret that contains the database credentials. Grant direct access to the EC2 instances.
A
Explanation:
IAM Role: Attaching an IAM role to an EC2 instance profile is a secure way to manage permissions without embedding credentials.
AWS Secrets Manager: Grants controlled access to database credentials and automatically rotates secrets if configured.
Identity-Based Policy: Ensures the IAM role only has access to specific secrets, enhancing security. AWS Secrets Manager Documentation
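A minimal sketch of the identity-based policy from option A, scoped to a single secret; the secret ARN is hypothetical:

```python
import json

# Sketch: identity-based policy attached to the EC2 instance role,
# allowing it to read one specific secret. The ARN is hypothetical.
secret_arn = ("arn:aws:secretsmanager:us-east-1:123456789012:"
              "secret:prod/db-credentials-AbCdEf")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": secret_arn,
    }],
}

policy_json = json.dumps(policy, indent=2)
```

The instance then retrieves credentials at runtime with `secretsmanager:GetSecretValue`, so no credentials are ever stored on the instance itself.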
An application is experiencing performance issues based on increased demand. This increased demand is on read-only historical records that are pulled from an Amazon RDS-hosted database with custom views and queries. A solutions architect must improve performance without changing the database structure.
Which approach will improve performance and MINIMIZE management overhead?
- A . Deploy Amazon DynamoDB, move all the data, and point to DynamoDB.
- B . Deploy Amazon ElastiCache (Redis OSS) and cache the data for the application.
- C . Deploy Memcached on Amazon EC2 and cache the data for the application.
- D . Deploy Amazon DynamoDB Accelerator (DAX) on Amazon RDS to improve cache performance.
B
Explanation:
AWS recommends using Amazon ElastiCache as an in-memory caching layer in front of relational databases such as Amazon RDS to offload read traffic and significantly improve performance for read-heavy workloads. ElastiCache (Redis OSS) provides microsecond latency and can cache the results of frequent or expensive queries without requiring any change to the underlying database schema or engine. This directly addresses the requirement to improve performance without changing the database structure and with minimal operational overhead, because ElastiCache is a fully managed service (patching, failure detection, replacement, etc., are handled by AWS).
Option A (DynamoDB) would require a full data migration and application changes, including schema and query rewrites.
Option C (Memcached on EC2) introduces additional management overhead for EC2 instances (scaling, patching, HA).
Option D (DAX) is a caching layer only for DynamoDB and cannot be used directly with Amazon RDS.
A company is developing a new application that uses a relational database to store user data and application configurations. The company expects the application to have steady user growth. The company expects the database usage to be variable and read-heavy, with occasional writes.
The company wants to cost-optimize the database solution. The company wants to use an AWS managed database solution that will provide the necessary performance.
Which solution will meet these requirements MOST cost-effectively?
- A . Deploy the database on Amazon RDS. Use Provisioned IOPS SSD storage to ensure consistent performance for read and write operations.
- B . Deploy the database on Amazon Aurora Serverless to automatically scale the database capacity based on actual usage to accommodate the workload.
- C . Deploy the database on Amazon DynamoDB. Use on-demand capacity mode to automatically scale throughput to accommodate the workload.
- D . Deploy the database on Amazon RDS. Use magnetic storage and use read replicas to accommodate the workload.
B
Explanation:
Amazon Aurora Serverless is a cost-effective, on-demand, autoscaling configuration for Amazon Aurora. It automatically adjusts the database’s capacity based on the current demand, which is ideal for workloads with variable and unpredictable usage patterns. Since the application is expected to be read-heavy with occasional writes and steady growth, Aurora Serverless can provide the necessary performance without requiring the management of database instances.
Cost-Optimization: Aurora Serverless only charges for the database capacity you use, making it a more cost-effective solution compared to always running provisioned database instances, especially for workloads with fluctuating demand.
Scalability: It automatically scales database capacity up or down based on actual usage, ensuring that you always have the right amount of resources available.
Performance: Aurora Serverless is built on the same underlying storage as Amazon Aurora, providing high performance and availability.
Why Not Other Options?
Option A (RDS with Provisioned IOPS SSD): While Provisioned IOPS SSD ensures consistent performance, it is generally more expensive and less flexible compared to the autoscaling nature of Aurora Serverless.
Option C (DynamoDB with On-Demand Capacity): DynamoDB is a NoSQL database and may not be the best fit for applications requiring relational database features.
Option D (RDS with Magnetic Storage and Read Replicas): Magnetic storage is outdated and generally slower. While read replicas help with read-heavy workloads, the overall performance might not be optimal, and magnetic storage doesn’t provide the necessary performance.
AWS Reference:
Amazon Aurora Serverless – Information on how Aurora Serverless works and its use cases.
Amazon Aurora Pricing – Details on the cost-effectiveness of Aurora Serverless.
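With Aurora Serverless v2, the capacity range the cluster scales within is set when the cluster is created. A sketch of the relevant parameters; the cluster identifier and capacity bounds are illustrative, not prescriptive:

```python
# Sketch: Aurora Serverless v2 scaling configuration for option B.
# Identifier and capacity bounds are illustrative only.
cluster_params = {
    "DBClusterIdentifier": "app-config-cluster",
    "Engine": "aurora-postgresql",
    "ServerlessV2ScalingConfiguration": {
        "MinCapacity": 0.5,   # Aurora Capacity Units (ACUs) at idle
        "MaxCapacity": 16.0,  # ceiling for read-heavy bursts
    },
}
# Real call: boto3.client("rds").create_db_cluster(**cluster_params)
```

Billing follows the ACUs actually consumed between these bounds, which is what makes the configuration cost-effective for variable workloads.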
A company runs an order management application on AWS. The application allows customers to place orders and pay with a credit card. The company uses an Amazon CloudFront distribution to deliver the application. A security team has set up logging for all incoming requests. The security team needs a solution to generate an alert if any user modifies the logging configuration.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Configure an Amazon EventBridge rule that is invoked when a user creates or modifies a CloudFront distribution. Add the AWS Lambda function as a target of the EventBridge rule.
- B . Create an Application Load Balancer (ALB). Enable AWS WAF rules for the ALB. Configure an AWS Config rule to detect security violations.
- C . Create an AWS Lambda function to detect changes in CloudFront distribution logging. Configure the Lambda function to use Amazon Simple Notification Service (Amazon SNS) to send notifications to the security team.
- D . Set up Amazon GuardDuty. Configure GuardDuty to monitor findings from the CloudFront distribution. Create an AWS Lambda function to address the findings.
- E . Create a private API in Amazon API Gateway. Use AWS WAF rules to protect the private API from common security problems.
A, C
Explanation:
CloudFront configuration changes, including changes to logging settings, are recorded as API calls in AWS CloudTrail. An Amazon EventBridge rule can match these events when a distribution is created or modified and invoke an AWS Lambda function (option A). The Lambda function can then inspect the change, detect modifications to the logging configuration, and use Amazon SNS to notify the security team (option C). Options B, D, and E add services (ALB, AWS WAF, GuardDuty, API Gateway) that do not monitor CloudFront configuration changes.
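The EventBridge rule for option A could use an event pattern like the following to match CloudFront configuration changes recorded by CloudTrail; the API action names are real CloudFront actions, while the rule name in the comment is a hypothetical choice:

```python
# Sketch: EventBridge event pattern matching CloudFront configuration
# changes recorded by CloudTrail.
event_pattern = {
    "source": ["aws.cloudfront"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["cloudfront.amazonaws.com"],
        "eventName": ["CreateDistribution", "UpdateDistribution"],
    },
}
# Real call (hypothetical rule name):
# boto3.client("events").put_rule(Name="cloudfront-config-change",
#                                 EventPattern=json.dumps(event_pattern))
```

Because CloudFront is a global service, its CloudTrail events are delivered in the US East (N. Virginia) Region, so the rule belongs there.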
An application uses an Amazon SQS queue and two AWS Lambda functions. One of the Lambda
functions pushes messages to the queue, and the other function polls the queue and receives queued messages.
A solutions architect needs to ensure that only the two Lambda functions can write to or read from the queue.
Which solution will meet these requirements?
- A . Attach an IAM policy to the SQS queue that grants the Lambda function principals read and write access. Attach an IAM policy to the execution role of each Lambda function that denies all access to the SQS queue except for the principal of each function.
- B . Attach a resource-based policy to the SQS queue to deny read and write access to the queue for any entity except the principal of each Lambda function. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.
- C . Attach a resource-based policy to the SQS queue that grants the Lambda function principals read and write access to the queue. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.
- D . Attach a resource-based policy to the SQS queue to deny all access to the queue. Attach an IAM policy to the execution role of each Lambda function that grants read and write access to the queue.
C
Explanation:
To ensure that only specific AWS Lambda functions can read from or write to an Amazon SQS queue, use resource-based policies attached directly to the SQS queue. These policies explicitly grant permissions to the IAM roles used by the Lambda functions. Additionally, the Lambda execution roles must also have IAM policies that permit SQS access. This dual-layer approach follows the AWS security best practice of granting least privilege access and ensures that no other service or entity can interact with the queue.
This is a common and supported pattern documented in the Amazon SQS Developer Guide, where resource-based policies restrict access at the queue level while IAM roles control permissions at the function level.
Reference: AWS Documentation – Amazon SQS Access Control, Lambda Permissions, and Resource-Based Policies
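A sketch of the resource-based queue policy from option C, granting send access to the producer function's role and receive/delete access to the consumer function's role; all ARNs are hypothetical:

```python
# Sketch: SQS resource-based policy granting only the two Lambda
# execution roles access to the queue. ARNs are hypothetical.
producer_role = "arn:aws:iam::123456789012:role/producer-fn-role"
consumer_role = "arn:aws:iam::123456789012:role/consumer-fn-role"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:orders-queue"

queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Producer function: may only send messages.
            "Effect": "Allow",
            "Principal": {"AWS": producer_role},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
        },
        {   # Consumer function: may only receive and delete messages.
            "Effect": "Allow",
            "Principal": {"AWS": consumer_role},
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage"],
            "Resource": queue_arn,
        },
    ],
}
# Applied via: sqs.set_queue_attributes(QueueUrl=...,
#     Attributes={"Policy": json.dumps(queue_policy)})
```

Each Lambda execution role would carry a matching identity-based allow for the same actions, completing the dual-layer model described above.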
A media company needs to migrate its Windows-based video editing environment to AWS. The company’s current environment processes 4K video files that require sustained throughput of 2 GB per second across multiple concurrent users.
The company’s storage needs increase by 1 TB each week. The company needs a shared file system that supports SMB protocol and can scale automatically based on storage demands.
Which solution will meet these requirements?
- A . Deploy an Amazon FSx for Windows File Server Multi-AZ file system with SSD storage.
- B . Deploy an Amazon Elastic File System (Amazon EFS) file system in Max I/O mode. Provision mount targets in multiple Availability Zones.
- C . Deploy an Amazon FSx for Lustre file system with a Persistent 2 deployment type. Provision the file system with 2 TB of storage.
- D . Deploy Amazon S3 File Gateway by using multiple cached gateway instances. Configure S3 Transfer Acceleration.
A
Explanation:
The workload is Windows-based and requires a shared file system with SMB support and very high throughput for 4K video editing.
Amazon FSx for Windows File Server is a fully managed, highly available native Windows file system that supports the SMB protocol, Active Directory integration, and can deliver high throughput and low latency suitable for media workloads.
FSx for Windows File Server supports automatic storage scaling to grow file system capacity as data increases, matching the requirement of +1 TB per week with minimal admin effort.
A Multi-AZ SSD deployment provides high availability and performance.
Why others are not correct:
B: Amazon EFS is NFS, not SMB, and is optimized for Linux clients, not Windows-based environments.
C: FSx for Lustre is a high-performance POSIX file system typically used with Linux-based HPC workloads, not Windows + SMB.
D: S3 File Gateway is not designed for sustained multi-GB/s shared-edit performance and adds additional complexity; it is more suited for backup/archival and limited shared file access.
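A sketch of the option A deployment parameters; subnet IDs and sizing values are illustrative, chosen to show how the Multi-AZ SSD deployment and throughput requirement map onto the API:

```python
# Sketch: FSx for Windows File Server Multi-AZ deployment.
# Subnet IDs and sizing values are illustrative.
fsx_params = {
    "FileSystemType": "WINDOWS",
    "StorageType": "SSD",
    "StorageCapacity": 4096,         # GiB; grows later via storage scaling
    "SubnetIds": ["subnet-aaa111", "subnet-bbb222"],
    "WindowsConfiguration": {
        "DeploymentType": "MULTI_AZ_1",          # standby in a second AZ
        "PreferredSubnetId": "subnet-aaa111",
        "ThroughputCapacity": 2048,  # MB/s, sized for sustained 4K editing
    },
}
# Real call: boto3.client("fsx").create_file_system(**fsx_params)
```

As weekly data growth consumes capacity, the file system's storage can be increased without rebuilding it, matching the +1 TB/week requirement.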
A company is developing a highly available natural language processing (NLP) application. The application handles large volumes of concurrent requests. The application performs NLP tasks such as entity recognition, sentiment analysis, and key phrase extraction on text data.
The company needs to store data that the application processes in a highly available and scalable database.
Which solution will meet these requirements?
- A . Create an Amazon API Gateway REST API endpoint to handle incoming requests. Configure the REST API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Comprehend to perform NLP tasks on the text data. Store the processed data in Amazon DynamoDB.
- B . Create an Amazon API Gateway HTTP API endpoint to handle incoming requests. Configure the HTTP API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Translate to perform NLP tasks on the text data. Store the processed data in Amazon ElastiCache.
- C . Create an Amazon SQS queue to buffer incoming requests. Deploy the NLP application on Amazon EC2 instances in an Auto Scaling group. Use Amazon Comprehend to perform NLP tasks. Store the processed data in an Amazon RDS database.
- D . Create an Amazon API Gateway WebSocket API endpoint to handle incoming requests. Configure the WebSocket API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Textract to perform NLP tasks on the text data. Store the processed data in Amazon ElastiCache.
A
Explanation:
Amazon API Gateway with AWS Lambda provides a highly available, automatically scaling serverless front end for large volumes of concurrent requests. Amazon Comprehend is the AWS service that performs the required NLP tasks (entity recognition, sentiment analysis, and key phrase extraction), and Amazon DynamoDB is a highly available, scalable database for the processed results. Option B uses Amazon Translate, which performs translation rather than these NLP tasks. Option C adds EC2 management overhead, and option D uses Amazon Textract, which extracts text from documents rather than analyzing it.
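A runnable sketch of the Lambda body for option A. The Comprehend responses below are hand-written stand-ins shaped like `detect_sentiment` and `detect_entities` output; a real handler would call `boto3.client("comprehend")` and write to a DynamoDB table, and the table name and key attribute are hypothetical:

```python
# Sketch: shape the DynamoDB item from Comprehend-style NLP results.
# The fake responses mimic detect_sentiment / detect_entities output.
def build_item(request_id, text, sentiment_resp, entities_resp):
    return {
        "RequestId": request_id,   # assumed partition key
        "Text": text,
        "Sentiment": sentiment_resp["Sentiment"],
        "Entities": [e["Text"] for e in entities_resp["Entities"]],
    }

fake_sentiment = {"Sentiment": "POSITIVE"}
fake_entities = {"Entities": [{"Text": "AWS", "Type": "ORGANIZATION"}]}
item = build_item("req-1", "AWS works well.", fake_sentiment, fake_entities)
# Real write (hypothetical table name):
# boto3.resource("dynamodb").Table("nlp-results").put_item(Item=item)
```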
A company uses Amazon S3 to store customer data that contains personally identifiable information (PII) attributes. The company needs to make the customer information available to company resources through an AWS Glue Catalog. The company needs to have fine-grained access control for the data so that only specific IAM roles can access the PII data.
Which solution will meet these requirements?
- A . Create one IAM policy that grants access to PII. Create a second IAM policy that grants access to non-PII data. Assign the PII policy to the specified IAM roles.
- B . Create one IAM role that grants access to PII. Create a second IAM role that grants access to non-PII data. Assign the PII policy to the specified IAM roles.
- C . Use AWS Lake Formation to provide the specified IAM roles access to the PII data.
- D . Use AWS Glue to create one view for PII data. Create a second view for non-PII data. Provide the specified IAM roles access to the PII view.
C
Explanation:
AWS Lake Formation is designed for managing fine-grained access control to data in an efficient manner:
Granular Permissions: Lake Formation allows column-level, row-level, and table-level access controls, which can precisely define access to PII data.
Integration with AWS Glue Catalog: Lake Formation natively integrates with AWS Glue for seamless data cataloging and access control.
Operational Efficiency: Centralized access control policies minimize the need for separate IAM roles or policies.
Why Other Options Are Not Ideal:
Option A:
Creating multiple IAM policies introduces complexity and lacks column-level access control. Not efficient.
Option B:
Managing multiple IAM roles for granular access is operationally complex. Not efficient.
Option D:
Creating views in Glue adds unnecessary complexity and may not provide the level of granularity that Lake Formation offers. Not the best choice.
AWS Reference:
AWS Lake Formation: AWS Documentation – Lake Formation
Fine-Grained Permissions with Lake Formation: AWS Documentation – Fine-Grained Permissions
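A sketch of a Lake Formation column-level grant for option C: give one IAM role SELECT on the PII columns only. The role, database, table, and column names are hypothetical:

```python
# Sketch: Lake Formation column-level grant. All names are hypothetical.
grant_request = {
    "Principal": {
        "DataLakePrincipalIdentifier":
            "arn:aws:iam::123456789012:role/pii-analyst-role",
    },
    "Resource": {
        "TableWithColumns": {
            "DatabaseName": "customers_db",
            "Name": "customers",
            "ColumnNames": ["email", "phone_number"],  # PII columns
        },
    },
    "Permissions": ["SELECT"],
}
# Real call: boto3.client("lakeformation").grant_permissions(**grant_request)
```

Roles without this grant can still be granted the non-PII columns, so the same Glue Catalog table serves both audiences with one central policy.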
A company is creating an application. The company stores data from tests of the application in multiple on-premises locations.
The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS
Cloud. The number of accounts and VPCs will increase during the next year. The network architecture must simplify the administration of new connections and must provide the ability to scale.
Which solution will meet these requirements with the LEAST administrative overhead?
- A . Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the on-premises locations.
- B . Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection to connect all VPCs and on-premises locations.
- C . Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the on-premises connections.
- D . Create an AWS Direct Connect connection between the on-premises locations and a central VPC. Connect the central VPC to other VPCs by using peering connections.
C
Explanation:
AWS Transit Gateway simplifies network connectivity by acting as a hub that can connect VPCs and on-premises networks through VPN or Direct Connect. It provides scalability and reduces administrative overhead by eliminating the need to manage complex peering relationships as the number of accounts and VPCs grows.
Reference: AWS Documentation – Transit Gateway
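The hub-and-spoke model of option C can be sketched as below: each new VPC needs only one attachment to the transit gateway rather than a new peering connection to every existing VPC. The gateway, VPC, and subnet IDs are hypothetical:

```python
# Sketch: build one transit gateway VPC attachment per VPC.
# All resource IDs are hypothetical.
def build_vpc_attachments(tgw_id, vpc_subnet_map):
    return [
        {
            "TransitGatewayId": tgw_id,
            "VpcId": vpc_id,
            "SubnetIds": subnet_ids,   # one subnet per AZ to attach
        }
        for vpc_id, subnet_ids in vpc_subnet_map.items()
    ]

attachments = build_vpc_attachments(
    "tgw-0abc123",
    {"vpc-analytics": ["subnet-a1"], "vpc-shared": ["subnet-b1"]},
)
# Real call per entry:
# boto3.client("ec2").create_transit_gateway_vpc_attachment(**attachments[0])
```

On-premises sites connect the same way, as VPN (or Direct Connect gateway) attachments to the same transit gateway, which is what keeps administration flat as accounts and VPCs grow.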
