Practice Free DVA-C02 Exam Online Questions
A developer is building a multi-tenant application using AWS Lambda, Amazon S3, and Amazon DynamoDB. Each S3 object prefix represents a tenant name, and DynamoDB uses the tenant name as
the partition key.
The developer must prevent cross-tenant data access during processing.
Which combination of actions will meet this requirement? (Select THREE.)
- A . Create a data access IAM role that allows the sts:TagSession action.
- B . Allow the Lambda execution role to assume the data access role.
- C . Configure IAM policies on the data access role to allow S3 and DynamoDB access only when resource attributes match the tenant session tag.
- D . Create a resource-based policy on DynamoDB based on principal tags.
- E . Create a resource control policy (RCP) for the S3 bucket.
- F . Configure the Lambda function to assume the data access role and pass the tenant name as a session tag.
A, C, F
Explanation:
Comprehensive and Detailed Explanation (250–300 words):
AWS best practices for multi-tenant isolation recommend session-based access control using IAM session tags. Session tags allow temporary credentials to be scoped dynamically based on tenant context.
First, the data access role must allow sts:TagSession (Option A) so tenant identifiers can be passed securely at runtime. The Lambda function assumes this role and includes the tenant name as a session tag (Option F).
Next, IAM policies on the data access role restrict access to S3 object prefixes and DynamoDB partition keys by evaluating the session tag (Option C). This ensures that even if a bug occurs, the role cannot access data belonging to another tenant.
Resource-based DynamoDB policies and RCPs are unnecessary here and add complexity.
This approach follows AWS’s recommended attribute-based access control (ABAC) model and provides strong tenant isolation with minimal operational overhead.
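A minimal sketch of the ABAC flow described above, in Python. The role ARN, bucket name, and `tenant` tag key are placeholder assumptions; the functions only build the `sts:AssumeRole` request and the session-tag-scoped policy statement rather than calling AWS.

```python
import json

# Hypothetical role ARN and tag key -- adjust to your environment.
DATA_ACCESS_ROLE_ARN = "arn:aws:iam::123456789012:role/data-access-role"

def build_assume_role_request(tenant: str) -> dict:
    """Parameters for sts.assume_role (Options A and F): the tenant name
    travels with the temporary credentials as a session tag."""
    return {
        "RoleArn": DATA_ACCESS_ROLE_ARN,
        "RoleSessionName": f"tenant-{tenant}",
        "Tags": [{"Key": "tenant", "Value": tenant}],
    }

def build_s3_policy_statement(bucket: str) -> dict:
    """Policy statement on the data access role (Option C): S3 access is
    limited to the object prefix matching the 'tenant' session tag."""
    return {
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:::{bucket}/${{aws:PrincipalTag/tenant}}/*",
    }

if __name__ == "__main__":
    print(json.dumps(build_s3_policy_statement("tenant-data"), indent=2))
```

An equivalent condition on the DynamoDB statement would compare `dynamodb:LeadingKeys` against `${aws:PrincipalTag/tenant}`.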
A company needs to deploy all its cloud resources by using AWS CloudFormation templates. A developer must create an Amazon Simple Notification Service (Amazon SNS) automatic notification to help enforce this rule. The developer creates an SNS topic and subscribes the email address of the company’s security team to the SNS topic.
The security team must receive a notification immediately if an IAM role is created without the use of CloudFormation.
Which solution will meet this requirement?
- A . Create an AWS Lambda function to filter events from CloudTrail if a role was created without CloudFormation. Configure the Lambda function to publish to the SNS topic. Create an Amazon EventBridge schedule to invoke the Lambda function every 15 minutes.
- B . Create an AWS Fargate task in Amazon Elastic Container Service (Amazon ECS) to filter events from CloudTrail if a role was created without CloudFormation. Configure the Fargate task to publish to the SNS topic. Create an Amazon EventBridge schedule to run the Fargate task every 15 minutes.
- C . Launch an Amazon EC2 instance that includes a script to filter events from CloudTrail if a role was created without CloudFormation. Configure the script to publish to the SNS topic. Create a cron job to run the script on the EC2 instance every 15 minutes.
- D . Create an Amazon EventBridge rule to filter events from CloudTrail if a role was created without CloudFormation. Specify the SNS topic as the target of the EventBridge rule.
D
Explanation:
EventBridge (formerly CloudWatch Events) is the ideal service for real-time event monitoring.
CloudTrail logs IAM role creation.
EventBridge rules can filter CloudTrail events and trigger SNS notifications instantly.
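As a sketch, the EventBridge event pattern for Option D might look like the following. The use of `userIdentity.invokedBy` with `anything-but` to exclude CloudFormation-initiated calls is an assumption about how such CloudTrail events appear; verify the field against your own CloudTrail records.

```python
import json

# EventBridge event pattern matching CloudTrail-recorded iam:CreateRole calls
# NOT made by CloudFormation. The rule's target would be the SNS topic.
event_pattern = {
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreateRole"],
        "userIdentity": {
            "invokedBy": [{"anything-but": ["cloudformation.amazonaws.com"]}]
        },
    },
}

if __name__ == "__main__":
    print(json.dumps(event_pattern, indent=2))
```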
A developer is building an application that uses an Amazon RDS for PostgreSQL database. To meet security requirements, the developer needs to ensure that data is encrypted at rest. The developer
must be able to rotate the encryption keys on demand.
Which solution will meet these requirements?
- A . Use an AWS KMS managed encryption key to encrypt the database.
- B . Create a symmetric customer managed AWS KMS key. Use the key to encrypt the database.
- C . Create a 256-bit AES-GCM encryption key. Store the key in AWS Secrets Manager, and enable managed rotation. Use the key to encrypt the database.
- D . Create a 256-bit AES-GCM encryption key. Store the key in AWS Secrets Manager. Configure an AWS Lambda function to perform key rotation. Use the key to encrypt the database.
B
Explanation:
Why Option B is Correct: A customer-managed AWS Key Management Service (KMS) key allows for encryption at rest and provides the ability to rotate the key on demand. This ensures compliance with security requirements for key management and database encryption.
RDS integrates natively with AWS KMS, allowing the use of a customer-managed key for encrypting data at rest.
Key rotation can be managed directly in AWS KMS without needing custom solutions.
Why Other Options are Incorrect:
Option A: AWS managed KMS keys are rotated automatically by AWS on a fixed schedule; they do not support key rotation on demand.
Option C & D: Storing keys in AWS Secrets Manager with custom rotation is not a recommended approach for database encryption. AWS KMS is designed specifically for secure key management and encryption.
AWS Documentation
Reference: Encrypting Amazon RDS Resources
AWS Key Management Service (KMS)
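A hedged sketch of the API parameters Option B implies: create a symmetric customer managed KMS key, create the PostgreSQL instance encrypted with it, and rotate the key on demand later (`kms:RotateKeyOnDemand`). All identifiers are placeholders; the functions build request parameters only, without calling AWS.

```python
# Parameters for kms.create_key: a symmetric customer managed key.
create_key_params = {
    "KeySpec": "SYMMETRIC_DEFAULT",
    "KeyUsage": "ENCRYPT_DECRYPT",
    "Description": "Customer managed key for RDS encryption",
}

def build_create_db_params(key_id: str) -> dict:
    """Parameters for rds.create_db_instance, encrypted at rest with the
    customer managed key. Instance sizing is an illustrative assumption."""
    return {
        "DBInstanceIdentifier": "app-db",
        "Engine": "postgres",
        "DBInstanceClass": "db.t3.medium",
        "AllocatedStorage": 20,
        "MasterUsername": "appadmin",
        "ManageMasterUserPassword": True,
        "StorageEncrypted": True,   # encryption at rest
        "KmsKeyId": key_id,         # the customer managed key
    }

def build_rotate_params(key_id: str) -> dict:
    # Parameters for kms.rotate_key_on_demand -- on-demand rotation,
    # only available for customer managed keys.
    return {"KeyId": key_id}
```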
A healthcare company uses AWS Amplify to host a patient management system. The system uses Amazon API Gateway to expose RESTful APIs. The backend logic of the system is handled by AWS Lambda functions.
One of the Lambda functions receives patient data that includes personally identifiable information (PII). The Lambda function sends the patient data to an Amazon DynamoDB table. The company must encrypt all patient data at rest and in transit before the data is stored in DynamoDB.
Which solution will meet this requirement?
- A . Configure the Lambda function to use AWS KMS keys with the AWS Database Encryption SDK to encrypt the patient data before sending the data to DynamoDB.
- B . Use AWS managed AWS KMS keys to encrypt the data in the DynamoDB table.
- C . Configure a DynamoDB stream on the table to invoke a Lambda function. Configure the Lambda function to use an AWS KMS key to encrypt the DynamoDB table and to update the table.
- D . Use an AWS Step Functions workflow to transfer the data to an Amazon SQS queue. Configure a Lambda function to encrypt the data in the queue before sending the data to the DynamoDB table.
A
Explanation:
Why Option A is Correct: Encrypting PII at rest and in transit before storing it in DynamoDB ensures end-to-end security. Using the AWS Database Encryption SDK with KMS keys allows the Lambda function to encrypt data before transmission, meeting security and compliance requirements.
Why Other Options are Incorrect:
Option B: While AWS-managed KMS keys encrypt DynamoDB data at rest, they do not encrypt data in transit.
Option C: DynamoDB streams process updates after the data is written to the table, failing to encrypt PII in transit.
Option D: Step Functions and SQS add unnecessary complexity and still require encryption logic for both transit and at rest.
AWS Documentation
Reference: Encrypting Data in DynamoDB
AWS Database Encryption SDK
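Conceptually, the AWS Database Encryption SDK automates an envelope-encryption flow like the sketch below: obtain a data key from KMS (`GenerateDataKey`), encrypt the PII locally with the plaintext copy, and store only the ciphertext plus the encrypted data key. Table and attribute names are assumptions, and the functions build request parameters only; a real AEAD cipher (e.g. AES-GCM) would perform the actual encryption step.

```python
def build_generate_data_key_params(kms_key_id: str) -> dict:
    """Parameters for kms.generate_data_key: returns a plaintext key for
    local encryption and a CiphertextBlob to store alongside the data."""
    return {"KeyId": kms_key_id, "KeySpec": "AES_256"}

def build_put_item_params(table: str, patient_id: str,
                          ciphertext: bytes, encrypted_data_key: bytes) -> dict:
    """Parameters for dynamodb.put_item: only encrypted bytes reach the
    table, so the data is protected before it leaves the Lambda function."""
    return {
        "TableName": table,
        "Item": {
            "patient_id": {"S": patient_id},
            "payload": {"B": ciphertext},            # encrypted PII
            "data_key": {"B": encrypted_data_key},   # decryptable only via KMS
        },
    }
```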
An ecommerce startup is preparing for an annual sales event. As the traffic to the company’s application increases, the development team wants to be notified when the Amazon EC2 instance’s CPU utilization exceeds 80%.
Which solution will meet this requirement?
- A . Create a custom Amazon CloudWatch alarm that sends a notification to an Amazon SNS topic when the CPU utilization exceeds 80%.
- B . Create a custom AWS CloudTrail alarm that sends a notification to an Amazon SNS topic when the CPU utilization exceeds 80%.
- C . Create a cron job on the EC2 instance that invokes the --describe-instance-information command on the host instance every 15 minutes and sends the results to an Amazon SNS topic.
- D . Create an AWS Lambda function that queries the AWS CloudTrail logs for the CPUUtilization metric every 15 minutes and sends a notification to an Amazon SNS topic when the CPU utilization exceeds 80%.
A
Explanation:
Step-by-Step Breakdown:
Requirement Summary:
Get notified when EC2 CPU Utilization > 80%
Option A: CloudWatch Alarm with SNS
Correct and standard AWS practice
CloudWatch automatically collects EC2 metrics, including CPUUtilization.
You can set a CloudWatch Alarm with a threshold (80% in this case).
Then, trigger an SNS notification to email, SMS, Lambda, etc.
Option B: AWS CloudTrail alarm
Incorrect: CloudTrail logs API activity, not performance metrics.
It doesn’t track metrics like CPU utilization.
Option C: Cron job on EC2 running --describe-instance-information
Incorrect: This doesn’t give CPU usage.
Also inefficient, and polling is bad practice when CloudWatch already monitors this natively.
Option D: Lambda function querying CloudTrail for CPU usage
Incorrect and conceptually flawed.
CloudTrail does not store performance metrics; CloudWatch does.
CloudWatch Alarms for EC2:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
EC2 Metrics in CloudWatch:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html
Amazon SNS Notification Setup: https://docs.aws.amazon.com/sns/latest/dg/sns-email-notifications.html
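A sketch of the `put_metric_alarm` parameters Option A describes. The instance ID, topic ARN, and 5-minute period are illustrative assumptions; the function builds the request without calling CloudWatch.

```python
def build_cpu_alarm_params(instance_id: str, topic_arn: str) -> dict:
    """Parameters for cloudwatch.put_metric_alarm: alarm when the average
    CPUUtilization of one instance exceeds 80%, notifying an SNS topic."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                   # evaluate 5-minute averages
        "EvaluationPeriods": 1,
        "Threshold": 80.0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],     # notify the SNS topic on ALARM
    }
```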
A company is building a web application on AWS. When a customer sends a request, the application will generate reports and then make the reports available to the customer within one hour. Reports should be accessible to the customer for 8 hours. Some reports are larger than 1 MB. Each report is unique to the customer. The application should delete all reports that are older than 2 days.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Generate the reports and then store the reports as Amazon DynamoDB items that have a specified TTL. Generate a URL that retrieves the reports from DynamoDB. Provide the URL to customers through the web application.
- B . Generate the reports and then store the reports in an Amazon S3 bucket that uses server-side encryption. Attach the reports to an Amazon Simple Notification Service (Amazon SNS) message. Subscribe the customer to email notifications from Amazon SNS.
- C . Generate the reports and then store the reports in an Amazon S3 bucket that uses server-side encryption. Generate a presigned URL that contains an expiration date. Provide the URL to customers through the web application. Add S3 Lifecycle configuration rules to the S3 bucket to delete old reports.
- D . Generate the reports and then store the reports in an Amazon RDS database with a date stamp. Generate an URL that retrieves the reports from the RDS database. Provide the URL to customers through the web application. Schedule an hourly AWS Lambda function to delete database records that have expired date stamps.
C
Explanation:
This solution will meet the requirements with the least operational overhead because it uses Amazon S3 as a scalable, secure, and durable storage service for the reports. The presigned URL will allow customers to access their reports for a limited time (8 hours) without requiring additional authentication. The S3 Lifecycle configuration rules will automatically delete the reports that are older than 2 days, reducing storage costs and complying with the data retention policy.
Option A is not optimal because it will incur additional costs and complexity to store the reports as DynamoDB items, which have a size limit of 400 KB.
Option B is not optimal because Amazon SNS messages are limited to 256 KB, so reports larger than 1 MB cannot be attached, and email delivery timing is not guaranteed.
Option D is not optimal because it will require more operational overhead to manage an RDS database and a Lambda function for storing and deleting the reports.
Reference: Amazon S3 Presigned URLs, Amazon S3 Lifecycle
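A sketch of the two pieces Option C combines: the arguments for an 8-hour presigned GET URL (passed to `s3.generate_presigned_url("get_object", ...)`) and a lifecycle rule that expires reports after 2 days. Bucket and prefix names are placeholder assumptions.

```python
EIGHT_HOURS = 8 * 60 * 60

def build_presign_args(bucket: str, key: str) -> dict:
    """Keyword arguments for s3.generate_presigned_url: the URL stops
    working after 8 hours, matching the access-window requirement."""
    return {"Params": {"Bucket": bucket, "Key": key}, "ExpiresIn": EIGHT_HOURS}

# Lifecycle configuration for put_bucket_lifecycle_configuration: S3 itself
# deletes reports older than 2 days, with no scheduled code to maintain.
lifecycle_config = {
    "Rules": [{
        "ID": "expire-old-reports",
        "Status": "Enabled",
        "Filter": {"Prefix": "reports/"},
        "Expiration": {"Days": 2},
    }]
}
```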
A developer is working on an AWS Lambda function that accesses Amazon DynamoDB. The Lambda function must retrieve an item and update some of its attributes, or create the item if it does not exist. The Lambda function has access to the primary key.
Which IAM permissions should the developer request for the Lambda function to achieve this functionality?
A) (policy image not shown)
B) (policy image not shown)
C) (policy image not shown)
D) (policy image not shown)
- A . Option A
- B . Option B
- C . Option C
- D . Option D
B
Explanation:
Requirement Summary:
Lambda function:
Retrieves an item from DynamoDB
Updates its attributes or creates it if it does not exist
Has primary key
Needs minimum required IAM permissions
Analysis of Required Permissions:
To get an item and update it or create it if it doesn’t exist, the Lambda function must use the **PutItem** or **UpdateItem** API, and read with **GetItem**.
Valid API options:
GetItem: Read the item from the table.
UpdateItem: Update existing item attributes or insert if it doesn’t exist (with UpdateExpression and ConditionExpression).
When UpdateItem is used without a conditional check for existence, it can create a new item if it does not exist (acts like upsert).
DescribeTable: (Optional) Used if you need table metadata (not strictly required here).
Evaluate the Choices:
Option A:
DeleteItem – not needed.
GetItem – required.
PutItem – can create or replace an item, but not partially update it.
Close, but PutItem overwrites the full item rather than updating it in place. Acceptable, but UpdateItem is better suited.
Option B:
UpdateItem – required to modify attributes or insert a new item.
GetItem – required to check existence or read data.
DescribeTable – optional, but not harmful.
BEST FIT – matches the update-or-create logic.
Option C:
GetRecords – used for DynamoDB Streams, not standard item reads.
PutItem – valid, but insufficient on its own here.
UpdateTable – changes table settings, not data.
Incorrect usage context.
Option D:
UpdateItem, GetItem, PutItem – all valid actions.
But UpdateItem alone is sufficient for the upsert; including PutItem grants more permissions than needed and is redundant.
UpdateItem API:
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html
PutItem vs UpdateItem:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html
IAM actions for DynamoDB: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazondynamodb.html
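A sketch of the upsert call the correct option permits: `UpdateItem` creates the item when the key does not exist and updates the named attributes when it does. Table, key, and attribute names are illustrative assumptions; the function builds the request parameters only.

```python
def build_upsert_params(table: str, item_id: str, status: str) -> dict:
    """Parameters for dynamodb.update_item: behaves as an upsert -- the item
    is created if absent, otherwise only the named attribute is changed."""
    return {
        "TableName": table,
        "Key": {"id": {"S": item_id}},
        "UpdateExpression": "SET #s = :s",
        "ExpressionAttributeNames": {"#s": "status"},  # 'status' is a reserved word
        "ExpressionAttributeValues": {":s": {"S": status}},
        "ReturnValues": "ALL_NEW",
    }
```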
An application that runs on AWS receives messages from an Amazon Simple Queue Service (Amazon SQS) queue and processes the messages in batches. The application sends the data to another SQS queue to be consumed by another legacy application. The legacy system can take up to 5 minutes to process some transaction data.
A developer wants to ensure that there are no out-of-order updates in the legacy system. The developer cannot alter the behavior of the legacy system.
Which solution will meet these requirements?
- A . Use an SQS FIFO queue. Configure the visibility timeout value.
- B . Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the DelaySeconds values.
- C . Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the visibility timeout value.
- D . Use an SQS FIFO queue. Configure the DelaySeconds value.
A
Explanation:
An SQS FIFO queue is a type of queue that preserves the order of messages and ensures that each message is delivered and processed only once1. This is suitable for the scenario where the developer wants to ensure that there are no out-of-order updates in the legacy system.
The visibility timeout value is the amount of time that a message is invisible in the queue after a consumer receives it2. This prevents other consumers from processing the same message simultaneously. If the consumer does not delete the message before the visibility timeout expires, the message becomes visible again and another consumer can receive it2.
In this scenario, the developer needs to configure the visibility timeout value to be longer than the maximum processing time of the legacy system, which is 5 minutes. This will ensure that the message remains invisible in the queue until the legacy system finishes processing it and deletes it. This will prevent duplicate or out-of-order processing of messages by the legacy system.
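A sketch of Option A's configuration: a FIFO queue whose visibility timeout exceeds the legacy system's 5-minute worst case, and a send call that keeps a transaction stream ordered via `MessageGroupId`. Queue name and group key are placeholder assumptions; the dicts are request parameters only.

```python
# Parameters for sqs.create_queue (FIFO queue names must end in .fifo).
create_queue_params = {
    "QueueName": "legacy-input.fifo",
    "Attributes": {
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "VisibilityTimeout": "360",   # 6 min > the 5-min worst-case processing time
    },
}

def build_send_params(queue_url: str, body: str, group_id: str) -> dict:
    """Parameters for sqs.send_message: messages sharing a MessageGroupId
    are delivered strictly in order, one batch at a time."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,
    }
```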
A developer is building a highly secure healthcare application using serverless components. This application requires writing temporary data to /tmp storage on an AWS Lambda function.
How should the developer encrypt this data?
- A . Enable Amazon EBS volume encryption with an AWS KMS key in the Lambda function configuration so that all storage attached to the Lambda function is encrypted.
- B . Set up the Lambda function with a role and key policy to access an AWS KMS key. Use the key to generate a data key used to encrypt all data prior to writing to /tmp storage.
- C . Use OpenSSL to generate a symmetric encryption key on Lambda startup. Use this key to encrypt the data prior to writing to /tmp.
- D . Use an on-premises hardware security module (HSM) to generate keys, where the Lambda function requests a data key from the HSM and uses that to encrypt data on all requests to the function.
B
Explanation:
The Lambda function's role can request a data key from AWS KMS (GenerateDataKey) and use it to encrypt data before writing to /tmp. Lambda does not attach EBS volumes, so Option A is not applicable. A key generated with OpenSSL at startup (Option C) is not centrally managed or auditable, and an on-premises HSM (Option D) adds latency and operational overhead to a serverless design.
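Option B's envelope-encryption flow can be sketched as follows. The key alias and file name are placeholder assumptions, `build_data_key_request` only builds the KMS request, and in practice a real AEAD cipher (e.g. AES-GCM via the AWS Encryption SDK) would perform the encrypt step before the ciphertext is written to /tmp and the plaintext key is discarded.

```python
def build_data_key_request(kms_key_id: str) -> dict:
    """Parameters for kms.generate_data_key: the response carries a
    Plaintext key (used once, in memory) and a CiphertextBlob (storable)."""
    return {"KeyId": kms_key_id, "KeySpec": "AES_256"}

def tmp_path(name: str) -> str:
    """Lambda's only writable scratch space is /tmp; only ciphertext
    should ever be written there."""
    return f"/tmp/{name}"
```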
A social media application is experiencing high volumes of new user requests after a recent marketing campaign. The application is served by an Amazon RDS for MySQL instance. A solutions architect examines the database performance and notices high CPU usage and many "too many connections" errors that lead to failed requests on the database. The solutions architect needs to address the failed requests.
Which solution will meet this requirement?
- A . Deploy an Amazon DynamoDB Accelerator (DAX) cluster. Configure the application to use the DAX cluster.
- B . Deploy an RDS Proxy. Configure the application to use the RDS Proxy.
- C . Migrate the database to an Amazon RDS for PostgreSQL instance.
- D . Deploy an Amazon ElastiCache (Redis OSS) cluster. Configure the application to use the ElastiCache cluster.
B
Explanation:
Why Option B is Correct: RDS Proxy manages database connections efficiently, reducing overhead on the RDS instance and mitigating "too many connections" errors.
Why Other Options are Incorrect:
Option A: DAX is for DynamoDB, not RDS.
Option C: Migration to PostgreSQL does not address the current issue.
Option D: ElastiCache is useful for caching but does not solve connection pool issues.
AWS Documentation
Reference: Amazon RDS Proxy
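With RDS Proxy, the only application change is pointing the connection string at the proxy endpoint instead of the instance endpoint, as sketched below. The hostnames are placeholder assumptions and the DSN format is generic; the proxy pools and reuses connections, so bursts of Lambda invocations no longer exhaust max_connections.

```python
def build_dsn(user: str, host: str, db: str) -> str:
    """Generic MySQL DSN; only the host differs between the two setups."""
    return f"mysql://{user}@{host}:3306/{db}"

# Before: each request connects straight to the instance, so connection
# count grows with traffic until "too many connections" errors appear.
direct = build_dsn("app", "mydb.abc123.us-east-1.rds.amazonaws.com", "app")

# After: requests go through RDS Proxy, which multiplexes them over a
# pooled set of database connections.
proxied = build_dsn("app", "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com", "app")
```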
