Practice Free DVA-C02 Exam Online Questions
A developer is modifying a large-scale IoT application that stores device telemetry data in an Amazon DynamoDB table. The telemetry data is valuable only for a limited time, but the application stores the data indefinitely. Data storage is slowing the application down. The developer needs a solution to improve the performance of the application.
Which solution will meet this requirement in the MOST operationally efficient way?
- A . Create an AWS Lambda function to run an Amazon EventBridge job on a schedule to scan the DynamoDB table for old items and to delete them.
- B . Archive old data in an Amazon S3 bucket. Set up an S3 Lifecycle policy to transition old data to a more cost-effective storage class.
- C . Set a TTL attribute for the telemetry data. Activate TTL on the DynamoDB table.
- D . Change the table to on-demand capacity mode.
C
Explanation:
The most operationally efficient way to remove time-bound data from a DynamoDB table is to use Time to Live (TTL). TTL is a DynamoDB feature that lets you define an attribute (typically a Unix epoch timestamp) that represents an item’s expiration time. After the timestamp has passed, DynamoDB automatically marks the item as expired and later deletes it in the background. This directly satisfies the requirement because the telemetry is only valuable for a limited time, yet it is currently stored indefinitely.
Enabling TTL (option C) improves performance and operational efficiency by preventing the table from growing without bounds. As the item count grows, access patterns that rely on scans, large partitions, or indexes can degrade, and storage-related overhead increases. TTL helps keep the dataset “fresh” by automatically removing stale telemetry, reducing the amount of data the application must work around and decreasing overall storage footprint.
Option A is operationally heavier: scanning and deleting items with a scheduled Lambda introduces ongoing maintenance, costs, and risk of throttling (table scans can be expensive and disruptive). It also requires custom logic, error handling, and retries.
Option B addresses cost optimization for archived data but does not fix the DynamoDB table size or the performance degradation caused by keeping old data in the table; archiving is useful, but you still need an efficient deletion mechanism from DynamoDB.
Option D (on-demand capacity mode) changes how capacity is managed, not how much data is stored; it does not remove stale items and therefore does not address the core problem.
Therefore, the best and most operationally efficient solution is C: add a TTL attribute to each telemetry item and enable TTL on the DynamoDB table so DynamoDB automatically expires and
deletes old telemetry.
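As a rough sketch (the table and attribute names are hypothetical), enabling TTL amounts to one UpdateTimeToLive call plus writing an epoch-seconds expiry attribute with each item; the dictionaries below mirror the parameters you would pass to a DynamoDB client such as boto3:

```python
import time

TABLE = "DeviceTelemetry"   # hypothetical table name
TTL_ATTR = "expireAt"       # hypothetical TTL attribute name

def enable_ttl_params():
    """Parameters for DynamoDB UpdateTimeToLive
    (e.g. boto3 dynamodb client.update_time_to_live)."""
    return {
        "TableName": TABLE,
        "TimeToLiveSpecification": {"Enabled": True, "AttributeName": TTL_ATTR},
    }

def telemetry_item(device_id, payload, retention_days=30):
    """Build a PutItem-style item whose TTL attribute is a Unix epoch timestamp."""
    expire_at = int(time.time()) + retention_days * 24 * 60 * 60
    return {
        "deviceId": {"S": device_id},
        "payload": {"S": payload},
        TTL_ATTR: {"N": str(expire_at)},  # TTL requires a Number attribute holding epoch seconds
    }
```

Once TTL is enabled, DynamoDB expires items with a past timestamp and deletes them in the background at no extra cost.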
A company is expanding the compatibility of its photo-sharing mobile app to hundreds of additional devices with unique screen dimensions and resolutions. Photos are stored in Amazon S3 in their original format and resolution. The company uses an Amazon CloudFront distribution to serve the photos. The app includes the dimension and resolution of the display as GET parameters with every request.
A developer needs to implement a solution that optimizes the photos that are served to each device to reduce load time and increase photo quality.
Which solution will meet these requirements MOST cost-effectively?
- A . Use S3 Batch Operations to invoke an AWS Lambda function to create new variants of the photos with the required dimensions and resolutions. Create a dynamic CloudFront origin that automatically maps the request of each device to the corresponding photo variant.
- B . Use S3 Batch Operations to invoke an AWS Lambda function to create new variants of the photos with the required dimensions and resolutions. Create a Lambda@Edge function to route requests to the corresponding photo variant by using request headers.
- C . Create a Lambda@Edge function that optimizes the photos upon request and returns the photos as a response. Change the CloudFront TTL cache policy to the maximum value possible.
- D . Create a Lambda@Edge function that optimizes the photos upon request and returns the photos as a response. In the same function store a copy of the processed photos on Amazon S3 for subsequent requests.
D
Explanation:
This solution meets the requirements most cost-effectively because it optimizes the photos on demand and caches them for future requests. Lambda@Edge allows the developer to run Lambda functions at AWS locations closer to viewers, which can reduce latency and improve photo quality. The developer can create a Lambda@Edge function that uses the GET parameters from each request to optimize the photos with the required dimensions and resolutions and returns them as a response. The function can also store a copy of the processed photos on Amazon S3 for subsequent requests, which can reduce processing time and costs. Using S3 Batch Operations to create new variants of the photos will incur additional storage costs and may not cover all possible dimensions and resolutions. Creating a dynamic CloudFront origin or a Lambda@Edge function to route requests to corresponding photo variants will require maintaining a mapping of device types and photo variants, which can be complex and error-prone.
Reference: [Lambda@Edge Overview], [Resizing Images with Amazon CloudFront & Lambda@Edge]
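One way to make option D concrete is a deterministic S3 key for each processed variant, so the Lambda@Edge function can check for a cached copy before resizing; the "original/"/"processed/" prefix scheme below is purely illustrative:

```python
import posixpath

def variant_key(original_key, width, height):
    """Map an original photo key plus the device's display dimensions
    to the key where the processed copy is stored, e.g.
    "original/cat.jpg" at 640x480 -> "processed/640x480/cat.jpg".
    """
    filename = posixpath.basename(original_key)
    return f"processed/{width}x{height}/{filename}"
```

Because the key is derived purely from the request parameters, subsequent requests for the same dimensions can be served straight from S3 (and the CloudFront cache) without reprocessing.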
A company has an application that processes audio files for different departments. When audio files are saved to an Amazon S3 bucket, an AWS Lambda function receives an event notification and processes the audio input.
A developer needs to update the solution so that the application can process the audio files for each department independently. The application must publish the audio file location for each department to each department’s existing Amazon SQS queue.
Which solution will meet these requirements with no changes to the Lambda function code?
- A . Configure the S3 bucket to send the event notifications to an Amazon SNS topic. Subscribe each department’s SQS queue to the SNS topic. Configure subscription filter policies.
- B . Update the Lambda function to write the file location to a single shared SQS queue. Configure the shared SQS queue to send the file reference to each department’s SQS queue.
- C . Update the Lambda function to send the file location to each department’s SQS queue.
- D . Configure the S3 bucket to send the event notifications to each department’s SQS queue.
A
Explanation:
The key constraint is no changes to the Lambda code, while still fanning out notifications so that each department receives only its relevant file locations in its existing SQS queue. The cleanest “plumbing-only” pattern is S3 → SNS → SQS with SNS subscription filter policies.
Amazon S3 event notifications can publish events to an SNS topic. SNS then delivers messages to multiple subscribers, including SQS queues. By subscribing each department’s SQS queue to the SNS topic, the system can fan out the event to all queues. To ensure each department receives only its relevant events, the developer can configure SNS subscription filter policies (for example, based on object key prefixes like /deptA/, /deptB/). This routes messages without requiring any change to the Lambda function.
Option D is not ideal because S3 event notifications do not provide the same flexible filtering/routing to multiple SQS queues as SNS filter policies do (and configuring many direct notifications becomes harder to manage).
Options B and C require Lambda code changes, which violates the requirement.
Therefore, use S3 event notifications to SNS, subscribe each department SQS queue, and use filter policies for routing.
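A sketch of the subscription with a filter policy (the queue ARNs and the "department" message attribute are assumptions about how messages are tagged); the dictionary mirrors the parameters for SNS Subscribe, e.g. boto3's sns.subscribe:

```python
import json

def subscribe_params(topic_arn, queue_arn, department):
    """Parameters for SNS Subscribe with a filter policy, so this
    queue only receives messages whose 'department' attribute matches."""
    return {
        "TopicArn": topic_arn,
        "Protocol": "sqs",
        "Endpoint": queue_arn,
        "Attributes": {
            # Deliver only messages tagged with this department
            "FilterPolicy": json.dumps({"department": [department]}),
        },
    }
```

Repeating this per department gives each existing queue its own filtered view of the S3 events, with no Lambda code changes.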
A developer has observed an increase in bugs in the AWS Lambda functions that a development team has deployed in its Node.js application.
To minimize these bugs, the developer wants to implement automated testing of Lambda functions in an environment that closely simulates the Lambda environment.
The developer needs to give other developers the ability to run the tests locally. The developer also needs to integrate the tests into the team’s continuous integration and continuous delivery (CI/CD) pipeline before the AWS Cloud Development Kit (AWS CDK) deployment.
Which solution will meet these requirements?
- A . Create sample events based on the Lambda documentation. Create automated test scripts that use the cdk local invoke command to invoke the Lambda functions. Check the response. Document the test scripts for the other developers on the team. Update the CI/CD pipeline to run the test scripts.
- B . Install a unit testing framework that reproduces the Lambda execution environment. Create sample events based on the Lambda documentation. Invoke the handler function by using a unit testing framework. Check the response. Document how to run the unit testing framework for the other developers on the team. Update the CI/CD pipeline to run the unit testing framework.
- C . Install the AWS Serverless Application Model (AWS SAM) CLI tool. Use the sam local generate-event command to generate sample events for the automated tests. Create automated test scripts that use the sam local invoke command to invoke the Lambda functions. Check the response. Document the test scripts for the other developers on the team. Update the CI/CD pipeline to run the test scripts.
- D . Create sample events based on the Lambda documentation. Create a Docker container from the Node.js base image to invoke the Lambda functions. Check the response. Document how to run the Docker container for the other developers on the team. Update the CI/CD pipeline to run the Docker container.
C
Explanation:
The AWS Serverless Application Model Command Line Interface (AWS SAM CLI) is a command-line tool for local development and testing of serverless applications. The sam local generate-event command generates sample events for automated tests, and the sam local invoke command invokes Lambda functions in a local container that closely simulates the Lambda execution environment. Therefore, option C is correct.
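A minimal sketch of the commands such a test script might wrap (the function and event-file names are hypothetical); building them as argument lists keeps them easy to hand to subprocess.run:

```python
def generate_event_cmd(service="s3", event="put"):
    """sam local generate-event <service> <event> prints a sample event JSON."""
    return ["sam", "local", "generate-event", service, event]

def invoke_cmd(function_name, event_file):
    """sam local invoke runs the function in a local Lambda-like Docker container."""
    return ["sam", "local", "invoke", function_name, "--event", event_file]
```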
A developer is writing an AWS Lambda function. The developer wants to log key events that occur while the Lambda function runs. The developer wants to include a unique identifier to associate the events with a specific function invocation.
The developer adds the following code to the Lambda function:
(code snippet not shown)
Which solution will meet this requirement?
- A . Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to standard output.
- B . Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to a file.
- C . Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to standard output.
- D . Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to a file.
A
Explanation:
https://docs.aws.amazon.com/lambda/latest/dg/nodejs-context.html https://docs.aws.amazon.com/lambda/latest/dg/nodejs-logging.html
The question does not state the runtime explicitly, but the code shown is written in Node.js.
AWS Lambda is a service that lets developers run code without provisioning or managing servers. The developer can use the AWS request ID field in the context object to obtain a unique identifier for each function invocation. The developer can configure the application to write logs to standard output, which will be captured by Amazon CloudWatch Logs. This solution will meet the requirement of logging key events with a unique identifier.
Reference: [What Is AWS Lambda? – AWS Lambda]
[AWS Lambda Function Handler in Node.js – AWS Lambda]
[Using Amazon CloudWatch – AWS Lambda]
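The question's snippet is Node.js (where the field is context.awsRequestId), but the pattern is the same in any runtime; a Python sketch of answer A, writing the request ID to standard output so CloudWatch Logs captures it:

```python
def handler(event, context):
    # context.aws_request_id uniquely identifies this invocation
    # (the Node.js equivalent is context.awsRequestId)
    print(f"[{context.aws_request_id}] processing event")
    # ... log other key events the same way ...
    print(f"[{context.aws_request_id}] done")
    return {"statusCode": 200}
```

Writing to standard output (print, or a logging framework) is sufficient: the Lambda runtime forwards stdout and stderr to CloudWatch Logs automatically, whereas writing to a local file would be lost when the execution environment is recycled.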
A developer is using an AWS Lambda function to generate avatars for profile pictures that are uploaded to an Amazon S3 bucket. The Lambda function is automatically invoked for profile pictures that are saved under the /original/ S3 prefix. The developer notices that some pictures cause the Lambda function to time out. The developer wants to implement a fallback mechanism by using another Lambda function that resizes the profile picture.
Which solution will meet these requirements with the LEAST development effort?
- A . Set the image resize Lambda function as a destination of the avatar generator Lambda function for the events that fail processing.
- B . Create an Amazon Simple Queue Service (Amazon SQS) queue. Set the SQS queue as a destination with an on failure condition for the avatar generator Lambda function. Configure the image resize Lambda function to poll from the SQS queue.
- C . Create an AWS Step Functions state machine that invokes the avatar generator Lambda function and uses the image resize Lambda function as a fallback. Create an Amazon EventBridge rule that matches events from the S3 bucket to invoke the state machine.
- D . Create an Amazon Simple Notification Service (Amazon SNS) topic. Set the SNS topic as a destination with an on failure condition for the avatar generator Lambda function. Subscribe the image resize Lambda function to the SNS topic.
A
Explanation:
The solution that will meet the requirements with the least development effort is to set the image resize Lambda function as a destination of the avatar generator Lambda function for the events that fail processing. This way, the fallback mechanism is automatically triggered by the Lambda service without requiring any additional components or configuration. The other options involve creating and managing additional resources such as queues, topics, state machines, or rules, which would increase the complexity and cost of the solution.
Reference: Using AWS Lambda destinations
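Wiring the on-failure destination is a single configuration call; the dictionary below mirrors the parameters for Lambda's PutFunctionEventInvokeConfig API (the function names and ARN are placeholders):

```python
def on_failure_destination_params(function_name, resize_function_arn):
    """Parameters for PutFunctionEventInvokeConfig (e.g. boto3 lambda
    client.put_function_event_invoke_config): route the records of
    failed asynchronous invocations to the image resize function."""
    return {
        "FunctionName": function_name,
        "DestinationConfig": {
            "OnFailure": {"Destination": resize_function_arn},
        },
    }
```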
A company has deployed infrastructure on AWS. A development team wants to create an AWS Lambda function that will retrieve data from an Amazon Aurora database. The Amazon Aurora database is in a private subnet in company’s VPC. The VPC is named VPC1. The data is relational in nature. The Lambda function needs to access the data securely.
Which solution will meet these requirements?
- A . Create the Lambda function. Configure VPC1 access for the function. Attach a security group named SG1 to both the Lambda function and the database. Configure the security group inbound and outbound rules to allow TCP traffic on Port 3306.
- B . Create and launch a Lambda function in a new public subnet that is in a new VPC named VPC2. Create a peering connection between VPC1 and VPC2.
- C . Create the Lambda function. Configure VPC1 access for the function. Assign a security group named SG1 to the Lambda function. Assign a second security group named SG2 to the database. Add an inbound rule to SG1 to allow TCP traffic from Port 3306.
- D . Export the data from the Aurora database to Amazon S3. Create and launch a Lambda function in VPC1. Configure the Lambda function query the data from Amazon S3.
A
Explanation:
AWS Lambda is a service that lets you run code without provisioning or managing servers. Lambda functions can be configured to access resources in a VPC, such as an Aurora database, by specifying one or more subnets and security groups in the VPC settings of the function. A security group acts as a virtual firewall that controls inbound and outbound traffic for the resources in a VPC. To allow a Lambda function to communicate with an Aurora database, both resources need to be associated with the same security group, and the security group rules need to allow TCP traffic on Port 3306, which is the default port for MySQL databases.
Reference: [Configuring a Lambda function to access resources in a VPC]
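As a sketch, the shared group's self-referencing ingress rule for MySQL/Aurora traffic looks like this (the group ID is a placeholder); it mirrors the parameters for EC2's AuthorizeSecurityGroupIngress:

```python
def mysql_ingress_params(sg_id):
    """Allow TCP 3306 within the shared group SG1, so the Lambda
    function and the Aurora database (both attached to SG1) can talk."""
    return {
        "GroupId": sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # Self-referencing rule: the traffic source is the same security group
            "UserIdGroupPairs": [{"GroupId": sg_id}],
        }],
    }
```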
Users of a web-based music application are experiencing latency issues on one of the application’s most popular pages. A developer identifies that the issue is caused by the slow load time of specific widgets that rank and sort various songs and albums.
The developer needs to ensure that the widgets load more quickly by using built-in, in-memory ranking and sorting techniques. The developer must ensure that the data remains up to date.
Which solution will meet these requirements with the LEAST latency?
- A . Provision an Amazon ElastiCache (Memcached) cluster. Implement a lazy-loading caching strategy.
- B . Provision an Amazon ElastiCache (Redis OSS) cluster. Implement a write-through caching strategy.
- C . Provision an Amazon ElastiCache (Memcached) cluster. Implement a write-through caching strategy.
- D . Provision an Amazon ElastiCache (Redis OSS) cluster. Implement a lazy-loading caching strategy.
B
Explanation:
To minimize widget latency for ranking and sorting, the best fit is Amazon ElastiCache for Redis OSS because Redis provides built-in in-memory data structures specifically suited for ranking use cases, such as Sorted Sets (ZSETs). Sorted sets maintain members in order by score, which enables fast retrieval of “top N” items and efficient re-ranking without repeatedly querying the primary database and sorting at the application tier. Memcached, by contrast, is a simpler key-value cache and does not provide comparable native ranking/sorting data structures.
To ensure data remains up to date, a write-through caching strategy is preferred. With write-through, updates are written to the cache at the same time as the system of record, so the cache reflects the latest values immediately after writes. This avoids stale rankings that can occur with lazy-loading (cache-aside), where the cache is only populated or refreshed on reads and can serve older data until expiration or explicit invalidation.
Combining Redis’s native sorted/ranking structures with write-through updates yields the least latency because the page widgets can fetch pre-ranked/pre-sorted results directly from Redis memory with minimal computation, while still maintaining freshness as new plays, likes, or album changes occur. Operationally, this pattern reduces database load, avoids repeated sorting work in the application, and provides consistently fast reads for a high-traffic page.
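A toy, in-memory illustration of the pattern (no Redis required): every write goes to both the system of record and a sorted-set-style cache, so top-N reads are always fresh, mimicking Redis ZADD/ZREVRANGE with write-through:

```python
class WriteThroughRanking:
    """Minimal stand-in for a Redis sorted set maintained with write-through."""

    def __init__(self):
        self.database = {}  # system of record: song -> play count
        self.cache = {}     # "sorted set": kept in sync on every write

    def record_play(self, song, plays):
        # Write-through: update the database AND the cache in the same
        # operation, so rankings never go stale (unlike lazy loading,
        # where the cache is refreshed only on read misses).
        self.database[song] = plays
        self.cache[song] = plays  # Redis equivalent: ZADD rankings plays song

    def top(self, n):
        # Redis equivalent: ZREVRANGE rankings 0 n-1
        return sorted(self.cache, key=self.cache.get, reverse=True)[:n]
```

In Redis itself the top() work is done natively in memory by the sorted set, which is why the widgets see the least latency.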
An application is using Amazon Cognito user pools and identity pools for secure access. A developer wants to integrate the user-specific file upload and download features in the application with Amazon S3. The developer must ensure that the files are saved and retrieved in a secure manner and that users can access only their own files. The file sizes range from 3 KB to 300 MB.
Which option will meet these requirements with the HIGHEST level of security?
- A . Use S3 Event Notifications to validate the file upload and download requests and update the user interface (UI).
- B . Save the details of the uploaded files in a separate Amazon DynamoDB table. Filter the list of files in the user interface (UI) by comparing the current user ID with the user ID associated with the file in the table.
- C . Use Amazon API Gateway and an AWS Lambda function to upload and download files. Validate each request in the Lambda function before performing the requested operation.
- D . Use an IAM policy within the Amazon Cognito identity prefix to restrict users to use their own folders in Amazon S3.
D
Explanation:
https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-integrating-user-pools-with-identity-pools.html
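A sketch of the per-user S3 policy (the bucket name is a placeholder): the ${cognito-identity.amazonaws.com:sub} IAM policy variable resolves to the caller's Cognito identity ID at request time, so each authenticated user can reach only their own prefix:

```python
def per_user_s3_policy(bucket):
    """Build an IAM policy document that confines each Cognito
    identity to a folder named after its own identity ID."""
    sub = "${cognito-identity.amazonaws.com:sub}"  # substituted by IAM per request
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/{sub}/*"],
        }],
    }
```

Because enforcement happens in IAM rather than in application code, there is no custom validation layer to get wrong, which is why this is the highest-security option.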
A company stores its data in data tables in a series of Amazon S3 buckets. The company received an alert that customer credit card information might have been exposed in a data table on one of the company’s public applications. A developer needs to identify all potential exposures within the application environment.
Which solution will meet these requirements?
- A . Use Amazon Athena to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Personal finding type.
- B . Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type.
- C . Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Personal finding type.
- D . Use Amazon Athena to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type.
B
Explanation:
The requirement is to identify all potential exposures of customer credit card information stored in Amazon S3. Amazon Macie is the purpose-built tool for this: it runs automated sensitive data discovery jobs against S3 buckets and detects financial information, personally identifiable information (PII), credentials, and other sensitive data types. Credit card numbers fall under the financial category, so the relevant finding type is SensitiveData:S3Object/Financial. Amazon Athena is a query service, not a sensitive data discovery tool, and the Personal finding type covers PII rather than financial data. Therefore, option B is correct: run a Macie job on the affected buckets and filter the findings by SensitiveData:S3Object/Financial.
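A sketch of the corresponding findings filter (job and bucket setup omitted); it mirrors the findingCriteria shape used by Macie's ListFindings API, e.g. boto3's macie2 client.list_findings:

```python
def financial_findings_criteria():
    """FindingCriteria for Macie ListFindings: keep only financial-data
    findings, the category that credit card numbers fall under."""
    return {
        "criterion": {
            "type": {"eq": ["SensitiveData:S3Object/Financial"]},
        },
    }
```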