Practice Free DVA-C02 Exam Online Questions
A company is developing a serverless application that requires storage of sensitive API keys as environment variables for various services. The application requires the automatic rotation of the encryption keys every year.
Which solution will meet these requirements with no development effort?
- A . Encrypt the environment variables by using AWS Secrets Manager. Set up automatic rotation in Secrets Manager.
- B . Encrypt the environment variables by using AWS Key Management Service (AWS KMS) customer managed keys. Enable automatic key rotation.
- C . Encrypt the environment variables by using AWS Key Management Service (AWS KMS) AWS managed keys. Configure a custom AWS Lambda function to automate key rotation.
- D . Encrypt the environment variables by using AWS Systems Manager Parameter Store. Set up automatic rotation in Parameter Store.
B
Explanation:
The requirement is automatic rotation of the encryption keys every year with no development effort. AWS KMS customer managed keys support automatic key rotation: when enabled, AWS rotates the key material once a year with no code or operational work. Environment variables can be encrypted with a customer managed key, so option B meets the requirement directly. Secrets Manager (A) rotates secret values, not encryption keys, and typically requires a rotation Lambda function. Option C proposes a custom Lambda function for rotation, which contradicts the no-development-effort requirement. Parameter Store (D) does not provide a built-in rotation feature.
A company is launching a photo sharing application on AWS. Users use the application to upload images to an Amazon S3 bucket. When users upload images, an AWS Lambda function creates thumbnail versions of the images and stores the thumbnail versions in another S3 bucket.
During development, a developer notices that the Lambda function takes more than 2 minutes to complete the thumbnail process. The company needs all images to be processed in less than 30 seconds.
What should the developer do to meet these requirements?
- A . Increase the virtual CPUs (vCPUs) for the Lambda function to use 10 vCPUs.
- B . Change the Lambda function instance type to use m6a.4xlarge.
- C . Configure the Lambda function to increase the amount of memory.
- D . Configure burstable performance for the Lambda function.
C
Explanation:
AWS Lambda allocates CPU, network throughput, and other resources proportionally to the function’s configured memory. If a Lambda function is CPU-bound (common for image manipulation such as resizing and thumbnail generation) or limited by I/O throughput, increasing the configured memory often improves performance because it also increases the amount of CPU available to the function. This is a standard Lambda optimization technique: tune memory to reduce runtime and achieve a lower latency target.
In this scenario, the thumbnail generation is taking more than 2 minutes, but the requirement is to process each image in under 30 seconds. Without changing the code, the most direct lever is to increase the function’s memory setting and retest. As memory increases, Lambda provides more compute power, which can significantly speed up CPU-intensive libraries (for example, image decoding, resizing, encoding) and reduce overall execution time. Many workloads show near-linear improvements until other bottlenecks (such as downstream service latency) dominate.
Options A and B are not applicable to Lambda in this way. Lambda does not let you directly select a fixed number of vCPUs or choose an EC2 instance type such as m6a.4xlarge for a function; those are EC2 concepts, not Lambda configuration parameters.
Option D (“burstable performance”) is also not a Lambda configuration model; burstable performance applies to certain EC2 instance families, not Lambda’s managed execution environment.
Therefore, the correct action is C: increase the Lambda function’s memory allocation (which also increases available CPU and throughput). This is the AWS-supported way to improve performance for compute-intensive Lambda functions and is the most likely to bring processing time down toward the 30-second requirement without code changes.
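The memory-to-CPU proportionality above can be made concrete with a rough model. It relies on the documented figure that a function has the equivalent of 1 vCPU at 1,769 MB; the linear-speedup assumption is an idealization that holds only for fully CPU-bound workloads.

```python
# Rough model of Lambda's memory-to-CPU scaling. Assumes a fully CPU-bound
# workload (an idealization); real speedups flatten once other bottlenecks
# such as network I/O dominate.

def estimated_vcpus(memory_mb: int) -> float:
    """Approximate vCPU share Lambda allocates for a given memory setting,
    based on 1 vCPU at 1,769 MB."""
    return memory_mb / 1769

def estimated_runtime_seconds(baseline_seconds: float,
                              baseline_memory_mb: int,
                              new_memory_mb: int) -> float:
    """Idealized runtime if the workload scales linearly with CPU."""
    return baseline_seconds * baseline_memory_mb / new_memory_mb

# A 128 MB function taking 130 s could, in this idealized model, finish in
# well under 30 s at 1,769 MB:
print(round(estimated_vcpus(1769), 2))                      # 1.0
print(round(estimated_runtime_seconds(130, 128, 1769), 1))  # 9.4
```

In practice the developer would raise the memory setting incrementally and measure, rather than trust a linear model.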
An ecommerce company manages its application’s infrastructure by using AWS Elastic Beanstalk. A developer wants to deploy the new version of the application with the least possible application downtime. The developer also must minimize the application’s rollback time if there are issues with the deployment.
Which approach will meet these requirements?
- A . Use a rolling deployment to deploy the new version.
- B . Use a rolling deployment with additional batches to deploy the new version.
- C . Use an all-at-once deployment to deploy the new version.
- D . Deploy the new version to a new environment. Use a blue/green deployment.
D
Explanation:
The goal is minimal downtime during deployment and fast rollback if problems occur. In Elastic Beanstalk, the approach that best satisfies both is a blue/green deployment.
With blue/green, the developer deploys the new application version to a separate environment (green) while the current production environment (blue) continues serving traffic. Once validation is complete (health checks, smoke tests), the developer performs a controlled CNAME swap (or routing switch) so traffic shifts from blue to green with near-zero downtime. This reduces risk because the existing environment remains intact and can immediately resume traffic if issues are detected.
Rollback time is also minimized because reverting is simply switching traffic back to the original environment (swap back), rather than rolling back changes across the same environment.
Why the other options are weaker:
All-at-once (C) introduces the highest downtime and risk because it replaces all instances at once, potentially taking the application fully offline and making rollback slower.
Rolling (A) reduces downtime compared to all-at-once, but instances are still updated in-place; if issues appear mid-deployment, rollback is more complex and slower than swapping environments.
Rolling with additional batches (B) can further reduce capacity impact by adding extra instances during deployment, but rollback still involves reversing updates in the same environment and is typically slower than blue/green.
Therefore, deploying to a new environment and using blue/green provides the least downtime and the fastest rollback.
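The rollback argument can be sketched as a toy model: in blue/green, traffic routing is a pointer swap rather than a redeployment, so reverting is the same cheap operation in reverse. The environment and CNAME names below are hypothetical, and the function only models Elastic Beanstalk's swap-environment-CNAMEs operation.

```python
# Toy model of blue/green cutover and rollback: the production CNAME is a
# pointer to an environment, and both cutover and rollback are pointer swaps.
# No instances are rebuilt in either direction.

routing = {"myapp.example.com": "blue-env"}  # CNAME -> environment

def swap(routing: dict, cname: str, target_env: str) -> None:
    """Point the production CNAME at another environment (models the
    Elastic Beanstalk CNAME swap)."""
    routing[cname] = target_env

# Cut over to the new version:
swap(routing, "myapp.example.com", "green-env")
print(routing["myapp.example.com"])  # green-env

# Rollback is just swapping back; blue-env was never modified:
swap(routing, "myapp.example.com", "blue-env")
print(routing["myapp.example.com"])  # blue-env
```

Contrast this with rolling deployments, where rollback means re-deploying the old version across instances that have already been updated in place.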
A developer has an application that stores data in an Amazon S3 bucket. The application uses an HTTP API to store and retrieve objects. When the PutObject API operation adds objects to the S3 bucket the developer must encrypt these objects at rest by using server-side encryption with Amazon S3 managed keys (SSE-S3).
Which solution will meet this requirement?
- A . Create an AWS Key Management Service (AWS KMS) key. Assign the KMS key to the S3 bucket.
- B . Set the x-amz-server-side-encryption header when invoking the PutObject API operation.
- C . Provide the encryption key in the HTTP header of every request.
- D . Apply TLS to encrypt the traffic to the S3 bucket.
B
Explanation:
Amazon S3 supports server-side encryption, which encrypts data at rest on the server that stores the data. One of the encryption options is SSE-S3, which uses keys managed by S3. To use SSE-S3, the x-amz-server-side-encryption header must be set to AES256 when invoking the PutObject API operation. This instructs S3 to encrypt the object data with SSE-S3 before saving it on disks in its data centers and decrypt it when it is downloaded.
Reference: Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-S3)
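As a concrete illustration, the snippet below shows the SSE-S3 request settings in both forms: the raw REST header named in the explanation, and the boto3-style `ServerSideEncryption` parameter that maps to it. The bucket name and object key are hypothetical, and no request is actually sent.

```python
# SSE-S3 is requested per upload via the x-amz-server-side-encryption header
# set to AES256. boto3's put_object exposes this as ServerSideEncryption.

def sse_s3_put_headers() -> dict:
    """Headers for a raw REST PutObject call that requests SSE-S3."""
    return {
        "x-amz-server-side-encryption": "AES256",  # SSE-S3 (S3-managed keys)
    }

# boto3-style equivalent (parameters only; hypothetical bucket and key):
put_object_params = {
    "Bucket": "example-bucket",
    "Key": "reports/data.json",
    "Body": b"example payload",
    "ServerSideEncryption": "AES256",  # maps to x-amz-server-side-encryption
}

print(sse_s3_put_headers()["x-amz-server-side-encryption"])  # AES256
```

Note that options A and C describe SSE-KMS and SSE-C respectively, which use different headers, and option D (TLS) encrypts data in transit, not at rest.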
A developer is automating a new application deployment with AWS SAM. The new application has one AWS Lambda function and one Amazon S3 bucket. The Lambda function must access the S3 bucket to only read objects.
How should the developer configure AWS SAM to grant the necessary read permission to the S3 bucket?
- A . Reference a second Lambda authorizer function.
- B . Add a custom S3 bucket policy to the Lambda function.
- C . Create an Amazon SQS topic for only S3 object reads. Reference the topic in the template.
- D . Add the S3ReadPolicy template to the Lambda function’s execution role.
D
Explanation:
Step-by-Step Breakdown:
Requirement Summary:
Use AWS SAM to deploy:
1 Lambda Function
1 S3 Bucket
Lambda needs read-only access to the S3 bucket
Solution must be expressed via AWS SAM template
Option A: Reference a second Lambda authorizer function
Incorrect: Lambda authorizers are used in API Gateway for authentication, not for granting S3 permissions.
Option B: Add a custom S3 bucket policy to the Lambda function
Incorrect: Bucket policies control who can access the bucket, not what the Lambda function can do.
The permission must be granted to the Lambda’s IAM execution role.
Option C: Create an Amazon SQS topic for only S3 object reads
Incorrect: SQS cannot read objects from S3 and is not relevant to this scenario.
Option D: Add the S3ReadPolicy template to the Lambda function’s execution role
Correct: AWS SAM provides managed policy templates, such as S3ReadPolicy, that grant scoped read-only access to a named S3 bucket.
You can apply these to the Lambda’s execution role using the Policies: section in your SAM template.
Example SAM YAML:

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: my-code/
      Handler: app.handler
      Runtime: python3.11
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref MyBucket
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-personal-bucket-name
```
SAM Policy Templates: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html
Example using S3ReadPolicy: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html#s3-readpolicy
A company has a large amount of data in an Amazon DynamoDB table. A large batch of data is appended to the table once each day. The company wants a solution that will make all the existing and future data in DynamoDB available for analytics on a long-term basis.
Which solution meets these requirements with the LEAST operational overhead?
- A . Configure DynamoDB incremental exports to Amazon S3.
- B . Configure Amazon DynamoDB Streams to write records to Amazon S3.
- C . Configure Amazon EMR to copy DynamoDB data to Amazon S3.
- D . Configure Amazon EMR to copy DynamoDB data to Hadoop Distributed File System (HDFS).
A
Explanation:
Why Option A is Correct: DynamoDB supports incremental exports to Amazon S3 natively, making data analytics-ready with minimal operational overhead.
Why Other Options are Incorrect:
Option B: DynamoDB Streams require additional processing logic to write to S3, increasing complexity.
Option C & D: Using EMR for data movement adds unnecessary operational overhead compared to native exports.
Reference: AWS documentation, DynamoDB table export to Amazon S3
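To make the "native, low-overhead" point concrete, the sketch below builds the parameters a daily incremental export job would pass to DynamoDB's ExportTableToPointInTime API with `ExportType` set to `INCREMENTAL`. The table ARN and bucket name are hypothetical, and the request is only constructed here, not sent to AWS.

```python
# Parameters for a daily incremental export to S3 (ExportTableToPointInTime
# with ExportType INCREMENTAL). Built locally; no AWS call is made.
from datetime import datetime, timedelta, timezone

def incremental_export_params(table_arn: str, bucket: str,
                              window_end: datetime) -> dict:
    """Request covering the 24 hours of changes before window_end."""
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "ExportFormat": "DYNAMODB_JSON",
        "ExportType": "INCREMENTAL",
        "IncrementalExportSpecification": {
            "ExportFromTime": window_end - timedelta(hours=24),
            "ExportToTime": window_end,
            "ExportViewType": "NEW_AND_OLD_IMAGES",
        },
    }

params = incremental_export_params(
    "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",  # hypothetical
    "analytics-export-bucket",                               # hypothetical
    datetime(2024, 1, 2, tzinfo=timezone.utc),
)
print(params["ExportType"])  # INCREMENTAL
```

Because the export is a managed feature, a daily scheduled call with a sliding 24-hour window is the entire pipeline; there are no streams consumers or EMR clusters to operate.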
A company introduced a new feature that should be accessible to only a specific group of premium customers. A developer needs the ability to turn the feature on and off in response to performance and feedback. The developer needs a solution to validate and deploy these configurations quickly without causing any disruptions.
What should the developer do to meet these requirements?
- A . Use AWS AppConfig to manage the feature configuration and to validate and deploy changes. Use feature flags to turn the feature on and off.
- B . Use AWS Secrets Manager to securely manage and validate the feature configurations. Enable lifecycle rules to turn the feature on and off.
- C . Use AWS Config to manage the feature configuration and validation. Set up AWS Config rules to turn the feature on and off based on predefined conditions.
- D . Use AWS Systems Manager Parameter Store to store and validate the configuration settings for the feature. Enable lifecycle rules to turn the feature on and off.
A
Explanation:
This is a feature flag / dynamic configuration requirement: enable a new feature for a subset of users (premium customers), toggle it on/off quickly based on feedback/performance, and deploy configuration changes safely with validation and without disruption. AWS AppConfig (part of AWS Systems Manager) is designed for exactly this use case.
AWS AppConfig helps create, manage, and deploy application configurations, including feature flags, in a controlled way. It supports configuration validators (such as JSON schema or Lambda validators) to catch errors before deployment. It also supports controlled rollouts with deployment strategies to reduce blast radius (for example, gradual deployments). This allows rapid changes without redeploying code and minimizes disruption.
Option A aligns perfectly: use AppConfig feature flags to target premium users and toggle the feature quickly.
Option B is incorrect because Secrets Manager is for sensitive secrets (passwords, API keys), not feature flag management and progressive configuration deployments.
Option C is incorrect because AWS Config evaluates resource compliance and records configuration changes of AWS resources; it does not serve runtime feature flag delivery.
Option D (Parameter Store) can store configuration values, but it does not provide the same feature-flag-focused capabilities, validation/deployment strategies, and safe rollout controls that AppConfig provides. Also, “lifecycle rules” is not how Parameter Store toggles features.
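As an illustration of the consuming side, the sketch below evaluates a feature flag locally. The payload mimics the general JSON shape AppConfig feature flags return (a flag key with an `enabled` boolean plus attributes); the `premium_only` attribute, flag name, and tier values are assumptions for this example, not part of the question.

```python
# Evaluating a feature flag after fetching it from a configuration service.
# The JSON shape loosely follows AppConfig feature-flag payloads; the
# premium_only attribute is a hypothetical targeting rule.
import json

flag_payload = json.loads("""
{
  "new-photo-filter": {
    "enabled": true,
    "premium_only": true
  }
}
""")

def feature_enabled(flags: dict, name: str, customer_tier: str) -> bool:
    """Return whether the named feature is on for this customer's tier."""
    flag = flags.get(name, {})
    if not flag.get("enabled", False):
        return False  # global kill switch: flip "enabled" to turn off
    if flag.get("premium_only") and customer_tier != "premium":
        return False  # restricted to the premium customer group
    return True

print(feature_enabled(flag_payload, "new-photo-filter", "premium"))  # True
print(feature_enabled(flag_payload, "new-photo-filter", "free"))     # False
```

Turning the feature off for everyone is then a configuration deployment (flipping `enabled` to `false`) that AppConfig validates and rolls out, with no code change or redeployment.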
A developer is building an application that processes a stream of user-supplied data. The data stream must be consumed by multiple Amazon EC2-based processing applications in parallel and in real time. Each processor must be able to resume without losing data if there is a service interruption. The application architect plans to add other processors in the near future and wants to minimize the amount of data duplication involved.
Which solution will satisfy these requirements?
- A . Publish the data to Amazon SQS.
- B . Publish the data to Amazon Data Firehose.
- C . Publish the data to Amazon EventBridge.
- D . Publish the data to Amazon Kinesis Data Streams.
D
Explanation:
The requirements describe a real-time streaming use case where multiple processing applications must read the same stream in parallel, each must be able to resume processing after interruptions without data loss, and the system should support adding new processors with minimal duplication. Amazon Kinesis Data Streams (KDS) is purpose-built for this pattern.
With Kinesis Data Streams, producers write records to a stream that is split into shards. Multiple consumer applications can independently read from the stream using separate consumer groups (for example, using Kinesis Client Library). Each consumer maintains its own checkpoint (sequence number position), which allows it to resume from the last processed record after a restart or failure. This directly meets the “resume without losing data” requirement.
KDS also supports a configurable retention period (commonly 24 hours by default, extendable), enabling replay and recovery without forcing producers to re-send data or duplicating messages for each processor. Adding new consumers is straightforward: a new processor can begin reading the existing stream (from latest or from a trim horizon), which minimizes duplication compared to fan-out duplication patterns.
Why the others don’t fit:
SQS is a queue, not a stream. Multiple consumers typically compete for messages rather than all consuming the same messages independently, and duplication is required if multiple apps must process the same data.
Kinesis Data Firehose is primarily for delivery to destinations (S3, Redshift, OpenSearch) and not designed for multiple real-time parallel consumers with replay semantics.
EventBridge is event routing with limited replay/resume semantics compared to KDS and is not intended for high-throughput stream processing with per-consumer checkpoints.
Therefore, Amazon Kinesis Data Streams satisfies all requirements.
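The per-consumer checkpointing described above is the key property, and it can be sketched without AWS: each consumer records its own position in the shared stream, so a restarted consumer resumes from its own checkpoint. The in-memory "shard" and checkpoint store below stand in for a real Kinesis shard and the KCL's lease/checkpoint table.

```python
# Conceptual sketch of per-consumer checkpointing over a shared stream.
# All consumers read the same records; only their positions differ.

stream = [(seq, f"record-{seq}") for seq in range(5)]  # (sequence, data)
checkpoints: dict[str, int] = {}                       # consumer -> last seq

def consume(consumer_id: str, up_to: int) -> list[str]:
    """Read records after this consumer's checkpoint, then checkpoint."""
    start = checkpoints.get(consumer_id, -1) + 1
    processed = [data for seq, data in stream if start <= seq <= up_to]
    checkpoints[consumer_id] = up_to
    return processed

# Two applications read the same records independently:
consume("thumbnailer", up_to=2)  # processes records 0-2
consume("analytics", up_to=4)    # processes records 0-4

# After a crash, "thumbnailer" resumes from its own checkpoint, not from the
# start and not from where "analytics" left off:
print(consume("thumbnailer", up_to=4))  # ['record-3', 'record-4']
```

Adding a processor later is just registering a new consumer ID that starts reading the existing stream, which is why no data duplication is needed.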
A developer has observed an increase in bugs in the AWS Lambda functions that a development team has deployed in its Node.js application. To minimize these bugs, the developer wants to implement automated testing of the Lambda functions in an environment that closely simulates the Lambda environment.
The developer needs to give other developers the ability to run the tests locally. The developer also needs to integrate the tests into the team's continuous integration and continuous delivery (CI/CD) pipeline before the AWS Cloud Development Kit (AWS CDK) deployment.
Which solution will meet these requirements?
- A . Create sample events based on the Lambda documentation. Create automated test scripts that use the cdk local invoke command to invoke the Lambda functions. Check the response. Document the test scripts for the other developers on the team. Update the CI/CD pipeline to run the test scripts.
- B . Install a unit testing framework that reproduces the Lambda execution environment. Create sample events based on the Lambda documentation. Invoke the handler function by using the unit testing framework. Check the response. Document how to run the unit testing framework for the other developers on the team. Update the CI/CD pipeline to run the unit testing framework.
- C . Install the AWS Serverless Application Model (AWS SAM) CLI tool. Use the sam local generate-event command to generate sample events for the automated tests. Create automated test scripts that use the sam local invoke command to invoke the Lambda functions. Check the response. Document the test scripts for the other developers on the team. Update the CI/CD pipeline to run the test scripts.
- D . Create sample events based on the Lambda documentation. Create a Docker container from the Node.js base image to invoke the Lambda functions. Check the response. Document how to run the Docker container for the other developers on the team. Update the CI/CD pipeline to run the Docker container.
C
Explanation:
This solution meets the requirements by using the AWS SAM CLI, a command line tool that lets developers locally build, test, debug, and deploy serverless applications defined by AWS SAM templates. The developer can use the sam local generate-event command to generate sample events for different event sources such as API Gateway or S3, and can create automated test scripts that use the sam local invoke command to invoke the Lambda functions locally in an environment that closely simulates the Lambda environment. The developer can check the response from the Lambda functions and document how to run the test scripts for the other developers on the team. The developer can also update the CI/CD pipeline to run these test scripts before deploying with AWS CDK.
Option A is not optimal because it will use cdk local invoke command, which does not exist in AWS CDK CLI tool.
Option B is not optimal because it will use a unit testing framework that reproduces Lambda execution environment, which may not be accurate or consistent with Lambda environment.
Option D is not optimal because it will create a Docker container from Node.js base image to invoke Lambda functions, which may introduce additional overhead and complexity for creating and running Docker containers.
Reference: AWS Serverless Application Model (AWS SAM); AWS Cloud Development Kit (AWS CDK)
A company needs to rapidly prototype a web application. However, the company has not yet designed the complete architecture.
A developer uses AWS Lambda functions to build three endpoints. A frontend team wants to test the endpoints while the team prototypes the frontend.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Set up a Lambda function URL for each endpoint. Use the function URLs for testing.
- B . Set up an Amazon API Gateway REST API to have a Lambda proxy integration. Use the REST API endpoint URL for testing.
- C . Set up an AWS AppSync API to have a Lambda resolver. Use a GraphQL endpoint for testing.
- D . Set up an Amazon ECS container that runs an open source web proxy and Lambda code. Use the web proxy endpoint for testing.
A
Explanation:
For rapid prototyping with the least operational overhead, Lambda function URLs are the best fit. A Lambda function URL provides a built-in HTTPS endpoint directly in front of a Lambda function, allowing the frontend team to invoke each endpoint without requiring additional infrastructure. This is ideal when the overall architecture is not finalized and the team needs a quick way to test.
AWS describes Lambda function URLs as a simple way to “add an HTTPS endpoint to your Lambda function” without requiring API Gateway, load balancers, or custom proxy layers. This substantially reduces setup time and avoids the operational tasks associated with provisioning, configuring, and managing an API layer during early-stage development.
Option B (API Gateway REST API) introduces more operational overhead: creating resources, methods, integrations, deployments, stages, and potentially authorization and throttling configuration. API Gateway is powerful, but it is more than needed for quick endpoint testing.
Option C (AppSync) is designed for GraphQL-based APIs and requires schema design and resolvers.
That is unnecessary complexity for simply testing three Lambda-backed endpoints.
Option D (ECS proxy) adds the most operational burden because it requires container orchestration, networking, scaling, and patching, which is completely misaligned with "least operational overhead."
Therefore, the fastest and simplest approach is to create a function URL per Lambda endpoint, allowing immediate testing from the frontend with minimal additional AWS configuration.
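To show how little the frontend team needs, the sketch below builds an HTTPS call to a function URL. The URL follows the documented `https://<url-id>.lambda-url.<region>.on.aws/` format, but the url-id and payload are hypothetical, and the request is only constructed here, not sent.

```python
# Building a POST to a Lambda function URL. With auth type NONE, this is
# all a prototype frontend needs: no API Gateway resources, stages, or
# deployments. The url-id below is a placeholder.
import json
import urllib.request

FUNCTION_URL = "https://abc123example.lambda-url.us-east-1.on.aws/"

def build_request(payload: dict) -> urllib.request.Request:
    """Build the request; urllib.request.urlopen(req) would send it."""
    return urllib.request.Request(
        FUNCTION_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request({"action": "list-photos"})  # hypothetical payload
print(req.get_method(), req.full_url)
```

Each of the three Lambda functions gets its own function URL, so the frontend can treat them as three independent REST-like endpoints during prototyping.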
