Practice Free DVA-C02 Exam Online Questions
A developer at a company recently created a serverless application to process and show data from business reports. The application’s user interface (UI) allows users to select and start processing the files. The UI displays a message when the result is available to view. The application uses AWS Step Functions with AWS Lambda functions to process the files. The developer used Amazon API Gateway and Lambda functions to create an API to support the UI.
The company’s UI team reports that the request to process a file often returns timeout errors because of the size or complexity of the files. The UI team wants the API to provide an immediate response so that the UI can display a message while the files are being processed. The backend process that is invoked by the API needs to send an email message when the report processing is complete.
What should the developer do to configure the API to meet these requirements?
- A . Change the API Gateway route to add an X-Amz-Invocation-Type header with a static value of ‘Event’ in the integration request. Deploy the API Gateway stage to apply the changes.
- B . Change the configuration of the Lambda function that implements the request to process a file. Configure the maximum age of the event so that the Lambda function will run asynchronously.
- C . Change the API Gateway timeout value to match the Lambda function timeout value. Deploy the API Gateway stage to apply the changes.
- D . Change the API Gateway route to add an X-Amz-Target header with a static value of ‘Async’ in the integration request. Deploy the API Gateway stage to apply the changes.
A
Explanation:
This solution allows the API to invoke the Lambda function asynchronously, which means that the API will return an immediate response without waiting for the function to complete. The X-Amz-Invocation-Type header specifies the invocation type of the Lambda function, and setting it to ‘Event’ means that the function will be invoked asynchronously. The function can then use Amazon Simple Email Service (SES) to send an email message when the report processing is complete.
Reference: [Asynchronous invocation], [Set up Lambda proxy integrations in API Gateway]
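As a sketch of how this header mapping could be set programmatically, the request parameters below show the shape of an API Gateway non-proxy Lambda integration whose `X-Amz-Invocation-Type` header carries the static value `'Event'` (static values are wrapped in single quotes in the mapping). The API, resource, and method identifiers are hypothetical placeholders, not values from the question.

```python
import json

# Sketch: integration settings for asynchronous Lambda invocation through
# API Gateway. All IDs below are hypothetical placeholders.
integration_params = {
    "restApiId": "a1b2c3d4e5",       # hypothetical REST API ID
    "resourceId": "resource123",     # hypothetical resource ID
    "httpMethod": "POST",
    "type": "AWS",                   # non-proxy Lambda integration
    "integrationHttpMethod": "POST",
    # Static header values are expressed wrapped in single quotes.
    "requestParameters": {
        "integration.request.header.X-Amz-Invocation-Type": "'Event'"
    },
}

print(json.dumps(integration_params["requestParameters"], indent=2))
# With boto3, this dict would be passed to:
#   boto3.client("apigateway").put_integration(**integration_params)
```

After changing the integration, the stage must still be redeployed for the header mapping to take effect, as the answer notes.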
A developer is building an application that needs to store an API key. An AWS Lambda function needs to use the API key. The developer’s company requires secrets to be encrypted at rest by an AWS KMS key. The company must control key rotation.
Which solutions will meet these requirements? (Select TWO.)
- A . Store the API key as an AWS Secrets Manager secret. Encrypt the secret with an AWS managed KMS key.
- B . Store the API key as an AWS Systems Manager Parameter Store String parameter.
- C . Store the API key as an AWS Systems Manager Parameter Store SecureString parameter. Encrypt the parameter with a customer managed KMS key.
- D . Store the API key in a Lambda environment variable. Encrypt the environment variable with an AWS managed KMS key.
- E . Store the API key in a Lambda environment variable. Encrypt the environment variable with a customer managed KMS key.
C, E
Explanation:
The requirements are: (1) store an API key that a Lambda function will use, (2) ensure the secret is encrypted at rest using AWS KMS, and (3) the company must control key rotation, which implies using a customer managed KMS key (CMK) rather than an AWS managed key.
Option C meets the requirements by storing the API key in AWS Systems Manager Parameter Store as a SecureString parameter encrypted with a customer managed KMS key. SecureString is designed for sensitive configuration data and integrates with KMS so the organization can choose the CMK, manage its lifecycle, and control rotation policies. The Lambda function can retrieve the parameter at runtime using the AWS SDK, and IAM policies can tightly control access to the parameter and to the KMS key.
Option E also meets the requirements by storing the API key in a Lambda environment variable encrypted with a customer managed KMS key. Lambda encrypts environment variables at rest and allows you to specify a customer managed KMS key for encryption. This gives the company control over key rotation and key policy, satisfying the “must control key rotation” requirement. The function reads the value from the environment at runtime without additional network calls.
Why the other options fail:
A uses an AWS managed KMS key, which does not satisfy the requirement for the company to control rotation (you cannot manage rotation of AWS managed keys in the same way).
B is a plain String parameter, which is not encrypted as a secret and does not meet the at-rest encryption requirement.
D uses an AWS managed KMS key, again failing the company-controlled rotation requirement.
Therefore, the two valid solutions are C (Parameter Store SecureString with a customer managed CMK) and E (Lambda environment variable encrypted with a customer managed CMK).
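To make option C concrete, the dictionaries below sketch the boto3 `ssm` request parameters for writing the API key as a SecureString encrypted with a customer managed key and for reading it back decrypted from the Lambda function. The parameter name, key alias, and value are hypothetical.

```python
# Sketch: Parameter Store SecureString with a customer managed KMS key.
# The parameter name, value, and key alias are hypothetical placeholders.
put_params = {
    "Name": "/myapp/api-key",             # hypothetical parameter name
    "Value": "example-api-key-value",     # placeholder secret value
    "Type": "SecureString",               # encrypted at rest via KMS
    "KeyId": "alias/myapp-secrets",       # customer managed KMS key
}

get_params = {
    "Name": "/myapp/api-key",
    "WithDecryption": True,  # Parameter Store decrypts via KMS on read
}

# With boto3, the calls would be:
#   ssm = boto3.client("ssm")
#   ssm.put_parameter(**put_params)      # done once, outside the function
#   value = ssm.get_parameter(**get_params)["Parameter"]["Value"]
```

The Lambda execution role needs `ssm:GetParameter` on the parameter and `kms:Decrypt` on the customer managed key for the read to succeed.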
A developer is building a web application that uses Amazon API Gateway to expose an AWS Lambda function to process requests from clients. During testing, the developer notices that the API Gateway times out even though the Lambda function finishes under the set time limit.
Which of the following API Gateway metrics in Amazon CloudWatch can help the developer troubleshoot the issue? (Choose two.)
- A . CacheHitCount
- B . IntegrationLatency
- C . CacheMissCount
- D . Latency
- E . Count
B,D
Explanation:
Amazon API Gateway is a service that enables developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon CloudWatch is a service that monitors AWS resources and applications. API Gateway provides several CloudWatch metrics to help developers troubleshoot issues with their APIs. Two of the metrics that can help the developer troubleshoot the issue of API Gateway timing out are:
IntegrationLatency: This metric measures the time between when API Gateway relays a request to the backend and when it receives a response from the backend. A high value for this metric indicates that the backend is taking too long to respond and may cause API Gateway to time out.
Latency: This metric measures the time between when API Gateway receives a request from a client and when it returns a response to the client. A high value for this metric indicates that either the integration latency is high or API Gateway is taking too long to process the request or response.
Reference: [What Is Amazon API Gateway? – Amazon API Gateway]
[Amazon API Gateway Metrics and Dimensions – Amazon CloudWatch]
[Troubleshooting API Errors – Amazon API Gateway]
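To compare the two metrics in practice, the helper below sketches the CloudWatch query parameters for `IntegrationLatency` and `Latency` over the last hour. The `ApiName` dimension value is a hypothetical placeholder; a large gap between the two averages points at overhead inside API Gateway rather than the backend.

```python
import datetime

# Sketch: CloudWatch query parameters for the two API Gateway metrics.
# The ApiName dimension value is a hypothetical placeholder.
now = datetime.datetime.now(datetime.timezone.utc)

def metric_query(metric_name):
    """Build get_metric_statistics parameters for one API Gateway metric."""
    return {
        "Namespace": "AWS/ApiGateway",
        "MetricName": metric_name,
        "Dimensions": [{"Name": "ApiName", "Value": "my-api"}],  # hypothetical
        "StartTime": now - datetime.timedelta(hours=1),
        "EndTime": now,
        "Period": 60,
        "Statistics": ["Average", "Maximum"],
    }

queries = [metric_query("IntegrationLatency"), metric_query("Latency")]
print([q["MetricName"] for q in queries])
# With boto3: boto3.client("cloudwatch").get_metric_statistics(**queries[0])
```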
A developer has an application that asynchronously invokes an AWS Lambda function. The developer wants to store messages that resulted in failed invocations of the Lambda function so that the application can retry the call later.
What should the developer do to accomplish this goal with the LEAST operational overhead?
- A . Set up CloudWatch Logs log groups to filter and store the messages in an S3 bucket. Import the messages in Lambda. Run the Lambda function again.
- B . Configure Amazon EventBridge to send the messages to Amazon SNS to initiate the Lambda function again.
- C . Implement a dead-letter queue for discarded messages. Set the dead-letter queue as an event source for the Lambda function.
- D . Send Amazon EventBridge events to an Amazon SQS queue. Configure the Lambda function to pull messages from the SQS queue. Run the Lambda function again.
C
Explanation:
For asynchronous Lambda invocations, AWS provides built-in failure handling options that require minimal code and minimal operational work. When an async invocation fails (after Lambda’s internal retry behavior), the event can be sent to a dead-letter queue (DLQ) so it is not lost. A DLQ is the standard mechanism for capturing events that could not be processed successfully, preserving the original payload for later inspection and reprocessing.
Using a DLQ (typically Amazon SQS or Amazon SNS) gives durable storage of failed events and decouples failure recovery from the main execution path. The developer can later re-drive the messages by building a simple replay process, such as moving messages from the DLQ back to the original event source or invoking the function again with the stored payloads. This aligns with the requirement: “store messages that resulted in failed invocations so the application can retry later.”
Among the options, C is the only one that uses Lambda’s native async failure capture mechanism.
Options A and B introduce unnecessary complexity and do not reliably store the original failed invocation payloads as a first-class workflow (CloudWatch Logs is not a message queue, and EventBridge→SNS doesn’t automatically capture only failed events from Lambda async invocations).
Option D changes the architecture to an SQS pull model for the Lambda function; while valid, it is more than needed and adds design/operational considerations (polling, batching, visibility timeouts, DLQ configuration on the queue, etc.) compared with simply enabling DLQ support for async invokes.
Therefore, the least operational overhead solution is C: configure a dead-letter queue for the Lambda function’s asynchronous invocations so failed events are stored for later retry.
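As a sketch of how the DLQ would be attached, the dictionary below shows the boto3 parameters for pointing a function's asynchronous dead-letter config at an SQS queue. The function name and queue ARN are hypothetical placeholders.

```python
# Sketch: attach an SQS dead-letter queue for failed asynchronous
# invocations. The function name and queue ARN are hypothetical.
dlq_config = {
    "FunctionName": "process-report",  # hypothetical function name
    "DeadLetterConfig": {
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:report-dlq"
    },
}
# With boto3:
#   boto3.client("lambda").update_function_configuration(**dlq_config)
# The function's execution role also needs sqs:SendMessage on the queue
# so Lambda can deliver the failed event payloads.
```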
A developer registered an AWS Lambda function as a target for an Application Load Balancer (ALB) using a CLI command. However, the Lambda function is not being invoked when the client sends requests through the ALB.
Why is the Lambda function not being invoked?
- A . A Lambda function cannot be registered as a target for an ALB.
- B . A Lambda function can be registered with an ALB using AWS Management Console only.
- C . The permissions to invoke the Lambda function are missing.
- D . Cross-zone is not enabled on the ALB.
C
Explanation:
An Application Load Balancer can invoke Lambda functions as targets, but ALB must be explicitly granted permission to invoke the function. This is done by adding a resource-based permission to the Lambda function policy that allows the Elastic Load Balancing service principal to call lambda:InvokeFunction, typically scoped to the specific target group ARN.
If the developer registers the function as a target but does not add the required permission, the ALB will not be able to invoke the function when requests arrive. This is a common integration oversight: the target registration succeeds, but invocation fails because Lambda denies the call.
Option A is incorrect because ALB does support Lambda targets.
Option B is incorrect because Lambda targets can be configured through the console, CLI, or APIs.
Option D is unrelated: cross-zone load balancing affects distribution of traffic across AZs for instance/IP targets, not whether Lambda invocation is authorized.
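The missing permission can be granted with `lambda add-permission`; the dictionary below sketches the request, with a hypothetical function name and target group ARN standing in for real resources.

```python
# Sketch: resource-based permission allowing Elastic Load Balancing to
# invoke the function. Function name and target group ARN are hypothetical.
permission = {
    "FunctionName": "alb-backend",                  # hypothetical
    "StatementId": "AllowInvokeFromALB",
    "Action": "lambda:InvokeFunction",
    "Principal": "elasticloadbalancing.amazonaws.com",
    "SourceArn": (
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/my-targets/1234567890abcdef"   # hypothetical ARN
    ),
}
# With boto3: boto3.client("lambda").add_permission(**permission)
```

Scoping `SourceArn` to the target group ARN limits the grant to that one ALB integration instead of any load balancer in the service.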
A developer is creating an AWS Lambda function that consumes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The developer notices that the Lambda function processes some messages multiple times.
How should the developer resolve this issue MOST cost-effectively?
- A . Change the Amazon SQS standard queue to an Amazon SQS FIFO queue by using the Amazon SQS message deduplication ID.
- B . Set up a dead-letter queue.
- C . Set the maximum concurrency limit of the AWS Lambda function to 1.
- D . Change the message processing to use Amazon Kinesis Data Streams instead of Amazon SQS.
A
Explanation:
Amazon Simple Queue Service (Amazon SQS) is a fully managed queue service that allows you to decouple and scale applications. Amazon SQS offers two types of queues: Standard and FIFO (First-In-First-Out) queues. A FIFO queue uses the MessageDeduplicationId property to treat messages with the same value as duplicates and deliver them only once. Therefore, changing the Amazon SQS standard queue to an Amazon SQS FIFO queue and using the message deduplication ID resolves the issue of the Lambda function processing some messages multiple times, so option A is correct.
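The send-side parameters below sketch how deduplication is expressed on a FIFO queue; the queue URL, group ID, and deduplication ID are hypothetical placeholders. Duplicates sharing a deduplication ID within the 5-minute deduplication interval are accepted but not delivered again.

```python
# Sketch: send_message parameters for a FIFO queue with an explicit
# deduplication ID. The queue URL and IDs are hypothetical placeholders.
message = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    "MessageBody": '{"orderId": "1001"}',
    "MessageGroupId": "orders",        # required for FIFO queues
    "MessageDeduplicationId": "1001",  # same ID within 5 minutes -> dropped
}
# With boto3: boto3.client("sqs").send_message(**message)
```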
A company wants to automate part of its deployment process. A developer needs to automate the process of checking for and deleting unused resources that supported previously deployed stacks but that are no longer used.
The company has a central application that uses the AWS Cloud Development Kit (AWS CDK) to manage all deployment stacks. The stacks are spread out across multiple accounts. The developer’s solution must integrate as seamlessly as possible within the current deployment process.
Which solution will meet these requirements with the LEAST amount of configuration?
- A . In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CloudFormation template from a JSON file. Use the template to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
- B . In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CDK custom resource. Use the custom resource to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
- C . In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an API in AWS Amplify. Use the API to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
- D . In the AWS Lambda console, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CDK custom resource. Use the custom resource to import the Lambda function into the stack and to invoke the Lambda function when the deployment stack runs.
B
Explanation:
This solution meets the requirements with the least amount of configuration because it uses a feature of AWS CDK that allows custom logic to run during stack deployment or deletion. The AWS Cloud Development Kit (AWS CDK) is a software development framework that allows you to define cloud infrastructure as code and provision it through CloudFormation. An AWS CDK custom resource is a construct for creating resources that CloudFormation does not natively support, or for performing tasks during stack deployment or deletion that CloudFormation cannot perform itself. The developer can write a handler function that uses AWS SDK calls to check for and delete unused resources, then create an AWS CDK custom resource that attaches the function code to a Lambda function and invokes it when the deployment stack runs. This way, the developer can automate the cleanup process without additional configuration or integration.
The other options all require extra configuration and integration with the central AWS CDK application: creating a CloudFormation template from a JSON file bypasses the CDK workflow, an AWS Amplify API introduces an unrelated service, and writing the handler function in the Lambda console separates the function code from the CDK-managed deployment.
Reference: [AWS Cloud Development Kit (CDK)], [Custom Resources]
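The handler behind such a custom resource must reply to CloudFormation using the custom resource response protocol (an HTTP PUT of a JSON document to the pre-signed `ResponseURL` in the event). The sketch below builds that response document after the cleanup work has run; the sample event fields are hypothetical placeholders matching the protocol's field names.

```python
import json

# Sketch: the response body a Lambda-backed custom resource handler sends
# back to CloudFormation via event["ResponseURL"]. Field names follow the
# custom resource protocol; the sample values are hypothetical.
def build_response(event, status="SUCCESS"):
    """Build the custom resource response after the cleanup logic has run."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "PhysicalResourceId": "cleanup-resource",
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }

sample_event = {  # hypothetical event fields
    "RequestType": "Delete",
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/demo/abc",
    "RequestId": "req-123",
    "LogicalResourceId": "CleanupResource",
}
print(json.dumps(build_response(sample_event)))
```

If the handler never sends this response, the stack operation hangs until CloudFormation times out, so responding on both success and failure paths matters.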
A developer is preparing to begin development of a new version of an application. The previous version of the application is deployed in a production environment. The developer needs to deploy fixes and updates to the current version during the development of the new version of the application. The code for the new version of the application is stored in AWS CodeCommit.
Which solution will meet these requirements?
- A . From the main branch, create a feature branch for production bug fixes. Create a second feature branch from the main branch for development of the new version.
- B . Create a Git tag of the code that is currently deployed in production. Create a Git tag for the development of the new version. Push the two tags to the CodeCommit repository.
- C . From the main branch, create a branch of the code that is currently deployed in production. Apply an IAM policy that ensures no other users can push or merge to the branch.
- D . Create a new CodeCommit repository for development of the new version of the application.
Create a Git tag for the development of the new version.
A
Explanation:
A feature branch is a branch that is created from the main branch to work on a specific feature or task. Feature branches allow developers to isolate their work from the main branch and avoid conflicts with other changes. Feature branches can be merged back to the main branch when the feature or task is completed and tested.
In this scenario, the developer needs to maintain two parallel streams of work: one for fixing and updating the current version of the application that is deployed in production, and another for developing the new version of the application. The developer can use feature branches to achieve this goal.
The developer can create a feature branch from the main branch for production bug fixes. This branch will contain the code that is currently deployed in production, and any fixes or updates that need to be applied to it. The developer can push this branch to the CodeCommit repository and use it to deploy changes to the production environment.
The developer can also create a second feature branch from the main branch for development of the new version of the application. This branch will contain the code that is under development for the new version, and any changes or enhancements that are part of it. The developer can push this branch to the CodeCommit repository and use it to test and deploy the new version of the application in a separate environment.
By using feature branches, the developer can keep the main branch stable and clean, and avoid mixing code from different versions of the application. The developer can also easily switch between branches and merge them when needed.
A developer is building an ecommerce application that uses AWS Lambda functions. Each Lambda function performs a specific step in a customer order workflow, such as order processing and inventory management. The developer must ensure that the Lambda functions run in a specific order.
Which solution will meet this requirement with the LEAST operational overhead?
- A . Configure an Amazon SQS queue to contain messages about each step that a Lambda function must perform. Configure the Lambda functions to run sequentially based on the order of messages in the SQS queue.
- B . Configure an Amazon SNS topic to contain notifications about each step that a Lambda function must perform. Subscribe the Lambda functions to the SNS topic. Use subscription filters based on the step that each Lambda function must perform.
- C . Configure an AWS Step Functions state machine to invoke the Lambda functions in a specific order.
- D . Configure Amazon EventBridge Scheduler schedules to invoke the Lambda functions in a specific order.
C
Explanation:
When multiple Lambda functions must execute in a defined sequence as part of a workflow (order processing → payment → inventory → fulfillment, etc.), the AWS service designed to coordinate and orchestrate serverless workflows is AWS Step Functions.
Option C is the least operational overhead because Step Functions provides a managed state machine that invokes Lambda functions in an explicit order with built-in support for retries, timeouts, error handling, branching, and state passing between steps. The developer defines the workflow declaratively (Amazon States Language) and Step Functions ensures the sequence is enforced consistently.
Option A (SQS) is not a workflow orchestrator. Ensuring strict sequencing would require custom coordination logic, state tracking, and careful handling of retries and ordering, which means more code and complexity (and standard SQS does not guarantee strict order).
Option B (SNS) fans out events and is not designed for sequential orchestration.
Option D (EventBridge Scheduler) can schedule invocations at times, but it does not coordinate multi-step workflows with dependencies and conditional transitions.
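As a minimal sketch of the declarative ordering Step Functions provides, the Amazon States Language definition below chains two of the workflow's Lambda functions with `Next`/`End`. The function ARNs and state names are hypothetical placeholders.

```python
import json

# Sketch: a minimal Amazon States Language definition that runs the order
# workflow's Lambda functions in a fixed sequence. ARNs are hypothetical.
state_machine = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
            "Next": "ManageInventory",  # enforced ordering between steps
        },
        "ManageInventory": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:manage-inventory",
            "End": True,
        },
    },
}
print(json.dumps(state_machine, indent=2))
```

Each Task state's output becomes the next state's input, so the functions can pass order data along the chain without any custom coordination code.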
A company is running Amazon EC2 instances in multiple AWS accounts. A developer needs to implement an application that collects all the lifecycle events of the EC2 instances. The application needs to store the lifecycle events in a single Amazon Simple Queue Service (Amazon SQS) queue in the company’s main AWS account for further processing.
Which solution will meet these requirements?
- A . Configure Amazon EC2 to deliver the EC2 instance lifecycle events from all accounts to the Amazon EventBridge event bus of the main account. Add an EventBridge rule to the event bus of the main account that matches all EC2 instance lifecycle events. Add the SQS queue as a target of the rule.
- B . Use the resource policies of the SQS queue in the main account to give each account permissions to write to that SQS queue. Add to the Amazon EventBridge event bus of each account an EventBridge rule that matches all EC2 instance lifecycle events. Add the SQS queue in the main account as a target of the rule.
- C . Write an AWS Lambda function that scans through all EC2 instances in the company accounts to detect EC2 instance lifecycle changes. Configure the Lambda function to write a notification message to the SQS queue in the main account if the function detects an EC2 instance lifecycle change. Add an Amazon EventBridge scheduled rule that invokes the Lambda function every minute.
- D . Configure the permissions on the main account event bus to receive events from all accounts. Create an Amazon EventBridge rule in each account to send all the EC2 instance lifecycle events to the main account event bus. Add an EventBridge rule to the main account event bus that matches all EC2 instance lifecycle events. Set the SQS queue as a target for the rule.
D
Explanation:
Amazon EC2 instances can send state-change notification events to Amazon EventBridge: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instance-state-changes.html
Amazon EventBridge can send and receive events between event buses in AWS accounts: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cross-account.html
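To make option D concrete, the sketch below shows the event pattern that matches EC2 state-change events (this pattern is documented) along with the two rule targets: the main account's event bus for the forwarding rules in each source account, and the SQS queue for the rule in the main account. The account IDs and ARNs are hypothetical placeholders.

```python
import json

# Sketch: EventBridge pieces for the cross-account solution.
# Account IDs and ARNs below are hypothetical placeholders.

# Pattern matching EC2 instance lifecycle (state-change) events.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
}

# Target of the rule in each source account: the main account's event bus.
forward_target = {
    "Arn": "arn:aws:events:us-east-1:111111111111:event-bus/default"
}

# Target of the rule in the main account's event bus: the SQS queue.
queue_target = {
    "Arn": "arn:aws:sqs:us-east-1:111111111111:lifecycle-events"
}

print(json.dumps(event_pattern))
```

The main account's event bus policy must also allow `events:PutEvents` from the source accounts, which is the "configure the permissions on the main account event bus" step in option D.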
