Practice Free DVA-C02 Exam Online Questions
A company recently deployed an AWS Lambda function. A developer notices an increase in the function throttle metrics in Amazon CloudWatch.
What are the MOST operationally efficient solutions to reduce the function throttling? (Select TWO.)
- A . Migrate the function to Amazon EKS.
- B . Increase the maximum age of events in Lambda.
- C . Increase the function’s reserved concurrency.
- D . Add the lambda:GetFunctionConcurrency action to the execution role.
- E . Request a service quota change for increased concurrency.
C,E
Explanation:
Lambda throttling occurs when the number of concurrent executions exceeds the available concurrency. This can happen due to account-level concurrency limits, function-level reserved concurrency limits, or sudden traffic spikes.
The most operationally efficient ways to reduce throttling are to increase available concurrency:
Option C: Increasing the function’s reserved concurrency can reduce throttling when the function is being constrained by a too-low reserved concurrency value. Reserved concurrency guarantees a fixed amount of concurrency for the function (and also caps it). If the cap is currently too low, raising it allows more parallel executions and reduces throttles.
Option E: If throttling is due to the account concurrency quota being reached (or burst scaling limits in some patterns), the correct fix is to request a service quota increase. This increases the total available concurrency capacity for the account (and/or specific dimensions), allowing more simultaneous executions across functions.
Why the others are not correct:
A (migrate to EKS) is high effort and not operationally efficient for solving Lambda throttling.
B (maximum age of events) applies to asynchronous event retries/queues; it does not reduce throttling, it only changes how long events remain eligible for processing.
D is irrelevant: adding lambda:GetFunctionConcurrency to the execution role only allows reading settings and does not change throttling behavior.
Therefore, the best operational fixes are to increase reserved concurrency and request a concurrency quota increase.
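The interaction between reserved concurrency (option C) and the account quota (option E) can be sketched as a simplified model. This is not an AWS API call; it is an illustrative helper, and the numbers used in any example are hypothetical.

```python
from typing import Optional

def effective_concurrency_cap(account_quota: int,
                              other_reserved: int,
                              function_reserved: Optional[int]) -> int:
    """Simplified model of the concurrency available to one Lambda function.

    If reserved concurrency is set, it is a hard cap on the function
    (raising it is option C). Otherwise the function shares the account's
    unreserved pool, which only a service quota increase can grow
    (option E).
    """
    if function_reserved is not None:
        return min(function_reserved, account_quota)
    return account_quota - other_reserved
```

For example, with a 1,000 account quota, 300 reserved elsewhere, and a reserved concurrency of 50, the function throttles at 50 concurrent executions even though 700 unreserved slots exist.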
A company has an application that consists of different microservices that run inside an AWS account. The microservices are running in containers inside a single VPC. The number of microservices is constantly increasing. A developer must create a central logging solution for application logs.
Which solution will meet these requirements?
- A . Create a different Amazon CloudWatch Logs stream for each microservice.
- B . Create an AWS CloudTrail trail to log all the API calls.
- C . Configure VPC Flow Logs to track the communications between the microservices.
- D . Use AWS Cloud Map to map the interactions of the microservices.
A
Explanation:
Centralized logging for microservices means collecting application logs from many services into a single managed logging backend where logs can be searched, retained, and monitored. For containerized microservices on AWS, the standard approach is to send each service’s stdout/stderr logs to Amazon CloudWatch Logs.
Option A is the appropriate solution: create separate CloudWatch Logs streams (and typically log groups) per microservice. This scales naturally as microservices increase, supports retention policies, metric filters, alarms, and integrations with tools like CloudWatch Logs Insights for querying across services. Container orchestrators (ECS/EKS) can be configured to use the CloudWatch Logs log driver so logs are shipped automatically without custom log shipping agents.
Option B (CloudTrail) logs AWS API activity, not application logs.
Option C (VPC Flow Logs) captures network flow metadata, not application-level logs.
Option D (Cloud Map) is service discovery, not logging.
Therefore, using CloudWatch Logs streams per microservice is the correct central logging approach.
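The per-microservice log configuration mentioned above can be sketched as a builder for the `logConfiguration` fragment of an ECS container definition using the `awslogs` log driver. The `/ecs/<service>` group naming is a hypothetical convention, not a requirement.

```python
def awslogs_config(service_name: str, region: str) -> dict:
    """Container logConfiguration fragment for the awslogs log driver.

    One log group per microservice keeps retention policies and metric
    filters independent; the stream prefix yields a distinct stream per
    container, so the solution scales as services are added.
    """
    return {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": f"/ecs/{service_name}",  # hypothetical naming scheme
            "awslogs-region": region,
            "awslogs-stream-prefix": service_name,
        },
    }
```

The returned dict drops into a task definition's `containerDefinitions[].logConfiguration` field, so no custom log-shipping agent is needed.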
A developer is migrating a containerized application from an on-premises environment to the AWS Cloud. The developer is using the AWS CDK to provision a container in Amazon ECS on AWS Fargate. The container is behind an Application Load Balancer (ALB).
When the developer deploys the stack, the deployment fails because the ALB fails health checks. The developer needs to resolve the failed health checks.
Which solutions will meet this requirement? (Select TWO.)
- A . Confirm that the capacity providers for the container have been provisioned and are properly sized.
- B . Confirm that the target group port matches the port mappings in the ECS task definition.
- C . Confirm that a hosted zone associated with the ALB matches a hosted zone that is referenced in the ECS task definition.
- D . Confirm that the ALB listener on the mapped port has a default action that redirects to the application’s health check path endpoint.
- E . Confirm that the ALB listener on the mapped port has a default action that forwards to the correct target group.
B,E
Explanation:
Option B: The target group port in the ALB must match the port specified in the ECS task definition. If there is a mismatch, the ALB health check will fail since it cannot correctly route traffic to the container.
Option E: The ALB listener must have a default action that forwards requests to the correct target group associated with the ECS service. If this configuration is missing, the health check will fail as no traffic is routed to the service.
Option A is irrelevant to resolving health check issues since capacity providers relate to provisioning compute capacity.
Option C (hosted zone) is not directly related to ALB health checks.
Option D is incorrect: a redirect default action does not forward traffic to the target group, and the ALB performs health checks directly against the registered targets on the configured health check path, not through listener redirects.
Reference: AWS ECS Health Check Documentation
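The port-match check in option B can be expressed as a small validation sketch over a task definition document. The sample task definition in any example is hypothetical.

```python
def ports_aligned(target_group_port: int, task_definition: dict) -> bool:
    """Check option B: the ALB target group port must match a
    containerPort in the ECS task definition's port mappings,
    otherwise health check traffic never reaches the container."""
    for container in task_definition.get("containerDefinitions", []):
        for mapping in container.get("portMappings", []):
            if mapping.get("containerPort") == target_group_port:
                return True
    return False
```

Running this against the deployed task definition before creating the target group catches the mismatch that causes the failed health checks.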
A developer is troubleshooting an application in an integration environment. In the application, an Amazon Simple Queue Service (Amazon SQS) queue consumes messages and then an AWS Lambda function processes the messages. The Lambda function transforms the messages and makes an API call to a third-party service.
There has been an increase in application usage. The third-party API frequently returns an HTTP 429 Too Many Requests error message. The error message prevents a significant number of messages from being processed successfully.
How can the developer resolve this issue?
- A . Increase the SQS event source’s batch size setting.
- B . Configure provisioned concurrency for the Lambda function based on the third-party API’s documented rate limits.
- C . Increase the retry attempts and maximum event age in the Lambda function’s asynchronous configuration.
- D . Configure maximum concurrency on the SQS event source based on the third-party service’s documented rate limits.
D
Explanation:
Maximum concurrency for Amazon SQS as an event source lets you control the maximum number of concurrent Lambda invocations driven by that event source. When multiple SQS event sources are configured for a function, the limit can be set individually on each event source mapping.
In this scenario, the developer needs to resolve the issue of the third-party API frequently returning an HTTP 429 Too Many Requests error message, which prevents a significant number of messages from being processed successfully. To achieve this, the developer can follow these steps:
Find out the documented rate limits of the third-party API, which specify how many requests can be made in a given time period.
Configure maximum concurrency on the SQS event source based on the rate limits of the third-party API. This will limit the number of concurrent invokes by the SQS event source and prevent exceeding the rate limits of the third-party API.
Test and monitor the application performance and adjust the maximum concurrency value as needed.
By using this solution, the developer can reduce the frequency of HTTP 429 errors and improve the message processing success rate. The developer can also avoid throttling or blocking by the third-party API.
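The setting described above lives in the event source mapping's `ScalingConfig`. A sketch of a parameter builder for the Lambda `UpdateEventSourceMapping` call follows; the mapping UUID and concurrency value in any example are hypothetical.

```python
def scaling_config_params(mapping_uuid: str, max_concurrency: int) -> dict:
    """Parameters for the Lambda UpdateEventSourceMapping API that set
    SQS event source maximum concurrency (valid range is 2-1000).

    Capping concurrent pollers/invocations keeps the call rate to the
    downstream third-party API within its documented limits.
    """
    if not 2 <= max_concurrency <= 1000:
        raise ValueError("SQS maximum concurrency must be between 2 and 1000")
    return {
        "UUID": mapping_uuid,
        "ScalingConfig": {"MaximumConcurrency": max_concurrency},
    }
```

The returned dict would be passed as keyword arguments to the SDK's `update_event_source_mapping` call.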
A developer designed an application on an Amazon EC2 instance. The application makes API requests to objects in an Amazon S3 bucket.
Which combination of steps will ensure that the application makes the API requests in the MOST secure manner? (Select TWO.)
- A . Create an IAM user that has permissions to the S3 bucket. Add the user to an IAM group.
- B . Create an IAM role that has permissions to the S3 bucket.
- C . Add the IAM role to an instance profile. Attach the instance profile to the EC2 instance.
- D . Create an IAM role that has permissions to the S3 bucket. Assign the role to an IAM group.
- E . Store the credentials of the IAM user in the environment variables on the EC2 instance.
B,C
Explanation:
IAM Roles for EC2: IAM roles are the recommended way to provide AWS credentials to applications running on EC2 instances. Here’s how this works:
You create an IAM role with the necessary permissions to access the target S3 bucket.
You create an instance profile and associate the IAM role with this profile.
When launching the EC2 instance, you attach this instance profile.
Temporary Security Credentials: When the application on the EC2 instance needs to access S3, it doesn’t directly use access keys. Instead, the AWS SDK running on the instance retrieves temporary security credentials associated with the role. These are rotated automatically by AWS.
Reference: IAM Roles for Amazon EC2: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
Temporary Security Credentials: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
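The two policy documents behind options B and C can be sketched as plain dicts. The trust policy is the standard EC2 service trust relationship; the permissions policy and bucket name are hypothetical examples.

```python
# Trust policy: allows the EC2 service to assume the role, which is
# what makes attaching the role via an instance profile work (option C).
EC2_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy for the role (option B); the bucket name is a
# hypothetical placeholder.
S3_ACCESS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-app-bucket",
            "arn:aws:s3:::example-app-bucket/*",
        ],
    }],
}
```

With the role attached through an instance profile, the application's SDK picks up rotating temporary credentials automatically; no access keys are stored on the instance.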
An ecommerce application is running behind an Application Load Balancer. A developer observes some unexpected load on the application during non-peak hours. The developer wants to analyze patterns for the client IP addresses that use the application.
Which HTTP header should the developer use for this analysis?
- A . The X-Forwarded-Proto header
- B . The X-Forwarded-Host header
- C . The X-Forwarded-For header
- D . The X-Forwarded-Port header
C
Explanation:
The HTTP header that the developer should use for this analysis is the X-Forwarded-For header. This header contains the IP address of the client that made the request to the Application Load Balancer. The developer can use this header to analyze patterns for the client IP addresses that use the application. The other headers either contain information about the protocol, host, or port of the request, which are not relevant for the analysis.
Reference: How Application Load Balancer works with your applications
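Because the ALB appends the connecting client's IP to `X-Forwarded-For`, the left-most entry is the original caller. A minimal parsing sketch (the sample addresses are illustrative):

```python
def client_ip(x_forwarded_for: str) -> str:
    """Extract the original client IP from an X-Forwarded-For header.

    The header is a comma-separated list; the left-most entry is the
    original client, and later entries are intermediate proxies.
    """
    return x_forwarded_for.split(",")[0].strip()
```

For example, for a header value of `"203.0.113.7, 10.0.0.2"`, the original client is `203.0.113.7`.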
A company with multiple branch locations has an analytics and reporting application. Each branch office pushes a sales report to a shared Amazon S3 bucket at a predefined time each day. The company has developed an AWS Lambda function that analyzes the reports from all branch offices in a single pass. The Lambda function stores the results in a database.
The company needs to start the analysis once each day at a specific time.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure an S3 event notification to invoke the Lambda function when a branch office uploads a sales report.
- B . Create an AWS Step Functions state machine that invokes the Lambda function once each day at the predefined time.
- C . Configure the Lambda function to run continuously and to begin analysis only at the predefined time each day.
- D . Create an Amazon EventBridge scheduled rule that invokes the Lambda function once each day at the predefined time.
D
Explanation:
The requirement is to run a single analysis job once per day at a specific time, after branch offices have uploaded their daily reports. The most cost-effective and straightforward way to trigger a Lambda function on a fixed schedule is an Amazon EventBridge scheduled rule (a cron or rate expression). With a scheduled rule, EventBridge invokes the Lambda function at the exact predefined time each day without requiring any continuously running resources, polling logic, or additional orchestration.
Option D fits perfectly: an EventBridge schedule is purpose-built for time-based triggers, and you pay only for the invocations and any EventBridge rule evaluations according to AWS pricing, with minimal operational overhead. The Lambda function will run exactly once each day and can then read all relevant objects from the S3 bucket and process them in a single pass as designed.
Option A (S3 event notifications) triggers the Lambda function per upload, which is not aligned with “single pass once per day.” It could cause multiple invocations and additional cost and complexity (coordination, waiting for all branches, handling partial arrival).
Option B (Step Functions) can schedule via EventBridge as well, but Step Functions introduces additional state machine costs and is unnecessary if the requirement is simply a scheduled single Lambda invocation.
Option C is the least cost-effective: running continuously to “wait” until a time wastes compute time and increases cost, and it is not an event-driven design.
Therefore, D is the most cost-effective solution: use an EventBridge scheduled rule to invoke the Lambda function once daily at the predefined time.
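The schedule itself is just a cron expression. A small helper sketch that builds the six-field EventBridge cron string (the time in any example is hypothetical):

```python
def daily_cron(hour_utc: int, minute: int = 0) -> str:
    """EventBridge cron expression firing once per day at a UTC time.

    EventBridge cron has six fields: minute, hour, day-of-month, month,
    day-of-week, year; a '?' is required in one of the two day fields.
    """
    if not (0 <= hour_utc <= 23 and 0 <= minute <= 59):
        raise ValueError("invalid time of day")
    return f"cron({minute} {hour_utc} * * ? *)"
```

The returned string would be used as the rule's `ScheduleExpression`, with the Lambda function configured as the rule target.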
A developer must cache dependent artifacts from Maven Central, a public package repository, as part of an application’s build pipeline. The build pipeline has an AWS CodeArtifact repository where
artifacts of the build are published. The developer needs a solution that requires minimum changes to the build pipeline.
Which solution meets these requirements?
- A . Modify the existing CodeArtifact repository to associate an upstream repository with the public package repository.
- B . Create a new CodeArtifact repository that has an external connection to the public package repository.
- C . Create a new CodeArtifact domain that contains a new repository that has an external connection to the public package repository.
- D . Modify the CodeArtifact repository resource policy to allow artifacts to be fetched from the public package repository.
A
Explanation:
The build pipeline already publishes to an existing CodeArtifact repository, so the solution requiring the fewest changes is to modify that repository to associate an upstream repository connected to the public package repository (Maven Central). The pipeline continues to resolve dependencies through the same repository endpoint, and CodeArtifact transparently fetches and caches packages from the upstream on first request.
Options B and C create a new repository (and, for C, a new domain), which would require reconfiguring the pipeline to point at a different endpoint. Option D is incorrect because a resource policy controls access permissions to a repository; it does not make the repository fetch packages from a public source.
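The upstream association described in option A maps to the CodeArtifact `UpdateRepository` API. A sketch of a parameter builder follows; the domain and repository names in any example are hypothetical.

```python
def upstream_update_params(domain: str, repository: str,
                           upstream_repo: str) -> dict:
    """Parameters for the CodeArtifact UpdateRepository API that
    associate an upstream repository with an existing repository.

    The upstream repository is the one that holds the external
    connection to Maven Central; packages are fetched through it and
    cached on first request, with no pipeline changes required.
    """
    return {
        "domain": domain,
        "repository": repository,
        "upstreams": [{"repositoryName": upstream_repo}],
    }
```

The returned dict would be passed as keyword arguments to the SDK's `update_repository` call.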
A developer is working on an ecommerce platform that communicates with several third-party payment processing APIs. The third-party payment services do not provide a test environment.
The developer needs to validate the ecommerce platform’s integration with the third-party payment processing APIs. The developer must test the API integration code without invoking the third-party payment processing APIs.
Which solution will meet these requirements?
- A . Set up an Amazon API Gateway REST API with a gateway response configured for status code 200. Add response templates that contain sample responses captured from the real third-party API.
- B . Set up an AWS AppSync GraphQL API with a data source configured for each third-party API. Specify an integration type of Mock. Configure integration responses by using sample responses captured from the real third-party API.
- C . Create an AWS Lambda function for each third-party API. Embed responses captured from the real third-party API. Configure Amazon Route 53 Resolver with an inbound endpoint for each Lambda function’s Amazon Resource Name (ARN).
- D . Set up an Amazon API Gateway REST API for each third-party API. Specify an integration request type of Mock. Configure integration responses by using sample responses captured from the real third-party API.
D
D
Explanation:
Mocking API Responses: API Gateway’s Mock integration type enables simulating API behavior without invoking backend services.
Testing with Sample Data: Using captured responses from the real third-party API ensures realistic testing of the integration code.
Focus on Integration Logic: This solution allows the developer to isolate and test the application’s interaction with the payment APIs, even without a test environment from the third-party providers.
Reference: Amazon API Gateway Mock Integrations: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html
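The mock setup maps to the API Gateway `PutIntegration` call with `type` set to `MOCK` and a request template that returns a status code. A sketch of a parameter builder follows; the IDs and sample payload in any example are hypothetical.

```python
def mock_integration_params(rest_api_id: str, resource_id: str,
                            http_method: str) -> dict:
    """Parameters for the API Gateway PutIntegration API with type MOCK.

    A mock integration returns the mapped integration response without
    invoking any backend, so the real third-party payment API is never
    called during testing.
    """
    return {
        "restApiId": rest_api_id,
        "resourceId": resource_id,
        "httpMethod": http_method,
        "type": "MOCK",
        # The mapping template selects the integration response to return.
        "requestTemplates": {"application/json": '{"statusCode": 200}'},
    }
```

A corresponding integration response would carry a response template built from a sample payload captured from the real API, giving the integration code realistic data.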
A company is running a custom web application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group. The company’s development team is using AWS CloudFormation to deploy all the services. The application is time-consuming to install and configure when the development team launches a new instance.
Which combination of steps should a developer take to optimize the performance when a new instance is launched? (Select TWO.)
- A . Use an AWS Marketplace Amazon Machine Image (AMI) with a prebuilt application.
- B . Create a prebuilt Amazon Machine Image (AMI) with the application installed and configured.
- C . Update the launch template resource in the CloudFormation template.
- D . Use AWS Systems Manager Run Command to install and configure the application.
- E . Use CloudFormation helper scripts to install and configure the application.
B, C
Explanation:
To reduce the time it takes for new Auto Scaling instances to become ready, the most effective optimization is to bake the time-consuming installation and configuration steps into a custom AMI. By creating a prebuilt AMI that already contains the application binaries, dependencies, and baseline configuration, new instances start from an image that is already near “ready to serve.” This minimizes lengthy bootstrapping and significantly shortens the time required to pass ALB health checks and enter service. Therefore, creating a prebuilt AMI (B) is a primary performance optimization for faster scaling events and lower deployment variability.
After creating the AMI, the Auto Scaling group must actually use it. Because the infrastructure is managed with AWS CloudFormation, the developer should update the launch template resource (or launch configuration equivalent) in the CloudFormation template to reference the new AMI ID. This ensures that all newly launched instances use the optimized image consistently and that the change is tracked and repeatable as part of infrastructure as code. That is option C.
The other options are less optimal for the stated goal. Installing and configuring the app at launch time via Systems Manager Run Command (D) or CloudFormation helper scripts (E) still performs the expensive work during instance initialization, which prolongs launch time and can create variance (for example, dependency download time, repo availability, or transient network slowness).
Option A (Marketplace AMI) could help only if a suitable prebuilt image exists for the exact custom application, which is unlikely for a proprietary application, and it also reduces control over the build and hardening process compared to maintaining an internal golden AMI.
Therefore, the best combination is B (bake the app into a custom AMI) and C (update the CloudFormation-managed launch template to use that AMI).
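The CloudFormation change in option C amounts to pointing the launch template at the new golden AMI. A sketch of the resource fragment as a plain dict follows; the AMI ID and instance type in any example are hypothetical placeholders.

```python
def launch_template_resource(ami_id: str, instance_type: str) -> dict:
    """CloudFormation AWS::EC2::LaunchTemplate fragment referencing the
    prebuilt AMI (options B and C).

    Because the application is already baked into the image, instance
    bootstrapping shrinks to near zero and new instances pass ALB
    health checks quickly.
    """
    return {
        "Type": "AWS::EC2::LaunchTemplate",
        "Properties": {
            "LaunchTemplateData": {
                "ImageId": ami_id,           # the new golden AMI ID
                "InstanceType": instance_type,
            },
        },
    }
```

Updating this resource in the template and redeploying the stack makes the Auto Scaling group launch all future instances from the optimized image.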
