Practice Free DVA-C02 Exam Online Questions
An application that is deployed to Amazon EC2 is using Amazon DynamoDB. The application calls the DynamoDB REST API. Periodically, the application receives a ProvisionedThroughputExceededException error when the application writes to a DynamoDB table.
Which solutions will mitigate this error MOST cost-effectively? (Select TWO.)
- A . Modify the application code to perform exponential backoff when the error is received.
- B . Modify the application to use the AWS SDKs for DynamoDB.
- C . Increase the read and write throughput of the DynamoDB table.
- D . Create a DynamoDB Accelerator (DAX) cluster for the DynamoDB table.
- E . Create a second DynamoDB table. Distribute the reads and writes between the two tables.
A, B
Explanation:
These solutions will mitigate the error most cost-effectively because they do not require increasing the provisioned throughput of the DynamoDB table or creating additional resources. Exponential backoff is a retry strategy that increases the waiting time between retries to reduce the number of requests sent to DynamoDB. The AWS SDKs for DynamoDB implement exponential backoff by default and also provide other features such as automatic pagination and encryption. Increasing the read and write throughput of the DynamoDB table, creating a DynamoDB Accelerator (DAX) cluster, or creating a second DynamoDB table will incur additional costs and complexity.
Reference: [Error Retries and Exponential Backoff in AWS], [Using the AWS SDKs with DynamoDB]
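As an illustration of answers A and B together, here is a minimal sketch (assuming Python and boto3; the table name and item are hypothetical) that enables the SDK’s built-in retry modes, which apply exponential backoff with jitter to throttling errors such as ProvisionedThroughputExceededException:

```python
import boto3
from botocore.config import Config

# The "adaptive" (or "standard") retry mode backs off exponentially,
# with jitter, on throttling errors before each retry attempt.
config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
dynamodb = boto3.client("dynamodb", config=config)

dynamodb.put_item(
    TableName="Orders",  # hypothetical table name
    Item={"OrderId": {"S": "order-123"}},  # hypothetical item
)
```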
A company uses Amazon API Gateway to expose a set of APIs to customers. The APIs have caching enabled in API Gateway. Customers need a way to invalidate the cache for each API when they test the API.
What should a developer do to give customers the ability to invalidate the API cache?
- A . Ask the customers to use AWS credentials to call the InvalidateCache API operation.
- B . Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to send a request that contains the HTTP header when they make an API call.
- C . Ask the customers to use the AWS SDK API Gateway class to invoke the InvalidateCache API operation.
- D . Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to add the INVALIDATE_CACHE query string parameter when they make an API call.
B
Explanation:
To let clients invalidate an API Gateway cache entry, grant the execute-api:InvalidateCache permission to the IAM identity that invokes the API and have the client send the Cache-Control: max-age=0 HTTP header with the request. API Gateway then retrieves a fresh value from the backend instead of serving the cached entry. There is no INVALIDATE_CACHE query string parameter and no client-facing InvalidateCache API operation for per-request invalidation.
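For illustration, a sketch (assuming Python and the requests library; the endpoint URL is hypothetical, and a real IAM-protected API would also require SigV4-signed credentials that carry execute-api:InvalidateCache) of the header that triggers invalidation:

```python
import requests

# Sending Cache-Control: max-age=0 asks API Gateway to invalidate the
# cached entry for this request and fetch a fresh backend response.
resp = requests.get(
    "https://abc123.execute-api.us-east-1.amazonaws.com/test/pets",  # hypothetical
    headers={"Cache-Control": "max-age=0"},
)
print(resp.status_code)
```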
A developer needs to write an AWS CloudFormation template on a local machine and deploy a CloudFormation stack to AWS.
What must the developer do to complete these tasks?
- A . Install the AWS CLI. Configure the AWS CLI by using an IAM user name and password.
- B . Install the AWS CLI. Configure the AWS CLI by using an SSH key.
- C . Install the AWS CLI. Configure the AWS CLI by using an IAM user access key and secret key.
- D . Install an AWS software development kit (SDK). Configure the SDK by using an X.509 certificate.
C
Explanation:
The AWS CLI authenticates with an IAM user access key ID and secret access key, typically set with the aws configure command. User names and passwords, SSH keys, and X.509 certificates are not valid credentials for the AWS CLI, so option C is the only way to configure the CLI to deploy a CloudFormation stack.
A developer uses AWS CloudFormation to deploy an Amazon API Gateway API and an AWS Step Functions state machine. The state machine must reference the API Gateway API after the CloudFormation template is deployed. The developer needs a solution that uses the state machine to reference the API Gateway endpoint.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure the CloudFormation template to reference the API endpoint in the DefinitionSubstitutions property for the AWS::StepFunctions::StateMachine resource.
- B . Configure the CloudFormation template to store the API endpoint in an environment variable for the AWS::StepFunctions::StateMachine resource. Configure the state machine to reference the environment variable.
- C . Configure the CloudFormation template to store the API endpoint in a standard AWS::SecretsManager::Secret resource. Configure the state machine to reference the resource.
- D . Configure the CloudFormation template to store the API endpoint in a standard AWS::AppConfig::ConfigurationProfile resource. Configure the state machine to reference the resource.
A
Explanation:
DefinitionSubstitutions: The DefinitionSubstitutions property of the AWS::StepFunctions::StateMachine resource lets the template substitute values, such as the API Gateway endpoint, into the state machine definition when the stack is deployed.
Cost-Effectiveness: This solution is cost-effective because it uses CloudFormation’s built-in capabilities, avoiding the need for additional services like Secrets Manager or AppConfig.
Reference: AWS::StepFunctions::StateMachine: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html
CloudFormation DefinitionSubstitutions: https://github.com/aws-cloudformation/aws-cloudformation-resource-providers-stepfunctions/issues/14
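As an illustration, a minimal CDK-in-Python sketch (construct names and the Pass state are hypothetical; a raw CloudFormation template uses the same DefinitionSubstitutions property on AWS::StepFunctions::StateMachine) of passing the API endpoint into the state machine definition at deployment time:

```python
import json

from aws_cdk import Stack
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_iam as iam
from aws_cdk import aws_stepfunctions as sfn
from constructs import Construct

class ApiStateMachineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        api = apigw.RestApi(self, "Api")
        api.root.add_method("GET")  # a REST API needs at least one method to deploy

        role = iam.Role(
            self, "StateMachineRole",
            assumed_by=iam.ServicePrincipal("states.amazonaws.com"),
        )

        # ${ApiEndpoint} is a placeholder that CloudFormation fills in
        # from DefinitionSubstitutions when the stack is deployed.
        definition = {
            "StartAt": "RecordEndpoint",
            "States": {
                "RecordEndpoint": {
                    "Type": "Pass",
                    "Result": {"apiEndpoint": "${ApiEndpoint}"},
                    "End": True,
                },
            },
        }

        sfn.CfnStateMachine(
            self, "StateMachine",
            role_arn=role.role_arn,
            definition_string=json.dumps(definition),
            definition_substitutions={"ApiEndpoint": api.url},
        )
```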
A developer is investigating an issue in part of a company’s application. In the application, messages are sent to an Amazon Simple Queue Service (Amazon SQS) queue. An AWS Lambda function polls messages from the SQS queue and sends email messages by using Amazon Simple Email Service (Amazon SES). Users have been receiving duplicate email messages during periods of high traffic.
Which reasons could explain the duplicate email messages? (Select TWO.)
- A . Standard SQS queues support at-least-once message delivery.
- B . Standard SQS queues support exactly-once processing, so the duplicate email messages are because of user error.
- C . Amazon SES has the DomainKeys Identified Mail (DKIM) authentication incorrectly configured.
- D . The SQS queue’s visibility timeout is lower than or the same as the Lambda function’s timeout.
- E . The Amazon SES bounce rate metric is too high.
A, D
Explanation:
SQS Delivery Behavior: Standard SQS queues guarantee at-least-once delivery, meaning messages may be processed more than once. This can lead to duplicate emails in this scenario.
Visibility Timeout: If the visibility timeout on the SQS queue is too short, a message might become visible for another consumer before the first Lambda function finishes processing it. This can also lead to duplicates.
Reference: Amazon SQS standard queues (at-least-once delivery)
Amazon SQS visibility timeout: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
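As a concrete mitigation for the visibility-timeout cause, a sketch (assuming Python and boto3; the queue URL and function timeout are hypothetical) that follows the AWS guidance to set the queue’s visibility timeout to at least six times the Lambda function’s timeout:

```python
import boto3

sqs = boto3.client("sqs")
lambda_timeout_seconds = 60  # hypothetical function timeout

# If the visibility timeout is shorter than the time a Lambda invocation
# can hold a message, the message reappears and is processed twice.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/email-queue",  # hypothetical
    Attributes={"VisibilityTimeout": str(6 * lambda_timeout_seconds)},
)
```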
A company’s application uses an Amazon API Gateway REST API and AWS Lambda functions to upload media files to and fetch media files from an Amazon S3 Standard bucket. The company runs a nightly job on an Amazon EC2 instance to create dashboards and other visualizations for application users. The job usually runs for 1 to 2 hours.
A developer observes request throttling while the job is running. The application generates multiple 429 exceptions in the Lambda function logs when files do not process successfully. The developer needs to resolve the issue and ensure that the application ingests all files.
Which solution will meet these requirements?
- A . Enable S3 Transfer Acceleration on the bucket. Use the appropriate endpoint.
- B . Call the CreateMultipartUpload API in the Lambda functions to upload the files in pieces.
- C . Implement the retry with a backoff pattern in the Lambda functions.
- D . Set up an S3 Lifecycle policy to automatically move the media files to the S3 Intelligent-Tiering storage class.
C
Explanation:
HTTP 429 errors indicate throttling (“Too Many Requests”). In this architecture, throttling can occur at multiple layers (API Gateway rate limits, downstream AWS service API throttles, or dependency throttling). Regardless of which service is throttling, the correct resilience pattern is to implement retries with exponential backoff and jitter for retryable failures. AWS SDKs and AWS best practices recommend backoff to reduce contention and allow the system to recover, while ensuring requests eventually succeed.
Option C directly addresses the problem and the requirement “ensure that all of the application ingests all files.” With a retry + backoff pattern, transient throttling is handled gracefully: failed operations are retried after increasing delays, avoiding immediate retry storms that worsen throttling. This increases successful completion rate without requiring major architectural changes.
Option A (Transfer Acceleration) can improve upload latency from geographically distributed clients, but it does not resolve request throttling (429) caused by API limits.
Option B (multipart upload) helps with large object uploads and can improve throughput/reliability for big files, but it does not inherently prevent 429 throttling at API Gateway or other throttled calls.
Option D (Intelligent-Tiering) affects storage cost optimization, not ingestion throttling.
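A minimal sketch (pure Python; the operation and the ThrottlingError exception are hypothetical stand-ins for the throttled call and its 429 error) of the retry-with-exponential-backoff pattern that option C describes, using full jitter to avoid synchronized retry storms:

```python
import random
import time

class ThrottlingError(Exception):
    """Hypothetical stand-in for a 429 Too Many Requests error."""

def call_with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=20.0):
    """Retry `operation` on throttling, sleeping with full jitter between tries."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ThrottlingError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Exponential backoff capped at max_delay, randomized ("full
            # jitter") so many clients do not retry in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```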
A company wants to automate part of its deployment process. A developer needs to automate the process of checking for and deleting unused resources that supported previously deployed stacks but that are no longer used.
The company has a central application that uses the AWS Cloud Development Kit (AWS CDK) to manage all deployment stacks. The stacks are spread out across multiple accounts. The developer’s solution must integrate as seamlessly as possible within the current deployment process.
Which solution will meet these requirements with the LEAST amount of configuration?
- A . In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CloudFormation template from a JSON file. Use the template to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
- B . In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CDK custom resource. Use the custom resource to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
- C . In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an API in AWS Amplify. Use the API to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
- D . In the AWS Lambda console, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CDK custom resource. Use the custom resource to import the Lambda function into the stack and to invoke the Lambda function when the deployment stack runs.
B
Explanation:
This solution meets the requirements with the least configuration because it uses a built-in AWS CDK feature to run custom logic during stack deployment or deletion. The AWS Cloud Development Kit (AWS CDK) is a software development framework for defining cloud infrastructure as code and provisioning it through CloudFormation. An AWS CDK custom resource lets you create resources that CloudFormation does not natively support, or perform tasks during stack deployment or deletion that CloudFormation cannot. The developer can write a handler function that uses AWS SDK calls to check for and delete unused resources, then create an AWS CDK custom resource that attaches the function code to a Lambda function and invokes it when the deployment stack runs. This automates the cleanup process with no additional integration work.
The other options all require extra configuration and integration with the central AWS CDK application: creating a CloudFormation template from a JSON file duplicates what the CDK already does, an AWS Amplify API adds an unrelated service, and writing the handler in the Lambda console places the code outside the CDK application.
Reference: [AWS Cloud Development Kit (CDK)], [Custom Resources]
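A minimal CDK-in-Python sketch (construct names and the handler asset path are hypothetical) of option B: a Lambda-backed custom resource whose handler runs whenever the deployment stack is created, updated, or deleted:

```python
from aws_cdk import CustomResource, Duration, Stack
from aws_cdk import aws_lambda as _lambda
from aws_cdk import custom_resources as cr
from constructs import Construct

class CleanupStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Handler code that uses AWS SDK calls to check for and delete
        # unused resources (implementation lives in lambda/cleanup.py).
        cleanup_fn = _lambda.Function(
            self, "CleanupHandler",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="cleanup.on_event",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical asset directory
            timeout=Duration.minutes(5),
        )

        # The provider framework invokes the handler on stack create,
        # update, and delete events.
        provider = cr.Provider(self, "CleanupProvider", on_event_handler=cleanup_fn)

        CustomResource(self, "CleanupResource", service_token=provider.service_token)
```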
A developer has an application that stores data in an Amazon S3 bucket. The application uses an HTTP API to store and retrieve objects. When the PutObject API operation adds objects to the S3 bucket, the developer must encrypt these objects at rest by using server-side encryption with Amazon S3 managed keys (SSE-S3).
Which solution will meet this requirement?
- A . Create an AWS Key Management Service (AWS KMS) key. Assign the KMS key to the S3 bucket.
- B . Set the x-amz-server-side-encryption header when invoking the PutObject API operation.
- C . Provide the encryption key in the HTTP header of every request.
- D . Apply TLS to encrypt the traffic to the S3 bucket.
B
Explanation:
Amazon S3 supports server-side encryption, which encrypts data at rest on the server that stores the data. One of the encryption options is SSE-S3, which uses keys managed by S3. To use SSE-S3, the x-amz-server-side-encryption header must be set to AES256 when invoking the PutObject API operation. This instructs S3 to encrypt the object data with SSE-S3 before saving it on disks in its data centers and decrypt it when it is downloaded.
Reference: Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-S3)
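A minimal sketch (assuming Python and boto3; the bucket and key are hypothetical). boto3 surfaces the x-amz-server-side-encryption header as the ServerSideEncryption parameter of put_object:

```python
import boto3

s3 = boto3.client("s3")

# ServerSideEncryption="AES256" sets the x-amz-server-side-encryption
# header, so S3 encrypts the object at rest with SSE-S3.
s3.put_object(
    Bucket="media-uploads",       # hypothetical bucket
    Key="images/photo-001.jpg",   # hypothetical object key
    Body=b"...object bytes...",
    ServerSideEncryption="AES256",
)
```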
A developer is working on an ecommerce platform that communicates with several third-party payment processing APIs. The third-party payment services do not provide a test environment.
The developer needs to validate the ecommerce platform’s integration with the third-party payment processing APIs. The developer must test the API integration code without invoking the third-party payment processing APIs.
Which solution will meet these requirements?
- A . Set up an Amazon API Gateway REST API with a gateway response configured for status code 200. Add response templates that contain sample responses captured from the real third-party API.
- B . Set up an AWS AppSync GraphQL API with a data source configured for each third-party API. Specify an integration type of Mock. Configure integration responses by using sample responses captured from the real third-party API.
- C . Create an AWS Lambda function for each third-party API. Embed responses captured from the real third-party API. Configure Amazon Route 53 Resolver with an inbound endpoint for each Lambda function’s Amazon Resource Name (ARN).
- D . Set up an Amazon API Gateway REST API for each third-party API. Specify an integration request type of Mock. Configure integration responses by using sample responses captured from the real third-party API.
D
Explanation:
Mocking API Responses: API Gateway’s Mock integration type enables simulating API behavior without invoking backend services.
Testing with Sample Data: Using captured responses from the real third-party API ensures realistic testing of the integration code.
Focus on Integration Logic: This solution allows the developer to isolate and test the application’s interaction with the payment APIs, even without a test environment from the third-party providers.
Reference: Amazon API Gateway Mock Integrations: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html
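A sketch (assuming Python and boto3; the API name and the sample payload are hypothetical) of wiring a Mock integration that returns a response captured from the real third-party API:

```python
import boto3

apigw = boto3.client("apigateway")

api = apigw.create_rest_api(name="payments-mock")  # hypothetical API name
root_id = apigw.get_resources(restApiId=api["id"])["items"][0]["id"]

apigw.put_method(restApiId=api["id"], resourceId=root_id,
                 httpMethod="GET", authorizationType="NONE")

# type="MOCK" short-circuits the backend: API Gateway answers directly.
apigw.put_integration(
    restApiId=api["id"], resourceId=root_id, httpMethod="GET",
    type="MOCK",
    requestTemplates={"application/json": '{"statusCode": 200}'},
)

apigw.put_method_response(restApiId=api["id"], resourceId=root_id,
                          httpMethod="GET", statusCode="200")

# The integration response carries the captured third-party payload.
apigw.put_integration_response(
    restApiId=api["id"], resourceId=root_id, httpMethod="GET",
    statusCode="200",
    responseTemplates={"application/json": '{"payment_status": "APPROVED"}'},
)
```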
A developer is building various microservices for an application that will run on Amazon EC2 instances. The developer needs to monitor the end-to-end view of the requests between the microservices and debug any issues in the various microservices.
What should the developer do to accomplish these tasks?
- A . Use Amazon CloudWatch to aggregate the microservices’ logs and metrics, and build the monitoring dashboard.
- B . Use AWS CloudTrail to aggregate the microservices’ logs and metrics, and build the monitoring dashboard.
- C . Use the AWS X-Ray SDK to add instrumentation in all the microservices, and monitor using the X-Ray service map.
- D . Use AWS Health to monitor the health of all the microservices.
C
Explanation:
To monitor end-to-end requests and debug issues across microservices, you need distributed tracing.
Among AWS tools, AWS X-Ray is specifically built for this use case.
Step-by-Step Breakdown:
Step 1: Understand the Requirement
The developer needs:
End-to-end request tracing across microservices
Debugging capability for performance issues or failures
This points directly to a distributed tracing solution rather than just logs or metrics.
Step 2: Evaluate the Options
Option A: Amazon CloudWatch
CloudWatch is powerful for metrics, logs, and alerts.
But it does not provide distributed tracing or request path visualization between microservices.
So, it’s not sufficient alone for the requirement.
Option B: AWS CloudTrail
CloudTrail tracks API calls made through the AWS Management Console, the AWS CLI, and the SDKs.
It is meant for auditing and governance, not microservices request tracing.
Not suitable for debugging microservices interactions.
Option C: AWS X-Ray SDK with X-Ray service map
Purpose-built for distributed tracing
Automatically collects data about requests as they travel through your microservices
You can instrument your code with the AWS X-Ray SDK in Java, Node.js, Python, Go, .NET, etc.
Visualizes requests using a service map, showing latency, errors, and time spent in downstream services.
How it works:
Instrument each microservice using the AWS X-Ray SDK.
Use the SDK to trace requests, propagate trace headers across services.
The X-Ray daemon collects and sends trace data to the X-Ray service.
You can view the service map, see bottlenecks, error rates, etc.
Option D: AWS Health
AWS Health provides information on AWS service outages or account-level events.
It does not monitor your application or microservices health or request flows.
Correct Choice: C is the only option that meets the developer’s requirement to monitor end-to-end request paths and debug issues in a microservices architecture.
Reference: AWS X-Ray Documentation: https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html
Instrumenting your application: https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk.html
Viewing the Service Map: https://docs.aws.amazon.com/xray/latest/devguide/xray-console-service-map.html
Tracing Microservices with AWS X-Ray: https://aws.amazon.com/blogs/compute/tracing-microservices-aws-x-ray/
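A minimal sketch (assuming Python and the aws-xray-sdk package; the segment name and the downstream call are illustrative) of instrumenting a microservice so its AWS SDK calls show up as subsegments in the X-Ray service map:

```python
import boto3
from aws_xray_sdk.core import patch_all, xray_recorder

# patch_all() instruments supported libraries (boto3, requests, and
# others) so downstream calls are recorded as subsegments automatically.
patch_all()

# On Lambda or behind the X-Ray middleware, segments are created for
# you; this explicit begin/end is only needed for standalone scripts.
xray_recorder.begin_segment("order-service")
try:
    s3 = boto3.client("s3")
    s3.list_buckets()  # traced as a subsegment
finally:
    xray_recorder.end_segment()
```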
