Practice Free DVA-C02 Exam Online Questions
A company is using an Amazon API Gateway REST API endpoint as a webhook to publish events from an on-premises source control management (SCM) system to Amazon EventBridge. The company has configured an EventBridge rule to listen for the events and to control application deployment in a central AWS account. The company needs to receive the same events across multiple receiver AWS accounts.
How can a developer meet these requirements without changing the configuration of the SCM system?
- A . Deploy the API Gateway REST API to all the required AWS accounts. Use the same custom domain name for all the gateway endpoints so that a single SCM webhook can be used for all events from all accounts.
- B . Deploy the API Gateway REST API to all the receiver AWS accounts. Create as many SCM webhooks as the number of AWS accounts.
- C . Grant permission to the central AWS account for EventBridge to access the receiver AWS accounts. Add an EventBridge event bus on the receiver AWS accounts as the targets to the existing EventBridge rule.
- D . Convert the API Gateway type from REST API to HTTP API.
A developer is designing a serverless application with two AWS Lambda functions to process photos. One Lambda function stores objects in an Amazon S3 bucket and stores the associated metadata in an Amazon DynamoDB table. The other Lambda function fetches the objects from the S3 bucket by using the metadata from the DynamoDB table. Both Lambda functions use the same Python library to perform complex computations and are approaching the quota for the maximum size of zipped deployment packages.
What should the developer do to reduce the size of the Lambda deployment packages with the LEAST operational overhead?
- A . Package each Python library in its own .zip file archive. Deploy each Lambda function with its own copy of the library.
- B . Create a Lambda layer with the required Python library. Use the Lambda layer in both Lambda functions.
- C . Combine the two Lambda functions into one Lambda function. Deploy the Lambda function as a single .zip file archive.
- D . Download the Python library to an S3 bucket. Program the Lambda functions to reference the object URLs.
B
Explanation:
AWS Lambda is a service that lets developers run code without provisioning or managing servers. Lambda layers are a distribution mechanism for libraries, custom runtimes, and other dependencies. The developer can create a Lambda layer with the required Python library and use the layer in both Lambda functions. This will reduce the size of the Lambda deployment packages and avoid reaching the quota for the maximum size of zipped deployment packages. The developer can also benefit from using layers to manage dependencies separately from function code.
Reference: [What Is AWS Lambda? – AWS Lambda]
[AWS Lambda Layers – AWS Lambda]
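To make the layer mechanism concrete, here is a minimal sketch of the packaging convention a Python Lambda layer expects: libraries must sit under a top-level `python/` directory inside the layer .zip so the runtime adds them to `sys.path`. The module name `mylib` and its contents are hypothetical stand-ins for the shared library both functions use.

```python
import io
import zipfile

def build_layer_zip(module_name: str, module_source: str) -> bytes:
    # A Python Lambda layer must place libraries under a top-level "python/"
    # directory; the Lambda runtime adds that directory to sys.path.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(f"python/{module_name}/__init__.py", module_source)
    return buf.getvalue()

# "mylib" is a hypothetical shared library used by both Lambda functions.
layer_bytes = build_layer_zip("mylib", "def compute(x):\n    return x * 2\n")

# The resulting bytes could then be published with
# boto3.client("lambda").publish_layer_version(...), which returns a new
# layer version on every publish -- giving the versioning noted above.
with zipfile.ZipFile(io.BytesIO(layer_bytes)) as zf:
    print(zf.namelist())  # ['python/mylib/__init__.py']
```

Both functions would then reference the same layer ARN (including its version number) instead of bundling the library in each deployment package.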
A developer is using AWS Amplify Hosting to build and deploy an application. The developer is receiving an increased number of bug reports from users. The developer wants to add end-to-end testing to the application to eliminate as many bugs as possible before the bugs reach production.
Which solution should the developer implement to meet these requirements?
- A . Run the amplify add test command in the Amplify CLI.
- B . Create unit tests in the application. Deploy the unit tests by using the amplify push command in the Amplify CLI.
- C . Add a test phase to the amplify.yml build settings for the application.
- D . Add a test phase to the aws-exports.js file for the application.
C
Explanation:
The solution that will meet the requirements is to add a test phase to the amplify.yml build settings for the application. This way, the developer can run end-to-end tests on every code commit and catch any bugs before deploying to production. The other options either do not support end-to-end testing, or do not run tests automatically.
Reference: End-to-end testing
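As a rough illustration, an amplify.yml with a test phase might look like the sketch below. The use of Cypress and the specific commands are assumptions for illustration; the key point is the dedicated `test` section that Amplify Hosting runs as part of the build pipeline.

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
test:
  phases:
    preTest:
      commands:
        - npm ci
    test:
      commands:
        - npx cypress run
  artifacts:
    baseDirectory: cypress
    files:
      - '**/*.png'
      - '**/*.mp4'
```

If any command in the test phase fails, the deployment is stopped before the new build reaches production.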
A developer is migrating a containerized application from an on-premises environment to an Amazon ECS cluster.
In the on-premises environment, the container is built from a Dockerfile. Service dependency configurations such as databases, caches, and storage volumes are stored in a docker-compose.yml file.
Both files are located at the top level of the code base that the developer needs to containerize. When the developer deploys the code to Amazon ECS, the instructions from the Dockerfile are carried out. However, none of the configurations from docker-compose.yml are applied.
The developer needs to resolve the error and ensure the configurations are applied.
Which solution will meet these requirements?
- A . Store the file path for the docker-compose.yml file as a Docker label. Add the label to the ECS cluster’s container details.
- B . Add the details from the docker-compose.yml file to an ECS task definition. Associate the task with the ECS cluster.
- C . Create a namespace in the ECS cluster. Associate the docker-compose.yml file to the namespace.
- D . Update the service type of the ECS cluster to REPLICA, and redeploy the stack.
B
Explanation:
Why Option B is Correct: Amazon ECS does not natively process docker-compose.yml files. Instead, the configurations from docker-compose.yml must be converted into ECS-compatible configurations within a task definition. Task definitions are the primary way to specify container configurations in ECS, including service dependencies like databases, caches, and volumes.
Steps to Resolve the Error:
Extract the configurations from the docker-compose.yml file.
Map the dependencies and settings to the appropriate ECS task definition fields.
Deploy the task definition to the ECS cluster.
Why Other Options are Incorrect:
Option A: Docker labels do not directly impact ECS task execution or integrate with ECS service configurations.
Option C: ECS namespaces (used with Service Connect and AWS Cloud Map for service discovery) are not a mechanism for applying docker-compose.yml configurations.
Option D: Changing the service type to REPLICA does not resolve the issue of missing service dependency configurations.
AWS Documentation
Reference: Amazon ECS Task Definitions
Migrating Docker Compose Workloads to ECS
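The steps above can be sketched as a task definition fragment. The family name, image URI, hostnames, and ports below are hypothetical; the point is how docker-compose concepts (service dependencies, environment, volumes) map onto ECS task definition fields.

```json
{
  "family": "migrated-app",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 8080 }],
      "environment": [{ "name": "REDIS_HOST", "value": "cache.example.internal" }],
      "mountPoints": [{ "sourceVolume": "app-data", "containerPath": "/data" }],
      "dependsOn": [{ "containerName": "cache", "condition": "START" }]
    },
    {
      "name": "cache",
      "image": "redis:7",
      "essential": false
    }
  ],
  "volumes": [{ "name": "app-data" }]
}
```

Once registered, the task definition is referenced by an ECS service or run-task invocation in the cluster, and the dependency configuration travels with it.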
An IAM role is attached to an Amazon EC2 instance that explicitly denies access to all Amazon S3 API actions. The EC2 instance credentials file specifies the IAM access key and secret access key, which allow full administrative access.
Given that multiple modes of IAM access are present for this EC2 instance, which of the following is correct?
- A . The EC2 instance will only be able to list the S3 buckets.
- B . The EC2 instance will only be able to list the contents of one S3 bucket at a time.
- C . The EC2 instance will be able to perform all actions on any S3 bucket.
- D . The EC2 instance will not be able to perform any S3 action on any S3 bucket.
A developer is working on an application that handles 10 MB documents that contain highly sensitive data. The application will use AWS KMS to perform client-side encryption.
What steps must be followed?
- A . Invoke the Encrypt API, passing the plaintext data that must be encrypted, then reference the customer managed key ARN in the KeyId parameter.
- B . Invoke the GenerateRandom API to get a data encryption key, then use the data encryption key to encrypt the data.
- C . Invoke the GenerateDataKey API to retrieve the encrypted version of the data encryption key to encrypt the data.
- D . Invoke the GenerateDataKey API to retrieve the plaintext version of the data encryption key to encrypt the data.
D
Explanation:
For client-side encryption with AWS KMS, the documented pattern is envelope encryption: you use KMS to generate and protect a data encryption key (DEK), and you use that DEK locally to encrypt the actual data. This is the correct approach for large payloads (such as 10 MB documents), because the KMS Encrypt API is intended for encrypting small amounts of data (KMS is not designed to directly encrypt multi-megabyte application payloads).
With envelope encryption, the application calls GenerateDataKey on AWS KMS, specifying the KMS key (customer managed key) to use. KMS returns two versions of the DEK in the response:
a plaintext DEK (usable immediately by the application for local cryptographic operations), and a ciphertext (encrypted) DEK that is encrypted under the specified KMS key.
The application then uses the plaintext DEK to encrypt the 10 MB document on the client side (for example, using an authenticated encryption mode such as AES-GCM via a standard crypto library). After encryption completes, the application must not persist the plaintext DEK; instead, it stores the encrypted document alongside the ciphertext DEK. Later, to decrypt, the application sends the ciphertext DEK to KMS using Decrypt to recover the plaintext DEK, and then decrypts the document locally.
Therefore, option D is correct because it explicitly uses GenerateDataKey and the plaintext DEK to encrypt the data, which is the required envelope-encryption flow.
Options A, B, and C do not follow the correct KMS envelope-encryption process for large client-side payloads.
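The envelope-encryption flow can be sketched end to end. To keep the example runnable without AWS credentials or third-party crypto libraries, a toy keystream cipher (not secure, for illustration only) stands in for AES-GCM, and a local master key stands in for the KMS customer managed key; in real code, `generate_data_key` and `decrypt_data_key` below would be calls to `kms.generate_data_key(...)` and `kms.decrypt(...)` via boto3.

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (NOT secure) standing in for AES-GCM, so the
    # envelope-encryption flow is runnable without AWS or extra libraries.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

MASTER_KEY = secrets.token_bytes(32)  # stands in for the KMS customer managed key

def generate_data_key():
    # Mirrors kms.generate_data_key(): returns a plaintext DEK plus the same
    # DEK encrypted ("wrapped") under the master key.
    plaintext_dek = secrets.token_bytes(32)
    nonce = secrets.token_bytes(12)
    wrapped = nonce + _keystream_xor(MASTER_KEY, nonce, plaintext_dek)
    return plaintext_dek, wrapped

def decrypt_data_key(wrapped: bytes) -> bytes:
    # Mirrors kms.decrypt(): recovers the plaintext DEK from the wrapped copy.
    nonce, ciphertext = wrapped[:12], wrapped[12:]
    return _keystream_xor(MASTER_KEY, nonce, ciphertext)

# Envelope encryption of a large document:
document = b"x" * (10 * 1024 * 1024)             # a 10 MB payload
dek, wrapped_dek = generate_data_key()           # 1. GenerateDataKey
nonce = secrets.token_bytes(12)
encrypted_doc = nonce + _keystream_xor(dek, nonce, document)  # 2. encrypt locally
del dek                                          # 3. discard the plaintext DEK

# Later: unwrap the DEK via KMS Decrypt, then decrypt the document locally.
recovered_dek = decrypt_data_key(wrapped_dek)    # 4. Decrypt
restored = _keystream_xor(recovered_dek, encrypted_doc[:12], encrypted_doc[12:])
print(restored == document)
```

Only the small wrapped DEK ever travels to KMS; the 10 MB document is encrypted and decrypted entirely on the client side.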
A developer is creating an application that will be deployed on IoT devices. The application will send data to a RESTful API that is deployed as an AWS Lambda function. The application will assign each API request a unique identifier. The volume of API requests from the application can randomly increase at any given time of day.
During periods of request throttling, the application might need to retry requests. The API must be able to handle duplicate requests without inconsistencies or data loss.
Which solution will meet these requirements?
- A . Create an Amazon RDS for MySQL DB instance. Store the unique identifier for each request in a database table. Modify the Lambda function to check the table for the identifier before processing the request.
- B . Create an Amazon DynamoDB table. Store the unique identifier for each request in the table. Modify the Lambda function to check the table for the identifier before processing the request.
- C . Create an Amazon DynamoDB table. Store the unique identifier for each request in the table. Modify the Lambda function to return a client error response when the function receives a duplicate request.
- D . Create an Amazon ElastiCache for Memcached instance. Store the unique identifier for each request in the cache. Modify the Lambda function to check the cache for the identifier before processing the request.
B
Explanation:
Amazon DynamoDB is a fully managed NoSQL database service that can store and retrieve any amount of data with high availability and performance. DynamoDB can handle concurrent requests from multiple IoT devices without throttling or data loss. To prevent duplicate requests from causing inconsistencies or data loss, the Lambda function can use DynamoDB conditional writes to check if the unique identifier for each request already exists in the table before processing the request. If the identifier exists, the function can skip or abort the request; otherwise, it can process the request and store the identifier in the table.
Reference: Using conditional writes
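The conditional-write check can be sketched as follows. An in-memory dict stands in for the DynamoDB table so the example runs locally; with real DynamoDB, `conditional_put` would be `table.put_item(Item={...}, ConditionExpression="attribute_not_exists(request_id)")`, and the duplicate case would surface as a caught `ConditionalCheckFailedException`. The request IDs and payloads are hypothetical.

```python
# In-memory stand-in for the DynamoDB idempotency table.
_seen_requests = {}

class ConditionalCheckFailed(Exception):
    pass

def conditional_put(request_id: str) -> None:
    # Atomic "insert only if absent" -- the semantics DynamoDB conditional
    # writes provide across concurrent Lambda invocations.
    if request_id in _seen_requests:
        raise ConditionalCheckFailed(request_id)
    _seen_requests[request_id] = True

def handle_request(request_id: str, payload: str) -> str:
    try:
        conditional_put(request_id)
    except ConditionalCheckFailed:
        return "duplicate-skipped"
    # ... process the payload exactly once ...
    return f"processed:{payload}"

print(handle_request("req-1", "photo-a"))   # processed:photo-a
print(handle_request("req-1", "photo-a"))   # duplicate-skipped
```

Because the existence check and the insert happen in a single conditional write, a retried request is detected even when two invocations race.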
A developer has created an AWS Lambda function that consumes messages from an Amazon SQS standard queue. The developer notices that the Lambda function processes some messages multiple times.
How should the developer resolve this issue MOST cost-effectively?
- A . Change the SQS standard queue to an SQS FIFO queue by using the SQS message deduplication ID.
- B . Set up a dead-letter queue.
- C . Set the maximum concurrency limit of the Lambda function to 1.
- D . Change the message processing to use Amazon Kinesis Data Streams instead of Amazon SQS.
A
Explanation:
Amazon SQS standard queues provide at-least-once delivery, which means messages can be delivered more than once. When Lambda is triggered by an SQS standard queue, duplicate delivery can occur due to retries, visibility timeouts, or transient errors. Therefore, “processed multiple times” is expected behavior unless the system implements deduplication or idempotency.
The most cost-effective way within the provided options to reduce duplicate processing is to use an SQS FIFO queue with deduplication. FIFO queues support exactly-once processing semantics within the constraints of the service (by preventing duplicate message delivery within the deduplication interval when a MessageDeduplicationId is used or content-based deduplication is enabled). This directly mitigates duplicate processing without requiring large architectural changes.
Option B (DLQ) is important for handling poison messages, but it does not prevent duplicates in normal processing; it only captures messages that fail repeatedly.
Option C (concurrency = 1) reduces parallelism but does not eliminate duplicates; the same message can still be delivered again if the visibility timeout expires or retries occur.
Option D is a major redesign and not cost-effective for simply addressing duplicate SQS message processing.
Note: In real production systems, the best practice is to make processing idempotent (so duplicates do no harm). But among the choices, FIFO + deduplication is the most direct and cost-effective fix.
Therefore, switch to an SQS FIFO queue and use deduplication.
A developer is updating several AWS Lambda functions and notices that all the Lambda functions share the same custom libraries. The developer wants to centralize all the libraries, update the libraries in a convenient way, and keep the libraries versioned.
Which solution will meet these requirements with the LEAST development effort?
- A . Create an AWS CodeArtifact repository that contains all the custom libraries.
- B . Create a custom container image for the Lambda functions to save all the custom libraries.
- C . Create a Lambda layer that contains all the custom libraries.
- D . Create an Amazon EFS file system to store all the custom libraries.
C
Explanation:
AWS Lambda layers are specifically designed to share common code and dependencies across multiple Lambda functions. A layer is a ZIP archive that contains libraries, a custom runtime, or other function dependencies. Layers are versioned, so the developer can publish a new layer version when libraries change and then update functions to reference the new version. This directly meets the requirements to centralize libraries, update conveniently, and keep libraries versioned, while requiring minimal development work.
Using a single shared layer significantly reduces duplication across function deployment packages. It also simplifies CI/CD because the developer can build and publish the layer independently and reuse it across many functions.
Option A (CodeArtifact) is useful for dependency management, but it still requires each Lambda deployment to package dependencies or download them during build. It’s more moving parts than a layer for this use case.
Option B (container images) can centralize dependencies but typically requires more effort: building, scanning, versioning container images, and updating function image URIs. It’s heavier operationally than layers for “shared custom libraries.”
Option D (EFS) can be mounted by Lambda, but introduces networking considerations (VPC config), EFS lifecycle/permissions, and is unnecessary for simply sharing libraries. Also, cold start and operational overhead can increase.
Therefore, creating a Lambda layer for the shared libraries is the least-effort, most standard approach.
