Practice Free DVA-C02 Exam Online Questions
A developer built an application that calls an external API to obtain data, processes the data, and saves the result to Amazon S3. The developer built a container image with all of the necessary dependencies to run the application as a container.
The application runs locally and requires minimal CPU and RAM resources. The developer has created an Amazon ECS cluster. The developer needs to run the application hourly in Amazon ECS.
Which solution will meet these requirements with the LEAST amount of infrastructure management overhead?
- A . Add a capacity provider to manage instances.
- B . Add an Amazon EC2 instance that runs the application.
- C . Define a task definition with an AWS Fargate launch type.
- D . Create an Amazon ECS cluster and add the managed node groups feature to run the application.
C
Explanation:
To run a containerized job hourly in ECS with minimal infrastructure management, the best choice is AWS Fargate. Fargate is a serverless compute engine for containers that runs ECS tasks without the developer needing to provision, patch, or scale EC2 instances.
With option C, the developer defines an ECS task definition using the Fargate launch type and schedules it (commonly via Amazon EventBridge Scheduler / EventBridge rule triggering RunTask). ECS handles the placement, networking, and execution of the task on managed infrastructure. This is the least operational overhead approach because there are no instances to manage and no capacity planning required, which fits an hourly job with small CPU/RAM needs.
Option A (capacity providers) is primarily for managing EC2 capacity for ECS and still requires underlying instances, which increases operational burden.
Option B runs the app on an EC2 instance, which requires instance lifecycle management, OS patching, scaling considerations, and failure handling.
Option D refers to managed node groups (an EKS concept) and does not apply cleanly to ECS. Even in a Kubernetes context, nodes still need management compared to Fargate.
Therefore, defining a Fargate task is the simplest and lowest-management solution.
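As a rough illustration of the scheduling described above, the following sketch uses the AWS SDK for Python (boto3) to create an Amazon EventBridge Scheduler schedule that runs the Fargate task every hour. All ARNs, names, subnet IDs, and security group IDs are hypothetical placeholders.

```python
import boto3

# Minimal sketch: run an ECS task on Fargate every hour with EventBridge Scheduler.
# Every identifier below (ARNs, subnet, security group) is a hypothetical placeholder.
scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="hourly-data-fetch",                  # hypothetical schedule name
    ScheduleExpression="rate(1 hour)",         # invoke the task once per hour
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/app-cluster",      # ECS cluster ARN
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-ecs-run-task",   # role allowed to call ecs:RunTask
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/data-fetch:1",
            "LaunchType": "FARGATE",           # no EC2 instances to provision or patch
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0abc1234"],
                    "SecurityGroups": ["sg-0abc1234"],
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    },
)
```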
A company hosts a stateless web application with low data storage in a single AWS Region. The company wants to increase the resiliency of the application to include a multi-Region presence. The company wants to set the recovery time objective (RTO) and recovery point objective (RPO) to hours. The company needs a low-cost and low-complexity disaster recovery (DR) strategy.
Which DR strategy should the company use?
- A . Warm standby
- B . Pilot light
- C . Backup and restore
- D . Multi-site active-active
B
Explanation:
Why Option B is Correct: The pilot light strategy keeps a minimal version of the environment in another Region and scales up during a disaster. It achieves an RTO and RPO of hours at a low cost and complexity.
Why Other Options are Incorrect:
Option A: Warm standby is more expensive as it keeps a scaled-down, fully functioning version running in another Region.
Option C: Backup and restore typically results in an RTO and RPO longer than a few hours.
Option D: Multi-site active-active is costly and more complex than required.
Reference: AWS Documentation, Disaster Recovery Strategies on AWS
A company’s developer has deployed an application in AWS by using AWS CloudFormation. The CloudFormation stack includes parameters in AWS Systems Manager Parameter Store that the application uses as configuration settings. The application can modify the parameter values.
When the developer updated the stack to create additional resources with tags, the developer noted that the parameter values were reset and that the values ignored the latest changes made by the application. The developer needs to change the way the company deploys the CloudFormation stack. The developer also needs to avoid resetting the parameter values outside the stack.
Which solution will meet these requirements with the LEAST development effort?
- A . Modify the CloudFormation stack to set the deletion policy to Retain for the Parameter Store parameters.
- B . Create an Amazon DynamoDB table as a resource in the CloudFormation stack to hold configuration data for the application. Migrate the parameters that the application is modifying from Parameter Store to the DynamoDB table.
- C . Create an Amazon RDS DB instance as a resource in the CloudFormation stack. Create a table in the database for parameter configuration. Migrate the parameters that the application is modifying from Parameter Store to the configuration table.
- D . Modify the CloudFormation stack policy to deny updates on Parameter Store parameters.
A
Explanation:
Problem: CloudFormation updates reset Parameter Store parameters, disrupting application behavior.
Deletion Policy: CloudFormation resource attributes such as DeletionPolicy (and its update-time counterpart, UpdateReplacePolicy) control what happens to a resource when it is removed from a stack, when the stack is deleted, or when an update replaces the resource. Setting Retain instructs CloudFormation to preserve the existing resource and its current state instead of deleting or recreating it.
Least Development Effort: This solution involves a simple CloudFormation template modification, requiring minimal code changes.
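As a minimal sketch, assuming the parameter is defined as an AWS::SSM::Parameter resource, the template change amounts to adding the Retain attributes to that resource, as in the fragment below (built here as a Python dictionary). The logical ID, parameter name, and value are hypothetical, and UpdateReplacePolicy is shown alongside DeletionPolicy because the two attributes are commonly set together.

```python
import json

# Hypothetical CloudFormation template fragment showing Retain attributes on an
# AWS::SSM::Parameter resource, per the answer above.
template_fragment = {
    "Resources": {
        "AppConfigParameter": {               # hypothetical logical ID
            "Type": "AWS::SSM::Parameter",
            "DeletionPolicy": "Retain",       # keep the parameter if it leaves the stack
            "UpdateReplacePolicy": "Retain",  # keep the existing parameter if an update replaces it
            "Properties": {
                "Name": "/myapp/config/setting",  # hypothetical parameter name
                "Type": "String",
                "Value": "initial-value",
            },
        }
    }
}

print(json.dumps(template_fragment, indent=2))
```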
Reference: CloudFormation DeletionPolicy attribute: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
A developer maintains a serverless application that uses an Amazon API Gateway REST API to invoke an AWS Lambda function by using a non-proxy integration. The Lambda function returns data, which is stored in Amazon DynamoDB.
Several application users begin to receive intermittent errors from the API. The developer examines
Amazon CloudWatch Logs for the Lambda function and discovers several ProvisionedThroughputExceededException errors.
The developer needs to resolve the errors and ensure that the errors do not recur.
Which solution will meet these requirements?
- A . Use provisioned capacity mode for the DynamoDB table, and assign sufficient capacity units. Configure the Lambda function to retry requests with exponential backoff.
- B . Update the REST API to send requests on an Amazon SQS queue. Configure the Lambda function to process requests from the queue.
- C . Configure a usage plan for the REST API.
- D . Update the REST API to invoke the Lambda function asynchronously.
A
Explanation:
Option A: Provisioned Capacity with Exponential Backoff:
Using provisioned capacity ensures sufficient throughput for the DynamoDB table.
Configuring the Lambda function to implement exponential backoff retries reduces the chance of exceeding capacity during peak usage.
This combination addresses the root cause (ProvisionedThroughputExceededException) and prevents errors without overprovisioning.
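A minimal sketch of the retry half of this answer, assuming a boto3-based Lambda handler that writes to a hypothetical Orders table; the backoff timings and attempt count are illustrative only.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table name


def put_item_with_backoff(item, max_attempts=5):
    """Retry writes with exponential backoff and jitter when throughput is exceeded."""
    for attempt in range(max_attempts):
        try:
            table.put_item(Item=item)
            return
        except ClientError as error:
            if error.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise  # unrelated errors are not retried here
            # Exponential backoff with jitter: roughly 0.1s, 0.2s, 0.4s, ...
            time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))
    raise RuntimeError("Write failed after retries; consider raising provisioned capacity")
```

Note that the AWS SDKs already retry throttled DynamoDB requests a limited number of times by default, so explicit backoff like this supplements, rather than replaces, the capacity units that option A assigns.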
Why Other Options Are Incorrect:
Option B: Using SQS adds unnecessary latency and complexity. The issue lies in DynamoDB throughput, not request management.
Option C: A usage plan for the API does not address throughput issues in DynamoDB.
Option D: Invoking the Lambda function asynchronously does not resolve the DynamoDB capacity issue and might lead to delayed processing.
Reference: DynamoDB Provisioned Throughput Documentation
A company wants to migrate applications from its on-premises servers to AWS. As a first step, the company is modifying and migrating a non-critical application to a single Amazon EC2 instance. The application will store information in an Amazon S3 bucket. The company needs to follow security
best practices when deploying the application on AWS.
Which approach should the company take to allow the application to interact with Amazon S3?
- A . Create an IAM role that has administrative access to AWS. Attach the role to the EC2 instance.
- B . Create an IAM user. Attach the AdministratorAccess policy. Copy the generated access key and secret key. Within the application code, use the access key and secret key along with the AWS SDK to communicate with Amazon S3.
- C . Create an IAM role that has the necessary access to Amazon S3. Attach the role to the EC2 instance.
- D . Create an IAM user. Attach a policy that provides the necessary access to Amazon S3. Copy the generated access key and secret key. Within the application code, use the access key and secret key along with the AWS SDK to communicate with Amazon S3.
A company’s application has an AWS Lambda function that processes messages from IoT devices. The company wants to monitor the Lambda function to ensure that the Lambda function is meeting its required service level agreement (SLA).
A developer must implement a solution to determine the application’s throughput in near real time. The throughput must be based on the number of messages that the Lambda function receives and processes in a given time period. The Lambda function performs initialization and post-processing steps that must not factor into the throughput measurement.
What should the developer do to meet these requirements?
- A . Use the Lambda function’s ConcurrentExecutions metric in Amazon CloudWatch to measure the throughput.
- B . Modify the application to log the calculated throughput to Amazon CloudWatch Logs. Use Amazon EventBridge to invoke a separate Lambda function to process the logs on a schedule.
- C . Modify the application to publish custom Amazon CloudWatch metrics when the Lambda function receives and processes each message. Use the metrics to calculate the throughput.
- D . Use the Lambda function’s Invocations metric and Duration metric to calculate the throughput in Amazon CloudWatch.
A developer created an AWS Lambda function that accesses resources in a VPC. The Lambda function polls an Amazon Simple Queue Service (Amazon SQS) queue for new messages through a VPC endpoint. Then the function calculates a rolling average of the numeric values that are contained in the messages. After initial tests of the Lambda function, the developer found that the value of the rolling average that the function returned was not accurate.
How can the developer ensure that the function calculates an accurate rolling average?
- A . Set the function’s reserved concurrency to 1. Calculate the rolling average in the function. Store the calculated rolling average in Amazon ElastiCache.
- B . Modify the function to store the values in Amazon ElastiCache. When the function initializes, use the previous values from the cache to calculate the rolling average.
- C . Set the function’s provisioned concurrency to 1. Calculate the rolling average in the function. Store the calculated rolling average in Amazon ElastiCache.
- D . Modify the function to store the values in the function’s layers. When the function initializes, use the previously stored values to calculate the rolling average.
A company had an Amazon RDS for MySQL DB instance that was named mysql-db. The DB instance was deleted within the past 90 days.
A developer needs to find which IAM user or role deleted the DB instance in the AWS environment.
Which solution will provide this information?
- A . Retrieve the AWS CloudTrail events for the resource mysql-db where the event name is DeleteDBInstance. Inspect each event.
- B . Retrieve the Amazon CloudWatch log events from the most recent log stream within the rds/mysql-db log group. Inspect the log events.
- C . Retrieve the AWS X-Ray trace summaries. Filter by services with the name mysql-db. Inspect the ErrorRootCauses values within each summary.
- D . Retrieve the AWS Systems Manager deletions inventory. Filter the inventory by deletions that have a TypeName value of RDS. Inspect the deletion details.
A
Explanation:
To determine who deleted an Amazon RDS DB instance, the correct source of truth is AWS CloudTrail, which records API activity and includes the identity (IAM user, role, assumed role session) that made the call. Deleting an RDS instance is performed through the RDS API action DeleteDBInstance, and CloudTrail logs an event for this action that contains key fields such as userIdentity, eventTime, eventName, sourceIPAddress, and request parameters identifying the DB instance (for example, dBInstanceIdentifier).
Because the DB instance was deleted within the past 90 days, CloudTrail Event History is commonly sufficient (Event History typically retains 90 days of management events). If the company has a CloudTrail trail configured to deliver logs to S3/CloudWatch Logs, those logs can also be queried for the same event.
Option A directly matches this: retrieve CloudTrail events for DeleteDBInstance related to mysql-db and inspect the userIdentity section to identify the IAM principal that performed the deletion.
Option B is not reliable because RDS log groups (when enabled) capture database engine logs (slow query log, error log, general log) and do not record control-plane actions like who deleted the instance.
Option C is incorrect because X-Ray traces application-level request flows; it does not audit administrative actions like RDS deletion.
Option D is not applicable: Systems Manager inventory does not provide authoritative records of RDS deletions or the IAM principal responsible.
Therefore, CloudTrail lookup for DeleteDBInstance events is the correct solution.
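As a brief sketch of that lookup with boto3, the snippet below filters CloudTrail Event History for DeleteDBInstance events from the last 90 days and prints the identity that made each matching call. The field names come from the standard CloudTrail event record format.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up DeleteDBInstance management events from the last 90 days of Event History.
paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteDBInstance"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=90),
    EndTime=datetime.now(timezone.utc),
)

for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        # Keep only the event for the deleted instance named mysql-db.
        if detail.get("requestParameters", {}).get("dBInstanceIdentifier") == "mysql-db":
            print(detail["eventTime"], detail["userIdentity"])  # who made the call
```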
A developer creates an AWS Lambda function that is written in Java. During testing, the Lambda function does not work how the developer expected. The developer wants to use tracing capabilities to troubleshoot the problem.
Which AWS service should the developer use to accomplish this goal?
- A . AWS Trusted Advisor
- B . Amazon CloudWatch
- C . AWS X-Ray
- D . AWS CloudTrail
A developer is building an application to process a stream of customer orders. The application sends processed orders to an Amazon Aurora MySQL database. The application needs to process the orders in batches.
The developer needs to configure a workflow that ensures each record is processed before the application sends each order to the database.
Options:
- A . Use Amazon Kinesis Data Streams to stream the orders. Use an AWS Lambda function to process the orders. Configure an event source mapping for the Lambda function, and set the MaximumBatchingWindowInSeconds setting to 300.
- B . Use Amazon SQS to stream the orders. Use an AWS Lambda function to process the orders. Configure an event source mapping for the Lambda function, and set the MaximumBatchingWindowInSeconds setting to 0.
- C . Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the orders. Use an Amazon EC2 instance to process the orders. Configure an event source mapping for the EC2 instance, and increase the payload size limit to 36 MB.
- D . Use Amazon DynamoDB Streams to stream the orders. Use an Amazon ECS cluster on AWS Fargate to process the orders. Configure an event source mapping for the cluster, and set the BatchSize setting to 1.
A
Explanation:
Step 1: Understanding the Problem
Processing in Batches: The application must process records in groups.
Sequential Processing: Each record in the batch must be processed before writing to Aurora.
Solution Goals: Use services that support ordered, batched processing and integrate with Aurora.
Step 2: Solution Analysis
Option A:
Amazon Kinesis Data Streams supports ordered data processing.
AWS Lambda can process batches of records via event source mapping with MaximumBatchingWindowInSeconds for timing control.
Configuring the batching window ensures efficient processing and compliance with the workflow.
Correct Option.
Option B:
Amazon SQS is not designed for streaming; it provides reliable, unordered message delivery.
Setting MaximumBatchingWindowInSeconds to 0 disables batching, which is contrary to the requirement.
Not suitable.
Option C:
Amazon MSK provides Kafka-based streaming but requires custom EC2-based processing.
This increases system complexity and operational overhead.
Not ideal for serverless requirements.
Option D:
DynamoDB Streams is event-driven but lacks strong native integration for batch ordering.
Using ECS adds unnecessary complexity.
Not suitable.
Step 3: Implementation Steps for Option A
Set up Kinesis Data Stream:
Configure shards based on the expected throughput.
Configure Lambda with Event Source Mapping:
Enable Kinesis as the event source for Lambda.
Set MaximumBatchingWindowInSeconds to 300 to accumulate data for processing.
Example:
{
"EventSourceArn": "arn:aws:kinesis:region:account-id:stream/stream-name",
"BatchSize": 100,
"MaximumBatchingWindowInSeconds": 300
}
Write Processed Data to Aurora:
Use AWS RDS Data API for efficient database operations from Lambda.
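The event source mapping settings shown in the JSON example above can also be applied programmatically. The following boto3 sketch assumes a hypothetical stream ARN and function name.

```python
import boto3

lambda_client = boto3.client("lambda")

# Create the Kinesis event source mapping with the batching settings from the
# example above; the stream ARN and function name are hypothetical placeholders.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/orders-stream",
    FunctionName="process-orders",
    StartingPosition="LATEST",           # required for Kinesis event sources
    BatchSize=100,
    MaximumBatchingWindowInSeconds=300,  # accumulate records for up to 5 minutes
)
```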
Reference: Amazon Kinesis Data Streams Developer Guide; AWS Lambda Event Source Mapping; Batch Processing with Lambda
