Practice Free DVA-C02 Exam Online Questions
A development team uses an Amazon DynamoDB table as a database for an application. The team notices errors and slowdowns in the application during peak usage hours. The slowdowns and errors occur during a surge of user logins. The application receives frequent write requests. Application logs indicate that write requests are being throttled.
The development team needs to reduce the application latency and resolve the throttling errors.
Which solutions will meet these requirements? (Select TWO.)
- A . Create a DynamoDB Accelerator (DAX) cluster. Update the application to send read requests to the DAX endpoint.
- B . Increase the provisioned throughput of the table.
- C . Reduce the frequency of write requests by using error retries and exponential backoff.
- D . Reduce control plane operations that occur during peak usage hours by consolidating the DynamoDB tables and indexes.
- E . Change the table’s capacity mode to on-demand.
B, E
Explanation:
The issue is write throttling during peak login surges. In DynamoDB provisioned capacity mode, throttling happens when the workload exceeds the table’s configured write capacity units (WCU) (or a partition becomes “hot”). To resolve throttling and reduce latency, the table must have sufficient write capacity available when the surge occurs.
Option B directly addresses this by increasing the table’s provisioned throughput (WCU). With more write capacity, DynamoDB can accept more writes per second without returning throttling errors, which reduces retries and improves end-user latency during peak usage.
Option E is also effective: switching the table to on-demand capacity mode automatically accommodates traffic spikes without the team needing to forecast and pre-provision capacity. On-demand is designed for unpredictable workloads and can scale to handle sudden surges, reducing the risk of throttling caused by under-provisioning. For many teams, this is the fastest operational fix because it removes manual capacity planning and reactive scaling steps.
Why the other options are not the best fit:
A (DAX) improves read performance and reduces read latency, but it does not fix write throttling.
C (retries and exponential backoff) is a best practice for handling throttling gracefully, but it does not “resolve” throttling; it reduces contention and improves success rates at the cost of higher latency, and it does not increase available capacity.
D focuses on control plane operations (schema/index/table management) which are unrelated to data plane write throttling during login surges.
Therefore, the two solutions that meet the requirements to resolve write throttling and reduce latency are B (increase provisioned throughput) and E (switch to on-demand capacity mode for automatic scaling).
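Option C's technique (retries with exponential backoff) is still worth pairing with the capacity fixes above. A minimal stdlib-only sketch, with no boto3 assumed: `ThrottledError` is a hypothetical stand-in for DynamoDB's `ProvisionedThroughputExceededException`, and `put_item` is any callable that performs the write.

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for DynamoDB's ProvisionedThroughputExceededException."""
    pass

def put_with_backoff(put_item, item, max_attempts=5, base_delay=0.05):
    """Retry a throttled write with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return put_item(item)
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # Full jitter: sleep a random amount up to the exponential cap.
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

The full-jitter variant spreads retries out in time, so a surge of throttled clients does not retry in lockstep and re-trigger the same throttling.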
A company uses AWS Secrets Manager to store API keys for external REST services. The company uses an AWS Lambda function to rotate the API keys on a regular schedule.
Due to an error in the Lambda function, the API keys are successfully updated in AWS Secrets Manager but are not updated in the external REST services. Before investigating the root cause of the issue, the company wants to resume requests to the external REST services as quickly as possible.
Which solution will meet this requirement with the LEAST operational overhead?
- A . Manually create a new version of the API keys in AWS Secrets Manager and update the keys in the external REST services.
- B . Manually retrieve the new version of the API keys from AWS Secrets Manager and update the keys in the external REST services.
- C . Roll back to the last known working version of the API keys in AWS Secrets Manager.
- D . Fix and reinvoke the AWS Lambda rotation function to generate a new version of the API keys in AWS Secrets Manager and update the keys in the external REST services.
C
Explanation:
AWS Secrets Manager is designed to securely store, rotate, and manage sensitive information such as API keys, database credentials, and tokens. A core feature of Secrets Manager is secret versioning, which enables recovery from failed or partial rotation events. Each secret version is associated with staging labels, most notably AWSCURRENT and AWSPREVIOUS.
According to AWS documentation, during a successful rotation, Secrets Manager creates a new secret version and assigns it the AWSCURRENT label, while the previous version is retained with the AWSPREVIOUS label. Applications that retrieve secrets typically reference the secret using the AWSCURRENT label. If a rotation fails after updating the secret value but before synchronizing the change with the external system, authentication failures can occur.
AWS explicitly states that in such scenarios, customers can restore service quickly by moving the AWSCURRENT label back to the previous version. This rollback capability allows applications to immediately resume using the last known valid credentials without modifying application code, redeploying resources, or interacting with external systems.
This approach represents the least operational overhead because it is a metadata-only operation within Secrets Manager. No new secrets are created, no Lambda functions are invoked, and no manual updates are required in the external REST services. Additionally, AWS recommends postponing troubleshooting until service availability is restored, which aligns precisely with the requirement in this scenario.
Other options introduce unnecessary complexity, increased risk of human error, and longer recovery times. Therefore, rolling back to the last working secret version is the fastest, safest, and AWS-recommended solution.
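The label movement described above can be modeled in a few lines. This is a stdlib sketch of the bookkeeping, not the real API: in practice the whole rollback is a single UpdateSecretVersionStage call that moves the AWSCURRENT label between version IDs.

```python
def rollback_secret(versions):
    """Move AWSCURRENT back to the version holding AWSPREVIOUS.

    versions: dict of version_id -> set of staging labels, a simplified
    model of how Secrets Manager tracks secret versions. Returns the
    version id that applications will now read via AWSCURRENT.
    """
    current = next(v for v, labels in versions.items() if "AWSCURRENT" in labels)
    previous = next(v for v, labels in versions.items() if "AWSPREVIOUS" in labels)
    # Swap the two labels: the old credentials become current again.
    versions[current].discard("AWSCURRENT")
    versions[previous].discard("AWSPREVIOUS")
    versions[previous].add("AWSCURRENT")
    versions[current].add("AWSPREVIOUS")
    return previous
```

Because only labels move, no secret values change and nothing needs to be updated in the external services, which is exactly why this is the least-overhead recovery.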
A software company is migrating a single-page application from on-premises servers to the AWS Cloud by using AWS Amplify Hosting. The application relies on an API that was created with an existing GraphQL schema. The company needs to migrate the API along with the application.
Which solution will meet this requirement with the LEAST amount of configuration?
- A . Create a new API by using the Amplify CLI’s amplify import api command. Select REST as the service to use. Add the existing schema to the new API.
- B . Create a new API in Amazon API Gateway by using the existing schema. Use the Amplify CLI’s amplify add api command. Select the API as the application’s backend environment.
- C . Create a new API in AWS AppSync by using the existing schema. Use the Amplify CLI’s amplify import api command. Select the API as the application’s backend environment.
- D . Create a new API by using the Amplify CLI’s amplify add api command. Select GraphQL as the service to use. Add the existing schema to the new API.
D
Explanation:
AWS Amplify’s most direct support for GraphQL APIs is through AWS AppSync, and the Amplify CLI can generate and configure an AppSync GraphQL API directly from a schema with minimal setup. The requirement says the API already has an existing GraphQL schema, and the goal is to migrate it with the least configuration effort.
Option D is the simplest: run amplify add api, choose GraphQL, and provide the existing schema. Amplify then provisions the AppSync API, sets up the schema, creates the backend resources (depending on chosen data sources), and wires the configuration into the Amplify project so the SPA can consume the API.
Option A is incorrect because it selects REST and does not align with an existing GraphQL schema.
Option B is incorrect because API Gateway is not the native GraphQL service and would require additional mapping or proxy logic, which means more configuration.
Option C can be valid if an AppSync API already exists and you want to import it, but the question asks to “migrate the API along with the application” with least configuration. Creating it directly in Amplify is typically less configuration than creating separately and importing.
Therefore, using Amplify CLI to add a GraphQL API and supply the existing schema is the least-config approach.
A developer is building a video search application. Video files average 2.5 TB in size. Files must have instant access for the first 90 days. After 90 days, files can take more than 10 minutes to load.
Which solution will meet these requirements in the MOST cost-effective way?
- A . Store files in Amazon EFS Standard, then transition to EFS Standard-IA.
- B . Store files in Amazon S3 Glacier Deep Archive for 90 days, then transition to S3 Glacier Flexible Retrieval.
- C . Store files in Amazon EBS for 90 days, then transition to S3 Glacier Deep Archive.
- D . Store files in Amazon S3 Glacier Instant Retrieval for 90 days, then transition to S3 Glacier Flexible Retrieval.
D
Explanation:
Amazon S3 Glacier storage classes are designed for cost-effective archival with varying retrieval times. S3 Glacier Instant Retrieval provides millisecond access, making it suitable for workloads that require immediate access but at a lower cost than S3 Standard.
For the first 90 days, instant access is required, which rules out S3 Glacier Deep Archive and S3 Glacier Flexible Retrieval as primary storage because they have minutes-to-hours retrieval times. After 90 days, the requirement allows retrieval times exceeding 10 minutes, making S3 Glacier Flexible Retrieval appropriate.
Option D uses Instant Retrieval initially and transitions to Flexible Retrieval, optimizing both performance and cost. EFS and EBS (Options A and C) are significantly more expensive for large objects and are not intended for massive archival workloads.
Option B reverses the access requirements and is invalid.
AWS documentation explicitly recommends using S3 lifecycle policies to transition objects between Glacier storage classes based on access requirements.
Therefore, Option D is the most cost-effective and AWS-aligned solution.
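The lifecycle transition described above can be expressed as a single rule. A sketch of the rule shape, built with the stdlib only: the dict below matches the `Rules` structure that boto3's `put_bucket_lifecycle_configuration` accepts, but the actual API call is omitted because it needs real credentials and a bucket; the prefix and rule ID are illustrative.

```python
def build_lifecycle_rule(prefix="videos/", transition_days=90):
    """Lifecycle rule: objects land in S3 Glacier Instant Retrieval on
    day 0 and transition to Glacier Flexible Retrieval (storage class
    name GLACIER) after 90 days."""
    return {
        "ID": "archive-videos",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 0, "StorageClass": "GLACIER_IR"},
            {"Days": transition_days, "StorageClass": "GLACIER"},
        ],
    }
```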
A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.
Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.
The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the applications use is below the provisioned capacity.
Which solution will resolve this issue in the MOST cost-effective way?
- A . Change the capacity mode from provisioned to on-demand.
- B . Double the number of shards until the throttling errors stop occurring.
- C . Change the partition key from service name to creation timestamp.
- D . Use a separate Kinesis stream for each service to generate the logs.
C
Explanation:
Issue Analysis:
The stream uses service name as the partition key. This can cause "hot partition" issues when a few service names generate significantly more logs compared to others, causing uneven distribution of data across shards.
Metrics show that the write capacity used is below provisioned capacity, which confirms that the throughput errors are due to shard-level limits and not overall capacity.
Option C: Change Partition Key to Creation Timestamp:
By changing the partition key to the creation timestamp (or a composite key including timestamp), the distribution of data across shards can be randomized, ensuring an even spread of records.
This resolves the shard overutilization issue and eliminates ProvisionedThroughputExceededException.
Why Other Options Are Incorrect:
Option A: Switching to on-demand capacity mode might temporarily alleviate the issue, but the root cause (hot partitioning) remains unresolved.
Option B: Adding shards increases capacity but does not fix the skewed data distribution caused by using the service name as the partition key.
Option D: Creating separate streams for each service adds unnecessary complexity and does not scale well as the number of services grows.
Reference: Best Practices for Kinesis Data Streams Partition Key Design
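The hot-partition effect is easy to see with a simplified model of shard assignment. Kinesis maps the MD5 hash of the partition key onto shard hash-key ranges; the sketch below approximates that with evenly split ranges (a simplification, not the exact service behavior), and the service names are illustrative.

```python
import hashlib

def shard_for(partition_key, shard_count=15):
    """Approximate Kinesis shard assignment: MD5 of the partition key
    mapped onto evenly split hash ranges."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * shard_count // (1 << 128)

# With only 10 service names as keys, at most 10 of the 15 shards can
# ever receive data, and a chatty service pins all its traffic to one shard.
services = [f"service-{i}" for i in range(10)]
shards_by_service = {shard_for(s) for s in services}

# Adding a high-cardinality component (e.g. the creation timestamp) spreads
# a single service's records across many shards.
spread = {shard_for(f"service-0#{ts}") for ts in range(1000)}
```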
A company’s developer has deployed an application in AWS by using AWS CloudFormation. The CloudFormation stack includes parameters in AWS Systems Manager Parameter Store that the application uses as configuration settings. The application can modify the parameter values.
When the developer updated the stack to create additional resources with tags, the developer noticed that the parameter values were reset, ignoring the latest changes made by the application. The developer needs to change the way the company deploys the CloudFormation stack. The developer also needs to avoid resetting the parameter values outside the stack.
Which solution will meet these requirements with the LEAST development effort?
- A . Modify the CloudFormation stack to set the deletion policy to Retain for the Parameter Store parameters.
- B . Create an Amazon DynamoDB table as a resource in the CloudFormation stack to hold configuration data for the application. Migrate the parameters that the application is modifying from Parameter Store to the DynamoDB table.
- C . Create an Amazon RDS DB instance as a resource in the CloudFormation stack. Create a table in the database for parameter configuration. Migrate the parameters that the application is modifying from Parameter Store to the configuration table.
- D . Modify the CloudFormation stack policy to deny updates on Parameter Store parameters
A
Explanation:
Problem: CloudFormation updates reset Parameter Store parameters, disrupting application behavior.
Deletion Policy: CloudFormation’s DeletionPolicy attribute controls what happens to a resource when it is removed from the stack or replaced during an update. Setting it to ‘Retain’ instructs CloudFormation to preserve the resource rather than delete or recreate it.
Least Development Effort: This solution involves a simple CloudFormation template modification, requiring minimal code changes.
Reference: CloudFormation DeletionPolicy attribute: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
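A sketch of how the resource attributes look in the template, assembled here as a Python dict so the shape is checkable; the logical ID and parameter name are illustrative, not from the question. UpdateReplacePolicy is the companion attribute that applies when an update would replace the resource.

```python
import json

# Template fragment with DeletionPolicy (and the related UpdateReplacePolicy
# attribute) set to Retain, as Option A describes.
resource = {
    "AppConfigParameter": {
        "Type": "AWS::SSM::Parameter",
        "DeletionPolicy": "Retain",
        "UpdateReplacePolicy": "Retain",
        "Properties": {
            "Name": "/app/config/setting",  # illustrative parameter name
            "Type": "String",
            "Value": "initial-value",
        },
    }
}
template_fragment = json.dumps({"Resources": resource}, indent=2)
```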
A developer is building an application that uses Amazon DynamoDB. The developer wants to retrieve multiple specific items from the database with a single API call.
Which DynamoDB API call will meet these requirements with the MINIMUM impact on the database?
- A . BatchGetItem
- B . GetItem
- C . Scan
- D . Query
A
Explanation:
BatchGetItem retrieves up to 100 specific items, identified by their primary keys, in a single API call. Because it reads only the requested items, it has far less impact on the database than a Scan, which reads every item in the table. GetItem returns only one item per call, and Query retrieves the items that share a partition key rather than an arbitrary list of specific items.
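The request shape for a batch read can be sketched without calling AWS at all. The table name and key attribute below are illustrative; the dict a function like this returns is the same structure boto3's `dynamodb.batch_get_item` takes as its arguments.

```python
def build_batch_get_request(table, keys):
    """Build a BatchGetItem request for a list of string partition keys.

    BatchGetItem accepts at most 100 items per call; larger workloads
    must be split across multiple calls.
    """
    if len(keys) > 100:
        raise ValueError("BatchGetItem accepts at most 100 items per call")
    return {
        "RequestItems": {
            # "pk" is a hypothetical partition key attribute name.
            table: {"Keys": [{"pk": {"S": k}} for k in keys]}
        }
    }
```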
A developer is incorporating AWS X-Ray into an application that handles personally identifiable information (PII). The application is hosted on Amazon EC2 instances. The application trace messages include encrypted PII and go to Amazon CloudWatch. The developer needs to ensure that no PII goes outside of the EC2 instances.
Which solution will meet these requirements?
- A . Manually instrument the X-Ray SDK in the application code.
- B . Use the X-Ray auto-instrumentation agent.
- C . Use Amazon Macie to detect and hide PII. Call the X-Ray API from AWS Lambda.
- D . Use AWS Distro for Open Telemetry.
A
Explanation:
This solution will meet the requirements by allowing the developer to control what data is sent to X-Ray and CloudWatch from the application code. The developer can filter out any PII from the trace messages before sending them to X-Ray and CloudWatch, ensuring that no PII goes outside of the EC2 instances.
Option B is not optimal because it will automatically instrument all incoming and outgoing requests from the application, which may include PII in the trace messages.
Option C is not optimal because it will require additional services and costs to use Amazon Macie and AWS Lambda, which may not be able to detect and hide all PII from the trace messages.
Option D is not optimal because AWS Distro for OpenTelemetry, like the auto-instrumentation agent, captures trace data automatically and does not by itself give the developer control over which fields leave the instance.
Reference: [AWS X-Ray SDKs]
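With manual instrumentation, the application can scrub payloads before anything is attached to a trace segment. A stdlib sketch of that filtering step: the field names are illustrative, and in real code this would run before calling the X-Ray SDK's annotation or metadata APIs.

```python
def scrub_pii(record, pii_fields=("email", "ssn", "phone")):
    """Redact PII fields from a log/trace payload before it is attached
    to an X-Ray segment. Because the application controls exactly what
    reaches the trace, no PII leaves the instance."""
    return {
        k: ("[REDACTED]" if k in pii_fields else v)
        for k, v in record.items()
    }
```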
A developer is receiving an intermittent ProvisionedThroughputExceededException error from an application that is based on Amazon DynamoDB. According to the Amazon CloudWatch metrics for the table, the application is not exceeding the provisioned throughput.
What could be the cause of the issue?
- A . The DynamoDB table storage size is larger than the provisioned size.
- B . The application is exceeding capacity on a particular hash key.
- C . The DynamoDB table is exceeding the provisioned scaling operations.
- D . The application is exceeding capacity on a particular sort key.
B
Explanation:
DynamoDB distributes throughput across partitions based on the hash key. A hot partition (caused by high usage of a specific hash key) can result in a ProvisionedThroughputExceededException, even if overall usage is below the provisioned capacity.
Why Option B is Correct:
Partition-Level Limits: Each partition has a limit of 3,000 read capacity units or 1,000 write capacity units per second.
Hot Partition: Excessive use of a single hash key can overwhelm its partition.
Why Not Other Options:
Option A: DynamoDB storage size does not affect throughput.
Option C: Provisioned scaling operations are unrelated to throughput errors.
Option D: Sort keys do not impact partition-level throughput.
DynamoDB Partition Key Design Best Practices
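The partition-level limits above make the failure mode easy to check numerically. A small sketch, with hypothetical per-key write rates: a single hash key exceeding the per-partition write limit throttles even though the table-level metric stays below provisioned capacity.

```python
# Per-partition hard limits in DynamoDB (units per second).
PARTITION_WCU_LIMIT = 1000
PARTITION_RCU_LIMIT = 3000

def hot_keys(write_rates, limit=PARTITION_WCU_LIMIT):
    """Given observed writes/sec per hash key, return the keys that can
    throttle their partition regardless of total table capacity."""
    return [k for k, rate in write_rates.items() if rate > limit]
```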
