Practice Free DVA-C02 Exam Online Questions
A company runs a web application on Amazon EC2 instances behind an Application Load Balancer. The application uses Amazon DynamoDB as its database. The company wants to ensure high performance for reads and writes.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure auto-scaling for the DynamoDB table with a target utilization of 70%. Set the minimum and maximum capacity units based on the expected workload.
- B . Use DynamoDB on-demand capacity mode for the table. Specify a maximum throughput higher than the expected peak read and write capacity units.
- C . Use DynamoDB provisioned throughput mode for the table. Create an Amazon CloudWatch alarm on the ThrottledRequests metric. Invoke an AWS Lambda function to increase provisioned capacity.
- D . Create an Amazon DynamoDB Accelerator (DAX) cluster. Configure the application to use the DAX endpoint.
A
Explanation:
Why Option A is Correct: Auto-scaling with a target utilization ensures the DynamoDB table dynamically adjusts capacity based on workload, maintaining high performance while optimizing cost. Setting a reasonable target utilization minimizes overprovisioning and throttling risks.
Why Other Options are Incorrect:
Option B: On-demand capacity is costlier than provisioned throughput for predictable workloads.
Option C: Using manual CloudWatch alarms and Lambda for scaling is less efficient and adds operational overhead.
Option D: DAX accelerates read performance but does not improve write performance.
AWS Documentation
Reference: DynamoDB Auto Scaling
A company runs a new application on AWS Elastic Beanstalk. The company needs to deploy updates to the application. The updates must not cause any downtime for application users. The deployment must forward a specified percentage of incoming client traffic to a new application version during an evaluation period.
Which deployment type will meet these requirements?
- A . Rolling
- B . Traffic-splitting
- C . In-place
- D . Immutable
B
Explanation:
AWS Elastic Beanstalk supports several deployment policies, and in this case, the requirement is to forward a specific percentage of traffic to the new version without causing downtime. The Traffic-splitting deployment policy is the most appropriate choice.
Traffic-splitting Deployment: This deployment method allows you to gradually shift a specified percentage of incoming traffic from the old environment version to the new one. During the evaluation period, if any issues are detected, the traffic can be redirected back to the old version.
No Downtime: This method ensures no downtime since both versions of the application run concurrently, and traffic is split between them.
Alternatives:
Rolling deployments (Option A): These gradually replace instances but may result in partial downtime if some instances fail during deployment.
In-place deployments (Option C): In-place deployments replace instances without creating new ones, which can lead to downtime.
Immutable deployments (Option D): While this ensures no downtime by creating entirely new instances, it doesn’t provide traffic splitting during the evaluation phase.
Elastic Beanstalk Deployment Policies
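Under Elastic Beanstalk's option-settings model, a traffic-splitting deployment can be sketched in an `.ebextensions` config file like the following (the percentage and evaluation time are illustrative values):

```yaml
# .ebextensions/traffic-splitting.config (values are illustrative)
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: TrafficSplitting
  aws:elasticbeanstalk:trafficsplitting:
    NewVersionPercent: 10    # send 10% of client traffic to the new version
    EvaluationTime: 15       # evaluate for 15 minutes before the full shift
```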
A developer is working on an ecommerce application that stores data in an Amazon RDS for MySQL cluster. The developer needs to implement a caching layer for the application to retrieve information about the most viewed products.
Which solution will meet these requirements?
- A . Edit the RDS for MySQL cluster by adding a cache node. Configure the cache endpoint instead of the cluster endpoint in the application.
- B . Create an Amazon ElastiCache (Redis OSS) cluster. Update the application code to use the ElastiCache (Redis OSS) cluster endpoint.
- C . Create an Amazon DynamoDB Accelerator (DAX) cluster in front of the RDS for MySQL cluster. Configure the application to connect to the DAX endpoint instead of the RDS endpoint.
- D . Configure the RDS for MySQL cluster to add a standby instance in a different Availability Zone. Configure the application to read the data from the standby instance.
B
Explanation:
Requirement Summary:
E-commerce application that stores data in an Amazon RDS for MySQL cluster
Needs a caching layer for the most viewed products
Evaluate Options:
Option A: Incorrect. RDS for MySQL does not support adding cache nodes, and there is no cache endpoint to configure.
Option B: Correct. Amazon ElastiCache (Redis OSS) is a purpose-built in-memory cache that sits in front of the database; the application is updated to read hot data, such as the most viewed products, through the ElastiCache cluster endpoint.
Option C: Incorrect. DynamoDB Accelerator (DAX) works only with DynamoDB tables and cannot cache RDS for MySQL data.
Option D: Incorrect. A Multi-AZ standby instance exists for failover, is not readable, and provides no caching.
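The cache-aside pattern that the ElastiCache option implies can be sketched as follows; a plain dict stands in for the Redis OSS cluster, and the database lookup is a hypothetical placeholder:

```python
# Cache-aside sketch: check the cache first, fall back to the database on a
# miss, then populate the cache. A dict stands in for Redis here; with
# ElastiCache you would use a Redis client pointed at the cluster endpoint.

cache = {}      # stand-in for the ElastiCache (Redis OSS) cluster
db_reads = 0    # counts how often the backend database is actually hit

def fetch_product_from_db(product_id):
    global db_reads
    db_reads += 1
    return {"id": product_id, "name": f"product-{product_id}"}  # placeholder row

def get_product(product_id):
    if product_id in cache:                        # cache hit: no DB round trip
        return cache[product_id]
    product = fetch_product_from_db(product_id)    # cache miss: read the DB
    cache[product_id] = product                    # populate for later readers
    return product

get_product(1)
get_product(1)
print(db_reads)  # 1 -- the second read was served entirely from the cache
```

For frequently viewed products, repeat reads are absorbed by the cache, which is exactly the load relief the question is after.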
A developer is creating an application that will give users the ability to store photos from their cellphones in the cloud. The application needs to support tens of thousands of users. The application uses an Amazon API Gateway REST API that is integrated with AWS Lambda functions to process the photos. The application stores details about the photos in Amazon DynamoDB.
Users need to create an account to access the application. In the application, users must be able to upload photos and retrieve previously uploaded photos. The photos will range in size from 300 KB to 5 MB.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon Cognito user pools to manage user accounts. Create an Amazon Cognito user pool authorizer in API Gateway to control access to the API. Use the Lambda function to store the photos and details in the DynamoDB table. Retrieve previously uploaded photos directly from the DynamoDB table.
- B . Use Amazon Cognito user pools to manage user accounts. Create an Amazon Cognito user pool authorizer in API Gateway to control access to the API. Use the Lambda function to store the photos in Amazon S3. Store the object’s S3 key as part of the photo details in the DynamoDB table. Retrieve previously uploaded photos by querying DynamoDB for the S3 key.
- C . Create an IAM user for each user of the application during the sign-up process. Use IAM authentication to access the API Gateway API. Use the Lambda function to store the photos in Amazon S3. Store the object’s S3 key as part of the photo details in the DynamoDB table. Retrieve previously uploaded photos by querying DynamoDB for the S3 key.
- D . Create a users table in DynamoDB. Use the table to manage user accounts. Create a Lambda authorizer that validates user credentials against the users table. Integrate the Lambda authorizer with API Gateway to control access to the API. Use the Lambda function to store the photos in Amazon S3. Store the object’s S3 key as part of the photo details in the DynamoDB table. Retrieve previously uploaded photos by querying DynamoDB for the S3 key.
B
Explanation:
Amazon Cognito user pools is a service that provides a secure user directory that scales to hundreds of millions of users. The developer can use Amazon Cognito user pools to manage user accounts and create an Amazon Cognito user pool authorizer in API Gateway to control access to the API. The developer can use the Lambda function to store the photos in Amazon S3, which is a highly scalable, durable, and secure object storage service. The developer can store the object’s S3 key as part of the photo details in the DynamoDB table, which is a fast and flexible NoSQL database service. The developer can retrieve previously uploaded photos by querying DynamoDB for the S3 key and fetching the photos from S3. This solution will meet the requirements with the least operational overhead.
Reference: [Amazon Cognito User Pools]
[Use Amazon Cognito User Pools – Amazon API Gateway]
[Amazon Simple Storage Service (S3)]
[Amazon DynamoDB]
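The storage pattern from option B can be sketched as follows: the photo bytes go to S3, and only the object key plus metadata goes into DynamoDB. The attribute names and key layout here are hypothetical illustrations:

```python
# Sketch of option B's data model: S3 holds the photo bytes, DynamoDB holds
# a small item that points at them. Attribute names are hypothetical.

def build_photo_record(user_id: str, photo_id: str) -> dict:
    s3_key = f"photos/{user_id}/{photo_id}.jpg"   # where the bytes live in S3
    return {
        "pk": user_id,      # partition key: one item collection per user
        "sk": photo_id,     # sort key: lets the app list a user's photos
        "s3_key": s3_key,   # pointer to the object, not the object itself
    }

record = build_photo_record("user-123", "photo-001")
print(record["s3_key"])  # photos/user-123/photo-001.jpg
```

Keeping 300 KB to 5 MB objects out of DynamoDB matters because DynamoDB items are capped at 400 KB; storing only the key sidesteps that limit.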
A developer is building a microservices-based application by using Python on AWS and several AWS services. The developer must use AWS X-Ray. The developer views the service map in the console to see the service dependencies. During testing, the developer notices that some services are missing from the service map.
What can the developer do to ensure that all services appear in the X-Ray service map?
- A . Modify the X-Ray Python agent configuration in each service to increase the sampling rate
- B . Instrument the application by using the X-Ray SDK for Python. Install the X-Ray SDK for all the services that the application uses
- C . Enable X-Ray data aggregation in Amazon CloudWatch Logs for all the services that the application uses
- D . Increase the X-Ray service map timeout value in the X-Ray console
B
Explanation:
AWS X-Ray SDK: The primary way to enable X-Ray tracing within applications. The SDK sends data about requests and subsegments to the X-Ray daemon for service map generation.
Instrumenting All Services: To visualize a complete microservice architecture on the service map, each relevant service must include the X-Ray SDK.
Reference: AWS X-Ray Documentation: https://docs.aws.amazon.com/xray/
X-Ray SDK for Python: https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python.html
A developer is re-architecting a caching solution that currently uses an Amazon ElastiCache (Redis OSS) cluster that has cluster mode enabled to store product information. The existing solution experiences significant load. All cached product data expires at the same time, which results in additional pressure on the backend database and poor performance for end users. The developer must resolve the performance issues in a way that maintains data freshness.
Which solution will meet these requirements?
- A . Increase the TTL of the product data in the cache.
- B . Increase the number of replica nodes. Disable cluster mode.
- C . Add a slight variance to the TTL setting by using a randomly generated time value.
- D . Increase the number of shards. Decrease the number of replica nodes in the cluster.
C
Explanation:
The described behavior is a classic cache stampede (also called the “thundering herd” problem): a large set of hot keys share the same expiration time, so they all expire simultaneously. When that happens, many requests miss the cache at the same moment and fall back to the backend database, creating a sudden spike in database load and causing increased latency and poor user experience.
The most effective fix that preserves freshness is to introduce jitter (a small, randomized variance) into the TTL values so cached objects do not all expire at the same time. With TTL jitter, each item still has a bounded lifetime (so freshness is maintained), but expirations become spread out over time. That smooths cache-miss rates and prevents synchronized regeneration, reducing bursts of database traffic. This is a common best practice in high-load caching systems precisely to avoid mass expiration events.
Option A (increase TTL) might reduce how frequently a stampede occurs, but it does not solve the root cause: keys can still align and expire together, and a longer TTL can also reduce freshness if product data changes.
Option B (more replicas, disable cluster mode) changes topology and read scaling, but it does not prevent synchronized expiry or the resulting backend spike; disabling cluster mode could also reduce scale for large keyspaces.
Option D (more shards) can improve cache throughput capacity, but it does not address the core issue that backend load spikes occur when items expire simultaneously; lowering replica count could even reduce read availability.
Therefore, C is the correct solution: keep TTL-based freshness, but add a randomized component (for example, TTL = baseTTL + random(0..jitter)) so expirations are distributed and the backend database is protected from sudden bursts.
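The TTL formula above is a one-liner in practice; the base TTL and jitter window below are illustrative values:

```python
import random

# TTL jitter sketch: every key keeps a bounded lifetime (freshness is
# preserved), but expirations spread across a window instead of landing
# on one instant and stampeding the backend database.
BASE_TTL = 3600   # one hour of freshness (illustrative)
JITTER = 300      # up to five extra minutes, drawn per key (illustrative)

def ttl_with_jitter() -> int:
    return BASE_TTL + random.randint(0, JITTER)

# e.g. redis_client.set(key, value, ex=ttl_with_jitter())
```

Because each key draws its own random offset, two keys written at the same moment still expire at different times.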
A developer writes an AWS Lambda function that processes new object uploads to an Amazon S3 bucket. The Lambda function runs for approximately 30 seconds. The function runs as expected under normal load conditions. Other Lambda functions in the AWS account also run as expected.
Occasionally, up to 500 new objects are written to the bucket every minute. Each new object write invokes the processing Lambda function during the high-volume periods through an event notification.
The developer must ensure that the processing function continues to run as expected during the high-volume periods.
Which solution will meet this requirement?
- A . Modify the function’s timeout setting.
- B . Add an additional Lambda layer to optimize the code execution.
- C . Configure a reserved concurrency quota for the function.
- D . Decrease the function’s memory allocation.
C
Explanation:
Amazon S3 event notifications can invoke AWS Lambda functions at very high rates during traffic spikes. In this scenario, up to 500 objects per minute can result in hundreds of concurrent Lambda invocations. By default, Lambda functions share the account’s unreserved concurrency pool, which can cause throttling if the pool is exhausted by sudden bursts.
AWS Lambda reserved concurrency guarantees a specific number of concurrent executions for a function. By configuring reserved concurrency, the developer ensures that sufficient execution capacity is always available for this specific function, regardless of other Lambda workloads in the account. This prevents throttling during high-volume S3 event bursts.
Increasing the timeout (Option A) does not address concurrency limits. Adding a layer (Option B) does not solve invocation throttling. Decreasing memory (Option D) can actually worsen performance and increase execution time.
AWS documentation explicitly recommends reserved concurrency when workloads experience unpredictable or bursty invocation patterns and must run reliably. Therefore, reserving concurrency is the correct and AWS-recommended solution.
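The scenario's numbers make the required concurrency easy to estimate with Little's law (concurrency ≈ arrival rate × duration), which also suggests a sensible value for the reserved concurrency quota:

```python
# Estimate the steady-state concurrency the S3 event burst demands:
# concurrency ≈ arrival rate (invocations/second) x duration (seconds).

objects_per_minute = 500
seconds_per_invocation = 30

concurrency_needed = objects_per_minute * seconds_per_invocation / 60
print(concurrency_needed)  # 250.0 -- a reasonable floor for the reserved quota
```

At roughly 250 concurrent executions during peaks, an unprotected function could easily exhaust a shared account pool, which is why reserving concurrency for it is the fix.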
A data visualization company wants to strengthen the security of its core applications. The applications are deployed on AWS across its development, staging, pre-production, and production environments. The company needs to encrypt all of its stored sensitive credentials. The sensitive credentials need to be rotated automatically. A version of the sensitive credentials needs to be stored for each environment.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Configure AWS Secrets Manager versions to store different copies of the same credentials across multiple environments
- B . Create a new parameter version in AWS Systems Manager Parameter Store for each environment. Store the environment-specific credentials in the parameter version.
- C . Configure the environment variables in the application code Use different names for each environment type
- D . Configure AWS Secrets Manager to create a new secret for each environment type. Store the environment-specific credentials in the secret
D
Explanation:
Secrets Management: AWS Secrets Manager is designed specifically for storing and managing sensitive credentials.
Environment Isolation: Creating separate secrets for each environment (development, staging, etc.) ensures clear separation and prevents accidental leaks.
Automatic Rotation: Secrets Manager provides built-in rotation capabilities, enhancing security posture.
Versioning: Tracking changes to secrets is essential for auditing and compliance.
Reference: AWS Secrets Manager: https://aws.amazon.com/secrets-manager/
Secrets Manager
Rotation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
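A sketch of option D's layout follows; the secret names are hypothetical, and in practice each dict would be passed to Secrets Manager's CreateSecret API:

```python
import json

# One secret per environment, as in option D. Names are hypothetical; each
# dict mirrors the parameters a CreateSecret call would take.

ENVIRONMENTS = ["development", "staging", "pre-production", "production"]

def create_secret_request(env: str) -> dict:
    return {
        "Name": f"myapp/{env}/db-credentials",     # one secret per environment
        "SecretString": json.dumps(
            {"username": "app", "password": "<set-per-env>"}  # placeholder
        ),
    }

requests = [create_secret_request(env) for env in ENVIRONMENTS]
print(requests[0]["Name"])  # myapp/development/db-credentials
```

Separate secrets keep the environments isolated while letting Secrets Manager rotate and version each one independently.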
A developer is creating an application that uses an AWS Lambda function to transform and load data from an Amazon S3 bucket. When the developer tests the application, the developer finds that some invocations of the Lambda function are slower than others.
The developer needs to update the Lambda function to have predictable invocation durations that run with low latency. Any initialization activities, such as loading libraries and instantiating clients, must run during allocation time rather than during actual function invocations.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create a schedule group in Amazon EventBridge Scheduler to invoke the Lambda function.
- B . Configure provisioned concurrency for the Lambda function to have the necessary number of execution environments.
- C . Use the $LATEST version of the Lambda function.
- D . Configure reserved concurrency for the Lambda function to have the necessary number of execution environments.
- E . Deploy changes, and publish a new version of the Lambda function.
B, E
Explanation:
Provisioned concurrency pre-initializes the requested number of execution environments, so initialization activities such as loading libraries and instantiating clients run during environment allocation rather than during invocations. This gives predictable invocation durations with low latency. Provisioned concurrency can be configured only on a published function version or an alias, not on the unpublished $LATEST version, so the developer must also deploy the changes and publish a new version. Reserved concurrency (Option D) only limits the function's maximum concurrency and does not pre-initialize environments, and an EventBridge Scheduler schedule (Option A) does not affect initialization latency.
A company wants to ensure that only one user from its Admin group has the permanent right to delete an Amazon EC2 resource. The company must not modify the existing Admin group policy.
What should a developer use to meet these requirements?
- A . AWS managed policy
- B . Inline policy
- C . IAM trust relationship
- D . AWS STS
B
Explanation:
An inline IAM policy is directly attached to a specific IAM user, group, or role and applies only to that principal. AWS documentation states that inline policies are useful when permissions should be tightly scoped and not reused.
In this scenario, the Admin group policy cannot be changed, but one specific user needs permanent delete permissions. Attaching an inline policy directly to that user grants the required permissions without impacting other Admin users.
AWS managed policies (Option A) are reusable and not suitable for user-specific access. IAM trust relationships (Option C) control role assumption, not resource permissions. AWS STS (Option D) provides temporary credentials, not permanent access.
Therefore, an inline policy is the correct solution.
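As an illustration, an inline policy scoped to EC2 instance deletion might look like the following (the account ID and region in the ARN are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/*"
    }
  ]
}
```

Attached inline to the single user, this grants the delete permission permanently without touching the shared Admin group policy.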
