Practice Free DVA-C02 Exam Online Questions
A developer is writing a new serverless application for a company. Several other developers must collaborate on the code for this application, and the company expects frequent changes to the code. The developer needs to deploy the code from source control to AWS Lambda with the fewest number of manual steps.
Which strategy for the build and deployment should the developer use to meet these requirements?
- A . Build the code locally, and then upload the code into the source control system. When a release is needed, run AWS CodePipeline to extract the uploaded build and deploy the resources.
- B . Use the AWS SAM CLI to build and deploy the application from the developer’s local machine with the latest version checked out locally.
- C . Use AWS CodeBuild and AWS CodePipeline to invoke builds and corresponding deployments when configured source-controlled branches have pull requests merged into them.
- D . Use the Lambda console to upload a .zip file of the application that is created by the AWS SAM CLI build command.
C
Explanation:
The requirement emphasizes collaboration, frequent code changes, and deploying from source control with the fewest manual steps. This aligns with a CI/CD pipeline approach where commits or merges automatically trigger builds and deployments. Using AWS CodePipeline orchestrates the stages (source, build, deploy), and AWS CodeBuild performs the build/package steps for the Lambda/serverless application. When integrated with a repository (such as CodeCommit, GitHub, or another supported provider), CodePipeline can automatically start when changes are merged into specific branches, enabling consistent, repeatable deployments.
Option C provides the least operational overhead for teams: developers push code and merge pull requests; the pipeline automatically builds artifacts, runs tests (if configured), packages dependencies, and deploys updates to Lambda (often via AWS SAM/CloudFormation under the hood). This removes manual “build on my laptop” drift, prevents inconsistent deployments between developers, and supports frequent iteration safely.
Option B (SAM CLI from a local machine) requires a developer to manually run build/deploy commands and assumes each developer’s environment is configured correctly. That increases manual steps and the risk of differences across machines.
Option D (uploading zip in the console) is highly manual and not suitable for frequent changes or team collaboration.
Option A is also suboptimal because storing built artifacts in source control is an anti-pattern; builds should be reproducible and produced by CI, not committed binaries.
Therefore, C is the best choice: set up CodePipeline + CodeBuild so merges to controlled branches automatically trigger builds and deployments to AWS Lambda with minimal manual intervention.
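As a sketch, the CodeBuild stage of such a pipeline could use a minimal buildspec that builds and deploys the serverless application with AWS SAM. This is an illustrative config fragment only; the stack name and runtime version are placeholders:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.12
  build:
    commands:
      - sam build
      # Deploy via CloudFormation; stack name is a placeholder
      - sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
        --stack-name my-serverless-app --capabilities CAPABILITY_IAM
        --resolve-s3
```

With this in place, a merged pull request on a configured branch triggers the pipeline, which runs the buildspec and updates the Lambda functions with no manual steps.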
An application that is deployed to Amazon EC2 is using Amazon DynamoDB. The application calls the DynamoDB REST API. Periodically, the application receives a ProvisionedThroughputExceededException error when the application writes to a DynamoDB table.
Which solutions will mitigate this error MOST cost-effectively? (Select TWO.)
- A . Modify the application code to perform exponential backoff when the error is received.
- B . Modify the application to use the AWS SDKs for DynamoDB.
- C . Increase the read and write throughput of the DynamoDB table.
- D . Create a DynamoDB Accelerator (DAX) cluster for the DynamoDB table.
- E . Create a second DynamoDB table. Distribute the reads and writes between the two tables.
A,B
Explanation:
These solutions will mitigate the error most cost-effectively because they do not require increasing the provisioned throughput of the DynamoDB table or creating additional resources. Exponential backoff is a retry strategy that increases the waiting time between retries to reduce the number of requests sent to DynamoDB. The AWS SDKs for DynamoDB implement exponential backoff by default and also provide other features such as automatic pagination and encryption. Increasing the read and write throughput of the DynamoDB table, creating a DynamoDB Accelerator (DAX) cluster, or creating a second DynamoDB table will incur additional costs and complexity.
Reference: [Error Retries and Exponential Backoff in AWS], [Using the AWS SDKs with DynamoDB]
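The retry strategy described above can be sketched in a few lines. This is a minimal illustration of exponential backoff with full jitter; the exception class is a stand-in for the SDK's real throttling error:

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for the SDK's ProvisionedThroughputExceededException."""

def call_with_backoff(operation, max_attempts=8, base_delay=0.05, max_delay=20.0):
    """Retry `operation` with exponential backoff and full jitter.

    This mirrors what the AWS SDKs do automatically when DynamoDB
    returns a throttling error, which is why options A and B are the
    cost-effective fixes.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0.0, delay))  # full jitter spreads retries out
```

Because the AWS SDKs ship this behavior by default, simply switching from raw REST calls to an SDK (option B) gets the same effect without hand-written retry code.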
A developer is running an application on an Amazon EC2 instance. When the application attempts to read from an Amazon S3 bucket, the request fails. The developer determines that the IAM role associated with the EC2 instance is missing the required Amazon S3 read permissions.
The developer must grant the application access to read from the S3 bucket with the LEAST application disruption.
Which solution will meet this requirement?
- A . Add the permission to the IAM role. Terminate the EC2 instance and launch a new instance.
- B . Add the permission to the IAM role so that the change takes effect automatically.
- C . Add the permission to the IAM role. Hibernate and restart the EC2 instance.
- D . Add the permission to the S3 bucket and restart the EC2 instance.
B
Explanation:
AWS IAM role permissions are evaluated dynamically. According to AWS documentation, when an IAM role attached to an EC2 instance is updated, the updated permissions are available to applications running on that instance without requiring a restart. The EC2 instance retrieves temporary credentials through the instance metadata service, and those credentials are refreshed automatically.
In this scenario, the least disruptive approach is to update the IAM role policy to include the required s3:GetObject permission. Once the permission is added, the application can immediately retry access to the S3 bucket without restarting, stopping, or terminating the EC2 instance.
Terminating or restarting the instance (Options A and C) introduces unnecessary downtime and operational overhead. Modifying the S3 bucket policy (Option D) is unnecessary because the problem lies with missing role permissions, not bucket access configuration.
AWS best practices strongly recommend using IAM roles for EC2 and updating permissions dynamically to avoid service interruption. Therefore, simply adding the required permission to the role is the correct and most efficient solution.
A cloud-based video surveillance company is developing an application that analyzes video files.
After the application analyzes the files, the company can discard the files.
The company stores the files in an Amazon S3 bucket. The files are 1 GB in size on average. No file is larger than 2 GB. An AWS Lambda function will run one time for each video file that is processed. The processing is very I/O intensive, and the application must read each file multiple times.
Which solution will meet these requirements in the MOST performance-optimized way?
- A . Attach an Amazon EBS volume that is larger than 1 GB to the Lambda function. Copy the files from the S3 bucket to the EBS volume.
- B . Attach an Elastic Network Adapter (ENA) to the Lambda function. Use the ENA to read the video files from the S3 bucket.
- C . Increase the ephemeral storage size to 2 GB. Copy the files from the S3 bucket to the /tmp directory of the Lambda function.
- D . Configure the Lambda function code to read the video files directly from the S3 bucket.
C
Explanation:
The workload is explicitly very I/O intensive and requires reading the same file multiple times. Reading repeatedly from S3 across the network will introduce latency and repeated network I/O. For best performance, the function should download the file once to fast local storage and perform all repeated reads locally.
AWS Lambda provides ephemeral storage mounted at /tmp. AWS allows increasing this ephemeral storage beyond the default size. Because the files are at most 2 GB, the developer can configure the Lambda function’s ephemeral storage to 2 GB and copy each S3 object into /tmp before processing. This converts repeated network reads into repeated local filesystem reads, which is significantly faster and more consistent for I/O-heavy workloads.
Option D (read directly from S3) is the least optimized because each pass over the file requires additional network I/O and S3 GET requests.
Option A is not feasible: Lambda cannot directly attach and manage an EBS volume like an EC2 instance can.
Option B is not applicable: ENA is an EC2 networking feature; Lambda networking is managed by the service and does not allow attaching ENAs for performance tuning in that way.
Because the files are disposable after processing, using ephemeral /tmp storage is also cost-effective and operationally simple, and it is aligned with the “process once per file” pattern.
Therefore, increase Lambda ephemeral storage and copy the video into /tmp for repeated reads.
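A handler following this pattern might look like the sketch below. The event shape (bucket/key fields) is an assumption to adapt to your trigger, and the boto3 import is guarded so the pure file-reading part can run anywhere:

```python
import os

try:
    import boto3
    s3 = boto3.client("s3")  # created outside the handler for reuse across invocations
except ImportError:  # lets the pure-Python parts below run without the AWS SDK
    s3 = None

def process_file(path, passes=3):
    """Read the downloaded file several times; repeated reads hit fast
    local /tmp storage instead of going back to S3 over the network."""
    total = 0
    for _ in range(passes):
        with open(path, "rb") as f:
            while chunk := f.read(1024 * 1024):  # 1 MiB chunks
                total += len(chunk)
    return total

def handler(event, context):
    bucket = event["bucket"]  # assumed event fields
    key = event["key"]
    local_path = os.path.join("/tmp", os.path.basename(key))
    s3.download_file(bucket, key, local_path)  # single network transfer per invocation
    try:
        return process_file(local_path)
    finally:
        os.remove(local_path)  # free ephemeral storage for the next invocation
```

Deleting the file afterward matters because /tmp can persist between invocations of a warm execution environment.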
A company is building a serverless application on AWS. The application uses Amazon API Gateway and AWS Lambda. The company wants to deploy the application to its development, test, and production environments.
Which solution will meet these requirements with the LEAST development effort?
- A . Use API Gateway stage variables and create Lambda aliases to reference environment-specific resources.
- B . Use Amazon ECS to deploy the application to the environments.
- C . Duplicate the code for each environment. Deploy the code to a separate API Gateway stage.
- D . Use AWS Elastic Beanstalk to deploy the application to the environments.
A
Explanation:
Requirement Summary:
- Deploy a serverless application that uses API Gateway and AWS Lambda
- Need dev, test, and prod environments
- Want the LEAST development effort

Evaluate Options:
Option A: API Gateway stage variables + Lambda aliases
- Most efficient and scalable approach
- API Gateway supports stage variables (for example, an env variable per stage)
- Lambda supports aliases (e.g., dev, test, prod)
- Each stage can be configured to point to a different alias of the same function
- Enables versioning, isolation, and low-effort management

Option B: Use Amazon ECS
- Overkill for a serverless setup: ECS is container based, not serverless
- Introduces unnecessary complexity

Option C: Duplicate code for each environment
- High operational overhead and poor maintainability

Option D: Use Elastic Beanstalk
- Not applicable: Elastic Beanstalk is for traditional application hosting and is not optimal for Lambda + API Gateway
Lambda Aliases: https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
API Gateway Stage Variables: https://docs.aws.amazon.com/apigateway/latest/developerguide/stage-variables.html
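The wiring can be sketched as follows. The Region, account ID, and function name are placeholders; the key idea is that one integration URI references a stage variable, and each stage sets that variable to a different Lambda alias:

```python
# Region, account ID, and function name below are placeholders.
INTEGRATION_URI = (
    "arn:aws:lambda:us-east-1:123456789012:function:orders-api"
    ":${stageVariables.lambdaAlias}"  # API Gateway resolves this per stage at invoke time
)

def stage_config(stage_name, alias):
    """Per-stage settings: one API definition serves all three environments."""
    return {"stageName": stage_name, "variables": {"lambdaAlias": alias}}

# Assumed usage with boto3's API Gateway client:
# apigateway.create_stage(restApiId=api_id, deploymentId=dep_id,
#                         **stage_config("prod", "prod"))
```

Promoting a release then becomes a matter of repointing a Lambda alias (e.g., prod) at a new function version, with no change to the API definition.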
A developer is receiving HTTP 400: ThrottlingException errors intermittently when calling the Amazon CloudWatch API. When a call fails, no data is retrieved.
What best practice should first be applied to address this issue?
- A . Contact AWS Support for a limit increase.
- B . Use the AWS CLI to get the metrics.
- C . Analyze the applications and remove the API call.
- D . Retry the call with exponential backoff.
D
Explanation:
Throttling errors from AWS APIs are transient. The recommended first step is to retry the failed call with exponential backoff, which the AWS SDKs implement by default. Requesting a limit increase (Option A) may eventually be appropriate, but it is not the first best practice to apply, and switching tools (Option B) or removing the call (Option C) does not address the throttling behavior.
A developer is building an application that processes a stream of user-supplied data. The data stream must be consumed by multiple Amazon EC2 based processing applications in parallel and in real time. Each processor must be able to resume without losing data if there is a service interruption. The application architect plans to add other processors in the near future, and wants to minimize the amount of data duplication involved.
Which solution will satisfy these requirements?
- A . Publish the data to Amazon Simple Queue Service (Amazon SQS).
- B . Publish the data to Amazon Data Firehose.
- C . Publish the data to Amazon EventBridge.
- D . Publish the data to Amazon Kinesis Data Streams.
D
Explanation:
Amazon Kinesis Data Streams supports multiple consumer applications reading the same stream in parallel and in real time. Each consumer tracks its own position in the stream with checkpoints (for example, through the Kinesis Client Library), so a processor can resume after a service interruption without losing data, and new processors can be added later without duplicating the data. A standard SQS queue (Option A) delivers each message to only one consumer, Data Firehose (Option B) delivers to managed destinations rather than custom EC2 consumers, and EventBridge (Option C) does not provide replayable, checkpointed stream consumption for this pattern.
An Amazon Kinesis Data Firehose delivery stream is receiving customer data that contains personally identifiable information. A developer needs to remove pattern-based customer identifiers from the data and store the modified data in an Amazon S3 bucket.
What should the developer do to meet these requirements?
- A . Implement Kinesis Data Firehose data transformation as an AWS Lambda function. Configure the function to remove the customer identifiers. Set an Amazon S3 bucket as the destination of the delivery stream.
- B . Launch an Amazon EC2 instance. Set the EC2 instance as the destination of the delivery stream. Run an application on the EC2 instance to remove the customer identifiers. Store the transformed data in an Amazon S3 bucket.
- C . Create an Amazon OpenSearch Service instance. Set the OpenSearch Service instance as the destination of the delivery stream. Use search and replace to remove the customer identifiers. Export the data to an Amazon S3 bucket.
- D . Create an AWS Step Functions workflow to remove the customer identifiers. As the last step in the workflow, store the transformed data in an Amazon S3 bucket. Set the workflow as the destination of the delivery stream.
A
Explanation:
Amazon Kinesis Data Firehose is a service that delivers real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and Amazon Kinesis Data Analytics. The developer can implement Kinesis Data Firehose data transformation as an AWS Lambda function. The function can remove pattern-based customer identifiers from the data and return the modified data to Kinesis Data Firehose. The developer can set an Amazon S3 bucket as the destination of the delivery stream.
Reference: [What Is Amazon Kinesis Data Firehose? – Amazon Kinesis Data Firehose] [Data Transformation – Amazon Kinesis Data Firehose]
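A transformation function for this pattern can be sketched as below. The regular expression is only an example (it masks SSN-style identifiers); the event/record shape follows the Firehose data transformation contract, where each record's data is base64 encoded:

```python
import base64
import re

# Example pattern only: masks SSN-style identifiers. Adjust to your data.
ID_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        scrubbed = ID_PATTERN.sub("[REDACTED]", payload)
        output.append({
            "recordId": record["recordId"],  # must be echoed back unchanged
            "result": "Ok",  # tells Firehose the record was transformed successfully
            "data": base64.b64encode(scrubbed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```

Firehose then delivers the scrubbed records to the configured S3 destination with no further infrastructure to manage.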
A company hosts its application in the us-west-1 Region. The company wants to add redundancy in the us-east-1 Region. The application secrets are stored in AWS Secrets Manager in us-west-1. A developer needs to replicate the secrets to us-east-1.
Which solution will meet this requirement?
- A . Configure secret replication for each secret. Add us-east-1 as a replication Region. Choose an AWS KMS key in us-east-1 to encrypt the replicated secrets.
- B . Create a new secret in us-east-1 for each secret. Configure secret replication in us-east-1. Set the source to be the corresponding secret in us-west-1. Choose an AWS KMS key in us-west-1 to encrypt the replicated secrets.
- C . Create a replication rule for each secret. Set us-east-1 as the destination Region. Configure the rule to run during secret rotation. Choose an AWS KMS key in us-east-1 to encrypt the replicated secrets.
- D . Create a Secrets Manager lifecycle rule to replicate each secret to a new Amazon S3 bucket in us-west-1. Configure an S3 replication rule to replicate the secrets to us-east-1.
A
Explanation:
AWS Secrets Manager supports multi-Region secret replication, which is designed specifically for redundancy, disaster recovery, and multi-Region applications. With this feature, the primary secret resides in one Region (here, us-west-1) and Secrets Manager automatically maintains a replica in another Region (us-east-1). This provides local read access and resilience if one Region is impaired.
Option A accurately describes the standard configuration: enable secret replication and add us-east-1 as the replica Region. Because encryption keys are Region-scoped, the replica secret in us-east-1 should be encrypted with a KMS key in us-east-1 (either the default Secrets Manager key for that Region or a customer managed key), satisfying encryption requirements and proper key locality.
Option B is incorrect because you don’t configure replication “from the destination.” Replication is configured on the primary secret, and the replica uses a KMS key in the replica Region, not in the source Region.
Option C is not how Secrets Manager replication works. Replication is not only during rotation; it maintains replicas continuously. The “replication rule during rotation” framing is not the standard mechanism.
Option D is inappropriate and insecure/operationally complex: exporting secrets to S3 for replication is not the recommended pattern and introduces unnecessary exposure.
Therefore, enable Secrets Manager multi-Region replication and encrypt replicas with a KMS key in the destination Region.
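As a sketch, the replication can be configured programmatically through the ReplicateSecretToRegions API. The secret name and KMS key ARN below are placeholders:

```python
def replication_request(secret_id, replica_region, kms_key_id):
    """Build the arguments for Secrets Manager's ReplicateSecretToRegions API.

    Note that the replica is encrypted with a KMS key in the
    *destination* Region, because KMS keys are Region-scoped.
    """
    return {
        "SecretId": secret_id,
        "AddReplicaRegions": [
            {"Region": replica_region, "KmsKeyId": kms_key_id},
        ],
    }

# Assumed usage with boto3 (secret name and key ARN are placeholders):
# client = boto3.client("secretsmanager", region_name="us-west-1")
# client.replicate_secret_to_regions(
#     **replication_request(
#         "prod/app/db-credentials",
#         "us-east-1",
#         "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
#     )
# )
```

The call is made against the primary secret in us-west-1; Secrets Manager then keeps the us-east-1 replica in sync automatically.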
A developer created an AWS Lambda function that accesses resources in a VPC. The Lambda function polls an Amazon Simple Queue Service (Amazon SQS) queue for new messages through a VPC endpoint. Then the function calculates a rolling average of the numeric values that are contained in the messages. After initial tests of the Lambda function, the developer found that the value of the rolling average that the function returned was not accurate.
How can the developer ensure that the function calculates an accurate rolling average?
- A . Set the function’s reserved concurrency to 1. Calculate the rolling average in the function. Store the calculated rolling average in Amazon ElastiCache.
- B . Modify the function to store the values in Amazon ElastiCache. When the function initializes, use the previous values from the cache to calculate the rolling average.
- C . Set the function’s provisioned concurrency to 1. Calculate the rolling average in the function. Store the calculated rolling average in Amazon ElastiCache.
- D . Modify the function to store the values in the function’s layers. When the function initializes, use the previously stored values to calculate the rolling average.
B
Explanation:
A rolling average requires state that persists across invocations. Lambda execution environments are ephemeral and may run concurrently, so values kept only in a single environment's memory are lost or inconsistent, which explains the inaccurate results. Storing the incoming values in Amazon ElastiCache and reading the previously stored values when the function initializes gives every invocation the complete data set, so the rolling average is accurate. Limiting concurrency (Options A and C) does not by itself preserve state between invocations, and Lambda layers (Option D) are read-only deployment artifacts, not a writable data store.
