Practice Free DVA-C02 Exam Online Questions
A developer is building an application that includes an AWS Lambda function that is written in .NET Core. The Lambda function’s code needs to interact with Amazon DynamoDB tables and Amazon S3 buckets. The developer must minimize the Lambda function’s deployment time and invocation duration.
Which solution will meet these requirements?
- A . Increase the Lambda function’s memory.
- B . Include the entire AWS SDK for .NET in the Lambda function’s deployment package.
- C . Include only the AWS SDK for .NET modules for DynamoDB and Amazon S3 in the Lambda function’s deployment package.
- D . Configure the Lambda function to download the AWS SDK for .NET from an S3 bucket at runtime.
C
Explanation:
Including only the AWS SDK for .NET modules the function actually uses (DynamoDB and Amazon S3) keeps the deployment package small, which shortens deployment time and reduces cold-start initialization, lowering invocation duration. Bundling the entire SDK inflates the package, increasing memory alone does not shrink it, and downloading the SDK at runtime adds latency to every cold start.
A developer is designing an AWS Lambda function that creates temporary files that are less than 10 MB during invocation. The temporary files will be accessed and modified multiple times during invocation. The developer has no need to save or retrieve these files in the future.
Where should the temporary files be stored?
- A . the /tmp directory
- B . Amazon Elastic File System (Amazon EFS)
- C . Amazon Elastic Block Store (Amazon EBS)
- D . Amazon S3
A
Explanation:
AWS Lambda is a service that lets developers run code without provisioning or managing servers. Lambda provides a local file system, mounted at /tmp, that can be used to store temporary files during invocation. The /tmp directory provides 512 MB of ephemeral storage by default (configurable up to 10,240 MB). Files in /tmp are accessible only to the execution environment that created them; they may persist across invocations that reuse the same environment, but they are not durable and are discarded when the environment is recycled. The developer can store temporary files that are less than 10 MB in the /tmp directory and access and modify them multiple times during invocation, with the lowest latency of the listed options because /tmp is local disk.
Reference: [What Is AWS Lambda? – AWS Lambda]
[AWS Lambda Execution Environment – AWS Lambda]
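As a sketch, a handler along these lines uses /tmp as scratch space (the file name and CSV layout are illustrative, not from the question):

```python
import os

SCRATCH = os.path.join("/tmp", "working-set.csv")  # Lambda's writable scratch directory

def handler(event, context):
    # Write an intermediate file (< 10 MB), then modify and re-read it in place.
    with open(SCRATCH, "w") as f:
        f.write("store_id,amount\n1,100\n")
    with open(SCRATCH, "a") as f:
        f.write("2,250\n")
    with open(SCRATCH) as f:
        rows = f.read().splitlines()
    # The file is scratch data only; nothing needs to survive the invocation.
    return {"rows": len(rows) - 1}
```

Nothing here reaches EFS, EBS, or S3, which is exactly the point: the data is throwaway, so the built-in local file system is both the cheapest and the fastest choice.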
A retail company runs a sales analytics application that uses an AWS Lambda function to process transaction data that is stored in Amazon DocumentDB. The application aggregates daily sales data across 500 stores and uses the data to generate reports for senior managers.
Application users report that the application is taking longer to generate reports and that their requests sometimes time out. A developer investigates and notices that the application’s average response time for report generation has increased from 3 seconds to over 25 seconds.
The developer needs to identify the application’s performance bottlenecks.
Which solution will meet this requirement with the LEAST operational overhead?
- A . Enable AWS X-Ray tracing for the Lambda function and DocumentDB cluster. Implement custom subsegments to track query execution to identify slow-performing queries.
- B . Add Amazon CloudWatch Logs error streaming. Create custom CloudWatch metrics based on the logs. Create a CloudWatch dashboard that shows Lambda metrics.
- C . Modify the Lambda function to use DocumentDB connection pooling. Implement async/await patterns for database operations.
- D . Add logging statements within the Lambda function to output query execution times and database connection attempts. Store the logs in Amazon CloudWatch Logs. Use CloudWatch Logs Insights to analyze the logs.
A
Explanation:
The goal is to identify performance bottlenecks across a Lambda function that calls a database (DocumentDB). The most operationally efficient way to pinpoint where time is being spent, without building custom logging or metrics pipelines, is distributed tracing. AWS X-Ray provides end-to-end request traces, timing breakdowns, and service maps that help identify whether latency is dominated by Lambda initialization, downstream calls, or database queries.
By enabling X-Ray tracing on the Lambda function, the developer can capture traces for report-generation requests and examine segments that show duration spent inside the function and in downstream dependencies. To specifically diagnose database time, the developer can add custom subsegments around DocumentDB operations (query execution, connection acquisition, cursor iteration). This produces precise timing data for each step, making it straightforward to identify slow queries, excessive connection setup time, or serialization overhead. Because X-Ray aggregates and visualizes traces, the developer can quickly compare “fast” and “slow” traces and isolate the bottleneck.
Option B focuses on errors and generic metrics dashboards; it’s useful for monitoring, but it won’t precisely attribute the extra 22+ seconds to a particular downstream call path.
Option C proposes performance improvements (pooling/async), but the question asks to identify bottlenecks first, and it also requires code changes and validation without confirming the root cause.
Option D requires adding detailed logging statements and then querying logs; this is more development effort and more ongoing log volume/cost than turning on X-Ray and using trace analysis, especially when the intent is bottleneck identification rather than permanent instrumentation.
Therefore, A is the best choice: enable AWS X-Ray for the Lambda function and trace DocumentDB interactions with custom subsegments to quickly and efficiently identify the specific source(s) of increased latency.
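With the AWS X-Ray SDK for Python, a custom subsegment is opened with xray_recorder.in_subsegment("name") from the aws_xray_sdk package. The timing idea it captures can be sketched with a plain context manager (a stdlib stand-in for illustration, not the real SDK; the segment names and placeholder workloads are invented):

```python
import time
from contextlib import contextmanager

timings = {}  # name -> elapsed seconds, standing in for X-Ray subsegment durations

@contextmanager
def subsegment(name):
    # Record how long the wrapped block takes, as an X-Ray subsegment would.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Wrap each suspect step of report generation to see where the time goes.
with subsegment("aggregate-daily-sales"):
    total = sum(range(500))  # placeholder for the DocumentDB aggregation query

with subsegment("render-report"):
    report = f"total={total}"  # placeholder for report formatting
```

In the real service, each subsegment shows up on the trace timeline in the X-Ray console, so comparing a 3-second trace against a 25-second trace immediately reveals which step grew.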
A developer is building two microservices that use an Amazon SQS queue to communicate. The messages that the microservices send to one another contain sensitive information. The developer must ensure the messages are stored and are encrypted at rest.
Which solution will meet these requirements?
- A . Add a policy to the SQS queue that sets the aws:SecureTransport condition.
- B . Configure the microservices to use the server-side encryption (SSE) option within the messages to send messages to the SQS queue.
- C . Enable the server-side encryption (SSE) option on the SQS queue. Ensure the microservices contain the sensitive information within the body of the messages.
- D . Transmit sensitive information as part of the attributes of the messages that the microservices send.
C
Explanation:
The requirement is specifically encryption at rest for messages stored in Amazon SQS. The AWS-native way to accomplish this is to enable server-side encryption (SSE) on the SQS queue. When SSE is enabled, SQS encrypts messages as they are stored and decrypts them when they are retrieved, using keys from AWS Key Management Service (AWS KMS) (either an AWS managed key for SQS or a customer managed KMS key). This provides transparent encryption at rest without requiring application-level cryptography changes.
Option C correctly enables SSE on the queue and keeps the sensitive information in the message body. The body placement matters: SQS SSE encrypts the body of a message as it is stored, but it does not encrypt message metadata such as message attributes, the message ID, or timestamps. Keeping the sensitive data in the body is therefore what guarantees it is encrypted at rest.
Option A (aws:SecureTransport) enforces encryption in transit by requiring HTTPS/TLS, not encryption at rest. It is a good security control, but it does not satisfy the at-rest requirement by itself.
Option B is incorrect because SSE is not something the sender “sets within the message”; SSE is configured on the queue, not per-message by the microservices.
Option D suggests putting sensitive data in message attributes, which undermines the requirement: SQS SSE does not encrypt message attributes, and attributes are often used for routing and filtering and may be logged or surfaced in monitoring. The right pattern is to keep sensitive data in the message body and rely on queue-level SSE to encrypt stored messages.
Therefore, enabling SQS SSE on the queue (C) meets the requirement to ensure messages are encrypted at rest while stored in SQS.
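For reference, enabling SSE is a single queue attribute. A sketch of the Attributes dict you would pass to boto3's create_queue or set_queue_attributes (the queue name in the comment is illustrative; use exactly one of the two mechanisms):

```python
import json

# SSE-SQS: encryption at rest with SQS-owned keys (no KMS charges).
sse_sqs_managed = {"SqsManagedSseEnabled": "true"}

# SSE-KMS: encryption at rest with an AWS KMS key (AWS managed key shown).
sse_kms = {
    "KmsMasterKeyId": "alias/aws/sqs",        # or a customer managed key
    "KmsDataKeyReusePeriodSeconds": "300",    # how long SQS caches a data key
}

# e.g. boto3: sqs.create_queue(QueueName="orders", Attributes=sse_sqs_managed)
print(json.dumps(sse_kms))
```

Either attribute set makes SQS encrypt message bodies transparently on storage and decrypt them on retrieval, with no changes to the microservices' send/receive code.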
A company created an application to consume and process data. The application uses Amazon SQS and AWS Lambda functions. The application is currently working as expected, but it occasionally receives several messages that it cannot process properly. The company needs to clear these messages to prevent the queue from becoming blocked. A developer must implement a solution that makes queue processing always operational. The solution must give the company the ability to defer the messages with errors and save these messages for further analysis.
What is the MOST operationally efficient solution that meets these requirements?
- A . Configure Amazon CloudWatch Logs to save the error messages to a separate log stream.
- B . Create a new SQS queue. Set the new queue as a dead-letter queue for the application queue. Configure the Maximum Receives setting.
- C . Change the SQS queue to a FIFO queue. Configure the message retention period to 0 seconds.
- D . Configure an Amazon CloudWatch alarm for Lambda function errors. Publish messages to an Amazon SNS topic to notify administrator users.
B
Explanation:
Using a dead-letter queue (DLQ) with Amazon SQS is the most operationally efficient solution for handling unprocessable messages.
Amazon SQS Dead-Letter Queue:
A DLQ is used to capture messages that fail processing after a specified number of attempts.
Allows the application to continue processing other messages without being blocked.
Messages in the DLQ can be analyzed later for debugging and resolution.
Why DLQ is the Best Option:
Operational Efficiency: Automatically defers messages with errors, ensuring the queue is not blocked.
Analysis Ready: Messages in the DLQ can be inspected to identify recurring issues.
Scalable: Works seamlessly with Lambda and SQS at scale.
Why Not Other Options:
Option A: Logs the messages but does not resolve the queue blockage issue.
Option C: FIFO queues and 0-second retention do not provide error handling or analysis capabilities.
Option D: Alerts administrators but does not handle or store the unprocessable messages.
Steps to Implement:
Create a new SQS queue to serve as the DLQ.
Attach the DLQ to the primary queue and configure the Maximum Receives setting.
Using Amazon SQS Dead-Letter Queues
Best Practices for Using Amazon SQS with AWS Lambda
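The DLQ attachment itself is a redrive policy attribute on the source queue. A sketch of what that attribute looks like (the queue ARN is a placeholder):

```python
import json

# Redrive policy for the source queue: after 5 failed receive attempts,
# SQS moves the message to the dead-letter queue for later analysis.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",  # placeholder ARN
    "maxReceiveCount": "5",  # the "Maximum receives" setting
}

# e.g. boto3: sqs.set_queue_attributes(QueueUrl=url,
#     Attributes={"RedrivePolicy": json.dumps(redrive_policy)})
attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
```

Once this is in place, poison messages stop blocking the main queue automatically; no code changes are needed in the Lambda consumer.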
An application development team decides to use AWS X-Ray to monitor application code to analyze performance and perform root cause analysis.
What does the team need to do to begin using X-Ray? (Select TWO.)
- A . Log instrumentation output into an Amazon SQS queue.
- B . Use a visualization tool to view application traces.
- C . Instrument application code using the AWS SDK.
- D . Install the X-Ray agent on the application servers.
- E . Create an Amazon DynamoDB table to store the trace logs.
C,D
Explanation:
To begin using AWS X-Ray, the team must (1) instrument the application to emit trace segments/subsegments and (2) ensure there is a component that can send trace data to the X-Ray service.
Instrumenting code is typically done with the AWS X-Ray SDK (language-specific). This adds tracing to inbound requests and downstream calls, creates segments/subsegments, and attaches annotations/metadata. That corresponds to Option C.
For many compute environments (especially EC2, ECS on EC2, and on-prem servers), the application sends trace data via the X-Ray daemon/agent running locally. The daemon batches segments and forwards them to the X-Ray service endpoint. That corresponds to Option D.
Option B is helpful after traces exist (using the X-Ray console or CloudWatch ServiceLens), but it is not a prerequisite to “begin using” X-Ray.
Option A is unrelated. X-Ray does not require SQS for instrumentation output.
Option E is incorrect because X-Ray stores trace data in its managed backend; you do not create a DynamoDB table for trace logs.
Therefore, the team needs to instrument the code with the X-Ray SDK and install/run the X-Ray agent/daemon on the servers where the application runs.
A developer is writing an application that processes data delivered into an Amazon S3 bucket. The data is delivered approximately 10 times per day, and the developer expects the processing to complete in less than 1 minute on average.
How can the developer deploy and invoke the application with the LOWEST cost and LOWEST latency?
- A . Deploy the application as an AWS Lambda function and invoke it by using an Amazon CloudWatch alarm that is triggered by an S3 object upload.
- B . Deploy the application as an AWS Lambda function and invoke it by using an Amazon S3 event notification.
- C . Deploy the application as an AWS Lambda function and invoke it by using an Amazon CloudWatch scheduled event.
- D . Deploy the application on an Amazon EC2 instance and poll the S3 bucket for new objects.
B
Explanation:
AWS Lambda combined with Amazon S3 event notifications provides the most cost-effective and lowest-latency solution for processing objects uploaded to S3. AWS documentation states that S3 event notifications can invoke Lambda functions immediately after object creation, eliminating delays associated with polling or scheduled execution.
Because the data arrives only about 10 times per day and processing completes quickly, Lambda is ideal due to its pay-per-invocation pricing model. There is no need to maintain always-on infrastructure, which significantly reduces cost compared to running EC2 instances. The developer pays only for execution time and requests.
Using CloudWatch alarms (Option A) or scheduled events (Option C) introduces unnecessary latency and complexity. These approaches either delay processing or require periodic checks rather than reacting instantly to object uploads. Polling from EC2 (Option D) is the most expensive and least efficient option because it requires continuously running compute resources.
AWS best practices for event-driven architectures explicitly recommend S3 event notifications → Lambda for near-real-time processing of uploaded data. This approach minimizes operational overhead, achieves near-zero idle cost, and provides the fastest possible response to new data.
Therefore, invoking a Lambda function directly through an S3 event notification is the optimal solution.
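The wiring is a notification configuration on the bucket that targets the function. A sketch of the dict you would pass to boto3's put_bucket_notification_configuration (the function ARN and bucket name are placeholders):

```python
# S3 event notification: invoke the Lambda function on every object upload.
notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-data",  # placeholder
            "Events": ["s3:ObjectCreated:*"],  # fires on PUT, POST, COPY, and multipart completion
        }
    ]
}
# e.g. boto3: s3.put_bucket_notification_configuration(
#     Bucket="incoming-data", NotificationConfiguration=notification_config)
```

S3 then invokes the function asynchronously within moments of each upload, so there is no polling cost and no scheduling delay.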
A company has developed a new serverless application using AWS Lambda functions that will be deployed using the AWS Serverless Application Model (AWS SAM) CLI.
Which step should the developer complete prior to deploying the application?
- A . Compress the application to a zip file and upload it into AWS Lambda.
- B . Test the new AWS Lambda function by first tracing it in AWS X-Ray.
- C . Bundle the serverless application using a SAM package.
- D . Create the application environment using the eb create my-env command.
C
Explanation:
This step should be completed prior to deploying the application because it prepares the application artifacts for deployment. The AWS Serverless Application Model (AWS SAM) is a framework that simplifies building and deploying serverless applications on AWS. The AWS SAM CLI is a command-line tool that helps you create, test, and deploy serverless applications using AWS SAM templates. The sam package command bundles the application artifacts, such as Lambda function code and API definitions, and uploads them to an Amazon S3 bucket. The command also returns a CloudFormation template that is ready to be deployed with the sam deploy command. Compressing the application to a zip file and uploading it to AWS Lambda will not work because it does not use AWS SAM templates or CloudFormation. Testing the new Lambda function by first tracing it in AWS X-Ray will not prepare the application for deployment, but only monitor its performance and errors. Creating the application environment using the eb create my-env command will not work because it is a command for AWS Elastic Beanstalk, not AWS SAM.
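A minimal template of the kind sam package bundles might look like this (resource and path names are illustrative); sam package uploads the CodeUri artifact to S3 and emits a packaged template ready for sam deploy:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler        # module.function entry point
      Runtime: python3.12
      CodeUri: ./src              # local path that sam package uploads to S3
```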
A company needs to develop a proof of concept for a web service application. The application will show the weather forecast for one of the company’s office locations. The application will provide a REST endpoint that clients can call. Where possible, the application should use caching features provided by AWS to limit the number of requests to the backend service. The application backend will receive a small amount of traffic only during testing.
Which approach should the developer take to provide the REST endpoint MOST cost-effectively?
- A . Create a container image. Deploy the container image by using Amazon EKS. Expose the functionality by using Amazon API Gateway.
- B . Create an AWS Lambda function by using AWS SAM. Expose the Lambda functionality by using Amazon API Gateway.
- C . Create a container image. Deploy the container image by using Amazon ECS. Expose the functionality by using Amazon API Gateway.
- D . Create a microservices application. Deploy the application to AWS Elastic Beanstalk. Expose the AWS Lambda functionality by using an Application Load Balancer.
B
Explanation:
Requirement Summary:
Simple REST endpoint for weather data
Light backend usage (POC, testing)
Wants caching support to reduce backend calls
Must be cost-effective
Evaluate Options:
B: Lambda + API Gateway + SAM. Serverless means no idle costs. API Gateway can enable response caching at the endpoint level. SAM makes deployment simple and repeatable. Perfect for low-traffic testing.
A: EKS + API Gateway. High operational overhead; not cost-effective for a POC or testing.
C: ECS + API Gateway. Similar to A; container orchestration is not needed for a light REST endpoint.
D: Elastic Beanstalk + ALB + Lambda. Overly complex and does not directly expose Lambda. Elastic Beanstalk is better suited for full applications, not small REST functions.
AWS SAM: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html
API Gateway caching: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
Serverless Best Practices: https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
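The caching piece is switched on at the API Gateway stage; in a SAM template that is a few properties on the Api resource, along the lines of this sketch (stage name, cache size, and TTL are illustrative):

```yaml
Resources:
  WeatherApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: test
      CacheClusterEnabled: true     # provision an API Gateway response cache
      CacheClusterSize: '0.5'       # smallest size (GB), cheapest for a POC
      MethodSettings:
        - ResourcePath: '/*'        # apply to all methods on the stage
          HttpMethod: '*'
          CachingEnabled: true
          CacheTtlInSeconds: 300    # serve cached forecasts for 5 minutes
```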
A developer wants to insert a record into an Amazon DynamoDB table as soon as a new file is added to an Amazon S3 bucket.
Which set of steps would be necessary to achieve this?
- A . Create an event with Amazon EventBridge that will monitor the S3 bucket and then insert the records into DynamoDB.
- B . Configure an S3 event to invoke an AWS Lambda function that inserts records into DynamoDB.
- C . Create an AWS Lambda function that will poll the S3 bucket and then insert the records into DynamoDB.
- D . Create a cron job that will run at a scheduled time and insert the records into DynamoDB.
B
Explanation:
Amazon S3 is a service that provides highly scalable, durable, and secure object storage. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and consistent performance with seamless scalability. AWS Lambda is a service that lets developers run code without provisioning or managing servers. The developer can configure an S3 event to invoke a Lambda function that inserts records into DynamoDB whenever a new file is added to the S3 bucket. This solution will meet the requirement of inserting a record into DynamoDB as soon as a new file is added to S3.
Reference: [Amazon Simple Storage Service (S3)]
[Amazon DynamoDB]
[What Is AWS Lambda? – AWS Lambda]
[Using AWS Lambda with Amazon S3 – AWS Lambda]
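The handler's core is reading the bucket and key out of the S3 event and writing an item; a minimal sketch (the table name and item attributes are illustrative, and the put_item call is left as a comment so the snippet stays self-contained):

```python
def handler(event, context):
    # Each S3 event notification carries one or more Records.
    items = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # DynamoDB item for the new file (attribute names are illustrative).
        items.append({"pk": {"S": f"{bucket}/{key}"},
                      "size": {"N": str(record["s3"]["object"]["size"])}})
    # e.g. boto3: for item in items:
    #     dynamodb.put_item(TableName="files", Item=item)
    return items

# A trimmed-down sample of the event shape S3 sends to Lambda:
sample_event = {"Records": [{"s3": {"bucket": {"name": "incoming-data"},
                                    "object": {"key": "report.csv", "size": 2048}}}]}
```

Because S3 invokes the function per upload, each new file produces its DynamoDB record within moments, with no polling loop or cron schedule involved.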
