Practice Free DVA-C02 Exam Online Questions
A developer has implemented an AWS Lambda function that inserts new customers into an Amazon RDS database. The function is expected to run hundreds of times each hour. The function and RDS database are in the same VPC.
The function is configured to use 512 MB of RAM and is based on the following pseudocode:
```
def lambda_handler(event, context):
    db = database.connect()
    db.statement("INSERT INTO Customers (CustomerName) VALUES (%s)", event.name)
    db.execute()
    db.close()
```
After successfully testing the function multiple times, the developer notices that the execution time is longer than expected.
What should the developer do to improve performance?
- A . Increase the reserved concurrency of the Lambda function.
- B . Increase the size of the RDS database to facilitate an increased number of database connections each hour.
- C . Move the database connection and close statement out of the handler. Place the connection in the global space.
- D . Replace Amazon RDS with Amazon DynamoDB to implement control over the number of writes per second.
C
Explanation:
A common performance issue for Lambda functions that access relational databases is the overhead of establishing a new database connection on every invocation. In the provided pseudocode, each call to lambda_handler opens a connection, executes one statement, and closes the connection. When the function runs hundreds of times per hour, repeatedly creating and tearing down connections adds latency (TCP handshake, TLS negotiation if used, authentication, database session setup), and can also increase load on the database connection manager.
AWS guidance for Lambda execution environments explains that the runtime environment may be reused across multiple invocations of the same function (a “warm” execution environment). Code that is initialized outside the handler runs once per execution environment lifecycle and can be reused on subsequent invocations in that environment. By moving database.connect() into the global scope (module-level initialization), the function can reuse an existing connection when the execution environment is warm, reducing per-invocation latency and improving overall throughput.
This change directly targets the slow part of the flow (connection setup) without changing the database engine, resizing the database, or adding unrelated concurrency settings. Increasing reserved concurrency (A) controls scaling limits but does not reduce the per-invocation connection overhead; it can even amplify connection storms. Resizing the database (B) might help capacity, but it does not address the repeated connection establishment cost and is not the first optimization. Replacing RDS with DynamoDB (D) is a major redesign and unnecessary for the stated issue.
So the best improvement, with minimal code change and aligned with Lambda best practices, is option C: create the connection outside the handler and reuse it when possible, while still handling retries and reconnect logic if the connection becomes stale.
A company wants to migrate applications from its on-premises servers to AWS. As a first step, the company is modifying and migrating a non-critical application to a single Amazon EC2 instance. The application will store information in an Amazon S3 bucket. The company needs to follow security best practices when deploying the application on AWS.
Which approach should the company take to allow the application to interact with Amazon S3?
- A . Create an IAM role that has administrative access to AWS. Attach the role to the EC2 instance.
- B . Create an IAM user. Attach the AdministratorAccess policy. Copy the generated access key and secret key. Within the application code, use the access key and secret key along with the AWS SDK to communicate with Amazon S3.
- C . Create an IAM role that has the necessary access to Amazon S3. Attach the role to the EC2 instance.
- D . Create an IAM user. Attach a policy that provides the necessary access to Amazon S3. Copy the generated access key and secret key. Within the application code, use the access key and secret key along with the AWS SDK to communicate with Amazon S3.
C
Explanation:
Security best practice on AWS is to grant least-privilege permissions and to avoid embedding long-term credentials in application code. Attaching an IAM role that has only the required Amazon S3 permissions to the EC2 instance lets the application obtain temporary credentials automatically through the instance profile. Options B and D store long-term access keys in the application, and options A and B grant administrative access, which violates least privilege.
A developer is deploying a new application to Amazon Elastic Container Service (Amazon ECS). The developer needs to securely store and retrieve different types of variables. These variables include authentication information for a remote API, the URL for the API, and credentials. The authentication information and API URL must be available to all current and future deployed versions of the application across development, testing, and production environments.
How should the developer retrieve the variables with the FEWEST application changes?
- A . Update the application to retrieve the variables from AWS Systems Manager Parameter Store. Use unique paths in Parameter Store for each variable in each environment. Store the credentials in AWS Secrets Manager in each environment.
- B . Update the application to retrieve the variables from AWS Key Management Service (AWS KMS). Store the API URL and credentials as unique keys for each environment.
- C . Update the application to retrieve the variables from an encrypted file that is stored with the application. Store the API URL and credentials in unique files for each environment.
- D . Update the application to retrieve the variables from each of the deployed environments. Define the authentication information and API URL in the ECS task definition as unique names during the deployment process.
A
Explanation:
AWS Systems Manager Parameter Store is a service that provides secure, hierarchical storage for configuration data management and secrets management. The developer can update the application to retrieve the variables from Parameter Store by using the AWS SDK or the AWS CLI. The developer can use unique paths in Parameter Store for each variable in each environment, such as /dev/api-url, /test/api-url, and /prod/api-url. The developer can also store the credentials in AWS Secrets Manager, which is integrated with Parameter Store and provides additional features such as automatic rotation and encryption.
Reference: [What Is AWS Systems Manager? – AWS Systems Manager]
[Parameter Store – AWS Systems Manager]
[What Is AWS Secrets Manager? – AWS Secrets Manager]
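As an illustration of the per-environment path layout, a small helper can build the hierarchical Parameter Store name; the environment names and parameter names below match the examples in the explanation, and the boto3 call is shown as a sketch (it requires AWS credentials, so it is defined but not executed here):

```python
# Build per-environment AWS Systems Manager Parameter Store paths.
# Environment segments (dev, test, prod) and parameter names follow
# the examples above; they are illustrative, not required values.

def parameter_path(environment: str, name: str) -> str:
    """Return a hierarchical Parameter Store path such as /dev/api-url."""
    return f"/{environment}/{name}"


def fetch_parameter(environment: str, name: str) -> str:
    """Sketch of retrieval with boto3; requires AWS credentials to run."""
    import boto3  # imported lazily so the sketch has no hard dependency

    ssm = boto3.client("ssm")
    response = ssm.get_parameter(
        Name=parameter_path(environment, name),
        WithDecryption=True,  # needed for SecureString parameters
    )
    return response["Parameter"]["Value"]
```

Because the path encodes the environment, the same application code works unchanged in every environment; only the environment segment differs.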
A company runs a serverless application on AWS. The application includes an AWS Lambda function.
The Lambda function processes data and stores the data in an Amazon RDS for PostgreSQL database.
A developer created user credentials in the database for the application.
The developer needs to use AWS Secrets Manager to manage the user credentials. The password must be rotated on a regular basis. The solution needs to ensure that there is high availability and no downtime for the application during secret rotation.
What should the developer do to meet these requirements?
- A . Configure managed rotation with the single user rotation strategy.
- B . Configure managed rotation with the alternating users rotation strategy.
- C . Configure automatic rotation with the single user rotation strategy.
- D . Configure automatic rotation with the alternating users rotation strategy.
D
Explanation:
To rotate database credentials without downtime, Secrets Manager recommends the alternating users (dual user) rotation strategy. With a single-user strategy, Secrets Manager changes the password for the same database user that the application is actively using. If the application continues using cached/old credentials during rotation, authentication failures and downtime can occur.
With the alternating users strategy, two database users exist (often named something like appuser and appuser_clone). Secrets Manager alternates which user is “active” in the secret. During rotation, Secrets Manager updates the inactive user’s password, verifies it, updates the secret to point to the newly updated user, and then (optionally) retires the old credentials. This approach minimizes disruption because there is always one known-good credential set while the other is being updated.
The question specifically requires regular rotation using Secrets Manager. That is achieved by enabling automatic rotation on the secret with a schedule (for example, every 30/60/90 days). Automatic rotation invokes the rotation Lambda (AWS-provided or configured) and performs the workflow without manual intervention.
Option D combines both requirements: automatic rotation plus the alternating users strategy for high availability and no downtime.
Option B mentions the alternating users strategy, but managed rotation applies only to secrets that AWS services create and manage on your behalf (for example, an RDS master user password managed through RDS). Because the developer created these user credentials, the secret must use automatic rotation, which runs a rotation function on a defined schedule without manual effort.
Options A and C (single user) are more likely to cause downtime during rotation.
Therefore, configure automatic rotation with the alternating users rotation strategy.
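A minimal sketch of turning on scheduled rotation, assuming a rotation Lambda function already exists; the secret name, Lambda ARN, and 30-day schedule below are placeholder assumptions, not values from the question:

```python
# Sketch: request payload for Secrets Manager automatic rotation.
# Secret name, Lambda ARN, and the 30-day schedule are placeholders.

def build_rotation_request(secret_id: str, rotation_lambda_arn: str,
                           days: int = 30) -> dict:
    """Build kwargs for secretsmanager.rotate_secret()."""
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }


request = build_rotation_request(
    "prod/app/postgres",
    "arn:aws:lambda:us-east-1:123456789012:function:rotate-postgres",
)
# In practice: boto3.client("secretsmanager").rotate_secret(**request)
```

The alternating users strategy itself is implemented inside the rotation function (AWS publishes templates for RDS for PostgreSQL), so the schedule above is all that is configured on the secret.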
A company is planning to use AWS CodeDeploy to deploy an application to Amazon Elastic Container Service (Amazon ECS). During the deployment of a new version of the application, the company initially must expose only 10% of live traffic to the new version of the deployed application. Then, after 15 minutes elapse, the company must route all the remaining live traffic to the new version of the deployed application.
Which CodeDeploy predefined configuration will meet these requirements?
- A . CodeDeployDefault.ECSCanary10Percent15Minutes
- B . CodeDeployDefault.LambdaCanary10Percent5Minutes
- C . CodeDeployDefault.LambdaCanary10Percent15Minutes
- D . CodeDeployDefault.ECSLinear10PercentEvery1Minutes
A
Explanation:
CodeDeploy Predefined Configurations: CodeDeploy offers built-in deployment configurations for common scenarios.
Canary Deployment: Canary deployments gradually shift traffic to a new version, ideal for controlled rollouts like this requirement.
CodeDeployDefault.ECSCanary10Percent15Minutes: This configuration matches the company’s requirements, shifting 10% of traffic initially and then completing the rollout after 15 minutes.
Reference: AWS CodeDeploy Deployment Configurations: https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations-create.html
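As a hedged sketch of how this configuration is selected, the application and deployment group names below are placeholders; the request would be passed to the CodeDeploy `create_deployment` API:

```python
# Sketch: selecting the ECS canary configuration in a CodeDeploy
# deployment request. Application and deployment group names are
# placeholder assumptions.

def build_deployment_request(app_name: str, group_name: str) -> dict:
    """Build kwargs for codedeploy.create_deployment()."""
    return {
        "applicationName": app_name,
        "deploymentGroupName": group_name,
        # Shift 10% of traffic first, then the rest after 15 minutes.
        "deploymentConfigName": "CodeDeployDefault.ECSCanary10Percent15Minutes",
    }


request = build_deployment_request("payments-app", "payments-prod")
# In practice: boto3.client("codedeploy").create_deployment(**request)
```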
A company is running a custom application on a set of on-premises Linux servers that are accessed using Amazon API Gateway. AWS X-Ray tracing has been enabled on the API test stage.
How can a developer enable X-Ray tracing on the on-premises servers with the LEAST amount of configuration?
- A . Install and run the X-Ray SDK on the on-premises servers to capture and relay the data to the X-Ray service.
- B . Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service.
- C . Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTraceSegments API call.
- D . Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTelemetryRecords API call.
B
Explanation:
The X-Ray daemon is an application that collects trace data from the X-Ray SDK and relays it to the X-Ray service. The daemon can run on any platform that supports Go, including Linux, Windows, and macOS. The developer can install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service with minimal configuration. The X-Ray SDK is used to instrument the application code; it sends segments to the daemon rather than directly to the service. The Lambda function solutions are more complex and require additional configuration.
Reference: [AWS X-Ray concepts – AWS X-Ray]
[Setting up AWS X-Ray – AWS X-Ray]
A developer is migrating an application to Amazon Elastic Kubernetes Service (Amazon EKS). The developer pushes the application's container image to Amazon Elastic Container Registry (Amazon ECR) and deploys it to an EKS cluster.
As part of the application migration to a new backend, the developer creates a new AWS account. The developer makes configuration changes to the application to point the application to the new AWS account and to use new backend resources. The developer successfully tests the changes within the application by deploying the pipeline.
The Docker image build and the pipeline deployment are successful, but the application is still connecting to the old backend. The developer finds that the application’s configuration is still referencing the original EKS cluster and not referencing the new backend resources.
Which reason can explain why the application is not connecting to the new resources?
- A . The developer did not successfully create the new AWS account.
- B . The developer added a new tag to the Docker image.
- C . The developer did not update the Docker image tag to a new version.
- D . The developer pushed the changes to a new Docker image tag.
C
Explanation:
C. The developer did not update the Docker image tag to a new version. When deploying to Amazon EKS, the deployment manifest specifies the Docker image tag to run. If the developer rebuilds the image with the new configuration but reuses the same tag, the cluster can continue running the previously pulled image, which still references the original backend resources. To fix this issue, the developer should update the Docker image tag to a new version and redeploy the application to the EKS cluster.
A developer manages an application that stores user objects in an Amazon S3 bucket without versioning enabled. The application has premium users and basic users.
After premium users upload objects, the premium users have unlimited downloads of their objects. Their objects are stored with a premium/ prefix. After basic users upload objects, the basic users can download their objects for 90 days. Their objects are stored with a basic/ prefix.
The developer needs to implement a solution to automatically delete objects for the basic users after 90 days.
Which solution will meet these requirements with the LEAST development effort?
- A . Create an AWS Lambda function that removes any objects in the S3 bucket that have the basic/ prefix and are more than 90 days old. Set up an Amazon EventBridge schedule to invoke the Lambda function every day.
- B . Set up an S3 Lifecycle rule that applies to the objects that have the premium/ prefix. Set the S3 Lifecycle rule action to expire the current version of the objects that have the premium/ prefix after 90 days.
- C . Set up an S3 Lifecycle rule that applies to the objects that have the basic/ prefix. Set the S3 Lifecycle rule action to expire the current version of the objects that have the basic/ prefix after 90 days.
- D . Create a rule for the S3 bucket to identify objects that have the basic/ prefix. Set the rule action to delete any objects that have object delete markers and unfinished multipart uploads after 90 days.
C
Explanation:
The requirement is to automatically delete objects for basic users after 90 days with the least development effort. Amazon S3 provides a built-in, fully managed feature for this: S3 Lifecycle configuration rules. Lifecycle rules can filter objects by prefix (and/or tags) and then perform actions such as expiring (deleting) objects after a specified number of days since creation. Because the bucket does not have versioning enabled, the relevant action is to expire the current object after the retention period.
Option C matches this exactly: create an S3 Lifecycle rule that applies only to objects under the basic/ prefix and configure the rule to expire current versions after 90 days. S3 then automatically and continuously enforces the policy without any code, scheduling, or operational maintenance. This is typically the lowest-effort and most reliable approach because it is handled by S3 itself.
Option A requires writing and operating a Lambda function plus a scheduler, handling pagination, permissions, error handling, retries, and potential costs and throttling when scanning large buckets. That is more development and operational effort than a lifecycle rule.
Option B is incorrect because it targets the premium/ prefix, but premium objects must be retained indefinitely.
Option D refers to delete markers and unfinished multipart uploads; delete markers are relevant primarily for versioned buckets, and cleaning up multipart uploads does not implement the required “delete basic objects after 90 days” behavior.
Therefore, the correct solution is C: use an S3 Lifecycle rule scoped to the basic/ prefix to expire objects after 90 days, achieving automated retention with minimal development effort.
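A sketch of the lifecycle configuration as it would be passed to the S3 `put_bucket_lifecycle_configuration` API; the rule ID and bucket name are placeholder assumptions:

```python
# Sketch: S3 Lifecycle rule expiring objects under the basic/ prefix
# after 90 days. The rule ID is a placeholder assumption.

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "expire-basic-users-after-90-days",
            "Filter": {"Prefix": "basic/"},  # premium/ objects untouched
            "Status": "Enabled",
            "Expiration": {"Days": 90},      # delete current version
        }
    ]
}

# In practice:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```

Once applied, S3 evaluates and enforces the rule continuously; no scheduler, Lambda function, or application code is involved.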
A company uses AWS CloudFormation to deploy an application that includes an Amazon API Gateway REST API integrated with AWS Lambda and Amazon DynamoDB. The application has three stages: development, testing, and production, each with its own DynamoDB table.
The company wants to deploy a new production release and route 20% of traffic to the new version while keeping 80% of traffic on the existing production version. The solution must minimize the number of errors that any single customer experiences.
Which approach should the developer take?
- A . Deploy incremental portions of the changes to production in multiple steps.
- B . Use Amazon Route 53 weighted routing between the production and testing stages.
- C . Deploy an Application Load Balancer in front of the API Gateway stages and weight traffic.
- D . Configure canary deployment settings for the production API stage and route 20% of traffic to the canary.
D
Explanation:
Amazon API Gateway supports canary deployments, which are designed specifically for controlled production rollouts. Canary deployments allow a developer to direct a configurable percentage of traffic to a new deployment while the remainder continues to use the existing deployment.
AWS documentation states that canary releases help minimize customer impact by exposing only a subset of users to potential issues. Because the routing is request-based, individual users are less likely to encounter inconsistent behavior across multiple requests.
Route 53 weighted routing (Option B) operates at the DNS level and can result in users being routed unpredictably due to DNS caching. An Application Load Balancer (Option C) is not supported as a direct frontend for API Gateway stages and adds unnecessary complexity.
Option A lacks traffic control guarantees.
Therefore, configuring a canary deployment on the production stage is the AWS-recommended and lowest-risk approach.
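A sketch of the canary settings as they would appear in an API Gateway `create_deployment` call; the REST API ID and stage name below are placeholders:

```python
# Sketch: API Gateway canary deployment routing 20% of traffic to the
# new version. REST API ID and stage name are placeholder assumptions.

def build_canary_deployment(rest_api_id: str, stage_name: str,
                            percent: float = 20.0) -> dict:
    """Build kwargs for apigateway.create_deployment() with a canary."""
    return {
        "restApiId": rest_api_id,
        "stageName": stage_name,
        "canarySettings": {
            "percentTraffic": percent,  # 20% to canary, 80% to current
            "useStageCache": False,
        },
    }


request = build_canary_deployment("a1b2c3d4e5", "production")
# In practice: boto3.client("apigateway").create_deployment(**request)
```

After validating the canary, the developer promotes it so that 100% of traffic uses the new deployment; if problems appear, the canary can be deleted without touching the 80% of traffic on the stable version.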
In a move toward using microservices, a company’s management team has asked all development teams to build their services so that API requests depend only on that service’s data store. One team is building a Payments service which has its own database; the service needs data that originates in the Accounts database. Both are using Amazon DynamoDB.
What approach will result in the simplest, decoupled, and reliable method to get near-real time updates from the Accounts database?
- A . Use AWS Glue to perform frequent ETL updates from the Accounts database to the Payments database.
- B . Use Amazon ElastiCache in Payments, with the cache updated by triggers in the Accounts database.
- C . Use Amazon Data Firehose to deliver all changes from the Accounts database to the Payments database.
- D . Use Amazon DynamoDB Streams to deliver all changes from the Accounts database to the Payments database.
