Practice Free DVA-C02 Exam Online Questions
A developer is building a web and mobile application and needs a solution to deploy the application code. The solution must be compatible with the developer’s Git source control repository. When the developer adds a new branch, the solution must create a separate deployment.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Push the code to an Amazon ECR repository. Deploy the application on Amazon ECS. Set up a GitHub Actions workflow to create new branches.
- B . Use AWS Elastic Beanstalk to deploy the application. Create new branches by uploading the application’s source bundle code to create new application versions.
- C . Deploy the application code to an AWS Lambda function. Publish a new version of the Lambda function and point an alias to the new version. Create a new branch in GitHub that connects to the Lambda alias.
- D . Use AWS Amplify to deploy the application. Use feature branch deployments to connect and manage Git branches.
D
Explanation:
The requirements describe a Git-driven workflow where each new branch automatically gets its own deployment with minimal setup and management. AWS Amplify is designed for exactly this use case for web and mobile applications: it integrates directly with Git providers and offers continuous deployment. Amplify’s feature branch deployments automatically create a separate deployment (and typically a distinct preview URL) for each connected branch, enabling developers to test and review changes per branch without manually provisioning environments.
With Amplify, the developer connects the repository once, selects the branches to deploy, and Amplify automatically builds and deploys the application whenever code changes are pushed. When a new branch is added, Amplify can be configured to deploy it as a separate environment, providing isolated testing, previewing, and validation. This dramatically reduces operational overhead because Amplify manages build settings, hosting, SSL, and branch-based environments without needing custom CI/CD wiring.
Option A (ECR + ECS + GitHub Actions) can support branch deployments, but it requires substantial operational work: container build pipelines, task/service management, networking, scaling, and environment isolation per branch.
Option B (Elastic Beanstalk) can manage deployments, but branch-based automatic deployments are not as native; creating separate environments and managing application versions typically requires more manual steps or additional automation.
Option C (Lambda aliases) is not a general-purpose web/mobile deployment approach and does not naturally map “new Git branch” to an isolated deployment environment without significant custom tooling.
Therefore, D is the best fit with the least operational overhead: use AWS Amplify with feature branch deployments to automatically create and manage separate deployments for Git branches.
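As a rough sketch of the setup step (the app ID and branch pattern below are hypothetical, and the app is assumed to already be connected to the Git repository), Amplify’s auto branch creation can be enabled so that new branches matching a pattern are deployed automatically:

```shell
# With auto branch creation enabled, pushing a new branch that matches the
# pattern triggers a separate Amplify build and deployment for that branch.
aws amplify update-app \
  --app-id d1a2b3c4e5f6g7 \
  --enable-auto-branch-creation \
  --auto-branch-creation-patterns "feature/*"
```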
A development team uses AWS CodeBuild as part of a CI/CD pipeline. The project includes hundreds of unit and integration tests, and total build time continues to increase. The team wants faster feedback and lower overall testing duration without managing additional infrastructure.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure multiple CodeBuild projects and manually split tests across them.
- B . Configure CodeBuild to split tests across multiple parallel compute environments.
- C . Run all tests sequentially in a single CodeBuild environment.
- D . Use Amazon EC2 instances with a custom test runner to distribute tests.
B
Explanation:
AWS CodeBuild supports parallel test execution through batch builds and multiple compute environments. This feature allows a single CodeBuild project to divide test workloads across multiple isolated environments that run concurrently. Each environment executes a subset of the test suite, significantly reducing total build time.
AWS documentation emphasizes using native CodeBuild capabilities to minimize operational complexity. Parallel execution eliminates the need to manually manage infrastructure, test orchestration logic, or EC2 fleets. CodeBuild automatically provisions and terminates build
environments, scales as needed, and integrates seamlessly with CI/CD pipelines.
Options A and D require manual coordination and infrastructure management, increasing operational overhead.
Option C does not address the performance problem.
Therefore, using CodeBuild’s parallel compute environments is the most efficient and AWS-recommended approach.
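A minimal buildspec sketch of this capability (the parallelism value and test command are illustrative; the `build-fanout` batch mode and the `codebuild-tests-run` utility reflect CodeBuild’s parallel test execution support):

```yaml
version: 0.2
batch:
  build-fanout:
    parallelism: 4        # number of parallel compute environments
    ignore-failure: false
phases:
  build:
    commands:
      # codebuild-tests-run shards the discovered test files across the
      # fanned-out environments and runs the test command in each one
      - codebuild-tests-run
          --test-command 'pytest'
          --files-search "codebuild-glob-search 'tests/**/test_*.py'"
```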
A team has an Amazon API Gateway REST API that consists of a single resource and a GET method that is backed by an AWS Lambda integration.
A developer makes a change to the Lambda function and deploys the function as a new version. The developer needs to set up a process to test the new version of the function before using the new version in production. The tests must not affect the production REST API.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create a new resource in the REST API. Add a GET method to the new resource, and add a Lambda integration to the updated version of the Lambda function. Deploy the new version.
- B . Create a new stage for the REST API. Create a stage variable. Assign the stage variable to the Lambda function. Set the API Gateway integrated Lambda function name to the stage variable. Deploy the new version.
- C . Create a new REST API. Add a resource that has a single GET method that is integrated with the updated version of the Lambda function.
- D . Update the Lambda integration of the existing GET method to point to the updated version of the Lambda function. Deploy the new version.
B
Explanation:
The requirement is to test a new Lambda version without impacting the production API, and to do so with the least operational overhead. In API Gateway (REST API), the simplest pattern is to create a separate stage (for example, test) and use a stage variable to control which Lambda function version (or alias) the integration invokes.
With option B, the developer creates a new stage and defines a stage variable (commonly something like lambdaAlias or lambdaVersion). The API method integration URI can reference this stage variable so that each stage points to a different Lambda alias/version. The production stage (for example, prod) continues to call the stable version, while the test stage calls the new version. This cleanly isolates testing traffic and prevents any effect on the production REST API because stages have distinct invoke URLs and deployments.
This approach is operationally efficient because it avoids duplicating the API definition (new REST API) and avoids creating extra resources/methods that must be maintained. It also provides a repeatable process: future versions can be tested by updating only stage variables or alias routing rather than restructuring the API.
Why not the others:
A adds additional resources/methods and increases API surface area and maintenance.
C duplicates the entire API, increasing overhead and configuration drift risk.
D would impact production because it changes the integration used by the existing method.
Therefore, using a new stage + stage variables to route to the updated Lambda version meets the requirements with minimal overhead.
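The mechanism can be illustrated with a small offline sketch (the function ARN and alias names are hypothetical, and `resolve()` only simulates what API Gateway does server-side): the integration URI embeds `${stageVariables.lambdaAlias}`, and each stage supplies its own value at invoke time, so different stages call different Lambda aliases.

```python
# Sketch of how a stage variable selects the Lambda alias per stage.

def integration_uri(region: str, function_arn: str) -> str:
    # The placeholder is left literal in the integration definition;
    # API Gateway substitutes it per stage when a request arrives.
    return (
        f"arn:aws:apigateway:{region}:lambda:path/2015-03-31/functions/"
        f"{function_arn}:${{stageVariables.lambdaAlias}}/invocations"
    )

def resolve(uri: str, stage_variables: dict) -> str:
    # Simulate API Gateway's stage-variable substitution
    for name, value in stage_variables.items():
        uri = uri.replace("${stageVariables.%s}" % name, value)
    return uri

fn = "arn:aws:lambda:us-east-1:123456789012:function:my-fn"
uri = integration_uri("us-east-1", fn)
print(resolve(uri, {"lambdaAlias": "prod"}))  # ends with my-fn:prod/invocations
print(resolve(uri, {"lambdaAlias": "test"}))  # ends with my-fn:test/invocations
```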
A developer is setting up infrastructure by using AWS CloudFormation. If an error occurs when the resources described in the CloudFormation template are provisioned, successfully provisioned resources must be preserved. The developer must provision and update the CloudFormation stack by using the AWS CLI.
Which solution will meet these requirements?
- A . Add an --enable-termination-protection command line option to the create-stack command and the update-stack command.
- B . Add a --disable-rollback command line option to the create-stack command and the update-stack command.
- C . Add a --parameters ParameterKey=PreserveResources,ParameterValue=True command line option to the create-stack command and the update-stack command.
- D . Add a --tags Key=PreserveResources,Value=True command line option to the create-stack command and the update-stack command.
B
Explanation:
The --disable-rollback option tells CloudFormation to keep the resources that were provisioned successfully when a stack operation fails, instead of rolling the whole stack back. Termination protection (option A) only prevents stack deletion, and the parameter and tag options (C and D) are not CloudFormation features that control rollback behavior.
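A sketch of the relevant commands (the stack and template names are hypothetical); --disable-rollback preserves resources that were created successfully before the failure:

```shell
# Keep successfully provisioned resources if stack creation fails
aws cloudformation create-stack \
  --stack-name my-stack \
  --template-body file://template.yaml \
  --disable-rollback

# The same option is available when updating the stack
aws cloudformation update-stack \
  --stack-name my-stack \
  --template-body file://template.yaml \
  --disable-rollback
```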
A company uses a custom root certificate authority certificate chain (Root CA Cert) that is 10 KB in size to generate SSL certificates for its on-premises HTTPS endpoints. One of the company’s cloud-based applications has hundreds of AWS Lambda functions that pull data from these endpoints. A developer updated the trust store of the Lambda execution environment to use the Root CA Cert when the Lambda execution environment is initialized. The developer bundled the Root CA Cert as a text file in the Lambda functions’ deployment packages.
After 3 months of development, the Root CA Cert is no longer valid and must be updated. The developer needs a more efficient solution to update the Root CA Cert for all deployed Lambda functions. The solution must not include rebuilding or updating all Lambda functions that use the Root CA Cert. The solution must also work for all development, testing, and production environments. Each environment is managed in a separate AWS account.
Which combination of steps should the developer take to meet these requirements MOST cost-effectively? (Select TWO.)
- A . Store the Root CA Cert as a secret in AWS Secrets Manager. Create a resource-based policy. Add IAM users to allow access to the secret.
- B . Store the Root CA Cert as a SecureString parameter in AWS Systems Manager Parameter Store. Create a resource-based policy. Add IAM users to allow access to the parameter.
- C . Store the Root CA Cert in an Amazon S3 bucket. Create a resource-based policy to allow access to the bucket.
- D . Refactor the Lambda code to load the Root CA Cert from the Root CA Cert’s location. Modify the runtime trust store inside the Lambda function handler.
- E . Refactor the Lambda code to load the Root CA Cert from the Root CA Cert’s location. Modify the runtime trust store outside the Lambda function handler.
B,E
Explanation:
This solution will meet the requirements by storing the Root CA Cert as a Secure String parameter in AWS Systems Manager Parameter Store, which is a secure and scalable service for storing and managing configuration data and secrets. The resource-based policy will allow IAM users in different AWS accounts and environments to access the parameter without requiring cross-account roles or permissions. The Lambda code will be refactored to load the Root CA Cert from the parameter store and modify the runtime trust store outside the Lambda function handler, which will improve performance and reduce latency by avoiding repeated calls to Parameter Store and trust store modifications for each invocation of the Lambda function.
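A minimal, offline sketch of the “outside the handler” pattern (the parameter name, file name, and the stubbed fetch function are hypothetical; in a real function the fetch would be an SSM GetParameter call made through an AWS SDK):

```python
import os
import tempfile

def load_root_ca(fetch_parameter):
    # fetch_parameter stands in for an SSM Parameter Store lookup;
    # injecting it keeps this sketch runnable without AWS credentials.
    cert_pem = fetch_parameter("/shared/root-ca-cert")  # hypothetical name
    path = os.path.join(tempfile.gettempdir(), "root-ca.pem")
    with open(path, "w") as f:
        f.write(cert_pem)
    # Point TLS clients that honor this variable at the downloaded bundle
    os.environ["AWS_CA_BUNDLE"] = path
    return path

# Module scope executes during the Lambda init phase: the cert is fetched
# once per cold start, not on every invocation.
CA_PATH = load_root_ca(
    lambda name: "-----BEGIN CERTIFICATE-----\nSTUB\n-----END CERTIFICATE-----\n"
)

def handler(event, context):
    # Invocations reuse the trust store configured at init time
    return {"caBundle": CA_PATH}
```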
Option A is not optimal because it uses AWS Secrets Manager instead of AWS Systems Manager Parameter Store, which incurs additional cost and complexity for storing and managing non-secret configuration data such as the Root CA Cert.
Option C is not optimal because storing the Root CA Cert in an Amazon S3 bucket adds a bucket and bucket policy to manage (plus request costs) for a small piece of configuration data that Parameter Store can hold and share across accounts more simply.
Option D is not optimal because it will modify the runtime trust store inside the Lambda function handler, which will degrade performance and increase latency by repeating unnecessary operations for each invocation of the Lambda function.
Reference: AWS Systems Manager Parameter Store
A company is providing read access to objects in an Amazon S3 bucket for different customers. The company uses IAM permissions to restrict access to the S3 bucket. The customers can access only their own files.
Due to a regulation requirement, the company needs to enforce encryption in transit for interactions with Amazon S3.
Which solution will meet these requirements?
- A . Add a bucket policy to the S3 bucket to deny S3 actions when the aws:SecureTransport condition is equal to false.
- B . Add a bucket policy to the S3 bucket to deny S3 actions when the s3:x-amz-acl condition is equal to public-read.
- C . Add an IAM policy to the IAM users to enforce the usage of the AWS SDK.
- D . Add an IAM policy to the IAM users that allows S3 actions when the s3:x-amz-acl condition is equal to bucket-owner-read.
A
Explanation:
The aws:SecureTransport condition key is true only when a request is made over HTTPS. A bucket policy that denies all S3 actions when aws:SecureTransport is false therefore rejects any request sent over plain HTTP, enforcing encryption in transit for every interaction with the bucket. The other options concern ACLs or client tooling and do not enforce TLS.
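A minimal bucket policy sketch for the aws:SecureTransport condition (the bucket name is hypothetical); the explicit Deny applies to any request made without TLS:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```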
A developer needs to set up an API to provide access to an application and its resources. The developer has a TLS certificate. The developer must have the ability to change the default base URL of the API to a custom domain name. The API users are distributed globally. The solution must minimize API latency.
Which solution will meet these requirements?
- A . Create an Amazon CloudFront distribution that uses an AWS Lambda@Edge function to process API requests. Import the TLS certificate into AWS Certificate Manager and CloudFront. Add the custom domain name as an alias resource record set that is for the CloudFront distribution.
- B . Create an Amazon API Gateway REST API. Use the private endpoint type. Import the TLS certificate into AWS Certificate Manager. Create a custom domain name for the REST API. Route traffic to the custom domain name. Disable the default endpoint for the REST API.
- C . Create an Amazon API Gateway REST API. Use the edge-optimized endpoint type. Import the TLS certificate into AWS Certificate Manager. Create a custom domain name for the REST API. Route traffic to the custom domain name. Disable the default endpoint for the REST API.
- D . Create an Amazon CloudFront distribution that uses CloudFront Functions to process API requests. Import the TLS certificate into AWS Certificate Manager and CloudFront. Add the custom domain name as an alias resource record set that is for the CloudFront distribution.
C
Explanation:
Option C: Edge-Optimized API Gateway with Custom Domain Name:
Edge-Optimized API Gateway: This endpoint type automatically leverages the Amazon CloudFront global distribution network, minimizing latency for API users distributed globally.
Custom Domain Name: API Gateway supports custom domain names for APIs. Importing the TLS certificate into AWS Certificate Manager (ACM) and associating it with the custom domain name ensures secure connections.
Disabling the Default Endpoint: Prevents direct access via the default API Gateway URL, enforcing the use of the custom domain name.
Why Other Options Are Incorrect:
Option A: While CloudFront can distribute API requests globally, API Gateway with edge-optimized endpoints already provides this functionality natively without requiring Lambda@Edge.
Option B: Private endpoint types are used for internal access via VPC, which does not meet the global distribution and low-latency requirement.
Option D: CloudFront Functions are not needed because API Gateway’s edge-optimized endpoints handle global distribution efficiently.
Reference: Amazon API Gateway Custom Domain Names
Amazon API Gateway Endpoint Types
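A rough sketch of the custom domain setup with the CLI (the domain name, API ID, and certificate ARN are hypothetical; for an edge-optimized domain the ACM certificate must reside in us-east-1):

```shell
# Create an edge-optimized custom domain backed by the ACM certificate
aws apigateway create-domain-name \
  --domain-name api.example.com \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/1111aaaa-22bb-33cc-44dd-5555eeee6666 \
  --endpoint-configuration types=EDGE

# Map the API's stage to the custom domain
aws apigateway create-base-path-mapping \
  --domain-name api.example.com \
  --rest-api-id abc123 \
  --stage prod
```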
A developer is investigating recent performance bottlenecks within a company’s distributed web application that runs on various AWS services, including Amazon EC2 and Amazon DynamoDB.
How can the developer determine the length of time of the application’s calls to the various downstream AWS services?
- A . Enable VPC Flow Logs and analyze them in Amazon OpenSearch Service.
- B . Use Amazon CloudWatch Logs to analyze application logs for the various calls.
- C . Enable detailed monitoring for the EC2 instances in Amazon CloudWatch.
- D . Implement AWS X-Ray with client handlers for the various downstream calls.
D
Explanation:
To measure how long an application spends calling downstream AWS services (for example, DynamoDB, S3, SNS, etc.) in a distributed system, the most effective AWS-native approach is AWS X-Ray with appropriate instrumentation. X-Ray provides distributed tracing by capturing end-to-end request paths and breaking each request into segments and subsegments with precise timing information. When the application is instrumented with the X-Ray SDK, client handlers/interceptors automatically create subsegments for calls to AWS services, recording latency, response status, and error/throttle indicators.
This directly satisfies the requirement: determine the length of time of calls to various downstream services. In the X-Ray trace map and trace details, the developer can see per-service timing, identify which dependency is slow, and distinguish application processing time from downstream call time. This is specifically designed for performance bottleneck analysis and root cause investigation across distributed components.
Option A (VPC Flow Logs) captures network flow metadata (source/destination, ports, bytes, accept/reject), not application-level request timing to AWS service APIs, and it won’t show the latency breakdown per downstream service call.
Option B can work only if the application is already logging detailed timing for each call, but this is manual, inconsistent across services, and does not give a unified distributed trace view. It is also higher effort than using standard tracing instrumentation.
Option C increases CloudWatch metric granularity for EC2 but does not show per-request downstream service call durations.
Therefore, implementing AWS X-Ray with client handlers is the correct solution.
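Conceptually, the SDK’s client handlers wrap each downstream call and record its duration as a named subsegment. A pure-Python stand-in for that mechanism (not the real X-Ray SDK; the service and operation names are illustrative) shows the idea:

```python
import time
from contextlib import contextmanager

TRACE = []  # collected subsegments: (name, duration in seconds)

@contextmanager
def subsegment(name):
    # Wrap a downstream call and record its wall-clock duration,
    # as an X-Ray client handler does for AWS SDK calls
    start = time.perf_counter()
    try:
        yield
    finally:
        TRACE.append((name, time.perf_counter() - start))

def fetch_order():
    with subsegment("DynamoDB.GetItem"):
        time.sleep(0.05)  # stand-in for the actual DynamoDB call

fetch_order()
name, duration = TRACE[0]
print(f"{name} took {duration:.3f}s")
```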
A developer is building a REST API for a team of developers to use. The team needs to access the REST API to perform integration testing. The REST API implementation will require multiple backend services, but those backend services are not yet available.
The developer must ensure that the REST API is available for integration testing with the LEAST engineering effort.
Which solution will meet these requirements?
- A . Create an Amazon API Gateway REST API and enable mock integrations.
- B . Create an Application Load Balancer that routes traffic to Amazon EC2 instances running mock services.
- C . Create an AWS Lambda function that supports REST functionality and enable a Lambda function URL.
- D . Create an Amazon API Gateway REST API in front of an AWS Step Functions state machine.
A
Explanation:
Amazon API Gateway mock integrations allow developers to return static or templated responses without invoking any backend services. AWS documentation specifically recommends mock integrations for early API development and testing, when backend implementations are unavailable or incomplete.
With mock integrations, API Gateway itself generates responses based on mapping templates. This enables frontend and integration testing teams to validate request formats, response schemas, authentication, and error handling without deploying compute resources or writing application logic.
Options B, C, and D all require additional infrastructure or application logic, increasing engineering effort. Mock integrations require no backend code, no servers, and minimal configuration.
Therefore, using API Gateway mock integrations is the fastest and lowest-effort solution.
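A sketch of wiring a mock integration with the AWS CLI (the API ID, resource ID, and canned response body are hypothetical):

```shell
# The MOCK integration type makes API Gateway generate the response itself
aws apigateway put-integration \
  --rest-api-id abc123 \
  --resource-id res456 \
  --http-method GET \
  --type MOCK \
  --request-templates '{"application/json": "{\"statusCode\": 200}"}'

# Define the canned 200 response the mock returns
aws apigateway put-integration-response \
  --rest-api-id abc123 \
  --resource-id res456 \
  --http-method GET \
  --status-code 200 \
  --response-templates '{"application/json": "{\"message\": \"mocked\"}"}'
```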
A developer is designing a new feature for an existing application that uses an AWS Lambda function. The developer wants to test the Lambda function safely in development and testing AWS accounts before deploying the function to a production AWS account. The developer must be able to roll back the function if issues are discovered.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create new Lambda function versions and aliases for the development, testing, and production accounts. After successful testing, update the production alias to point to the function version. Roll back to the most recent stable version if issues are discovered.
- B . Deploy the Lambda function separately to the development, testing, and production accounts after testing the function in each environment.
- C . Use Lambda layers to separate the code and libraries for each AWS account and deploy the function with different layers.
- D . Use environment variables to differentiate development, testing, and production behavior.
A
Explanation:
AWS Lambda versions and aliases are designed specifically to support safe deployments, testing, and rollback. A Lambda version is immutable, meaning its code and configuration cannot change once published. Aliases act as pointers to specific versions and can be updated instantly.
AWS documentation recommends using aliases to manage environments such as dev, test, and prod. Each environment can reference a specific Lambda version, allowing the same function codebase to be promoted across accounts with confidence. If a defect is detected in production, the alias can be quickly repointed to a previously known good version, enabling an immediate rollback without redeployment.
Options B, C, and D introduce higher operational overhead or do not provide safe rollback. Deploying separately increases risk and management effort. Lambda layers are not intended for environment promotion. Environment variables do not protect against faulty code deployments.
Therefore, using Lambda versions and aliases is the most efficient and AWS-recommended solution.
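A sketch of the promote-and-rollback flow with the CLI (the function name, alias name, and version numbers are hypothetical):

```shell
# Publish the tested code as an immutable version
aws lambda publish-version --function-name my-fn

# Promote: point the production alias at the new version
aws lambda update-alias --function-name my-fn --name prod --function-version 6

# Roll back: repoint the alias at the previous known-good version
aws lambda update-alias --function-name my-fn --name prod --function-version 5
```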
