Practice Free DVA-C02 Exam Online Questions
A developer needs to use a code template to create an automated deployment of an application onto Amazon EC2 instances. The template must be configured to repeat deployment, installation, and updates of resources for the application. The template must be able to create identical environments and roll back to previous versions.
Which solution will meet these requirements?
- A . Use AWS Amplify for automatic deployment templates. Use a traffic-splitting deployment to copy any deployments. Modify any resources created by Amplify, if necessary.
- B . Use AWS CodeBuild for automatic deployment. Upload the required AppSpec file template. Save the appspec.yml file in the root directory folder of the revision. Specify the deployment group that includes the EC2 instances for the deployment.
- C . Use AWS CloudFormation to create an infrastructure template in JSON format to deploy the EC2 instances. Use CloudFormation helper scripts to install the necessary software and to start the application. Call the scripts directly from the template.
- D . Use AWS AppSync to deploy the application. Upload the template as a GraphQL schema. Specify the EC2 instances for deployment of the application. Use resolvers as a version control mechanism and to make any updates to the deployments.
C
Explanation:
AWS CloudFormation templates describe infrastructure as code, so the same template can repeatedly create identical environments, apply updates through stack updates, and roll back to a previous version if a deployment fails. CloudFormation helper scripts (such as cfn-init) can be called directly from the template to install software and start the application on the EC2 instances.
A developer is building an application that processes a stream of user-supplied data. The data stream must be consumed by multiple Amazon EC2-based processing applications in parallel and in real time. Each processor must be able to resume without losing data if there is a service interruption. The application architect plans to add other processors in the near future and wants to minimize the amount of data duplication involved.
Which solution will satisfy these requirements?
- A . Publish the data to Amazon SQS.
- B . Publish the data to Amazon Data Firehose.
- C . Publish the data to Amazon EventBridge.
- D . Publish the data to Amazon Kinesis Data Streams.
D
Explanation:
The requirements describe a real-time streaming use case where multiple processing applications must read the same stream in parallel, each must be able to resume processing after interruptions without data loss, and the system should support adding new processors with minimal duplication. Amazon Kinesis Data Streams (KDS) is purpose-built for this pattern.
With Kinesis Data Streams, producers write records to a stream that is split into shards. Multiple consumer applications can read from the same stream independently, for example by using the Kinesis Client Library (KCL) or enhanced fan-out for dedicated per-consumer throughput. Each consumer maintains its own checkpoint (sequence-number position), which allows it to resume from the last processed record after a restart or failure. This directly meets the “resume without losing data” requirement.
KDS also supports a configurable retention period (24 hours by default, extendable up to 365 days), enabling replay and recovery without forcing producers to re-send data or duplicating messages for each processor. Adding new consumers is straightforward: a new processor can begin reading the existing stream (from LATEST or from TRIM_HORIZON), which minimizes duplication compared to fan-out duplication patterns.
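For illustration, the following is a minimal boto3 consumer sketch. The stream name is a hypothetical placeholder, and a production consumer would normally use the Kinesis Client Library for lease management and checkpointing across shards rather than polling one shard directly:

```python
import time

import boto3  # AWS SDK for Python

kinesis = boto3.client("kinesis")
STREAM_NAME = "user-events"  # hypothetical stream name

# Look up the stream's shards (a large stream would need pagination).
shards = kinesis.describe_stream(StreamName=STREAM_NAME)["StreamDescription"]["Shards"]

# Start reading one shard from the oldest retained record (TRIM_HORIZON).
# A restarted consumer would instead pass AT_SEQUENCE_NUMBER with its last
# checkpointed sequence number to resume without losing data.
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM_NAME,
    ShardId=shards[0]["ShardId"],
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

while iterator:
    response = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in response["Records"]:
        # Persist record["SequenceNumber"] as the checkpoint after processing.
        print(record["SequenceNumber"], record["Data"])
    iterator = response.get("NextShardIterator")
    time.sleep(1)  # stay within the per-shard GetRecords rate limits
```

Because each consumer tracks its own position, a second processing application can run the same loop (or the KCL equivalent) against the same stream without interfering with the first or duplicating the data.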
Why the others don’t fit:
SQS is a queue, not a stream. Multiple consumers typically compete for messages rather than all consuming the same messages independently, and duplication is required if multiple apps must process the same data.
Amazon Data Firehose (formerly Kinesis Data Firehose) is primarily for delivery to destinations (Amazon S3, Amazon Redshift, OpenSearch Service) and is not designed for multiple real-time parallel consumers with replay semantics.
EventBridge is event routing with limited replay/resume semantics compared to KDS and is not intended for high-throughput stream processing with per-consumer checkpoints.
Therefore, Amazon Kinesis Data Streams satisfies all requirements.
A developer must analyze performance issues with distributed applications that run in production and are written as AWS Lambda functions. These distributed Lambda applications invoke other components that make up the applications.
How should the developer identify and troubleshoot the root cause of the performance issues in production?
- A . Add logging statements to the Lambda functions, then use Amazon CloudWatch to view the logs.
- B . Use AWS CloudTrail and then examine the logs.
- C . Use AWS X-Ray, then examine the segments and errors.
- D . Run Amazon Inspector agents and then analyze performance.
C
Explanation:
This solution will meet the requirements by using AWS X-Ray to analyze and debug the performance issues with the distributed Lambda applications. AWS X-Ray is a service that collects data about requests that the applications serve, and provides tools to view, filter, and gain insights into that data. The developer can use AWS X-Ray to identify the root cause of the performance issues by examining the segments and errors that show the details of each request and the components that make up the applications.
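As a rough sketch of this approach, a Lambda handler instrumented with the X-Ray SDK for Python might look like the following; the table and attribute names are hypothetical, and active tracing is assumed to be enabled on the function:

```python
import boto3
from aws_xray_sdk.core import patch_all, xray_recorder

# Patch boto3 (and other supported libraries) so every downstream AWS call
# shows up as a subsegment in the function's trace.
patch_all()

dynamodb = boto3.resource("dynamodb")


@xray_recorder.capture("load_user_profile")  # custom subsegment for this step
def load_user_profile(user_id):
    table = dynamodb.Table("UserProfiles")  # hypothetical table name
    return table.get_item(Key={"userId": user_id}).get("Item")


def lambda_handler(event, context):
    # Lambda opens the trace segment automatically when active tracing is
    # enabled; this code only adds subsegments for finer-grained timing.
    return load_user_profile(event["userId"])
```

In the X-Ray console, the service map and segment timelines then show where latency accumulates across the distributed components.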
Option A is not optimal because it will use logging statements and Amazon CloudWatch, which may not provide enough information or visibility into the distributed applications.
Option B is not optimal because it will use AWS CloudTrail, which is a service that records API calls and events for AWS services, not application performance data.
Option D is not optimal because it will use Amazon Inspector, which is a service that helps improve the security and compliance of applications on Amazon EC2 instances, not Lambda functions.
Reference: AWS X-Ray, Using AWS X-Ray with AWS Lambda
A developer is setting up a deployment pipeline. The pipeline includes an AWS CodeBuild build stage that requires access to a database to run integration tests. The developer is using a buildspec.yml file to configure the database connection. Company policy requires automatic rotation of all database credentials.
Which solution will handle the database credentials MOST securely?
- A . Retrieve the credentials from variables that are hardcoded in the buildspec.yml file. Configure an AWS Lambda function to rotate the credentials.
- B . Retrieve the credentials from an environment variable that is linked to a SecureString parameter in AWS Systems Manager Parameter Store. Configure Parameter Store for automatic rotation.
- C . Retrieve the credentials from an environment variable that is linked to an AWS Secrets Manager secret. Configure Secrets Manager for automatic rotation.
- D . Retrieve the credentials from an environment variable that contains the connection string in plaintext. Configure an Amazon EventBridge event to rotate the credentials.
C
Explanation:
The most secure way to handle database credentials for a CodeBuild stage, especially with a company requirement for automatic rotation, is to store credentials in AWS Secrets Manager and reference the secret securely from the CodeBuild environment. Secrets Manager is purpose-built to store, retrieve, and rotate secrets such as database credentials, API keys, and tokens. It encrypts secrets at rest (using AWS KMS), supports fine-grained IAM access control, and integrates with services like CodeBuild through environment variables and runtime retrieval.
With CodeBuild, the developer can configure environment variables to reference a Secrets Manager secret (rather than embedding credentials in buildspec.yml). This ensures that the build process retrieves the latest rotated credentials at runtime, reducing exposure risk and eliminating manual credential updates.
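As a sketch of that wiring through boto3 (the project name, role ARN, and secret reference below are placeholders), a CodeBuild environment variable of type SECRETS_MANAGER is resolved from the secret at build time:

```python
import boto3

codebuild = boto3.client("codebuild")

codebuild.create_project(
    name="integration-tests",  # placeholder project name
    source={"type": "CODEPIPELINE"},
    artifacts={"type": "CODEPIPELINE"},
    serviceRole="arn:aws:iam::123456789012:role/codebuild-service-role",  # placeholder
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
        "environmentVariables": [
            {
                # Resolved from Secrets Manager when the build starts, so the
                # build always receives the most recently rotated value.
                "name": "DB_PASSWORD",
                "value": "prod/db/credentials:password",  # placeholder secret:key
                "type": "SECRETS_MANAGER",
            }
        ],
    },
)
```

The same reference can be declared in the buildspec itself under its env / secrets-manager section, which keeps the credential out of both source control and build logs.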
Option B is weaker because Systems Manager Parameter Store (SecureString) is a secure storage mechanism, but automatic rotation is not a native Parameter Store feature in the same way it is in Secrets Manager for database credentials. Secrets Manager provides managed rotation workflows (often via Lambda rotation functions) and scheduling that directly matches the requirement.
Options A and D violate secure handling practices by hardcoding credentials or storing plaintext connection strings. Both approaches significantly increase risk of exposure through source control, build logs, or environment inspection. Additionally, bolting on rotation via Lambda or EventBridge does not address the primary weakness of hardcoded/plaintext secret distribution.
Therefore, AWS Secrets Manager with automatic rotation, referenced securely from CodeBuild environment variables, is the most secure solution.
A developer is configuring an application's deployment environment in AWS CodePipeline. The application code is stored in a GitHub repository. The developer wants to ensure that the repository package's unit tests run in the new deployment environment. The developer has already set the pipeline's source provider to GitHub and has specified the repository and branch to use in the deployment.
Which combination of steps should the developer take next to meet these requirements with the LEAST overhead? (Select TWO.)
- A . Create an AWS CodeCommit project. Add the repository package's build and test commands to the project's buildspec.
- B . Create an AWS CodeBuild project. Add the repository package's build and test commands to the project's buildspec.
- C . Create an AWS CodeDeploy project. Add the repository package's build and test commands to the project's buildspec.
- D . Add an action to the source stage. Specify the newly created project as the action provider. Specify the build artifact as the action's input artifact.
- E . Add a new stage to the pipeline after the source stage. Add an action to the new stage. Specify the newly created project as the action provider. Specify the source artifact as the action's input artifact.
B,E
Explanation:
This solution will ensure that the repository package’s unit tests run in the new deployment environment with the least overhead because it uses AWS CodeBuild to build and test the code in a fully managed service, and AWS CodePipeline to orchestrate the deployment stages and actions.
Option A is not optimal because AWS CodeCommit is a source control service, not a build and test service.
Option C is not optimal because AWS CodeDeploy is a deployment service, not a build and test service.
Option D is not optimal because it will add an action to the source stage instead of creating a new stage, which will not follow the best practice of separating different deployment phases.
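A minimal sketch of the CodeBuild side of options B and E, assuming a hypothetical Node.js package and placeholder names; the inline buildspec carries the package's build and test commands, and the project reads and writes CodePipeline artifacts:

```python
import boto3

codebuild = boto3.client("codebuild")

# Inline buildspec with the repository package's build and test commands.
# The commands themselves are placeholders for illustration.
BUILDSPEC = """\
version: 0.2
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - npm run build
      - npm test   # run the package's unit tests
"""

codebuild.create_project(
    name="unit-test-project",  # placeholder name
    source={"type": "CODEPIPELINE", "buildspec": BUILDSPEC},
    artifacts={"type": "CODEPIPELINE"},
    serviceRole="arn:aws:iam::123456789012:role/codebuild-service-role",  # placeholder
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
)
```

In the pipeline, the developer then adds a new stage after the source stage whose action uses this project as the provider and the source artifact as its input, which is exactly what option E describes.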
Reference: AWS CodeBuild, AWS CodePipeline
A developer is migrating a containerized application from an on-premises environment to an Amazon ECS cluster.
In the on-premises environment, the container uses a Dockerfile to store the application. Service dependency configurations such as databases, caches, and storage volumes are stored in a docker-compose.yml file.
Both files are located at the top level of the code base that the developer needs to containerize. When the developer deploys the code to Amazon ECS, the instructions from the Dockerfile are carried out. However, none of the configurations from docker-compose.yml are applied.
The developer needs to resolve the error and ensure the configurations are applied.
Which solution will meet these requirements?
- A . Store the file path for the docker-compose.yml file as a Docker label. Add the label to the ECS cluster’s container details.
- B . Add the details from the docker-compose.yml file to an ECS task definition. Associate the task with the ECS cluster.
- C . Create a namespace in the ECS cluster. Associate the docker-compose.yml file to the namespace.
- D . Update the service type of the ECS cluster to REPLICA, and redeploy the stack.
B
Explanation:
Why Option B is Correct: Amazon ECS does not natively process docker-compose.yml files. Instead, the configurations from docker-compose.yml must be converted into ECS-compatible configurations within a task definition. Task definitions are the primary way to specify container configurations in ECS, including service dependencies like databases, caches, and volumes.
Steps to resolve the error:
1. Extract the configurations from the docker-compose.yml file.
2. Map the dependencies and settings to the appropriate ECS task definition fields.
3. Register the task definition and deploy it to the ECS cluster, as sketched below.
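A hedged sketch of that translation with boto3; every name, image, and endpoint below is a placeholder standing in for a value lifted from the docker-compose.yml file:

```python
import boto3

ecs = boto3.client("ecs")

response = ecs.register_task_definition(
    family="web-app",  # placeholder task family
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    # From the compose file's volumes section.
    volumes=[{"name": "app-data"}],
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "essential": True,
            # From the compose file's ports mapping.
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            # From the compose file's environment wiring, e.g. the database
            # endpoint the container used to reach through compose.
            "environment": [
                {"name": "DB_HOST", "value": "mydb.example.us-east-1.rds.amazonaws.com"}
            ],
            "mountPoints": [{"sourceVolume": "app-data", "containerPath": "/data"}],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```

An ECS service can then run this task definition on the cluster, replacing what docker-compose up did on premises.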
Why Other Options are Incorrect:
Option A: Docker labels do not directly impact ECS task execution or integrate with ECS service configurations.
Option C: ECS does not support associating a docker-compose.yml file with a namespace; namespaces in ECS are used for service discovery (Service Connect and AWS Cloud Map), not for container configuration.
Option D: Changing the service type to REPLICA does not resolve the issue of missing service dependency configurations.
Reference: Amazon ECS Task Definitions, Migrating Docker Compose Workloads to ECS (AWS documentation)
A company has an Amazon S3 bucket containing premier content that it intends to make available to only paid subscribers of its website. The S3 bucket currently has default permissions of all objects being private to prevent inadvertent exposure of the premier content to non-paying website visitors.
How can the company limit the ability to download a premier content file in the S3 bucket to paid subscribers only?
- A . Apply a bucket policy that allows anonymous users to download the content from the S3 bucket.
- B . Generate a pre-signed object URL for the premier content file when a paid subscriber requests a download.
- C . Add a bucket policy that requires multi-factor authentication for requests to access the S3 bucket objects.
- D . Enable server-side encryption on the S3 bucket for data protection against the non-paying website visitors.
B
Explanation:
This solution will limit the ability to download a premier content file in the S3 bucket to paid subscribers only because it uses a pre-signed object URL that grants temporary access to an S3 object for a specified duration. The pre-signed object URL can be generated by the company’s website when a paid subscriber requests a download, and can be verified by Amazon S3 using the signature in the URL.
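A minimal boto3 sketch, with hypothetical bucket and object names; the URL inherits the permissions of the credentials that sign it, so the object itself stays private:

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited download link for one premier object.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "premier-content-bucket", "Key": "videos/episode-01.mp4"},
    ExpiresIn=3600,  # the link stops working after one hour
)

# Hand this URL only to an authenticated, paid subscriber.
print(url)
```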
Option A is not optimal because it will allow anyone to download the content from the S3 bucket without verifying their subscription status.
Option C is not optimal because it will require additional steps and costs to configure multi-factor authentication for accessing the S3 bucket objects, which may not be feasible or user-friendly for paid subscribers.
Option D is not optimal because server-side encryption only protects the objects at rest; it does not prevent non-paying website visitors from downloading them.
Reference: Share an Object with Others, Using Amazon S3 Pre-Signed URLs
A developer is modifying an AWS Lambda function that accesses an Amazon RDS for MySQL database. The developer discovers that the Lambda function has the database credentials stored as plaintext in the Lambda function code.
The developer must implement a solution to make the credentials more secure. The solution must include automated credential rotation every 30 days.
Which solution will meet these requirements?
- A . Move the credentials to a secret in AWS Secrets Manager. Modify the Lambda function to read from Secrets Manager. Set a schedule to rotate the secret every 30 days.
- B . Move the credentials to a secure string parameter in AWS Systems Manager Parameter Store. Modify the Lambda function to read from Parameter Store. Set a schedule to rotate the parameter every 30 days.
- C . Move the credentials to an encrypted Amazon S3 bucket. Modify the Lambda function to read from the S3 bucket. Configure S3 Object Lambda to rotate the credentials every 30 days.
- D . Move the credentials to a secure string parameter in AWS Systems Manager Parameter Store. Create an Amazon EventBridge rule to rotate the parameter every 30 days.
A
Explanation:
Requirement summary:
- The Lambda function accesses an Amazon RDS for MySQL database.
- The credentials are currently hardcoded in the function code (insecure).
- The solution must provide automated credential rotation every 30 days.
Option A (AWS Secrets Manager with automatic rotation) is the best and most secure option. Secrets Manager provides:
- Secure storage of secrets.
- Native integration with RDS for automatic rotation.
- Scheduled rotation every N days (for example, every 30 days).
- Credential retrieval from Lambda via the SDK (GetSecretValue), as sketched below.
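A hedged boto3 sketch of both halves, with a placeholder secret name and rotation function ARN (for RDS for MySQL, Secrets Manager can alternatively provision a managed rotation function):

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

# Turn on automatic rotation on a 30-day schedule.
secretsmanager.rotate_secret(
    SecretId="prod/mysql/app-credentials",  # placeholder secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-mysql",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 30},
)

# Inside the application Lambda, read the current value at run time so the
# function always sees the freshly rotated credentials.
secret = secretsmanager.get_secret_value(SecretId="prod/mysql/app-credentials")
credentials = secret["SecretString"]  # JSON string holding username/password
```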
Option B (Parameter Store with rotation): Parameter Store does not support automatic rotation of secrets. The developer would have to build custom rotation logic, which means higher operational overhead.
Option C (encrypted S3 with Object Lambda rotation): Amazon S3 is not a secrets management system, and S3 Object Lambda does not perform credential rotation.
Option D (Parameter Store with EventBridge rotation): There is no native integration between Parameter Store and EventBridge for secret rotation. The developer would have to build and maintain custom Lambda functions.
Secrets Manager rotation for RDS:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
Secure retrieval in Lambda:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets_lambda.html
Integration with RDS MySQL:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_rds_config.html
A company is developing an application that will be accessed through the Amazon API Gateway REST API. Registered users should be the only ones who can access certain resources of this API. The token being used should expire automatically and needs to be refreshed periodically.
How can a developer meet these requirements?
- A . Create an Amazon Cognito identity pool, configure the Amazon Cognito Authorizer in API Gateway, and use the temporary credentials generated by the identity pool.
- B . Create and maintain a database record for each user with a corresponding token and use an AWS Lambda authorizer in API Gateway.
- C . Create an Amazon Cognito user pool, configure the Cognito Authorizer in API Gateway, and use the identity or access token.
- D . Create an IAM user for each API user, attach an invoke permissions policy to the API, and use an IAM authorizer in API Gateway.
C
Explanation:
An Amazon Cognito user pool issues identity and access tokens that expire automatically and can be refreshed with a refresh token. Configuring the Cognito authorizer in API Gateway restricts the protected resources to registered users who present a valid token.
When a developer tries to run an AWS CodeBuild project, it raises an error because the combined length of all environment variables exceeds the maximum number of characters allowed.
What is the recommended solution?
- A . Add the export LC_ALL="en_US.utf8" command to the pre_build section to ensure POSIX localization.
- B . Use Amazon Cognito to store key-value pairs for large numbers of environment variables.
- C . Update the settings for the build project to use an Amazon S3 bucket for large numbers of environment variables.
- D . Use AWS Systems Manager Parameter Store to store large numbers of environment variables.
D
Explanation:
This solution allows the developer to overcome the limit on the combined length of environment variables in AWS CodeBuild. AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. The developer can store large numbers of environment variables as parameters in Parameter Store and reference them in the buildspec file using parameter references. Adding the export LC_ALL="en_US.utf8" command to the pre_build section does not affect the environment variable limit. Using Amazon Cognito or an Amazon S3 bucket to store key-value pairs for environment variables would require additional configuration and integration.
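A minimal boto3 sketch of pulling a batch of parameters into a build step, assuming a hypothetical /myapp/build/ parameter path; note that the buildspec can also resolve individual parameters natively through its env / parameter-store section:

```python
import os

import boto3

ssm = boto3.client("ssm")

# Fetch every parameter stored under the path in one call
# (a real build script would follow NextToken to paginate).
response = ssm.get_parameters_by_path(
    Path="/myapp/build/",  # placeholder parameter hierarchy
    Recursive=True,
    WithDecryption=True,  # also decrypts SecureString parameters
)

# Expose each stored value to the build as an environment variable.
for parameter in response["Parameters"]:
    name = parameter["Name"].rsplit("/", 1)[-1].upper()
    os.environ[name] = parameter["Value"]
```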
Reference: Build Specification Reference for AWS CodeBuild, What Is AWS Systems Manager Parameter Store?
