Practice Free DVA-C02 Exam Online Questions
A developer is working on an application that will store protected health information (PHI) in an Amazon RDS database. The developer applies encryption to the database. The developer must also encrypt the PHI data separately to prevent administrators from accessing the data. Because some of the PHI data files are large, the developer must encrypt the PHI data in the application locally before saving the data to the database.
Which solution will meet these requirements in the MOST secure way?
- A . Create an AWS KMS customer managed key. Use the KMS Encrypt operation to encrypt the PHI data before storing the PHI data in the database.
- B . Generate a 256-bit AES encryption key. Store the key in base64-encoded format in the application source code. Use the encryption key to encrypt the PHI data before storing the PHI data in the database.
- C . Configure the database to use an AWS KMS managed key for encryption.
- D . Create an AWS KMS customer managed key. Use envelope encryption to encrypt the PHI data. Store the encrypted key in the same database record that stores the PHI data.
D
Explanation:
The requirement is application-level encryption (client-side) in addition to RDS encryption at rest, specifically to prevent database administrators from viewing PHI. Also, some PHI objects are large, so the solution must encrypt locally without size limitations.
AWS best practice for large data encryption with KMS is envelope encryption. With envelope encryption, the application generates a data encryption key (DEK) (typically obtained from KMS using GenerateDataKey). The application uses the plaintext DEK locally with a symmetric algorithm (such as AES-GCM) to encrypt the large PHI payload efficiently. The DEK itself is then protected by encrypting it with a KMS customer managed key (CMK), producing an encrypted data key that can be safely stored alongside the ciphertext. Only principals with permission to use the CMK can decrypt the DEK and then decrypt the PHI data. This ensures that even database administrators who can access the RDS data cannot read PHI without KMS authorization.
Option A is not suitable because the KMS Encrypt operation is intended for small payloads (KMS has size limits for direct encryption), making it inappropriate for large PHI files.
Option B is insecure because embedding encryption keys in source code violates key management best practices and makes rotation and compromise containment difficult.
Option C provides only storage-level encryption for the database volume; it does not ensure that PHI remains unreadable to administrators who can access the database content.
Therefore, using envelope encryption with a KMS customer managed key and storing the encrypted data key with the record is the most secure solution.
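The envelope pattern described above can be sketched as follows. This is a minimal illustration that assumes the `cryptography` package is available; a locally generated AES key stands in for the KMS customer managed key, whereas a real application would call the KMS GenerateDataKey operation and never hold the KEK itself.

```python
# Envelope encryption sketch: a per-record DEK encrypts the large payload,
# and the DEK is wrapped by a key-encryption key (KEK). In production the
# wrap/unwrap steps are performed by AWS KMS, not locally.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_key(kek: bytes, dek: bytes) -> bytes:
    # KMS would perform this step; here the DEK is wrapped locally with AES-GCM.
    nonce = os.urandom(12)
    return nonce + AESGCM(kek).encrypt(nonce, dek, None)

def unwrap_key(kek: bytes, wrapped: bytes) -> bytes:
    return AESGCM(kek).decrypt(wrapped[:12], wrapped[12:], None)

def encrypt_record(kek: bytes, plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)   # fresh data encryption key
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(dek).encrypt(nonce, plaintext, None)
    # Store the *encrypted* DEK alongside the ciphertext, as in option D.
    return {"ciphertext": ciphertext, "encrypted_dek": wrap_key(kek, dek)}

def decrypt_record(kek: bytes, record: dict) -> bytes:
    dek = unwrap_key(kek, record["encrypted_dek"])
    ct = record["ciphertext"]
    return AESGCM(dek).decrypt(ct[:12], ct[12:], None)

kek = AESGCM.generate_key(bit_length=256)       # stands in for the KMS CMK
record = encrypt_record(kek, b"patient PHI payload")
assert decrypt_record(kek, record) == b"patient PHI payload"
```

Because only the wrapped DEK is stored with the row, a database administrator who reads the record sees only ciphertext; decrypting requires permission to use the KMS key.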
A developer needs to use Amazon DynamoDB to store customer orders. The developer’s company requires all customer data to be encrypted at rest with a key that the company generates.
What should the developer do to meet these requirements?
- A . Create the DynamoDB table with encryption set to None. Code the application to use the key to decrypt the data when the application reads from the table. Code the application to use the key to encrypt the data when the application writes to the table.
- B . Store the key by using AWS KMS. Choose an AWS KMS customer managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.
- C . Store the key by using AWS KMS. Create the DynamoDB table with default encryption. Include the kms:Encrypt parameter with the Amazon Resource Name (ARN) of the AWS KMS key when using the DynamoDB SDK.
- D . Store the key by using AWS KMS. Choose an AWS KMS AWS managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.
B
Explanation:
Requirement Summary:
Store customer orders in DynamoDB
Must encrypt data at rest
Company wants to use a key it generates (i.e., customer managed key)
Evaluate Options:
Option A: Manual client-side encryption with no encryption at rest on the table adds application complexity and key-management burden. Not suitable.
Option B: DynamoDB encryption at rest supports an AWS KMS customer managed key, which the company can create (including from its own imported key material). The developer supplies the key's ARN when creating the table. Correct option.
Option C: There is no kms:Encrypt parameter in the DynamoDB SDK; encryption at rest is configured on the table, not per API call. Not suitable.
Option D: An AWS managed key is created and controlled by AWS, so it does not satisfy the requirement for a key the company generates. Not suitable.
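The table configuration for option B can be sketched as the parameters below. The table name, attribute names, and the key ARN are placeholders; a real application would pass this dictionary to the boto3 `create_table` call shown in the comment.

```python
# Illustrative parameters for creating a DynamoDB table encrypted at rest
# with a customer managed KMS key (all names/ARNs are placeholders).
table_params = {
    "TableName": "CustomerOrders",
    "AttributeDefinitions": [{"AttributeName": "OrderId", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "OrderId", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",
    "SSESpecification": {
        "Enabled": True,
        "SSEType": "KMS",  # server-side encryption with an AWS KMS key
        "KMSMasterKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
    },
}
# A real application would then call:
# boto3.client("dynamodb").create_table(**table_params)
```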
An AWS Lambda function requires read access to an Amazon S3 bucket and requires read/write access to an Amazon DynamoDB table. The correct IAM policy already exists.
What is the MOST secure way to grant the Lambda function access to the S3 bucket and the DynamoDB table?
- A . Attach the existing IAM policy to the Lambda function.
- B . Create an IAM role for the Lambda function. Attach the existing IAM policy to the role. Attach the role to the Lambda function.
- C . Create an IAM user with programmatic access. Attach the existing IAM policy to the user. Add the user access key ID and secret access key as environment variables in the Lambda function.
- D . Add the AWS account root user access key ID and secret access key as encrypted environment variables in the Lambda function.
B
Explanation:
The most secure and AWS-recommended way for a Lambda function to access other AWS services is to use an IAM execution role. Lambda assumes this role at runtime and receives temporary credentials that are automatically rotated by AWS. The developer attaches the necessary permissions (the existing IAM policy) to that role, and then configures the Lambda function to use the role.
Option B follows least privilege and avoids long-term credentials. It also integrates with AWS security tooling: IAM Access Analyzer, CloudTrail, and policy boundaries can all be applied cleanly. Because the policy already exists, this requires minimal extra work: create/choose the execution role, attach the policy, and assign the role to the function.
Option A is not correct because IAM policies are attached to IAM identities (users, groups, roles), not directly to Lambda functions as standalone entities. Lambda permissions are granted through the function’s execution role.
Option C is insecure because it uses long-term IAM user access keys embedded in environment variables. Even if encrypted, this expands the blast radius, complicates rotation, and contradicts AWS best practices for avoiding static credentials inside code/runtime.
Option D is extremely insecure and noncompliant. Root user access keys should not be used for applications and should generally not exist.
Therefore, create and attach the existing policy to a Lambda execution IAM role and assign that role to the function.
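The role setup described above can be sketched as follows. The trust policy is the standard document that lets the Lambda service assume the role; the role name, function name, and policy ARN in the CLI comments are placeholders.

```python
# Trust policy that allows the Lambda service to assume the execution role.
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Wiring it up (placeholder names), shown as AWS CLI comments:
# aws iam create-role --role-name my-fn-role \
#     --assume-role-policy-document file://trust.json
# aws iam attach-role-policy --role-name my-fn-role \
#     --policy-arn <existing-policy-arn>
# aws lambda update-function-configuration --function-name my-fn \
#     --role arn:aws:iam::<account-id>:role/my-fn-role
trust_json = json.dumps(trust_policy)
```

At runtime, Lambda assumes this role and injects temporary, auto-rotated credentials into the execution environment, so the function code needs no stored keys at all.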
A company runs applications on Amazon EKS containers. The company sends application logs from the containers to an Amazon CloudWatch Logs log group. The company needs to process log data in real time based on a specific error in the application logs.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create an Amazon SNS topic that has a subscription filter policy.
- B . Create a subscription filter on the log group that has a filter pattern.
- C . Set up an Amazon CloudWatch agent operator to manage the trace collection daemon in Amazon EKS.
- D . Create an AWS Lambda function to process the logs.
- E . Create an Amazon EventBridge rule to invoke the AWS Lambda function on a schedule.
B,D
Explanation:
Requirement Summary:
EKS containers send logs to CloudWatch Logs
Need to process logs in real time
Trigger logic based on a specific error in logs
Evaluate Options:
Option A: SNS topic with filter policy
SNS filter policies work on message attributes, not on CloudWatch Logs subscription filters
Option B: Subscription filter on log group
This enables real-time log processing
You can create a subscription filter with a pattern matching specific error strings. The filter sends matched log events to a Lambda function or Kinesis.
Option C: CloudWatch agent operator for trace collection
Irrelevant for log processing; used for monitoring and tracing, not real-time log filtering
Option D: Lambda function to process logs
Once logs match the pattern, Lambda can process and act (e.g., alert, store, analyze)
Option E: EventBridge rule on a schedule
Not real-time
Scheduled EventBridge rules are for cron-like tasks, not log stream processing
Subscription filters:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
Real-time log processing with Lambda:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#LambdaExample
Logs in EKS to CloudWatch: https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html
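Combining options B and D can be sketched as the subscription-filter parameters below. The log group name, filter name, and Lambda ARN are placeholders; `?ERROR ?Exception` is CloudWatch Logs filter-pattern syntax matching events that contain either term.

```python
# Illustrative parameters for wiring a CloudWatch Logs subscription filter
# to a Lambda function (names and ARNs are placeholders).
filter_params = {
    "logGroupName": "/aws/containerinsights/my-cluster/application",
    "filterName": "app-error-filter",
    "filterPattern": "?ERROR ?Exception",  # match the specific error strings
    "destinationArn": "arn:aws:lambda:us-east-1:111122223333:function:process-errors",
}
# A real application would then call:
# boto3.client("logs").put_subscription_filter(**filter_params)
# (Lambda must also grant CloudWatch Logs invoke permission via add-permission.)
```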
An application uses Lambda functions to extract metadata from files uploaded to an S3 bucket; the metadata is stored in Amazon DynamoDB. The application starts behaving unexpectedly, and the developer wants to examine the logs of the Lambda function code for errors.
Based on this system configuration, where would the developer find the logs?
- A . Amazon S3
- B . AWS CloudTrail
- C . Amazon CloudWatch
- D . Amazon DynamoDB
C
Explanation:
Amazon CloudWatch is the service that collects and stores logs from AWS Lambda functions. The developer can use CloudWatch Logs Insights to query and analyze the logs for errors and metrics.
Option A is not correct because Amazon S3 is a storage service that does not store Lambda function logs.
Option B is not correct because AWS CloudTrail is a service that records API calls and events for AWS services, not Lambda function logs.
Option D is not correct because Amazon DynamoDB is a database service that does not store Lambda function logs.
Reference: AWS Lambda Monitoring, CloudWatch Logs Insights
A developer is modifying an existing AWS Lambda function. While checking the code, the developer notices hardcoded parameter values for an Amazon RDS for SQL Server user name, password, database host, and port. There are also hardcoded parameter values for an Amazon DynamoDB table, an Amazon S3 bucket, and an Amazon Simple Notification Service (Amazon SNS) topic.
The developer wants to securely store the parameter values outside the code in an encrypted format and wants to turn on rotation for the credentials. The developer also wants to be able to reuse the parameter values from other applications and to update the parameter values without modifying code.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an RDS database secret in AWS Secrets Manager. Set the user name, password, database, host, and port. Turn on secret rotation. Create encrypted Lambda environment variables for the DynamoDB table, S3 bucket, and SNS topic.
- B . Create an RDS database secret in AWS Secrets Manager. Set the user name, password, database, host, and port. Turn on secret rotation. Create Secure String parameters in AWS Systems Manager Parameter Store for the DynamoDB table, S3 bucket, and SNS topic.
- C . Create RDS database parameters in AWS Systems Manager Parameter Store for the user name, password, database, host, and port. Create encrypted Lambda environment variables for the DynamoDB table, S3 bucket, and SNS topic. Create a Lambda function and set the logic for the credentials rotation task. Schedule the credentials rotation task in Amazon EventBridge.
- D . Create RDS database parameters in AWS Systems Manager Parameter Store for the user name, password, database, host, and port. Store the DynamoDB table, S3 bucket, and SNS topic in Amazon S3. Create a Lambda function and set the logic for the credentials rotation. Invoke the Lambda function on a schedule.
B
Explanation:
This solution will meet the requirements by using AWS Secrets Manager and AWS Systems Manager Parameter Store to securely store the parameter values outside the code in an encrypted format. AWS Secrets Manager is a service that helps protect secrets such as database credentials by encrypting them with AWS Key Management Service (AWS KMS) and enabling automatic rotation of secrets. The developer can create an RDS database secret in AWS Secrets Manager and set the user name, password, database, host, and port for accessing the RDS database. The developer can also turn on secret rotation, which will change the database credentials periodically according to a specified schedule or event.
AWS Systems Manager Parameter Store is a service that provides secure and scalable storage for configuration data and secrets. The developer can create Secure String parameters in AWS Systems Manager Parameter Store for the DynamoDB table, S3 bucket, and SNS topic, which will encrypt them with AWS KMS. The developer can also reuse the parameter values from other applications and update them without modifying code.
Option A is not optimal because it will create encrypted Lambda environment variables for the DynamoDB table, S3 bucket, and SNS topic, which may not be reusable or updatable without modifying code.
Option C is not optimal because it will create RDS database parameters in AWS Systems Manager Parameter Store, which does not support automatic rotation of secrets.
Option D is not optimal because it will store the DynamoDB table, S3 bucket, and SNS topic in Amazon S3, which may introduce additional costs and complexity for accessing configuration data.
Reference: AWS Secrets Manager, AWS Systems Manager Parameter Store
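The runtime side of option B can be sketched as the loader below. The secret name and parameter path are placeholders, and minimal in-memory stubs are included so the sketch runs without AWS; real code would pass boto3 `secretsmanager` and `ssm` clients instead.

```python
# Hedged sketch: fetch DB credentials from Secrets Manager and other
# settings from Parameter Store SecureStrings (names are illustrative).
import json

def load_config(sm_client, ssm_client):
    # Database credentials come from Secrets Manager (rotation is managed
    # by the service, so no code change is needed when they rotate).
    secret = json.loads(
        sm_client.get_secret_value(SecretId="prod/rds/sqlserver")["SecretString"]
    )
    # Reusable, encrypted configuration comes from Parameter Store.
    table = ssm_client.get_parameter(
        Name="/app/dynamodb-table", WithDecryption=True
    )["Parameter"]["Value"]
    return secret, table

# Minimal stubs standing in for boto3 clients so the sketch is runnable.
class _FakeSM:
    def get_secret_value(self, SecretId):
        return {"SecretString": json.dumps(
            {"username": "app", "password": "x",
             "host": "db.example.com", "port": 1433})}

class _FakeSSM:
    def get_parameter(self, Name, WithDecryption):
        return {"Parameter": {"Value": "orders-table"}}

secret, table = load_config(_FakeSM(), _FakeSSM())
```

Because the values live outside the deployment package, other applications can read the same secret and parameters, and updates take effect without code changes.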
A developer is debugging an application that uses an AWS Lambda function. The function intermittently fails during a 1-hour window. Logs are sent to an Amazon CloudWatch Logs log group. The developer must collect logs related to failures and capture the dates and times of those failures.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Use AWS CLI commands in AWS CloudShell to manually browse the log group.
- B . Use CloudWatch Logs Insights to query the log group for error patterns.
- C . Download log files locally and search them with a text editor.
- D . Export the log group to Amazon S3 and query it with Amazon Athena.
B
Explanation:
Amazon CloudWatch Logs Insights is a purpose-built log analytics tool that allows developers to run interactive queries directly against CloudWatch Logs. AWS documentation highlights Logs Insights as the fastest and most efficient way to search, filter, and analyze log data without exporting it.
Logs Insights queries can filter error messages, extract timestamps, and aggregate failure counts over specific time windows. This directly satisfies the requirement to identify when intermittent failures occurred and what errors caused them.
Manual browsing (Option A) and local downloads (Option C) are time-consuming and do not scale. Exporting logs to S3 and querying with Athena (Option D) introduces unnecessary infrastructure and operational overhead.
Therefore, CloudWatch Logs Insights is the most operationally efficient solution.
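A query of the kind described above can be sketched as follows. The log group name and epoch timestamps are placeholders for the function's log group and the 1-hour failure window; the query string uses real Logs Insights syntax (`fields`, `filter`, `sort`).

```python
# Illustrative Logs Insights query to pull error lines with their timestamps
# during a 1-hour window (log group and times are placeholders).
query = (
    "fields @timestamp, @message "
    "| filter @message like /ERROR/ "
    "| sort @timestamp asc"
)
query_params = {
    "logGroupName": "/aws/lambda/my-function",
    "startTime": 1_700_000_000,  # epoch seconds: start of the window
    "endTime": 1_700_003_600,    # one hour later
    "queryString": query,
}
# A real application would then call:
# boto3.client("logs").start_query(**query_params)
# and poll get_query_results() with the returned queryId.
```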
A social media company is designing a platform that allows users to upload data, which is stored in Amazon S3. Users can upload data encrypted with a public key. The company wants to ensure that only the company can decrypt the uploaded content using an asymmetric encryption key. The data must always be encrypted in transit and at rest.
Which solution will meet these requirements?
- A . Use server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the data.
- B . Use server-side encryption with customer-provided encryption keys (SSE-C) to encrypt the data.
- C . Use client-side encryption with a data key to encrypt the data.
- D . Use client-side encryption with a customer-managed encryption key to encrypt the data.
D
Explanation:
Step 1: Problem Understanding
Asymmetric Encryption Requirement: Users encrypt data with a public key, and only the company can decrypt it using a private key.
Data Encryption at Rest and In Transit: The data must be encrypted during upload (in transit) and when stored in Amazon S3 (at rest).
Step 2: Solution Analysis
Option A: Server-side encryption with Amazon S3 managed keys (SSE-S3).
Amazon S3 manages the encryption and decryption keys.
This does not meet the requirement for asymmetric encryption, where the company uses a private key.
Not suitable.
Option B: Server-side encryption with customer-provided keys (SSE-C).
Requires the user to supply encryption keys during the upload process.
Does not align with the asymmetric encryption requirement.
Not suitable.
Option C: Client-side encryption with a data key.
Data key encryption is symmetric, not asymmetric.
Does not satisfy the requirement for a public-private key pair.
Not suitable.
Option D: Client-side encryption with a customer-managed encryption key.
Data is encrypted on the client side using the public key.
Only the company can decrypt the data using the corresponding private key.
Data remains encrypted during upload (in transit) and in S3 (at rest).
Correct option.
Step 3: Implementation Steps for Option D
Generate Key Pair:
The company generates an RSA key pair (public/private) for encryption and decryption.
Encrypt Data on Client Side:
Use the public key to encrypt the data before uploading to S3.
S3 Upload:
Upload the encrypted data to S3 over an HTTPS connection.
Decrypt Data on the Server:
Use the private key to decrypt data when needed.
Reference: Amazon S3 Encryption Options, Asymmetric Key Cryptography in AWS
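The flow in the implementation steps above can be sketched with RSA-OAEP. This is a minimal illustration assuming the `cryptography` package; key size, padding, and the sample payload are illustrative, and a small payload is used because raw RSA can only encrypt a few hundred bytes (real systems typically wrap a symmetric data key this way instead).

```python
# Client-side asymmetric encryption sketch: users encrypt with the public
# key; only the company's private key can decrypt.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Step 1: the company generates the key pair and keeps the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Step 2: a user encrypts with the distributed public key before the
# upload to S3 (which itself travels over HTTPS, covering transit).
ciphertext = public_key.encrypt(b"user-uploaded data", oaep)

# Step 4: only the company can recover the plaintext.
plaintext = private_key.decrypt(ciphertext, oaep)
```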
A large company has its application components distributed across multiple AWS accounts. The company needs to collect and visualize trace data across these accounts.
What should be used to meet these requirements?
- A . AWS X-Ray
- B . Amazon CloudWatch
- C . Amazon VPC flow logs
- D . Amazon OpenSearch Service
