Practice Free DVA-C02 Exam Online Questions
A company had an Amazon RDS for MySQL DB instance that was named mysql-db. The DB instance was deleted within the past 90 days. A developer needs to find which IAM user or role deleted the DB instance in the AWS environment.
Which solution will provide this information?
- A . Retrieve the AWS CloudTrail events for the resource mysql-db where the event name is DeleteDBInstance. Inspect each event.
- B . Retrieve the Amazon CloudWatch log events from the most recent log stream within the rds/mysql-db log group. Inspect the log events.
- C . Retrieve the AWS X-Ray trace summaries. Filter by services with the name mysql-db. Inspect the ErrorRootCauses values within each summary.
- D . Retrieve the AWS Systems Manager deletions inventory. Filter the inventory by deletions that have a TypeName value of RDS. Inspect the deletion details.
A
Explanation:
AWS CloudTrail records management (control plane) API calls, including the RDS DeleteDBInstance action, and retains 90 days of event history. Filtering the events for the resource mysql-db where the event name is DeleteDBInstance reveals the IAM user or role that made the call in each event's userIdentity element.
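As a minimal sketch of option A's approach, the function below filters CloudTrail-style lookup results for the DeleteDBInstance call on mysql-db and reports which IAM identity made it. The event shape mirrors CloudTrail event records (the serialized CloudTrailEvent JSON with eventName, requestParameters, and userIdentity); the sample data and ARN are invented.

```python
import json

def find_deleter(events, db_instance_id="mysql-db"):
    """Return the ARN of the identity that deleted the given DB instance."""
    for event in events:
        # Each lookup result carries the full event record as a JSON string.
        detail = json.loads(event["CloudTrailEvent"])
        if (detail.get("eventName") == "DeleteDBInstance"
                and detail.get("requestParameters", {})
                          .get("dBInstanceIdentifier") == db_instance_id):
            return detail["userIdentity"]["arn"]
    return None

# Invented sample event standing in for a CloudTrail lookup result.
sample_events = [
    {"CloudTrailEvent": json.dumps({
        "eventName": "DeleteDBInstance",
        "requestParameters": {"dBInstanceIdentifier": "mysql-db"},
        "userIdentity": {"arn": "arn:aws:iam::111122223333:role/ops-admin"},
    })},
]

print(find_deleter(sample_events))
```

In practice the events list would come from the CloudTrail console's event history or a LookupEvents call filtered on the resource name.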
A company runs a serverless application that uses several AWS Lambda functions. The existing Lambda functions run in a VPC. The Lambda functions query public APIs successfully.
To add a new feature to the application, a developer creates a new Lambda function to query external public APIs. The new Lambda function must store aggregated results in an Amazon RDS database that is in a private subnet of the VPC. The developer configures VPC access for the new Lambda function and sets up a working connection to the RDS database. The requests that the new Lambda function makes to the external APIs fail. However, requests from the developer’s local workstation to the same APIs are successful.
Which solution will meet this requirement?
- A . Provision an elastic network interface for the new Lambda function.
- B . Provision a NAT gateway in a public subnet in the VPC.
- C . Provision an outbound rule for the new Lambda function’s security group to grant internet access.
- D . Provision a gateway VPC endpoint in a public subnet in the VPC.
B
Explanation:
When a Lambda function is configured to run inside a VPC, it receives ENIs in the selected subnets and uses those subnets’ routing to reach other networks. If the function is placed in private subnets (common when it needs to access an RDS database in private subnets), it typically does not have a direct route to the internet. As a result, outbound calls to public external APIs will fail unless the VPC provides a controlled egress path.
The standard AWS networking pattern for private-subnet internet access is a NAT gateway (or NAT instance) deployed in a public subnet, with a route from the private subnet(s) to the NAT gateway
(0.0.0.0/0 → NAT). The NAT gateway then uses an internet gateway attached to the VPC to reach the public internet while keeping the Lambda function (and the RDS database) in private address space. This resolves the connectivity issue while maintaining the security posture of private subnets.
Option A is not the solution: Lambda automatically provisions ENIs for VPC-enabled functions; manually provisioning an ENI does not provide internet access.
Option C (security group outbound rules) is necessary but not sufficient: security groups control allowed traffic (and allow all outbound traffic by default), but without a proper route to the internet (via NAT), the packets still cannot reach external endpoints.
Option D (gateway VPC endpoint) is for AWS services like S3 and DynamoDB (gateway endpoints), not for arbitrary public internet APIs, and “in a public subnet” is not how gateway endpoints are used; they are associated with route tables.
Therefore, the correct solution is B: add a NAT gateway in a public subnet and update private subnet route tables so the new Lambda function can reach external public APIs while still accessing the private RDS database.
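As a small sketch of the routing fix option B describes, the check below inspects a route table for the 0.0.0.0/0 route to a NAT gateway that a private subnet needs for egress. The route data is shaped like EC2 DescribeRouteTables output; the IDs and CIDRs are invented.

```python
def has_nat_egress(route_table):
    """True if the route table sends default traffic to a NAT gateway."""
    for route in route_table["Routes"]:
        if (route.get("DestinationCidrBlock") == "0.0.0.0/0"
                and route.get("NatGatewayId", "").startswith("nat-")):
            return True
    return False

# A private subnet's route table after adding the 0.0.0.0/0 -> NAT route.
private_rt = {"Routes": [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0abc123"},
]}
print(has_nat_egress(private_rt))  # True
```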
A developer is deploying an application on Amazon EC2 instances that run in Account A. In certain cases, this application needs to read data from a private Amazon S3 bucket in Account B. The developer must provide the application access to the S3 bucket without exposing the S3 bucket to anyone else.
Which combination of actions should the developer take to meet these requirements? (Select TWO.)
- A . Create an IAM role with S3 read permissions in Account B.
- B . Update the instance profile IAM role in Account A with S3 read permissions.
- C . Make the S3 bucket public with limited access for Account A.
- D . Configure the bucket policy in Account B to grant permissions to the instance profile role.
- E . Add a trust policy that allows s3:Get* permissions to the IAM role in Account B.
A,D
Explanation:
This is a classic cross-account S3 access requirement: EC2 instances in Account A must read from a private S3 bucket in Account B without making the bucket public.
The correct pattern is to grant access using IAM + S3 bucket policy while keeping the bucket private. In Account B, you create an IAM role that has the necessary S3 read permissions (such as s3:GetObject and potentially s3:ListBucket depending on access needs). That is Option A.
However, permissions on the role alone are not sufficient, because the S3 bucket is in a different account and is private by default. AWS requires that the bucket owner explicitly grants access to the external principal. This is done by adding a bucket policy in Account B that allows the principal (either the role in Account A or a role session via STS) to access the bucket objects. That is Option D.
Option B is not correct in the “select two” sense because adding permissions to Account A’s instance profile role does not, by itself, grant access to a bucket owned by Account B. You still need the bucket policy or another resource-based policy from the bucket owner.
Option C violates the requirement (“without exposing the S3 bucket to anyone else”). Making the bucket public is unnecessary and insecure.
Option E is incorrect because a trust policy controls who can assume a role (sts:AssumeRole), not what S3 permissions the role has. Also, trust policies do not grant s3:Get* access.
Therefore, create an IAM role with S3 read permissions in Account B and grant access via the Account B bucket policy.
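As a sketch of the resource-based policy option D describes, the helper below builds the Account B bucket policy that grants the Account A instance-profile role read access while keeping the bucket private. The account ID, role name, and bucket name are placeholders.

```python
import json

def build_bucket_policy(role_arn, bucket):
    """Bucket policy allowing one cross-account role to read objects."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": role_arn},     # the Account A role only
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }

policy = build_bucket_policy(
    "arn:aws:iam::111111111111:role/app-instance-role", "account-b-bucket")
print(json.dumps(policy, indent=2))
```

Scoping the Principal to the single role ARN is what keeps the bucket unexposed to anyone else.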
A company maintains a REST service using Amazon API Gateway and the API Gateway native API key validation. The company recently launched a new registration page, which allows users to sign up for the service. The registration page creates a new API key using CreateApiKey and sends the new key to the user. When the user attempts to call the API using this key, the user receives a 403 Forbidden error. Existing users are unaffected and can still call the API.
What code updates will grant these new users access to the API?
- A . The createDeployment method must be called so the API can be redeployed to include the newly created API key.
- B . The updateAuthorizer method must be called to update the API’s authorizer to include the newly created API key.
- C . The importApiKeys method must be called to import all newly created API keys into the current stage of the API.
- D . The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan.
D
Explanation:
With API Gateway native API key validation, creating a key with CreateApiKey is not enough: the key is honored only after it is associated with a usage plan that covers the API stage. Until createUsagePlanKey links the new key to the correct usage plan, requests that present the key are rejected with 403 Forbidden, which is why only newly registered users are affected.
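As a toy model of why the new users receive 403, the sketch below treats a key as authorized only when a usage plan both covers the stage and lists the key, which is the association that createUsagePlanKey creates. Plain dicts stand in for API Gateway state; the IDs are invented.

```python
def key_is_authorized(api_key_id, stage, usage_plans):
    """A key works only via a usage plan covering the stage that lists it."""
    for plan in usage_plans:
        if stage in plan["stages"] and api_key_id in plan["key_ids"]:
            return True
    return False

plans = [{"stages": ["prod"], "key_ids": ["existing-key"]}]
print(key_is_authorized("new-key", "prod", plans))   # False -> 403 Forbidden
plans[0]["key_ids"].append("new-key")                # the CreateUsagePlanKey step
print(key_is_authorized("new-key", "prod", plans))   # True
```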
A developer wants to add request validation to a production environment Amazon API Gateway API.
The developer needs to test the changes before the API is deployed to the production environment.
For the test, the developer will send test requests to the API through a testing tool.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Export the existing API to an OpenAPI file. Create a new API. Import the OpenAPI file. Modify the new API to add request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.
- B . Modify the existing API to add request validation. Deploy the updated API to a new API Gateway stage. Perform the tests. Deploy the updated API to the API Gateway production stage.
- C . Create a new API. Add the necessary resources and methods, including new request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.
- D . Clone the existing API. Modify the new API to add request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.
D
Explanation:
This solution allows the developer to test the changes without affecting the production environment. Cloning an API creates a copy of the API definition that can be modified independently. The developer can then add request validation to the new API and test it using a testing tool. After verifying that the changes work as expected, the developer can apply the same changes to the existing API and deploy it to production.
Reference: Clone an API; Enable Request Validation for an API in API Gateway
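Conceptually, API Gateway request validation rejects a request whose body is missing required properties before it ever reaches the backend. The sketch below mimics that behavior with a simple required-fields check; a real validator evaluates a JSON schema model attached to the method.

```python
def validate_request(body, required_fields):
    """Return (is_valid, missing_fields), like a basic body validator."""
    missing = [f for f in required_fields if f not in body]
    return (True, []) if not missing else (False, missing)

# A request missing the "price" property fails validation up front.
ok, missing = validate_request({"name": "widget"}, ["name", "price"])
print(ok, missing)  # False ['price']
```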
A developer has created a data collection application that uses Amazon API Gateway, AWS Lambda, and Amazon S3. The application’s users periodically upload data files and wait for the validation status to be reflected on a processing dashboard. The validation process is complex and time-consuming for large files.
Some users are uploading dozens of large files and have to wait and refresh the processing dashboard to see if the files have been validated. The developer must refactor the application to immediately update the validation result on the user’s dashboard without reloading the full dashboard.
What is the MOST operationally efficient solution that meets these requirements?
- A . Integrate the client with an API Gateway WebSocket API. Save the user-uploaded files with the WebSocket connection ID. Push the validation status to the connection ID when the processing is complete to initiate an update of the UI.
- B . Launch an Amazon EC2 micro instance, and set up a WebSocket server. Send the user-uploaded file and user detail to the EC2 instance after the user uploads the file. Use the WebSocket server to send updates to the UI when the uploaded file is processed.
- C . Save the user’s email address along with the user-uploaded file. When the validation process is complete, send an email notification through Amazon SNS to the user who uploaded the file.
- D . Save the user-uploaded file and user detail to Amazon DynamoDB. Use Amazon DynamoDB Streams with Amazon SNS push notifications to send updates to the browser to update the UI.
A
Explanation:
The requirement is real-time UI updates “immediately” without refreshing the dashboard. The most operationally efficient AWS-native method is to use API Gateway WebSocket APIs to push updates from the backend to the browser.
With option A, the client establishes a WebSocket connection and receives a connection ID. The application can associate uploads (or job IDs) with that connection ID (commonly storing the mapping in DynamoDB or another datastore). When the long-running validation finishes, the backend uses the WebSocket management API to post a message to the specific connection ID, and the browser updates the UI dynamically. This eliminates polling/refresh and scales well without managing servers.
Option B requires running and maintaining an EC2-hosted WebSocket server (patching, scaling, uptime), which is more operational overhead.
Option C uses email notifications, which do not update the UI immediately and require user action before the dashboard reflects the result.
Option D is not appropriate: SNS does not directly push notifications to a web browser in a standard way, and it adds unnecessary complexity.
Therefore, using API Gateway WebSocket to push validation results to connected clients is the most operationally efficient solution.
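As a sketch of option A's flow, the code below records each upload against the uploader's WebSocket connection ID and then builds the message the backend would post back to that connection (via the API Gateway @connections management API) when validation finishes. The index, field names, and IDs are invented; in practice the mapping would live in a datastore such as DynamoDB.

```python
upload_index = {}  # file_id -> connection_id (e.g., a DynamoDB item)

def register_upload(file_id, connection_id):
    """Remember which WebSocket connection uploaded this file."""
    upload_index[file_id] = connection_id

def build_push(file_id, status):
    """Build the targeted push sent when validation completes."""
    return {
        "connection_id": upload_index[file_id],
        "payload": {"fileId": file_id, "validationStatus": status},
    }

register_upload("report-42.csv", "Wk3Fcei1PHcCIbw=")
msg = build_push("report-42.csv", "VALID")
print(msg["connection_id"], msg["payload"]["validationStatus"])
```

The browser's WebSocket handler receives the payload and updates just the affected dashboard row, with no polling or page reload.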
A developer has an application that pushes files from an on-premises local server to an Amazon S3 bucket. The application uses an AWS access key and a secret key that are stored on the server for authentication. The application calls AWS STS to assume a role with access to perform the S3 PUT operation to upload the file.
The developer is migrating the server to an Amazon EC2 instance. The EC2 instance is configured with an IAM instance profile in the same AWS account that owns the S3 bucket.
What is the MOST secure solution for the developer to use to migrate the automation code?
- A . Remove the code that calls the STS AssumeRole operation. Use the same access key and secret key from the server to access the S3 bucket.
- B . Remove the access key and the secret key. Use the STS AssumeRole operation to add permissions to access the S3 bucket.
- C . Remove the access key, the secret key, and the code that calls the STS AssumeRole operation. Use an IAM instance profile role that grants access to the S3 bucket.
- D . Remove the access key, the secret key, and the code that calls the STS AssumeRole operation.
Create a new access key and secret key. Use the new keys to access the S3 bucket.
C
Explanation:
The most secure approach on Amazon EC2 is to avoid long-term static credentials entirely and rely on IAM roles for Amazon EC2 (instance profiles). When an EC2 instance is associated with an instance profile, AWS automatically provides temporary security credentials to the instance through the instance metadata service (IMDS). The AWS SDK and CLI can retrieve and rotate these credentials automatically, eliminating the need to store an access key and secret key on disk. This reduces the risk of credential leakage and removes the operational burden of key rotation.
In this scenario, the EC2 instance is already configured with an IAM instance profile in the same account as the S3 bucket. Because the bucket is in the same account and the instance profile can be granted the required permissions, there is no security benefit to keeping a separate stored access key/secret key or performing an additional STS AssumeRole hop. The simplest and most secure design is to attach an IAM policy to the instance profile role that allows the required s3:PutObject (and any related actions such as s3:PutObjectAcl if needed) on the specific bucket/prefix. The application code can then call S3 directly using the AWS SDK, which will automatically use the instance profile credentials.
Option A continues to use long-lived static credentials stored on the server, which is less secure than instance profiles.
Option D is the same problem with different keys: still long-lived credentials that must be protected and rotated.
Option B removes stored keys but keeps an AssumeRole call; although AssumeRole uses temporary credentials, it adds complexity and is unnecessary here because the instance already has an IAM role and the S3 bucket is in the same account.
Therefore, the most secure solution is C: remove static keys and the AssumeRole code and use the instance profile role with least-privilege S3 permissions.
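As a sketch of the least-privilege policy solution C implies, the helper below builds the identity policy that would be attached to the EC2 instance profile role instead of storing keys on the instance. The bucket name and prefix are placeholders.

```python
def build_put_policy(bucket, prefix="uploads"):
    """Instance-role policy allowing PUT only under one bucket prefix."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
        }],
    }

policy = build_put_policy("media-ingest-bucket")
print(policy["Statement"][0]["Resource"])
```

With this policy on the instance profile role, the application code simply calls S3 with the SDK; the SDK picks up the role's temporary credentials from instance metadata automatically.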
A company operates a media streaming platform that delivers on-demand video content to users from around the world. User requests flow through an Amazon CloudFront distribution, an Amazon API Gateway REST API, AWS Lambda functions, and Amazon DynamoDB tables.
Some users have reported intermittent buffering issues and delays when users try to start a video stream. The company needs to investigate the issues to discover the underlying cause.
Which solution will meet this requirement?
- A . Enable AWS X-Ray tracing for the REST API, Lambda functions, and DynamoDB tables. Analyze the service map to identify any performance bottlenecks or errors.
- B . Enable logging in API Gateway. Ensure that each Lambda function is configured to send logs to Amazon CloudWatch. Use CloudWatch Logs Insights to query the log data.
- C . Use AWS Config to review details of any recent configuration changes to AWS resources in the application that could result in increased latency for users.
- D . Use AWS CloudTrail to track AWS resources in all AWS Regions. Stream CloudTrail data to an Amazon CloudWatch Logs log group. Enable CloudTrail Insights. Set up Amazon SNS notifications if unusual API activity is detected.
A
Explanation:
Requirement Summary:
- Users experience buffering/delays when starting a video stream.
- Architecture: CloudFront → API Gateway → Lambda → DynamoDB.
- The company needs to identify the root cause of the performance issues.
Evaluate Options:
A: Enable AWS X-Ray tracing. Ideal for end-to-end tracing: it visualizes latency across services (API Gateway, Lambda, DynamoDB), creates a service map for easy identification of bottlenecks or errors, and is designed specifically for distributed tracing and performance monitoring.
B: CloudWatch Logs Insights. Helpful for querying logs, but it lacks the visual trace linkage across services that X-Ray provides and does not show where latency accumulates.
C: AWS Config. Tracks configuration changes, not runtime performance.
D: CloudTrail + CloudWatch Logs. More useful for audit logging, not for tracing performance or latency issues.
X-Ray overview: https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html
Service map: https://docs.aws.amazon.com/xray/latest/devguide/xray-console-service-map.html
Tracing API Gateway: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-xray.html
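The kind of per-service latency comparison the X-Ray service map surfaces can be sketched as below, computed from hand-made segment timings (start/end are epoch-style seconds, as in X-Ray segment documents; the numbers are invented).

```python
def slowest_service(segments):
    """Name the segment with the largest elapsed time."""
    durations = {s["name"]: s["end_time"] - s["start_time"] for s in segments}
    return max(durations, key=durations.get)

# Mock timings for one request through the architecture.
segments = [
    {"name": "API Gateway", "start_time": 0.00, "end_time": 0.05},
    {"name": "Lambda",      "start_time": 0.05, "end_time": 0.25},
    {"name": "DynamoDB",    "start_time": 0.25, "end_time": 2.40},
]
print(slowest_service(segments))  # DynamoDB
```

With real traces, the service map aggregates these durations across many requests, so a node like DynamoDB standing out immediately points at the bottleneck.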
A company has a website that is developed in PHP and is launched using AWS Elastic Beanstalk. There is a new version of the website that needs to be deployed in the Elastic Beanstalk environment. The company cannot tolerate having the website offline if an update fails. Deployments must have minimal impact and rollback as soon as possible.
Which deployment policy will meet these requirements?
- A . All at once
- B . Rolling
- C . Snapshots
- D . Immutable
D
Explanation:
The Immutable deployment method is the best choice when a company requires minimal downtime and automatic rollback in case of a failure. Here’s why:
In Immutable deployments, a new set of instances is launched with the updated version. These instances are tested and validated before they replace the old instances. This ensures zero downtime and immediate rollback if the deployment fails.
All at once (A) causes downtime because the update replaces all instances simultaneously.
Rolling deployments (B) update a few instances at a time, but if a failure occurs midway, downtime or partial unavailability can happen.
Snapshots (C) are not a deployment strategy in Elastic Beanstalk.
Reference: Elastic Beanstalk Deployment Policies
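Selecting the Immutable policy is done through Elastic Beanstalk option settings. The aws:elasticbeanstalk:command namespace and its DeploymentPolicy option are real; the helper below simply builds the option-settings structure that an .ebextensions file or an update-environment call would carry.

```python
def deployment_option_settings(policy="Immutable"):
    """Option settings that switch the environment's deployment policy."""
    return [{
        "Namespace": "aws:elasticbeanstalk:command",
        "OptionName": "DeploymentPolicy",
        "Value": policy,
    }]

settings = deployment_option_settings()
print(settings[0]["Value"])  # Immutable
```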
A developer compiles an AWS Lambda function and packages the result as a .zip file. The developer uses the Functions page on the Lambda console to attempt to upload the local packaged .zip file.
When pushing the package to Lambda, the console returns the following error:
(error message screenshot not available)
Which solutions can the developer use to publish the code? (Select TWO.)
- A . Upload the package to Amazon S3. Use the Functions page on the Lambda console to upload the package from the S3 location.
- B . Create an AWS Support ticket to increase the maximum package size.
- C . Use the update-function-code AWS CLI command. Pass the --publish parameter.
- D . Repackage the Lambda function as a Docker container image. Upload the image to Amazon Elastic Container Registry (Amazon ECR). Create a new Lambda function by using the Lambda console. Reference the image that is deployed to Amazon ECR.
- E . Sign the .zip file digitally. Create a new Lambda function by using the Lambda console. Update the configuration of the new Lambda function to include the Amazon Resource Name (ARN) of the code signing configuration.
A,D
Explanation:
Direct console uploads of .zip deployment packages are limited to 50 MB. Larger packages can be uploaded to Amazon S3 and referenced from the Lambda console (A), or the function can be repackaged as a container image of up to 10 GB stored in Amazon ECR (D). The package size limits are hard service quotas that a support ticket cannot raise, the CLI's direct upload is subject to the same .zip limits, and code signing does not change any size limit.
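A small decision helper can capture the documented Lambda package limits behind these options: 50 MB for a direct .zip upload (250 MB unzipped), while container images allow up to 10 GB. The function names and return strings below are illustrative only.

```python
MB = 1024 * 1024

def upload_path(zip_size_bytes):
    """Pick an upload route based on the direct-upload .zip limit (50 MB)."""
    if zip_size_bytes <= 50 * MB:
        return "direct console/CLI upload"
    return "upload via S3 or package as a container image in ECR"

print(upload_path(75 * MB))
```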
