Practice Free DVA-C02 Exam Online Questions
A developer must use multi-factor authentication (MFA) to access data in an Amazon S3 bucket that is in another AWS account.
Which AWS Security Token Service (AWS STS) API operation should the developer use with the MFA information to meet this requirement?
- A . AssumeRoleWithWebIdentity
- B . GetFederationToken
- C . AssumeRoleWithSAML
- D . AssumeRole
D
Explanation:
AWS STS AssumeRole: The central operation for assuming temporary security credentials, commonly used for cross-account access.
MFA Integration: The AssumeRole call can include MFA information to enforce multi-factor authentication.
Credentials for S3 Access: The returned temporary credentials would provide the necessary permissions to access the S3 bucket in the other account.
Reference: AWS STS AssumeRole API documentation: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
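The following is a minimal boto3 sketch of this pattern; the role ARN, MFA device ARN, and bucket name are placeholders for illustration only:

```python
import boto3

sts = boto3.client("sts")

# Assume the cross-account role, passing the MFA device serial number and the current token code.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountS3Role",   # placeholder role ARN
    RoleSessionName="mfa-s3-session",
    SerialNumber="arn:aws:iam::444455556666:mfa/developer",        # placeholder MFA device ARN
    TokenCode="123456",                                             # current MFA code from the device
)

creds = response["Credentials"]

# Use the returned temporary credentials to read the bucket in the other account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="example-cross-account-bucket"))   # placeholder bucket name
```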
A company created an application to consume and process data. The application uses Amazon SQS and AWS Lambda functions. The application is currently working as expected, but it occasionally receives several messages that it cannot process properly. The company needs to clear these messages to prevent the queue from becoming blocked. A developer must implement a solution that makes queue processing always operational. The solution must give the company the ability to defer the messages with errors and save these messages for further analysis.
What is the MOST operationally efficient solution that meets these requirements?
- A . Configure Amazon CloudWatch Logs to save the error messages to a separate log stream.
- B . Create a new SQS queue. Set the new queue as a dead-letter queue for the application queue. Configure the Maximum Receives setting.
- C . Change the SQS queue to a FIFO queue. Configure the message retention period to 0 seconds.
- D . Configure an Amazon CloudWatch alarm for Lambda function errors. Publish messages to an Amazon SNS topic to notify administrator users.
B
Explanation:
Using a dead-letter queue (DLQ) with Amazon SQS is the most operationally efficient solution for handling unprocessable messages.
Amazon SQS Dead-Letter Queue:
A DLQ is used to capture messages that fail processing after a specified number of attempts.
Allows the application to continue processing other messages without being blocked.
Messages in the DLQ can be analyzed later for debugging and resolution.
Why DLQ is the Best Option:
Operational Efficiency: Automatically defers messages with errors, ensuring the queue is not blocked.
Analysis Ready: Messages in the DLQ can be inspected to identify recurring issues.
Scalable: Works seamlessly with Lambda and SQS at scale.
Why Not Other Options:
Option A: Logs the messages but does not resolve the queue blockage issue.
Option C: FIFO queues and 0-second retention do not provide error handling or analysis capabilities.
Option D: Alerts administrators but does not handle or store the unprocessable messages.
Steps to Implement:
Create a new SQS queue to serve as the DLQ.
Attach the DLQ to the primary queue and configure the Maximum Receives setting.
Reference: Using Amazon SQS Dead-Letter Queues; Best Practices for Using Amazon SQS with AWS Lambda
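As a rough sketch (queue names are placeholders), the DLQ is attached to the source queue by setting a RedrivePolicy whose maxReceiveCount corresponds to the Maximum Receives setting:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue and look up its ARN.
dlq_url = sqs.create_queue(QueueName="app-queue-dlq")["QueueUrl"]          # placeholder name
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach the DLQ to the existing application queue.
# maxReceiveCount is the "Maximum Receives" setting: after 5 failed receives,
# SQS moves the message to the DLQ instead of returning it to the queue again.
app_queue_url = sqs.get_queue_url(QueueName="app-queue")["QueueUrl"]       # placeholder name
sqs.set_queue_attributes(
    QueueUrl=app_queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```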
An organization is using Amazon CloudFront to ensure that its users experience low-latency access to its web application. The organization has identified a need to encrypt all traffic between users and CloudFront, and all traffic between CloudFront and the web application.
How can these requirements be met? (Select TWO)
- A . Use AWS KMS to encrypt traffic between CloudFront and the web application.
- B . Set the Origin Protocol Policy to "HTTPS Only".
- C . Set the Origin’s HTTP Port to 443.
- D . Set the Viewer Protocol Policy to "HTTPS Only" or "Redirect HTTP to HTTPS".
- E . Enable the CloudFront option Restrict Viewer Access.
B,D
Explanation:
This solution will meet the requirements by ensuring that all traffic between users and CloudFront, and all traffic between CloudFront and the web application, are encrypted using HTTPS protocol. The Origin Protocol Policy determines how CloudFront communicates with the origin server (the web application), and setting it to “HTTPS Only” will force CloudFront to use HTTPS for every request to the origin server. The Viewer Protocol Policy determines how CloudFront responds to HTTP or HTTPS requests from users, and setting it to “HTTPS Only” or “Redirect HTTP to HTTPS” will force CloudFront to use HTTPS for every response to users.
Option A is not optimal because it will use AWS KMS to encrypt traffic between CloudFront and the web application, which is not necessary or supported by CloudFront.
Option C is not optimal because it will set the origin’s HTTP port to 443, which is incorrect as port 443 is used for HTTPS protocol, not HTTP protocol.
Option E is not optimal because it will enable the CloudFront option Restrict Viewer Access, which is used for controlling access to private content using signed URLs or signed cookies, not for encrypting traffic.
Reference: [Using HTTPS with CloudFront], [Restricting Access to Amazon S3 Content by Using an Origin Access Identity]
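The two settings map to fields in the CloudFront distribution configuration. The fragment below is only an illustrative sketch of the relevant keys (the origin ID and domain name are placeholders), not a complete DistributionConfig:

```python
# Relevant pieces of a CloudFront DistributionConfig (illustrative fragment only).
distribution_config_fragment = {
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "web-app-origin",                  # placeholder origin ID
                "DomainName": "app.example.com",         # placeholder origin domain
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    # CloudFront-to-origin traffic is forced onto HTTPS.
                    "OriginProtocolPolicy": "https-only",
                },
            }
        ],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "web-app-origin",
        # Viewer-to-CloudFront traffic must use HTTPS (or be redirected to it).
        "ViewerProtocolPolicy": "redirect-to-https",     # or "https-only"
    },
}
print(distribution_config_fragment)
```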
A company developed an API application on AWS by using Amazon CloudFront, Amazon API Gateway, and AWS Lambda. The API has a minimum of four requests every second. A developer notices that many API users run the same query by using the POST method. The developer wants to cache the POST request to optimize the API resources.
Which solution will meet these requirements?
- A . Configure the CloudFront cache. Update the application to return cached content based upon the default request headers.
- B . Override the cache method in the selected stage of API Gateway. Select the POST method.
- C . Save the latest request response in Lambda /tmp directory. Update the Lambda function to check the /tmp directory.
- D . Save the latest request in AWS Systems Manager Parameter Store. Modify the Lambda function to take the latest request response from Parameter Store.
B
Explanation:
Amazon API Gateway lets you enable caching for a deployed stage to reduce the number of calls made to the backend and improve response latency. Stage-level cache settings can be overridden for individual methods, so the developer can enable caching specifically for the POST method in the selected stage. Repeated identical POST queries are then served from the cache instead of invoking the Lambda function each time. Therefore, option B is correct.
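A rough boto3 sketch of this, assuming the method-level patch path follows the API Gateway stage patch-operation convention ("/{escaped resource path}/{HTTP method}/caching/enabled"); the REST API ID, stage name, and resource path are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# Enable the stage cache, then override caching for the POST method on /search.
apigw.update_stage(
    restApiId="a1b2c3d4e5",          # placeholder REST API ID
    stageName="prod",                # placeholder stage name
    patchOperations=[
        # Turn on the cache cluster for the stage.
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        # Override the method-level setting so POST /search responses are cached.
        {"op": "replace", "path": "/~1search/POST/caching/enabled", "value": "true"},
    ],
)
```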
A company’s development team uses an SSH key pair to copy files among a large fleet of development servers. The SSH key pair has been compromised. A developer has generated a replacement key pair. The company has deployed the AWS Systems Manager Agent (SSM Agent) and the Amazon CloudWatch agent on all of the development servers.
The developer needs a solution to distribute the new key to all the Linux servers.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Create an Amazon S3 bucket. Store the public key in the root of the S3 bucket. Log in to each server. Copy the private key from the S3 bucket to the appropriate directory of each server.
- B . Create an Amazon S3 bucket. Store the public key in the root of the S3 bucket. Create a script to copy the private key from the S3 bucket to the appropriate directory of each server. Use Systems Manager Run Command to run the script on all Linux servers.
- C . Upload the new SSH key pair to AWS Secrets Manager as a new secret. Grant the team members permissions to download the secret into the appropriate directory of each server.
- D . Upload the new SSH key pair to AWS Systems Manager Parameter Store. Make each key a new parameter. Grant the team members permissions to download the parameters into the appropriate directory of each server.
B
Explanation:
The company already has SSM Agent deployed on all servers, which enables centralized, scalable fleet operations without logging in to each instance. The most operationally efficient solution is to automate key distribution using Systems Manager Run Command, executing a script across the entire fleet (or a targeted subset) in one action.
Option B uses S3 as a simple distribution source and Run Command to perform the update at scale. The developer can store the required key material in a controlled S3 location and write a script that places the key in the correct filesystem path (with correct permissions/ownership), updates authorized_keys if needed, and restarts any services if required. Run Command can target instances by tags, resource groups, or instance IDs and provides execution logging and status.
Option A is not operationally efficient because it requires logging into each server manually.
Options C and D push the distribution burden to humans (“grant team members permissions to download”), which is error-prone, slow, and not scalable. Also, distributing private keys broadly to individuals increases risk.
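A minimal sketch of the Run Command invocation; the S3 bucket name, tag values, user, and file paths are placeholder assumptions:

```python
import boto3

ssm = boto3.client("ssm")

# Push the replacement public key to every tagged Linux development server in one action.
response = ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["development"]}],   # placeholder tag
    DocumentName="AWS-RunShellScript",
    Comment="Distribute replacement SSH public key",
    Parameters={
        "commands": [
            "aws s3 cp s3://example-key-bucket/new_key.pub /tmp/new_key.pub",
            "cat /tmp/new_key.pub >> /home/ec2-user/.ssh/authorized_keys",
            "chmod 600 /home/ec2-user/.ssh/authorized_keys",
        ]
    },
)
print(response["Command"]["CommandId"])   # use this ID to check per-instance status
```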
A financial company must store original customer records for 10 years for legal reasons. A complete record contains personally identifiable information (PII). According to local regulations, PII is available to only certain people in the company and must not be shared with third parties. The company needs to make the records available to third-party organizations for statistical analysis without sharing the PII.
A developer wants to store the original immutable record in Amazon S3. Depending on who accesses the S3 document, the document should be returned as is or with all the PII removed. The developer has written an AWS Lambda function to remove the PII from the document. The function is named removePii.
What should the developer do so that the company can meet the PII requirements while maintaining only one copy of the document?
- A . Set up an S3 event notification that invokes the removePii function when an S3 GET request is made. Call Amazon S3 by using a GET request to access the object without PII.
- B . Set up an S3 event notification that invokes the removePii function when an S3 PUT request is made. Call Amazon S3 by using a PUT request to access the object without PII.
- C . Create an S3 Object Lambda access point from the S3 console. Select the removePii function. Use S3 Access Points to access the object without PII.
- D . Create an S3 access point from the S3 console. Use the access point name to call the GetObjectLegalHold S3 API function. Pass in the removePii function name to access the object without PII.
C
Explanation:
S3 Object Lambda allows you to add your own code to process data retrieved from S3 before returning it to an application. You can use an AWS Lambda function to modify the data, such as removing PII, redacting confidential information, or resizing images. You can create an S3 Object Lambda access point and associate it with your Lambda function. Then, you can use the access point to request objects from S3 and get the modified data back. This way, you can maintain only one copy of the original document in S3 and apply different transformations depending on who accesses it.
Reference: Using AWS Lambda with Amazon S3
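A minimal sketch of reading through the Object Lambda access point; the account ID, Region, access point name, and object key are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Requests sent through the Object Lambda access point invoke removePii, so the
# returned body has the PII stripped. Authorized users who read the bucket (or a
# regular access point) directly still receive the original, unmodified document.
object_lambda_arn = (
    "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/remove-pii-olap"
)
response = s3.get_object(Bucket=object_lambda_arn, Key="records/customer-0001.json")
print(response["Body"].read())
```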
A developer is using AWS Step Functions to automate a workflow. The workflow defines each step as an AWS Lambda function task. The developer notices that runs of the Step Functions state machine fail in the GetResource task with either an IllegalArgumentException error or a TooManyRequestsException error.
The developer wants the state machine to stop running when the state machine encounters an IllegalArgumentException error. The state machine needs to retry the GetResource task one additional time after 10 seconds if the state machine encounters a TooManyRequestsException error. If the second attempt fails, the developer wants the state machine to stop running.
How can the developer implement the Lambda retry functionality without adding unnecessary complexity to the state machine?
- A . Add a Delay task after the GetResource task. Add a catcher to the GetResource task. Configure the catcher with an error type of TooManyRequestsException. Configure the next step to be the Delay task. Configure the Delay task to wait for an interval of 10 seconds. Configure the next step to be the GetResource task.
- B . Add a catcher to the GetResource task. Configure the catcher with an error type of TooManyRequestsException, an interval of 10 seconds, and a maximum attempts value of 1. Configure the next step to be the GetResource task.
- C . Add a retrier to the GetResource task. Configure the retrier with an error type of TooManyRequestsException, an interval of 10 seconds, and a maximum attempts value of 1.
- D . Duplicate the GetResource task. Rename the new GetResource task to TryAgain. Add a catcher to the original GetResource task. Configure the catcher with an error type of TooManyRequestsException. Configure the next step to be TryAgain.
C
Explanation:
Step Functions Retriers: Retriers provide a built-in way to gracefully handle transient errors within State Machines.
Here’s how to use them:
Directly attach a retrier to the problematic ‘GetResource’ task.
Configure the retrier:
ErrorEquals: set this to ["TooManyRequestsException"] to target only that error.
IntervalSeconds: set to 10 for the desired retry delay.
MaxAttempts: set to 1, because only one additional attempt is wanted.
Error handling:
Upon TooManyRequestsException, the retrier runs the task again after 10 seconds.
If the second attempt also fails, no retry attempts remain, so the run stops with an error.
IllegalArgumentException is not listed in the retrier, so it propagates immediately and the state machine stops running, as required.
Reference: Error Handling in Step Functions: https://docs.aws.amazon.com/step-functions/latest/dg/concepts-error-handling.html
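A sketch of the GetResource Task state in Amazon States Language (ASL), expressed here as a Python dict; the Lambda ARN and the next state name are placeholders:

```python
import json

# GetResource Task state with a retrier that matches the requirements.
get_resource_state = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:111122223333:function:GetResource",  # placeholder
    "Retry": [
        {
            "ErrorEquals": ["TooManyRequestsException"],
            "IntervalSeconds": 10,   # wait 10 seconds before retrying
            "MaxAttempts": 1,        # exactly one additional attempt
        }
    ],
    # No retrier or catcher covers IllegalArgumentException, so that error
    # (and a second TooManyRequestsException) fails the state machine run.
    "Next": "ProcessResource",       # placeholder next state
}
print(json.dumps({"GetResource": get_resource_state}, indent=2))
```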
A team of developers is using an AWS CodePipeline pipeline as a continuous integration and continuous delivery (CI/CD) mechanism for a web application. A developer has written unit tests to programmatically test the functionality of the application code. The unit tests produce a test report that shows the results of each individual check. The developer now wants to run these tests automatically during the CI/CD process.
Which solution will meet these requirements?
- A . Write a Git pre-commit hook that runs the tests before every commit. Ensure that each developer who is working on the project has the pre-commit hook installed locally. Review the test report and resolve any issues before pushing changes to AWS CodeCommit.
- B . Add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage after the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues.
- C . Add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage before the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues.
- D . Add a new stage to the pipeline. Use Jenkins as the provider. Configure CodePipeline to use Jenkins to run the unit tests. Write a Jenkinsfile that fails the stage if any test does not pass. Use the test report plugin for Jenkins to integrate the report with the Jenkins dashboard. View the test results in Jenkins. Resolve any issues.
C
Explanation:
The solution that will meet the requirements is to add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage before the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues. This way, the developer can run the unit tests automatically during the CI/CD process and catch any bugs before deploying to the test environment. The developer can also use the test reports feature of CodeBuild to view and analyze the test results in a graphical interface. The other options either involve running the tests manually, running them after deployment, or using a different provider that requires additional configuration and integration.
Reference: Test reports for CodeBuild
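An illustrative sketch of such a buildspec, written out from Python; the test command, report group name, and paths are assumptions for the example. A failing test makes the test command return a non-zero exit code, which fails the CodeBuild stage and stops the pipeline before deployment:

```python
# Write an example buildspec.yml with a reports section for CodeBuild test reports.
buildspec = """
version: 0.2
phases:
  build:
    commands:
      - pip install -r requirements.txt
      - python -m pytest --junitxml=reports/unit-tests.xml
reports:
  unit-test-reports:            # report group name (assumed for this sketch)
    files:
      - unit-tests.xml
    base-directory: reports
    file-format: JUNITXML
"""

with open("buildspec.yml", "w") as f:
    f.write(buildspec)
```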
A cloud-based video surveillance company is developing an application that analyzes video files.
After the application analyzes the files, the company can discard the files.
The company stores the files in an Amazon S3 bucket. The files are 1 GB in size on average. No file is larger than 2 GB. An AWS Lambda function will run one time for each video file that is processed. The processing is very I/O intensive, and the application must read each file multiple times.
Which solution will meet these requirements in the MOST performance-optimized way?
- A . Attach an Amazon EBS volume that is larger than 1 GB to the Lambda function. Copy the files from the S3 bucket to the EBS volume.
- B . Attach an Elastic Network Adapter (ENA) to the Lambda function. Use the ENA to read the video files from the S3 bucket.
- C . Increase the ephemeral storage size to 2 GB. Copy the files from the S3 bucket to the /tmp directory of the Lambda function.
- D . Configure the Lambda function code to read the video files directly from the S3 bucket.
C
Explanation:
The workload is explicitly very I/O intensive and requires reading the same file multiple times. Reading repeatedly from S3 across the network will introduce latency and repeated network I/O. For best performance, the function should download the file once to fast local storage and perform all repeated reads locally.
AWS Lambda provides ephemeral storage mounted at /tmp (512 MB by default, configurable up to 10,240 MB). Because the files are at most 2 GB, the developer can configure the Lambda function’s ephemeral storage to 2 GB and copy each S3 object into /tmp before processing. This converts repeated network reads into repeated local filesystem reads, which is significantly faster and more consistent for I/O-heavy workloads.
Option D (read directly from S3) is the least optimized because each pass over the file requires additional network I/O and S3 GET requests.
Option A is not feasible: Lambda cannot directly attach and manage an EBS volume like an EC2 instance can.
Option B is not applicable: ENA is an EC2 networking feature; Lambda networking is managed by the service and does not allow attaching ENAs for performance tuning in that way.
Because the files are disposable after processing, using ephemeral /tmp storage is also cost-effective and operationally simple, and it is aligned with the “process once per file” pattern.
Therefore, increase Lambda ephemeral storage and copy the video into /tmp for repeated reads.
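A minimal sketch of both pieces; the function name, bucket, object key, and file size handling are placeholders for illustration:

```python
import boto3

# Raise the function's ephemeral storage so a video of up to 2 GB fits in /tmp.
lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="video-analyzer",       # placeholder function name
    EphemeralStorage={"Size": 2048},     # size in MB (512 default, up to 10240)
)

# Inside the handler: copy the object to /tmp once, then read it as many
# times as the analysis needs without further S3 network I/O.
def handler(event, context):
    s3 = boto3.client("s3")
    local_path = "/tmp/video.mp4"
    s3.download_file("example-video-bucket", event["object_key"], local_path)  # placeholder bucket
    with open(local_path, "rb") as f:
        data = f.read()                  # repeated local reads are fast
    return {"bytes_read": len(data)}
```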
A company hosts a batch processing application on AWS Elastic Beanstalk with instances that run the most recent version of Amazon Linux. The application sorts and processes large datasets. In recent weeks, the application’s performance has decreased significantly during a peak period for traffic. A developer suspects that the application issues are related to the memory usage. The developer checks the Elastic Beanstalk console and notices that memory usage is not being tracked.
How should the developer gather more information about the application performance issues?
- A . Configure the Amazon CloudWatch agent to push logs to Amazon CloudWatch Logs by using port 443.
- B . Configure the Elastic Beanstalk .ebextensions directory to track the memory usage of the instances.
- C . Configure the Amazon CloudWatch agent to track the memory usage of the instances.
- D . Configure an Amazon CloudWatch dashboard to track the memory usage of the instances.
C
Explanation:
To monitor memory usage in Amazon Elastic Beanstalk environments, it’s important to understand that default Elastic Beanstalk monitoring capabilities in Amazon CloudWatch do not track memory usage, as memory metrics are not collected by default. Instead, the Amazon CloudWatch agent must be configured to collect memory usage metrics.
Why Option C is Correct:
The Amazon CloudWatch agent can be installed and configured to monitor system-level metrics such as memory and disk utilization.
To enable memory tracking, developers need to install the CloudWatch agent on the Amazon Elastic Compute Cloud (EC2) instances associated with the Elastic Beanstalk environment.
After installation, the agent can be configured to collect memory metrics, which can then be sent to CloudWatch for further analysis.
How to Implement This Solution:
Install the CloudWatch agent: Use .ebextensions or AWS Systems Manager to install and configure the CloudWatch agent on the EC2 instances running in the Elastic Beanstalk environment.
Modify the CloudWatch agent configuration: Create a config.json file that specifies memory usage tracking and other desired metrics.
Enable metrics reporting: The CloudWatch agent pushes the metrics to CloudWatch, where they can be monitored.
Why Other Options are Incorrect:
Option A: Configuring the agent to push logs is not sufficient to track memory metrics. This option addresses logging but not system-level metrics like memory usage.
Option B: The .ebextensions directory is used to customize Elastic Beanstalk environments but does not directly track memory metrics without additional configuration of the CloudWatch agent.
Option D: Configuring a CloudWatch dashboard will only visualize the metrics that are already being collected. It will not enable memory usage tracking.
AWS Documentation Reference: Amazon CloudWatch Agent Overview
Elastic Beanstalk Customization Using .ebextensions
Monitoring Custom Metrics
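A sketch of a CloudWatch agent configuration that adds memory metrics; the namespace, collection interval, and output file name are illustrative assumptions. The file would be distributed to the instances (for example via .ebextensions) and loaded with the agent's control script:

```python
import json

# Minimal agent configuration that collects memory utilization.
agent_config = {
    "metrics": {
        "namespace": "ElasticBeanstalk/Custom",          # assumed namespace
        "metrics_collected": {
            "mem": {
                "measurement": ["mem_used_percent"],     # memory usage metric
                "metrics_collection_interval": 60,       # seconds (assumed)
            }
        },
    }
}

with open("amazon-cloudwatch-agent.json", "w") as f:
    json.dump(agent_config, f, indent=2)
```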
