Practice Free DVA-C02 Exam Online Questions
An application stores user data in Amazon S3 buckets in multiple AWS Regions. A developer needs to implement a solution that analyzes the user data in the S3 buckets to find sensitive information. The analysis findings from all the S3 buckets must be available in the eu-west-2 Region.
Which solution will meet these requirements with the LEAST development effort?
- A . Create an AWS Lambda function to generate findings. Program the Lambda function to send the findings to another S3 bucket in eu-west-2.
- B . Configure Amazon Macie to generate findings. Use Amazon EventBridge to create rules that copy the findings to eu-west-2.
- C . Configure Amazon Inspector to generate findings. Use Amazon EventBridge to create rules that copy the findings to eu-west-2.
- D . Configure Amazon Macie to generate findings and to publish the findings to AWS CloudTrail. Use a CloudTrail trail to copy the results to eu-west-2.
A developer maintains a critical business application that uses Amazon DynamoDB as the primary data store. The DynamoDB table contains millions of documents and receives 30-60 requests each minute. The developer needs to perform processing in near-real time on the documents when they are added or updated in the DynamoDB table.
How can the developer implement this feature with the LEAST amount of change to the existing application code?
- A . Set up a cron job on an Amazon EC2 instance. Run a script every hour to query the table for changes and process the documents.
- B . Enable a DynamoDB stream on the table. Invoke an AWS Lambda function to process the documents.
- C . Update the application to send a PutEvents request to Amazon EventBridge. Create an EventBridge rule to invoke an AWS Lambda function to process the documents.
- D . Update the application to synchronously process the documents directly after the DynamoDB write.
B
Explanation:
DynamoDB Streams: Capture near real-time changes to DynamoDB tables, triggering downstream actions.
Lambda for Processing: Lambda functions provide a serverless way to execute code in response to events like DynamoDB Stream updates.
Minimal Code Changes: This solution requires the least modifications to the existing application.
Reference:
DynamoDB Streams: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
AWS Lambda: https://aws.amazon.com/lambda/
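For illustration, a minimal sketch of a Python Lambda handler invoked by a DynamoDB stream event source mapping; the attribute name and processing logic are hypothetical, and the stream is assumed to include the new item image (for example, the NEW_AND_OLD_IMAGES view type):

```python
# Minimal sketch: Lambda handler for DynamoDB stream records (hypothetical attribute names).
def lambda_handler(event, context):
    for record in event.get("Records", []):
        event_name = record["eventName"]  # INSERT, MODIFY, or REMOVE
        if event_name in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            # Stream records use DynamoDB JSON, e.g. {"S": "some-string"}.
            doc_id = new_image.get("documentId", {}).get("S")  # hypothetical attribute
            print(f"Processing {event_name} for document {doc_id}")
            # ... near-real-time processing logic goes here ...
    return {"processed": len(event.get("Records", []))}
```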
A company has a web application that is hosted on Amazon EC2 instances. The EC2 instances are configured to stream logs to Amazon CloudWatch Logs. The company needs to receive an Amazon Simple Notification Service (Amazon SNS) notification when the number of application error messages exceeds a defined threshold within a 5-minute period.
Which solution will meet these requirements?
- A . Rewrite the application code to stream application logs to Amazon SNS. Configure an SNS topic to send a notification when the number of errors exceeds the defined threshold within a 5-minute period.
- B . Configure a subscription filter on the CloudWatch Logs log group. Configure the filter to send an SNS notification when the number of errors exceeds the defined threshold within a 5-minute period.
- C . Install and configure the Amazon Inspector agent on the EC2 instances to monitor for errors. Configure Amazon Inspector to send an SNS notification when the number of errors exceeds the defined threshold within a 5-minute period.
- D . Create a CloudWatch metric filter to match the application error pattern in the log data. Set up a CloudWatch alarm based on the new custom metric. Configure the alarm to send an SNS notification when the number of errors exceeds the defined threshold within a 5-minute period.
D
Explanation:
CloudWatch for Log Analysis: CloudWatch is the best fit here because logs are already centralized.
Here’s the process:
Metric Filter: Create a metric filter on the CloudWatch Logs log group. Design a pattern to specifically identify application error messages.
Custom Metric: This filter generates a new custom CloudWatch metric (e.g., ApplicationErrors). This metric tracks the error count.
CloudWatch Alarm: Create an alarm on the ApplicationErrors metric. Configure the alarm with your desired threshold and a 5-minute evaluation period.
SNS Action: Set the alarm to trigger an SNS notification when it enters the alarm state.
Reference:
CloudWatch Metric Filters: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html
CloudWatch Alarms: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
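As a rough sketch of those steps with boto3; the log group name, filter pattern, namespace, threshold, and SNS topic ARN below are hypothetical placeholders:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# 1. Metric filter: turn matching log events into a custom metric (hypothetical names).
logs.put_metric_filter(
    logGroupName="/app/web",                       # hypothetical log group
    filterName="ApplicationErrorFilter",
    filterPattern="ERROR",                         # match application error lines
    metricTransformations=[{
        "metricName": "ApplicationErrors",
        "metricNamespace": "WebApp",
        "metricValue": "1",
    }],
)

# 2. Alarm: evaluate the custom metric over a 5-minute period and notify an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="ApplicationErrorsHigh",
    Namespace="WebApp",
    MetricName="ApplicationErrors",
    Statistic="Sum",
    Period=300,                                    # 5 minutes
    EvaluationPeriods=1,
    Threshold=10,                                  # hypothetical threshold
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:eu-west-2:123456789012:app-errors"],  # hypothetical topic
)
```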
A developer needs to migrate an online retail application to AWS to handle an anticipated increase in traffic. The application currently runs on two servers: one server for the web application and another server for the database. The web server renders webpages and manages session state in memory.
The database server hosts a MySQL database that contains order details. When traffic to the application is heavy, the memory usage for the web server approaches 100% and the application slows down considerably.
The developer has found that most of the memory increase and performance decrease is related to the load of managing additional user sessions. For the web server migration, the developer will use Amazon EC2 instances with an Auto Scaling group behind an Application Load Balancer.
Which additional set of changes should the developer make to the application to improve the application’s performance?
- A . Use an EC2 instance to host the MySQL database. Store the session data and the application data in the MySQL database.
- B . Use Amazon ElastiCache for Memcached to store and manage the session data. Use an Amazon RDS for MySQL DB instance to store the application data.
- C . Use Amazon ElastiCache for Memcached to store and manage the session data and the application data.
- D . Use the EC2 instance store to manage the session data. Use an Amazon RDS for MySQL DB instance to store the application data.
B
Explanation:
Using Amazon ElastiCache for Memcached to store and manage the session data will reduce the memory load and improve the performance of the web server. Using Amazon RDS for MySQL DB instance to store the application data will provide a scalable, reliable, and managed database service.
Option A is not optimal because it places both session state and application data in a self-managed MySQL database on a single EC2 instance, which adds operational overhead and forces a relational database to handle high-frequency session reads and writes instead of an in-memory cache.
Option C is not optimal because it does not provide durable, persistent storage for the application data (the order details).
Option D is not optimal because the EC2 instance store is ephemeral and local to each instance, so it does not provide high availability or durability for the session data across the Auto Scaling group.
Reference: Amazon ElastiCache, Amazon RDS
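A minimal sketch of externalizing session state, assuming a pymemcache client and a hypothetical ElastiCache for Memcached endpoint:

```python
import json
from pymemcache.client.base import Client

# Hypothetical ElastiCache for Memcached endpoint.
cache = Client(("my-sessions.abc123.cfg.euw2.cache.amazonaws.com", 11211))

def save_session(session_id, data, ttl_seconds=1800):
    # Store session data in the shared cache instead of web-server memory.
    cache.set(session_id, json.dumps(data), expire=ttl_seconds)

def load_session(session_id):
    raw = cache.get(session_id)
    return json.loads(raw) if raw else None
```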
A company is planning to securely manage one-time fixed license keys in AWS. The company’s development team needs to access the license keys in automation scripts that run in Amazon EC2 instances and in AWS CloudFormation stacks.
Which solution will meet these requirements MOST cost-effectively?
- A . Amazon S3 with encrypted files prefixed with “config”
- B . AWS Secrets Manager secrets with a tag that is named SecretString
- C . AWS Systems Manager Parameter Store SecureString parameters
- D . CloudFormation NoEcho parameters
C
Explanation:
AWS Systems Manager Parameter Store is a service that provides secure, hierarchical storage for configuration data and secrets. Parameter Store supports SecureString parameters, which are encrypted using AWS Key Management Service (AWS KMS) keys. SecureString parameters can be used to store license keys in AWS and retrieve them securely from automation scripts that run in EC2 instances or CloudFormation stacks. Parameter Store is the most cost-effective option because standard parameters are stored at no additional charge, unlike AWS Secrets Manager, which charges per secret per month.
Reference: Working with Systems Manager parameters
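A minimal sketch of reading such a key from an automation script with boto3; the parameter name is a hypothetical placeholder:

```python
import boto3

ssm = boto3.client("ssm")

# Retrieve a SecureString parameter and decrypt it with the associated KMS key.
response = ssm.get_parameter(
    Name="/licenses/product-x/key",   # hypothetical parameter name
    WithDecryption=True,
)
license_key = response["Parameter"]["Value"]
```

In a CloudFormation template, the same value can typically be pulled in with an ssm-secure dynamic reference (supported only for certain resource properties), which keeps the key out of the template itself.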
A company recently deployed an AWS Lambda function. A developer notices an increase in the function throttle metrics in Amazon CloudWatch.
What are the MOST operationally efficient solutions to reduce the function throttling? (Select TWO.)
- A . Migrate the function to Amazon EKS.
- B . Increase the maximum age of events in Lambda.
- C . Increase the function’s reserved concurrency.
- D . Add the lambda:GetFunctionConcurrency action to the execution role.
- E . Request a service quota change for increased concurrency.
C,E
Explanation:
Lambda throttling occurs when the number of concurrent executions exceeds the available concurrency. This can happen due to account-level concurrency limits, function-level reserved concurrency limits, or sudden traffic spikes.
The most operationally efficient ways to reduce throttling are to increase available concurrency:
Option C: Increasing the function’s reserved concurrency can reduce throttling when the function is being constrained by a too-low reserved concurrency value. Reserved concurrency guarantees a fixed amount of concurrency for the function (and also caps it). If the cap is currently too low, raising it allows more parallel executions and reduces throttles.
Option E: If throttling is due to the account concurrency quota being reached (or burst scaling limits in some patterns), the correct fix is to request a service quota increase. This increases the total available concurrency capacity for the account (and/or specific dimensions), allowing more simultaneous executions across functions.
Why the others are not correct:
A (migrate to EKS) is high effort and not operationally efficient for solving Lambda throttling.
B (maximum age of events) applies to asynchronous event retries/queues; it does not reduce throttling, it only changes how long events remain eligible for processing.
D is irrelevant: adding lambda:GetFunctionConcurrency to the execution role only allows reading settings and does not change throttling behavior.
Therefore, the best operational fixes are to increase reserved concurrency and request a concurrency quota increase.
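For example, reserved concurrency can be raised through the Lambda API; a sketch with boto3 follows (the function name and value are hypothetical), while the account-level concurrency quota increase is requested through Service Quotas or AWS Support:

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise the function's reserved concurrency (hypothetical function name and value).
lambda_client.put_function_concurrency(
    FunctionName="order-processor",
    ReservedConcurrentExecutions=200,
)

# Inspect the current setting to confirm the change.
print(lambda_client.get_function_concurrency(FunctionName="order-processor"))
```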
A developer is building a new application on AWS. The application uses an AWS Lambda function that retrieves information from an Amazon DynamoDB table. The developer hardcoded the DynamoDB table name into the Lambda function code. The table name might change over time. The developer does not want to modify the Lambda code if the table name changes.
Which solution will meet these requirements MOST efficiently?
- A . Create a Lambda environment variable to store the table name. Use the standard method for the programming language to retrieve the variable.
- B . Store the table name in a file. Store the file in the /tmp folder. Use the SDK for the programming language to retrieve the table name.
- C . Create a file to store the table name. Zip the file and upload the file to the Lambda layer. Use the SDK for the programming language to retrieve the table name.
- D . Create a global variable that is outside the handler in the Lambda function to store the table name.
A
Explanation:
The simplest and most efficient way to avoid hardcoding configuration such as a DynamoDB table name is to use Lambda environment variables. Environment variables are designed for runtime configuration and allow changing values without updating application code logic. The developer can store the table name in an environment variable (for example, TABLE_NAME) and read it from the runtime environment using the standard language method (such as os.environ in Python, process.env in Node.js, etc.).
This approach has very low operational overhead: updating the table name becomes a configuration change (in the Lambda console, IaC templates, or CI/CD pipeline) rather than a code change. It also keeps deployment packages stable and avoids unnecessary dependencies.
Option B is not suitable because /tmp is ephemeral storage that can be cleared between invocations and is not intended for configuration management.
Option C is heavier than necessary. Lambda layers are great for shared libraries and dependencies,
but using a layer to store a single changing table name is awkward and forces a layer update and function reconfiguration when the value changes.
Option D does not solve the problem because a global variable is still set in the code. If the table name changes, the code must still be modified and redeployed.
Therefore, storing the table name in a Lambda environment variable is the most efficient solution.
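A minimal sketch of this pattern in Python; the TABLE_NAME variable name and key attribute are hypothetical choices configured on the function:

```python
import os
import boto3

# Read the table name from the function's configuration, not from hardcoded values.
TABLE_NAME = os.environ["TABLE_NAME"]   # hypothetical environment variable name
table = boto3.resource("dynamodb").Table(TABLE_NAME)

def lambda_handler(event, context):
    # Retrieve an item; the key attribute name is a hypothetical example.
    response = table.get_item(Key={"id": event["id"]})
    return response.get("Item")
```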
Users are reporting errors in an application. The application consists of several microservices that are deployed on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
Which combination of steps should a developer take to fix the errors? (Select TWO.)
- A . Deploy AWS X-Ray as a sidecar container to the microservices. Update the task role policy to allow access to the X-Ray API.
- B . Deploy AWS X-Ray as a daemon set to the Fargate cluster. Update the service role policy to allow access to the X-Ray API.
- C . Instrument the application by using the AWS X-Ray SDK. Update the application to use the PutTraceSegments API call to communicate with the X-Ray API.
- D . Instrument the application by using the AWS X-Ray SDK. Update the application to communicate with the X-Ray daemon.
- E . Instrument the ECS task to send the stdout and stderr output to Amazon CloudWatch Logs. Update the task role policy to allow the cloudwatch PutLogs action.
A,E
Explanation:
The combination of steps that the developer should take to fix the errors is to deploy AWS X-Ray as a sidecar container to the microservices and instrument the ECS task to send the stdout and stderr output to Amazon CloudWatch Logs. This way, the developer can use AWS X-Ray to analyze and debug the performance of the microservices and identify any issues or bottlenecks. The developer can also use CloudWatch Logs to monitor and troubleshoot the logs from the ECS task and detect any errors or exceptions. The other options either involve using AWS X-Ray as a daemon set, which is not supported by Fargate, or using the PutTraceSegments API call, which is not necessary when using a sidecar container.
Reference: Using AWS X-Ray with Amazon ECS
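A rough sketch of what such a Fargate task definition could look like when registered with boto3; the container names, image URIs, log group, and role ARNs are hypothetical, and the X-Ray daemon sidecar listens on UDP port 2000:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical Fargate task: application container plus X-Ray daemon sidecar,
# with stdout/stderr shipped to CloudWatch Logs via the awslogs driver.
ecs.register_task_definition(
    family="orders-service",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # hypothetical
    taskRoleArn="arn:aws:iam::123456789012:role/ordersTaskRole",             # hypothetical
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/orders:latest",  # hypothetical
            "essential": True,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/orders-service",   # hypothetical log group
                    "awslogs-region": "eu-west-2",
                    "awslogs-stream-prefix": "app",
                },
            },
        },
        {
            "name": "xray-daemon",
            "image": "public.ecr.aws/xray/aws-xray-daemon:latest",
            "essential": False,
            "portMappings": [{"containerPort": 2000, "protocol": "udp"}],
        },
    ],
)
```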
A company runs a new application on AWS Elastic Beanstalk. The company needs to deploy updates to the application. The updates must not cause any downtime for application users. The deployment must forward a specified percentage of incoming client traffic to a new application version during an evaluation period.
Which deployment type will meet these requirements?
- A . Rolling
- B . Traffic-splitting
- C . In-place
- D . Immutable
B
Explanation:
AWS Elastic Beanstalk supports several deployment policies, and in this case, the requirement is to forward a specific percentage of traffic to the new version without causing downtime. The Traffic-splitting deployment policy is the most appropriate choice.
Traffic-splitting Deployment: This deployment method allows you to gradually shift a specified percentage of incoming traffic from the old environment version to the new one. During the evaluation period, if any issues are detected, the traffic can be redirected back to the old version.
No Downtime: This method ensures no downtime since both versions of the application run concurrently, and traffic is split between them.
Alternatives:
Rolling deployments (Option A): These gradually replace instances but may result in partial downtime if some instances fail during deployment.
In-place deployments (Option C): In-place deployments replace instances without creating new ones, which can lead to downtime.
Immutable deployments (Option D): While this ensures no downtime by creating entirely new instances, it doesn’t provide traffic splitting during the evaluation phase.
Reference: Elastic Beanstalk Deployment Policies
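A rough sketch of switching an environment to traffic-splitting deployments with boto3; the environment name, split percentage, and evaluation time below are hypothetical values:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Configure the environment to use traffic-splitting deployments (hypothetical values).
eb.update_environment(
    EnvironmentName="retail-app-prod",
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "TrafficSplitting",
        },
        {
            "Namespace": "aws:elasticbeanstalk:trafficsplitting",
            "OptionName": "NewVersionPercent",
            "Value": "10",            # send 10% of client traffic to the new version
        },
        {
            "Namespace": "aws:elasticbeanstalk:trafficsplitting",
            "OptionName": "EvaluationTime",
            "Value": "15",            # evaluation period in minutes
        },
    ],
)
```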
A developer at a company needs to create a small application that makes the same API call once each day at a designated time. The company does not have infrastructure in the AWS Cloud yet, but the company wants to implement this functionality on AWS.
Which solution meets these requirements in the MOST operationally efficient manner?
- A . Use a Kubernetes cron job that runs on Amazon Elastic Kubernetes Service (Amazon EKS)
- B . Use an Amazon Linux crontab scheduled job that runs on Amazon EC2
- C . Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event.
- D . Use an AWS Batch job that is submitted to an AWS Batch job queue.
C
Explanation:
This solution meets the requirements in the most operationally efficient manner because it does not require any infrastructure provisioning or management. The developer can create a Lambda function that makes the API call and configure an EventBridge rule that triggers the function once a day at a designated time. This is a serverless solution that scales automatically and only charges for the execution time of the function.
Reference: Using AWS Lambda with Amazon EventBridge, Schedule Expressions for Rules
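A rough sketch of the wiring with boto3; the rule name, schedule, and function name are hypothetical, and the Lambda function needs a resource-based permission allowing events.amazonaws.com to invoke it:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:eu-west-2:123456789012:function:daily-api-call"  # hypothetical

# Schedule the rule for 09:00 UTC every day (hypothetical time).
rule_arn = events.put_rule(
    Name="daily-api-call-schedule",
    ScheduleExpression="cron(0 9 * * ? *)",
    State="ENABLED",
)["RuleArn"]

# Allow EventBridge to invoke the function, then attach it as the rule target.
lambda_client.add_permission(
    FunctionName="daily-api-call",
    StatementId="allow-eventbridge-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
events.put_targets(
    Rule="daily-api-call-schedule",
    Targets=[{"Id": "daily-api-call-target", "Arn": function_arn}],
)
```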
