Practice Free DVA-C02 Exam Online Questions
A company is creating a new application that gives users the ability to upload and share short video files. The average size of the video files is 10 MB. After a user uploads a file, a message needs to be placed into an Amazon Simple Queue Service (Amazon SQS) queue so the file can be processed. The files need to be accessible for processing within 5 minutes.
Which solution will meet these requirements MOST cost-effectively?
- A . Write the files to Amazon S3 Glacier Deep Archive. Add the S3 location of the files to the SQS queue.
- B . Write the files to Amazon S3 Standard. Add the S3 location of the files to the SQS queue.
- C . Write the files to an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD volume. Add the EBS location of the files to the SQS queue.
- D . Write messages that contain the contents of the uploaded files to the SQS queue.
B
Explanation:
Why Option B is Correct:
Amazon S3 Standard provides immediate access to files and is cost-effective for files that need to be accessed within 5 minutes.
By adding the S3 location to the SQS queue, you avoid transferring large files directly, which is both more efficient and scalable.
Why Other Options are Incorrect:
Option A: S3 Glacier Deep Archive is designed for archival storage with retrieval times ranging from minutes to hours, which does not meet the 5-minute requirement.
Option C: Amazon EBS is designed for block storage attached to EC2 instances, which adds unnecessary complexity and cost.
Option D: SQS is not designed to handle large file content directly and has message size limits (256 KB).
AWS Documentation
Reference: Amazon S3 Overview
Amazon SQS Best Practices
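The pattern in option B can be sketched as follows; the bucket, queue, and object names are illustrative, and the boto3 wiring is left in comments because it needs live AWS credentials:

```python
import json

def build_s3_event_message(bucket: str, key: str) -> str:
    """Build an SQS message body that points at the uploaded object.

    Sending the S3 location instead of the 10 MB file body keeps the
    message well under the 256 KB SQS message size limit."""
    return json.dumps({"bucket": bucket, "key": key})

# Hypothetical wiring (requires AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# sqs = boto3.client("sqs")
# s3.upload_file("clip.mp4", "video-uploads", "clips/clip.mp4")
# sqs.send_message(
#     QueueUrl=queue_url,
#     MessageBody=build_s3_event_message("video-uploads", "clips/clip.mp4"),
# )
```

The consumer then reads the message, fetches the object from S3 Standard (immediately available, well within the 5-minute window), and processes it.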
A company runs an application on AWS. The application uses an AWS Lambda function that is configured with an Amazon Simple Queue Service (Amazon SQS) queue called high priority queue as the event source. A developer is updating the Lambda function with another SQS queue called low priority queue as the event source. The Lambda function must always read up to 10 simultaneous messages from the high priority queue before processing messages from the low priority queue. The Lambda function must be limited to 100 simultaneous invocations.
Which solution will meet these requirements?
- A . Set the event source mapping batch size to 10 for the high priority queue and to 90 for the low priority queue
- B . Set the delivery delay to 0 seconds for the high priority queue and to 10 seconds for the low priority queue
- C . Set the event source mapping maximum concurrency to 10 for the high priority queue and to 90 for the low priority queue
- D . Set the event source mapping batch window to 10 for the high priority queue and to 90 for the low priority queue
C
Explanation:
Lambda Concurrency: The maximum concurrency setting on an event source mapping caps the number of simultaneous Lambda invocations drawn from that specific source.
Prioritizing Queues: Setting maximum concurrency to 10 for the high priority queue guarantees that up to 10 simultaneous messages can always be read from it, while 90 for the low priority queue keeps the combined total at the 100-invocation limit.
Batching: Batch size only controls how many messages Lambda retrieves per invocation; it does not limit concurrency, so it cannot enforce either requirement.
Reference: Lambda Event Source Mappings: https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html
Lambda Concurrency: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
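As a sketch, the two event source mappings could be created like this; the function name and queue ARNs are placeholders:

```python
def event_source_mapping_args(function_name: str, queue_arn: str,
                              max_concurrency: int) -> dict:
    """Build kwargs for lambda.create_event_source_mapping.

    ScalingConfig.MaximumConcurrency caps the simultaneous invocations
    drawn from this queue; the minimum allowed value is 2."""
    return {
        "FunctionName": function_name,
        "EventSourceArn": queue_arn,
        "ScalingConfig": {"MaximumConcurrency": max_concurrency},
    }

# Hypothetical wiring (requires AWS credentials):
# import boto3
# client = boto3.client("lambda")
# client.create_event_source_mapping(**event_source_mapping_args("app-fn", HIGH_PRIORITY_ARN, 10))
# client.create_event_source_mapping(**event_source_mapping_args("app-fn", LOW_PRIORITY_ARN, 90))
```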
A company regularly receives route status updates from its delivery trucks as events in Amazon EventBridge. The company is building an API-based application in a VPC that will consume and process the events to create a delivery status dashboard. The API application must not be available by using public IP addresses because of security and compliance requirements.
How should the company send events from EventBridge to the API application?
- A . Create an AWS Lambda function that runs in the same VPC as the API application. Configure the function as an EventBridge target. Use the function to send events to the API.
- B . Create an internet-facing Application Load Balancer (ALB) in front of the API application. Associate a security group with rules that block access from all external sources except for EventBridge. Configure the ALB as an EventBridge target.
- C . Create an internet-facing Network Load Balancer (NLB) in front of the API application. Associate a security group with rules that block access from all external sources except for EventBridge. Configure the NLB as an EventBridge target.
- D . Use the application API endpoint in the VPC as a target for EventBridge. Send events directly to the application API endpoint from EventBridge.
A
Explanation:
Why Option A is Correct: Running an AWS Lambda function within the same VPC ensures secure communication without exposing the API application to public IP addresses. The Lambda function can serve as a secure EventBridge target to send events to the API.
Why Other Options are Incorrect:
Option B & C: Internet-facing load balancers expose public IP addresses, which violates compliance requirements.
Option D: EventBridge cannot directly target an endpoint within a private VPC without intermediary services like Lambda.
AWS Documentation
Reference: EventBridge Targets
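A minimal sketch of such a forwarding function, assuming a hypothetical private API hostname that resolves only inside the VPC:

```python
import json
import urllib.request

# Hypothetical internal endpoint; reachable only from inside the VPC.
API_URL = "http://internal-api.example.internal/events"

def build_request(event: dict) -> urllib.request.Request:
    """Wrap the EventBridge event in a POST request to the private API."""
    data = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )

def handler(event, context):
    # EventBridge invokes the function with the matched event as `event`;
    # the Lambda function's VPC configuration provides the network path.
    with urllib.request.urlopen(build_request(event)) as resp:
        return {"statusCode": resp.status}
```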
An application ingests data from an Amazon Kinesis data stream. The shards in the data stream are set for normal traffic.
During tests for peak traffic, the application ingests data slowly. A developer needs to adjust the data stream to handle the peak traffic.
What should the developer do to meet this requirement MOST cost-effectively?
- A . Install the Kinesis Producer Library (KPL) to ingest data into the data stream.
- B . Switch to on-demand capacity mode for the data stream. Specify a partition key when writing data to the data stream.
- C . Decrease the amount of time that data is kept in the data stream by using the DecreaseStreamRetentionPeriod API operation.
- D . Increase the shard count in the data stream by using the UpdateShardCount API operation.
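For reference, the UpdateShardCount operation named in option D takes a target shard count and a scaling type; a sketch with an illustrative stream name:

```python
def update_shard_count_args(stream_name: str, target_shards: int) -> dict:
    """Build kwargs for kinesis.update_shard_count.

    Each shard supports up to 1 MB/s or 1,000 records/s of writes, so
    raising the shard count raises ingest capacity for peak traffic."""
    return {
        "StreamName": stream_name,
        "TargetShardCount": target_shards,
        "ScalingType": "UNIFORM_SCALING",  # the only supported scaling type
    }

# Hypothetical wiring (requires AWS credentials):
# import boto3
# boto3.client("kinesis").update_shard_count(**update_shard_count_args("truck-events", 8))
```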
A developer is storing sensitive data generated by an application in Amazon S3. The developer wants to encrypt the data at rest. A company policy requires an audit trail of when the AWS Key Management Service (AWS KMS) key was used and by whom.
Which encryption option will meet these requirements?
- A . Server-side encryption with Amazon S3 managed keys (SSE-S3)
- B . Server-side encryption with AWS KMS managed keys (SSE-KMS)
- C . Server-side encryption with customer-provided keys (SSE-C)
- D . Server-side encryption with self-managed keys
B
Explanation:
This solution meets the requirements because it encrypts data at rest with AWS KMS keys and provides an audit trail of when and by whom they were used.

Server-side encryption with AWS KMS managed keys (SSE-KMS) is a feature of Amazon S3 that encrypts data using keys managed by AWS KMS. When SSE-KMS is enabled for an S3 bucket or object, S3 requests AWS KMS to generate data keys and encrypts the data with them. AWS KMS logs every use of its keys in AWS CloudTrail, which records all AWS KMS API calls as events. These events include who made the request, when it was made, and which key was used, so the company can use CloudTrail logs to audit critical events related to data encryption and access.

Server-side encryption with Amazon S3 managed keys (SSE-S3) also encrypts data at rest, but with keys managed by S3, and it does not provide an audit trail of key usage. Server-side encryption with customer-provided keys (SSE-C) and server-side encryption with self-managed keys encrypt data at rest with keys the customer provides or manages; they do not provide an audit trail of key usage and require additional key-management overhead.
Reference: [Protecting Data Using Server-Side Encryption with AWS KMS-Managed Encryption Keys (SSE-KMS)], [Logging AWS KMS API calls with AWS CloudTrail]
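A sketch of writing an object with SSE-KMS via boto3; bucket, key, and KMS key names are placeholders:

```python
def sse_kms_put_args(bucket: str, key: str, body: bytes,
                     kms_key_id=None) -> dict:
    """Build kwargs for s3.put_object with SSE-KMS encryption.

    Every resulting GenerateDataKey/Decrypt call on the KMS key is
    recorded in AWS CloudTrail, providing the required audit trail."""
    args = {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
    }
    if kms_key_id is not None:
        args["SSEKMSKeyId"] = kms_key_id  # omit to use the aws/s3 managed key
    return args

# Hypothetical wiring (requires AWS credentials):
# import boto3
# boto3.client("s3").put_object(**sse_kms_put_args("app-data", "records/1.json", b"{}"))
```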
A developer is creating an application that uses an AWS Lambda function to transform and load data from an Amazon S3 bucket. When the developer tests the application, the developer finds that some invocations of the Lambda function are slower than others.
The developer needs to update the Lambda function to have predictable invocation durations that run with low latency. Any initialization activities, such as loading libraries and instantiating clients, must run during allocation time rather than during actual function invocations.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create a schedule group in Amazon EventBridge Scheduler to invoke the Lambda function.
- B . Configure provisioned concurrency for the Lambda function to have the necessary number of execution environments.
- C . Use the $LATEST version of the Lambda function.
- D . Configure reserved concurrency for the Lambda function to have the necessary number of execution environments.
- E . Deploy changes, and publish a new version of the Lambda function.
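Provisioned concurrency (option B) keeps execution environments initialized ahead of invocations, which is what moves library loading and client instantiation out of the invocation path; it can only be attached to a published version or alias, never $LATEST. A sketch with placeholder names:

```python
def provisioned_concurrency_args(function_name: str, qualifier: str,
                                 executions: int) -> dict:
    """Build kwargs for lambda.put_provisioned_concurrency_config.

    Qualifier must be a published version number or an alias;
    provisioned concurrency cannot target $LATEST."""
    return {
        "FunctionName": function_name,
        "Qualifier": qualifier,
        "ProvisionedConcurrentExecutions": executions,
    }

# Hypothetical wiring (requires AWS credentials):
# import boto3
# client = boto3.client("lambda")
# version = client.publish_version(FunctionName="etl-fn")["Version"]
# client.put_provisioned_concurrency_config(
#     **provisioned_concurrency_args("etl-fn", version, 50))
```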
A developer is setting up infrastructure by using AWS Cloud Formation. If an error occurs when the resources described in the CloudFormation template are provisioned, successfully provisioned resources must be preserved. The developer must provision and update the CloudFormation stack by using the AWS CLI.
Which solution will meet these requirements?
- A . Add an --enable-termination-protection command line option to the create-stack command and the update-stack command.
- B . Add a --disable-rollback command line option to the create-stack command and the update-stack command.
- C . Add a --parameters ParameterKey=PreserveResources,ParameterValue=True command line option to the create-stack command and the update-stack command.
- D . Add a --tags Key=PreserveResources,Value=True command line option to the create-stack command and the update-stack command.
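In boto3 terms, the --disable-rollback flag from option B corresponds to the DisableRollback parameter of create_stack (update_stack accepts the same parameter); the stack and template names here are placeholders:

```python
def create_stack_args(stack_name: str, template_body: str) -> dict:
    """Build kwargs for cloudformation.create_stack.

    DisableRollback=True preserves successfully provisioned resources
    when a later resource in the template fails to create."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "DisableRollback": True,
    }

# Hypothetical wiring (requires AWS credentials):
# import boto3
# cfn = boto3.client("cloudformation")
# cfn.create_stack(**create_stack_args("web-app", open("template.yaml").read()))
```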
A company is using an Amazon API Gateway REST API endpoint as a webhook to publish events from an on-premises source control management (SCM) system to Amazon EventBridge. The company has configured an EventBridge rule to listen for the events and to control application deployment in a central AWS account. The company needs to receive the same events across multiple receiver AWS accounts.
How can a developer meet these requirements without changing the configuration of the SCM system?
- A . Deploy the API Gateway REST API to all the required AWS accounts. Use the same custom domain name for all the gateway endpoints so that a single SCM webhook can be used for all events from all accounts.
- B . Deploy the API Gateway REST API to all the receiver AWS accounts. Create as many SCM webhooks as the number of AWS accounts.
- C . Grant permission to the central AWS account for EventBridge to access the receiver AWS accounts. Add an EventBridge event bus on the receiver AWS accounts as the targets to the existing EventBridge rule.
- D . Convert the API Gateway type from REST API to HTTP API.
A development team maintains a web application by using a single AWS CloudFormation template. The template defines web servers and an Amazon RDS database. The team uses the CloudFormation template to deploy the CloudFormation stack to different environments.
During a recent application deployment, a developer caused the primary development database to be dropped and recreated. The result of this incident was a loss of data. The team needs to avoid accidental database deletion in the future.
Which solutions will meet these requirements? (Choose two.)
- A . Add a CloudFormation DeletionPolicy attribute with the Retain value to the database resource.
- B . Update the CloudFormation stack policy to prevent updates to the database.
- C . Modify the database to use a Multi-AZ deployment.
- D . Create a CloudFormation stack set for the web application and database deployments.
- E . Add a CloudFormation DeletionPolicy attribute with the Retain value to the stack.
A,B
Explanation:
AWS CloudFormation is a service that enables developers to model and provision AWS resources using templates. The developer can add a CloudFormation DeletionPolicy attribute with the Retain value to the database resource. This prevents the database from being deleted when the stack is deleted or the resource is removed during an update. The developer can also update the CloudFormation stack policy to prevent updates to the database, which guards against accidental changes to the database configuration or properties.
Reference: [What Is AWS CloudFormation? – AWS CloudFormation]
[DeletionPolicy Attribute – AWS CloudFormation]
[Protecting Resources During Stack Updates – AWS CloudFormation]
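The DeletionPolicy attribute from option A attaches directly to the resource in the template; a minimal sketch with an illustrative logical ID (required RDS properties abbreviated):

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    # Retain keeps the database (and its data) when the resource is
    # removed from the template or the stack is deleted.
    DeletionPolicy: Retain
    Properties:
      DBInstanceClass: db.t3.medium
      Engine: mysql
      AllocatedStorage: "20"
      # remaining required properties (credentials, subnet group) omitted
```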
