Practice Free DVA-C02 Exam Online Questions
A company needs to harden its container images before the images are in a running state. The company’s application uses Amazon Elastic Container Registry (Amazon ECR) as an image registry, Amazon Elastic Kubernetes Service (Amazon EKS) for compute, and an AWS CodePipeline pipeline that orchestrates a continuous integration and continuous delivery (CI/CD) workflow.
Dynamic application security testing occurs in the final stage of the pipeline after a new image is deployed to a development namespace in the EKS cluster. A developer needs to place an analysis stage before this deployment to analyze the container image earlier in the CI/CD pipeline.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Build the container image and run the docker scan command locally. Mitigate any findings before pushing changes to the source code repository. Write a pre-commit hook that enforces the use of this workflow before commit.
- B . Create a new CodePipeline stage that occurs after the container image is built. Configure ECR basic image scanning to scan on image push. Use an AWS Lambda function as the action provider. Configure the Lambda function to check the scan results and to fail the pipeline if there are findings.
- C . Create a new CodePipeline stage that occurs after source code has been retrieved from its repository. Run a security scanner on the latest revision of the source code. Fail the pipeline if there are findings.
- D . Add an action to the deployment stage of the pipeline so that the action occurs before the deployment to the EKS cluster. Configure ECR basic image scanning to scan on image push. Use an AWS Lambda function as the action provider. Configure the Lambda function to check the scan results and to fail the pipeline if there are findings.
B
Explanation:
The most operationally efficient solution is to add a new CodePipeline stage immediately after the container image is built, configure ECR basic image scanning to scan on image push, and use an AWS Lambda function as the action provider that checks the scan results and fails the pipeline if there are findings. This way the image is analyzed early in the CI/CD pipeline, and vulnerabilities are detected and reported before the image is deployed to the EKS cluster. Option D performs the same check but only just before deployment, later than necessary; option A relies on a local developer workflow that cannot be enforced centrally; and option C scans the source code rather than the built image, which misses vulnerabilities introduced by base images and dependencies.
Reference: Amazon ECR image scanning
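As a rough illustration, the Lambda action provider could look like the following minimal sketch. The repository name and image tag here are hypothetical placeholders (a real pipeline would pass them in through the action's user parameters), and because scan-on-push completes asynchronously, production code would also poll or use a waiter until the scan finishes.

```python
import boto3

ecr = boto3.client("ecr")
codepipeline = boto3.client("codepipeline")

# Hypothetical values; pass these in via the action's user parameters in practice.
REPOSITORY = "my-app"
IMAGE_TAG = "latest"

def lambda_handler(event, context):
    # CodePipeline invokes the function with a job ID that must be acknowledged.
    job_id = event["CodePipeline.job"]["id"]
    try:
        result = ecr.describe_image_scan_findings(
            repositoryName=REPOSITORY, imageId={"imageTag": IMAGE_TAG}
        )
        severity_counts = result["imageScanFindings"].get("findingSeverityCounts", {})
        if severity_counts:
            # Any findings stop the pipeline before the image reaches EKS.
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={
                    "type": "JobFailed",
                    "message": f"ECR scan reported findings: {severity_counts}",
                },
            )
        else:
            codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )
```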
A developer is deploying a new application to Amazon Elastic Container Service (Amazon ECS). The developer needs to securely store and retrieve different types of variables. These variables include authentication information for a remote API, the URL for the API, and credentials. The authentication information and API URL must be available to all current and future deployed versions of the application across development, testing, and production environments.
How should the developer retrieve the variables with the FEWEST application changes?
- A . Update the application to retrieve the variables from AWS Systems Manager Parameter Store. Use unique paths in Parameter Store for each variable in each environment. Store the credentials in AWS Secrets Manager in each environment.
- B . Update the application to retrieve the variables from AWS Key Management Service (AWS KMS). Store the API URL and credentials as unique keys for each environment.
- C . Update the application to retrieve the variables from an encrypted file that is stored with the application. Store the API URL and credentials in unique files for each environment.
- D . Update the application to retrieve the variables from each of the deployed environments. Define the authentication information and API URL in the ECS task definition as unique names during the deployment process.
A
Explanation:
AWS Systems Manager Parameter Store is a service that provides secure, hierarchical storage for configuration data management and secrets management. The developer can update the application to retrieve the variables from Parameter Store by using the AWS SDK or the AWS CLI. The developer can use unique paths in Parameter Store for each variable in each environment, such as /dev/api-url,/test/api-url, and /prod/api-url. The developer can also store the credentials in AWS Secrets Manager, which is integrated with Parameter Store and provides additional features such as automatic rotation and encryption.
Reference: What Is AWS Systems Manager? (AWS Systems Manager), Parameter Store (AWS Systems Manager), What Is AWS Secrets Manager? (AWS Secrets Manager)
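As a sketch of what the application change might look like, assuming hypothetical parameter names such as /prod/api-url: SecureString values need WithDecryption=True, and Secrets Manager secrets can also be read through the Parameter Store API under the /aws/reference/secretsmanager/ prefix.

```python
import boto3

ssm = boto3.client("ssm")

def load_config(environment: str) -> dict:
    # Hypothetical paths: /dev/api-url, /test/api-url, /prod/api-url, and so on.
    api_url = ssm.get_parameter(Name=f"/{environment}/api-url")["Parameter"]["Value"]

    # SecureString parameters are decrypted only when WithDecryption=True.
    auth = ssm.get_parameter(
        Name=f"/{environment}/api-auth", WithDecryption=True
    )["Parameter"]["Value"]

    # Secrets Manager secrets are reachable through the same API via the
    # /aws/reference/secretsmanager/<secret-name> prefix.
    credentials = ssm.get_parameter(
        Name=f"/aws/reference/secretsmanager/{environment}-api-credentials",
        WithDecryption=True,
    )["Parameter"]["Value"]

    return {"api_url": api_url, "auth": auth, "credentials": credentials}
```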
A developer is building a web application that uses Amazon API Gateway to expose an AWS Lambda function to process requests from clients. During testing, the developer notices that the API Gateway times out even though the Lambda function finishes under the set time limit.
Which of the following API Gateway metrics in Amazon CloudWatch can help the developer troubleshoot the issue? (Choose two.)
- A . CacheHitCount
- B . IntegrationLatency
- C . CacheMissCount
- D . Latency
- E . Count
B, D
Explanation:
Amazon API Gateway is a service that enables developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon CloudWatch is a service that monitors AWS resources and applications. API Gateway provides several CloudWatch metrics to help developers troubleshoot issues with their APIs.
Two of the metrics that can help the developer troubleshoot the issue of API Gateway timing out are:
IntegrationLatency: This metric measures the time between when API Gateway relays a request to the backend and when it receives a response from the backend. A high value for this metric indicates that the backend is taking too long to respond, which can cause API Gateway to time out.
Latency: This metric measures the time between when API Gateway receives a request from a client and when it returns a response to the client. A high value for this metric indicates that either the integration latency is high or API Gateway is taking too long to process the request or response.
Reference: What Is Amazon API Gateway? (Amazon API Gateway), Amazon API Gateway Metrics and Dimensions (Amazon CloudWatch), Troubleshooting API Errors (Amazon API Gateway)
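To compare the two metrics, the developer could pull recent datapoints from CloudWatch along these lines (the API name and one-hour window are hypothetical; for REST APIs the namespace is AWS/ApiGateway). If IntegrationLatency tracks Latency closely, the backend is the bottleneck; a large gap points at API Gateway's own request and response processing.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

def recent_average(metric_name: str, api_name: str) -> list:
    end = datetime.now(timezone.utc)
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/ApiGateway",
        MetricName=metric_name,  # "Latency" or "IntegrationLatency"
        Dimensions=[{"Name": "ApiName", "Value": api_name}],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=300,  # 5-minute buckets
        Statistics=["Average"],
        Unit="Milliseconds",
    )
    return response["Datapoints"]

for name in ("Latency", "IntegrationLatency"):
    print(name, recent_average(name, "my-api"))  # "my-api" is a placeholder
```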
A developer has created an AWS Lambda function that is written in Python. The Lambda function reads data from objects in Amazon S3 and writes data to an Amazon DynamoDB table. The function is successfully invoked from an S3 event notification when an object is created. However, the function fails when it attempts to write to the DynamoDB table.
What is the MOST likely cause of this issue?
- A . The Lambda function’s concurrency limit has been exceeded.
- B . The DynamoDB table requires a global secondary index (GSI) to support writes.
- C . The Lambda function does not have IAM permissions to write to DynamoDB.
- D . The DynamoDB table is not running in the same Availability Zone as the Lambda function.
C
Explanation:
The function's execution role most likely lacks DynamoDB write permissions such as dynamodb:PutItem for the target table. The successful S3-triggered invocation only shows that the event notification and the function's resource-based invoke policy are in place; it says nothing about what the function itself is allowed to call. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_lambda-access-dynamodb.html
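A minimal sketch of the fix, attaching an inline policy with write permissions to the function's execution role (the role name, table, Region, and account ID are hypothetical placeholders):

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical names; substitute the function's real execution role and table ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:BatchWriteItem",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
        }
    ],
}

iam.put_role_policy(
    RoleName="my-lambda-execution-role",
    PolicyName="AllowDynamoDbWrites",
    PolicyDocument=json.dumps(policy),
)
```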
A company built an online event platform. For each event the company organizes quizzes and generates leaderboards that are based on the quiz scores. The company stores the leaderboard data in Amazon DynamoDB and retains the data for 30 days after an event is complete. The company then uses a scheduled job to delete the old leaderboard data.
The DynamoDB table is configured with a fixed write capacity. During the months when many events occur, the DynamoDB write API requests are throttled when the scheduled delete job runs.
A developer must create a long-term solution that deletes the old leaderboard data and optimizes write throughput.
Which solution meets these requirements?
- A . Configure a TTL attribute for the leaderboard data.
- B . Use DynamoDB Streams to schedule and delete the leaderboard data.
- C . Use AWS Step Functions to schedule and delete the leaderboard data.
- D . Set a higher write capacity when the scheduled delete job runs.
A
Explanation:
DynamoDB TTL (Time-to-Live): A native feature that automatically deletes items after a specified expiration time.
Efficiency: Eliminates the need for scheduled deletion jobs, optimizing write throughput by avoiding potential throttling conflicts.
Seamless Integration: TTL works directly within DynamoDB, requiring minimal development overhead.
Reference: DynamoDB TTL
Documentation: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
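A minimal sketch of the change, assuming a hypothetical TTL attribute named expires_at that holds an epoch-seconds timestamp:

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# One-time setup: enable TTL on the hypothetical attribute "expires_at".
dynamodb.update_time_to_live(
    TableName="Leaderboards",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# When writing leaderboard items, stamp them to expire 30 days out.
dynamodb.put_item(
    TableName="Leaderboards",
    Item={
        "event_id": {"S": "event-123"},
        "player_id": {"S": "player-456"},
        "score": {"N": "9800"},
        "expires_at": {"N": str(int(time.time()) + 30 * 24 * 60 * 60)},
    },
)
```

Because TTL deletions happen in the background and do not consume write capacity units, the throttling caused by the bulk delete job disappears.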
A company has multiple Amazon VPC endpoints in the same VPC. A developer needs to configure an Amazon S3 bucket policy so that users can access an S3 bucket only by using these VPC endpoints.
Which solution will meet these requirements?
- A . Create multiple S3 bucket policies, one per VPC endpoint ID, each using the aws:SourceVpce value in the StringNotEquals condition.
- B . Create a single S3 bucket policy that uses the aws:SourceVpc value in the StringNotEquals condition to match the VPC ID.
- C . Create a single S3 bucket policy that uses the aws:SourceVpce value in the StringNotEquals condition to match a single VPC endpoint ID.
- D . Create a single S3 bucket policy that has multiple aws:SourceVpce values in the StringNotEquals condition. Repeat for all the VPC endpoint IDs.
D
Explanation:
This solution will meet the requirements by creating a single S3 bucket policy that denies access to the S3 bucket unless the request comes from one of the specified VPC endpoints. The aws:SourceVpce condition key is used to match the ID of the VPC endpoint that is used to access the S3 bucket. The StringNotEquals condition operator is used to negate the condition, so that only requests from the listed VPC endpoints are allowed. Option A is not optimal because it will create multiple S3 bucket policies, which is not possible as only one bucket policy can be attached to an S3 bucket. Option B is not optimal because it will use the aws:SourceVpc condition key, which matches the ID of the VPC that is used to access the S3 bucket, not the VPC endpoint. Option C is not optimal because it will use the StringNotEquals condition operator with a single value, which will deny access to the S3 bucket from all VPC endpoints except one.
Reference: Using Amazon S3 Bucket Policies and User Policies, AWS Global Condition Context Keys
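A minimal sketch of such a policy applied with boto3 (the bucket name and VPC endpoint IDs are hypothetical placeholders):

```python
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnlessFromApprovedVpcEndpoints",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Deny any request whose endpoint ID is not in this list.
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": ["vpce-1111aaaa", "vpce-2222bbbb"]
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```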
A developer accesses AWS CodeCommit over SSH.
The SSH keys configured to access AWS CodeCommit are tied to a user with the following permissions:
The developer needs to create and delete branches.
Which specific IAM permissions need to be added based on the principle of least privilege?
- A . Option A
- B . Option B
- C . Option C
- D . Option D
A
Explanation:
This solution allows the developer to create and delete branches in AWS CodeCommit by granting only the codecommit:CreateBranch and codecommit:DeleteBranch permissions. These are the minimum permissions required for this task, following the principle of least privilege. Option B grants too many permissions, such as codecommit:Put*, which allows the developer to create, update, or delete any resource in CodeCommit. Option C grants too few permissions, such as codecommit:Update*, which does not allow the developer to create or delete branches. Option D grants all permissions (codecommit:*), which is neither secure nor recommended.
Reference: AWS CodeCommit Permissions Reference, Create a Branch (AWS CLI)
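A minimal sketch of the least-privilege statement described above, attached to the developer's IAM user (the repository ARN is a hypothetical placeholder):

```python
import json

import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["codecommit:CreateBranch", "codecommit:DeleteBranch"],
            # Hypothetical ARN; scope the grant to the one repository.
            "Resource": "arn:aws:codecommit:us-east-1:123456789012:my-repo",
        }
    ],
}

iam.put_user_policy(
    UserName="developer",
    PolicyName="CodeCommitBranchManagement",
    PolicyDocument=json.dumps(policy),
)
```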
A developer must use multi-factor authentication (MFA) to access data in an Amazon S3 bucket that is in another AWS account.
Which AWS Security Token Service (AWS STS) API operation should the developer use with the MFA information to meet this requirement?
- A . AssumeRoleWithWebIdentity
- B . GetFederationToken
- C . AssumeRoleWithSAML
- D . AssumeRole
D
Explanation:
AWS STS AssumeRole: the central operation for obtaining temporary security credentials, commonly used for cross-account access.
MFA integration: the AssumeRole call accepts SerialNumber and TokenCode parameters so that multi-factor authentication can be enforced.
Credentials for S3 access: the returned temporary credentials provide the permissions needed to access the S3 bucket in the other account.
Reference: AWS STS AssumeRole
Documentation: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
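A minimal sketch of the call (the role ARN, MFA device ARN, and bucket name are hypothetical placeholders; the cross-account role's trust policy would typically require aws:MultiFactorAuthPresent):

```python
import boto3

sts = boto3.client("sts")

response = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/cross-account-s3-access",
    RoleSessionName="mfa-s3-session",
    SerialNumber="arn:aws:iam::111111111111:mfa/developer",  # the user's MFA device
    TokenCode="123456",  # current one-time code from that device
)

creds = response["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="other-account-bucket").get("KeyCount"))
```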
A developer is deploying a company’s application to Amazon EC2 instances. The application generates gigabytes of data files each day. The files are rarely accessed but the files must be available to the application’s users within minutes of a request during the first year of storage. The company must retain the files for 7 years.
How can the developer implement the application to meet these requirements MOST cost-effectively?
- A . Store the files in an Amazon S3 bucket. Use the S3 Glacier Instant Retrieval storage class. Create an S3 Lifecycle policy to transition the files to the S3 Glacier Deep Archive storage class after 1 year.
- B . Store the files in an Amazon S3 bucket. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition the files to the S3 Glacier Flexible Retrieval storage class after 1 year.
- C . Store the files on an Amazon Elastic Block Store (Amazon EBS) volume. Use Amazon Data Lifecycle Manager (Amazon DLM) to create snapshots of the EBS volumes and to store those snapshots in Amazon S3.
- D . Store the files on an Amazon Elastic File System (Amazon EFS) mount. Configure EFS lifecycle management to transition the files to the EFS Standard-Infrequent Access (Standard-IA) storage class after 1 year.
A
Explanation:
Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access (S3 Standard-IA) storage class, when your data is accessed once per quarter. https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/
Understanding Storage Requirements:
Files are large and infrequently accessed, but they must be available within minutes when requested during the first year.
Long-term (7-year) retention is required.
Cost-effectiveness is a top priority.
Why S3 Glacier Instant Retrieval:
Matches the retrieval requirements (access within minutes).
More cost-effective than S3 Standard for infrequently accessed data.
Simpler than the traditional Glacier retrieval classes, where retrievals can take hours.
Why S3 Glacier Deep Archive:
Most cost-effective S3 storage class for long-term archival.
Meets the 7-year retention requirement.
S3 Lifecycle Policy:
Automate the transition from Glacier Instant Retrieval to Glacier Deep Archive after one year.
Optimize costs by matching storage classes to access patterns.
Reference: Amazon S3 Storage Classes: https://aws.amazon.com/s3/storage-classes/
S3 Lifecycle Policies: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html
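A minimal sketch of the setup (the bucket name and key are hypothetical): objects land in S3 Glacier Instant Retrieval on upload, a lifecycle rule moves them to Deep Archive after one year, and the same rule expires them after seven years.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-data-files"  # hypothetical bucket name

# Upload new data files directly into S3 Glacier Instant Retrieval.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2025-01-01.dat",
    Body=b"...",
    StorageClass="GLACIER_IR",
)

# Transition to Deep Archive after 1 year; delete after 7 years.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
                "Expiration": {"Days": 2555},  # roughly 7 years
            }
        ]
    },
)
```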
A developer is designing an AWS Lambda function that creates temporary files that are less than 10 MB during invocation. The temporary files will be accessed and modified multiple times during invocation. The developer has no need to save or retrieve these files in the future.
Where should the temporary files be stored?
- A . the /tmp directory
- B . Amazon Elastic File System (Amazon EFS)
- C . Amazon Elastic Block Store (Amazon EBS)
- D . Amazon S3
A
Explanation:
AWS Lambda is a service that lets developers run code without provisioning or managing servers. Lambda provides a local file system that can be used to store temporary files during invocation. The file system is mounted at /tmp and provides 512 MB of ephemeral storage by default (configurable up to 10,240 MB). Files in /tmp are accessible only within the execution environment that created them and last only for that environment's lifetime, so they may be visible to later invocations that reuse the environment. The developer can store temporary files that are smaller than 10 MB in the /tmp directory and access and modify them multiple times during an invocation.
Reference: What Is AWS Lambda? (AWS Lambda), AWS Lambda Execution Environment (AWS Lambda)
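A minimal sketch of a handler that uses /tmp as scratch space (the file name is arbitrary):

```python
import os

def lambda_handler(event, context):
    # /tmp is the only writable file system path in the execution environment.
    path = "/tmp/scratch.dat"

    with open(path, "wb") as f:
        f.write(b"intermediate results")

    # The file can be re-read and modified any number of times in this invocation.
    with open(path, "ab") as f:
        f.write(b" - updated")

    size = os.path.getsize(path)
    # Clean up: execution environments are reused, so leftover files would
    # otherwise count against the /tmp limit on later invocations.
    os.remove(path)
    return {"bytes_written": size}
```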