Practice Free DOP-C02 Exam Online Questions
A DevOps engineer needs to apply a core set of security controls to an existing set of AWS accounts. The accounts are in an organization in AWS Organizations. Individual teams will administer individual accounts by using the AdministratorAccess AWS managed policy. For all accounts, AWS CloudTrail and AWS Config must be turned on in all available AWS Regions. Individual account administrators must not be able to edit or delete any of the baseline resources. However, individual account administrators must be able to edit or delete their own CloudTrail trails and AWS Config rules.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Create an AWS CloudFormation template that defines the standard account resources. Deploy the template to all accounts from the organization’s management account by using CloudFormation StackSets. Set the stack policy to deny Update:Delete actions.
- B . Enable AWS Control Tower. Enroll the existing accounts in AWS Control Tower. Grant the individual account administrators access to CloudTrail and AWS Config.
- C . Designate an AWS Config management account. Create AWS Config recorders in all accounts by using AWS CloudFormation StackSets. Deploy AWS Config rules to the organization by using the AWS Config management account. Create a CloudTrail organization trail in the organization’s management account. Deny modification or deletion of the AWS Config recorders by using an SCP.
- D . Create an AWS CloudFormation template that defines the standard account resources. Deploy the template to all accounts from the organization’s management account by using CloudFormation StackSets. Create an SCP that prevents updates or deletions to CloudTrail resources or AWS Config resources unless the principal is an administrator of the organization’s management account.
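The "deny unless the principal is a management-account administrator" pattern that options C and D describe is implemented in an SCP with a `Deny` statement plus a condition on `aws:PrincipalArn` (SCPs do not support a `Principal` element). A minimal sketch follows; the role name `OrgManagementAdmin` is a hypothetical placeholder, and in practice the `Resource` element would be scoped to the baseline trail and recorder ARNs so that administrators can still manage their own trails and rules:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProtectBaselineAuditResources",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:DeleteTrail",
        "cloudtrail:StopLogging",
        "cloudtrail:UpdateTrail",
        "config:DeleteConfigurationRecorder",
        "config:StopConfigurationRecorder"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/OrgManagementAdmin"
        }
      }
    }
  ]
}
```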
A company frequently creates Docker images of an application. The company stores the images in Amazon Elastic Container Registry (Amazon ECR). The company creates both tagged images and untagged images.
The company wants to implement a solution to automatically delete images that have not been updated for a long time and are not frequently used. The solution must retain at least a specified number of images.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon S3 Lifecycle policies on the ECR repository to automatically delete images based on image age or the absence of tags on the image.
- B . Use Amazon ECR lifecycle policies to delete images based on age or the number of images that need to be retained in the repository.
- C . Configure an AWS Lambda function to run on a schedule to delete images based on age or the number of images that need to be retained in the repository.
- D . Use AWS Systems Manager to run a script by using the aws:executeScript action to automatically delete images based on image age or the absence of tags on the image.
B
Explanation:
Option B is the correct choice and has the least operational overhead because ECR lifecycle policies are the native, managed feature built specifically to automate image cleanup:
ECR lifecycle policies can expire images based on image age (e.g., images older than X days) and can
also enforce retaining a minimum number of images (count-based retention), which directly matches the requirement “retain at least a specified number of images.”
Lifecycle policies apply to both tagged and untagged images through rule selection and tagStatus filters, so the repository stays clean without custom automation.
Why the other options are not appropriate / higher overhead:
A is invalid: ECR is not S3; S3 lifecycle policies do not manage ECR images.
C and D introduce custom automation (Lambda or SSM scripting), along with the scheduling, permissions, error handling, retries, logging, and ongoing maintenance that come with it, all of which add more overhead than a managed lifecycle policy.
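A minimal sketch of such an ECR lifecycle policy, combining both rule types described above (expire untagged images by age, and cap the total image count so a specified number of recent images is always retained); the day and count values are illustrative:

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images not pushed in the last 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    },
    {
      "rulePriority": 2,
      "description": "Keep only the 30 most recently pushed images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 30
      },
      "action": { "type": "expire" }
    }
  ]
}
```

Rules are evaluated in priority order, and an `imageCountMoreThan` rule expires everything beyond the newest N images, which is how the "retain at least a specified number of images" requirement is met.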
A company is developing a web application and is using AWS CodeBuild for its CI/CD pipeline. The company must generate multiple artifacts from a single build process. The company also needs the ability to determine which build generated each artifact. The artifacts must be stored in an Amazon S3 bucket for further processing and deployment. Builds occur frequently and are based on a large Git repository. The company needs to optimize build times.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Configure the buildspec.yml file to specify multiple artifacts with different file sets. Enable local caching for the build process by using source cache mode. Use environment variables to dynamically name artifacts based on the build ID.
- B . Configure the buildspec.yml file to output all files as a single artifact. Enable local caching for the build process by using custom cache mode. Create an AWS Lambda function that is invoked by CodeBuild completion. Program the Lambda function to split the artifact into multiple files and to upload the files to the S3 bucket with dynamic names based on build ID.
- C . Create separate CodeBuild projects for each artifact type. Enable local caching for the build process by using Docker layer cache mode. Configure each project to output a single artifact to the S3 bucket with a dynamic name based on build ID. Use AWS Step Functions to orchestrate the projects in parallel.
- D . Set up CodeBuild to generate a single ZIP artifact that contains all files. Enable S3 caching for the build process. Use AWS CodePipeline with a custom action to extract the files and reorganize the files into multiple artifacts in the S3 bucket. Configure the custom action to dynamically name the files based on the time of the build.
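The multiple-artifact approach in option A maps to the `secondary-artifacts` block of a buildspec.yml. A minimal sketch follows; the artifact identifiers (`app`, `docs`), file paths, and build command are hypothetical, `$CODEBUILD_BUILD_ID` is a built-in CodeBuild environment variable that ties each artifact back to its build, and dynamic `name` values take effect only when the project enables artifact name overrides. Local source caching itself is configured on the CodeBuild project (LOCAL cache type with source cache mode), not in the buildspec:

```yaml
version: 0.2
phases:
  build:
    commands:
      - ./build.sh   # hypothetical build script producing dist/app and dist/docs
artifacts:
  secondary-artifacts:
    app:
      files:
        - 'dist/app/**/*'
      name: app-$CODEBUILD_BUILD_ID
    docs:
      files:
        - 'dist/docs/**/*'
      name: docs-$CODEBUILD_BUILD_ID
```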
