Practice Free DOP-C02 Exam Online Questions
A company has deployed a new platform that runs on Amazon Elastic Kubernetes Service (Amazon EKS). The new platform hosts web applications that users frequently update. The application developers build the Docker images for the applications and deploy the Docker images manually to the platform.
The platform usage has increased to more than 500 users every day. Frequent updates, building the updated Docker images for the applications, and deploying the Docker images on the platform manually have all become difficult to manage.
The company needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if Docker image scanning returns any HIGH or CRITICAL findings for operating system or programming language package vulnerabilities.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Create a pipeline in AWS CodePipeline. Use an Amazon S3 event to invoke the pipeline when a newer version of the Dockerfile is committed. Add a step to the pipeline to initiate the AWS CodeBuild project.
- B . Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Create a pipeline in AWS CodePipeline. Use an Amazon EventBridge event to invoke the pipeline when a newer version of the Dockerfile is committed. Add a step to the pipeline to initiate the AWS CodeBuild project.
- C . Create an AWS CodeBuild project that builds the Docker images and stores the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Turn on basic scanning for the ECR repository. Create an Amazon EventBridge rule that monitors Amazon GuardDuty events. Configure the EventBridge rule to send an event to an SNS topic when the finding-severity-counts parameter is more than 0 at a CRITICAL or HIGH level.
- D . Create an AWS CodeBuild project that builds the Docker images and stores the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Turn on enhanced scanning for the ECR repository. Create an Amazon EventBridge rule that monitors ECR image scan events. Configure the EventBridge rule to send an event to an SNS topic when the finding-severity-counts parameter is more than 0 at a CRITICAL or HIGH level.
- E . Create an AWS CodeBuild project that scans the Dockerfile. Configure the project to build the Docker images and store the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository if the scan is successful. Configure an SNS topic to provide notification if the scan returns any vulnerabilities.
B, D
Explanation:
Step 1: Automate Docker Image Deployment using AWS CodePipeline
The first challenge is the manual process of building and deploying Docker images. To address this, you can use AWS CodePipeline to automate the process. AWS CodePipeline integrates with CodeCommit (for source code and Dockerfile storage) and CodeBuild (to build Docker images and store them in Amazon Elastic Container Registry (ECR)).
Action: Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Then, create a pipeline in AWS CodePipeline that triggers on new commits via an Amazon EventBridge event.
Why: This automation significantly reduces the manual effort of building and deploying Docker images when updates are made to the codebase.
Reference: AWS documentation on AWS CodePipeline and CodeCommit Integration.
This corresponds to Option B: Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Create a pipeline in AWS CodePipeline. Use an Amazon EventBridge event to invoke the pipeline when a newer version of the Dockerfile is committed. Add a step to the pipeline to initiate the AWS CodeBuild project.
Step 2: Enabling Enhanced Scanning on Amazon ECR and Monitoring Vulnerabilities
To scan for vulnerabilities in Docker images, Amazon ECR provides both basic and enhanced scanning options. Enhanced scanning offers deeper and more frequent scans, and integrates with Amazon EventBridge to send notifications based on findings.
Action: Turn on enhanced scanning for the Amazon ECR repository where the Docker images are stored. Use Amazon EventBridge to monitor image scan events and trigger an Amazon SNS notification if any HIGH or CRITICAL vulnerabilities are found.
Why: Enhanced scanning provides a detailed analysis of operating system and programming language package vulnerabilities, which can trigger notifications in real-time.
Reference: AWS documentation on Enhanced Scanning for Amazon ECR.
This corresponds to Option D: Create an AWS CodeBuild project that builds the Docker images and stores the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Turn on enhanced scanning for the ECR repository. Create an Amazon EventBridge rule that monitors ECR image scan events. Configure the EventBridge rule to send an event to an SNS topic when the finding-severity-counts parameter is more than 0 at a CRITICAL or HIGH level.
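The EventBridge half of Option D can be sketched as an event pattern. This is a minimal sketch, assuming the "Inspector2 Scan" event shape (enhanced scanning routes findings through Amazon Inspector) and its finding-severity-counts map; verify the field names against the current event schema before using, and note the SNS target is configured on the rule separately.

```python
import json

# Sketch of an EventBridge event pattern for Option D (assumption: the
# "Inspector2 Scan" event emitted for enhanced-scanning results carries a
# finding-severity-counts map in its detail).
def build_scan_alert_pattern():
    return {
        "source": ["aws.inspector2"],
        "detail-type": ["Inspector2 Scan"],
        "detail": {
            # "$or" lets one rule match when EITHER severity count exceeds 0.
            "$or": [
                {"finding-severity-counts": {"CRITICAL": [{"numeric": [">", 0]}]}},
                {"finding-severity-counts": {"HIGH": [{"numeric": [">", 0]}]}},
            ]
        },
    }

pattern = build_scan_alert_pattern()
print(json.dumps(pattern, indent=2))
```

The rule's target would then be the SNS topic that notifies the team.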
A security team is concerned that a developer can unintentionally attach an Elastic IP address to an Amazon EC2 instance in production. No developer should be allowed to attach an Elastic IP address to an instance. The security team must be notified if any production server has an Elastic IP address at any time.
How can this task be automated?
- A . Use Amazon Athena to query AWS CloudTrail logs to check for any associate-address attempts. Create an AWS Lambda function to disassociate the Elastic IP address from the instance, and alert the security team.
- B . Attach an IAM policy to the developers’ IAM group to deny associate-address permissions. Create a custom AWS Config rule to check whether an Elastic IP address is associated with any instance tagged as production, and alert the security team.
- C . Ensure that all IAM groups associated with developers do not have associate-address permissions. Create a scheduled AWS Lambda function to check whether an Elastic IP address is associated with any instance tagged as production, and alert the security team if an instance has an Elastic IP address associated with it.
- D . Create an AWS Config rule to check that all production instances have EC2 IAM roles that include deny associate-address permissions. Verify whether there is an Elastic IP address associated with any instance, and alert the security team if an instance has an Elastic IP address associated with it.
B
Explanation:
To prevent developers from unintentionally attaching an Elastic IP address to an Amazon EC2 instance in production, the best approach is to use IAM policies and AWS Config rules. By attaching an IAM policy that denies the associate-address permission to the developers’ IAM group, you ensure that developers cannot perform this action. Additionally, creating a custom AWS Config rule to check for Elastic IP addresses associated with instances tagged as production provides ongoing monitoring. If the rule detects an Elastic IP address, it can trigger an alert to notify the security team. This method is proactive and enforces the necessary permissions while also providing a mechanism for detection and notification.
Reference: from Amazon DevOps sources
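The preventive half of Option B is a short deny policy attached to the developers' IAM group. The statement below is a minimal sketch (the Sid is arbitrary); an explicit Deny overrides any Allow the developers may otherwise hold.

```python
import json

# Minimal deny policy for the developers' IAM group (Option B).
deny_associate_address = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyElasticIpAttach",      # arbitrary identifier
            "Effect": "Deny",
            "Action": "ec2:AssociateAddress",  # the Elastic IP attach call
            "Resource": "*",
        }
    ],
}
print(json.dumps(deny_associate_address, indent=2))
```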
A company is using AWS CodePipeline to deploy an application. According to a new guideline, a member of the company’s security team must sign off on any application changes before the changes are deployed into production. The approval must be recorded and retained.
Which combination of actions will meet these requirements? (Select TWO.)
- A . Configure CodePipeline to write actions to Amazon CloudWatch Logs.
- B . Configure CodePipeline to write actions to an Amazon S3 bucket at the end of each pipeline stage.
- C . Create an AWS CloudTrail trail to deliver logs to Amazon S3.
- D . Create a CodePipeline custom action to invoke an AWS Lambda function for approval. Create a policy that gives the security team access to manage CodePipeline custom actions.
- E . Create a CodePipeline manual approval action before the deployment step. Create a policy that grants the security team access to approve manual approval stages.
C, E
Explanation:
To meet the new guideline for application deployment, the company can use a combination of AWS CodePipeline and AWS CloudTrail. A manual approval action in CodePipeline allows the security team to review and approve changes before they are deployed. This action can be configured to pause the pipeline until approval is granted, ensuring that no changes move to production without the necessary sign-off. Additionally, by creating an AWS CloudTrail trail, all actions taken within CodePipeline, including approvals, are recorded and delivered to an Amazon S3 bucket. This provides an audit trail that can be retained for compliance and review purposes.
Reference: AWS CodePipeline’s manual approval action provides a way to ensure that a member of the security team can review and approve changes before they are deployed.
AWS CloudTrail integration with CodePipeline allows for the recording and retention of all pipeline actions, including approvals, which can be stored in Amazon S3 for record-keeping.
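The manual approval action from Option E can be sketched as a stage in the pipeline definition. The stage and action names and the SNS topic ARN below are placeholders; the stage sits immediately before the production deploy stage.

```python
# Sketch of a CodePipeline manual approval stage (Option E).
# Names and the NotificationArn are placeholders, not prescribed values.
approval_stage = {
    "name": "SecurityApproval",
    "actions": [
        {
            "name": "SecuritySignOff",
            "actionTypeId": {
                "category": "Approval",  # pauses the pipeline until approved
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "configuration": {
                # Placeholder topic that notifies the security team.
                "NotificationArn": "arn:aws:sns:us-east-1:111122223333:deploy-approvals",
                "CustomData": "Security sign-off required before production deploy",
            },
            "runOrder": 1,
        }
    ],
}
```

The security team's IAM policy then grants codepipeline:PutApprovalResult on this pipeline so they can record the approval.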
An ecommerce company hosts a web application on Amazon EC2 instances that are in an Auto Scaling group. The company deploys the application across multiple Availability Zones.
Application users are reporting intermittent performance issues with the application.
The company enables basic Amazon CloudWatch monitoring for the EC2 instances. The company identifies and implements a fix for the performance issues. After resolving the issues, the company wants to implement a monitoring solution that will quickly alert the company about future performance issues.
Which solution will meet this requirement?
- A . Enable detailed monitoring for the EC2 instances. Create custom CloudWatch metrics for application-specific performance indicators. Set up CloudWatch alarms based on the custom metrics. Use CloudWatch Logs Insights to analyze application logs for error patterns.
- B . Use AWS X-Ray to implement distributed tracing. Integrate X-Ray with Amazon CloudWatch RUM. Use Amazon EventBridge to trigger automatic scaling actions based on custom events.
- C . Use Amazon CloudFront to deliver the application. Use AWS CloudTrail to monitor API calls. Use AWS Trusted Advisor to generate recommendations to optimize performance. Use Amazon GuardDuty to detect potential performance issues.
- D . Enable VPC Flow Logs. Use Amazon Data Firehose to stream flow logs to Amazon S3. Use Amazon Athena to analyze the logs and to send alerts to the company.
A
Explanation:
The company needs fast, proactive alerts for future performance issues, beyond basic EC2 metrics.
Option A provides a complete, AWS-native monitoring pattern aligned with best practices:
Enable detailed monitoring on EC2 instances to increase metric resolution (from 5-minute to 1-minute intervals), improving the responsiveness of alarms.
Define custom CloudWatch metrics for application-level indicators such as request latency, error rate, queue depth, or throughput. These metrics can be published from the application or sidecar agents.
Create CloudWatch alarms on both infrastructure (CPU, network, disk) and custom application metrics with thresholds that reflect performance SLOs. Alarms can notify teams via SNS or incident management tools.
Use CloudWatch Logs Insights to analyze logs for recurring error patterns, slow requests, or exceptions when alarms fire.
Option B focuses on tracing and frontend RUM; while useful, it is more complex and not necessary just to get quick alerts.
Option C uses services (CloudTrail, GuardDuty, Trusted Advisor) that are not focused on real-time performance detection.
Option D with VPC Flow Logs is network-level and would not detect general application performance issues.
Thus, Option A offers a direct, efficient way to detect and alert on performance degradations quickly.
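The alarm half of Option A can be sketched as the parameters for a CloudWatch PutMetricAlarm call. The namespace, metric name, and threshold below are illustrative assumptions tied to whatever SLO the company defines, not values from the scenario.

```python
# Sketch of alarm parameters for a custom application metric (Option A).
# Namespace, metric name, and threshold are illustrative assumptions.
def latency_alarm_params(topic_arn):
    return {
        "AlarmName": "app-p95-latency-high",
        "Namespace": "ECommerce/WebApp",    # custom namespace (assumption)
        "MetricName": "RequestLatencyMs",   # published by the application
        "ExtendedStatistic": "p95",
        "Period": 60,                       # 1-minute resolution from detailed monitoring
        "EvaluationPeriods": 3,
        "Threshold": 500.0,                 # ms, tied to the performance SLO
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],        # SNS topic that alerts the team
    }

params = latency_alarm_params("arn:aws:sns:us-east-1:111122223333:perf-alerts")
```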
A company runs several applications in the same AWS account. The applications send logs to Amazon CloudWatch.
A data analytics team needs to collect performance metrics and custom metrics from the applications. The analytics team needs to transform the metrics data before storing the data in an Amazon S3 bucket. The analytics team must automatically collect any new metrics that are added to the CloudWatch namespace.
Which solution will meet these requirements with the LEAST operational overhead?
A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company has enabled all features for the organization. The member accounts under one OU contain S3 buckets that store sensitive data.
A DevOps engineer wants to ensure that only IAM principals from within the organization can access the S3 buckets in the OU.
Which solution will meet this requirement?
- A . Create an SCP in the management account of the organization to restrict Amazon S3 actions by using the aws:PrincipalAccount condition. Apply the SCP to the OU.
- B . Create an IAM permissions boundary in the management account of the organization to restrict access to Amazon S3 actions by using the aws:PrincipalOrgID condition.
- C . Configure AWS Resource Access Manager (AWS RAM) to restrict access to S3 buckets in the OU so the S3 buckets cannot be shared outside the organization.
- D . Create a resource control policy (RCP) in the management account of the organization to restrict Amazon S3 actions by using the aws:PrincipalOrgID condition. Apply the RCP to the OU.
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
To restrict access to S3 buckets so that only IAM principals from within the AWS Organization can access them, an SCP (Service Control Policy) with conditions using the aws:PrincipalAccount or preferably aws:PrincipalOrgID can be applied at the OU level.
SCPs restrict the maximum permissions for IAM entities in member accounts and can be used to enforce access control policies across accounts.
The aws:PrincipalAccount condition restricts access to principals from specific accounts, while aws:PrincipalOrgID restricts based on the organization ID.
IAM permissions boundaries (Option B) cannot be applied organization-wide and do not enforce restrictions across accounts.
AWS RAM (Option C) is for sharing resources but does not restrict S3 bucket access based on organizational principals.
Resource control policies (RCPs) in AWS Organizations (Option D) are a newer mechanism and are not the approach this question tests; the SCP in Option A is the intended control.
Reference: AWS Organizations SCPs with Conditions:
"Use SCPs with aws:PrincipalOrgID to restrict resource access to principals in your organization."
(AWS Organizations SCP Conditions)
S3 Bucket Policy Conditions for Organization:
"Use the aws:PrincipalOrgID condition key in S3 bucket policies to restrict access to members of your organization."
(S3 Bucket Policy Examples)
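The aws:PrincipalOrgID pattern quoted above looks like the following when written as a bucket policy statement. The organization ID and bucket name are placeholders.

```python
import json

# Bucket policy statement restricting access to principals in the organization.
# "o-exampleorgid" and the bucket ARN are placeholders.
org_only_statement = {
    "Sid": "DenyAccessOutsideOrg",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::sensitive-data-bucket",
        "arn:aws:s3:::sensitive-data-bucket/*",
    ],
    "Condition": {
        # Deny any principal whose organization ID does not match.
        "StringNotEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}
    },
}
print(json.dumps(org_only_statement, indent=2))
```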
A company’s application has an API that retrieves workload metrics. The company needs to audit, analyze, and visualize these metrics from the application to detect issues at scale.
Which combination of steps will meet these requirements? (Select THREE).
A company is implementing an Amazon Elastic Container Service (Amazon ECS) cluster to run its workload. The company architecture will run multiple ECS services on the cluster. The architecture includes an Application Load Balancer on the front end and uses multiple target groups to route traffic.
A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis.
Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.)
- A . Download the Amazon CloudWatch Logs container instance from AWS. Configure this instance as a task. Update the application service definitions to include the logging task.
- B . Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs.
- C . Use Amazon EventBridge to schedule an AWS Lambda function that will run every 60 seconds and will run the Amazon CloudWatch Logs create-export-task command. Then point the output to the logging S3 bucket.
- D . Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket.
- E . Activate access logging on the target groups that the ECS services use. Then send the logs directly to the logging S3 bucket.
- F . Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose.
A company has deployed an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 node groups. The company’s DevOps team uses the Kubernetes Horizontal Pod Autoscaler and recently installed a supported EKS cluster Autoscaler.
The DevOps team needs to implement a solution to collect metrics and logs of the EKS cluster to establish a baseline for performance. The DevOps team will create an initial set of thresholds for specific metrics and will update the thresholds over time as the cluster is used. The DevOps team must receive an Amazon Simple Notification Service (Amazon SNS) email notification if the initial set of thresholds is exceeded or if the EKS cluster Autoscaler is not functioning properly.
The solution must collect cluster, node, and pod metrics. The solution also must capture logs in Amazon CloudWatch.
Which combination of steps should the DevOps team take to meet these requirements? (Select THREE.)
- A . Deploy the CloudWatch agent and Fluent Bit to the cluster. Ensure that the EKS cluster has appropriate permissions to send metrics and logs to CloudWatch.
- B . Deploy AWS Distro for OpenTelemetry to the cluster. Ensure that the EKS cluster has appropriate permissions to send metrics and logs to CloudWatch.
- C . Create CloudWatch alarms to monitor the CPU, memory, and node failure metrics of the cluster. Configure the alarms to send an SNS email notification to the DevOps team if thresholds are exceeded.
- D . Create a CloudWatch composite alarm to monitor a metric log filter of the CPU, memory, and node metrics of the cluster. Configure the alarm to send an SNS email notification to the DevOps team when anomalies are detected.
- E . Create a CloudWatch alarm to monitor the logs of the Autoscaler deployments for errors. Configure the alarm to send an SNS email notification to the DevOps team if thresholds are exceeded.
- F . Create a CloudWatch alarm to monitor a metric log filter of the Autoscaler deployments for errors. Configure the alarm to send an SNS email notification to the DevOps team if thresholds are exceeded.
A, C, F
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of DevOps Engineer Documents Only:
Deploy the CloudWatch agent and Fluent Bit (the supported Amazon EKS integration) to forward metrics and logs. Create CloudWatch alarms for the cluster's CPU, memory, and node failure metrics (Option C) and a metric log filter over the Autoscaler logs (Option F), each triggering SNS notifications when thresholds are breached. This pattern matches AWS guidance on monitoring EKS clusters with CloudWatch Container Insights and alarms.
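The metric filter from Option F can be sketched as the parameters for a CloudWatch Logs put_metric_filter call. The log group name is an assumption about where Fluent Bit ships the Autoscaler pod logs, and the filter pattern is an illustrative OR of common error terms.

```python
# Sketch of a metric filter over the Cluster Autoscaler logs (Option F).
# Log group name and filter pattern are assumptions to adapt per cluster.
metric_filter = {
    "logGroupName": "/aws/containerinsights/prod-eks/application",  # assumption
    "filterName": "cluster-autoscaler-errors",
    "filterPattern": "?ERROR ?Failed",  # match lines containing either term
    "metricTransformations": [
        {
            "metricName": "AutoscalerErrorCount",
            "metricNamespace": "EKS/Autoscaler",  # custom namespace (assumption)
            "metricValue": "1",                   # count one per matching line
        }
    ],
}
```

A standard CloudWatch alarm on AutoscalerErrorCount then sends the SNS email when the error count exceeds the team's threshold.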
A company needs to implement failover for its application. The application includes an Amazon CloudFront distribution and a public Application Load Balancer (ALB) in an AWS Region. The company has configured the ALB as the default origin for the distribution.
After some recent application outages, the company wants a zero-second RTO. The company deploys the application to a secondary Region in a warm standby configuration. A DevOps engineer needs to automate the failover of the application to the secondary Region so that HTTP GET requests meet the desired RTO.
Which solution will meet these requirements?
- A . Create a second CloudFront distribution that has the secondary ALB as the default origin. Create Amazon Route 53 alias records that have a failover policy and Evaluate Target Health set to Yes for both CloudFront distributions. Update the application to use the new record set.
- B . Create a new origin on the distribution for the secondary ALB. Create a new origin group. Set the original ALB as the primary origin. Configure the origin group to fail over for HTTP 5xx status codes. Update the default behavior to use the origin group.
- C . Create Amazon Route 53 alias records that have a failover policy and Evaluate Target Health set to Yes for both ALBs. Set the TTL of both records to 0. Update the distribution’s origin to use the new record set.
- D . Create a CloudFront function that detects HTTP 5xx status codes. Configure the function to return a 307 Temporary Redirect error response to the secondary ALB if the function detects 5xx status codes. Update the distribution’s default behavior to send origin responses to the function.
B
Explanation:
The best solution to implement failover for the application is to use CloudFront origin groups. Origin groups allow CloudFront to automatically switch to a secondary origin when the primary origin is unavailable or returns specific HTTP status codes that indicate a failure [1]. This way, CloudFront can serve the requests from the secondary ALB in the secondary Region without any delay or redirection. To set up origin groups, the DevOps engineer needs to create a new origin on the distribution for the secondary ALB, create a new origin group with the original ALB as the primary origin and the secondary ALB as the secondary origin, and configure the origin group to fail over for HTTP 5xx status codes. Then, the DevOps engineer needs to update the default behavior to use the origin group instead of the single origin [2].
The other options are not as effective or efficient as the solution in option B.
Option A is not suitable because creating a second CloudFront distribution will increase the complexity and cost of the application. Moreover, using Route 53 alias records with a failover policy will introduce some delay in detecting and switching to the secondary CloudFront distribution, which may not meet the zero-second RTO requirement.
Option C is not feasible because CloudFront does not support using Route 53 alias records as origins [3].
Option D is not advisable because using a CloudFront function to redirect the requests to the secondary ALB will add an extra round-trip and latency to the failover process, which may also not meet the zero-second RTO requirement.
Reference:
1: Optimizing high availability with CloudFront origin failover – Amazon CloudFront
2: Creating an origin group – Amazon CloudFront
3: Values That You Specify When You Create or Update a Web Distribution – Amazon CloudFront
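The origin group from Option B can be sketched in the shape the CloudFront distribution configuration uses. The origin IDs are placeholders for the primary and secondary ALB origins already defined on the distribution.

```python
# Sketch of an origin group with failover criteria (Option B).
# Origin IDs are placeholders for the two ALB origins.
origin_group = {
    "Id": "alb-failover-group",
    "FailoverCriteria": {
        # Fail over on the 5xx codes CloudFront supports for origin failover.
        "StatusCodes": {"Quantity": 4, "Items": [500, 502, 503, 504]}
    },
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "primary-alb"},    # original Region
            {"OriginId": "secondary-alb"},  # warm standby Region
        ],
    },
}
```

The distribution's default cache behavior then points its TargetOriginId at this group instead of the single primary origin.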
A company deploys a web application on Amazon EC2 instances that are behind an Application Load Balancer (ALB). The company stores the application code in an AWS CodeConnections compatible Git repository.
When the company merges code to the main branch, an AWS CodeBuild project is initiated. The CodeBuild project compiles the code, stores the packaged code in AWS CodeArtifact, and invokes AWS Systems Manager Run Command to deploy the packaged code to the EC2 instances.
Previous deployments have resulted in defects, EC2 instances that were not running the latest version of the packaged code, and inconsistencies between instances. A DevOps engineer needs to improve the reliability of the deployment solution.
Which combination of actions will meet this requirement? (Select TWO.)
- A . Create a pipeline in AWS CodePipeline that uses the Git repository as the source provider. Configure the pipeline to have parallel build and test stages. In the pipeline, pass the CodeBuild project output artifact to an AWS CodeDeploy action.
- B . Create a pipeline in AWS CodePipeline that uses the Git repository as the source provider. Configure the pipeline to have a build stage followed by a test stage. In the pipeline, pass the CodeBuild project output artifact to an AWS CodeDeploy action.
- C . Create an AWS CodeDeploy application and a deployment group to deploy the packaged code to the EC2 instances. Configure the ALB for the deployment group.
- D . Create individual AWS Lambda functions that use AWS CodeDeploy instead of Systems Manager to run build, test, and deploy actions.
- E . Create an Amazon S3 bucket. Modify the CodeBuild project to store the packages in the S3 bucket instead of in CodeArtifact. Use deploy actions in CodeDeploy to deploy.
B, C
Explanation:
The core problem described is deployment inconsistency and lack of reliability caused by using AWS Systems Manager Run Command for application deployment. Run Command executes ad hoc commands and does not provide deployment orchestration, version tracking, lifecycle hooks, or health-based traffic control, which commonly leads to drift between EC2 instances.
The most reliable AWS-native solution is to adopt AWS CodePipeline combined with AWS CodeDeploy.
Option B introduces a structured CI/CD pipeline with a clear build stage followed by a test stage, ensuring that only tested artifacts progress to deployment. Sequential build and test stages are preferred for reliability and deterministic behavior, especially when test results must gate deployments.
Option C is essential because AWS CodeDeploy is the service specifically designed to deploy application revisions consistently across EC2 fleets. By creating a CodeDeploy application and deployment group and integrating it with the ALB, deployments gain support for lifecycle events, health checks, instance synchronization, and automatic rollback. This ensures that all EC2 instances receive the same application version and that traffic is managed safely during deployments.
Option A introduces unnecessary parallelism that does not address the core issue.
Option D adds excessive complexity with Lambda orchestration.
Option E incorrectly replaces CodeArtifact with S3 without addressing deployment reliability.
Therefore, combining CodePipeline (B) with CodeDeploy and ALB integration (C) provides consistent, repeatable, and reliable deployments aligned with AWS best practices.
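The deployment group from Option C can be sketched as the parameters for a CodeDeploy create-deployment-group call. The application, group, role, target group, and tag values below are placeholders; the traffic-control option is what ties deployments to the ALB.

```python
# Sketch of a CodeDeploy deployment group (Option C) targeting the EC2
# fleet behind the ALB. All names and ARNs are placeholders.
deployment_group = {
    "applicationName": "web-app",
    "deploymentGroupName": "web-app-prod",
    "serviceRoleArn": "arn:aws:iam::111122223333:role/CodeDeployServiceRole",  # placeholder
    "deploymentStyle": {
        "deploymentType": "IN_PLACE",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",  # drain/register instances via the ALB
    },
    "loadBalancerInfo": {
        "targetGroupInfoList": [{"name": "web-app-tg"}]  # ALB target group (placeholder)
    },
    "autoRollbackConfiguration": {"enabled": True, "events": ["DEPLOYMENT_FAILURE"]},
    "ec2TagFilters": [{"Key": "Role", "Value": "web", "Type": "KEY_AND_VALUE"}],
}
```

With this in place, every instance in the group receives the same revision, and failed deployments roll back automatically.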
A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to host containerized applications that are available in Amazon Elastic Container Registry (Amazon ECR).
The company currently launches EKS clusters in the company’s development environment by using the AWS CLI aws eks create-cluster command. The company uses the aws eks create-addon command to install required add-ons. All installed add-ons are currently version compatible with the version of Kubernetes that the company uses. All clusters exclusively use managed node groups for compute capacity.
Some of the EKS clusters require a version upgrade. A DevOps engineer must ensure that upgrades continuously occur within the AWS standard support schedule.
Which solution will meet this requirement with the LEAST operational overhead?
- A . Run the aws eks update-cluster-version command, providing appropriate arguments such as cluster name and version number.
- B . Enable EKS Auto Mode on all EKS clusters. Remove all existing managed node groups.
- C . Run the eksctl command to upgrade the EKS clusters. Provide appropriate arguments such as cluster name and version number.
- D . Refactor the environment to create EKS clusters by using infrastructure as code (IaC). Upgrade the clusters by using code changes.
B
Explanation:
The requirement is not just to perform a one-time upgrade, but to ensure that Kubernetes and related components stay continuously within the AWS standard support window, with minimal ongoing effort. EKS Auto Mode is designed to reduce operational burden by managing cluster infrastructure, including control plane and data plane lifecycle, in a more automated fashion. When EKS Auto Mode is enabled and existing managed node groups are removed, EKS can manage compute capacity and associated upgrades in a more integrated, hands-off way, tracking supported versions automatically.
Option A (aws eks update-cluster-version) and Option C (eksctl upgrade) are both manual upgrade mechanisms. They can update clusters to a specific version but do not continuously ensure future upgrades as new supported versions are released; they require the DevOps team to schedule and execute each upgrade.
Option D (IaC refactor) is generally a good practice but does not inherently provide automatic version lifecycle management; it also introduces significant upfront effort compared to enabling Auto Mode.
Thus, enabling EKS Auto Mode and deprecating manual managed node groups provides the least operational overhead for staying within the EKS-supported version schedule.
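The recurring work that Options A and C leave on the team is easy to see from the upgrade path itself: EKS control planes move one minor version at a time, so a cluster several versions behind needs a sequence of updates. A small helper sketching that path (version strings are illustrative):

```python
# EKS upgrades one Kubernetes minor version at a time; compute the sequence
# of intermediate versions a manual approach (Options A/C) must walk.
def upgrade_path(current, target):
    cur_major, cur_minor = (int(p) for p in current.split("."))
    tgt_major, tgt_minor = (int(p) for p in target.split("."))
    if tgt_major != cur_major or tgt_minor < cur_minor:
        raise ValueError("target must be a later minor version in the same major line")
    # One update per intermediate minor version, ending at the target.
    return [f"{cur_major}.{m}" for m in range(cur_minor + 1, tgt_minor + 1)]

print(upgrade_path("1.27", "1.30"))  # → ['1.28', '1.29', '1.30']
```

Each hop is a separate scheduled operation (plus add-on compatibility checks), which is exactly the overhead the Auto Mode approach is meant to absorb.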
