Practice Free DOP-C02 Exam Online Questions
A company recently launched multiple applications that use Application Load Balancers. Application response time often slows down when the applications experience problems. A DevOps engineer needs to implement a monitoring solution that alerts the company when the applications begin to perform slowly. The DevOps engineer creates an Amazon Simple Notification Service (Amazon SNS) topic and subscribes the company’s email address to the topic.
What should the DevOps engineer do next to meet the requirements?
- A . Create an Amazon EventBridge rule that invokes an AWS Lambda function to query the applications on a 5-minute interval. Configure the Lambda function to publish a notification to the SNS topic when the applications return errors.
- B . Create an Amazon CloudWatch Synthetics canary that runs a custom script to query the applications on a 5-minute interval. Configure the canary to use the SNS topic when the applications return errors.
- C . Create an Amazon CloudWatch alarm that uses the AWS/ApplicationELB namespace RequestCountPerTarget metric. Configure the CloudWatch alarm to send a notification when the number of connections becomes greater than the configured number of threads that the application supports. Configure the CloudWatch alarm to use the SNS topic.
- D . Create an Amazon CloudWatch alarm that uses the AWS/ApplicationELB namespace RequestCountPerTarget metric. Configure the CloudWatch alarm to send a notification when the average response time becomes greater than the longest response time that the application supports. Configure the CloudWatch alarm to use the SNS topic.
B
Explanation:
Option A is incorrect because using an Amazon EventBridge scheduled rule to invoke an AWS Lambda function every 5 minutes is a custom-built solution that must be written and maintained. The Lambda function detects only returned errors, not slow responses, so it does not satisfy the requirement to alert when the applications begin to perform slowly. It also incurs unnecessary cost and network overhead compared with a purpose-built monitoring service.
Option B is correct because creating an Amazon CloudWatch Synthetics canary that runs a custom script to query the applications on a 5-minute interval is a valid solution. CloudWatch Synthetics canaries are configurable scripts that monitor endpoints and APIs by simulating customer behavior. Canaries can run as often as once per minute, and can measure the latency and availability of the applications. Canaries can also send notifications to an Amazon SNS topic when they detect errors or performance issues1.
Option C is incorrect because creating an Amazon CloudWatch alarm that uses the AWS/ApplicationELB namespace RequestCountPerTarget metric is not a valid solution. The RequestCountPerTarget metric measures the number of requests completed or connections made per target in a target group2. This metric does not reflect the application response time, which is the requirement. Moreover, configuring the CloudWatch alarm to send a notification when the number of connections becomes greater than the configured number of threads that the application supports is not a valid way to measure the application performance, as it depends on the application design and implementation.
Option D is incorrect because it relies on the same AWS/ApplicationELB RequestCountPerTarget metric as option C, which does not reflect the application response time. Moreover, configuring the CloudWatch alarm to send a notification when the average response time becomes greater than the longest response time that the application supports is not a valid way to measure the application performance, as it does not account for variability or outliers in the response time distribution.
References:
1: Using synthetic monitoring
2: Application Load Balancer metrics
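The canary in option B runs a custom script against the application endpoints. A minimal sketch of that script's decision logic, written here as plain Python rather than the CloudWatch Synthetics runtime library, with an illustrative latency threshold:

```python
import time
import urllib.request

LATENCY_THRESHOLD_SECONDS = 2.0  # illustrative threshold, not from the question

def check_endpoint(url, timeout=10):
    """Measure response time and HTTP status for one endpoint.

    Returns (latency_seconds, status_code). Connection errors raise,
    which a real canary run would report as a failure.
    """
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        status = response.status
    return time.monotonic() - start, status

def is_unhealthy(latency_seconds, status_code,
                 threshold=LATENCY_THRESHOLD_SECONDS):
    """Decide whether the run should fail and so trigger the alert path."""
    return status_code >= 400 or latency_seconds > threshold
```

A CloudWatch alarm on the canary's failure metric would then publish to the SNS topic so the company's subscribed email address is notified.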
A company uses a trunk-based development branching strategy. The company has two AWS CodePipeline pipelines that are integrated with a Git provider. The pull_request pipeline has a branch filter that matches the feature branches. The main_branch pipeline has a branch filter that matches the main branch.
When pull requests are merged into the main branch, the pull requests are deployed by using the main_branch pipeline. The company’s developers need test results for all submitted pull requests as quickly as possible from the pull_request pipeline. The company wants to ensure that the main_branch pipeline’s test results finish and that each deployment is complete before the next pipeline execution.
Which solution will meet these requirements?
- A . Configure the pull_request pipeline to use SUPERSEDED mode. Configure the main_branch pipeline to use QUEUED mode.
- B . Configure the pull_request pipeline to use PARALLEL mode. Configure the main_branch pipeline to use QUEUED mode.
- C . Configure the pull_request pipeline to use PARALLEL mode. Configure the main_branch pipeline to use SUPERSEDED mode.
- D . Configure the pull_request pipeline to use QUEUED mode. Configure the main_branch pipeline to use SUPERSEDED mode.
B
Explanation:
In CodePipeline’s execution mode, PARALLEL mode for pull_request pipelines ensures that multiple feature branches can be tested simultaneously for quick feedback.
QUEUED mode for the main_branch pipeline ensures deployments run sequentially; each execution must finish before the next begins, preventing overlap.
This configuration aligns with AWS CodePipeline best practices for trunk-based development and concurrent test pipelines.
A rapidly growing company wants to scale for developer demand for AWS development environments. Development environments are created manually in the AWS Management Console. The networking team uses AWS CloudFormation to manage the networking infrastructure, exporting stack output values for the Amazon VPC and all subnets. The development environments have common standards, such as Application Load Balancers, Amazon EC2 Auto Scaling groups, security groups, and Amazon DynamoDB tables.
To keep up with demand, the DevOps engineer wants to automate the creation of development environments. Because the infrastructure required to support the application is expected to grow, there must be a way to easily update the deployed infrastructure. CloudFormation will be used to create a template for the development environments.
Which approach will meet these requirements and quickly provide consistent AWS environments for developers?
- A . Use Fn::ImportValue intrinsic functions in the Resources section of the template to retrieve Virtual Private Cloud (VPC) and subnet values. Use CloudFormation StackSets for the development environments, using the Count input parameter to indicate the number of environments needed. Use the UpdateStackSet command to update existing development environments.
- B . Use nested stacks to define common infrastructure components. To access the exported values, use TemplateURL to reference the networking team’s template. To retrieve Virtual Private Cloud (VPC) and subnet values, use Fn::ImportValue intrinsic functions in the Parameters section of the root template. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
- C . Use nested stacks to define common infrastructure components. Use Fn::ImportValue intrinsic functions with the resources of the nested stack to retrieve Virtual Private Cloud (VPC) and subnet values. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
- D . Use Fn::ImportValue intrinsic functions in the Parameters section of the root template to retrieve Virtual Private Cloud (VPC) and subnet values. Define the development resources in the order they need to be created in the CloudFormation nested stacks. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
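The Fn::ImportValue intrinsic the options refer to consumes values exported by the networking team's stack. A sketch of a resource fragment built as JSON-compatible Python (the export names are hypothetical stand-ins for whatever the networking stack actually exports):

```python
# Hypothetical export names; the networking team's stack would define them.
VPC_EXPORT = "network-stack-VpcId"
SUBNET_EXPORT = "network-stack-PrivateSubnetIds"

def import_value(export_name):
    """CloudFormation Fn::ImportValue intrinsic as a JSON fragment."""
    return {"Fn::ImportValue": export_name}

# An Application Load Balancer resource consuming the exported subnet list
# (exported as a comma-delimited string, split back into a list here).
alb_resource = {
    "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
    "Properties": {
        "Subnets": {"Fn::Split": [",", import_value(SUBNET_EXPORT)]},
    },
}
```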
A company uses an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to host its machine learning (ML) application. As the ML model and the container image size grow, the time that new pods take to start up has increased to several minutes.
A DevOps engineer needs to reduce the startup time to seconds. The solution must also reduce the startup time to seconds when the pod runs on nodes that were recently added to the cluster.
The DevOps engineer creates an Amazon EventBridge rule that invokes an automation in AWS Systems Manager. The automation prefetches the container images from an Amazon Elastic Container Registry (Amazon ECR) repository when new images are pushed to the repository. The DevOps engineer also configures tags to be applied to the cluster and the node groups.
What should the DevOps engineer do next to meet the requirements?
- A . Create an IAM role that has a policy that allows EventBridge to use Systems Manager to run commands in the EKS cluster’s control plane nodes. Create a Systems Manager State Manager association that uses the control plane nodes’ tags to prefetch corresponding container images.
- B . Create an IAM role that has a policy that allows EventBridge to use Systems Manager to run commands in the EKS cluster’s nodes. Create a Systems Manager State Manager association that uses the nodes’ machine size to prefetch corresponding container images.
- C . Create an IAM role that has a policy that allows EventBridge to use Systems Manager to run commands in the EKS cluster’s nodes. Create a Systems Manager State Manager association that uses the nodes’ tags to prefetch corresponding container images.
- D . Create an IAM role that has a policy that allows EventBridge to use Systems Manager to run commands in the EKS cluster’s control plane nodes. Create a Systems Manager State Manager association that uses the nodes’ tags to prefetch corresponding container images.
C
Explanation:
The startup delay occurs because large container images must be pulled from Amazon ECR when pods are scheduled, especially on newly added nodes that do not have cached images. To consistently reduce pod startup time to seconds, container images must be prefetched directly onto the worker nodes that run the pods, not the EKS control plane.
Amazon EKS control plane nodes are fully managed by AWS and cannot be accessed or modified using AWS Systems Manager. Therefore, any solution that attempts to run Systems Manager commands on control plane nodes is invalid. Prefetching must target the EKS worker nodes (EC2 instances in managed node groups) where container images are actually stored and used.
Option C correctly creates an IAM role that allows Amazon EventBridge to invoke AWS Systems Manager to run commands on the EKS worker nodes. By using Systems Manager State Manager associations with node tags, the automation dynamically targets both existing nodes and newly added nodes that inherit the same tags. This ensures that container images are pulled immediately when a new image is pushed to ECR or when new nodes join the cluster.
Option B incorrectly targets nodes based on machine size, which is not reliable for lifecycle automation.
Options A and D incorrectly reference control plane nodes, which cannot be managed with Systems Manager.
Therefore, Option C is the correct and AWS-aligned solution to ensure fast pod startup times across all nodes.
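Tag-based targeting is what lets the association pick up newly added nodes automatically. A sketch of the request parameters for ssm:CreateAssociation, where the document name, tag key, and schedule are illustrative assumptions:

```python
def build_prefetch_association_request(document_name, cluster_tag_value):
    """Request parameters for ssm:CreateAssociation that target EKS worker
    nodes by tag, so new nodes carrying the same tag are targeted
    automatically when they join the cluster.

    The document name, tag key, and schedule are illustrative.
    """
    return {
        "Name": document_name,  # e.g. a custom runbook that pulls ECR images
        "Targets": [
            {"Key": "tag:eks-cluster", "Values": [cluster_tag_value]},
        ],
        "ScheduleExpression": "rate(30 minutes)",
        "ComplianceSeverity": "MEDIUM",
    }

request = build_prefetch_association_request("PrefetchEcrImages", "ml-cluster")
```

The EventBridge rule from the scenario covers new image pushes; the association's tag targeting covers new nodes.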
A company deploys its corporate infrastructure on AWS across multiple AWS Regions and Availability Zones. The infrastructure is deployed on Amazon EC2 instances and connects with AWS IoT Greengrass devices. The company deploys additional resources on on-premises servers that are located in the corporate headquarters.
The company wants to reduce the overhead involved in maintaining and updating its resources. The company’s DevOps team plans to use AWS Systems Manager to implement automated management and application of patches. The DevOps team confirms that Systems Manager is available in the Regions where the resources are deployed. Systems Manager is also available in a Region near the corporate headquarters.
Which combination of steps must the DevOps team take to implement automated patch and configuration management across the company’s EC2 instances, IoT devices, and on-premises infrastructure? (Select THREE.)
- A . Apply tags to all the EC2 instances, AWS IoT Greengrass devices, and on-premises servers. Use Systems Manager Session Manager to push patches to all the tagged devices.
- B . Use Systems Manager Run Command to schedule patching for the EC2 instances, AWS IoT Greengrass devices, and on-premises servers.
- C . Use Systems Manager Patch Manager to schedule patching for the EC2 instances, AWS IoT Greengrass devices, and on-premises servers as a Systems Manager maintenance window task.
- D . Configure Amazon EventBridge to monitor Systems Manager Patch Manager for updates to patch baselines. Associate Systems Manager Run Command with the event to initiate a patch action for all EC2 instances, AWS IoT Greengrass devices, and on-premises servers.
- E . Create an IAM instance profile for Systems Manager. Attach the instance profile to all the EC2 instances in the AWS account. For the AWS IoT Greengrass devices and on-premises servers, create an IAM service role for Systems Manager.
- F . Generate a managed-instance activation. Use the Activation Code and Activation ID to install Systems Manager Agent (SSM Agent) on each server in the on-premises environment. Update the AWS IoT Greengrass IAM token exchange role. Use the role to deploy SSM Agent on all the IoT devices.
BEF
Explanation:
To implement automated patch and configuration management across the company’s EC2 instances, IoT devices, and on-premises infrastructure using AWS Systems Manager, the DevOps team should take the following steps:
B. Use Systems Manager Run Command to schedule patching for the EC2 instances, AWS IoT Greengrass devices, and on-premises servers.
Explanation: Systems Manager Run Command allows you to remotely and securely manage the configuration of your managed instances. It can be utilized to schedule patch management activities across EC2 instances, IoT devices, and on-premises servers.
E. Create an IAM instance profile for Systems Manager. Attach the instance profile to all the EC2 instances in the AWS account. For the AWS IoT Greengrass devices and on-premises servers, create an IAM service role for Systems Manager.
Explanation: IAM roles and profiles are necessary for Systems Manager to have the necessary permissions to manage EC2 instances, IoT Greengrass devices, and on-premises servers. Setting up proper roles will ensure secure and authorized operations.
F. Generate a managed-instance activation. Use the Activation Code and Activation ID to install Systems Manager Agent (SSM Agent) on each server in the on-premises environment. Update the AWS IoT Greengrass IAM token exchange role. Use the role to deploy SSM Agent on all the IoT devices.
Explanation: To manage servers and IoT devices using Systems Manager, you need to install the SSM Agent on these entities. Managed-instance activation will help in registering the on-premises servers with Systems Manager, and the IAM token exchange role will facilitate the deployment of the SSM Agent on IoT devices.
By combining these steps, the DevOps team can establish a solution for automated patch and configuration management across the various environments in which the company operates.
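The managed-instance activation in step F maps to ssm:CreateActivation. A sketch of the request parameters and the subsequent agent registration, where the role name, limit, and Region are illustrative:

```python
def build_activation_request(role_name, instance_limit):
    """Parameters for ssm:CreateActivation (hybrid-managed on-premises servers).

    role_name is the IAM service role that Systems Manager assumes for the
    registered machines; the names and limit here are illustrative.
    """
    return {
        "Description": "On-premises servers at corporate HQ",
        "IamRole": role_name,
        "RegistrationLimit": instance_limit,
        "DefaultInstanceName": "hq-server",
    }

activation = build_activation_request("SSMServiceRoleForOnPrem", 50)

# The ActivationId and ActivationCode returned by CreateActivation are then
# used on each on-premises server to register SSM Agent, e.g.:
#   amazon-ssm-agent -register -code <ActivationCode> -id <ActivationId> \
#       -region us-east-1   # Region is illustrative
```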
A company uses an organization in AWS Organizations to manage its AWS accounts. The company’s DevOps team has developed an AWS Lambda function that calls the Organizations API to create new AWS accounts.
The Lambda function runs in the organization’s management account. The DevOps team needs to move the Lambda function from the management account to a dedicated AWS account. The DevOps team must ensure that the Lambda function has the ability to create new AWS accounts only in Organizations before the team deploys the Lambda function to the new account.
Which solution will meet these requirements?
- A . In the management account, create a new IAM role that has the necessary permission to create new accounts in Organizations. Allow the role to be assumed by the Lambda execution role in the new AWS account. Update the Lambda function code to assume the role when the Lambda function creates new AWS accounts. Update the Lambda execution role to ensure that it has permission to assume the new role.
- B . In the management account, turn on delegated administration for Organizations. Create a new delegation policy that grants the new AWS account permission to create new AWS accounts in Organizations. Ensure that the Lambda execution role has the organizations:CreateAccount permission.
- C . In the management account, create a new IAM role that has the necessary permission to create new accounts in Organizations. Allow the role to be assumed by the Lambda service principal. Update the Lambda function code to assume the role when the Lambda function creates new AWS accounts. Update the Lambda execution role to ensure that it has permission to assume the new role.
- D . In the management account, enable AWS Control Tower. Turn on delegated administration for AWS Control Tower. Create a resource policy that allows the new AWS account to create new AWS accounts in AWS Control Tower. Update the Lambda function code to use the AWS Control Tower API in the new AWS account. Ensure that the Lambda execution role has the controltower:CreateManagedAccount permission.
A
Explanation:
Only the Organizations management account (or roles in that account) can call organizations:CreateAccount directly. When moving the Lambda function to a different (dedicated) account, the correct pattern is to perform cross-account role assumption into a role that resides in the management account and has tightly scoped Organizations permissions.
Option A describes exactly this:
Create an IAM role in the management account with only the required Organizations permissions (for example, organizations:CreateAccount and related read permissions).
Configure the trust policy on that role to allow it to be assumed by the Lambda execution role in the new account.
Update the Lambda code to call sts:AssumeRole into the management-account role before invoking the Organizations API.
This approach ensures that:
The Lambda function can create new accounts, but only via the management account.
The Lambda execution role in the dedicated account has no direct Organizations permissions; it only has permission to assume the specific role.
Option B is incorrect because delegated administration for Organizations does not delegate account creation; calling organizations:CreateAccount remains a management-account responsibility.
Options C and D either misconfigure trust or introduce unnecessary Control Tower complexity. Cross-account assume-role from the dedicated account into the management account is the correct and least-privilege solution.
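The two policies described above can be sketched as JSON-compatible Python. The account IDs and role names are illustrative placeholders, not values from the question:

```python
MANAGEMENT_ACCOUNT_ID = "111111111111"   # illustrative
DEDICATED_ACCOUNT_ID = "222222222222"    # illustrative
LAMBDA_EXECUTION_ROLE = (
    f"arn:aws:iam::{DEDICATED_ACCOUNT_ID}:role/AccountCreatorLambdaRole"
)

# Trust policy on the management-account role: only the Lambda execution
# role in the dedicated account may assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": LAMBDA_EXECUTION_ROLE},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy on the same role: scoped to account creation only.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "organizations:CreateAccount",
            "organizations:DescribeCreateAccountStatus",
        ],
        "Resource": "*",
    }],
}
```

The Lambda function would call sts:AssumeRole on the management-account role and use the returned temporary credentials for the Organizations API.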
A company has an application that runs on a fleet of Amazon EC2 instances. The application requires frequent restarts. The application logs contain error messages when a restart is required. The application logs are published to a log group in Amazon CloudWatch Logs.
An Amazon CloudWatch alarm notifies an application engineer through an Amazon Simple Notification Service (Amazon SNS) topic when the logs contain a large number of restart-related error messages. The application engineer manually restarts the application on the instances after the application engineer receives a notification from the SNS topic.
A DevOps engineer needs to implement a solution to automate the application restart on the instances without restarting the instances.
Which solution will meet these requirements in the MOST operationally efficient manner?
- A . Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Configure the SNS topic to invoke the runbook.
- B . Create an AWS Lambda function that restarts the application on the instances. Configure the Lambda function as an event destination of the SNS topic.
- C . Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Create an AWS Lambda function to invoke the runbook. Configure the Lambda function as an event destination of the SNS topic.
- D . Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Configure an Amazon EventBridge rule that reacts when the CloudWatch alarm enters ALARM state. Specify the runbook as a target of the rule.
A company is migrating an application to Amazon Elastic Container Service (Amazon ECS). The company wants to consolidate log data in Amazon CloudWatch in the us-west-2 Region. No CloudWatch log groups currently exist for Amazon ECS.
The company receives the following error code when an ECS task attempts to launch:
“service my-service-name was unable to place a task because no container instance met all of its requirements.”
The ECS task definition includes the following container log configuration:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "awslogs-mytask",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-mytask",
"mode": "non-blocking",
"max-buffer-size": "25m"
}
}
The ECS cluster uses an Amazon EC2 Auto Scaling group to provide capacity for tasks. EC2 instances launch an Amazon ECS-optimized AMI.
Which solution will fix the problem?
- A . Modify the ECS infrastructure IAM role to add the logs:CreateLogStream and logs:PutLogEvents permissions.
- B . Modify the ECS log configuration to use blocking mode.
- C . Modify the ECS container instance IAM role to add the logs:CreateLogStream and logs:PutLogEvents permissions.
- D . Modify the ECS log configuration by setting the awslogs-create-group option to false.
C
Explanation:
When using the awslogs log driver with Amazon ECS on EC2, CloudWatch Logs permissions must be granted to the ECS container instance IAM role, not the task definition or infrastructure role. The ECS agent running on the EC2 instances is responsible for creating log streams and pushing log events to Amazon CloudWatch Logs on behalf of the containers.
In this scenario, the task definition is correctly configured to automatically create the log group (awslogs-create-group: true) and send logs to the specified Region. However, the error occurs because the EC2 container instances do not have sufficient IAM permissions to perform the required CloudWatch Logs API calls. As a result, ECS cannot place the task, and it reports that no container instance meets the requirements.
According to AWS documentation, the container instance IAM role must include the following permissions when using the awslogs driver:
logs:CreateLogStream
logs:PutLogEvents
(and, when creating log groups automatically) logs:CreateLogGroup
Option C correctly addresses the root cause by updating the container instance IAM role.
Option A is incorrect because the ECS infrastructure or service role is not used to write logs.
Option B is unrelated to permissions and does not resolve the issue.
Option D would fail because the log group does not already exist, causing task startup to fail.
Therefore, modifying the container instance IAM role is the correct solution.
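A sketch of the policy statement that would be attached to the container instance role, expressed as JSON-compatible Python. Scoping the resource to the log group from the task definition is an illustrative choice; a broader resource would also work:

```python
# Minimal CloudWatch Logs policy for the ECS container instance role
# (commonly ecsInstanceRole). logs:CreateLogGroup is needed because the
# task definition sets awslogs-create-group to "true".
logs_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
        ],
        # Scoped to the log group named in the task definition (illustrative).
        "Resource": "arn:aws:logs:us-west-2:*:log-group:awslogs-mytask*",
    }],
}
```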
A large company recently acquired a small company. The large company invited the small company to join the large company’s existing organization in AWS Organizations as a new OU. A DevOps engineer determines that the small company needs to launch t3.small Amazon EC2 instance types for the company’s application workloads. The small company needs to deploy the instances only within US-based AWS Regions. The DevOps engineer needs to use an SCP in the small company’s new OU to ensure that the small company can launch only the required instance types.
Which solution will meet these requirements?
- A . Configure a statement to deny the ec2:RunInstances action for all EC2 instance resources when the ec2:InstanceType condition is not equal to t3.small. Configure another statement to deny the ec2:RunInstances action for all EC2 instance resources when the aws:RequestedRegion condition is not equal to us-.
- B . Configure a statement to allow the ec2:RunInstances action for all EC2 instance resources when the ec2:InstanceType condition is not equal to t3.small. Configure another statement to allow the ec2:RunInstances action for all EC2 instance resources when the aws:RequestedRegion condition is not equal to us-.
- C . Configure a statement to deny the ec2:RunInstances action for all EC2 instance resources when the ec2:InstanceType condition is equal to t3.small. Configure another statement to deny the ec2:RunInstances action for all EC2 instance resources when the aws:RequestedRegion condition is equal to us-.
- D . Configure a statement to allow the ec2:RunInstances action for all EC2 instance resources when the ec2:InstanceType condition is equal to t3.small. Configure another statement to allow the ec2:RunInstances action for all EC2 instance resources when the aws:RequestedRegion condition is equal to us-.
A
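A deny-based SCP restricting instance types and Regions, the standard pattern for guardrails like this, can be sketched as JSON-compatible Python. The "us-*" wildcard is an assumption standing in for the truncated Region value shown in the options:

```python
# SCP sketch for the small company's OU: deny ec2:RunInstances on instance
# resources unless the type is t3.small, and deny it unless the requested
# Region begins with "us-" (wildcard is an assumption).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonApprovedInstanceType",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringNotEquals": {"ec2:InstanceType": "t3.small"}},
        },
        {
            "Sid": "DenyNonUSRegions",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringNotLike": {"aws:RequestedRegion": "us-*"}},
        },
    ],
}
```

Because SCPs filter rather than grant permissions, deny statements with negated conditions enforce the restriction regardless of what IAM policies in the member accounts allow.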
A company is developing a web application that runs on Amazon EC2 Linux instances. The application requires monitoring of custom performance metrics. The company must collect metrics for API response times and database query latency across multiple instances.
Which solution will generate the custom metrics with the LEAST operational overhead?
A company has an organization in AWS Organizations for its multi-account environment. A DevOps engineer is developing an AWS CodeArtifact-based strategy for application package management across the organization. Each application team at the company has its own account in the organization. Each application team also has limited access to a centralized shared services account. Each application team needs full access to download, publish, and grant access to its own packages. Some common library packages that the application teams use must also be shared with the entire organization.
Which combination of steps will meet these requirements with the LEAST administrative overhead? (Select THREE.)
- A . Create a domain in each application team’s account. Grant each application team’s account full read access and write access to the application team’s domain.
- B . Create a domain in the shared services account. Grant the organization read access and CreateRepository access.
- C . Create a repository in each application team’s account. Grant each application team’s account full read access and write access to its own repository.
- D . Create a repository in the shared services account. Grant the organization read access to the repository in the shared services account. Set the repository as the upstream repository in each application team’s repository.
- E . For teams that require shared packages, create resource-based policies that allow read access to the repository from other application teams’ accounts.
- F . Set the other application teams’ repositories as upstream repositories.
BCD
Explanation:
Step 1: Creating a Centralized Domain in the Shared Services Account
To manage application package dependencies across multiple accounts, the most efficient solution is to create a centralized domain in the shared services account. This allows all application teams to access and manage package repositories within the same domain, ensuring consistency and centralization.
Action: Create a domain in the shared services account.
Why: A single, centralized domain reduces the need for redundant management in each application team’s account.
Reference: AWS documentation on AWS CodeArtifact domains and repositories.
This corresponds to Option B: Create a domain in the shared services account. Grant the organization read access and CreateRepository access.
Step 2: Sharing Repositories Across Teams with Upstream Configurations
To share common library packages across the organization, each application team’s repository can point to the shared services repository as an upstream repository. This enables teams to access shared packages without managing them individually in each team’s account.
Action: Create a repository in the shared services account and set it as the upstream repository for each application team.
Why: Upstream repositories allow package sharing while maintaining individual team repositories for managing their own packages.
Reference: AWS documentation on Upstream repositories in CodeArtifact.
This corresponds to Option D: Create a repository in the shared services account. Grant the organization read access to the repository in the shared services account. Set the repository as the upstream repository in each application team’s repository.
Step 3: Creating Per-Team Repositories for Team-Owned Packages
Each application team needs full access to download, publish, and grant access to its own packages. Creating a repository in each team’s own account gives the team that control directly, without routing every publish through the shared services account.
Action: Create a repository in each application team’s account and grant the team’s account full read and write access to it.
Why: Team-owned repositories keep package publishing self-service, while the shared services repository (configured as an upstream) supplies the common libraries.
Reference: AWS documentation on working with repositories in CodeArtifact.
This corresponds to Option C: Create a repository in each application team’s account. Grant each application team’s account full read access and write access to its own repository.
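The shared-services side of this design can be sketched as JSON-compatible Python: a domain policy granting the organization read and CreateRepository access, and a per-team repository declaring the shared repository as its upstream. The organization ID, domain, and repository names are illustrative:

```python
ORG_ID = "o-exampleorgid"  # illustrative organization ID

# Domain policy in the shared services account: any principal in the
# organization may read and create repositories (actions are illustrative).
domain_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": [
            "codeartifact:GetAuthorizationToken",
            "codeartifact:ReadFromRepository",
            "codeartifact:CreateRepository",
        ],
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
    }],
}

# Per-team repository pointing at the shared repository as its upstream,
# as it would be passed to codeartifact:CreateRepository.
team_repository = {
    "domain": "shared-domain",
    "repository": "team-a-packages",
    "upstreams": [{"repositoryName": "shared-libraries"}],
}
```

With this layout, a team's package requests fall through to shared-libraries for common packages while the team retains full control of its own repository.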
