Practice Free DOP-C02 Exam Online Questions
A company’s production environment uses an AWS CodeDeploy blue/green deployment to deploy an application. The deployment includes Amazon EC2 Auto Scaling groups that launch instances that run Amazon Linux 2.
A working appspec.yml file exists in the code repository and contains the following text.

A DevOps engineer needs to ensure that a script downloads and installs a license file onto the instances before the replacement instances start to handle request traffic. The DevOps engineer adds a hooks section to the appspec.yml file.
Which hook should the DevOps engineer use to run the script that downloads and installs the license file?
- A . AfterBlockTraffic
- B . BeforeBlockTraffic
- C . BeforeInstall
- D . DownloadBundle
C
Explanation:
The BeforeInstall lifecycle hook runs on the replacement instances before the new application revision is installed, and well before those instances are registered to receive traffic. Running the script there guarantees that the license file is downloaded and installed before the replacement instances begin handling requests. The other options do not fit: the BlockTraffic-related hooks run on the original instances during deregistration, and DownloadBundle is a reserved event that cannot run custom scripts.
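A minimal appspec.yml sketch showing where the hook would go; the script path, timeout, and destination directory are illustrative, not from the original file:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/app
hooks:
  BeforeInstall:
    # Runs on the replacement instances before the revision is installed,
    # well before they are registered to receive traffic.
    - location: scripts/install_license.sh
      timeout: 300
      runas: root
```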
A DevOps engineer is creating an AWS CloudFormation template to deploy a web service. The web service will run on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). The DevOps engineer must ensure that the service can accept requests from clients that have IPv6 addresses.
What should the DevOps engineer do with the CloudFormation template so that IPv6 clients can access the web service?
- A . Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance.
- B . Assign each EC2 instance an IPv6 Elastic IP address. Create a target group, and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB.
- C . Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address.
- D . Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group, and add the EC2 instances as targets. Associate the target group with the ALB.
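Option D matches the documented dualstack pattern for Application Load Balancers. A hedged CloudFormation sketch; the logical IDs, subnet references, and certificate parameter are placeholders:

```yaml
WebAlb:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Scheme: internet-facing
    IpAddressType: dualstack        # ALB answers on both IPv4 and IPv6
    Subnets:
      - !Ref PublicSubnetA          # subnets must have an IPv6 CIDR block
      - !Ref PublicSubnetB
HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref WebAlb
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref WebCertificateArn   # placeholder parameter
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref WebTargetGroup
```

The EC2 targets can remain IPv4-only; the ALB terminates the IPv6 connections from clients.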
A company uses AWS Organizations to manage multiple AWS accounts. The company needs a solution to improve the company’s management of AWS resources in a production account.
The company wants to use AWS CloudFormation to manage all manually created infrastructure. The company must have the ability to strictly control who can make manual changes to AWS infrastructure. The solution must ensure that users can deploy new infrastructure only by making changes to a CloudFormation template that is stored in an AWS CodeConnections compatible Git provider.
Which combination of steps will meet these requirements with the LEAST implementation effort? (Select THREE).
- A . Configure the CloudFormation infrastructure as code (IaC) generator to scan for existing resources in the AWS account. Create a CloudFormation template that includes the scanned resources. Import the CloudFormation template into a new CloudFormation stack.
- B . Configure AWS Config to scan for existing resources in the AWS account. Create a CloudFormation template that includes the scanned resources. Import the CloudFormation template into a new CloudFormation stack.
- C . Use CodeConnections to establish a connection between the Git provider and AWS CodePipeline. Push the CloudFormation template to the Git repository. Run a pipeline in CodePipeline that deploys the CloudFormation stack for every merge into the Git repository.
- D . Use CodeConnections to establish a connection between the Git provider and CloudFormation. Push the CloudFormation template to the Git repository. Sync the Git repository with the CloudFormation stack.
- E . Create an IAM role, and set CloudFormation as the principal. Grant the IAM role access to manage the stack resources. Create an SCP that denies all actions to all the principals except by the IAM role. Link the SCP with the production OU.
- F . Create an IAM role, and set CloudFormation as the principal. Grant the IAM role access to manage the stack resources. Create an SCP that allows all actions to only the IAM role. Link the SCP with the production OU.
A, C, E
Explanation:
Step A: Using a tool like CloudFormation resource import or IaC generator to scan and create a template from existing resources is efficient to bring current infrastructure under management.
Step C: Using CodeConnections (AWS’s solution to connect Git repositories) with AWS CodePipeline ensures any changes to CloudFormation templates in the Git repo automatically deploy infrastructure changes, enforcing infrastructure as code workflows.
Step E: Creating an IAM role with CloudFormation as the principal ensures CloudFormation has permissions to manage resources. Using an SCP to deny all actions except by this role enforces strict control, preventing manual changes outside the pipeline.
Option B uses AWS Config which is more for compliance and auditing, not direct resource import.
Option D is invalid because CloudFormation does not natively sync with Git; CodePipeline does.
Option F is less secure than denying all except the IAM role.
Reference:
- AWS CloudFormation resource import: "Import existing resources into CloudFormation stacks for management."
- AWS CodePipeline and CodeConnections integration: "Use CodeConnections to connect Git providers with AWS CodePipeline for continuous deployment."
- AWS Organizations SCP and IAM role best practices: "Use SCPs to restrict actions and IAM roles with limited principals to enforce secure management."
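A sketch of the deny-all-except-role SCP that option E describes, shown in YAML for readability (SCPs are submitted as JSON); the role name is a placeholder:

```yaml
Version: '2012-10-17'
Statement:
  - Sid: DenyAllExceptCloudFormationRole
    Effect: Deny
    Action: '*'
    Resource: '*'
    Condition:
      ArnNotLike:
        # Placeholder role name; match the role that CloudFormation assumes
        aws:PrincipalArn: arn:aws:iam::*:role/CloudFormationStackRole
```

Because SCPs can only deny, the explicit deny with an exclusion condition is the usual pattern; an "allow only this role" SCP, as in option F, does not grant anything by itself.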
A company uses an organization in AWS Organizations that has all features enabled. The company uses AWS Backup in a primary account and uses an AWS Key Management Service (AWS KMS) key to encrypt the backups.
The company needs to automate a cross-account backup of the resources that AWS Backup backs up in the primary account. The company configures cross-account backup in the Organizations management account. The company creates a new AWS account in the organization and configures an AWS Backup backup vault in the new account. The company creates a KMS key in the new account to encrypt the backups. Finally, the company configures a new backup plan in the primary account. The destination for the new backup plan is the backup vault in the new account.
When the AWS Backup job in the primary account is invoked, the job creates backups in the primary account. However, the backups are not copied to the new account’s backup vault.
Which combination of steps must the company take so that backups can be copied to the new account’s backup vault? (Select TWO.)
- A . Edit the backup vault access policy in the new account to allow access to the primary account.
- B . Edit the backup vault access policy in the primary account to allow access to the new account.
- C . Edit the backup vault access policy in the primary account to allow access to the KMS key in the new account.
- D . Edit the key policy of the KMS key in the primary account to share the key with the new account.
- E . Edit the key policy of the KMS key in the new account to share the key with the primary account.
AE
Explanation:
To enable cross-account backup, the company must grant permissions on both the backup vault and the KMS key in the destination account. The backup vault access policy in the destination account must allow the primary account to copy backups into the vault, and the key policy of the KMS key in the destination account must allow the primary account to use the key to encrypt and decrypt the backups. These steps are described in the AWS documentation (references 1 and 2 below). Therefore, the correct answer is A and E.
Reference:
1: Creating backup copies across AWS accounts – AWS Backup
2: Using AWS Backup with AWS Organizations – AWS Backup
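For illustration, the destination vault's access policy (step A) could contain a statement like the following; the account ID is a placeholder:

```yaml
Version: '2012-10-17'
Statement:
  - Sid: AllowCopyFromPrimaryAccount
    Effect: Allow
    Principal:
      AWS: arn:aws:iam::111111111111:root   # primary account (placeholder ID)
    Action: backup:CopyIntoBackupVault
    Resource: '*'
```

A corresponding statement on the new account's KMS key policy (step E) must allow the primary account to use the key (kms:Decrypt, kms:GenerateDataKey, and related actions).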
A company uses AWS Secrets Manager to store a set of sensitive API keys that an AWS Lambda function uses. When the Lambda function is invoked, the Lambda function retrieves the API keys and makes an API call to an external service. The Secrets Manager secret is encrypted with the default AWS Key Management Service (AWS KMS) key.
A DevOps engineer needs to update the infrastructure to ensure that only the Lambda function’s execution role can access the values in Secrets Manager. The solution must apply the principle of least privilege.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Update the default KMS key for Secrets Manager to allow only the Lambda function’s execution role to decrypt.
- B . Create a KMS customer managed key that trusts Secrets Manager and allows the Lambda function’s execution role to decrypt. Update Secrets Manager to use the new customer managed key.
- C . Create a KMS customer managed key that trusts Secrets Manager and allows the account’s :root principal to decrypt. Update Secrets Manager to use the new customer managed key.
- D . Ensure that the Lambda function’s execution role has the KMS permissions scoped on the resource level. Configure the permissions so that the KMS key can encrypt the Secrets Manager secret.
- E . Remove all KMS permissions from the Lambda function’s execution role.
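The customer managed key that option B describes would carry a key policy statement scoping decryption to the function's execution role. A hedged sketch; the account ID, Region, and role name are placeholders:

```yaml
Statement:
  - Sid: AllowLambdaRoleDecryptViaSecretsManager
    Effect: Allow
    Principal:
      AWS: arn:aws:iam::123456789012:role/my-function-role  # placeholder role
    Action: kms:Decrypt
    Resource: '*'
    Condition:
      StringEquals:
        # Restricts use of the key to requests made through Secrets Manager
        kms:ViaService: secretsmanager.us-east-1.amazonaws.com
```

Note that the default AWS managed key for Secrets Manager (option A) cannot have its key policy edited, which is why a customer managed key is required for this kind of scoping.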
A company is developing an application that uses AWS Lambda functions. A DevOps engineer must create an AWS CloudFormation template that defines a deployment configuration for gradual traffic shifting to new Lambda function versions.
Which CloudFormation resource configuration will meet this requirement?
- A . Use an AWS::CodeDeploy::DeploymentConfig resource. Define a TimeBasedCanary configuration. Specify values for percentage and minutes for traffic shifting.
- B . Use an AWS::CodeDeploy::DeploymentGroup resource. Define the DeploymentStyle property as BLUE_GREEN. Configure the TrafficRoutingConfig data type for linear traffic shifting.
- C . Use an AWS::Lambda::Version resource with the VersionWeight property to control the percentage of traffic that is routed to the new Lambda function versions.
- D . Use an AWS::Lambda::Alias resource with the RoutingConfig property to specify weights for gradual traffic shifting between the Lambda function versions.
D
Explanation:
For gradual traffic shifting in Lambda deployments, AWS Lambda aliases support the RoutingConfig property, which specifies the percentage of traffic routed to different versions of the Lambda function. This enables weighted traffic shifting between versions as part of deployment strategies.
The AWS::Lambda::Alias resource’s RoutingConfig can specify multiple versions with weights, enabling canary or linear deployment strategies without needing CodeDeploy resources explicitly.
AWS CodeDeploy resources such as DeploymentConfig and DeploymentGroup can orchestrate gradual Lambda traffic shifting, but they do so by updating an alias's routing configuration; in a plain CloudFormation template, the weighted routing itself is defined on the AWS::Lambda::Alias resource.
Lambda versions themselves do not have a VersionWeight property; instead, weighted routing is managed via aliases.
Reference:
- AWS::Lambda::Alias, RoutingConfig property: "Specifies the versions of the function and the percentage of traffic to send to each version."
- AWS Lambda deployment preferences: "Weighted aliases enable gradual traffic shifting between Lambda function versions."
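A minimal CloudFormation sketch of the alias-based weighted routing described above; the logical IDs and the 10% weight are illustrative:

```yaml
LiveAlias:
  Type: AWS::Lambda::Alias
  Properties:
    FunctionName: !Ref MyFunction
    FunctionVersion: !GetAtt StableVersion.Version   # receives the remaining 90%
    Name: live
    RoutingConfig:
      AdditionalVersionWeights:
        - FunctionVersion: !GetAtt CanaryVersion.Version
          FunctionWeight: 0.1                        # 10% shifted to the new version
```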
A DevOps engineer manages an AWS CodePipeline pipeline that builds and deploys a web application on AWS. The pipeline has a source stage, a build stage, and a deploy stage. When deployed properly, the web application responds with a 200 OK HTTP response code when the URL of the home page is requested. The home page recently returned a 503 HTTP response code after CodePipeline deployed the application.
The DevOps engineer needs to add an automated test into the pipeline. The automated test must ensure that the application returns a 200 OK HTTP response code after the application is deployed. The pipeline must fail if the response code is not present during the test. The DevOps engineer has added a CheckURL stage after the deploy stage in the pipeline.
What should the DevOps engineer do next to implement the automated test?
- A . Configure the CheckURL stage to use an Amazon CloudWatch action. Configure the action to use a canary synthetic monitoring check on the application URL and to report a success or failure to CodePipeline.
- B . Create an AWS Lambda function to check the response code status of the URL and to report a success or failure to CodePipeline. Configure an action in the CheckURL stage to invoke the Lambda function.
- C . Configure the CheckURL stage to use an AWS CodeDeploy action. Configure the action with an input artifact that is the URL of the application and to report a success or failure to CodePipeline.
- D . Deploy an Amazon API Gateway HTTP API that checks the response code status of the URL and that reports success or failure to CodePipeline. Configure the CheckURL stage to use the AWS Device Farm test action and to provide the API Gateway HTTP API as an input artifact.
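The Lambda-based check in option B is the standard pattern: CodePipeline has a built-in Lambda invoke action type. A sketch of what the CheckURL stage definition might look like; the function name and URL are placeholders:

```yaml
- Name: CheckURL
  Actions:
    - Name: VerifyHomePage
      ActionTypeId:
        Category: Invoke
        Owner: AWS
        Provider: Lambda
        Version: '1'
      Configuration:
        FunctionName: url-check-function          # placeholder function name
        UserParameters: https://www.example.com/  # passed to the function event
      RunOrder: 1
```

The invoked function must report back with PutJobSuccessResult or PutJobFailureResult so the pipeline either proceeds or fails the stage.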
A DevOps engineer notices that all Amazon EC2 instances running behind an Application Load Balancer in an Auto Scaling group are failing to respond to user requests. The EC2 instances are also failing target group HTTP health checks.
Upon inspection, the engineer notices that the application process was not running on any of the EC2 instances, and there are a significant number of out-of-memory messages in the system logs. The engineer needs to improve the resilience of the application to cope with a potential application memory leak, and monitoring and notifications should be enabled to alert when there is an issue.
Which combination of actions will meet these requirements? (Select TWO.)
- A . Change the Auto Scaling configuration to replace the instances when they fail the load balancer’s health checks.
- B . Change the target group health check HealthCheckIntervalSeconds parameter to reduce the interval between health checks.
- C . Change the target group health checks from HTTP to TCP to check if the port where the application is listening is reachable.
- D . Enable the available memory consumption metric within the Amazon CloudWatch dashboard for the entire Auto Scaling group. Create an alarm for when memory utilization is high. Associate an Amazon SNS topic with the alarm to receive notifications when the alarm goes off.
- E . Use the Amazon CloudWatch agent to collect the memory utilization of the EC2 instances in the Auto Scaling group. Create an alarm for when memory utilization is high, and associate an Amazon SNS topic to receive a notification.
AE
Explanation:
Memory utilization is not one of the default EC2 metrics, so the CloudWatch agent must be installed to collect and publish it (option E); replacing instances that fail the load balancer health checks (option A) restores service automatically when the application process dies. See the metrics collected by the CloudWatch agent:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/metrics-collected-by-CloudWatch-agent.html
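With the agent publishing memory data (mem_used_percent in the CWAgent namespace on Linux), the alarm from option E can be defined in CloudFormation. An illustrative sketch; the threshold, period, and logical IDs are assumptions:

```yaml
MemoryAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: CWAgent
    MetricName: mem_used_percent
    Dimensions:
      - Name: AutoScalingGroupName
        Value: !Ref WebAsg          # placeholder Auto Scaling group
    Statistic: Average
    Period: 300
    EvaluationPeriods: 2
    Threshold: 80                   # alarm above 80% memory utilization
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref MemoryAlarmTopic       # SNS topic for notifications
```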
A business has an application that consists of five independent AWS Lambda functions.
The DevOps engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests, packages, and deploys each Lambda function in sequence. The pipeline uses an Amazon EventBridge rule to ensure that the pipeline starts as quickly as possible after a change is made to the application source code.
After working with the pipeline for a few months the DevOps engineer has noticed the pipeline takes too long to complete.
What should the DevOps engineer implement to BEST improve the speed of the pipeline?
- A . Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput.
- B . Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the builds in parallel.
- C . Modify the CodePipeline configuration to run actions for each Lambda function in parallel by specifying the same runOrder.
- D . Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.
C
Explanation:
https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html
AWS documentation: "To specify parallel actions, use the same integer for each action you want to run in parallel. For example, if you want three actions to run in sequence in a stage, you would give the first action the runOrder value of 1, the second action the runOrder value of 2, and the third the runOrder value of 3. However, if you want the second and third actions to run in parallel, you would give the first action the runOrder value of 1 and both the second and third actions the runOrder value of 2."
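Applied to this pipeline, the Lambda deploy actions would share a runOrder value so they start together. A trimmed sketch of two of the five actions; the action and project names are illustrative:

```yaml
- Name: BuildAndDeploy
  Actions:
    - Name: DeployFunctionA
      RunOrder: 1            # actions with the same runOrder run in parallel
      ActionTypeId: {Category: Build, Owner: AWS, Provider: CodeBuild, Version: '1'}
      Configuration: {ProjectName: deploy-function-a}   # placeholder project
    - Name: DeployFunctionB
      RunOrder: 1            # same value as DeployFunctionA, so it runs concurrently
      ActionTypeId: {Category: Build, Owner: AWS, Provider: CodeBuild, Version: '1'}
      Configuration: {ProjectName: deploy-function-b}   # placeholder project
```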
A company is running its ecommerce website on AWS. The website is currently hosted on a single Amazon EC2 instance in one Availability Zone. A MySQL database runs on the same EC2 instance. The company needs to eliminate single points of failure in the architecture to improve the website’s availability and resilience.
Which solution will meet these requirements with the LEAST configuration changes to the website?
- A . Deploy the application by using AWS Fargate containers. Migrate the database to Amazon DynamoDB. Use Amazon API Gateway to route requests.
- B . Deploy the application on EC2 instances across multiple Availability Zones. Put the EC2 instances into an Auto Scaling group behind an Application Load Balancer. Migrate the database to Amazon Aurora Multi-AZ. Use Amazon CloudFront for content delivery.
- C . Use AWS Elastic Beanstalk to deploy the application across multiple AWS Regions. Migrate the database to Amazon Redshift. Use Amazon ElastiCache for session management.
- D . Migrate the application to AWS Lambda functions. Use Amazon S3 for static content hosting. Migrate the database to Amazon DocumentDB (with MongoDB compatibility).
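The Multi-AZ Auto Scaling pattern described in option B removes the single instance as a point of failure while leaving the web tier's code unchanged. A trimmed CloudFormation sketch; the subnet, target group, and launch template references are placeholders:

```yaml
WebAsg:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: '2'
    MaxSize: '4'
    VPCZoneIdentifier:            # subnets in two different Availability Zones
      - !Ref PrivateSubnetA
      - !Ref PrivateSubnetB
    TargetGroupARNs:
      - !Ref WebTargetGroup       # target group attached to the ALB
    LaunchTemplate:
      LaunchTemplateId: !Ref WebLaunchTemplate
      Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
```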
