Practice Free DOP-C02 Exam Online Questions
A company’s application is currently deployed to a single AWS Region. Recently, the company opened a new office on a different continent. The users in the new office are experiencing high latency. The company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) and uses Amazon DynamoDB as the database layer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. A DevOps engineer is tasked with minimizing application response times and improving availability for users in both Regions.
Which combination of actions should be taken to address the latency issues? (Choose three.)
- A . Create a new DynamoDB table in the new Region with cross-Region replication enabled.
- B . Create new ALB and Auto Scaling group global resources and configure the new ALB to direct traffic to the new Auto Scaling group.
- C . Create new ALB and Auto Scaling group resources in the new Region and configure the new ALB to direct traffic to the new Auto Scaling group.
- D . Create Amazon Route 53 records, health checks, and latency-based routing policies to route to the ALB.
- E . Create Amazon Route 53 aliases, health checks, and failover routing policies to route to the ALB.
- F . Convert the DynamoDB table to a global table.
A company is running a custom-built application that processes records. All the components run on Amazon EC2 instances that run in an Auto Scaling group. Each record’s processing is a multistep sequential action that is compute-intensive. Each step is always completed in 5 minutes or less.
A limitation of the current system is that if any steps fail, the application has to reprocess the record from the beginning. The company wants to update the architecture so that the application must reprocess only the failed steps.
What is the MOST operationally efficient solution that meets these requirements?
- A . Create a web application to write records to Amazon S3. Use S3 Event Notifications to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Use an EC2 instance to poll Amazon SNS and start processing. Save intermediate results to Amazon S3 to pass on to the next step.
- B . Perform the processing steps by using logic in the application. Convert the application code to run in a container. Use AWS Fargate to manage the container instances. Configure the container to invoke itself to pass the state from one step to the next.
- C . Create a web application to pass records to an Amazon Kinesis data stream. Decouple the processing by using the Kinesis data stream and AWS Lambda functions.
- D . Create a web application to pass records to AWS Step Functions. Decouple the processing into Step Functions tasks and AWS Lambda functions.
D
Explanation:
Use AWS Step Functions to Orchestrate Processing:
AWS Step Functions lets you build distributed applications by combining AWS Lambda functions or other AWS services into workflows.
Decoupling the processing into Step Functions tasks enables you to retry individual steps without reprocessing the entire record.
Architectural Steps:
Create a web application to pass records to AWS Step Functions:
The web application can be a simple frontend that receives input and triggers the Step Functions workflow.
Define a Step Functions state machine:
Each step in the state machine represents a processing stage. If a step fails, Step Functions can retry the step based on defined conditions.
Use AWS Lambda functions:
Lambda functions can be used to handle each processing step. These functions can be stateless and handle specific tasks, reducing the complexity of error handling and reprocessing logic.
Operational Efficiency:
Using Step Functions and Lambda improves operational efficiency by providing built-in error handling, retries, and state management.
This architecture scales automatically and isolates failures to individual steps, ensuring that only failed steps are retried.
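As a sketch, the retry-per-step behavior looks like this in an Amazon States Language definition (expressed here as a Python dict; the state names and Lambda ARNs are illustrative, not from the question):

```python
import json

# Illustrative Amazon States Language (ASL) definition: each processing
# step is a Lambda task with its own Retry policy, so a failure retries
# only that step instead of reprocessing the whole record.
definition = {
    "StartAt": "StepOne",
    "States": {
        "StepOne": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:step-one",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"],
                       "IntervalSeconds": 5, "MaxAttempts": 3, "BackoffRate": 2.0}],
            "Next": "StepTwo",
        },
        "StepTwo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:step-two",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"],
                       "IntervalSeconds": 5, "MaxAttempts": 3, "BackoffRate": 2.0}],
            "End": True,
        },
    },
}

print(json.dumps(definition, indent=2))
```

The Retry block is what removes the "reprocess from the beginning" limitation: Step Functions replays only the failed state, with the state output of earlier steps preserved.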
Reference: AWS Step Functions
Building Workflows with Step Functions
A company manages shared libraries across development and production accounts with IAM roles and CodePipeline/AWS CDK. Developers must be the only ones who can access the latest versions. Shared packages must be independently tested before reaching production.
Which solution meets these requirements?
- A . Single CodeArtifact repository in central account with IAM policies allowing only developers access. Use EventBridge to start CodeBuild testing projects before copying packages to production repo.
- B . Separate CodeArtifact repositories in dev and prod accounts. Dev repo has repository policy allowing only developers access. EventBridge triggers pipeline to test packages before copying to prod repo.
- C . Single S3 bucket with versioning in central account, IAM policies restricting developers. Use EventBridge to trigger CodeBuild tests before copying to production.
- D . Separate S3 buckets with versioning in dev and prod accounts, dev bucket policy restricting developers. EventBridge triggers pipeline to test packages before copying to prod and revert if tests fail.
B
Explanation:
Having separate CodeArtifact repositories in dev and prod accounts provides clear isolation and control.
Repository policies can restrict dev repo access to developers.
EventBridge triggers pipelines to test and promote packages only if tests pass, ensuring safe deployment to production.
Using S3 (C and D) is not ideal for package management.
A single repo (A) complicates access and version control across accounts.
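For illustration, a dev-repository policy along the lines of option B might look like the following (the account ID, role name, and exact action list are assumptions, not from the question):

```python
import json

# Illustrative CodeArtifact repository policy for the dev repository:
# only principals assuming the developers' role may read packages.
# The account ID and role name are placeholders.
dev_repo_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DevelopersOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/DevelopersRole"},
            "Action": [
                "codeartifact:ReadFromRepository",
                "codeartifact:GetPackageVersionAsset",
                "codeartifact:DescribePackageVersion",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(dev_repo_policy, indent=2))
```

Because the policy is attached to the dev repository itself, access control travels with the repository rather than depending on per-account IAM policies.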
References:
CodeArtifact Repository Policies
Cross-Account Package Promotion
A company operates a fleet of Amazon EC2 instances that host critical applications and handle sensitive data. The EC2 instances must have up-to-date security patches to protect against vulnerabilities and ensure compliance with industry standards and regulations. The company needs an automated solution to monitor and enforce security patch compliance across the EC2 fleet.
Which solution will meet these requirements?
- A . Configure AWS Systems Manager Patch Manager and AWS Config with defined patch baselines and compliance rules that run Systems Manager Automation documents.
- B . Access each EC2 instance by using SSH keys. Check for and apply security updates by using package managers. Verify the installations.
- C . Configure Auto Scaling groups that have scaling policies based on Amazon CloudWatch metrics. Configure Auto Scaling launch templates that launch new instances by using the latest AMIs that contain new security patches.
- D . Use AWS CloudFormation to recreate EC2 instances with the latest AMI every time a new patch becomes available. Use AWS CloudTrail logs to monitor patch compliance and to send alerts for non-compliant instances.
A
Explanation:
Option A is the most correct because it provides both: (1) automated patching and (2) compliance monitoring/enforcement across a fleet, using AWS-native services built for exactly this purpose.
AWS Systems Manager Patch Manager is designed to automate patching of managed instances using patch baselines, maintenance windows (or on-demand), and it produces compliance status for patching. It’s the standard AWS service to apply OS/security patches at scale without SSH’ing into instances.
AWS Config can be used to evaluate and track compliance over time against defined rules, giving centralized visibility and continuous compliance assessment. With remediation, Config can invoke Systems Manager Automation documents to correct non-compliant resources or trigger patch actions (depending on the rule/remediation design). This meets the “monitor and enforce” requirement.
Why the other options don’t meet requirements as well:
B is manual, doesn’t scale well, and increases operational risk (key management, human error). It’s not “automated monitoring and enforcement.”
C (replacing instances with new AMIs) can be part of an immutable infrastructure strategy, but by itself it does not provide compliance monitoring across the current fleet, and scaling policies based on CloudWatch metrics are unrelated to patch compliance. Also, patch cadence would depend on AMI pipelines and instance rotation rather than direct compliance enforcement.
D is operationally heavy and mismatched: CloudTrail records API activity; it does not natively provide “patch compliance” status for instance OS packages. Recreating instances via CloudFormation for every patch is not an efficient or standard enforcement mechanism for patch compliance.
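As an illustration of the Patch Manager side, a patch baseline could be defined with parameters like these for the Systems Manager CreatePatchBaseline API (all values are examples, not prescribed by the question):

```python
import json

# Illustrative parameters for ssm:CreatePatchBaseline: auto-approve
# critical/important security patches 7 days after release.
patch_baseline_params = {
    "Name": "prod-linux-baseline",
    "OperatingSystem": "AMAZON_LINUX_2",
    "ApprovalRules": {
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security"]},
                        {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
                    ]
                },
                "ApproveAfterDays": 7,
                "ComplianceLevel": "CRITICAL",
            }
        ]
    },
}

print(json.dumps(patch_baseline_params, indent=2))
```

Instances that miss an approved patch past this window are reported as non-compliant, which is the signal AWS Config evaluates and remediates.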
A company has an organization in AWS Organizations with many OUs that contain many AWS accounts. The organization has a dedicated delegated administrator AWS account.
The company needs the accounts in one OU to have server-side encryption enforced for all Amazon Elastic Block Store (Amazon EBS) volumes and Amazon Simple Queue Service (Amazon SQS) queues that are created or updated on an AWS CloudFormation stack.
Which solution will enforce this policy before a CloudFormation stack operation in the accounts of this OU?
- A . Activate trusted access to CloudFormation StackSets. Create a CloudFormation Hook that enforces server-side encryption on EBS volumes and SQS queues. Deploy the Hook across the accounts in the OU by using StackSets.
- B . Set up AWS Config in all the accounts in the OU. Use AWS Systems Manager to deploy AWS Config rules that enforce server-side encryption for EBS volumes and SQS queues across the accounts in the OU.
- C . Write an SCP to deny the creation of EBS volumes and SQS queues unless the EBS volumes and SQS queues have server-side encryption. Attach the SCP to the OU.
- D . Create an AWS Lambda function in the delegated administrator account that checks whether server-side encryption is enforced for EBS volumes and SQS queues. Create an IAM role to provide the Lambda function access to the accounts in the OU.
A
Explanation:
The requirement specifies enforcing encryption before CloudFormation creates or updates resources. This is key because preventive enforcement must occur during the provisioning workflow, not after resources already exist. AWS provides CloudFormation Hooks specifically for this purpose. A Hook allows an organization to intercept a CloudFormation stack operation and validate resource configurations before provisioning occurs. This feature is recommended by AWS for pre-deployment governance such as enforcing encryption policies, tag compliance, or security restrictions.
By enabling trusted access between CloudFormation StackSets and AWS Organizations, the Hook can be deployed centrally from the delegated administrator account across all accounts in the specified OU. Any attempt to create or update EBS volumes or SQS queues through CloudFormation is validated first by the Hook. If encryption is not configured, the operation fails immediately.
Option C (SCP) blocks API calls globally, but SCPs cannot perform conditional logic based on resource properties passed by CloudFormation prior to creation.
Option B (AWS Config) detects violations after resources already exist, which does not satisfy “before stack operation.” Option D (Lambda remediation) also occurs after the resource is created.
Thus, CloudFormation Hooks distributed via StackSets provide the only solution that enforces compliance before the provisioning lifecycle.
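The Hook's validation logic can be sketched as follows. This is a simplified stand-in for a real Hook handler, which would be built with the CloudFormation CLI and registered with a HookTypeConfiguration; the property names match the CloudFormation resource schemas:

```python
# Simplified sketch of the check a Hook's preCreate/preUpdate handler
# would run against the resource properties in the stack operation.
def validate_encryption(resource_type: str, properties: dict) -> bool:
    """Return True if the resource passes the encryption policy."""
    if resource_type == "AWS::EC2::Volume":
        return properties.get("Encrypted") is True
    if resource_type == "AWS::SQS::Queue":
        # Either SSE-SQS or SSE-KMS counts as server-side encryption.
        return bool(properties.get("SqsManagedSseEnabled")
                    or properties.get("KmsMasterKeyId"))
    return True  # other resource types are out of scope for this Hook

print(validate_encryption("AWS::EC2::Volume", {"Encrypted": True}))  # True
print(validate_encryption("AWS::SQS::Queue", {}))                    # False
```

When the function returns False, the Hook fails the stack operation before the volume or queue is ever created, which is exactly the preventive behavior the question asks for.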
A company has deployed a critical application in two AWS Regions. The application uses an Application Load Balancer (ALB) in both Regions. The company has Amazon Route 53 alias DNS records for both ALBs.
The company uses Amazon Route 53 Application Recovery Controller to ensure that the application can fail over between the two Regions. The Route 53 ARC configuration includes a routing control for both Regions. The company uses Route 53 ARC to perform quarterly disaster recovery (DR) tests.
During the most recent DR test, a DevOps engineer accidentally turned off both routing controls. The company needs to ensure that at least one routing control is turned on at all times.
Which solution will meet these requirements?
- A . In Route 53 ARC, create a new assertion safety rule. Apply the assertion safety rule to the two routing controls. Configure the rule with the ATLEAST type with a threshold of 1.
- B . In Route 53 ARC, create a new gating safety rule. Apply the gating safety rule to the two routing controls. Configure the rule with the OR type with a threshold of 1.
- C . In Route 53 ARC, create a new resource set. Configure the resource set with an AWS::Route53::HealthCheck resource type. Specify the ARNs of the two routing controls as the target resource. Create a new readiness check for the resource set.
- D . In Route 53 ARC, create a new resource set. Configure the resource set with an AWS::Route53RecoveryReadiness::DNSTargetResource resource type. Add the domain names of the two Route 53 alias DNS records as the target resource. Create a new readiness check for the resource set.
A
Explanation:
The correct solution is to create a new assertion safety rule in Route 53 ARC and apply it to the two routing controls. An assertion safety rule is a type of safety rule that ensures that a minimum number of routing controls are always enabled. The ATLEAST type of assertion safety rule specifies the minimum number of routing controls that must be enabled for the rule to evaluate as healthy. By setting the threshold to 1, the rule ensures that at least one routing control is always turned on. This prevents the scenario where both routing controls are accidentally turned off and the application becomes unavailable in both Regions.
The other solutions are incorrect because they do not use safety rules to prevent both routing controls from being turned off. A gating safety rule is a type of safety rule that prevents routing control state changes that violate the rule logic. The OR type of gating safety rule specifies that one or more routing controls must be enabled for the rule to evaluate as healthy. However, this rule does not prevent a user from turning off both routing controls manually. A resource set is a collection of resources that are tested for readiness by Route 53 ARC. A readiness check is a test that verifies that all the resources in a resource set are operational. However, these concepts are not related to routing control states or safety rules. Therefore, creating a new resource set and a new readiness check will not ensure that at least one routing control is turned on at all times.
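For reference, the assertion safety rule could be created with a CreateSafetyRule request shaped like this (the control panel and routing control ARNs are placeholders):

```python
import json

# Illustrative request body for Route 53 ARC's CreateSafetyRule API:
# an assertion rule of type ATLEAST with a threshold of 1, asserted
# over both routing controls.
safety_rule_params = {
    "AssertionRule": {
        "Name": "at-least-one-region-on",
        "ControlPanelArn": "arn:aws:route53-recovery-control::111122223333:controlpanel/EXAMPLE",
        "AssertedControls": [
            "arn:aws:route53-recovery-control::111122223333:controlpanel/EXAMPLE/routingcontrol/region-a",
            "arn:aws:route53-recovery-control::111122223333:controlpanel/EXAMPLE/routingcontrol/region-b",
        ],
        "RuleConfig": {"Type": "ATLEAST", "Threshold": 1, "Inverted": False},
        "WaitPeriodMs": 5000,
    }
}

print(json.dumps(safety_rule_params, indent=2))
```

With this rule in place, any state change that would leave fewer than one routing control enabled, such as the accidental double turn-off during the DR test, is rejected.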
Reference:
Routing control in Amazon Route 53 Application Recovery Controller
Viewing and updating routing control states in Route 53 ARC
Creating a control panel in Route 53 ARC
Creating safety rules in Route 53 ARC
A company wants to deploy a workload on several hundred Amazon EC2 instances. The company will provision the EC2 instances in an Auto Scaling group by using a launch template.
The workload will pull files from an Amazon S3 bucket, process the data, and put the results into a different S3 bucket. The EC2 instances must have least-privilege permissions and must use temporary security credentials.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create an IAM role that has the appropriate permissions for S3 buckets. Add the IAM role to an instance profile.
- B . Update the launch template to include the IAM instance profile.
- C . Create an IAM user that has the appropriate permissions for Amazon S3. Generate a secret key and token.
- D . Create a trust anchor and profile. Attach the IAM role to the profile.
- E . Update the launch template. Modify the user data to use the new secret key and token.
AB
Explanation:
To meet the requirements of deploying a workload on several hundred EC2 instances with least-privilege permissions and temporary security credentials, the company should use an IAM role and an instance profile. An IAM role is a way to grant permissions to an entity that you trust, such as an EC2 instance. An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. By using an IAM role and an instance profile, the EC2 instances can automatically receive temporary security credentials from the AWS Security Token Service (STS) and use them to access the S3 buckets. This way, the company does not need to manage or rotate any long-term credentials, such as IAM users or access keys.
To use an IAM role and an instance profile, the company should create an IAM role that has the appropriate permissions for S3 buckets. The permissions should allow the EC2 instances to read from the source S3 bucket and write to the destination S3 bucket. The company should also create a trust policy for the IAM role that specifies that EC2 is allowed to assume the role. Then, the company should add the IAM role to an instance profile. An instance profile can have only one IAM role, so the company does not need to create multiple roles or profiles for this scenario.
Next, the company should update the launch template to include the IAM instance profile. A launch template is a way to save launch parameters for EC2 instances, such as the instance type, security group, user data, and IAM instance profile. By using a launch template, the company can ensure that all EC2 instances in the Auto Scaling group have consistent configuration and permissions. The company should specify the name or ARN of the IAM instance profile in the launch template. This way, when the Auto Scaling group launches new EC2 instances based on the launch template, they will automatically receive the IAM role and its permissions through the instance profile.
The other options are not correct because they do not meet the requirements or follow best practices. Creating an IAM user and generating a secret key and token is not a good option because it involves managing long-term credentials that need to be rotated regularly. Moreover, embedding credentials in user data is not secure because user data is visible to anyone who can describe the EC2 instance. Creating a trust anchor and profile is not a valid option because trust anchors are used for certificate-based authentication, not for IAM roles or instance profiles. Modifying user data to use a new secret key and token is also not a good option because it requires updating user data every time the credentials change, which is not scalable or efficient.
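As a sketch, the role's trust policy and least-privilege S3 permissions might look like this (bucket names are placeholders):

```python
import json

# Illustrative trust policy letting EC2 assume the role, plus a
# least-privilege S3 policy: read from the source bucket, write to the
# destination bucket.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::source-bucket/*"},
        {"Effect": "Allow", "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::destination-bucket/*"},
    ],
}

print(json.dumps(trust_policy, indent=2))
print(json.dumps(s3_policy, indent=2))
```

The role (wrapped in an instance profile and referenced from the launch template) gives every instance in the Auto Scaling group STS temporary credentials scoped to exactly these two buckets.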
References:
AWS Certified DevOps Engineer – Professional Certification | AWS Certification | AWS
DevOps Resources – Amazon Web Services (AWS)
Exam Readiness: AWS Certified DevOps Engineer – Professional
IAM Roles for Amazon EC2 – AWS Identity and Access Management
Working with Instance Profiles – AWS Identity and Access Management
Launching an Instance Using a Launch Template – Amazon Elastic Compute Cloud
Temporary Security Credentials – AWS Identity and Access Management
A company has multiple accounts in an organization in AWS Organizations. The company’s SecOps team needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if any account in the organization turns off the Block Public Access feature on an Amazon S3 bucket. A DevOps engineer must implement this change without affecting the operation of any AWS accounts. The implementation must ensure that individual member accounts in the organization cannot turn off the notification.
Which solution will meet these requirements?
- A . Designate an account to be the delegated Amazon GuardDuty administrator account. Turn on GuardDuty for all accounts across the organization. In the GuardDuty administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for GuardDuty findings and a target of the SNS topic.
- B . Create an AWS CloudFormation template that creates an SNS topic and subscribes the SecOps team’s email address to the SNS topic. In the template, include an Amazon EventBridge rule that uses an event pattern of CloudTrail activity for s3:PutBucketPublicAccessBlock and a target of the SNS topic. Deploy the stack to every account in the organization by using CloudFormation StackSets.
- C . Turn on AWS Config across the organization. In the delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. Deploy a conformance pack that uses the s3-bucket-level-public-access-prohibited AWS Config managed rule in each account and uses an AWS Systems Manager document to publish an event to the SNS topic to notify the SecOps team.
- D . Turn on Amazon Inspector across the organization. In the Amazon Inspector delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for public network exposure of the S3 bucket and publishes an event to the SNS topic to notify the SecOps team.
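For reference, the CloudTrail-based EventBridge rule described in option B could use an event pattern shaped like this (illustrative; it follows the general CloudTrail event format for the API call named in the question):

```python
import json

# Illustrative EventBridge event pattern matching CloudTrail records
# for calls that change an S3 bucket's Block Public Access settings.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutBucketPublicAccessBlock"],
    },
}

print(json.dumps(event_pattern, indent=2))
```

Deploying the rule and SNS topic through StackSets from the management or delegated administrator account is what keeps member accounts from disabling the notification.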
A company wants to use a grid system for a proprietary enterprise in-memory data store on top of AWS. This system can run on multiple server nodes in any Linux-based distribution. The system must be able to reconfigure the entire cluster every time a node is added or removed. When nodes are added or removed, an /etc/cluster/nodes.config file must be updated to list the IP addresses of the current node members of that cluster.
The company wants to automate the task of adding new nodes to a cluster.
What can a DevOps engineer do to meet these requirements?
- A . Use AWS OpsWorks Stacks to layer the server nodes of that cluster. Create a Chef recipe that populates the content of the /etc/cluster/nodes.config file and restarts the service by using the current members of the layer. Assign that recipe to the Configure lifecycle event.
- B . Put the nodes.config file in version control. Create an AWS CodeDeploy deployment configuration and deployment group based on an Amazon EC2 tag value for the cluster nodes. When adding a new node to the cluster, update the file with all tagged instances and make a commit in version control. Deploy the new file and restart the services.
- C . Create an Amazon S3 bucket and upload a version of the /etc/cluster/nodes.config file. Create a crontab script that will poll for that S3 file and download it frequently. Use a process manager, such as Monit or systemd, to restart the cluster services when it detects that the file was modified. When adding a node to the cluster, edit the file's list of members and upload the new file to the S3 bucket.
- D . Create a user data script that lists all members of the current security group of the cluster and automatically updates the /etc/cluster/nodes.config file whenever a new instance is added to the cluster.
A
Explanation:
You can run custom recipes manually, but the best approach is usually to have AWS OpsWorks Stacks run them automatically. Every layer has a set of built-in recipes assigned to each of five lifecycle events: Setup, Configure, Deploy, Undeploy, and Shutdown. Each time an event occurs for an instance, AWS OpsWorks Stacks runs the associated recipes for each of the instance’s layers, which handle the corresponding tasks. For example, when an instance finishes booting, AWS OpsWorks Stacks triggers a Setup event. This event runs the associated layer’s Setup recipes, which typically handle tasks such as installing and configuring packages.
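The effect of the Configure-event recipe can be sketched in language-agnostic terms (shown here in Python; a real OpsWorks recipe would be written in Chef/Ruby and read the current layer members from the stack's instance attributes):

```python
# Sketch of what the Configure-event recipe does: regenerate
# /etc/cluster/nodes.config from the current layer members, then
# restart the cluster service (restart omitted here).
def render_nodes_config(member_ips: list[str]) -> str:
    """Return the nodes.config content listing current cluster members."""
    return "\n".join(sorted(member_ips)) + "\n"

content = render_nodes_config(["10.0.1.12", "10.0.0.7"])
print(content, end="")  # one IP per line, sorted
```

Because OpsWorks fires the Configure event on every instance in the stack whenever any instance comes online or goes offline, each node regenerates the file with the up-to-date membership automatically.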
A company is using AWS Organizations to create separate AWS accounts for each of its departments.
The company needs to automate the following tasks:
• Update the Linux AMIs with new patches periodically and generate a golden image
• Install a new version of the Chef agent in the golden image when one is available
• Provide the newly generated AMIs to the department’s accounts
Which solution meets these requirements with the LEAST management overhead?
- A . Write a script to launch an Amazon EC2 instance from the previous golden image. Apply the patch updates. Install the new version of the Chef agent, generate a new golden image, and then modify the AMI permissions to share only the new image with the department’s accounts.
- B . Use Amazon EC2 Image Builder to create an image pipeline that consists of the base Linux AMI and components to install the Chef agent. Use AWS Resource Access Manager to share EC2 Image Builder images with the department’s accounts.
- C . Use an AWS Systems Manager Automation runbook to update the Linux AMI by using the previous image. Provide the URL for the script that will update the Chef agent. Use AWS Organizations to replace the previous golden image in the department’s accounts.
- D . Use Amazon EC2 Image Builder to create an image pipeline that consists of the base Linux AMI and components to install the Chef agent. Create a parameter in AWS Systems Manager Parameter Store to store the new AMI ID that can be referenced by the department’s accounts.
B
Explanation:
Amazon EC2 Image Builder is a service that automates the creation, management, and deployment of customized, secure, and up-to-date server images that are pre-installed with software and configuration settings tailored to meet specific IT standards. EC2 Image Builder simplifies the creation and maintenance of golden images, and makes it easy to generate images for multiple platforms, such as Amazon EC2 and on-premises. EC2 Image Builder also integrates with AWS Resource Access Manager, which allows you to share your images across accounts within your organization or with external AWS accounts. This solution meets the requirements of automating the tasks of updating the Linux AMIs, installing the Chef agent, and providing the images to the department’s accounts with the least management overhead.
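As an illustration, the image pipeline could be created with parameters like these for the Image Builder CreateImagePipeline API (the ARNs and schedule are assumptions; the recipe would reference the base Linux AMI plus a component that installs the Chef agent):

```python
import json

# Illustrative parameters for imagebuilder:CreateImagePipeline.
pipeline_params = {
    "name": "golden-linux-pipeline",
    "imageRecipeArn": "arn:aws:imagebuilder:us-east-1:111122223333:image-recipe/golden-linux/1.0.0",
    "infrastructureConfigurationArn": "arn:aws:imagebuilder:us-east-1:111122223333:infrastructure-configuration/golden-infra",
    "schedule": {
        "scheduleExpression": "cron(0 0 * * ? *)",  # evaluate nightly
        # Build only when the schedule matches AND upstream dependencies
        # (base image or components) have updates available.
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE",
    },
}

print(json.dumps(pipeline_params, indent=2))
```

The start condition shown is what gives the "periodically, when new patches are available" behavior without a custom script, and AWS RAM handles the cross-account sharing.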
Reference: Amazon EC2 Image Builder
Sharing EC2 Image Builder images
