Practice Free DOP-C02 Exam Online Questions
An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders. The order processing system consists of an AWS Lambda function that uses reserved concurrency. The Lambda function processes order messages from an Amazon Simple Queue Service (Amazon SQS) queue and inserts processed orders into an Amazon DynamoDB table. The DynamoDB table has auto scaling enabled for read and write capacity.
Which actions should a DevOps engineer take to resolve this delay? (Choose two.)
- A . Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Increase the Lambda function concurrency limit.
- B . Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Configure a redrive policy on the SQS queue.
- C . Check the NumberOfMessagesSent metric for the SQS queue. Increase the SQS queue visibility timeout.
- D . Check the WriteThrottleEvents metric for the DynamoDB table. Increase the maximum write capacity units (WCUs) for the table’s scaling policy.
- E . Check the Throttles metric for the Lambda function. Increase the Lambda function timeout.
A, D
Explanation:
A rising ApproximateAgeOfOldestMessage value shows that the reserved-concurrency Lambda function cannot keep up with the queue, so increasing the function's concurrency limit (Option A) raises processing throughput. WriteThrottleEvents on the DynamoDB table shows that writes are being throttled because auto scaling has reached its configured maximum, so raising the maximum WCUs in the table's scaling policy (Option D) removes the write bottleneck. A redrive policy, a longer visibility timeout, or a longer function timeout would not increase throughput.
A company manages an application that stores logs in Amazon CloudWatch Logs. The company wants to archive the logs to an Amazon S3 bucket. Logs are rarely accessed after 90 days and must be retained for 10 years.
Which combination of steps should a DevOps engineer take to meet these requirements? (Select TWO.)
- A . Configure a CloudWatch Logs subscription filter to use AWS Glue to transfer all logs to an S3 bucket.
- B . Configure a CloudWatch Logs subscription filter to use Amazon Data Firehose to stream all logs to an S3 bucket.
- C . Configure a CloudWatch Logs subscription filter to stream all logs to an S3 bucket.
- D . Configure the S3 bucket lifecycle policy to transition logs to S3 Glacier Instant Retrieval after 90 days and to expire logs after 3,650 days.
- E . Configure the S3 bucket lifecycle policy to transition logs to Reduced Redundancy after 90 days and to expire logs after 3,650 days.
B, D
Explanation:
To archive CloudWatch Logs to S3 with long-term retention, you need: a streaming mechanism to move data from CloudWatch Logs to S3, and an S3 Lifecycle policy to handle tiering and expiration.
Option B uses a CloudWatch Logs subscription filter that targets Amazon Data Firehose. Firehose provides managed, scalable streaming from CloudWatch Logs to S3 with optional buffering and transformation. This is the AWS-recommended pattern for exporting continuous log streams with minimal operational overhead.
Option C is not valid because CloudWatch Logs subscription filters cannot directly target S3; they must target Kinesis, Firehose, or Lambda.
Once logs are in S3, the company wants to keep them rarely accessed after 90 days and retained for 10 years.
Option D configures an S3 lifecycle policy to transition logs to S3 Glacier Instant Retrieval after 90 days, which is a low-cost archival tier with relatively fast access, and to expire (delete) logs after 3,650 days (10 years). This precisely matches the retention requirement.
Option E uses Reduced Redundancy, which is a legacy storage class and not optimized for long-term archival. Therefore, the correct combination is B and D.
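The lifecycle policy from Option D can be sketched as the JSON structure that S3's `put_bucket_lifecycle_configuration` API accepts. The rule ID and `logs/` prefix are placeholders; `GLACIER_IR` is the API value for the S3 Glacier Instant Retrieval storage class.

```python
import json

# Lifecycle rule: transition to Glacier Instant Retrieval after 90 days,
# expire (delete) after 3,650 days (10 years). Prefix and ID are placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                # Rarely accessed after 90 days -> Glacier Instant Retrieval
                {"Days": 90, "StorageClass": "GLACIER_IR"}
            ],
            # Retained for 10 years, then deleted
            "Expiration": {"Days": 3650},
        }
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```

This structure would be passed as the `LifecycleConfiguration` argument of `put_bucket_lifecycle_configuration` in boto3.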
A company uses AWS Organizations with CloudTrail trusted access. All events across accounts and Regions must be logged and retained in an audit account, and failed login attempts should trigger real-time notifications.
Which solution meets these requirements?
- A . Publish CloudTrail logs to S3 in the audit account. Create an EventBridge rule for failed login events and notify via SNS.
- B . Store logs in the management account and query using Athena + Lambda every 5 minutes.
- C . Store logs in audit S3 + CloudWatch log group in management account + metric filter for failed logins → SNS.
- D . Stream to Kinesis → Flink → SNS.
A
Explanation:
Using an organization trail with logs centralized in the audit account’s S3 bucket ensures compliance and isolation. An EventBridge rule in the audit account triggers on failed login events (ConsoleLogin failed) and sends SNS notifications in near real time.
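The EventBridge rule from Option A can be sketched as an event pattern matching failed console sign-ins delivered through CloudTrail. The field names follow the documented `ConsoleLogin` event shape; adjust them if your account's event payloads differ.

```python
import json

# Event pattern for an EventBridge rule that matches failed AWS console
# sign-in events recorded by CloudTrail. The rule's target would be an
# SNS topic for near-real-time notification.
failed_login_pattern = {
    "detail-type": ["AWS Console Sign In via CloudTrail"],
    "detail": {
        "eventName": ["ConsoleLogin"],
        # CloudTrail records the outcome in responseElements.ConsoleLogin
        "responseElements": {"ConsoleLogin": ["Failure"]},
    },
}

print(json.dumps(failed_login_pattern, indent=2))
```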
A company runs a microservices application on Amazon Elastic Kubernetes Service (Amazon EKS). Users recently reported significant delays while accessing an account summary feature, particularly during peak business hours.
A DevOps engineer used Amazon CloudWatch metrics and logs to troubleshoot the issue. The logs indicated normal CPU and memory utilization on the EKS nodes. The DevOps engineer was not able to identify where the delays occurred within the microservices architecture.
The DevOps engineer needs to increase the observability of the application to pinpoint where the delays are occurring.
Which solution will meet these requirements?
- A . Deploy the AWS X-Ray daemon as a DaemonSet in the EKS cluster. Use the X-Ray SDK to instrument the application code. Redeploy the application.
- B . Enable CloudWatch Container Insights for the EKS cluster. Use the Container Insights data to diagnose the delays.
- C . Create alarms based on the existing CloudWatch metrics. Set up an Amazon Simple Notification Service (Amazon SNS) topic to send email alerts.
- D . Increase the timeout settings in the application code for network operations to allow more time for operations to finish.
A
Explanation:
AWS X-Ray provides distributed tracing for microservice-based applications. Deploying the X-Ray daemon as a DaemonSet in the EKS cluster and instrumenting the application with the X-Ray SDK enables end-to-end tracing across microservices, helping identify performance bottlenecks. This method is documented in “Using AWS X-Ray with Amazon EKS” (AWS Observability Guide).
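For illustration only (in practice the X-Ray SDK builds this for you): the daemon running as a DaemonSet listens on UDP port 2000 for segment documents like the one sketched below. The function name and timings are placeholders; this just shows the shape of what instrumented pods emit.

```python
import json
import os
import time

# Minimal X-Ray segment document in the format the SDK emits to the daemon.
# Trace ID format: "1-<8 hex epoch seconds>-<24 hex random characters>".
def make_segment(name: str) -> dict:
    now = time.time()
    return {
        "name": name,
        "trace_id": f"1-{int(now):08x}-{os.urandom(12).hex()}",
        "id": os.urandom(8).hex(),  # 16-hex-char segment ID
        "start_time": now,
        "end_time": now + 0.05,
    }

segment = make_segment("account-summary")

# The SDK prepends a small JSON header and sends the payload to the
# daemon over UDP port 2000 (commented out here to stay self-contained):
payload = b'{"format": "json", "version": 1}\n' + json.dumps(segment).encode()
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, ("127.0.0.1", 2000))

print(segment["trace_id"])
```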
A company manages a large fleet of Amazon EC2 Linux instances in its production AWS account by using AWS Systems Manager. The EC2 instances must comply with a list of compliance requirements.
The company’s DevOps engineers wrote Chef cookbooks to detect and remediate configuration deviations. The company does not want to manage a Chef server and agent infrastructure.
The DevOps engineers need to set up the Chef cookbooks to run periodically on the EC2 instances.
Which solution will meet these requirements?
- A . Create a Systems Manager State Manager association. Associate the AWS-ApplyChefRecipes document with all EC2 instances. Configure the association to retrieve the Chef cookbooks from a source repository and to run every hour.
- B . Store the Chef agent installation package in an Amazon S3 bucket. Configure a Systems Manager Run Command to invoke the AWS-InstallApplication command on all instances and to run the repair action. Schedule the Run Command to run every hour.
- C . Create a Systems Manager State Manager association that applies the AWS-RefreshAssociation document to all EC2 instances. Configure the association to run every hour.
- D . Configure a Systems Manager patch policy to run the scan and install operation every hour. Create a patch baseline for the EC2 instances. Configure the instance IAM profile with permissions for patch operations.
A
Explanation:
Option A directly matches all requirements with the least extra infrastructure:
State Manager is the Systems Manager capability for defining and maintaining a desired configuration by running associations on a schedule (for example, hourly). That satisfies the “run periodically” requirement in a managed way across a large fleet.
The AWS-ApplyChefRecipes Systems Manager document is specifically intended to run Chef recipes/cookbooks on managed instances without requiring you to run your own Chef server infrastructure. You can point it at a cookbook source (such as an artifact/repo location) and have Systems Manager handle execution on the instances.
Why the other options aren’t correct:
B is not the right mechanism for periodic Chef cookbook execution. Installing an application package and running a “repair action” via Run Command is not the purpose-built Chef cookbook runner, and it’s more brittle/DIY than State Manager associations.
C (AWS-RefreshAssociation) is used to refresh/reevaluate association metadata/targets; it does not itself execute Chef cookbooks as the compliance mechanism.
D is for OS patching compliance (Patch Manager/patch policies), not for applying Chef cookbooks that detect and remediate configuration drift.
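The association from Option A can be sketched as the parameters for an SSM `create_association` call. The targets, repository URL, and run list are placeholders, and the `AWS-ApplyChefRecipes` parameter names (`SourceType`, `SourceInfo`, `RunList`) should be verified against the document schema in your Region.

```python
# Parameters for ssm.create_association: run AWS-ApplyChefRecipes hourly
# against tagged instances, pulling cookbooks from a Git source.
association_params = {
    "Name": "AWS-ApplyChefRecipes",
    "AssociationName": "apply-compliance-cookbooks",
    "ScheduleExpression": "rate(1 hour)",  # the "run every hour" requirement
    "Targets": [{"Key": "tag:Environment", "Values": ["production"]}],
    "Parameters": {
        "SourceType": ["Git"],
        "SourceInfo": ['{"repository": "https://example.com/cookbooks.git"}'],
        "RunList": ["recipe[compliance::default]"],
    },
}

# import boto3
# boto3.client("ssm").create_association(**association_params)

print(association_params["ScheduleExpression"])
```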
A company is deploying a new application that uses Amazon EC2 instances. The company needs a solution to query application logs and AWS account API activity.
Which solution will meet these requirements?
- A . Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon CloudWatch Logs. Configure AWS CloudTrail to deliver the API logs to Amazon S3. Use CloudWatch to query both sets of logs.
- B . Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon CloudWatch Logs. Configure AWS CloudTrail to deliver the API logs to CloudWatch Logs. Use CloudWatch Logs Insights to query both sets of logs.
- C . Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon Kinesis. Configure AWS CloudTrail to deliver the API logs to Kinesis. Use Kinesis to load the data into Amazon Redshift. Use Amazon Redshift to query both sets of logs.
- D . Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon S3. Use AWS CloudTrail to deliver the API logs to Amazon S3. Use Amazon Athena to query both sets of logs in Amazon S3.
D
Explanation:
This solution will meet the requirements because it will use Amazon S3 as a common data lake for both the application logs and the API logs. Amazon S3 is a service that provides scalable, durable, and secure object storage for any type of data. You can use the Amazon CloudWatch agent to send logs from your EC2 instances to S3 buckets, and use AWS CloudTrail to deliver the API logs to S3 buckets as well. You can also use Amazon Athena to query both sets of logs in S3 using standard SQL, without loading or transforming them. Athena is a serverless interactive query service that allows you to analyze data in S3 using a variety of data formats, such as JSON, CSV, Parquet, and ORC.
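The Athena query step can be sketched as a `start_query_execution` request. The database, table, and results bucket are placeholders; the `cloudtrail_logs` table would be created over the CloudTrail S3 prefix first (for example, with the CREATE TABLE statement the CloudTrail console generates).

```python
# Example SQL over a CloudTrail table in S3, plus the boto3 request that
# would run it. All names and locations here are placeholders.
query = """
SELECT eventtime, eventname, useridentity.arn
FROM cloudtrail_logs
WHERE eventsource = 'ec2.amazonaws.com'
ORDER BY eventtime DESC
LIMIT 20;
"""

athena_request = {
    "QueryString": query,
    "QueryExecutionContext": {"Database": "logs_db"},
    "ResultConfiguration": {"OutputLocation": "s3://example-athena-results/"},
}

# import boto3
# boto3.client("athena").start_query_execution(**athena_request)

print("cloudtrail_logs" in athena_request["QueryString"])
```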
A company has its AWS accounts in an organization in AWS Organizations. AWS Config is manually configured in each AWS account. The company needs to implement a solution to centrally configure AWS Config for all accounts in the organization The solution also must record resource changes to a central account.
Which combination of actions should a DevOps engineer perform to meet these requirements? (Choose two.)
- A . Configure a delegated administrator account for AWS Config. Enable trusted access for AWS Config in the organization.
- B . Configure a delegated administrator account for AWS Config. Create a service-linked role for AWS Config in the organization’s management account.
- C . Create an AWS CloudFormation template to create an AWS Config aggregator. Configure a CloudFormation stack set to deploy the template to all accounts in the organization.
- D . Create an AWS Config organization aggregator in the organization’s management account. Configure data collection from all AWS accounts in the organization and from all AWS Regions.
- E . Create an AWS Config organization aggregator in the delegated administrator account. Configure data collection from all AWS accounts in the organization and from all AWS Regions.
A, E
Explanation:
Enabling trusted access for AWS Config and registering a delegated administrator account (Option A) allows AWS Config to be configured centrally for the whole organization. Creating an organization aggregator in the delegated administrator account (Option E) then collects resource configuration and change data from all accounts and all Regions into that central account, without placing the workload in the management account or deploying per-account stack sets.
A company uses AWS Organizations, AWS Control Tower, AWS Config, and Terraform to manage its AWS accounts and resources. The company must ensure that users deploy only AWS Lambda functions that are connected to a VPC in member AWS accounts.
Which solution will meet these requirements with the LEAST operational effort?
- A . Configure AWS Control Tower to use proactive controls (guardrails). Enable optional controls implemented with AWS CloudFormation hooks for Lambda on all OUs.
- B . Create a new SCP that checks the lambda:VpcIds condition key for allowed values.
- C . Create a custom AWS Config rule to detect non-VPC-connected Lambda functions.
- D . Create a new SCP with a conditional statement that denies Lambda creation if lambda:VpcIds is null.
D
Explanation:
Use a Service Control Policy (SCP) with a Null condition on lambda:VpcIds to deny Lambda function creation or update when not VPC-attached. This enforces compliance across all accounts automatically without manual remediation, aligning with AWS Control Tower governance recommendations.
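The SCP from Option D can be sketched as the following policy document. The statement denies creating or reconfiguring a Lambda function whenever the `lambda:VpcIds` condition key is absent (null), which is the case for functions that are not attached to a VPC.

```python
import json

# SCP denying non-VPC Lambda functions via a Null condition on lambda:VpcIds.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonVpcLambda",
            "Effect": "Deny",
            "Action": [
                "lambda:CreateFunction",
                "lambda:UpdateFunctionConfiguration",
            ],
            "Resource": "*",
            # "Null": true means "the key is not present in the request",
            # i.e. no VPC configuration was supplied.
            "Condition": {"Null": {"lambda:VpcIds": "true"}},
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached at the root or to the member OUs, this denies the action organization-wide with no per-account remediation logic to maintain.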
A company has 20 service teams. Each service team is responsible for its own microservice. Each service team uses a separate AWS account for its microservice and a VPC with the 192.168.0.0/22 CIDR block. The company manages the AWS accounts with AWS Organizations.
Each service team hosts its microservice on multiple Amazon EC2 instances behind an Application Load Balancer. The microservices communicate with each other across the public internet. The company’s security team has issued a new guideline that all communication between microservices must use HTTPS over private network connections and cannot traverse the public internet.
A DevOps engineer must implement a solution that fulfills these obligations and minimizes the number of changes for each service team.
Which solution will meet these requirements?
- A . Create a new AWS account in AWS Organizations. Create a VPC in this account and use AWS Resource Access Manager to share the private subnets of this VPC with the organization. Instruct the service teams to launch a new Network Load Balancer (NLB) and EC2 instances that use the shared private subnets. Use the NLB DNS names for communication between microservices.
- B . Create a Network Load Balancer (NLB) in each of the microservice VPCs. Use AWS PrivateLink to create VPC endpoints in each AWS account for the NLBs. Create subscriptions to each VPC endpoint in each of the other AWS accounts. Use the VPC endpoint DNS names for communication between microservices.
- C . Create a Network Load Balancer (NLB) in each of the microservice VPCs. Create VPC peering connections between each of the microservice VPCs. Update the route tables for each VPC to use the peering links. Use the NLB DNS names for communication between microservices.
- D . Create a new AWS account in AWS Organizations. Create a transit gateway in this account and use AWS Resource Access Manager to share the transit gateway with the organization. In each of the microservice VPCs, create a transit gateway attachment to the shared transit gateway. Update the route tables of each VPC to use the transit gateway. Create a Network Load Balancer (NLB) in each of the microservice VPCs. Use the NLB DNS names for communication between microservices.
B
Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/connecting-networks-with-overlapping-ip-ranges/
AWS PrivateLink is the best option because every VPC uses the same 192.168.0.0/22 CIDR block, and neither VPC peering nor AWS Transit Gateway supports connecting VPCs with overlapping CIDR ranges. PrivateLink works across overlapping address spaces because traffic flows through interface endpoints rather than routed connections.
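The PrivateLink pattern from Option B can be sketched as two boto3 EC2 calls: the provider account exposes its NLB as an endpoint service, and each consumer account creates an interface VPC endpoint to it. All ARNs and IDs below are placeholders.

```python
# Provider side: expose the microservice's NLB as a VPC endpoint service.
service_config = {
    "AcceptanceRequired": False,
    "NetworkLoadBalancerArns": [
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/net/orders-nlb/abc123"
    ],
}
# import boto3
# svc = boto3.client("ec2").create_vpc_endpoint_service_configuration(**service_config)

# Consumer side: create an interface endpoint to the service in the
# consumer's own VPC; the endpoint DNS name is then used for HTTPS calls.
endpoint_request = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0example",
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0example",
    "SubnetIds": ["subnet-0example"],
    "PrivateDnsEnabled": False,
}
# boto3.client("ec2").create_vpc_endpoint(**endpoint_request)

print(endpoint_request["VpcEndpointType"])
```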
A company has developed a web application that conducts seasonal sales on public holidays. The web application is deployed on AWS and uses AWS services for storage, database, computing, and encryption. During seasonal sales, the company expects high network traffic from many users. The company must receive insights regarding any unexpected behavior during the sale. A DevOps team wants to review the insights upon detecting anomalous behaviors during the sale. The DevOps team wants to receive recommended actions to resolve the anomalous behaviors. The recommendations must be provided on the provisioned infrastructure to address issues that might occur in the future.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Select TWO.)
- A . Enable Amazon DevOps Guru in the AWS account. Determine the coverage for DevOps Guru for all supported AWS resources in the account. Use the DevOps Guru dashboard to find the analysis, recommendations, and related metrics.
- B . Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure Amazon DevOps Guru to send notifications about important events to the company when anomalies are identified.
- C . Create an Amazon S3 bucket. Store Amazon CloudWatch logs, AWS CloudTrail data, and AWS Config data in the S3 bucket. Use Amazon Athena to generate insights on the data. Create a dashboard by using Amazon QuickSight.
- D . Configure email message reports for an Amazon QuickSight dashboard. Schedule and send the email reports to the company.
- E . Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure Amazon Athena to send query results about important events to the company when anomalies are identified.
A, B
Explanation:
Amazon DevOps Guru is a fully managed service that uses machine learning to detect anomalous application behavior and provides insights with recommended remediation actions to improve operational performance.
Enabling DevOps Guru (Option A) covers many AWS resources automatically and reduces manual analysis.
Configuring SNS notifications (Option B) allows the team to be proactively alerted about important events.
Options C, D, and E require manual data setup, complex querying, and dashboard/reporting maintenance, which increase operational overhead.
Therefore, the combination of enabling DevOps Guru and setting up notifications achieves the goals with minimal effort.
Reference: Amazon DevOps Guru Overview:
"DevOps Guru provides operational insights and anomaly detection with actionable recommendations."
(Amazon DevOps Guru Documentation)
Amazon DevOps Guru Notifications:
"You can configure DevOps Guru to send notifications via Amazon SNS topics for detected anomalies."
(DevOps Guru Notifications)
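The notification setup from Option B amounts to a single API call: registering an SNS topic as a DevOps Guru notification channel via `add_notification_channel`. The topic ARN below is a placeholder.

```python
# Configuration for devops-guru add_notification_channel: DevOps Guru
# publishes insight and anomaly notifications to the given SNS topic.
notification_channel = {
    "Config": {
        "Sns": {
            "TopicArn": "arn:aws:sns:us-east-1:111111111111:devops-guru-alerts"
        }
    }
}

# import boto3
# boto3.client("devops-guru").add_notification_channel(**notification_channel)

print(notification_channel["Config"]["Sns"]["TopicArn"])
```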
