Practice Free DOP-C02 Exam Online Questions
A company that uses electronic patient health records runs a fleet of Amazon EC2 instances with an Amazon Linux operating system. The company must continuously ensure that the EC2 instances are running operating system patches and application patches that are in compliance with current privacy regulations. The company uses a custom repository to store application patches.
A DevOps engineer needs to automate the deployment of operating system patches and application patches. The DevOps engineer wants to use both the default operating system patch repository and the custom patch repository.
Which solution will meet these requirements with the LEAST effort?
- A . Use AWS Systems Manager to create a new custom patch baseline that includes the default operating system repository and the custom repository. Run the AWS-RunPatchBaseline document by using the Run command to verify and install patches. Use the BaselineOverride API to configure the new custom patch baseline.
- B . Use AWS Direct Connect to integrate the custom repository with the EC2 instances. Use Amazon EventBridge events to deploy the patches.
- C . Use the yum-config-manager command to add the custom repository to the /etc/yum.repos.d configuration. Run the yum-config-manager-enable command to activate the new repository.
- D . Use AWS Systems Manager to create a patch baseline for the default operating system repository and a second patch baseline for the custom repository. Run the AWS-RunPatchBaseline document by using the Run command to verify and install patches. Use the BaselineOverride API to configure the default patch baseline and the custom patch baseline.
A
Explanation:
AWS Systems Manager Patch Manager supports custom patch baselines that pull from alternative patch source repositories in addition to the default operating system repositories. Creating one custom patch baseline that includes both the default Amazon Linux repository and the company's custom application repository, and then running the AWS-RunPatchBaseline document through Run Command (supplying the baseline with the BaselineOverride parameter where needed), automates verification and installation of both operating system and application patches.
Option D maintains two baselines for the same purpose, doubling the configuration work.
Option B misuses AWS Direct Connect, which provides private network connectivity and has nothing to do with patch repositories or patch deployment.
Option C only configures the repository on individual instances; it does not automate patch verification or deployment across the fleet.
Thus, option A meets the requirements with the least effort.
Reference: AWS Systems Manager Patch Manager, "How to specify an alternative patch source repository" (the Sources setting on custom patch baselines), and the AWS-RunPatchBaseline document (BaselineOverride parameter).
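Option A's custom patch baseline embeds an ordinary yum repository definition (the same stanza that would otherwise go in a file under /etc/yum.repos.d/) in its Sources configuration. A minimal boto3-style sketch, where the baseline name, repository URL, and product string are illustrative placeholders:

```python
# Illustrative sketch only: builds the request that ssm.create_patch_baseline
# would take for a custom baseline with an additional patch source. The
# repository URL and the "Products" string are hypothetical placeholders --
# check the PatchSource documentation for the values for your OS version.
def build_custom_baseline(name, repo_id, repo_url):
    # The Sources entry embeds a plain yum repo stanza.
    repo_stanza = (
        f"[{repo_id}]\n"
        "name=Custom application patches\n"
        f"baseurl={repo_url}\n"
        "enabled=1\n"
    )
    return {
        "Name": name,
        "OperatingSystem": "AMAZON_LINUX_2",
        "ApprovalRules": {
            "PatchRules": [
                {
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {"Key": "CLASSIFICATION",
                             "Values": ["Security", "Bugfix"]}
                        ]
                    },
                    "ApproveAfterDays": 0,
                }
            ]
        },
        "Sources": [
            {
                "Name": repo_id,
                "Products": ["AmazonLinux2"],  # assumption; varies by OS
                "Configuration": repo_stanza,
            }
        ],
    }

request = build_custom_baseline(
    "app-patch-baseline", "custom-app-repo", "https://repo.example.com/x86_64"
)
# With credentials configured, this request could then be passed to
# boto3.client("ssm").create_patch_baseline(**request).
```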
A company discovers that its production environment and disaster recovery (DR) environment are deployed to the same AWS Region. All the production applications run on Amazon EC2 instances and are deployed by AWS CloudFormation. The applications use an Amazon FSx for NetApp ONTAP volume for application storage. No application data resides on the EC2 instances. A DevOps engineer copies the required AMIs to a new DR Region. The DevOps engineer also updates the CloudFormation code to accept a Region as a parameter. The storage needs to have an RPO of 10 minutes in the DR Region.
Which solution will meet these requirements?
- A . Create an Amazon S3 bucket in both Regions. Configure S3 Cross-Region Replication (CRR) for the S3 buckets. Create a scheduled AWS Lambda function to copy any new content from the FSx for ONTAP volume to the S3 bucket in the production Region.
- B . Use AWS Backup to create a backup vault and a custom backup plan that has a 10-minute frequency. Specify the DR Region as the target Region. Assign the EC2 instances in the production Region to the backup plan.
- C . Create an AWS Lambda function to create snapshots of the instance store volumes that are attached to the EC2 instances. Configure the Lambda function to copy the snapshots to the DR Region and to remove the previous copies. Create an Amazon EventBridge scheduled rule that invokes the Lambda function every 10 minutes.
- D . Create an FSx for ONTAP instance in the DR Region. Configure a 5-minute schedule for a volume-level NetApp SnapMirror to replicate the volume from the production Region to the DR Region.
D
Explanation:
NetApp SnapMirror is the native replication technology for FSx for ONTAP volumes. Replicating to a file system in the DR Region on a 5-minute schedule keeps the copy well within the 10-minute RPO. Option B backs up the EC2 instances, which hold no application data. Option C refers to instance store snapshots, which do not exist, and would miss the FSx volume in any case. Option A's scheduled Lambda copy is custom code that cannot reliably guarantee the RPO.
A company has a public application that uses an Amazon API Gateway REST API, an AWS Lambda function, and an Amazon RDS for PostgreSQL DB cluster. Users have recently received error messages as application demand increased.
The company’s DevOps engineer discovered that the errors were caused by RDS connection limits being reached. The DevOps engineer also discovered that more than 90% of the API requests are GET requests that read from the DB cluster.
How should the DevOps engineer solve this problem with the LEAST development effort?
- A . Migrate from Amazon RDS to Amazon DynamoDB. Add an Amazon CloudFront distribution in front of the API Gateway REST API.
- B . Use Amazon RDS Proxy in front of the RDS DB cluster. Enable API caching in API Gateway.
- C . Add an Amazon RDS Proxy in front of the RDS database cluster. Provision an Amazon ElastiCache (Redis OSS) cluster.
- D . Migrate from Amazon RDS to Amazon DynamoDB. Enable API caching in API Gateway.
B
Explanation:
The root cause of the problem is that the RDS PostgreSQL instance is exceeding its database connection limit due to high volumes of concurrent GET requests. The most efficient and lowest-effort remediation is to reduce the number of connections that reach the database and serve as many reads as possible from a cache layer.
Option B accomplishes both goals with minimal changes to the existing architecture.
First, RDS Proxy manages database connections efficiently by pooling and reusing them. This reduces connection churn and prevents the Lambda function from opening excessive concurrent connections during traffic spikes. RDS Proxy is natively integrated with RDS and requires only minor configuration changes without altering application logic.
Second, enabling API Gateway caching allows caching GET responses directly at the API layer. Because over 90% of requests are GET operations, enabling caching drastically reduces traffic hitting both Lambda and RDS, significantly improving performance and lowering connection pressure.
Option C also improves read scaling, but adding ElastiCache introduces additional infrastructure and requires modifying application code to query Redis before querying the database. This is more complex than enabling API Gateway caching.
Options A and D require full database migration, which is far more labor-intensive.
Option B is the simplest, most effective, and most aligned with AWS best practices for reducing RDS connection saturation.
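As a rough sketch of option B's second half, enabling stage caching in API Gateway amounts to an update_stage call with patch operations like the following (the cache size and TTL values are illustrative assumptions):

```python
# Sketch of the patch operations apigateway.update_stage accepts to enable
# API Gateway stage caching; cache size and TTL are illustrative values.
def caching_patch_ops(cache_size_gb="0.5", ttl_seconds=300):
    return [
        # Provision the stage's cache cluster.
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": cache_size_gb},
        # Turn caching on for every method and set its TTL.
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds",
         "value": str(ttl_seconds)},
    ]

ops = caching_patch_ops()
# With real IDs, these would be passed as:
# boto3.client("apigateway").update_stage(restApiId=..., stageName=...,
#                                         patchOperations=ops)
```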
A company wants governance where only specific Regions and services can be used, with centralized AD authentication and job-function-based roles.
Which solution meets these requirements?
- A . Use OUs with group policies and StackSets for IAM roles.
- B . Use permission boundaries and StackSets.
- C . Use SCPs to restrict Regions/services and Resource Access Manager to share roles.
- D . Use SCPs to restrict Regions/services and StackSets for IAM roles with trust to AD.
D
Explanation:
Apply SCPs for Region and service restriction. Use CloudFormation StackSets to consistently deploy IAM roles with trust policies for SSO/AD integration. This model enforces governance uniformly across all accounts per AWS multi-account best practices.
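A Region-restriction SCP of the kind option D describes typically denies everything outside an approved Region list while exempting global services. A sketch, where the exempted services in NotAction are illustrative assumptions to tailor to your organization:

```python
import json

# Sketch of an SCP that denies all actions outside an approved Region list.
# The exempted global services in NotAction are illustrative assumptions.
def region_restriction_scp(allowed_regions):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideApprovedRegions",
                "Effect": "Deny",
                "NotAction": [
                    "iam:*",
                    "organizations:*",
                    "sts:*",
                    "support:*",
                ],
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
                },
            }
        ],
    }

scp = region_restriction_scp(["us-east-1", "eu-west-1"])
print(json.dumps(scp, indent=2))
```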
A company uses a CI/CD pipeline to deploy its workload in the ap-southeast-2 Region. The company receives images through a Network Load Balancer (NLB) and processes the images in AWS Fargate tasks on an Amazon ECS cluster. An Amazon ECR repository stores the images as Docker images. The company uses Route 53 for DNS. The company saves the images in an S3 bucket and metadata in DynamoDB. The company wants to expand to eu-west-2 with high availability and resilience.
Which combination of steps will meet these requirements with the FEWEST configuration changes? (Select THREE).
- A . Configure ECR replication to eu-west-2 on the repository. Configure an NLB in eu-west-2 that resolves to Fargate tasks in an ECS cluster in eu-west-2. Configure a latency routing policy in Route 53 for the two workloads.
- B . Configure the DynamoDB table as a global table with a replica in eu-west-2. Configure the Fargate tasks to interact with the DynamoDB table in ap-southeast-2.
- C . Configure the DynamoDB table as a global table with a replica in eu-west-2. Configure the Fargate tasks to interact with the DynamoDB table in the same Region that the tasks run in.
- D . Configure a new S3 bucket in eu-west-2. Configure data replication between the S3 bucket in ap-southeast-2 and the S3 bucket in eu-west-2. Configure the Fargate tasks to use the S3 bucket in the same Region that the tasks run in to perform S3 PUT and GET operations.
- E . Configure an S3 Multi-Region Access Point for the S3 bucket in ap-southeast-2 and a new S3 bucket in eu-west-2. Configure two-way replication on the S3 buckets. Configure the workloads to use the Multi-Region Access Point for S3 PUT and GET operations.
- F . Configure the CI/CD pipeline to deploy ECR images to both Regions. Configure an NLB in eu-west-2 that resolves to Fargate tasks in an ECS cluster in eu-west-2. Configure a failover routing policy in Route 53 for the two workloads.
A, C, D
Explanation:
(A) Use ECR replication to keep images synchronized between Regions, minimizing CI/CD pipeline changes.
(C) DynamoDB global tables allow multi-Region replication and provide local read/write access, so tasks should interact with the DynamoDB replica in their Region.
(D) Cross-Region replication keeps the two buckets synchronized, and the tasks access the bucket in their own Region, so reads and writes stay local for latency and resilience.
(B) Using DynamoDB global table but pointing tasks to only one Region reduces resilience.
(E) S3 Multi-Region Access Points are newer but add complexity and are not required for minimal changes.
(F) Managing CI/CD pipeline for multiple Regions and failover routing adds complexity beyond minimal changes.
References:
Amazon ECR Cross-Region Replication
DynamoDB Global Tables
Amazon S3 Cross-Region Replication
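Option A's ECR replication is a registry-level setting; a sketch of the configuration that ecr.put_replication_configuration accepts (the destination Region and account ID are placeholders):

```python
# Sketch of the registry replication configuration that
# ecr.put_replication_configuration accepts; Region and account ID are
# placeholder values.
def ecr_replication_config(dest_region, registry_id):
    return {
        "rules": [
            {
                "destinations": [
                    {"region": dest_region, "registryId": registry_id}
                ]
            }
        ]
    }

config = ecr_replication_config("eu-west-2", "111122223333")
# boto3.client("ecr").put_replication_configuration(
#     replicationConfiguration=config)
```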
A company runs applications on Windows and Linux Amazon EC2 instances. The instances run across multiple Availability Zones In an AWS Region. The company uses Auto Scaling groups for each application.
The company needs a durable storage solution for the instances. The solution must use SMB for Windows and must use NFS for Linux. The solution must also have sub-millisecond latencies. All instances will read and write the data.
Which combination of steps will meet these requirements? (Select THREE.)
- A . Create an Amazon Elastic File System (Amazon EFS) file system that has targets in multiple Availability Zones
- B . Create an Amazon FSx for NetApp ONTAP Multi-AZ file system.
- C . Create a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume to use for shared storage.
- D . Update the user data for each application’s launch template to mount the file system
- E . Perform an instance refresh on each Auto Scaling group.
- F . Update the EC2 instances for each application to mount the file system when new instances are launched
B, D, E
Explanation:
Create an Amazon FSx for NetApp ONTAP Multi-AZ file system:
FSx for NetApp ONTAP is the only option listed that serves the same data over both SMB (for the Windows instances) and NFS (for the Linux instances) while delivering sub-millisecond latencies. A Multi-AZ deployment provides the required durability and availability, and all instances can read and write the data concurrently.
Reference: Amazon FSx for NetApp ONTAP
Update the user data for each application’s launch template to mount the file system:
Updating the user data in the launch template ensures that every new instance launched by an Auto Scaling group automatically mounts the file system, with no manual intervention.
Example user data for mounting the volume over NFS on Linux (the SVM DNS name and volume path are placeholders):
#!/bin/bash
sudo mkdir -p /mnt/fsx
sudo mount -t nfs svm-01.fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsx
Example user data for mapping the SMB share on Windows (the DNS name and share name are placeholders):
<powershell>
net use Z: \\svm-01.fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com\share /persistent:yes
</powershell>
Perform an instance refresh on each Auto Scaling group:
An instance refresh replaces the instances that are already running with new instances launched from the updated template, so the existing fleet also mounts the shared file system.
Option A (Amazon EFS) supports only NFS, so the Windows instances could not use it, and EFS latencies are in the low single-digit milliseconds rather than sub-millisecond. Option C does not work because an EBS volume is confined to one Availability Zone and gp3 volumes do not support Multi-Attach. Option F describes reconfiguring instances after launch instead of automating the mount through the launch template.
By implementing options B, D, and E, the company gets a durable Multi-AZ shared file system with sub-millisecond latencies that serves both SMB and NFS clients.
Reference: Mounting Amazon FSx File Systems
A company needs to update its order processing application to improve resilience and availability. The application requires a stateful database and uses a single-node Amazon RDS DB instance to store customer orders and transaction history. A DevOps engineer must make the database highly available.
Which solution will meet this requirement?
- A . Migrate the database to Amazon DynamoDB global tables. Configure automatic failover between AWS Regions by using Amazon Route 53 health checks.
- B . Migrate the database to Amazon EC2 instances in multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to connect all the instances to a single EBS volume.
- C . Use the RDS DB instance as the source instance to create read replicas in multiple Availability Zones. Deploy an Application Load Balancer to distribute read traffic across the read replicas.
- D . Modify the RDS DB instance to be a Multi-AZ deployment. Verify automatic failover to the standby instance if the primary instance becomes unavailable.
D
Explanation:
An RDS Multi-AZ deployment maintains a synchronously replicated standby instance in another Availability Zone and fails over to it automatically, making the database highly available without application changes. Option A requires re-architecting a relational, transactional workload for DynamoDB. Option B does not work because EBS Multi-Attach is limited to io1/io2 volumes within a single Availability Zone and provides no database-level high availability. Option C scales reads only; read replicas replicate asynchronously and are not a high-availability mechanism for writes.
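Option D's change is a single instance modification; a sketch of the kwargs rds.modify_db_instance would take (the instance identifier is a placeholder):

```python
# Sketch: kwargs for rds.modify_db_instance to convert a single-node RDS
# instance to a Multi-AZ deployment; the instance identifier is a placeholder.
def multi_az_request(db_instance_id, apply_immediately=False):
    return {
        "DBInstanceIdentifier": db_instance_id,
        "MultiAZ": True,
        # False defers the change to the next maintenance window.
        "ApplyImmediately": apply_immediately,
    }

request = multi_az_request("orders-db", apply_immediately=True)
# boto3.client("rds").modify_db_instance(**request)
```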
A company recently migrated its legacy application from on-premises to AWS. The application is hosted on Amazon EC2 instances behind an Application Load Balancer which is behind Amazon API Gateway. The company wants to ensure users experience minimal disruptions during any deployment of a new version of the application. The company also wants to ensure it can quickly roll back updates if there is an issue.
Which solution will meet these requirements with MINIMAL changes to the application?
- A . Introduce changes as a separate environment parallel to the existing one. Configure API Gateway to use a canary release deployment to send a small subset of user traffic to the new environment.
- B . Introduce changes as a separate environment parallel to the existing one. Update the application’s DNS alias records to point to the new environment.
- C . Introduce changes as a separate target group behind the existing Application Load Balancer. Configure API Gateway to route user traffic to the new target group in steps.
- D . Introduce changes as a separate target group behind the existing Application Load Balancer. Configure API Gateway to route all traffic to the Application Load Balancer, which then sends the traffic to the new target group.
A
Explanation:
API Gateway supports canary release deployments on a deployment stage, sending a small configurable percentage of traffic to the new version before all traffic is shifted. A parallel environment means a new ALB and target group in front of a new set of EC2 instances running the updated application. The canary settings on the API stage direct that small fraction of traffic to the new ALB, which forwards it to the new instances, so users see minimal disruption, and the canary can be removed immediately to roll back if problems appear.
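A canary like the one option A describes is created with canary settings on the deployment; a sketch of the kwargs apigateway.create_deployment would take (the API ID and stage name are placeholders):

```python
# Sketch: kwargs for apigateway.create_deployment that start a canary sending
# 10% of stage traffic to the new deployment; IDs are placeholder values.
def canary_deployment_request(rest_api_id, stage_name, percent=10.0):
    return {
        "restApiId": rest_api_id,
        "stageName": stage_name,
        "canarySettings": {
            "percentTraffic": percent,
            # Serve canary responses fresh rather than from the stage cache.
            "useStageCache": False,
        },
    }

request = canary_deployment_request("a1b2c3", "prod")
# boto3.client("apigateway").create_deployment(**request)
```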
A DevOps engineer is building a continuous deployment pipeline for a serverless application that uses AWS Lambda functions. The company wants to reduce the customer impact of an unsuccessful deployment. The company also wants to monitor for issues.
Which deploy stage configuration will meet these requirements?
- A . Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.
- B . Use AWS CloudFormation to publish a new stack update, and include Amazon CloudWatch alarms on all resources. Set up an AWS CodePipeline approval action for a developer to verify and approve the AWS CloudFormation change set.
- C . Use AWS CloudFormation to publish a new version on every stack update, and include Amazon CloudWatch alarms on all resources. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.
- D . Use AWS CodeBuild to add sample event payloads for testing to the Lambda functions. Publish a new version of the functions, and include Amazon CloudWatch alarms. Update the production alias to point to the new version. Configure rollbacks to occur when an alarm is in the ALARM state.
A
Explanation:
AWS SAM integrates with AWS CodeDeploy for gradual Lambda deployments. The Canary10Percent15Minutes deployment preference shifts 10% of traffic to the new version, waits 15 minutes while CloudWatch alarms monitor its health, and rolls back automatically if an alarm fires, which minimizes customer impact from an unsuccessful deployment. Options B and C depend on manual approval or hand-built traffic shifting, and option D requires implementing the monitoring and rollback behavior from scratch.
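Option A's deployment preference is declared per function in the SAM template; a minimal sketch, where the resource names, handler, runtime, and alarm are hypothetical placeholders:

```yaml
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      # Publishing an alias is required for gradual deployments.
      AutoPublishAlias: live
      DeploymentPreference:
        # Shift 10% of traffic, wait 15 minutes, then shift the rest.
        Type: Canary10Percent15Minutes
        # Roll back automatically if this alarm enters ALARM state.
        Alarms:
          - !Ref OrdersFunctionErrorsAlarm
```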
A company uses an Amazon Aurora PostgreSQL global database that has two secondary AWS Regions. A DevOps engineer has configured the database parameter group to guarantee an RPO of 60 seconds. Write operations on the primary cluster are occasionally blocked because of the RPO setting.
The DevOps engineer needs to reduce the frequency of blocked write operations.
Which solution will meet these requirements?
- A . Add an additional secondary cluster to the global database.
- B . Enable write forwarding for the global database.
- C . Remove one of the secondary clusters from the global database.
- D . Configure synchronous replication for the global database.
C
Explanation:
Step 1: Reducing Replication Lag in Aurora Global Databases
In Amazon Aurora global databases, write operations on the primary cluster can be delayed due to the time it takes to replicate to secondary clusters, especially when there are multiple secondary regions involved.
Issue: The write operations are occasionally blocked due to the RPO setting, which guarantees replication within 60 seconds.
Action: Remove one of the secondary clusters from the global database.
Why: Fewer secondary clusters will reduce the overall replication lag, improving write performance and reducing the frequency of blocked writes.
Reference: AWS documentation on Aurora Global Database.
This corresponds to Option C: Remove one of the secondary clusters from the global database.
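The RPO guarantee in this question is the rpo_lag_target cluster parameter on the Aurora PostgreSQL primary cluster. A sketch of the parameter update that rds.modify_db_cluster_parameter_group would take (the parameter group name is a placeholder):

```python
# Sketch: the managed-RPO setting is the rpo_lag_target cluster parameter
# (in seconds); the parameter group name below is a placeholder.
def rpo_parameter_update(parameter_group_name, rpo_seconds=60):
    return {
        "DBClusterParameterGroupName": parameter_group_name,
        "Parameters": [
            {
                "ParameterName": "rpo_lag_target",
                "ParameterValue": str(rpo_seconds),
                "ApplyMethod": "immediate",
            }
        ],
    }

request = rpo_parameter_update("aurora-global-params", rpo_seconds=120)
# boto3.client("rds").modify_db_cluster_parameter_group(**request)
```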
