Practice Free SOA-C03 Exam Online Questions
A company’s CloudOps engineer monitors multiple AWS accounts in an organization and checks each account’s AWS Health Dashboard. After adding 10 new accounts, the engineer wants to consolidate health alerts from all accounts.
Which solution meets this requirement with the least operational effort?
- A . Enable organizational view in AWS Health.
- B . Configure the Health Dashboard in each account to forward events to a central AWS CloudTrail log.
- C . Create an AWS Lambda function to query the AWS Health API and write all events to an Amazon DynamoDB table.
- D . Use the AWS Health API to write events to an Amazon DynamoDB table.
A
Explanation:
The AWS Cloud Operations and Governance documentation explains that enabling Organizational View in AWS Health allows the management account in AWS Organizations to view and aggregate health events from all member accounts.
This feature provides a single-pane-of-glass view of service health issues, account-specific events, and planned maintenance across the organization, without requiring additional automation or data pipelines.
Alternative options (B, C, and D) require custom integration and ongoing maintenance. CloudTrail does not natively forward AWS Health events, and custom Lambda or DynamoDB approaches increase complexity.
Therefore, Option A, enabling the Organizational View feature in AWS Health, is the most operationally efficient and AWS-recommended solution.
Reference: AWS Cloud Operations & Governance Guide – Consolidating Multi-Account Health Events with AWS Health Organizational View
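Conceptually, organizational view does for you the per-account aggregation that options C and D would have to build by hand. A minimal sketch of that merging (illustrative only; the field names loosely follow the AWS Health event shape, not the real API):

```python
from itertools import chain
from operator import itemgetter

def consolidate_health_events(events_by_account):
    # Tag each event with its source account and merge everything into one
    # newest-first list, mimicking the consolidated organizational view.
    merged = chain.from_iterable(
        ({**event, "awsAccountId": account_id} for event in events)
        for account_id, events in events_by_account.items()
    )
    return sorted(merged, key=itemgetter("startTime"), reverse=True)
```

With organizational view enabled, none of this custom plumbing is needed; the management account sees the consolidated events directly.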
A CloudOps engineer is preparing to deploy an application to Amazon EC2 instances that are in an Auto Scaling group. The application requires dependencies to be installed. Application updates are issued weekly.
The CloudOps engineer needs to implement a solution to incorporate the application updates on a regular basis. The solution also must conduct a vulnerability scan during Amazon Machine Image (AMI) creation.
What is the MOST operationally efficient solution that meets these requirements?
- A . Create a script that uses Packer and schedule a cron job.
- B . Install the application and dependencies on an EC2 instance and create an AMI.
- C . Use EC2 Image Builder with a custom recipe to install the application and dependencies.
- D . Invoke the EC2 CreateImage API operation by using an EventBridge scheduled rule.
C
Explanation:
EC2 Image Builder is a managed service that automates the creation, testing, vulnerability scanning, and distribution of AMIs. It supports scheduled image pipelines, which makes it ideal for weekly application updates.
Image Builder integrates with Amazon Inspector to perform vulnerability scans during image creation, fulfilling the security requirement. Custom image recipes define application dependencies and installation steps, ensuring consistency across deployments.
Manual AMI creation, cron-based scripts, or direct API calls require ongoing maintenance and do not natively support vulnerability scanning.
Therefore, EC2 Image Builder is the most operationally efficient solution.
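As a sketch of what the pipeline setup involves, the dict below mirrors the shape of Image Builder's CreateImagePipeline request (the ARNs are placeholders, and the six-field cron syntax for a weekly Sunday build is an assumption to verify against the service docs):

```python
def weekly_pipeline_request(recipe_arn, infra_arn):
    # Parameters one might pass to Image Builder's CreateImagePipeline API.
    # ARNs are placeholders; imageScanningConfiguration turns on the
    # Amazon Inspector scan during image creation.
    return {
        "name": "weekly-app-pipeline",
        "imageRecipeArn": recipe_arn,
        "infrastructureConfigurationArn": infra_arn,
        "schedule": {"scheduleExpression": "cron(0 0 ? * sun *)"},
        "imageScanningConfiguration": {"imageScanningEnabled": True},
    }
```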
A company runs an application on Amazon EC2 that connects to an Amazon Aurora PostgreSQL database. A developer accidentally drops a table from the database, causing application errors. Two hours later, a CloudOps engineer needs to recover the data and make the application functional again.
Which solution will meet this requirement?
- A . Use the Aurora Backtrack feature to rewind the database to a specified time, 2 hours in the past.
- B . Perform a point-in-time recovery on the existing database to restore the database to a specified point in time, 2 hours in the past.
- C . Perform a point-in-time recovery and create a new database to restore the database to a specified point in time, 2 hours in the past. Reconfigure the application to use a new database endpoint.
- D . Create a new Aurora cluster. Choose the Restore data from S3 bucket option. Choose log files up to the failure time 2 hours in the past.
C
Explanation:
In the AWS Cloud Operations and Aurora documentation, when data loss occurs due to human error such as dropped tables, Point-in-Time Recovery (PITR) is the recommended method for restoration. PITR creates a new Aurora cluster restored to a specific time before the failure.
The restored cluster has a new endpoint that must be reconfigured in the application to resume normal operations. AWS does not support performing PITR directly on an existing production database because that would overwrite current data.
Aurora Backtrack (Option A) applies only to Aurora MySQL, not PostgreSQL.
Option B is incorrect because PITR cannot be executed in place.
Option D refers to an import process from S3, which is unrelated to time-based recovery.
Hence, Option C is correct and follows the AWS CloudOps standard recovery pattern for PostgreSQL workloads.
Reference: AWS Cloud Operations & Aurora Guide – Section: Performing Point-in-Time Recovery for Aurora PostgreSQL Clusters
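A sketch of the restore request, using the parameter names of the RDS RestoreDBClusterToPointInTime API (cluster identifiers are placeholders), makes the key point visible: the restore targets a new cluster identifier, so a new endpoint:

```python
from datetime import datetime, timedelta, timezone

def pitr_request(source_cluster, new_cluster, hours_back=2):
    # Parameters for RDS RestoreDBClusterToPointInTime (sketch). The call
    # always creates a NEW cluster; the application must be repointed at
    # the new cluster's endpoint afterwards.
    restore_to = datetime.now(timezone.utc) - timedelta(hours=hours_back)
    return {
        "SourceDBClusterIdentifier": source_cluster,
        "DBClusterIdentifier": new_cluster,  # new cluster, new endpoint
        "RestoreToTime": restore_to,
        "RestoreType": "full-copy",
    }
```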
A company uses AWS CloudFormation to manage a stack of Amazon EC2 instances. A CloudOps engineer needs to keep the EC2 instances and their data even if the stack is deleted.
Which solution will meet these requirements?
- A . Set the DeletionPolicy attribute to Snapshot.
- B . Use Amazon Data Lifecycle Manager (DLM).
- C . Create an AWS Backup plan.
- D . Set the DeletionPolicy attribute to Retain.
D
Explanation:
CloudFormation’s DeletionPolicy: Retain ensures that resources are not deleted when the stack is deleted. This preserves both the EC2 instance and any attached storage.
The Snapshot deletion policy is not supported for AWS::EC2::Instance resources; it applies to resources such as EBS volumes and RDS instances. Amazon Data Lifecycle Manager and AWS Backup protect data through snapshots but do not stop CloudFormation from terminating the instances when the stack is deleted.
Therefore, Retain is the correct policy.
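A minimal template, shown here as a Python dict for illustration (the AMI ID is a placeholder), highlights where the attribute lives: DeletionPolicy sits at the resource level, next to Type, not inside Properties:

```python
def template_with_retained_instance(ami_id="ami-0123456789abcdef0"):
    # Minimal CloudFormation template expressed as a dict. With
    # DeletionPolicy: Retain, deleting the stack leaves the instance
    # (and its attached volumes) in place.
    return {
        "Resources": {
            "AppInstance": {
                "Type": "AWS::EC2::Instance",
                "DeletionPolicy": "Retain",
                "Properties": {"ImageId": ami_id, "InstanceType": "t3.micro"},
            }
        }
    }
```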
A company hosts a critical legacy application on two Amazon EC2 instances that are in one Availability Zone. The instances run behind an Application Load Balancer (ALB). The company uses Amazon CloudWatch alarms to send Amazon Simple Notification Service (Amazon SNS) notifications when the ALB health checks detect an unhealthy instance. After a notification, the company’s engineers manually restart the unhealthy instance. A CloudOps engineer must configure the application to be highly available and more resilient to failures.
Which solution will meet these requirements?
- A . Create an Amazon Machine Image (AMI) from a healthy instance. Launch additional instances from the AMI in the same Availability Zone. Add the new instances to the ALB target group.
- B . Increase the size of each instance. Create an Amazon EventBridge rule. Configure the EventBridge rule to restart the instances if they enter a failed state.
- C . Create an Amazon Machine Image (AMI) from a healthy instance. Launch an additional instance from the AMI in the same Availability Zone. Add the new instance to the ALB target group. Create an AWS Lambda function that runs when an instance is unhealthy. Configure the Lambda function to stop and restart the unhealthy instance.
- D . Create an Amazon Machine Image (AMI) from a healthy instance. Create a launch template that uses the AMI. Create an Amazon EC2 Auto Scaling group that is deployed across multiple Availability Zones. Configure the Auto Scaling group to add instances to the ALB target group.
D
Explanation:
High availability requires removing single-AZ risk and eliminating manual recovery. AWS reliability best practices call for multi-AZ design and automatic healing: Auto Scaling “helps maintain application availability and allows you to automatically add or remove EC2 instances” (AWS Auto Scaling User Guide), and the Reliability Pillar recommends that you “distribute workloads across multiple Availability Zones” and “automate recovery from failure” (AWS Well-Architected Framework – Reliability Pillar). Attaching the Auto Scaling group to an ALB target group enables health-based replacement: instances that fail load balancer health checks are replaced, and traffic is routed only to healthy targets. Using an AMI in a launch template ensures consistent, repeatable instance configuration (AWS EC2 Launch Templates).
Options A and C keep all instances in a single Availability Zone and rely on manual or ad hoc restarts, which do not meet high-availability or resiliency goals. Option B only scales vertically and adds a restart rule; it neither removes the single-AZ failure domain nor provides automated replacement. Therefore, creating a multi-AZ EC2 Auto Scaling group with a launch template and attaching it to the ALB target group (Option D) is the CloudOps-aligned solution for resilience and business continuity.
Reference:
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide: Domain 2 – Reliability and Business Continuity
• AWS Well-Architected Framework – Reliability Pillar
• Amazon EC2 Auto Scaling User Guide – Health checks and replacement
• Elastic Load Balancing User Guide – Target group health checks and ALB integration
• Amazon EC2 Launch Templates – Reproducible instance configuration
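The pieces of option D can be sketched as CloudFormation resources, shown here as Python dicts with placeholder identifiers. The two details that deliver the resiliency are the multi-AZ subnet list and the ELB health check type:

```python
def asg_resources(ami_id, subnet_ids, target_group_arn):
    # CloudFormation resources (as dicts) for a multi-AZ Auto Scaling group
    # attached to an ALB target group; IDs/ARNs are placeholders.
    # HealthCheckType "ELB" makes the group replace instances that fail ALB
    # health checks instead of waiting for a manual restart.
    return {
        "AppLaunchTemplate": {
            "Type": "AWS::EC2::LaunchTemplate",
            "Properties": {
                "LaunchTemplateData": {"ImageId": ami_id, "InstanceType": "t3.small"}
            },
        },
        "AppAsg": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "MinSize": "2",
                "MaxSize": "4",
                "VPCZoneIdentifier": subnet_ids,  # subnets in different AZs
                "TargetGroupARNs": [target_group_arn],
                "HealthCheckType": "ELB",
                "LaunchTemplate": {
                    "LaunchTemplateId": {"Ref": "AppLaunchTemplate"},
                    "Version": {
                        "Fn::GetAtt": ["AppLaunchTemplate", "LatestVersionNumber"]
                    },
                },
            },
        },
    }
```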
A CloudOps engineer needs to ensure that AWS resources across multiple AWS accounts are tagged consistently. The company uses an organization in AWS Organizations to centrally manage the accounts. The company wants to implement cost allocation tags to accurately track the costs that are allocated to each business unit.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Organizations tag policies to enforce mandatory tagging on all resources. Enable cost allocation tags in the AWS Billing and Cost Management console.
- B . Configure AWS CloudTrail events to invoke an AWS Lambda function to detect untagged resources and to automatically assign tags based on predefined rules.
- C . Use AWS Config to evaluate tagging compliance. Use AWS Budgets to apply tags for cost allocation.
- D . Use AWS Service Catalog to provision only pre-tagged resources. Use AWS Trusted Advisor to enforce tagging across the organization.
A
Explanation:
Tagging is essential for governance, cost management, and automation in CloudOps operations. The AWS Organizations tag policies feature allows centralized definition and enforcement of required tag keys and accepted values across all accounts in an organization. According to the AWS CloudOps study guide under Deployment, Provisioning, and Automation, tag policies enable automatic validation of tags, ensuring consistency with minimal manual overhead.
Once tagging consistency is enforced, enabling cost allocation tags in the AWS Billing and Cost Management console allows accurate cost distribution per business unit. AWS documentation states:
“Use AWS Organizations tag policies to standardize tags across accounts. You can activate cost allocation tags in the Billing console to track and allocate costs.”
Option B introduces unnecessary complexity with Lambda automation.
Option C detects but does not enforce tagging.
Option D limits flexibility to Service Catalog resources only. Therefore, Option A provides a centrally managed, automated, and low-overhead solution that meets CloudOps tagging and cost-tracking requirements.
Reference:
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 3: Deployment, Provisioning and Automation
• AWS Organizations – Tag Policies
• AWS Billing and Cost Management – Cost Allocation Tags
• AWS Well-Architected Framework – Operational Excellence and Cost Optimization Pillars
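A sketch of what such a tag policy document looks like, following the Organizations tag policy syntax (the CostCenter key, its allowed values, and the enforced resource type are illustrative choices, not requirements):

```python
import json

def cost_center_tag_policy(allowed_values):
    # AWS Organizations tag policy (sketch): standardize the CostCenter key,
    # restrict its values, and enforce compliance for EC2 instances.
    policy = {
        "tags": {
            "costcenter": {
                "tag_key": {"@@assign": "CostCenter"},
                "tag_value": {"@@assign": allowed_values},
                "enforced_for": {"@@assign": ["ec2:instance"]},
            }
        }
    }
    return json.dumps(policy)
```

Attached to the organization root or an OU, a policy like this standardizes tags centrally; activating CostCenter as a cost allocation tag in the Billing console then makes the per-business-unit breakdown appear in cost reports.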
A CloudOps engineer wants to share a copy of a production database with a migration account. The production database is hosted on an Amazon RDS DB instance and is encrypted at rest with an AWS Key Management Service (AWS KMS) key that has an alias of production-rds-key.
What must the CloudOps engineer do to meet these requirements with the LEAST administrative overhead?
- A . Take a snapshot of the RDS DB instance. Update the KMS key policy to allow access for the migration account root user. Share the snapshot with the migration account.
- B . Create an RDS read replica in the migration account. Replicate the KMS key.
- C . Take a snapshot and create a new KMS key in the migration account with the same alias.
- D . Export the database to Amazon S3 and import it into a new RDS instance.
A
Explanation:
For encrypted RDS snapshots, cross-account sharing requires both sharing the snapshot and granting access to the KMS key used to encrypt it. By updating the KMS key policy to allow the migration account access, the encrypted snapshot can be restored in the migration account.
Option A uses native RDS snapshot sharing with minimal configuration changes, making it the approach with the least administrative overhead.
Options B, C, and D introduce unnecessary complexity, higher operational cost, or unsupported workflows.
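The key-policy change can be sketched as a single statement added to production-rds-key's policy (the account ID is a placeholder, and the action list reflects permissions commonly needed to restore a shared encrypted snapshot; treat it as an assumption to adjust):

```python
def cross_account_kms_statement(migration_account_id):
    # KMS key policy statement (sketch) granting the migration account use
    # of the key so it can restore the shared encrypted snapshot.
    return {
        "Sid": "AllowMigrationAccountUseOfTheKey",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{migration_account_id}:root"},
        "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
        "Resource": "*",
    }
```

Sharing the snapshot alone is not enough: without this key access, the migration account can see the snapshot but cannot restore it.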
A company hosts an encrypted Amazon S3 bucket in the ap-southeast-2 Region. Users from the eu-west-2 Region access the S3 bucket through the internet. The users from eu-west-2 need faster transfers to and from the S3 bucket for large files.
Which solution will meet these requirements?
- A . Create an S3 access point in eu-west-2 to use as the destination for S3 replication from ap-southeast-2. Ensure all users switch to the new S3 access point.
- B . Create an Amazon Route 53 hosted zone with a geolocation routing policy. Choose the Alias to S3 website endpoint option. Specify the S3 bucket that is in ap-southeast-2 as the source bucket.
- C . Create a new S3 bucket in eu-west-2. Copy all contents from ap-southeast-2 to the new bucket in eu-west-2. Create an S3 access point, and associate it with both buckets. Ensure users use the new S3 access point.
- D . Configure and activate S3 Transfer Acceleration on the S3 bucket. Use the new S3 acceleration endpoint’s domain name for access.
D
Explanation:
For users in eu-west-2 transferring large files to/from an S3 bucket in ap-southeast-2 over the public internet, S3 Transfer Acceleration is designed to improve performance by leveraging the AWS global edge network. With Transfer Acceleration enabled, users access the bucket via the acceleration endpoint. Requests enter AWS at the nearest edge location and then traverse the AWS backbone network to the bucket’s Region, which typically reduces latency variability and can improve throughput for long-distance transfers.
Option A is incorrect because access points do not “replicate destinations” in the way described, and an access point in another Region does not move data closer by itself.
Option B is incorrect because Route 53 DNS routing does not accelerate data transfers; it only resolves names and (in this case) also incorrectly references S3 website endpoints (not appropriate for general large file transfer APIs).
Option C is not valid as written because a single S3 access point cannot be “associated with both buckets,” and maintaining two buckets introduces synchronization and consistency overhead; it is not the least-friction acceleration approach for direct internet users.
Reference: Amazon S3 User Guide – Transfer Acceleration
AWS SysOps Administrator Study Guide – S3 performance optimization
AWS Well-Architected Framework – Performance Efficiency considerations for data transfer
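The change amounts to two things: enabling acceleration on the bucket and switching clients to the accelerated endpoint. A sketch (the bucket name is a placeholder; the request dict mirrors the shape of S3's PutBucketAccelerateConfiguration parameters):

```python
def accelerate_setup(bucket):
    # Sketch: parameters for S3 PutBucketAccelerateConfiguration, plus the
    # accelerated endpoint clients should use once it is enabled. Requests
    # to this endpoint enter AWS at the nearest edge location and travel
    # the AWS backbone to the bucket's Region.
    request = {
        "Bucket": bucket,
        "AccelerateConfiguration": {"Status": "Enabled"},
    }
    endpoint = f"https://{bucket}.s3-accelerate.amazonaws.com"
    return request, endpoint
```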
A company is migrating its production file server to AWS. All data stored on the file server must remain accessible if an Availability Zone becomes unavailable or during system maintenance. Users must access the file server through the SMB protocol and manage permissions by using Windows ACLs.
Which solution will meet these requirements?
- A . Create a single AWS Storage Gateway file gateway.
- B . Create an Amazon FSx for Windows File Server Multi-AZ file system.
- C . Deploy two AWS Storage Gateway file gateways in two Availability Zones behind an Application Load Balancer.
- D . Deploy two Amazon FSx for Windows File Server Single-AZ file systems and configure DFS Replication.
B
Explanation:
Amazon FSx for Windows File Server is a fully managed native Windows file system that supports SMB, Windows authentication, and Windows ACLs. The Multi-AZ deployment option automatically replicates data synchronously across Availability Zones and provides automatic failover with minimal downtime.
This architecture ensures continuous availability during AZ failures or maintenance events without manual intervention. Users experience consistent access and permissions through SMB, fully meeting the stated requirements.
Storage Gateway introduces on-premises dependencies. DFS Replication increases complexity and recovery time. Therefore, FSx for Windows Multi-AZ is the correct solution.
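As a sketch, the dict below mirrors the shape of the FSx CreateFileSystem request for a Multi-AZ Windows file system (subnet and directory IDs are placeholders, and the capacity values are arbitrary examples):

```python
def fsx_multi_az_request(subnet_ids, active_directory_id):
    # Parameters for FSx CreateFileSystem (sketch). MULTI_AZ_1 keeps a
    # standby file server in a second AZ with synchronous replication and
    # automatic failover; Active Directory integration provides the
    # Windows ACL support the users require.
    return {
        "FileSystemType": "WINDOWS",
        "StorageCapacity": 1024,       # GiB
        "SubnetIds": subnet_ids,       # one subnet per AZ
        "WindowsConfiguration": {
            "DeploymentType": "MULTI_AZ_1",
            "PreferredSubnetId": subnet_ids[0],
            "ThroughputCapacity": 32,  # MB/s
            "ActiveDirectoryId": active_directory_id,
        },
    }
```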
A company asks a SysOps administrator to provision an additional environment for an application in four additional AWS Regions. The application is running on more than 100 Amazon EC2 instances in the us-east-1 Region, using fully configured Amazon Machine Images (AMIs). The company has an AWS CloudFormation template to deploy resources in us-east-1.
What should the SysOps administrator do to provision the application in the MOST operationally efficient manner?
- A . Copy the AMI to each Region by using the aws ec2 copy-image command. Update the CloudFormation template to include mappings for the copied AMIs.
- B . Create a snapshot of the running instance. Copy the snapshot to the other Regions. Create an AMI from the snapshots. Update the CloudFormation template for each Region to use the new AMI.
- C . Run the existing CloudFormation template in each additional Region based on the success of the template that is used currently in us-east-1.
- D . Update the CloudFormation template to include the additional Regions in the Auto Scaling group. Update the existing stack in us-east-1.
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
The most operationally efficient approach is A: copy the AMI to each target Region using copy-image and update the CloudFormation template to reference the correct AMI IDs per Region (commonly via Mappings or parameters). AMIs are regional resources, so an AMI built in us-east-1 cannot be launched directly in other Regions without copying. The copy-image operation is the standard, supported method to replicate an AMI across Regions while preserving the image configuration and backing snapshots in the destination Region.
Once AMIs exist in each Region, CloudFormation can be executed in each Region using the same template logic. Adding mappings for AMI IDs keeps the deployment consistent and repeatable, aligning with Infrastructure as Code practices and minimizing manual steps.
Option B is more work than necessary because copying snapshots and re-creating AMIs adds extra steps and increases the chance of inconsistency.
Option C is incomplete because the template will fail or launch incorrect resources if it references an AMI ID that does not exist in the target Region.
Option D is not feasible because an Auto Scaling group is a regional construct and cannot span multiple Regions from a single stack update in us-east-1.
Reference: Amazon EC2 User Guide – Copy an AMI across Regions (copy-image) and AMI regional scope
AWS CloudFormation User Guide – Mappings/parameters for Region-specific values
AWS SysOps Administrator Study Guide – Multi-Region provisioning and automation best practices
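The workflow can be sketched as follows (AMI ID and Region names are placeholders): build one CopyImage request per destination Region, note that each call is issued in its destination Region, and prepare the Mappings skeleton that the per-Region AMI IDs will populate:

```python
def copy_plan_and_mappings(source_ami, source_region, target_regions):
    # Sketch: per-Region parameters for EC2 CopyImage (the request names the
    # source; the destination is the Region where the call is issued), plus
    # the skeleton of a CloudFormation Mappings section to fill in with the
    # AMI IDs the copies return.
    requests = {
        region: {
            "Name": "app-ami",
            "SourceImageId": source_ami,
            "SourceRegion": source_region,
        }
        for region in target_regions
    }
    mappings = {"RegionMap": {r: {"AMI": "<copied-ami-id>"} for r in target_regions}}
    return requests, mappings
```

With the mappings filled in, the same template runs unchanged in every Region, keeping the deployment repeatable.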
