Practice Free SOA-C03 Exam Online Questions
A web application runs on Amazon EC2 instances in the us-east-1 Region and the us-west-2 Region. The instances run behind an Application Load Balancer (ALB) in each Region. An Amazon Route 53 hosted zone controls DNS records.
The instances in us-east-1 are production resources. The instances in us-west-2 are for disaster recovery. EC2 Auto Scaling groups are configured based on the ALBRequestCountPerTarget metric in both Regions.
A SysOps administrator must implement a solution that provides failover from us-east-1 to us-west-2. The instances in us-west-2 must be used only for failover.
Which solution will meet these requirements?
- A . Implement a Route 53 health check and a failover routing policy for the hosted zone. Configure the failover routing policy to automatically redirect traffic to the resources in us-west-2.
- B . Implement a Route 53 health check and a latency routing policy for the hosted zone. Configure the latency routing policy to automatically redirect traffic to the resources in us-west-2.
- C . In us-east-1, create an Amazon CloudWatch alarm that enters ALARM state when an EC2 instance is terminated. In us-west-2, create an AWS Lambda function that modifies the Route 53 hosted zone records to send traffic to us-west-2. Configure the CloudWatch alarm to invoke the Lambda function.
- D . In us-west-2, create an Amazon CloudWatch alarm that enters ALARM state when resources in us-east-1 cannot be resolved. In us-west-2, create an AWS Lambda function that modifies the Route 53 hosted zone records to send traffic to us-west-2. Configure the CloudWatch alarm to invoke the Lambda function.
A
Explanation:
The requirement is classic active-passive (production in us-east-1, DR in us-west-2 “only for failover”). The most operationally efficient and purpose-built solution is Route 53 failover routing combined with health checks. With failover routing, Route 53 designates one record as PRIMARY (us-east-1) and another as SECONDARY (us-west-2). Route 53 continuously evaluates the health check associated with the primary endpoint (commonly the ALB DNS name or a specific health-check path). If the primary fails, Route 53 automatically returns the secondary record, directing client DNS resolution to the DR region. This ensures us-west-2 is used only when us-east-1 is unhealthy, directly matching the requirement.
Latency routing (Option B) is designed to route users to the Region with the lowest latency, which can actively send traffic to us-west-2 even when us-east-1 is healthy, violating the “DR only” constraint. Options C and D introduce custom automation (CloudWatch + Lambda + DNS record updates) that increases operational overhead, adds failure modes, and is unnecessary because Route 53 already provides managed health-check-based failover. Additionally, “EC2 instance terminated” is not a reliable proxy for full application availability, and DNS modification automation is more complex than using native Route 53 failover policies.
Reference: Amazon Route 53 Developer Guide – Health checks and failover routing policy
AWS Well-Architected Framework – Reliability pillar (failover, DR patterns)
AWS SysOps Administrator Study Guide – DNS failover and Route 53 routing policies
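The primary/secondary record setup described above can be sketched in CloudFormation. This is a minimal illustration only: the hosted zone ID, domain name, ALB DNS names, and health-check path are placeholders, and the alias hosted zone IDs should be verified for your Regions.

```yaml
# Hypothetical sketch of Route 53 failover routing (all IDs/names are placeholders).
Resources:
  PrimaryHealthCheck:
    Type: AWS::Route53::HealthCheck
    Properties:
      HealthCheckConfig:
        Type: HTTPS
        FullyQualifiedDomainName: primary-alb-123.us-east-1.elb.amazonaws.com
        ResourcePath: /health          # assumed health-check path
        RequestInterval: 30
        FailureThreshold: 3

  PrimaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z0EXAMPLE          # assumed hosted zone
      Name: app.example.com
      Type: A
      SetIdentifier: primary-us-east-1
      Failover: PRIMARY
      HealthCheckId: !Ref PrimaryHealthCheck
      AliasTarget:
        DNSName: primary-alb-123.us-east-1.elb.amazonaws.com
        HostedZoneId: Z35SXDOTRQ7X7K   # regional ALB hosted zone ID (verify)
        EvaluateTargetHealth: true

  SecondaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z0EXAMPLE
      Name: app.example.com
      Type: A
      SetIdentifier: secondary-us-west-2
      Failover: SECONDARY
      AliasTarget:
        DNSName: dr-alb-456.us-west-2.elb.amazonaws.com
        HostedZoneId: Z1H1FL5HABSF5    # regional ALB hosted zone ID (verify)
        EvaluateTargetHealth: true
```

With this in place, Route 53 answers queries with the primary alias while its health check passes, and automatically serves the secondary record only when the primary is unhealthy.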
A company is running an application on premises and wants to use AWS for data backup. All of the data must be available locally. The backup application can write only to block-based storage that is compatible with the Portable Operating System Interface (POSIX).
Which backup solution will meet these requirements?
- A . Configure the backup software to use Amazon S3 as the target for the data backups.
- B . Configure the backup software to use Amazon S3 Glacier Flexible Retrieval as the target for the data backups.
- C . Use AWS Storage Gateway, and configure it to use gateway-cached volumes.
- D . Use AWS Storage Gateway, and configure it to use gateway-stored volumes.
D
Explanation:
The Storage Gateway service enables hybrid cloud backup by presenting local block storage that synchronizes with AWS cloud storage. For scenarios where all data must remain available locally while still backed up to AWS, the correct mode is gateway-stored volumes.
AWS documentation defines:
“Use stored volumes if you want to keep all your data locally while asynchronously backing up point-in-time snapshots to Amazon S3 for durable storage.”
These volumes expose an iSCSI block interface that can be formatted with a POSIX-compliant file system, allowing direct use by on-premises backup software.
Gateway-cached volumes (Option C) store primary data in AWS with limited local cache, violating the “all data must be available locally” requirement. Options A and B are object-based storage solutions, not compatible with POSIX or block-based backup applications.
Therefore, Option D fully satisfies CloudOps reliability and continuity best practices by ensuring local availability, cloud durability, and POSIX compatibility for backups.
Reference:
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 2: Reliability and Business Continuity
• AWS Storage Gateway User Guide – Stored Volumes Overview
• AWS Well-Architected Framework – Reliability Pillar
• AWS Hybrid Cloud Storage Best Practices
A company uses hundreds of Amazon EC2 On-Demand Instances and Spot Instances to run production and non-production workloads. The company installs and configures the AWS Systems Manager Agent (SSM Agent) on the EC2 instances.
During a recent instance patch operation, some instances were not patched because the instances were either busy or down. The company needs to generate a report that lists the current patch version of all instances.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Use Systems Manager Inventory to collect patch versions. Generate a report of all instances.
- B . Use Systems Manager Run Command to remotely collect patch version information. Generate a report of all instances.
- C . Use AWS Config to track EC2 instance configuration changes by using output from the SSM Agents. Create a custom rule to check for patch versions. Generate a report of all unpatched instances.
- D . Use AWS Config to monitor the patch status of the EC2 instances by using output from the SSM Agents. Create a configuration compliance rule to check whether patches are installed. Generate a report of all instances.
A
Explanation:
AWS Systems Manager Inventory is designed to collect metadata from managed instances, including installed software, applications, and patch information. It works asynchronously and does not require instances to be actively running a command at the time of collection, which is critical when instances may be busy or temporarily unavailable during patch windows.
Inventory data is stored centrally and can be queried to generate reports showing the current patch
level or installed patch versions across all managed instances. This makes it well-suited for large fleets that include both On-Demand and Spot Instances and that may scale dynamically.
Option B relies on Run Command, which requires instances to be online and available at execution time. This does not meet the requirement because some instances were already missed during patch operations due to being busy or down.
Option C and Option D use AWS Config, which is primarily intended for configuration compliance and drift detection, not detailed patch version reporting. Creating custom or managed rules for patch status introduces unnecessary complexity and overhead compared to Inventory’s built-in capability.
Therefore, Systems Manager Inventory provides the most operationally efficient and reliable solution for collecting and reporting patch version data across all EC2 instances.
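Inventory collection is typically enabled through a State Manager association. The sketch below is a minimal, hypothetical example using the AWS-managed `AWS-GatherSoftwareInventory` document; the schedule and parameter choices are assumptions.

```yaml
# Hypothetical sketch: collect software/patch inventory from all managed instances.
Resources:
  InventoryAssociation:
    Type: AWS::SSM::Association
    Properties:
      Name: AWS-GatherSoftwareInventory  # AWS-managed inventory document
      ScheduleExpression: rate(1 day)    # assumed collection cadence
      Targets:
        - Key: InstanceIds
          Values:
            - '*'                        # all SSM-managed instances
      Parameters:
        applications:
          - Enabled                      # captures installed packages/patch versions
        instanceDetailedInformation:
          - Enabled
```

Because collection runs on a schedule against whatever instances are managed at that time, instances that were briefly busy or down are picked up on the next cycle, and the aggregated data can be queried or exported for a fleet-wide report.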
A company needs to monitor its website’s availability to end users. The company needs a solution to provide an Amazon Simple Notification Service (Amazon SNS) notification if the website’s uptime decreases to less than 99%. The monitoring must provide an accurate view of the user experience on the website.
Which solution will meet these requirements?
- A . Create an Amazon CloudWatch alarm that is based on the website’s logs that are published to a CloudWatch Logs log group. Configure the alarm to publish an SNS notification if the number of HTTP 4xx and 5xx errors exceeds a specified threshold.
- B . Create an Amazon CloudWatch alarm that is based on the website’s published metrics in CloudWatch. Configure the alarm to publish an SNS notification based on anomaly detection.
- C . Create an Amazon CloudWatch Synthetics heartbeat monitoring canary. Associate the canary with the website’s URL. Create a CloudWatch alarm for the canary. Configure the alarm to publish an SNS notification if the value of the SuccessPercent metric is less than 99%.
- D . Create an Amazon CloudWatch Synthetics broken link checker monitoring canary. Associate the canary with the website’s URL. Create a CloudWatch alarm for the canary. Configure the alarm to publish an SNS notification if the value of the SuccessPercent metric is less than 99%.
C
Explanation:
Amazon CloudWatch Synthetics heartbeat canaries actively test a website by sending periodic requests from AWS-managed locations, closely simulating real user access. This provides an accurate measurement of availability from an end-user perspective, which is a key requirement.
The SuccessPercent metric represents the percentage of successful executions over time and directly maps to website uptime. Creating a CloudWatch alarm on this metric allows the CloudOps engineer to receive SNS notifications when availability drops below the 99% threshold.
Log-based or anomaly-detection approaches do not reliably represent user experience, and broken link checkers focus on content integrity rather than availability. Therefore, a heartbeat canary is the correct solution.
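A heartbeat canary and its alarm can be sketched in CloudFormation as follows. All names, ARNs, buckets, and the runtime version here are illustrative assumptions, not values from the question.

```yaml
# Hypothetical sketch: heartbeat canary + alarm on SuccessPercent < 99.
Resources:
  HeartbeatCanary:
    Type: AWS::Synthetics::Canary
    Properties:
      Name: website-heartbeat
      ArtifactS3Location: s3://my-canary-artifacts          # assumed bucket
      ExecutionRoleArn: arn:aws:iam::111122223333:role/CanaryRole  # assumed role
      RuntimeVersion: syn-nodejs-puppeteer-9.0              # assumed runtime
      Schedule:
        Expression: rate(5 minutes)
      Code:
        Handler: pageLoadBlueprint.handler
        S3Bucket: my-canary-code                            # assumed script location
        S3Key: heartbeat.zip
      StartCanaryAfterCreation: true

  UptimeAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmName: website-uptime-below-99
      Namespace: CloudWatchSynthetics
      MetricName: SuccessPercent
      Dimensions:
        - Name: CanaryName
          Value: website-heartbeat
      Statistic: Average
      Period: 300
      EvaluationPeriods: 1
      Threshold: 99
      ComparisonOperator: LessThanThreshold
      AlarmActions:
        - arn:aws:sns:us-east-1:111122223333:uptime-alerts  # assumed SNS topic
```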
A CloudOps engineer is creating a simple, public-facing website running on Amazon EC2. The CloudOps engineer created the EC2 instance in an existing public subnet and assigned an Elastic IP address. The CloudOps engineer created a new security group that allows incoming HTTP traffic from 0.0.0.0/0. The CloudOps engineer also created a new network ACL and applied it to the subnet to allow incoming HTTP traffic from 0.0.0.0/0. However, the website cannot be reached from the internet.
What is the cause of this issue?
- A . The CloudOps engineer did not create an outbound rule that allows ephemeral port return traffic in the new network ACL.
- B . The CloudOps engineer did not create an outbound rule in the security group that allows HTTP traffic from port 80.
- C . The Elastic IP address assigned to the EC2 instance has changed.
- D . There is an additional network ACL associated with the subnet that denies inbound HTTP traffic.
A
Explanation:
Network ACLs are stateless, meaning both inbound and outbound rules must explicitly allow traffic. While inbound HTTP traffic (port 80) was allowed, the return traffic from the EC2 instance uses ephemeral ports (typically 1024–65535). If outbound rules do not allow this ephemeral port range, the response traffic is dropped, preventing the website from loading.
Security groups are stateful and automatically allow return traffic, but network ACLs do not. This commonly causes connectivity issues when custom ACLs are applied without matching outbound rules.
Option B is incorrect because security groups allow all outbound traffic by default.
Option C is incorrect because an Elastic IP address is static; it does not change while it remains associated with the instance.
Option D is incorrect because only one network ACL can be associated with a subnet at a time.
Thus, the missing outbound ephemeral port rule in the network ACL is the root cause.
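The missing rule can be sketched as a single network ACL entry. The ACL ID and rule number below are placeholders.

```yaml
# Hypothetical sketch: the outbound ephemeral-port rule the custom network ACL
# is missing. Without it, return traffic for inbound HTTP requests is dropped.
Resources:
  OutboundEphemeral:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
      NetworkAclId: acl-0123456789abcdef0  # assumed network ACL ID
      RuleNumber: 100
      Protocol: 6                          # TCP
      Egress: true                         # outbound rule
      RuleAction: allow
      CidrBlock: 0.0.0.0/0
      PortRange:
        From: 1024                         # ephemeral port range
        To: 65535
```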
A CloudOps engineer has an AWS CloudFormation template of the company’s existing infrastructure in us-west-2. The CloudOps engineer attempts to use the template to launch a new stack in eu-west-1, but the stack partially deploys, receives an error message, and then rolls back.
Why would this template fail to deploy? (Select TWO.)
- A . The template referenced an IAM user that is not available in eu-west-1.
- B . The template referenced an Amazon Machine Image (AMI) that is not available in eu-west-1.
- C . The template did not have the proper level of permissions to deploy the resources.
- D . The template requested services that do not exist in eu-west-1.
- E . CloudFormation templates can be used only to update existing services.
B, D
Explanation:
Amazon Machine Images (AMIs) are Region-specific. An AMI ID that exists in us-west-2 does not automatically exist in eu-west-1. If a CloudFormation template references a hardcoded AMI ID from one Region, stack creation in another Region will fail when that AMI cannot be found.
Additionally, not all AWS services or service features are available in every AWS Region. If the template includes a resource type or feature that is unsupported in eu-west-1, CloudFormation will fail during stack creation.
IAM users are global resources, not Region-specific, so Option A is incorrect. Permission issues would typically fail immediately and are not Region-dependent.
Option E is incorrect because CloudFormation can both create and update resources.
Therefore, Region-specific AMIs and unavailable services are the valid reasons for failure.
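The hardcoded-AMI problem is commonly avoided with a Region-keyed Mappings section, as in the hypothetical sketch below (the AMI IDs are placeholders). An alternative is resolving the AMI at deploy time from an SSM public parameter.

```yaml
# Hypothetical sketch: resolve the correct AMI per Region so one template
# deploys in both us-west-2 and eu-west-1.
Mappings:
  RegionMap:
    us-west-2:
      AMI: ami-0aaaaaaaaaaaaaaaa   # placeholder AMI ID
    eu-west-1:
      AMI: ami-0bbbbbbbbbbbbbbbb   # placeholder AMI ID

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !FindInMap [RegionMap, !Ref 'AWS::Region', AMI]
      InstanceType: t3.micro
```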
A company has created a new video-on-demand (VOD) application. The application runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The company configured an Amazon CloudFront distribution and set the ALB as the origin. Because of increasing application demand, the company wants to move all video files to a central Amazon S3 bucket.
A SysOps administrator needs to ensure that video files can be cached at edge locations after the company migrates the files to Amazon S3.
Which solution will meet this requirement?
- A . Configure CloudFront to send the X-Forwarded-For header to the origin and to redirect video requests to Amazon S3 instead of the ALB.
- B . Configure a new CloudFront cache behavior to route to Amazon S3 as a new origin, based on matching a URL path pattern.
- C . Configure URL signing in the CloudFront distribution by using a custom policy. Ensure that video files are accessed through signed URLs only.
- D . Configure a CloudFront origin group. Specify the required HTTP status codes to direct connection attempts to a secondary origin.
B
Explanation:
To ensure video files are cached at CloudFront edge locations after migrating the files to Amazon S3, CloudFront must be able to fetch those video objects directly from S3 as an origin. The most operationally straightforward pattern is to add S3 as a second origin and create a separate cache behavior that routes requests for video paths (for example, /video/* or /*.mp4) to the S3 origin. With this configuration, CloudFront caches the S3-served objects at edge locations according to the cache policy/headers, while the existing ALB origin continues serving dynamic application paths. This isolates static media delivery from the application tier and improves performance by maximizing cache hits at the edge.
Option A is not how CloudFront origin selection works: CloudFront does not “redirect” to S3 based on headers; it selects an origin per behavior.
Option C (signed URLs) is an access-control mechanism and does not, by itself, ensure the objects are retrieved from S3 or cached correctly.
Option D (origin groups) is for origin failover (primary/secondary) and does not provide path-based routing to ensure videos come from S3.
Reference: Amazon CloudFront Developer Guide – Origins and Cache Behaviors (path pattern routing)
Amazon S3 User Guide – Using S3 as an origin for CloudFront
AWS SysOps Administrator Study Guide – Content delivery patterns with CloudFront
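The two-origin pattern above can be sketched in a distribution config. Domain names, the `/videos/*` path pattern, and the managed cache policy IDs below are illustrative assumptions and should be verified against your environment.

```yaml
# Hypothetical sketch: S3 origin + path-based cache behavior for video files,
# while the default behavior continues routing to the existing ALB.
Resources:
  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: alb-origin
            DomainName: my-alb-123.us-east-1.elb.amazonaws.com  # assumed ALB
            CustomOriginConfig:
              OriginProtocolPolicy: https-only
          - Id: s3-video-origin
            DomainName: my-video-bucket.s3.amazonaws.com        # assumed bucket
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: alb-origin                 # dynamic app traffic
          ViewerProtocolPolicy: redirect-to-https
          CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad   # managed CachingDisabled (verify)
        CacheBehaviors:
          - PathPattern: /videos/*                   # assumed video path
            TargetOriginId: s3-video-origin
            ViewerProtocolPolicy: redirect-to-https
            CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6 # managed CachingOptimized (verify)
```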
A company runs a business application on more than 300 Linux-based instances. Each instance has the AWS Systems Manager Agent (SSM Agent) installed. The company expects the number of instances to grow in the future. All business application instances have the same user-defined tag.
A CloudOps engineer wants to run a command on all the business application instances to download and install a package from a private repository. To avoid overwhelming the repository, the CloudOps engineer wants to ensure that no more than 30 downloads occur at one time.
Which solution will meet this requirement in the MOST operationally efficient way?
- A . Use a secondary tag to create 10 batches of 30 instances each. Use a Systems Manager Run Command document to download and install the package. Run each batch one time.
- B . Use an AWS Lambda function to automatically run a Systems Manager Run Command document. Set reserved concurrency for the Lambda function to 30.
- C . Use a Systems Manager Run Command document to download and install the package. Use rate control to set concurrency to 30. Specify the target by using the user-defined tag.
- D . Use a parallel workflow state in AWS Step Functions. Set the number of parallel states to 30.
C
Explanation:
AWS Systems Manager Run Command includes a built-in rate control feature that allows administrators to control the maximum number of concurrent executions across target instances. This directly addresses the requirement to limit downloads to 30 at a time without custom orchestration or additional services.
By targeting instances using tags, the solution automatically scales as new instances are added, which aligns with future growth expectations. Rate control ensures controlled concurrency and protects the private repository from overload.
Option A is manual and does not scale operationally.
Option B introduces unnecessary complexity with Lambda and concurrency management that does not map cleanly to instance execution concurrency.
Option D significantly increases architectural complexity without added value.
Run Command with rate control is the simplest, most native, and most scalable solution.
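The same `MaxConcurrency` rate control is available when invoking Run Command directly (for example via `aws ssm send-command`); as a declarative sketch, a State Manager association shows the shape. The tag key/value, error threshold, and install command below are assumptions.

```yaml
# Hypothetical sketch: tag-targeted execution with concurrency capped at 30,
# so no more than 30 instances download the package at one time.
Resources:
  InstallPackage:
    Type: AWS::SSM::Association
    Properties:
      Name: AWS-RunShellScript
      MaxConcurrency: '30'             # rate control: 30 instances at a time
      MaxErrors: '10%'                 # assumed error tolerance
      Targets:
        - Key: tag:App                 # assumed user-defined tag key
          Values:
            - business-app             # assumed tag value
      Parameters:
        commands:
          - yum install -y mypackage   # assumed install command from the private repo
```

Because targeting is tag-based, newly launched instances with the same tag are covered automatically as the fleet grows.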
A company is running an ecommerce application on AWS. The application maintains many open but idle connections to an Amazon Aurora DB cluster. During times of peak usage, the database produces the following error message: "Too many connections." The database clients are also experiencing errors.
Which solution will resolve these errors?
- A . Increase the read capacity units (RCUs) and the write capacity units (WCUs) on the database.
- B . Configure RDS Proxy. Update the application with the RDS Proxy endpoint.
- C . Turn on enhanced networking for the DB instances.
- D . Modify the DB cluster to use a burstable instance type.
B
Explanation:
The correct solution is B. Configure RDS Proxy, because RDS Proxy is specifically designed to manage and pool database connections for Amazon Aurora and Amazon RDS. AWS CloudOps documentation states that RDS Proxy reduces database load and prevents connection exhaustion by reusing existing connections and managing spikes in application demand.
In this scenario, the ecommerce application maintains many idle connections, which consume
database connection slots even when not actively used. During peak traffic, new connections cannot be established, resulting in the “Too many connections” error. RDS Proxy sits between the application and the Aurora DB cluster, maintaining a smaller, efficient pool of database connections and multiplexing application requests over those connections.
Option A is incorrect because RCUs and WCUs apply to DynamoDB, not Aurora.
Option C is incorrect because enhanced networking improves network throughput and latency but does not manage database connections.
Option D is incorrect because changing instance types does not address idle connection buildup and can still result in connection exhaustion.
AWS CloudOps best practices recommend RDS Proxy for applications with connection-heavy workloads, unpredictable traffic patterns, or serverless components.
Reference: Amazon RDS User Guide – RDS Proxy concepts and benefits
Amazon Aurora User Guide – Managing database connections
AWS SysOps Administrator Study Guide – Database reliability and scaling
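An RDS Proxy in front of the Aurora cluster can be sketched as follows. The engine family, ARNs, subnet IDs, cluster identifier, and pool settings are all placeholder assumptions.

```yaml
# Hypothetical sketch: RDS Proxy pooling connections in front of Aurora.
# The application switches its connection string to the proxy endpoint.
Resources:
  DbProxy:
    Type: AWS::RDS::DBProxy
    Properties:
      DBProxyName: ecommerce-proxy
      EngineFamily: MYSQL                  # assumed Aurora MySQL-compatible
      RoleArn: arn:aws:iam::111122223333:role/ProxySecretsRole         # assumed role
      Auth:
        - AuthScheme: SECRETS
          SecretArn: arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds  # assumed secret
          IAMAuth: DISABLED
      VpcSubnetIds:
        - subnet-0123456789abcdef0         # assumed subnets
        - subnet-0fedcba9876543210
      IdleClientTimeout: 1800              # drop idle client connections after 30 min

  ProxyTargetGroup:
    Type: AWS::RDS::DBProxyTargetGroup
    Properties:
      DBProxyName: !Ref DbProxy
      TargetGroupName: default
      DBClusterIdentifiers:
        - my-aurora-cluster                # assumed cluster identifier
      ConnectionPoolConfigurationInfo:
        MaxConnectionsPercent: 90
```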
An AWS Lambda function is intermittently failing several times a day. A CloudOps engineer must find out how often this error occurred in the last 7 days.
Which action will meet this requirement in the MOST operationally efficient manner?
- A . Use Amazon Athena to query the Amazon CloudWatch logs that are associated with the Lambda function.
- B . Use Amazon Athena to query the AWS CloudTrail logs that are associated with the Lambda function.
- C . Use Amazon CloudWatch Logs Insights to query the associated Lambda function logs.
- D . Use Amazon OpenSearch Service to stream the Amazon CloudWatch logs for the Lambda function.
C
Explanation:
The AWS Cloud Operations and Monitoring documentation states that Amazon CloudWatch Logs Insights provides a purpose-built query engine for analyzing and visualizing log data directly within CloudWatch. For Lambda, all invocation results (including errors) are automatically logged to CloudWatch Logs.
By querying these logs with CloudWatch Logs Insights, the CloudOps engineer can efficiently count the number of “ERROR” or “Exception” occurrences over the past 7 days by using the purpose-built Logs Insights query language. This method is serverless and cost-efficient, and it queries the log data in place.
Athena (Options A and B) would require exporting data to Amazon S3, and OpenSearch (Option D) adds unnecessary operational complexity.
Thus, Option C provides the most efficient and native AWS CloudOps approach for rapid Lambda error analysis.
Reference: AWS Cloud Operations & Monitoring Guide – Analyzing Lambda Logs with CloudWatch Logs Insights
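A query of the kind described above, run against the function's log group (for example `/aws/lambda/my-function`, a placeholder name) with a 7-day time range, might look like this. The exact match pattern depends on how the function logs its errors.

```
fields @timestamp, @message
| filter @message like /(?i)(ERROR|Exception)/
| stats count() as errorCount by bin(1d)
```

This returns a per-day error count for the selected window, giving the engineer how often the failure occurred without exporting any data.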
