Practice Free SOA-C03 Exam Online Questions
A company runs thousands of Amazon EC2 instances that are based on the Amazon Linux 2 Amazon Machine Image (AMI). A SysOps administrator must implement a solution to record commands and output from any user that needs an interactive session on one of the EC2 instances. The solution must log the data to a durable storage location. The solution also must provide automated notifications and alarms that are based on the log data.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Configure command session logging on each EC2 instance. Configure the unified Amazon CloudWatch agent to send session logs to Amazon CloudWatch Logs. Set up query filters and alerts by using Amazon Athena.
- B . Require all users to use a central bastion host when they need command line access to an EC2 instance. Configure the unified Amazon CloudWatch agent on the bastion host to send session logs to Amazon CloudWatch Logs. Set up a metric filter and a metric alarm for relevant security findings in CloudWatch Logs.
- C . Require all users to use AWS Systems Manager Session Manager when they need command line access to an EC2 instance. Configure Session Manager to stream session logs to Amazon CloudWatch Logs. Set up a metric filter and a metric alarm for relevant security findings in CloudWatch Logs.
- D . Configure command session logging on each EC2 instance. Require all users to use AWS Systems Manager Run Command documents when they need command line access to an EC2 instance. Configure the unified Amazon CloudWatch agent to send session logs to Amazon CloudWatch Logs. Set up CloudWatch alarms that are based on Amazon Athena query results.
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
The most operationally efficient solution is C because AWS Systems Manager Session Manager is purpose-built for secure, auditable interactive access to EC2 instances at scale, without managing bastion hosts or distributing SSH keys. Session Manager can be configured to log session activity, including commands and output, to durable destinations such as Amazon CloudWatch Logs (and optionally Amazon S3). This directly satisfies the requirement to record interactive sessions and store logs durably.
For automated notifications and alarms, CloudWatch Logs supports metric filters that transform matching log patterns into CloudWatch metrics. Those metrics can then drive CloudWatch alarms and notifications (for example, via Amazon SNS). This is a standard CloudOps pattern: centralize logs, derive metrics from security-relevant patterns, and alert automatically.
Options A and D require installing and operating agents and building a more complex analytics path (Athena queries for alerting), which is less efficient and introduces more moving parts across thousands of instances.
Option B adds a bastion host dependency that becomes an operational burden (scaling, patching, hardening, HA) and a potential choke point. Session Manager reduces these burdens by using SSM Agent already installed, IAM-based access control, and centralized logging/monitoring integrations.
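The metric-filter-to-alarm pattern described above can be sketched as boto3 parameter shapes. This is a minimal sketch, not a definitive implementation: the log group name, filter pattern, and SNS topic ARN are assumptions to adapt to your environment.

```python
# Sketch: derive a CloudWatch metric from Session Manager log patterns,
# then alarm on it. Names and the filter pattern below are assumptions.

# Metric filter: turn matching session log lines into a custom metric.
filter_params = {
    "logGroupName": "/ssm/session-logs",      # assumed session log group
    "filterName": "SuspiciousCommandFilter",
    "filterPattern": '"sudo su"',             # example security-relevant pattern
    "metricTransformations": [{
        "metricName": "SuspiciousSessionCommands",
        "metricNamespace": "Security/Sessions",
        "metricValue": "1",                   # count one per matching line
    }],
}

# Alarm: notify an SNS topic whenever the derived metric is nonzero.
alarm_params = {
    "AlarmName": "suspicious-session-commands",
    "Namespace": "Security/Sessions",
    "MetricName": "SuspiciousSessionCommands",
    "Statistic": "Sum",
    "Period": 300,
    "EvaluationPeriods": 1,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:security-alerts"],
}

# To apply (requires AWS credentials):
#   import boto3
#   boto3.client("logs").put_metric_filter(**filter_params)
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

The key link is that the alarm's `Namespace`/`MetricName` must match the metric transformation on the filter; the filter emits the metric, and the alarm publishes to the existing SNS topic.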
Reference: AWS Systems Manager User Guide – Session Manager and session logging to CloudWatch Logs/S3
Amazon CloudWatch Logs User Guide – Metric filters and alarms from log patterns
AWS SysOps Administrator Study Guide – Centralized logging, auditing, and operational monitoring
A media company hosts a public news and video portal on AWS. The portal uses an Amazon DynamoDB table with provisioned capacity to maintain an index of video files that are stored in an Amazon S3 bucket. During a recent event, millions of visitors came to the portal for news. This increase in traffic caused read requests to be throttled in the DynamoDB table. Videos could not be displayed in the portal.
The company’s operations team manually increased the provisioned capacity on a temporary basis to meet the demand. The company wants the operations team to receive an alert before the table is throttled in the future. The company has created an Amazon Simple Notification Service (Amazon SNS) topic and has subscribed the operations team’s email address to the SNS topic.
What should the company do next to meet these requirements?
- A . Create an Amazon CloudWatch alarm that uses the ConsumedReadCapacityUnits metric. Set the alarm threshold to a value that is close to the DynamoDB table’s provisioned capacity. Configure the alarm to publish notifications to the SNS topic.
- B . Turn on auto scaling on the DynamoDB table. Configure an Amazon EventBridge rule to publish notifications to the SNS topic during scaling events.
- C . Turn on Amazon CloudWatch Logs for the DynamoDB table. Create an Amazon CloudWatch metric filter to pattern match the THROTTLING_EXCEPTION status code from DynamoDB. Create a CloudWatch alarm for the metric. Select the SNS topic for notifications.
- D . Configure the application to store logs in Amazon CloudWatch Logs. Create an Amazon CloudWatch metric filter to pattern match the THROTTLING_EXCEPTION status code from DynamoDB. Create a CloudWatch alarm for the metric. Select the SNS topic for notifications.
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
The requirement is to alert before throttling occurs. For a DynamoDB table in provisioned capacity mode, throttling happens when demand approaches or exceeds provisioned throughput. CloudWatch provides direct table metrics such as ConsumedReadCapacityUnits and ProvisionedReadCapacityUnits (and related utilization signals). Creating an alarm on ConsumedReadCapacityUnits with a threshold set close to the table’s provisioned read capacity provides an early warning that the table is nearing its limit, before actual throttling prevents reads. The alarm can publish directly to the existing SNS topic so the operations team is notified proactively.
Options C and D focus on detecting throttling after it occurs by matching throttling exceptions in logs. That approach is reactive and does not satisfy the requirement to alert before the table is throttled. Option B (auto scaling) may reduce the likelihood of throttling, but it does not directly satisfy the alerting requirement, and scaling-event notifications are not a reliable proxy for approaching the throttle threshold (they may not fire early enough, depending on the scaling configuration). The simplest, most direct CloudOps approach is a CloudWatch alarm on consumption nearing provisioned capacity.
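One detail worth a worked example: ConsumedReadCapacityUnits is emitted as a Sum over the alarm period, while provisioned capacity is expressed in units per second, so the threshold must be scaled by the period. A minimal parameter sketch (table name, topic ARN, and the 80% threshold are assumptions):

```python
# Sketch: alarm when consumed reads approach the table's provisioned capacity.
provisioned_rcu = 1000   # provisioned read capacity, in units per second (assumed)
period = 60              # alarm evaluation period, in seconds

# Sum over the period vs. per-second provisioned value: scale by the period,
# then alert at 80% of that ceiling.
threshold = int(0.8 * provisioned_rcu * period)

alarm_params = {
    "AlarmName": "video-index-reads-near-capacity",
    "Namespace": "AWS/DynamoDB",
    "MetricName": "ConsumedReadCapacityUnits",
    "Dimensions": [{"Name": "TableName", "Value": "video-index"}],  # assumed name
    "Statistic": "Sum",
    "Period": period,
    "EvaluationPeriods": 1,
    "Threshold": threshold,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}
# Apply with: boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```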
Reference: Amazon DynamoDB Developer Guide – Provisioned capacity, throttling behavior, CloudWatch metrics
Amazon CloudWatch User Guide – Alarms and SNS notifications
AWS SysOps Administrator Study Guide – Monitoring DynamoDB and capacity planning
A company uses a large number of Linux-based Amazon EC2 instances to run business operations. The company uses AWS Systems Manager to manage the EC2 instances. The company wants to ensure that the Systems Manager Agent (SSM Agent) is always up to date with the latest version.
Which solution will meet this requirement in the MOST operationally efficient way?
- A . Enable the Auto update SSM Agent setting in Systems Manager Fleet Manager.
- B . Subscribe to SSM Agent GitHub notifications and use Lambda to update agents.
- C . Enable the Auto update SSM Agent setting in Systems Manager Patch Manager.
- D . Use GitHub notifications and a Systems Manager Automation document.
A
Explanation:
Comprehensive Explanation:
AWS Systems Manager provides a built-in capability to automatically update the SSM Agent on managed instances. Enabling Auto update SSM Agent ensures that instances receive the latest agent version without manual intervention or custom automation.
This setting is centrally managed and applies across the fleet, making it the most scalable and operationally efficient solution. It eliminates the need for external notifications, custom Lambda functions, or scheduled automation workflows.
Options B and D introduce unnecessary complexity and operational burden.
Option C is incorrect because Patch Manager is designed for operating system patching, not for managing SSM Agent updates.
Therefore, enabling automatic SSM Agent updates is the best solution.
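Where a scriptable equivalent is preferred over the console setting, one common approach is a State Manager association that runs the AWS-managed AWS-UpdateSSMAgent document on a schedule. A minimal parameter sketch (the target tag and schedule are assumptions):

```python
# Sketch: keep SSM Agent current fleet-wide via a scheduled State Manager
# association with the AWS-managed AWS-UpdateSSMAgent document.
# The tag filter and schedule below are assumptions.
association_params = {
    "Name": "AWS-UpdateSSMAgent",              # AWS-managed SSM document
    "AssociationName": "keep-ssm-agent-current",
    "Targets": [{"Key": "tag:Environment", "Values": ["production"]}],
    "ScheduleExpression": "rate(14 days)",     # re-run periodically
}
# Apply with: boto3.client("ssm").create_association(**association_params)
```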
A company’s security policy requires incoming SSH traffic to be restricted to a defined set of addresses. The company is using an AWS Config rule to check whether security groups allow unrestricted incoming SSH traffic.
A CloudOps engineer discovers a noncompliant resource and fixes the security group manually. The CloudOps engineer wants to automate the remediation of other noncompliant resources.
What is the MOST operationally efficient solution that meets these requirements?
- A . Create a CloudWatch alarm for the AWS Config rule and invoke a Lambda function to remediate.
- B . Configure an automatic remediation action on the AWS Config rule using AWS-DisableIncomingSSHOnPort22.
- C . Create an EventBridge rule for AWS Config events and invoke a Lambda function.
- D . Run a scheduled Lambda function to inspect and remediate security groups.
B
Explanation:
Comprehensive Explanation:
AWS Config supports automatic remediation for both managed and custom rules. When a resource is found noncompliant, AWS Config can automatically invoke an AWS Systems Manager Automation document to remediate the issue. The managed automation document AWS-DisableIncomingSSHOnPort22 is specifically designed to remove unrestricted SSH access (0.0.0.0/0) from security group inbound rules.
This approach is the most operationally efficient because it requires no custom code, no event orchestration, and no ongoing maintenance. The remediation runs immediately when AWS Config detects noncompliance and ensures consistent enforcement of security policy across all applicable resources.
Options A, C, and D rely on Lambda functions and event-driven glue logic, which significantly increase operational overhead, complexity, and long-term maintenance costs. These approaches are unnecessary when AWS provides a fully managed remediation capability.
Therefore, configuring an automatic remediation action directly on the AWS Config rule is the correct and most efficient solution.
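The remediation configuration described above can be sketched as parameters for the AWS Config API. This is a hedged sketch: the rule name, role ARN, and the runbook's parameter names are assumptions; verify them against the runbook's schema before use.

```python
# Sketch: attach automatic remediation to an AWS Config rule, invoking the
# AWS-DisableIncomingSSHOnPort22 Automation runbook on noncompliant resources.
# Rule name, role ARN, and runbook parameter names are assumptions.
remediation_params = {
    "RemediationConfigurations": [{
        "ConfigRuleName": "restricted-ssh",             # assumed rule name
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-DisableIncomingSSHOnPort22",
        "Automatic": True,                              # remediate without manual action
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            # RESOURCE_ID passes the noncompliant security group's ID.
            "SecurityGroupId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            "AutomationAssumeRole": {"StaticValue": {
                "Values": ["arn:aws:iam::123456789012:role/ConfigRemediationRole"]
            }},
        },
    }]
}
# Apply with: boto3.client("config").put_remediation_configurations(**remediation_params)
```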
A company has an AWS CloudFormation template that includes an AWS::EC2::Instance resource and a custom resource (Lambda function). The Lambda function fails because it runs before the EC2 instance is launched.
Which solution will resolve this issue?
- A . Add a DependsOn attribute to the custom resource. Specify the EC2 instance in the DependsOn attribute.
- B . Update the custom resource’s service token to point to a valid Lambda function.
- C . Update the Lambda function to use the cfn-response module to send a response to the custom resource.
- D . Use the Fn::If intrinsic function to check for the EC2 instance before the custom resource runs.
A
Explanation:
The AWS Cloud Operations and Infrastructure-as-Code documentation specifies that when using AWS CloudFormation, resources are created in parallel by default unless explicitly ordered using DependsOn.
If a custom resource (Lambda) depends on another resource (like an EC2 instance) to exist before execution, a DependsOn attribute must be added to enforce creation order. This ensures the EC2 instance is launched and available before the custom resource executes its automation logic.
Updating the service token (Option B) doesn’t affect order of execution. The cfn-response module (Option C) handles callback communication but not sequencing. Fn::If (Option D) is for conditional creation, not dependency control.
Therefore, Option A is correct: adding a DependsOn attribute guarantees that CloudFormation provisions the EC2 instance before executing the Lambda custom resource.
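A minimal template fragment illustrating the fix (resource names and the Lambda function are placeholders):

```yaml
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abcdef1234567890   # placeholder AMI ID
      InstanceType: t3.micro
  InstanceSetup:
    Type: Custom::InstanceSetup
    DependsOn: AppInstance             # explicit ordering: create the instance first
    Properties:
      ServiceToken: !GetAtt SetupFunction.Arn   # assumed Lambda defined elsewhere
```

Note that referencing the instance with Ref or GetAtt inside the custom resource's properties would also create an implicit dependency; an explicit DependsOn is needed when no such reference exists.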
Reference: AWS Cloud Operations & Infrastructure-as-Code Guide – Using DependsOn for Resource Creation Order in CloudFormation Templates
A company runs applications on Amazon EC2 instances. The company wants to ensure that SSH ports on the EC2 instances are never open. The company has enabled AWS Config and has set up the restricted-ssh AWS managed rule.
A CloudOps engineer must implement a solution to remediate SSH port access for noncompliant security groups.
What should the engineer do to meet this requirement with the MOST operational efficiency?
- A . Configure the AWS Config rule to identify noncompliant security groups. Configure the rule to use the AWS-PublishSNSNotification AWS Systems Manager Automation runbook to send notifications about noncompliant resources.
- B . Configure the AWS Config rule to identify noncompliant security groups. Configure the rule to use the AWS-DisableIncomingSSHOnPort22 AWS Systems Manager Automation runbook to remediate noncompliant resources.
- C . Make an AWS Config API call to search for noncompliant security groups. Disable SSH access for noncompliant security groups by using a Deny rule.
- D . Configure the AWS Config rule to identify noncompliant security groups. Manually update each noncompliant security group to remove the Allow rule.
B
Explanation:
The AWS Cloud Operations and Governance documentation specifies that AWS Config can be paired with AWS Systems Manager Automation runbooks for automatic remediation of noncompliant resources.
For SSH restrictions, the restricted-ssh managed rule detects any security group allowing inbound traffic on port 22. To automatically remediate these findings, AWS provides the AWS-DisableIncomingSSHOnPort22 runbook. This runbook programmatically removes inbound rules that allow port 22 traffic from affected security groups.
This approach achieves continuous compliance with minimal human intervention. By contrast, sending notifications (Option A) does not enforce remediation, API-based scripts (Option C) add operational overhead, and manual remediation (Option D) violates automation best practices.
Therefore, the most efficient CloudOps solution is Option B, using AWS Config with the AWS-DisableIncomingSSHOnPort22 automation runbook for automatic, scalable enforcement.
Reference: AWS Cloud Operations & Governance Guide – Automated Security Remediation Using Config Managed Rules and Systems Manager Runbooks
An AWS CloudFormation template creates an Amazon RDS instance. This template is used to build up development environments as needed and then delete the stack when the environment is no longer required. The RDS-persisted data must be retained for further use, even after the CloudFormation stack is deleted.
How can this be achieved in a reliable and efficient way?
- A . Write a script to continue backing up the RDS instance every five minutes.
- B . Create an AWS Lambda function to take a snapshot of the RDS instance, and manually invoke the function before deleting the stack.
- C . Use the Snapshot Deletion Policy in the CloudFormation template definition of the RDS instance.
- D . Create a new CloudFormation template to perform backups of the RDS instance, and run this template before deleting the stack.
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
AWS CloudFormation supports the DeletionPolicy attribute to control what happens to a resource when a stack is deleted. For Amazon RDS DB instances, setting DeletionPolicy: Snapshot instructs CloudFormation to retain a final DB snapshot automatically at stack deletion. CloudOps best practice recommends using this native mechanism for data retention and auditability, avoiding manual scripts or out-of-band processes. Options A, B, and D introduce operational overhead and potential human error. With DeletionPolicy set to Snapshot, the environment can be repeatedly created and torn down while preserving data states for later restoration with minimal manual steps. This aligns with IaC principles (declarative, repeatable, and reliable) and supports efficient lifecycle management of ephemeral development stacks.
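A minimal sketch of the relevant resource definition (engine, instance class, and credential handling are placeholders):

```yaml
Resources:
  DevDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot        # take a final snapshot when the stack is deleted
    Properties:
      Engine: mysql                 # placeholder engine and sizing
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: "{{resolve:ssm-secure:dev-db-password}}"  # assumed SSM parameter
```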
Reference:
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Deployment, Provisioning and Automation
• AWS CloudFormation User Guide – DeletionPolicy Attribute (Snapshot for RDS)
• AWS Well-Architected Framework – Operational Excellence Pillar
A company is performing deployments of an application at regular intervals. Users report that the application sometimes does not work properly. The company discovers that some users’ browsers are fetching previous versions of the JavaScript files. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution.
A SysOps administrator must implement a solution to ensure that CloudFront serves the latest version of the JavaScript files. The solution must not affect application server performance.
Which solution will meet these requirements?
- A . Reduce the maximum TTL and default TTL of the CloudFront distribution behavior to 0.
- B . Add a final step in the deployment process to invalidate all files in the CloudFront distribution.
- C . Add a final step in the deployment process to invalidate only the changed JavaScript files in the CloudFront distribution.
- D . Remove CloudFront from the path of serving JavaScript files. Serve the JavaScript files directly through the ALB.
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
The correct answer is C because selective CloudFront invalidation ensures that only updated JavaScript files are refreshed across edge locations. AWS CloudOps documentation explains that invalidations remove cached objects so that CloudFront fetches the latest version from the origin on the next request.
Invalidating only changed files minimizes cost, reduces operational impact, and avoids unnecessary origin requests. This approach ensures users always receive the latest application assets without degrading backend performance.
Option A is incorrect because setting TTLs to 0 forces CloudFront to query the origin for every request, increasing load on the ALB and EC2 instances.
Option B is inefficient and costly because invalidating all files is unnecessary.
Option D removes the benefits of CloudFront caching and increases latency.
AWS CloudOps best practices recommend targeted invalidations during deployments to balance performance, cost, and correctness.
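A targeted invalidation as a deployment step can be sketched as parameters for the CloudFront API (the distribution ID and file paths are assumptions):

```python
# Sketch: invalidate only the JavaScript files changed by this deployment.
# Distribution ID and paths are assumptions; CallerReference must be unique
# per invalidation request.
import time

changed_files = ["/static/js/app.js", "/static/js/vendor.js"]

invalidation_params = {
    "DistributionId": "E1ABCDEF234567",
    "InvalidationBatch": {
        "Paths": {"Quantity": len(changed_files), "Items": changed_files},
        "CallerReference": str(int(time.time())),
    },
}
# Apply with: boto3.client("cloudfront").create_invalidation(**invalidation_params)
```

Keeping the path list limited to changed files is also what keeps cost down: CloudFront bills invalidation paths beyond the monthly free allotment, so invalidating everything on each deployment is wasteful.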
Reference: Amazon CloudFront Developer Guide – Cache Invalidation
AWS SysOps Administrator Study Guide – Content Delivery
AWS Well-Architected Framework – Performance Efficiency and Reliability
A company uses Amazon ElastiCache (Redis OSS) to cache application data. A CloudOps engineer must implement a solution to increase the resilience of the cache. The solution also must minimize the recovery time objective (RTO).
Which solution will meet these requirements?
- A . Replace ElastiCache (Redis OSS) with ElastiCache (Memcached).
- B . Create an Amazon EventBridge rule to initiate a backup every hour. Restore the backup when necessary.
- C . Create a read replica in a second Availability Zone. Enable Multi-AZ for the ElastiCache (Redis OSS) replication group.
- D . Enable automatic backups. Restore the backups when necessary.
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
For high availability and fast failover, ElastiCache for Redis supports replication groups with Multi-AZ and automatic failover. CloudOps guidance states that a primary node can be paired with one or more replicas across multiple Availability Zones; if the primary fails, Redis automatically promotes a replica to primary in seconds, thereby minimizing RTO. This architecture maintains in-memory data continuity without waiting for backup restore operations. Backups (Options B and D) provide durability but require restore and re-warm procedures that increase RTO and may impact application latency. Switching engines (Option A) to Memcached does not provide Redis replication/failover semantics and would not inherently improve resilience for this use case. Therefore, creating a read replica in a different AZ and enabling Multi-AZ with automatic failover is the prescribed CloudOps pattern to increase resilience and achieve the lowest practical RTO for Redis caches.
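The replication-group configuration described above can be sketched as parameters for the ElastiCache API (identifiers and the node type are assumptions):

```python
# Sketch: Redis replication group with one replica and Multi-AZ automatic
# failover, for fast promotion if the primary's AZ fails.
# Group ID and node type are assumptions.
replication_group_params = {
    "ReplicationGroupId": "app-cache",
    "ReplicationGroupDescription": "Application cache with Multi-AZ failover",
    "Engine": "redis",
    "CacheNodeType": "cache.t3.medium",
    "NumCacheClusters": 2,             # one primary + one replica (separate AZs)
    "MultiAZEnabled": True,
    "AutomaticFailoverEnabled": True,  # required for automatic replica promotion
}
# Apply with: boto3.client("elasticache").create_replication_group(**replication_group_params)
```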
Reference:
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Reliability and Business Continuity
• Amazon ElastiCache for Redis – Replication Groups, Multi-AZ, and Automatic Failover
• AWS Well-Architected Framework – Reliability Pillar
A financial services company stores customer images in an Amazon S3 bucket in the us-east-1 Region. To comply with regulations, the company must ensure that all existing objects are replicated to an S3 bucket in a second AWS Region. If an object replication fails, the company must be able to retry replication for the object.
Which solution will meet these requirements?
- A . Configure Amazon S3 Cross-Region Replication (CRR). Use Amazon S3 live replication to replicate existing objects.
- B . Configure Amazon S3 Cross-Region Replication (CRR). Use S3 Batch Replication to replicate existing objects.
- C . Configure Amazon S3 Cross-Region Replication (CRR). Use S3 Replication Time Control (S3 RTC) to replicate existing objects.
- D . Use S3 Lifecycle rules to move objects to the destination bucket in a second Region.
B
Explanation:
Per the AWS Cloud Operations and S3 Data Management documentation, Cross-Region Replication (CRR) automatically replicates new objects between S3 buckets across Regions. However, CRR alone does not retroactively replicate existing objects created before replication configuration. To include such objects, AWS introduced S3 Batch Replication.
S3 Batch Replication scans the source bucket and replicates all existing objects that were not copied previously. Additionally, it can retry failed replication tasks automatically, ensuring regulatory compliance for complete dataset replication.
S3 Replication Time Control (S3 RTC) guarantees predictable replication times for new objects only; it does not cover previously stored data. S3 Lifecycle rules (Option D) transition objects between storage classes or expire them; they do not replicate objects to a bucket in another Region.
Therefore, the correct solution is to use S3 Cross-Region Replication (CRR) combined with S3 Batch Replication to ensure all current and future data is synchronized across Regions with retry capability.
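A Batch Replication job can be created through the S3 Control API. The sketch below is hedged: the account ID, bucket ARN, and role ARN are assumptions, and it relies on S3 generating the manifest of objects eligible for replication.

```python
# Sketch: S3 Batch Replication job to copy pre-existing objects under an
# existing CRR configuration. Account ID, bucket ARN, and role ARN are
# assumptions; the generated manifest selects objects eligible for replication.
batch_job_params = {
    "AccountId": "123456789012",
    "Operation": {"S3ReplicateObject": {}},   # Batch Replication operation
    "ManifestGenerator": {
        "S3JobManifestGenerator": {
            "SourceBucket": "arn:aws:s3:::customer-images-us-east-1",
            "EnableManifestOutput": False,
            "Filter": {"EligibleForReplication": True},
        }
    },
    "Report": {"Enabled": False},
    "Priority": 1,
    "RoleArn": "arn:aws:iam::123456789012:role/BatchReplicationRole",
    "ConfirmationRequired": False,
}
# Apply with: boto3.client("s3control").create_job(**batch_job_params)
```

The same job type can be re-run to retry objects whose replication previously failed, which is what satisfies the retry requirement in this scenario.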
Reference: AWS Cloud Operations and S3 Guide – Section: Cross-Region Replication and Batch Replication for Existing Objects
