Practice Free SOA-C03 Exam Online Questions
A company manages a set of AWS accounts by using AWS Organizations. The company’s security team wants to use a native AWS service to regularly scan all AWS accounts against the Center for Internet Security (CIS) AWS Foundations Benchmark.
What is the MOST operationally efficient way to meet these requirements?
- A . Designate a central security account as the AWS Security Hub administrator account. Use scripts to invite and accept member accounts.
- B . Run the CIS AWS Foundations Benchmark by using Amazon Inspector.
- C . Designate a central security account as the Amazon GuardDuty administrator account and configure CIS scans.
- D . Designate an AWS Security Hub administrator account, automatically enroll new organization accounts, and enable CIS AWS Foundations Benchmark.
D
Explanation:
AWS Security Hub is the native AWS service that provides continuous compliance checks against
security standards, including the CIS AWS Foundations Benchmark. When integrated with AWS Organizations, Security Hub can automatically enroll existing and newly created accounts, eliminating manual invitations and scripts.
By designating a Security Hub administrator account and enabling automatic account onboarding, the organization ensures consistent security posture monitoring across all accounts. CIS benchmark checks are run continuously, and findings are aggregated centrally, simplifying governance and remediation workflows.
Amazon Inspector focuses on vulnerability scanning for EC2 instances, container images, and Lambda functions, not CIS compliance. GuardDuty is a threat detection service and does not run CIS benchmarks.
Therefore, using AWS Security Hub with automatic organization-wide enrollment is the most efficient solution.
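As a rough sketch, the enrollment flow maps to three Security Hub API calls. The account ID, Region, and benchmark version below are illustrative assumptions, not values from the question:

```python
import json

# Illustrative values; replace with your own (all three are assumptions).
ADMIN_ACCOUNT = "111122223333"
REGION = "us-east-1"

# Region-scoped ARN for the CIS AWS Foundations Benchmark standard (v1.4.0 shown).
cis_arn = (
    f"arn:aws:securityhub:{REGION}::standards/"
    "cis-aws-foundations-benchmark/v/1.4.0"
)

# The three calls, in order, as (API action, request payload):
# 1. Run from the Organizations management account to delegate administration.
# 2. Auto-enroll accounts as they join the organization.
# 3. Turn on continuous CIS checks from the administrator account.
steps = [
    ("enable-organization-admin-account", {"AdminAccountId": ADMIN_ACCOUNT}),
    ("update-organization-configuration", {"AutoEnable": True}),
    ("batch-enable-standards",
     {"StandardsSubscriptionRequests": [{"StandardsArn": cis_arn}]}),
]
for action, payload in steps:
    print(action, json.dumps(payload))
```

Each payload would be passed to the matching AWS CLI `securityhub` subcommand or boto3 method; no manual invitations or scripts are involved.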
A user working in the Amazon EC2 console increased the size of an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 Windows instance. The change is not reflected in the file system.
What should a CloudOps engineer do to resolve this issue?
- A . Extend the file system with operating system-level tools to use the new storage capacity.
- B . Reattach the EBS volume to the EC2 instance.
- C . Reboot the EC2 instance that is attached to the EBS volume.
- D . Take a snapshot of the EBS volume. Replace the original volume with a volume that is created from the snapshot.
A
Explanation:
When an Amazon EBS volume is resized, the new storage capacity is immediately available to the attached EC2 instance. However, EBS does not automatically extend the file system. The CloudOps engineer must manually extend the file system within the operating system to utilize the additional space.
AWS documentation for EC2 and EBS specifies:
“After you increase the size of an EBS volume, use file system-specific tools to extend the file system so that the operating system can use the new storage capacity.”
On Windows instances, this can be achieved through Disk Management or diskpart commands. On Linux systems, utilities such as growpart and resize2fs are used.
Options B and C do not extend the file system, so the additional capacity remains unusable to the operating system.
Option D unnecessarily replaces the volume, which adds risk and downtime. Thus, Option A aligns with the Monitoring and Performance Optimization practices of AWS CloudOps by properly extending the file system to recognize the new capacity.
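The OS-level steps differ by platform. A minimal runbook sketch follows; the device name, partition number, and drive letter are assumptions to check against `lsblk` or Disk Management first:

```python
# Per-OS runbook for extending the file system after an EBS volume resize.
# Device, partition, and drive letter below are illustrative assumptions.
LINUX_EXT4 = [
    "sudo growpart /dev/xvda 1",   # grow the partition to fill the volume
    "sudo resize2fs /dev/xvda1",   # grow the ext4 file system into the partition
]
WINDOWS_POWERSHELL = [
    # Extend drive C: to the maximum size the enlarged volume supports.
    "$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax",
    "Resize-Partition -DriveLetter C -Size $max",
]
for cmd in LINUX_EXT4 + WINDOWS_POWERSHELL:
    print(cmd)
```

For an XFS root file system, the second Linux step would instead be `sudo xfs_growfs -d /`.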
Reference:
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 1
• Amazon EBS – Modifying EBS Volumes
• Amazon EC2 User Guide – Extending a File System After Resizing a Volume
• AWS Well-Architected Framework – Performance Efficiency Pillar
A CloudOps engineer needs to build an event infrastructure for custom application-specific events. The events must be sent to an AWS Lambda function for processing. The CloudOps engineer must record the events so they can be replayed later by event type or event time.
Which solution will meet these requirements?
- A . Create an Amazon EventBridge custom event bus, create an archive, and create a rule to send events to Lambda.
- B . Create an archive on the default event bus and use pattern matching.
- C . Create an EventBridge pipe and store events in an archive.
- D . Create a CloudWatch Logs log group and route events there.
A
Explanation:
Amazon EventBridge supports custom event buses for application-specific events. EventBridge archives allow events to be retained and replayed later based on time ranges or event patterns, directly meeting the replay requirement.
Creating a custom event bus provides isolation and governance for application events. The archive preserves events automatically, and EventBridge rules route events to AWS Lambda for processing without custom code.
Option B uses the default event bus, which receives AWS service events and does not provide isolation for custom application events. Option C does not fit either: EventBridge Pipes connect a single source to a single target and are not the mechanism for archiving and replaying bus events.
Option D lacks native replay functionality.
Therefore, a custom event bus with an archive and rule is the correct solution.
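As a sketch, the setup maps to three EventBridge calls. The bus name, event pattern, account ID, and retention period are illustrative assumptions:

```python
import json

BUS = "app-events"                             # hypothetical custom bus name
ACCOUNT, REGION = "111122223333", "us-east-1"  # illustrative values

# Pattern used to route matching events to Lambda; archived events can later
# be replayed filtered by the same fields (event type) or by time range.
pattern = {"source": ["com.example.app"], "detail-type": ["OrderPlaced"]}

calls = {
    "create-event-bus": {"Name": BUS},
    "create-archive": {
        "ArchiveName": f"{BUS}-archive",
        "EventSourceArn": f"arn:aws:events:{REGION}:{ACCOUNT}:event-bus/{BUS}",
        "EventPattern": json.dumps(pattern),
        "RetentionDays": 30,
    },
    "put-rule": {
        "Name": "app-events-to-lambda",
        "EventBusName": BUS,
        "EventPattern": json.dumps(pattern),
    },
}
for action, payload in calls.items():
    print(action, json.dumps(payload))
```

A follow-up `put-targets` call would then attach the Lambda function ARN to the rule.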
An environment consists of 100 Amazon EC2 Windows instances. The Amazon CloudWatch agent is deployed and running on all EC2 instances with a baseline configuration file to capture log files. There is a new requirement to capture DHCP log files that exist on 50 of the instances.
What is the MOST operationally efficient way to meet this new requirement?
- A . Create an additional CloudWatch agent configuration file to capture the DHCP logs. Use AWS Systems Manager Run Command to restart the CloudWatch agent on each EC2 instance with the append-config option.
- B . Log in to each EC2 instance with administrator rights and create a PowerShell script to push logs to CloudWatch.
- C . Run the CloudWatch agent configuration wizard on each EC2 instance and add DHCP logs manually.
- D . Run the CloudWatch agent configuration wizard on each EC2 instance and select the advanced detail level.
A
Explanation:
The CloudWatch agent supports modular configuration using the append-config option, which allows additional log sources to be added without overwriting the existing baseline configuration. This makes it ideal for incremental log collection changes across large fleets.
By creating a separate configuration file specifically for DHCP logs and using Systems Manager Run Command, the CloudOps engineer can remotely update only the relevant instances in a scalable and automated manner. This approach avoids manual login and preserves the existing baseline configuration.
Options B and C require manual intervention on each instance, which is not scalable or operationally efficient.
Option D captures additional logs indiscriminately and does not specifically target DHCP logs, potentially increasing noise and cost.
Therefore, Option A is the most efficient and AWS-recommended approach.
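As a sketch, the supplemental agent configuration and Run Command parameters might look like the following; the DHCP log file path, log group name, and Parameter Store location are assumptions:

```python
import json

# Supplemental CloudWatch agent config: only the new DHCP log source.
# Stored separately (for example in SSM Parameter Store) so append-config can
# merge it with the running baseline instead of replacing it.
dhcp_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [{
                    "file_path": "C:\\Windows\\System32\\dhcp\\DhcpSrvLog-*.log",
                    "log_group_name": "dhcp-server-logs",
                    "log_stream_name": "{instance_id}",
                }]
            }
        }
    }
}

# Parameters for the AmazonCloudWatch-ManageAgent SSM document, sent with
# Run Command to the 50 relevant instances (for example, selected by tag).
ssm_parameters = {
    "action": ["configure (append)"],
    "mode": ["ec2"],
    "optionalConfigurationSource": ["ssm"],
    "optionalConfigurationLocation": ["AmazonCloudWatch-dhcp-logs"],
    "optionalRestart": ["yes"],
}
print(json.dumps(dhcp_config))
print(json.dumps(ssm_parameters))
```

The `configure (append)` action is what preserves the baseline configuration while adding the DHCP source.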
A company has two AWS accounts connected by a transit gateway. Each account has one VPC in the same AWS Region. The company wants to simplify inbound and outbound rules in security groups by referencing security group IDs instead of IP CIDR blocks.
Which solution will meet this requirement?
- A . Create VPC peering connections and remove the transit gateway.
- B . Enable security group referencing support on the transit gateway.
- C . Enable security group referencing support on each transit gateway attachment.
- D . Deploy private NAT gateways in each VPC.
C
Explanation:
AWS Transit Gateway supports security group referencing across VPCs, but this feature must be explicitly enabled on each transit gateway attachment. Once enabled, security groups in one VPC can reference security groups in another VPC attached to the same transit gateway, simplifying rule management and improving security posture.
Enabling the feature on the transit gateway itself is not sufficient; it must be enabled per attachment to allow traffic evaluation based on security group IDs. This approach avoids brittle CIDR-based rules and allows dynamic scaling without rule updates.
Option A removes the transit gateway, which contradicts the existing architecture.
Option B is incomplete: enabling the setting at the transit gateway level alone does not activate referencing for traffic between the VPC attachments.
Option D does not address security group referencing.
Thus, enabling security group referencing on each transit gateway attachment is the correct solution.
A company has a workload that is sending log data to Amazon CloudWatch Logs. One of the fields includes a measure of application latency. A CloudOps engineer needs to monitor the p90 statistic of this field over time.
What should the CloudOps engineer do to meet this requirement?
- A . Create an Amazon CloudWatch Contributor Insights rule on the log data.
- B . Create a metric filter on the log data.
- C . Create a subscription filter on the log data.
- D . Create an Amazon CloudWatch Application Insights rule for the workload.
B
Explanation:
To analyze and visualize custom statistics such as the p90 latency (90th percentile), a CloudWatch metric must be generated from the log data. The correct method is to create a metric filter that extracts the latency value from each log event and publishes it as a CloudWatch metric. Once the metric is published, percentile statistics (p90, p95, etc.) can be displayed in CloudWatch dashboards or alarms.
AWS documentation states:
“You can use metric filters to extract numerical fields from log events and publish them as metrics in CloudWatch. CloudWatch supports percentile statistics such as p90 and p95 for these metrics.”
Contributor Insights (Option A) is for analyzing frequent contributors, not numeric distributions. Subscription filters (Option C) are used for log streaming, and Application Insights (Option D) provides monitoring of application health but not custom p90 statistics. Hence, Option B is the CloudOps-aligned, minimal-overhead solution for percentile latency monitoring.
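To illustrate, a metric filter for a JSON log field named `latencyMs` (a hypothetical field name) might use the payload below; the local percentile calculation only approximates what CloudWatch computes for the published metric:

```python
import json
import statistics

# put-metric-filter payload: extract $.latencyMs from each JSON log event and
# publish it as a metric. Log group, namespace, and field name are assumptions.
metric_filter = {
    "logGroupName": "/app/access-logs",
    "filterName": "latency-metric",
    "filterPattern": "{ $.latencyMs = * }",
    "metricTransformations": [{
        "metricName": "LatencyMs",
        "metricNamespace": "MyApp",
        "metricValue": "$.latencyMs",
    }],
}
print(json.dumps(metric_filter))

# Rough local illustration of a p90: the value below which about 90% of
# samples fall. CloudWatch computes this server-side for the metric.
latencies = list(range(1, 101))                  # 1..100 ms, synthetic data
p90 = statistics.quantiles(latencies, n=10)[-1]  # last cut point = 90th percentile
print(p90)  # 90.9 with the default 'exclusive' method
```

Once the metric exists, a dashboard widget or alarm can simply select the p90 statistic on `MyApp/LatencyMs`.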
Reference:
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 1: Monitoring and Logging
• Amazon CloudWatch Logs – Metric Filters
• AWS Well-Architected Framework – Operational Excellence Pillar
