Practice Free SOA-C03 Exam Online Questions
An ecommerce company uses Amazon ElastiCache (Redis OSS) for caching product queries. The CloudOps engineer observes a large number of cache evictions in Amazon CloudWatch metrics and needs to reduce evictions while retaining popular data in cache.
Which solution meets these requirements with the least operational overhead?
- A . Add another node to the ElastiCache cluster.
- B . Increase the ElastiCache TTL value.
- C . Decrease the ElastiCache TTL value.
- D . Migrate to a new ElastiCache cluster with larger nodes.
D
Explanation:
According to the AWS Cloud Operations and ElastiCache documentation, cache evictions occur when the cache runs out of memory and must remove items to make space for new data.
To reduce evictions and retain frequently accessed items, AWS recommends increasing the total available memory, either by scaling up to larger node types or by scaling out with additional shards/nodes. Migrating to a cluster with larger nodes is the simplest and most efficient solution because it immediately expands capacity without architectural changes.
Adjusting TTL (Options B and C) controls expiration timing, not memory allocation. Adding a single node (Option A) may help, but redistributing data requires resharding, introducing more complexity.
Thus, Option D provides the lowest operational overhead and ensures high cache hit rates by increasing total cache memory.
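The scale-up described above can be sketched with the AWS CLI; the replication group ID and node type below are placeholders, and the exact target node type depends on the workload's memory needs:

```shell
# Sketch: move a Redis OSS replication group to larger nodes
# (replication group ID and node type are placeholder values)
aws elasticache modify-replication-group \
  --replication-group-id product-cache \
  --cache-node-type cache.r6g.xlarge \
  --apply-immediately
```

ElastiCache performs the node-type change as a managed operation, so no data migration scripting is required.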
Reference: AWS Cloud Operations & Performance Optimization Guide – Reducing Evictions and Scaling Amazon ElastiCache Clusters
A company moves workloads from public subnets to private subnets to improve security. During testing, servers in the private subnets cannot reach an external API. The VPC has a CIDR block of 10.0.0.0/16, two public subnets, two private subnets, one internet gateway, and a NAT gateway in each private subnet.
The company must ensure that workloads in the private subnets can reach the external API.
Which solution will meet this requirement?
- A . Deploy an outbound-only internet gateway and update route tables.
- B . Create an Amazon API Gateway HTTP API as a proxy.
- C . Deploy a NAT gateway in each public subnet and update private subnet route tables.
- D . Create a VPC interface endpoint and update route tables.
C
Explanation:
For IPv4 traffic, private subnets require a NAT gateway in a public subnet to access the internet. NAT gateways must be deployed in public subnets and associated with an Elastic IP address. Private subnet route tables must direct 0.0.0.0/0 traffic to the NAT gateway.
The question states that NAT gateways are incorrectly placed in private subnets, which cannot provide internet access. Deploying NAT gateways in public subnets resolves this issue and restores outbound connectivity to external APIs.
Option A applies only to IPv6; the device described is an egress-only internet gateway, which does not carry IPv4 traffic.
Option B adds unnecessary complexity for a simple outbound-connectivity fix.
Option D is not applicable because VPC interface endpoints provide private access to AWS and PrivateLink-enabled services, not to arbitrary external APIs.
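The fix can be sketched with the AWS CLI; all resource IDs below are placeholders. The key points are that the NAT gateway lives in a public subnet with an Elastic IP, and each private route table sends 0.0.0.0/0 to it:

```shell
# Sketch: NAT gateway in a public subnet (IDs are placeholders)
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway \
  --subnet-id subnet-0aaa1111 \
  --allocation-id eipalloc-0bbb2222

# Default route in the private subnet's route table -> NAT gateway
aws ec2 create-route \
  --route-table-id rtb-0ccc3333 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0ddd4444
```

The same route update is repeated for the second private subnet's route table.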
A company runs applications on Amazon EC2 instances. Many of the instances are not patched. The company has a tagging policy. All the instances are tagged with details about the owners, application, and environment. AWS Systems Manager Agent (SSM Agent) is installed on all the instances.
A SysOps administrator must implement a solution to automatically patch all existing and future instances that have "Prod" in the environment tag. The SysOps administrator plans to create a patch policy in Systems Manager Patch Manager.
Which solution will meet the patching requirements with the LEAST operational overhead?
- A . Define targets of the patch policy by specifying node tags that match the company’s tagging strategy.
- B . Configure an AWS Lambda function to scan for new instances and to add the instances to the targets of the patch policy.
- C . Create resource groups. Add the existing instances to the resource groups. Configure an AWS Lambda function to scan for new instances and to add the instances to the resource groups at regular intervals. Attach the resource groups to the patch policy.
- D . Create resource groups. Add the existing instances to the resource groups. Create an Amazon EventBridge rule that uses an appropriately defined filter to add new instances to the resource groups. Attach the resource groups to the patch policy.
A
Explanation:
The correct answer is A because AWS Systems Manager Patch Manager natively supports tag-based targeting, which automatically includes both existing and future instances that match specified tag criteria. AWS CloudOps documentation states that patch policies can target managed nodes by instance tags, allowing administrators to dynamically scope patching operations without additional automation.
By defining the patch policy target as instances with an environment tag value of “Prod,” Patch Manager automatically applies patch baselines to all matching instances. Any new EC2 instance launched with the same tag is included automatically, requiring no manual intervention or additional services. This approach delivers the least operational overhead while remaining fully scalable and compliant.
Options B, C, and D are incorrect because they introduce unnecessary complexity by adding AWS Lambda functions, resource groups, or EventBridge rules. AWS CloudOps best practices emphasize using native Systems Manager capabilities whenever possible to reduce operational burden and failure points.
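Patch policies themselves are configured through Systems Manager Quick Setup, but the tag-based targeting behavior can be sketched with an equivalent tag-targeted State Manager association; the tag key/value and schedule below are placeholders matching this scenario:

```shell
# Sketch: target patching by tag so new "Prod" instances are
# picked up automatically (tag value and schedule are placeholders)
aws ssm create-association \
  --name "AWS-RunPatchBaseline" \
  --targets "Key=tag:Environment,Values=Prod" \
  --parameters "Operation=Install" \
  --schedule-expression "rate(7 days)"
```

Because the target is a tag expression rather than a static instance list, any future instance launched with Environment=Prod falls under the association without extra automation.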
Reference: AWS Systems Manager User Guide – Patch Manager Tag-Based Targeting
AWS SysOps Administrator Study Guide – Automation and Patch Management
AWS Well-Architected Framework – Operational Excellence
A company’s reporting job that previously ran in 15 minutes is now taking 1 hour. The application runs on Amazon EC2 and extracts data from an Amazon RDS for MySQL DB instance.
CloudWatch metrics show high Read IOPS even when reports are not running. The CloudOps engineer must improve performance and availability.
Which solution will meet these requirements?
- A . Configure Amazon ElastiCache and query it for reports.
- B . Deploy an RDS read replica and update the reporting job to query the reader endpoint.
- C . Create a CloudFront distribution with the RDS instance as the origin.
- D . Increase the size of the RDS instance.
B
Explanation:
RDS read replicas offload read traffic from the primary database, improving performance and availability. By directing reporting queries to the reader endpoint, the primary instance is freed from heavy read workloads.
ElastiCache is unsuitable for complex SQL reporting. CloudFront cannot front a database. Increasing instance size does not address inefficient read scaling.
Thus, read replicas are the correct solution.
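Creating the replica can be sketched with the AWS CLI; the instance identifiers are placeholders:

```shell
# Sketch: add a read replica for the reporting workload
# (both identifiers are placeholder values)
aws rds create-db-instance-read-replica \
  --db-instance-identifier reports-replica \
  --source-db-instance-identifier prod-mysql-db
```

Once the replica is available, the reporting job's connection string is updated to the replica's endpoint so its reads no longer compete with production traffic.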
A company is storing backups in an Amazon S3 bucket. The backups must not be deleted for at least 3 months after the backups are created.
What should a CloudOps engineer do to meet this requirement?
- A . Configure an IAM policy that denies the s3:DeleteObject action for all users. Remove the policy after three months.
- B . Enable S3 Object Lock on a new S3 bucket in compliance mode. Place all backups in the new S3 bucket with a retention period of 3 months.
- C . Enable S3 Versioning on the existing S3 bucket. Configure S3 Lifecycle rules to protect the backups.
- D . Enable S3 Object Lock on a new S3 bucket in governance mode. Place all backups in the new S3 bucket with a retention period of 3 months.
B
Explanation:
Amazon S3 Object Lock in compliance mode provides immutable storage that prevents objects from being deleted or overwritten for a defined retention period. In compliance mode, even the root user cannot remove the retention or delete the object before the retention period expires. This makes it suitable for regulatory and strict data-protection requirements.
Because Object Lock must be enabled at bucket creation time, a new bucket is required. Setting a retention period of 3 months ensures that backups cannot be deleted before that time under any circumstances.
Option D (governance mode) allows privileged users to bypass retention, which violates the strict “must not be deleted” requirement.
Option A relies on IAM policy changes, which are reversible and error-prone.
Option C does not prevent deletion; versioning only retains previous versions if objects are deleted, but users can still delete versions unless additional controls are applied.
Therefore, S3 Object Lock in compliance mode is the correct and most secure solution.
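The setup can be sketched with the AWS CLI; the bucket name is a placeholder, and 3 months is approximated as 90 days:

```shell
# Sketch: new bucket with Object Lock enabled at creation time
# (bucket name is a placeholder)
aws s3api create-bucket \
  --bucket example-backups-bucket \
  --object-lock-enabled-for-bucket

# Default 90-day compliance-mode retention for all new objects
aws s3api put-object-lock-configuration \
  --bucket example-backups-bucket \
  --object-lock-configuration \
    '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":90}}}'
```

With this default retention in place, every backup written to the bucket is protected for 90 days without per-object configuration.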
A CloudOps engineer is troubleshooting an AWS CloudFormation stack creation that failed. Before the CloudOps engineer can identify the problem, the stack and its resources are deleted. For future deployments, the CloudOps engineer must preserve any resources that CloudFormation successfully created.
What should the CloudOps engineer do to meet this requirement?
- A . Set the value of the DisableRollback parameter to False during stack creation.
- B . Set the value of the OnFailure parameter to DO_NOTHING during stack creation.
- C . Specify a rollback configuration that has a rollback trigger of DO_NOTHING during stack creation.
- D . Set the value of the OnFailure parameter to ROLLBACK during stack creation.
B
Explanation:
By default, when AWS CloudFormation encounters a failure during stack creation, it automatically rolls back and deletes any resources that were successfully created. This behavior makes troubleshooting difficult because the failed and partially created resources are no longer available for inspection.
CloudFormation provides the OnFailure parameter to control this behavior. Setting the parameter to DO_NOTHING instructs CloudFormation to stop stack creation when a failure occurs and retain all successfully created resources. This allows the CloudOps engineer to inspect the environment, review logs, and identify the root cause without redeploying resources.
The DisableRollback parameter controls rollback behavior but does not provide the same explicit behavior control during failure scenarios. Rollback triggers are used for monitoring-based rollback, not for preserving resources on failure. Setting OnFailure to ROLLBACK explicitly enforces deletion, which is the opposite of the requirement.
Therefore, setting the OnFailure parameter to DO_NOTHING is the correct solution.
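The parameter is set at stack-creation time; a minimal sketch with placeholder stack and template names:

```shell
# Sketch: retain successfully created resources on failure
# (stack name and template path are placeholders)
aws cloudformation create-stack \
  --stack-name example-stack \
  --template-body file://template.yaml \
  --on-failure DO_NOTHING
```

After a failed creation, the stack remains in CREATE_FAILED state with its surviving resources intact for inspection, and must be deleted manually once troubleshooting is complete.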
A company uses an AWS Lambda function to process user uploads to an Amazon S3 bucket. The Lambda function runs in response to Amazon S3 PutObject events.
A SysOps administrator needs to set up monitoring for the Lambda function. The SysOps administrator wants to receive a notification through an Amazon Simple Notification Service (Amazon SNS) topic if the function takes more than 10 seconds to process an event.
Which solution will meet this requirement?
- A . Collect Amazon CloudWatch logs for the Lambda function. Create a metric filter to extract the PostRuntimeExtensionsDuration metric from the logs. Create a CloudWatch alarm to publish a notification to the SNS topic when the function runtime exceeds 10 seconds.
- B . Collect Amazon CloudWatch metrics for the Lambda function to extract the function runtime. Create a CloudWatch alarm to publish a notification to the SNS topic when the runtime exceeds 10 seconds.
- C . Configure an Amazon CloudWatch metric filter to capture the runtime of the Lambda function. Set the function’s timeout setting to 10 seconds. Create an SNS subscription to alert the SysOps administrator if the function times out.
- D . Use Amazon CloudWatch Logs Insights to query Lambda logs for the function runtime. Set up a CloudWatch alarm based on the query result. Configure Amazon SNS to send notifications when function runtime exceeds 10 seconds.
B
Explanation:
AWS Lambda automatically publishes operational metrics to Amazon CloudWatch, including Duration, which represents the time a function invocation takes to run. To alert when processing time exceeds 10 seconds, the most direct and operationally efficient solution is to create a CloudWatch alarm on the Lambda Duration metric and configure the alarm action to publish to an SNS topic. This meets the monitoring requirement without requiring log parsing or additional query mechanisms.
Option B fits best because it uses the built-in metric stream for Lambda observability: CloudWatch metrics are near real-time, require minimal configuration, and alarms are a standard CloudOps practice for proactive notification. The alarm can be configured with an appropriate statistic (for example, Maximum, or a percentile statistic such as p99) and a threshold of 10,000 milliseconds, because the Duration metric is reported in milliseconds, ensuring the operations team is notified before performance degrades further.
Option A is incorrect because PostRuntimeExtensionsDuration is not the primary runtime metric for function execution time, and extracting runtime from logs is unnecessary.
Option C changes the function timeout to 10 seconds, which would cause failures rather than simply notifying on slow executions.
Option D is more operationally complex because it relies on log queries; CloudWatch alarms are more straightforward when a native metric exists.
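The alarm can be sketched with the AWS CLI; the function name, SNS topic ARN, and evaluation settings below are placeholders. Note the threshold is expressed in milliseconds:

```shell
# Sketch: alarm when any invocation exceeds 10 s (10,000 ms)
# (function name and topic ARN are placeholder values)
aws cloudwatch put-metric-alarm \
  --alarm-name lambda-slow-processing \
  --namespace AWS/Lambda \
  --metric-name Duration \
  --dimensions Name=FunctionName,Value=example-upload-processor \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 10000 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:example-alerts
```

Using the Maximum statistic over each 60-second period catches a single slow invocation, which an Average statistic could mask.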
Reference: AWS Lambda Developer Guide – Monitoring functions with CloudWatch metrics (Duration)
Amazon CloudWatch User Guide – Creating alarms and SNS notifications
AWS SysOps Administrator Study Guide – Monitoring and alerting patterns
A company runs a retail website on multiple Amazon EC2 instances behind an Application Load Balancer (ALB). The company must secure traffic to the website over an HTTPS connection.
Which combination of actions should a SysOps administrator take to meet these requirements? (Select TWO.)
- A . Attach the certificate to each EC2 instance.
- B . Attach the certificate to the ALB.
- C . Create a private certificate in AWS Certificate Manager (ACM).
- D . Create a public certificate in AWS Certificate Manager (ACM).
- E . Export the certificate, and attach it to the website.
B, D
Explanation:
To secure inbound web traffic over HTTPS when using an Application Load Balancer, AWS CloudOps best practices recommend terminating TLS at the load balancer. This is achieved by attaching an SSL/TLS certificate directly to the ALB listener. Therefore, option B is required. By terminating HTTPS at the ALB, traffic between clients and the load balancer is encrypted, and the ALB can then forward traffic to backend EC2 instances using HTTP or HTTPS based on design requirements.
The certificate used for a public-facing retail website must be trusted by internet browsers. AWS Certificate Manager (ACM) public certificates are specifically designed for this purpose and are trusted by common browsers and operating systems. Therefore, option D is required. ACM manages certificate provisioning, renewal, and deployment automatically, which significantly reduces operational overhead.
Option A is incorrect because attaching certificates to each EC2 instance is unnecessary and does not scale well.
Option C is incorrect because private certificates are intended for internal use cases and are not trusted by public browsers.
Option E is incorrect because ACM-managed public certificates cannot be exported; they must be attached directly to supported AWS services such as ALBs.
This approach aligns with AWS CloudOps guidance for secure, scalable, and highly available HTTPS architectures.
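The two actions can be sketched with the AWS CLI; the domain name and ARNs below are placeholder values, and the certificate must be validated before the listener can use it:

```shell
# Sketch: request a browser-trusted public certificate
# (domain name is a placeholder; DNS validation must complete first)
aws acm request-certificate \
  --domain-name www.example.com \
  --validation-method DNS

# Terminate TLS at the ALB with an HTTPS listener (ARNs are placeholders)
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example-alb/0123456789abcdef \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/11112222-3333-4444-5555-666677778888 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example-tg/0123456789abcdef
```

The listener forwards decrypted traffic to the target group of EC2 instances, so no certificate management is needed on the instances themselves.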
Reference: Elastic Load Balancing User Guide – HTTPS listeners for Application Load Balancers
AWS Certificate Manager User Guide – Public certificates
AWS SysOps Administrator Study Guide – Securing applications with ALB and ACM
A CloudOps engineer has created an AWS Service Catalog portfolio and shared it with a second AWS account in the company, managed by a different CloudOps engineer.
Which action can the CloudOps engineer in the second account perform?
- A . Add a product from the imported portfolio to a local portfolio.
- B . Add new products to the imported portfolio.
- C . Change the launch role for the products contained in the imported portfolio.
- D . Customize the products in the imported portfolio.
A
Explanation:
Per the AWS Cloud Operations and Service Catalog documentation, when a portfolio is shared across AWS accounts, the recipient account imports the shared portfolio.
The recipient CloudOps engineer cannot modify the original products or their configurations but can:
Add products from the imported portfolio into their local portfolios for deployment, Control end-user access in the recipient account, and Manage local constraints or permissions.
However, the recipient cannot edit, delete, or reconfigure the shared products (Options B, C, and D). The source (owner) account retains full administrative control over products, launch roles, and lifecycle policies.
This model aligns with AWS CloudOps principles of centralized governance with distributed self-service deployment across multiple accounts.
Thus, Option A is correct: imported portfolios allow the recipient to add products to a local portfolio but not alter the shared configuration.
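The permitted action can be sketched with the AWS CLI from the recipient account; the product and portfolio IDs below are placeholders:

```shell
# Sketch: in the recipient account, associate a product from the
# imported (shared) portfolio with a local portfolio (IDs are placeholders)
aws servicecatalog associate-product-with-portfolio \
  --product-id prod-abcd1234efgh5 \
  --portfolio-id port-local6789ijkl \
  --source-portfolio-id port-shared0123mno
```

The --source-portfolio-id identifies the imported portfolio the product comes from; the product definition itself remains owned and managed by the sharing account.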
Reference: AWS Cloud Operations & Governance Guide – Managing Shared AWS Service Catalog Portfolios Across Multiple Accounts
A CloudOps engineer wants to provide access to AWS services by attaching an IAM policy to multiple IAM users. The CloudOps engineer also wants to be able to change the policy and create new versions.
Which combination of actions will meet these requirements? (Select TWO.)
- A . Add the users to an IAM service-linked role. Attach the policy to the role.
- B . Add the users to an IAM user group. Attach the policy to the group.
- C . Create an AWS managed policy.
- D . Create a customer managed policy.
- E . Create an inline policy.
B, D
Explanation:
IAM user groups allow permissions to be managed centrally and applied to multiple users simultaneously, making them ideal for scalable access management. Attaching policies to groups ensures that changes propagate automatically to all members.
A customer managed policy supports versioning, reuse, and centralized updates, which meets the requirement to modify policies and manage versions over time. AWS managed policies cannot be edited, and inline policies do not support reuse or versioning across multiple principals.
Option A is invalid because users cannot be added to a role at all, and service-linked roles in particular are created and managed by AWS services rather than for direct user access.
Option E lacks versioning and reusability.
Option C does not allow customization.
Therefore, using IAM groups with a customer managed policy is the correct and secure solution.
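The combination can be sketched with the AWS CLI; the policy name, group name, user name, and account ID below are placeholders:

```shell
# Sketch: customer managed policy attached to a group
# (names and account ID are placeholder values)
aws iam create-policy \
  --policy-name ExampleAppAccess \
  --policy-document file://policy.json

aws iam create-group --group-name ExampleAppUsers
aws iam attach-group-policy \
  --group-name ExampleAppUsers \
  --policy-arn arn:aws:iam::123456789012:policy/ExampleAppAccess
aws iam add-user-to-group \
  --group-name ExampleAppUsers --user-name example-user

# Later: publish an updated policy version and make it the default
aws iam create-policy-version \
  --policy-arn arn:aws:iam::123456789012:policy/ExampleAppAccess \
  --policy-document file://policy-v2.json \
  --set-as-default
```

Because the policy is attached to the group, the new default version takes effect for every member automatically; IAM retains up to five versions of a customer managed policy, so older versions can be rolled back to if needed.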
