Practice Free SOA-C03 Exam Online Questions
A company that uses AWS Organizations recently implemented AWS Control Tower. The company now needs to centralize identity management. A CloudOps engineer must federate AWS IAM Identity Center with an external SAML 2.0 identity provider (IdP) to centrally manage access to all AWS accounts and cloud applications.
Which prerequisites must the CloudOps engineer have so that the CloudOps engineer can connect to the external IdP? (Select TWO.)
- A . A copy of the IAM Identity Center SAML metadata
- B . The IdP metadata, including the public X.509 certificate
- C . The IP address of the IdP
- D . Root access to the management account
- E . Administrative permissions to the member accounts of the organization
A, B
Explanation:
According to the AWS Cloud Operations and Identity Management documentation, when configuring federation between IAM Identity Center (formerly AWS SSO) and an external SAML 2.0 identity provider, two key prerequisites are required:
The IAM Identity Center SAML metadata file ― This is uploaded to the external IdP to establish trust, define SAML endpoints, and enable identity federation.
The IdP metadata (including the public X.509 certificate) ― This information is imported into IAM Identity Center to validate authentication assertions and encryption signatures.
IAM Identity Center and the IdP exchange this metadata to mutually establish secure, bidirectional federation.
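To make the metadata exchange concrete, here is a minimal Python sketch that pulls the public X.509 signing certificate out of an IdP's SAML 2.0 metadata file before it is imported into IAM Identity Center; the file name is a placeholder:

```python
# Sketch: extract the base64-encoded public X.509 certificate from a
# SAML 2.0 IdP metadata file. "idp-metadata.xml" is a placeholder name.
import xml.etree.ElementTree as ET

NS = {"ds": "http://www.w3.org/2000/09/xmldsig#"}

root = ET.parse("idp-metadata.xml").getroot()

# The certificate normally sits under IDPSSODescriptor/KeyDescriptor.
for cert in root.findall(".//ds:X509Certificate", NS):
    print(cert.text.strip()[:60] + "...")
```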
Network-level details such as IP addresses (Option C) are unnecessary. Root access (Option D) or permissions to member accounts (Option E) are not required; only Control Tower or IAM administrative permissions in the management account are needed for setup.
Thus, the correct answer is A and B ― the SAML metadata from both sides is required for federation.
Reference: AWS Cloud Operations & Identity Management Guide – Federating IAM Identity Center with an External SAML 2.0 Provider
An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A CloudOps engineer must ensure that the application can read, write, and delete messages from the SQS queues.
Which solution will meet these requirements in the MOST secure manner?
- A . Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Embed the IAM user’s credentials in the application’s configuration.
- B . Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Export the IAM user’s access key and secret access key as environment variables on the EC2 instance.
- C . Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows sqs:* permissions to the appropriate queues.
- D . Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues.
D
Explanation:
The most secure pattern is to use an IAM role for Amazon EC2 with the minimum required permissions. AWS guidance states: “Use roles for applications that run on Amazon EC2 instances” and “grant least privilege by allowing only the actions required to perform a task.” By attaching a role to the instance, short-lived credentials are automatically provided through the instance metadata service; this removes the need to create long-term access keys or embed secrets. Granting only sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage against the specific SQS queues enforces least privilege and aligns with CloudOps security controls.
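As an illustration, a sketch of the scoped policy document such a role could carry; the queue ARN below is a placeholder:

```python
import json

# Hypothetical queue ARN; substitute your Region, account ID, and queue name.
QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:orders-queue"

# Least-privilege policy: only the three actions the application needs,
# scoped to the specific queue rather than sqs:*.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:SendMessage",
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
            ],
            "Resource": QUEUE_ARN,
        }
    ],
}

print(json.dumps(policy, indent=2))
```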
Options A and B rely on IAM user access keys, which contravene best practices for workloads on EC2 and increase credential-management risk.
Option C uses a role but grants sqs:*, violating least-privilege principles.
Therefore, Option D meets the security requirement with scoped, temporary credentials and precise permissions.
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Security & Compliance
• IAM Best Practices – “Use roles instead of long-term access keys,” “Grant least privilege”
• IAM Roles for Amazon EC2 – Temporary credentials for applications on EC2
• Amazon SQS – Identity and access management for Amazon SQS
A company has a VPC that contains a public subnet and a private subnet. The company deploys an Amazon EC2 instance that uses an Amazon Linux Amazon Machine Image (AMI) and has the AWS Systems Manager Agent (SSM Agent) installed in the private subnet. The EC2 instance is in a security group that allows only outbound traffic.
A CloudOps engineer needs to give a group of privileged administrators the ability to connect to the instance through SSH without exposing the instance to the internet.
Which solution will meet this requirement?
- A . Create an EC2 Instance Connect endpoint in the private subnet. Update the security group to allow inbound SSH traffic. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
- B . Create a Systems Manager endpoint in the private subnet. Update the security group to allow SSH traffic from the private network where the Systems Manager endpoint is connected. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
- C . Create an EC2 Instance Connect endpoint in the public subnet. Update the security group to allow SSH traffic from the private network. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
- D . Create a Systems Manager endpoint in the public subnet. Create an IAM role that has the AmazonSSMManagedInstanceCore permission for the EC2 instance. Create an IAM group for privileged administrators. Assign the AmazonEC2ReadOnlyAccess IAM policy to the IAM group.
A
Explanation:
EC2 Instance Connect Endpoint (EIC Endpoint) enables SSH to instances in private subnets without public IPs and without needing to traverse the public internet. CloudOps guidance explains that you deploy the endpoint in the same VPC/subnet as the targets, then allow inbound SSH on the instance security group from the endpoint’s security group. Access is governed by IAM―administrators must have Instance Connect permissions; while the example uses a broad policy, the key mechanism is EIC in the private subnet plus SG rules scoped to the endpoint. Systems Manager Session Manager can provide shell access without SSH, but the requirement explicitly states “connect through SSH,” making EIC the purpose-built solution.
Options B and D misuse Systems Manager for SSH and propose unnecessary SG changes or incorrect endpoint placement; Option C places the endpoint in a public subnet, which is not required for private SSH access. Therefore, creating an EC2 Instance Connect endpoint in the private subnet and updating SGs accordingly meets the requirement while keeping the instance non-internet-exposed.
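One way the setup might look with boto3; all resource IDs and the Region below are placeholders, not values from the scenario:

```python
# Sketch: create an EC2 Instance Connect Endpoint in the private subnet,
# then allow SSH into the instance's security group only from the
# endpoint's security group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_instance_connect_endpoint(
    SubnetId="subnet-0private0000000000",          # the private subnet
    SecurityGroupIds=["sg-0eicendpoint0000000"],   # the endpoint's own SG
)

# Scope the instance SG so SSH is reachable only from the endpoint SG.
ec2.authorize_security_group_ingress(
    GroupId="sg-0instance00000000000",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": "sg-0eicendpoint0000000"}],
    }],
)

print(endpoint["InstanceConnectEndpoint"]["InstanceConnectEndpointId"])
```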
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Security and Compliance
• Amazon EC2 – Instance Connect Endpoint (Private SSH Access)
• AWS Well-Architected Framework – Security Pillar (Least Privilege Network Access)
A company’s ecommerce application is running on Amazon EC2 instances that are behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. Customers report that the website is occasionally down. When the website is down, it returns an HTTP 500 (server error) status code to customer browsers.
The Auto Scaling group’s health check is configured for EC2 status checks, and the instances appear healthy.
Which solution will resolve the problem?
- A . Replace the ALB with a Network Load Balancer.
- B . Add Elastic Load Balancing (ELB) health checks to the Auto Scaling group.
- C . Update the target group configuration on the ALB. Enable session affinity (sticky sessions).
- D . Install the Amazon CloudWatch agent on all instances. Configure the agent to reboot the instances.
B
Explanation:
In this scenario, the EC2 instances pass their EC2 status checks, indicating that the operating system is responsive. However, the application hosted on the instance is failing intermittently, returning HTTP 500 errors. This demonstrates a discrepancy between the instance-level health and the application-level health.
According to AWS CloudOps best practices under Monitoring, Logging, Analysis, Remediation and Performance Optimization (SOA-C03 Domain 1), Auto Scaling groups should incorporate Elastic Load Balancing (ELB) health checks instead of relying solely on EC2 status checks. The ELB health check probes the application endpoint (for example, HTTP or HTTPS target group health checks), ensuring that the application itself is functioning correctly.
When an instance fails an ELB health check, Amazon EC2 Auto Scaling will automatically mark the instance as unhealthy and replace it with a new one, ensuring continuous availability and performance optimization.
Extract from AWS CloudOps (SOA-C03) Study Guide – Domain 1:
“Implement monitoring and health checks using ALB and EC2 Auto Scaling integration. Application Load Balancer health checks allow Auto Scaling to terminate and replace instances that fail application-level health checks, ensuring consistent application performance.”
Extract from AWS Auto Scaling Documentation:
“When you enable the ELB health check type for your Auto Scaling group, Amazon EC2 Auto Scaling considers both EC2 status checks and Elastic Load Balancing health checks to determine instance health. If an instance fails the ELB health check, it is automatically replaced.”
Therefore, the correct answer is B, as it ensures proper application-level monitoring and remediation using ALB-integrated ELB health checks―a core CloudOps operational practice for proactive incident response and availability assurance.
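A minimal boto3 sketch of the change; the group name and grace period are assumptions:

```python
# Sketch: switch an existing Auto Scaling group to the ELB health check
# type so failed target-group health checks trigger instance replacement.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="ecommerce-asg",  # placeholder group name
    HealthCheckType="ELB",          # consider both EC2 and ELB checks
    HealthCheckGracePeriod=300,     # give the application time to boot
)
```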
References (AWS CloudOps Verified Source Extracts):
AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide: Domain 1 – Monitoring, Logging, and Remediation.
AWS Auto Scaling User Guide: Health checks for Auto Scaling instances (Elastic Load Balancing integration).
AWS Well-Architected Framework – Operational Excellence and Reliability Pillars.
AWS Elastic Load Balancing Developer Guide – Target group health checks and monitoring.
A user working in the Amazon EC2 console increased the size of an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 Windows instance. The change is not reflected in the file system.
What should a CloudOps engineer do to resolve this issue?
- A . Extend the file system with operating system-level tools to use the new storage capacity.
- B . Reattach the EBS volume to the EC2 instance.
- C . Reboot the EC2 instance that is attached to the EBS volume.
- D . Take a snapshot of the EBS volume. Replace the original volume with a volume that is created from the snapshot.
A
Explanation:
When an Amazon EBS volume is resized, the new storage capacity is immediately available to the attached EC2 instance. However, EBS does not automatically extend the file system. The CloudOps engineer must manually extend the file system within the operating system to utilize the additional space.
AWS documentation for EC2 and EBS specifies:
“After you increase the size of an EBS volume, use file system-specific tools to extend the file system so that the operating system can use the new storage capacity.”
On Windows instances, this can be achieved through Disk Management or diskpart commands. On Linux systems, utilities such as growpart and resize2fs are used.
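For the Linux case, a short sketch of the equivalent steps; the device and partition names are placeholders, and Windows instances would use Disk Management or diskpart instead:

```python
# Sketch: grow partition 1 on /dev/xvda, then resize the ext4 file system
# so it uses the newly added EBS capacity. Device names are placeholders.
import subprocess

subprocess.run(["sudo", "growpart", "/dev/xvda", "1"], check=True)
subprocess.run(["sudo", "resize2fs", "/dev/xvda1"], check=True)
```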
Options B and C do not modify file system metadata and are ineffective.
Option D unnecessarily replaces the volume, which adds risk and downtime. Thus, Option A aligns with the Monitoring and Performance Optimization practices of AWS CloudOps by properly extending the file system to recognize the new capacity.
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 1
• Amazon EBS – Modifying EBS Volumes
• Amazon EC2 User Guide – Extending a File System After Resizing a Volume
• AWS Well-Architected Framework – Performance Efficiency Pillar
A CloudOps engineer must manage the security of an AWS account. Recently, an IAM user’s access key was mistakenly uploaded to a public code repository. The engineer must identify everything that was changed using this compromised key.
How should the CloudOps engineer meet these requirements?
- A . Create an Amazon EventBridge rule to send all IAM events to an AWS Lambda function for analysis.
- B . Query Amazon EC2 logs by using Amazon CloudWatch Logs Insights for all events initiated with the compromised access key within the suspected timeframe.
- C . Search AWS CloudTrail event history for all events initiated with the compromised access key within the suspected timeframe.
- D . Search VPC Flow Logs for all events initiated with the compromised access key within the suspected timeframe.
C
Explanation:
According to the AWS Cloud Operations and Security documentation, AWS CloudTrail is the authoritative service for recording API activity across all AWS services within an account.
When an access key is compromised, CloudTrail logs all API requests made using that key, including details such as:
The user identity (access key ID) that made the request,
The service, operation, resource, and timestamp affected, and
The source IP address and region of the request.
By searching the CloudTrail event history for the specific access key ID, the CloudOps engineer can identify every action performed by that key during the suspected breach window.
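A sketch of that search with boto3; the access key ID and timeframe are placeholders:

```python
# Sketch: list every CloudTrail management event recorded for a specific
# access key within a suspected timeframe.
from datetime import datetime, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[
        {"AttributeKey": "AccessKeyId", "AttributeValue": "AKIAEXAMPLEKEY"}
    ],
    StartTime=datetime(2024, 1, 1, tzinfo=timezone.utc),
    EndTime=datetime(2024, 1, 15, tzinfo=timezone.utc),
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username"))
```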
Other options are incorrect:
EventBridge (A) is event-driven, not historical.
CloudWatch Logs (B) monitors system logs, not AWS API activity.
VPC Flow Logs (D) track network-level traffic, not API calls.
Therefore, the correct solution is Option C ― using AWS CloudTrail event history to audit and trace all actions executed via the compromised access key.
Reference: AWS Cloud Operations & Security Management Guide – Investigating Compromised Access Keys Using AWS CloudTrail
A company is storing backups in an Amazon S3 bucket. These backups must not be deleted for at least 3 months after creation.
What should the CloudOps engineer do?
- A . Configure an IAM policy that denies the s3:DeleteObject action for all users. Three months after an object is written, remove the policy.
- B . Enable S3 Object Lock on a new S3 bucket in compliance mode. Place all backups in the new S3 bucket with a retention period of 3 months.
- C . Enable S3 Versioning on the existing S3 bucket. Configure S3 Lifecycle rules to protect the backups.
- D . Enable S3 Object Lock on a new S3 bucket in governance mode. Place all backups in the new S3 bucket with a retention period of 3 months.
B
Explanation:
Per the AWS Cloud Operations and Data Protection documentation, S3 Object Lock enforces write-once-read-many (WORM) protection on objects for a defined retention period.
There are two modes:
Compliance mode: Even the root user cannot delete or modify objects during the retention period.
Governance mode: Privileged users with special permissions can override lock settings.
For regulatory or audit requirements that prohibit deletion, Compliance mode is the correct choice. When configured with a 3-month retention period, all backup objects are protected from deletion until expiration, ensuring compliance with data retention mandates.
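A boto3 sketch of the setup; the bucket name is a placeholder, the example assumes the us-east-1 Region, and note that Object Lock must be enabled when the bucket is created:

```python
# Sketch: create a bucket with Object Lock enabled, then apply a default
# 3-month (90-day) COMPLIANCE-mode retention to all new objects.
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled at bucket creation time.
s3.create_bucket(
    Bucket="example-backup-bucket",
    ObjectLockEnabledForBucket=True,
)

s3.put_object_lock_configuration(
    Bucket="example-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
    },
)
```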
Versioning (Option C) alone does not prevent deletion. IAM-based restrictions (Option A) lack time-based enforcement and require manual intervention. Governance mode (Option D) is less strict and unsuitable for regulatory retention.
Thus, Option B is the correct CloudOps solution for immutable S3 backups.
Reference: AWS Cloud Operations & Storage Governance Guide – Implementing Retention with Amazon S3 Object Lock in Compliance Mode
A company uses AWS Systems Manager Session Manager to manage EC2 instances in the eu-west-1 Region. The company wants private connectivity using VPC endpoints.
Which VPC endpoints are required to meet these requirements? (Select THREE.)
- A . com.amazonaws.eu-west-1.ssm
- B . com.amazonaws.eu-west-1.ec2messages
- C . com.amazonaws.eu-west-1.ec2
- D . com.amazonaws.eu-west-1.ssmmessages
- E . com.amazonaws.eu-west-1.s3
- F . com.amazonaws.eu-west-1.states
A, B, D
Explanation:
The AWS Cloud Operations and Systems Manager documentation states that to use Session Manager privately within a VPC (without internet access), three interface VPC endpoints must be configured:
com.amazonaws.<region>.ssm ― enables Systems Manager core API communication.
com.amazonaws.<region>.ec2messages ― allows the agent to send and receive messages between EC2 and Systems Manager.
com.amazonaws.<region>.ssmmessages ― enables real-time interactive communication for Session Manager connections.
These endpoints ensure secure, private connectivity over the AWS network, eliminating the need for public internet routing.
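A boto3 sketch of creating the three interface endpoints in eu-west-1; the VPC, subnet, and security group IDs are placeholders:

```python
# Sketch: create the three interface VPC endpoints that Session Manager
# needs for private connectivity in eu-west-1.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

for service in ("ssm", "ec2messages", "ssmmessages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0example0000000000",
        ServiceName=f"com.amazonaws.eu-west-1.{service}",
        SubnetIds=["subnet-0example000000000"],
        SecurityGroupIds=["sg-0example00000000000"],
        PrivateDnsEnabled=True,  # lets the SSM Agent resolve the default service names
    )
```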
Endpoints for S3, Step Functions, or EC2 API (Options C, E, F) are not required for Session Manager functionality.
Thus, the correct combination is A, B, and D, aligning with AWS CloudOps best practices for secure, private Systems Manager access.
Reference: AWS Cloud Operations & Systems Manager Guide – Configuring VPC Endpoints for Session Manager Private Connectivity
A CloudOps engineer has created a VPC that contains a public subnet and a private subnet. Amazon EC2 instances that were launched in the private subnet cannot access the internet. The default network ACL is active on all subnets in the VPC, and all security groups allow outbound traffic.
Which solution will provide the EC2 instances in the private subnet with access to the internet?
- A . Create a NAT gateway in the public subnet. Create a route from the private subnet to the NAT gateway.
- B . Create a NAT gateway in the public subnet. Create a route from the public subnet to the NAT gateway.
- C . Create a NAT gateway in the private subnet. Create a route from the public subnet to the NAT gateway.
- D . Create a NAT gateway in the private subnet. Create a route from the private subnet to the NAT gateway.
A
Explanation:
According to the AWS Cloud Operations and Networking documentation, instances in a private subnet do not have a direct route to the internet gateway and thus require a NAT gateway for outbound internet access.
The correct configuration is to create a NAT gateway in the public subnet, associate an Elastic IP address, and then update the private subnet’s route table to send all 0.0.0.0/0 traffic to the NAT gateway. This enables instances in the private subnet to initiate outbound connections while keeping inbound traffic blocked for security.
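A boto3 sketch of this configuration; all resource IDs are placeholders:

```python
# Sketch: allocate an Elastic IP, create the NAT gateway in the PUBLIC
# subnet, and route the private subnet's internet-bound traffic to it.
import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")

nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public0000000000",   # must be the public subnet
    AllocationId=eip["AllocationId"],
)

# Default route in the PRIVATE subnet's route table points at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0private0000000000",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```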
Placing the NAT gateway inside the private subnet (Options C or D) prevents connectivity because it would not have a route to the internet gateway. Configuring routes from the public subnet to the NAT gateway (Option B) does not serve private subnet traffic.
Hence, Option A follows AWS best practices for enabling secure, managed, outbound-only internet access from private resources.
Reference: AWS Cloud Operations & Networking Guide – Section: Providing Internet Access to Private Subnets Using NAT Gateway
A company’s website runs on an Amazon EC2 Linux instance. The website needs to serve PDF files from an Amazon S3 bucket. All public access to the S3 bucket is blocked at the account level. The company needs to allow website users to download the PDF files.
Which solution will meet these requirements with the LEAST administrative effort?
- A . Create an IAM role that has a policy that allows s3:list* and s3:get* permissions. Assign the role to the EC2 instance. Assign a company employee to download requested PDF files to the EC2 instance and deliver the files to website users. Create an AWS Lambda function to periodically delete local files.
- B . Create an Amazon CloudFront distribution that uses an origin access control (OAC) that points to the S3 bucket. Apply a bucket policy to the bucket to allow connections from the CloudFront distribution. Assign a company employee to provide a download URL that contains the distribution URL and the object path to users when users request PDF files.
- C . Change the S3 bucket permissions to allow public access on the source S3 bucket. Assign a company employee to provide a PDF file URL to users when users request the PDF files.
- D . Deploy an EC2 instance that has an IAM instance profile to a public subnet. Use a signed URL from the EC2 instance to provide temporary access to the S3 bucket for website users.
B
Explanation:
Per the AWS Cloud Operations, Networking, and Security documentation, the best practice for serving private S3 content securely to end users is to use Amazon CloudFront with Origin Access Control (OAC).
OAC enables CloudFront to access S3 buckets privately, even when Block Public Access settings are enabled at the account level. This allows content to be delivered globally and securely without making the S3 bucket public. The bucket policy explicitly allows access only from the CloudFront distribution, ensuring that users can retrieve PDF files only via CloudFront URLs.
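For illustration, a sketch of such a bucket policy expressed in Python; the distribution ARN, account ID, and bucket name are placeholders:

```python
import json

# Hypothetical CloudFront distribution ARN and bucket name.
DIST_ARN = "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"

# Bucket policy that lets only this distribution (via OAC) read objects;
# the bucket itself stays blocked from public access.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipalReadOnly",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-pdf-bucket/*",
            "Condition": {"StringEquals": {"AWS:SourceArn": DIST_ARN}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```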
This configuration offers:
Automatic scalability through CloudFront caching,
Improved security via private access control,
Minimal administration effort with fully managed services.
Other options require manual handling or make the bucket public, violating AWS security best practices.
Therefore, Option B―using CloudFront with Origin Access Control and a restrictive bucket policy―provides the most secure, efficient, and low-maintenance CloudOps solution.
Reference: AWS Cloud Operations and Content Delivery Guide – Section: Serving Private Content Securely from Amazon S3 via CloudFront Using Origin Access Control
