Practice Free SCS-C03 Exam Online Questions
A company runs a web application on a fleet of Amazon EC2 instances that are in an Auto Scaling group. The EC2 instances are in the same VPC subnet as other workloads.
A security engineer deploys Amazon GuardDuty and integrates it with AWS Security Hub. The security engineer needs to implement an automated solution to detect and respond to anomalous traffic patterns. The solution must follow AWS best practices for initial incident response and must minimize disruption to the web application.
Which solution will meet these requirements?
- A . Disable the instance profile access keys by using AWS Lambda.
- B . Remove the affected instance from the Auto Scaling group and isolate it with a restricted security group by using AWS Lambda.
- C . Update the network ACL to block the detected traffic source.
- D . Send GuardDuty findings to Amazon SNS for email notification.
B
Explanation:
AWS incident response best practices emphasize containment with minimal blast radius while preserving business continuity. According to the AWS Certified Security – Specialty Official Study Guide, isolating a compromised resource while allowing the application to continue operating is the recommended initial response.
By creating an Amazon EventBridge rule that reacts to GuardDuty anomalous traffic findings and invokes an AWS Lambda function, the security engineer can automatically remove the affected EC2 instance from the Auto Scaling group and attach a restricted security group. This immediately stops malicious activity while allowing Auto Scaling to replace the instance and keep the application available.
Option A is inappropriate because EC2 instance profiles do not use long-term access keys. Option C applies subnet-wide changes that could disrupt unrelated workloads. Option D provides notification only and does not meet the automated response requirement.
AWS documentation explicitly identifies instance isolation via security groups as a preferred containment technique that preserves application availability and forensic integrity.
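The automated response described above can be sketched as an EventBridge event pattern plus a Lambda containment function. The sketch below is illustrative only: the finding-type prefixes, instance ID, Auto Scaling group name, and quarantine security group ID are assumptions, not values from the question.

```python
import json

# EventBridge event pattern that matches EC2-related GuardDuty findings.
# The finding-type prefixes shown are examples of anomalous-traffic categories.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": [{"prefix": "Backdoor:EC2"}, {"prefix": "CryptoCurrency:EC2"}]},
}

def isolation_steps(instance_id, asg_name, quarantine_sg):
    """Return the ordered containment actions the Lambda target would take."""
    return [
        # 1. Detach the instance so Auto Scaling launches a healthy replacement
        #    and application capacity is preserved.
        ("autoscaling.detach_instances", {
            "InstanceIds": [instance_id],
            "AutoScalingGroupName": asg_name,
            "ShouldDecrementDesiredCapacity": False,
        }),
        # 2. Swap in a restrictive security group to isolate the instance
        #    while keeping it available for forensics.
        ("ec2.modify_instance_attribute", {
            "InstanceId": instance_id,
            "Groups": [quarantine_sg],
        }),
    ]

print(json.dumps(event_pattern, indent=2))
```

In a real deployment, each tuple would map to the corresponding boto3 client call inside the Lambda handler.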
AWS Certified Security – Specialty Official Study Guide
Amazon GuardDuty User Guide
AWS Incident Response Best Practices
A company has an encrypted Amazon Aurora DB cluster in the us-east-1 Region that uses an AWS KMS customer managed key. The company must copy a DB snapshot to the us-west-1 Region but cannot access the encryption key across Regions.
What should the company do to properly encrypt the snapshot in us-west-1?
- A . Store the customer managed key in AWS Secrets Manager in us-west-1.
- B . Create a new customer managed key in us-west-1 and use it to encrypt the snapshot.
- C . Create an IAM policy to allow access to the key in us-east-1 from us-west-1.
- D . Create an IAM policy that allows RDS in us-west-1 to access the key in us-east-1.
B
Explanation:
AWS KMS keys are strictly Regional resources. According to AWS Certified Security – Specialty documentation, a KMS key created in one Region cannot be used to encrypt or decrypt data in another Region. This includes encrypted RDS and Aurora snapshots.
When copying an encrypted snapshot to a different Region, the destination Region must have its own KMS key. AWS re-encrypts the snapshot using the specified KMS key in the destination Region during the copy operation.
Options C and D are invalid because IAM policies cannot extend a KMS key’s scope across Regions.
Option A is incorrect because Secrets Manager does not store or manage KMS keys themselves.
AWS best practices require creating a new customer managed key in the target Region and using it during the snapshot copy process.
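The copy operation described above can be sketched as a single RDS API call issued in the destination Region. The snapshot identifiers and key ARN below are placeholders; the `SourceRegion` parameter is used by boto3 to presign the cross-Region copy.

```python
# Request parameters for CopyDBClusterSnapshot, assuming a customer managed
# key has already been created in us-west-1. All identifiers are placeholders.
copy_request = {
    "SourceDBClusterSnapshotIdentifier":
        "arn:aws:rds:us-east-1:111122223333:cluster-snapshot:aurora-snap",
    "TargetDBClusterSnapshotIdentifier": "aurora-snap-west",
    # The key in the DESTINATION Region; the source key never leaves us-east-1.
    "KmsKeyId": "arn:aws:kms:us-west-1:111122223333:key/example-key-id",
    "SourceRegion": "us-east-1",
}

# With credentials configured, the call would be (not executed here):
# import boto3
# rds_west = boto3.client("rds", region_name="us-west-1")
# rds_west.copy_db_cluster_snapshot(**copy_request)
```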
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
AWS KMS Regional Key Limitations
Amazon RDS Encrypted Snapshot Copy
A company detects bot activity targeting Amazon Cognito user pool endpoints. The solution must block malicious requests while maintaining access for legitimate users.
Which solution meets these requirements?
- A . Enable Amazon Cognito threat protection.
- B . Restrict access to authenticated users only.
- C . Associate AWS WAF with the Cognito user pool.
- D . Monitor requests with CloudWatch.
A
Explanation:
Amazon Cognito threat protection is purpose-built to detect and mitigate malicious authentication activity such as credential stuffing and bot traffic. It uses adaptive risk-based analysis without disrupting legitimate users.
Although AWS WAF can be associated with Cognito user pools, WAF rules inspect request patterns rather than authentication risk signals, so threat protection is the more targeted control for credential-based bot activity.
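The adaptive responses that threat protection enables can be configured through the Cognito SetRiskConfiguration API. The sketch below shows the shape of that request; the user pool ID is a placeholder and the specific action values chosen are illustrative assumptions.

```python
# Illustrative SetRiskConfiguration request for cognito-idp. The pool ID is
# a placeholder; the action choices are example policy, not prescribed values.
risk_config = {
    "UserPoolId": "us-east-1_EXAMPLE",
    # Block sign-ins that use credentials known to be compromised.
    "CompromisedCredentialsRiskConfiguration": {
        "Actions": {"EventAction": "BLOCK"},
    },
    # Respond to account-takeover risk adaptively by risk level.
    "AccountTakeoverRiskConfiguration": {
        "Actions": {
            "HighAction": {"EventAction": "BLOCK", "Notify": True},
            "MediumAction": {"EventAction": "MFA_IF_CONFIGURED", "Notify": True},
            "LowAction": {"EventAction": "NO_ACTION", "Notify": False},
        },
    },
}

# With credentials configured (not executed here):
# import boto3
# boto3.client("cognito-idp").set_risk_configuration(**risk_config)
```

The graded actions are what let the solution block bots while legitimate low-risk users sign in undisturbed.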
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
Amazon Cognito Threat Protection
A company has several Amazon S3 buckets that do not enforce encryption in transit. A security engineer must implement a solution that enforces encryption in transit for all the company’s existing and future S3 buckets.
Which solution will meet these requirements?
- A . Enable AWS Config. Create a proactive AWS Config Custom Policy rule. Create a Guard clause to evaluate the S3 bucket policies to check for a value of True for the aws:SecureTransport condition key. If the AWS Config rule evaluates to NON_COMPLIANT, block resource creation.
- B . Enable AWS Config. Configure the s3-bucket-ssl-requests-only AWS Config managed rule and set the rule trigger type to Hybrid. Create an AWS Systems Manager Automation runbook that applies a bucket policy to deny requests when the value of the aws:SecureTransport condition key is False. Configure automatic remediation. Set the runbook as the target of the rule.
- C . Enable Amazon Inspector. Create a custom AWS Lambda rule. Create a Lambda function that applies a bucket policy to deny requests when the value of the aws:SecureTransport condition key is False. Set the Lambda function as the target of the rule.
- D . Create an AWS CloudTrail trail. Enable S3 data events on the trail. Create an AWS Lambda function that applies a bucket policy to deny requests when the value of the aws:SecureTransport condition key is False. Configure the CloudTrail trail to invoke the Lambda function.
B
Explanation:
To enforce encryption in transit for Amazon S3, AWS best practice is to require HTTPS (TLS) by using a bucket policy condition that denies any request where aws:SecureTransport is false. The requirement includes both existing buckets and future buckets, so the control must continuously evaluate configuration drift and automatically remediate.
AWS Config is the service intended for continuous configuration compliance monitoring across resources, and AWS Config managed rules provide standardized checks with low operational overhead. The s3-bucket-ssl-requests-only managed rule evaluates whether S3 buckets enforce SSL-only requests, aligning directly with enforcing encryption in transit. Setting the trigger type to Hybrid ensures evaluation both on configuration changes and periodically. Automatic remediation with an AWS Systems Manager Automation runbook allows the organization to apply or correct the bucket policy consistently at scale without manual work. This approach also supports governance by maintaining a measurable compliance status while actively fixing noncompliance.
Option A is not the best fit because a “proactive” custom policy rule does not by itself remediate existing buckets, and “block resource creation” is not how AWS Config enforces controls. Option C is incorrect because Amazon Inspector is a vulnerability management service and does not govern S3 bucket transport policies. Option D is inefficient and indirect because CloudTrail data events are not a compliance engine and would require custom processing.
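The bucket policy statement that the remediation runbook would apply can be sketched directly. The bucket name below is a placeholder; the condition key and deny pattern follow the standard SSL-only policy shape.

```python
import json

def ssl_only_statement(bucket):
    """Deny statement that blocks any non-TLS request to the bucket."""
    return {
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        # Cover both bucket-level and object-level operations.
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        # Matches requests made over plain HTTP (SecureTransport is false).
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }

policy = {
    "Version": "2012-10-17",
    "Statement": [ssl_only_statement("example-bucket")],  # placeholder name
}
print(json.dumps(policy, indent=2))
```

Because the statement is an explicit Deny, it overrides any Allow that would otherwise permit an HTTP request.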
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
AWS Config Managed Rules for S3 Compliance
Amazon S3 Security Best Practices for SSL-only Access
A security engineer needs to implement a solution to create and control the keys that a company uses for cryptographic operations. The security engineer must create symmetric keys in which the key material is generated and used within a custom key store that is backed by an AWS CloudHSM cluster. The security engineer will use symmetric and asymmetric data key pairs for local use within applications. The security engineer also must audit the use of the keys.
How can the security engineer meet these requirements?
- A . To create the keys, use AWS Key Management Service (AWS KMS) and the custom key stores with the CloudHSM cluster. For auditing, use Amazon Athena.
- B . To create the keys, use Amazon S3 and the custom key stores with the CloudHSM cluster. For auditing, use AWS CloudTrail.
- C . To create the keys, use AWS Key Management Service (AWS KMS) and the custom key stores with the CloudHSM cluster. For auditing, use Amazon GuardDuty.
- D . To create the keys, use AWS Key Management Service (AWS KMS) and the custom key stores with the CloudHSM cluster. For auditing, use AWS CloudTrail.
D
Explanation:
The requirement is to have key material generated and used inside a custom key store backed by an AWS CloudHSM cluster. This is exactly what AWS KMS custom key stores provide: KMS manages the keys and policies, but the cryptographic operations for those KMS keys occur in the associated CloudHSM cluster, keeping the key material within HSM boundaries. For applications that need local-use data keys (both symmetric data keys and asymmetric data key pairs), KMS supports generating data keys and data key pairs that applications can use for envelope encryption and local cryptographic operations, while the master key protections remain within KMS (and within CloudHSM when using a custom key store).
For auditing, AWS best practice is AWS CloudTrail, which records KMS API calls (such as CreateKey, GenerateDataKey, GenerateDataKeyPair, Encrypt/Decrypt, etc.) and provides an immutable event history for compliance and investigation. Athena can query logs, but it is not the primary audit record source; GuardDuty is for threat detection, not authoritative key-usage auditing. Therefore, the correct combination is KMS with a CloudHSM-backed custom key store plus CloudTrail for auditability.
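The auditing half of the requirement can be illustrated by filtering CloudTrail records for cryptographic use of a specific key. The sample records below are fabricated for illustration; the event names are standard KMS API names, and the key ARN is a placeholder.

```python
# KMS API calls that indicate actual cryptographic use of a key (as opposed
# to management or read-only calls such as ListKeys or DescribeKey).
KMS_USAGE_EVENTS = {"GenerateDataKey", "GenerateDataKeyPair", "Encrypt", "Decrypt"}

def key_usage_events(records, key_arn):
    """Return CloudTrail records that show cryptographic use of key_arn."""
    return [
        r for r in records
        if r.get("eventSource") == "kms.amazonaws.com"
        and r.get("eventName") in KMS_USAGE_EVENTS
        and any(res.get("ARN") == key_arn for res in r.get("resources", []))
    ]

# Illustrative sample records, not real CloudTrail output.
sample = [
    {"eventSource": "kms.amazonaws.com", "eventName": "GenerateDataKey",
     "resources": [{"ARN": "arn:aws:kms:us-east-1:111122223333:key/k1"}]},
    {"eventSource": "kms.amazonaws.com", "eventName": "ListKeys",
     "resources": []},
]
```

In practice the records would come from the CloudTrail event history or from trail logs delivered to S3.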
A company needs the ability to identify the root cause of security findings in an AWS account. The company has enabled VPC Flow Logs, Amazon GuardDuty, and AWS CloudTrail. The company must investigate any IAM roles that are involved in the security findings and must visualize the findings.
Which solution will meet these requirements?
- A . Use Amazon Detective to run investigations on the IAM roles and to visualize the findings.
- B . Use Amazon Inspector to run investigations on the IAM roles and visualize the findings.
- C . Export GuardDuty findings to Amazon S3 and analyze them with Amazon Athena.
- D . Enable AWS Security Hub and use custom actions to investigate IAM roles.
A
Explanation:
Amazon Detective is a managed service designed specifically to investigate and analyze security findings by automatically correlating data from Amazon GuardDuty, AWS CloudTrail, and VPC Flow Logs. According to the AWS Certified Security – Specialty Official Study Guide, Detective enables security teams to identify root causes, anomalous behavior, and indicators of compromise through interactive visualizations.
Amazon Detective allows investigators to pivot directly to IAM roles, users, and resources that are involved in GuardDuty findings. Detective builds behavior graphs and timelines that show API activity, network traffic, and historical context, making it easier to understand how and why a security incident occurred.
Amazon Inspector (Option B) focuses on vulnerability scanning of compute resources and does not investigate IAM behavior. Option C requires manual analysis and lacks native visualization. AWS Security Hub (Option D) aggregates findings but does not perform root-cause investigation or behavioral analysis.
AWS documentation explicitly states that Amazon Detective is the recommended service for deep-dive investigations following GuardDuty alerts, especially when IAM roles are involved.
AWS Certified Security – Specialty Official Study Guide
Amazon Detective User Guide
Amazon GuardDuty Integration Documentation
A company runs a global ecommerce website using Amazon CloudFront. The company must block traffic from specific countries to comply with data regulations.
Which solution will meet these requirements MOST cost-effectively?
- A . Use AWS WAF IP match rules.
- B . Use AWS WAF geo match rules.
- C . Use CloudFront geo restriction to deny the countries.
- D . Use geolocation headers in CloudFront.
C
Explanation:
Amazon CloudFront includes a built-in geo restriction feature that allows content to be allowed or denied based on the viewer’s country. According to AWS Certified Security – Specialty documentation, CloudFront geo restriction is the most cost-effective method for country-based blocking because it does not require AWS WAF or additional rule processing.
AWS WAF geo match rules incur additional cost and are more appropriate when advanced inspection or layered security controls are required. IP-based blocking is impractical due to frequent IP changes. Geolocation headers do not enforce access control.
CloudFront geo restriction is evaluated at the edge and efficiently blocks disallowed countries with minimal latency and cost.
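The geo restriction block inside a CloudFront distribution configuration has the shape sketched below. The country codes listed are placeholders for whatever the regulations require.

```python
# Shape of the Restrictions element passed to CloudFront's UpdateDistribution.
# "blacklist" denies the listed countries; "whitelist" would allow only them.
geo_restriction = {
    "Restrictions": {
        "GeoRestriction": {
            "RestrictionType": "blacklist",
            "Quantity": 2,              # must equal len(Items)
            "Items": ["KP", "IR"],      # placeholder ISO 3166-1 country codes
        }
    }
}
```

Viewers from a blocked country receive a 403 response directly from the edge location, with no origin or WAF cost incurred.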
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
Amazon CloudFront Geo Restriction
AWS Edge Security Best Practices
A company uses AWS Organizations to manage an organization that consists of three workload OUs: Production, Development, and Testing. The company uses AWS CloudFormation templates to define and deploy workload infrastructure in AWS accounts that are associated with the OUs. Different SCPs are attached to each workload OU.
The company successfully deployed a CloudFormation stack update to workloads in the Development OU and the Testing OU. When the company uses the same CloudFormation template to deploy the stack update in an account in the Production OU, the update fails. The error message reports insufficient IAM permissions.
What is the FIRST step that a security engineer should take to troubleshoot this issue?
- A . Review the AWS CloudTrail logs in the account in the Production OU. Search for any failed API calls from CloudFormation during the deployment attempt.
- B . Remove all the SCPs that are attached to the Production OU. Rerun the CloudFormation stack update to determine if the SCPs were preventing the CloudFormation API calls.
- C . Confirm that the role used by CloudFormation has sufficient permissions to create, update, and delete the resources that are referenced in the CloudFormation template.
- D . Make all the SCPs that are attached to the Production OU the same as the SCPs that are attached to the Testing OU.
A
Explanation:
AWS CloudTrail provides a record of all API calls made in an AWS account, including calls initiated by AWS CloudFormation. According to the AWS Certified Security – Specialty Study Guide, CloudTrail is the primary source for troubleshooting authorization failures because it records denied actions and the policy type that caused the denial, including service control policies.
Reviewing CloudTrail logs allows a security engineer to identify which specific API calls failed during the CloudFormation deployment and whether the denial was caused by an SCP, an IAM policy, or a permission boundary. This evidence-based approach is the recommended first step before making any configuration changes.
Option B is unsafe and violates governance best practices by removing SCPs in production. Option C may be necessary later, but it does not identify whether SCPs are the root cause. Option D introduces unnecessary risk and bypasses the purpose of differentiated controls across OUs.
AWS documentation emphasizes observing and validating before modifying security controls, making CloudTrail log analysis the correct initial troubleshooting step.
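The first troubleshooting step described above can be sketched as a filter over CloudTrail records that surfaces denied calls and distinguishes SCP denials from IAM denials by the error message. The sample records are fabricated for illustration.

```python
def failed_calls(records):
    """Return (eventName, cause) for each access-denied CloudTrail record."""
    out = []
    for r in records:
        if r.get("errorCode") in ("AccessDenied", "UnauthorizedOperation"):
            msg = r.get("errorMessage", "")
            # SCP denials name the policy type explicitly in the message.
            cause = "SCP" if "service control policy" in msg else "IAM"
            out.append((r["eventName"], cause))
    return out

# Illustrative sample records, not real CloudTrail output.
sample = [
    {"eventName": "CreateRole", "errorCode": "AccessDenied",
     "errorMessage": "User is not authorized to perform iam:CreateRole "
                     "with an explicit deny in a service control policy"},
    {"eventName": "DescribeStacks"},  # successful call, no errorCode
]
```

Running this over the Production account's CloudTrail events during the failed deployment window pinpoints which API calls CloudFormation could not make and why, before any SCP or role is modified.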
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
AWS Organizations Service Control Policies
AWS CloudTrail Authorization Failure Analysis
A healthcare company stores more than 1 million patient records in an Amazon S3 bucket. The patient records include personally identifiable information (PII). The S3 bucket contains hundreds of terabytes of data.
A security engineer receives an alert that was triggered by an Amazon GuardDuty Exfiltration:S3/AnomalousBehavior finding. The security engineer confirms that an attacker is using temporary credentials that were obtained from a compromised Amazon EC2 instance that has s3:GetObject permissions for the S3 bucket. The attacker has begun downloading the contents of the bucket. The security engineer contacts a development team. The development team will require 4 hours to implement and deploy a fix.
The security engineer must take immediate action to prevent the attacker from downloading more data from the S3 bucket.
Which solution will meet this requirement?
- A . Revoke the temporary session that is associated with the instance profile that is attached to the EC2 instance.
- B . Quarantine the EC2 instance by replacing the existing security group with a new security group that has no rules applied.
- C . Enable Amazon Macie on the S3 bucket. Configure the managed data identifiers for personally identifiable information (PII). Enable S3 Object Lock on objects that Macie flags.
- D . Apply an S3 bucket policy temporarily. Configure the policy to deny read access for all principals to block downloads while the development team addresses the vulnerability.
A
Explanation:
Amazon GuardDuty Exfiltration:S3/AnomalousBehavior findings indicate that S3 data access patterns are consistent with data exfiltration. In this scenario, the attacker is using temporary credentials obtained from an EC2 instance profile, which are issued by AWS Security Token Service (STS).
According to AWS Certified Security – Specialty documentation, the fastest and most targeted remediation is to revoke the temporary session credentials associated with the compromised instance profile. This can be accomplished by removing or modifying the IAM role permissions, detaching the instance profile, or stopping the instance, which immediately invalidates the temporary credentials and prevents further S3 access.
Option B may limit outbound traffic but does not invalidate already issued credentials. Option C is a detection and classification service and does not prevent active exfiltration. Option D would block all access to the bucket, including legitimate access, and is considered overly disruptive for incident containment.
AWS incident response best practices emphasize credential revocation as the first containment step when compromise of temporary credentials is confirmed.
AWS Certified Security – Specialty Official Study Guide
Amazon GuardDuty User Guide – S3 Protection
AWS STS and IAM Role Security Documentation
AWS Incident Response Best Practices
A company is operating an open-source software platform that is internet facing. The legacy software platform no longer receives security updates. The software platform operates using Amazon Route 53 weighted load balancing to send traffic to two Amazon EC2 instances that connect to an Amazon RDS cluster. A recent report suggests this software platform is vulnerable to SQL injection attacks, with samples of attacks provided. The company’s security engineer must secure this system against SQL injection attacks within 24 hours. The security engineer’s solution must involve the least amount of effort and maintain normal operations during implementation.
What should the security engineer do to meet these requirements?
- A . Create an Application Load Balancer with the existing EC2 instances as a target group. Create an AWS WAF web ACL containing rules that protect the application from this attack, then apply it to the ALB. Test to ensure the vulnerability has been mitigated, then redirect the Route 53 records to point to the ALB. Update security groups on the EC2 instances to prevent direct access from the internet.
- B . Create an Amazon CloudFront distribution specifying one EC2 instance as an origin. Create an AWS WAF web ACL containing rules that protect the application from this attack, then apply it to the distribution. Test to ensure the vulnerability has been mitigated, then redirect the Route 53 records to point to CloudFront.
- C . Obtain the latest source code for the platform and make the necessary updates. Test the updated code to ensure that the vulnerability has been mitigated, then deploy the patched version of the platform to the EC2 instances.
- D . Update the security group that is attached to the EC2 instances, removing access from the internet to the TCP port used by the SQL database. Create an AWS WAF web ACL containing rules that protect the application from this attack, then apply it to the EC2 instances. Test to ensure the vulnerability has been mitigated, then restore the security group to the original setting.
A
Explanation:
The fastest, least-effort way to mitigate SQL injection at the edge, without modifying legacy application code, is to place the application behind a component that supports AWS WAF and apply managed SQL injection protections. An Application Load Balancer integrates directly with AWS WAF, allowing the security engineer to deploy a web ACL (including AWS Managed Rules for SQL injection and custom patterns based on the provided samples) and immediately start blocking malicious payloads before they reach the EC2 instances and the database.
Option A also preserves normal operations during rollout: you can create the ALB, register the existing EC2 instances as targets, validate health checks and traffic behavior, apply WAF protections, and then shift Route 53 weighted records to the ALB with minimal downtime. Finally, tightening the EC2 security groups to prevent direct internet access ensures all inbound web traffic is forced through the ALB + WAF inspection point, reducing exposure quickly.
Option B is risky because it uses only one EC2 origin (reducing availability) and adds CloudFront origin configuration complexity under a 24-hour deadline. Option C requires code changes on unsupported software and is unlikely to be safely delivered in time. Option D is invalid because AWS WAF cannot be attached directly to EC2 instances, and changing DB-port exposure doesn’t address SQL injection on the web layer.
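The web ACL rule that attaches the AWS managed SQL-injection rule group can be sketched as follows. The rule name, priority, and metric name are illustrative choices; the managed rule group reference follows the WAFv2 request shape for an ALB-scoped (REGIONAL) web ACL.

```python
# WAFv2 rule referencing the AWS managed SQL-injection rule group, as it
# would appear in the Rules list of a CreateWebACL request with
# Scope="REGIONAL" (required when associating the web ACL with an ALB).
sqli_rule = {
    "Name": "AWS-SQLiRuleSet",          # illustrative rule name
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }
    },
    # Managed rule groups use OverrideAction rather than Action;
    # "None" keeps the rule group's own block actions in effect.
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "SQLiRuleSet",    # illustrative metric name
    },
}
```

The sampled-request and CloudWatch metrics make it easy to validate, using the attack samples from the report, that the rule is blocking the payloads before Route 53 traffic is shifted to the ALB.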
