Practice Free SCS-C03 Exam Online Questions
A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company wants to centrally give users the ability to access Amazon Q Developer.
Which solution will meet this requirement?
- A . Enable AWS IAM Identity Center and set up Amazon Q Developer as an AWS managed application.
- B . Enable Amazon Cognito and create a new identity pool for Amazon Q Developer.
- C . Enable Amazon Cognito and set up Amazon Q Developer as an AWS managed application.
- D . Enable AWS IAM Identity Center and create a new identity pool for Amazon Q Developer.
A
Explanation:
AWS IAM Identity Center is the recommended service for centrally managing workforce access across multiple AWS accounts within an organization. According to AWS Certified Security - Specialty documentation, Amazon Q Developer integrates natively with IAM Identity Center as an AWS managed application.
By enabling IAM Identity Center and assigning Amazon Q Developer to users or groups, the company can centrally control access using permission sets and organizational boundaries. This approach provides centralized authentication, authorization, and auditing with minimal overhead.
Amazon Cognito is intended for customer and application user authentication, not workforce access to AWS services. Identity pools are not applicable to IAM Identity Center integrations.
AWS best practices clearly recommend IAM Identity Center for workforce access to AWS-managed applications.
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS IAM Identity Center Integrations
Amazon Q Developer Access Management
A company is building a secure solution that relies on an AWS Key Management Service (AWS KMS) customer managed key. The company wants to allow AWS Lambda to use the KMS key. However, the company wants to prevent Amazon EC2 from using the key.
Which solution will meet these requirements?
- A . Use IAM explicit deny for EC2 instance profiles and allow for Lambda roles.
- B . Use a KMS key policy with kms:ViaService conditions to allow Lambda usage and deny EC2 usage.
- C . Use aws:SourceIp and aws:AuthorizedService condition keys in the KMS key policy.
- D . Use an SCP to deny EC2 and allow Lambda.
B
Explanation:
AWS KMS access control is primarily enforced through key policies (and optionally grants), and AWS recommends using key policy condition keys to restrict how keys can be used. The kms:ViaService condition key is specifically designed to restrict KMS API usage to requests that come through a particular AWS service endpoint in a specific Region. This is the most robust way to ensure a key can be used only via AWS Lambda (for example, lambda.<region>.amazonaws.com) and not via Amazon EC2 (ec2.<region>.amazonaws.com), even if IAM permissions exist elsewhere. By writing a key policy that uses the Lambda execution role as the principal and conditions on kms:ViaService, the company can tightly bind key usage to Lambda-originated cryptographic operations while preventing use through EC2 service paths.
Option A is weaker because EC2 is not the only way an IAM principal might use KMS, and relying on attaching explicit deny policies broadly is harder to manage and can miss principals.
Option C is incorrect because aws:AuthorizedService is not the typical mechanism for KMS service restriction, and SourceIp is unreliable for service-to-service calls.
Option D is not ideal because SCPs do not provide fine-grained service-path restrictions for KMS usage and cannot “allow” beyond IAM; key policy controls still apply.
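A key policy statement of this shape illustrates the kms:ViaService restriction described above. This is a minimal sketch: the account ID, role name, and Region are placeholders, and the action list would be trimmed to what the workload actually needs.

```python
import json

# Hypothetical values for illustration only.
REGION = "us-east-1"
LAMBDA_ROLE_ARN = "arn:aws:iam::111122223333:role/app-lambda-role"

# Key policy statement: the Lambda execution role may use the key, but only
# for requests that arrive through the AWS Lambda service endpoint in REGION.
allow_via_lambda = {
    "Sid": "AllowUseViaLambdaOnly",
    "Effect": "Allow",
    "Principal": {"AWS": LAMBDA_ROLE_ARN},
    "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
    "Resource": "*",
    "Condition": {
        "StringEquals": {"kms:ViaService": f"lambda.{REGION}.amazonaws.com"}
    },
}

key_policy = {"Version": "2012-10-17", "Statement": [allow_via_lambda]}
print(json.dumps(key_policy, indent=2))
```

Because the condition is evaluated inside the key policy itself, an EC2 instance role with broad IAM permissions still cannot use the key: its requests arrive via the EC2 service path, which fails the StringEquals check.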
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS KMS Key Policies and Condition Keys
AWS KMS Best Practices for Service-Scoped Key Usage
A company uses AWS Organizations and has an SCP at the root that prevents sharing resources with external accounts. The company now needs to allow only the marketing account to share resources externally while preventing all other accounts from doing so. All accounts are in the same OU.
Which solution will meet these requirements?
- A . Create a new SCP in the marketing account to explicitly allow sharing.
- B . Edit the existing SCP to add a condition that excludes the marketing account.
- C . Edit the SCP to include an Allow statement for the marketing account.
- D . Use a permissions boundary in the marketing account.
B
Explanation:
Service control policies (SCPs) define the maximum available permissions for accounts and are evaluated as guardrails. AWS Certified Security - Specialty documentation states that SCPs are typically used to apply organization-wide restrictions, and exceptions are commonly handled by using conditions (for example, excluding specific accounts) or by structuring OUs differently. Because all accounts are in the same OU and the company must continue blocking external sharing for everyone except one account, modifying the existing SCP to exclude the marketing account is the most direct solution. An SCP attached at the root affects all accounts unless conditions narrow its scope. Adding a condition that excludes the marketing account allows that account to retain the ability to share resources externally while the SCP continues to block sharing for other accounts.
Option A is not feasible because account-level SCPs cannot override a deny applied by a parent SCP; explicit denies always win.
Option C misunderstands SCP behavior because SCPs do not grant permissions; they only limit.
Option D is an IAM control that cannot override an organization-level deny. Therefore, the only secure, scalable option is to modify the existing SCP with an exception condition for the marketing account.
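The exception pattern above can be sketched as an SCP document. This is illustrative only: the account ID is a placeholder, and the AWS RAM actions shown stand in for whatever sharing actions the company's existing SCP denies.

```python
import json

# Hypothetical marketing account ID for illustration.
MARKETING_ACCOUNT_ID = "444455556666"

# Root-attached SCP: deny resource sharing for every account in scope
# EXCEPT the marketing account, via an aws:PrincipalAccount condition.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyExternalSharingExceptMarketing",
            "Effect": "Deny",
            "Action": ["ram:CreateResourceShare", "ram:AssociateResourceShare"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:PrincipalAccount": MARKETING_ACCOUNT_ID}
            },
        }
    ],
}
print(json.dumps(scp, indent=2))
```

The Deny statement simply never matches requests made from the marketing account, so that account falls back to whatever its IAM policies allow, while every other account remains blocked.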
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS Organizations SCP Evaluation Logic
SCP Deny Precedence and Exception Patterns
A company hosts its public website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website is experiencing a global DDoS attack from a specific IoT device brand that uses a unique user agent. A security engineer is creating an AWS WAF web ACL and will associate it with the ALB.
Which rule statement will mitigate the current attack and future attacks from these IoT devices without blocking legitimate customers?
- A . Use an IP set match rule statement.
- B . Use a geographic match rule statement.
- C . Use a rate-based rule statement.
- D . Use a string match rule statement on the user agent.
D
Explanation:
AWS WAF string match rule statements allow inspection of HTTP headers, including the User-Agent header. According to AWS Certified Security - Specialty guidance, when malicious traffic can be uniquely identified by a consistent request attribute, such as a device-specific user agent, a string match rule provides precise mitigation with minimal false positives.
IP-based blocking is ineffective for globally distributed botnets. Geographic blocking risks denying access to legitimate users. Rate-based rules limit request volume but do not prevent low-and-slow attacks.
By matching the unique IoT device brand in the User-Agent header, the security engineer can block only malicious requests while preserving customer access.
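Such a rule can be sketched in the JSON shape that the AWS WAF API accepts. The search string "badiotbrand" is a placeholder for the real device's user-agent token, and the rule name and metric name are made up for the example.

```python
# Sketch of a WAF web ACL rule (WAFV2 JSON shape) that blocks requests whose
# User-Agent header contains a device-specific token. The token, rule name,
# and metric name are placeholders.
waf_rule = {
    "Name": "BlockIoTBrandUserAgent",
    "Priority": 0,
    "Statement": {
        "ByteMatchStatement": {
            # Lowercased to match after the LOWERCASE text transformation.
            "SearchString": "badiotbrand",
            "FieldToMatch": {"SingleHeader": {"Name": "user-agent"}},
            "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
            "PositionalConstraint": "CONTAINS",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockIoTBrandUserAgent",
    },
}
```

The LOWERCASE transformation plus CONTAINS constraint makes the match robust to casing and version-suffix changes in the device firmware, which is why it also covers future attacks from the same brand.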
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS WAF Rule Statements
AWS DDoS Mitigation Best Practices
A security administrator is setting up a new AWS account. The security administrator wants to secure the data that a company stores in an Amazon S3 bucket. The security administrator also wants to reduce the chance of unintended data exposure and the potential for misconfiguration of objects that are in the S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure the S3 Block Public Access feature for the AWS account.
- B . Configure the S3 Block Public Access feature for all objects that are in the bucket.
- C . Deactivate ACLs for objects that are in the bucket.
- D . Use AWS PrivateLink for Amazon S3 to access the bucket.
A
Explanation:
Amazon S3 Block Public Access configured at the AWS account level is the recommended and most effective approach to protect data stored in Amazon S3 while minimizing operational overhead. AWS Security Specialty documentation explains that S3 Block Public Access provides centralized, preventative controls designed to block public access to S3 buckets and objects regardless of individual bucket policies or object-level ACL configurations. When enabled at the account level, these controls automatically apply to all existing and newly created buckets, significantly reducing the risk of accidental exposure caused by misconfigured permissions.
The AWS Certified Security - Specialty Study Guide emphasizes that public access misconfiguration is a leading cause of data leaks in cloud environments. Account-level S3 Block Public Access acts as a guardrail by overriding any attempt to grant public permissions through bucket policies or ACLs. This eliminates the need to manage security settings on a per-bucket or per-object basis, thereby reducing administrative complexity and human error.
Configuring Block Public Access at the object level, as in option B, requires continuous monitoring and manual configuration, which increases operational overhead. Disabling ACLs alone, as described in option C, does not fully prevent public access because bucket policies can still allow public permissions. Using AWS PrivateLink, as in option D, controls network access but does not protect against public exposure through misconfigured S3 policies.
AWS security best practices explicitly recommend enabling S3 Block Public Access at the account level as the primary mechanism for preventing unintended public data exposure with minimal management effort.
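The account-level setting is a single API call. The sketch below builds the four-flag configuration and shows the boto3 call commented out, so it runs without AWS credentials; the account ID is a placeholder.

```python
# Account-level S3 Block Public Access: all four controls enabled.
ACCOUNT_ID = "111122223333"  # hypothetical account ID

public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore existing public ACLs
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # restrict access to already-public buckets
}

# The actual call (requires credentials), via the s3control client:
# import boto3
# s3control = boto3.client("s3control")
# s3control.put_public_access_block(
#     AccountId=ACCOUNT_ID,
#     PublicAccessBlockConfiguration=public_access_block,
# )
```

Once set, the configuration applies to every existing and future bucket in the account, which is what makes option A lower-overhead than per-bucket or per-object controls.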
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
Amazon S3 Security Best Practices Documentation
Amazon S3 Block Public Access Overview
AWS Well-Architected Framework - Security Pillar
A company must immediately disable compromised IAM users across all AWS accounts and collect all actions performed by the user in the last 7 days.
Which solution will meet these requirements?
- A . Disable the IAM user and query CloudTrail logs in Amazon S3 using Athena.
- B . Remove IAM policies and query logs in Security Hub.
- C . Remove permission sets and query logs using CloudWatch Logs Insights.
- D . Disable the user in IAM Identity Center and query the organizational event data store.
D
Explanation:
AWS IAM Identity Center centrally manages user access across an AWS Organization. Disabling the user in Identity Center immediately revokes access to all AWS accounts. According to AWS Certified Security - Specialty documentation, organizational CloudTrail event data stores provide centralized, queryable access to all events across accounts.
Using CloudTrail Lake enables direct querying of activity without exporting logs. Disabling the user at the Identity Center level ensures full containment.
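A CloudTrail Lake investigation query might look like the sketch below. The event data store ID, user name, and timestamp are placeholders, and the exact SQL functions available vary by CloudTrail Lake version, so treat this as a shape rather than copy-paste syntax.

```python
# Sketch: query an organization event data store for one user's activity.
EVENT_DATA_STORE_ID = "EXAMPLE-EDS-ID"   # placeholder event data store ID
COMPROMISED_USER = "alice"               # placeholder user name

query = f"""
SELECT eventTime, eventSource, eventName, recipientAccountId
FROM {EVENT_DATA_STORE_ID}
WHERE userIdentity.userName = '{COMPROMISED_USER}'
  AND eventTime > '2025-01-01 00:00:00'  -- substitute "now minus 7 days"
ORDER BY eventTime DESC
"""

# The actual call (requires credentials):
# import boto3
# cloudtrail = boto3.client("cloudtrail")
# response = cloudtrail.start_query(QueryStatement=query)
```

Because the event data store already aggregates events from every account in the organization, no per-account log export or Athena table setup is needed.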
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS IAM Identity Center Incident Response
AWS CloudTrail Lake
A company has several Amazon S3 buckets that do not enforce encryption in transit. A security engineer must implement a solution that enforces encryption in transit for all the company’s existing and future S3 buckets.
Which solution will meet these requirements?
- A . Enable AWS Config. Create a proactive AWS Config Custom Policy rule. Create a Guard clause to evaluate the S3 bucket policies to check for a value of True for the aws:SecureTransport condition key. If the AWS Config rule evaluates to NON_COMPLIANT, block resource creation.
- B . Enable AWS Config. Configure the s3-bucket-ssl-requests-only AWS Config managed rule and set the rule trigger type to Hybrid. Create an AWS Systems Manager Automation runbook that applies a bucket policy to deny requests when the value of the aws:SecureTransport condition key is False. Configure automatic remediation. Set the runbook as the target of the rule.
- C . Enable Amazon Inspector. Create a custom AWS Lambda rule. Create a Lambda function that applies a bucket policy to deny requests when the value of the aws:SecureTransport condition key is False. Set the Lambda function as the target of the rule.
- D . Create an AWS CloudTrail trail. Enable S3 data events on the trail. Create an AWS Lambda function that applies a bucket policy to deny requests when the value of the aws:SecureTransport condition key is False. Configure the CloudTrail trail to invoke the Lambda function.
B
Explanation:
To enforce encryption in transit for Amazon S3, AWS best practice is to require HTTPS (TLS) by using a bucket policy condition that denies any request where aws:SecureTransport is false. The requirement includes both existing buckets and future buckets, so the control must continuously evaluate configuration drift and automatically remediate. AWS Config is the service intended for continuous configuration compliance monitoring across resources, and AWS Config managed rules provide standardized checks with low operational overhead. The s3-bucket-ssl-requests-only managed rule evaluates whether S3 buckets enforce SSL-only requests, aligning directly with enforcing encryption in transit. Setting the trigger type to Hybrid ensures evaluation both on configuration changes and periodically. Automatic remediation with an AWS Systems Manager Automation runbook allows the organization to apply or correct the bucket policy consistently at scale without manual work. This approach also supports governance by maintaining a measurable compliance status while actively fixing noncompliance.
Option A is not the best fit because a “proactive” custom policy rule does not by itself remediate existing buckets and “block resource creation” is not how AWS Config enforces controls.
Option C is incorrect because Amazon Inspector is a vulnerability management service and does not govern S3 bucket transport policies.
Option D is inefficient and indirect because CloudTrail data events are not a compliance engine and would require custom processing.
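The bucket policy that the remediation runbook applies is the standard SSL-only pattern. The bucket name below is a placeholder.

```python
import json

BUCKET = "example-bucket"  # placeholder bucket name

# Bucket policy applied by the remediation runbook: deny every request that
# does not arrive over TLS (aws:SecureTransport is "false" for plain HTTP).
ssl_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
print(json.dumps(ssl_only_policy, indent=2))
```

Note that the Resource list covers both the bucket ARN and the object ARN pattern, so bucket-level operations (such as ListBucket) are also forced onto TLS.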
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS Config Managed Rules for S3 Compliance
Amazon S3 Security Best Practices for SSL-only Access
A security team manages a company’s AWS Key Management Service (AWS KMS) customer managed keys. Only members of the security team can administer the KMS keys. The company’s application team has a software process that needs temporary access to the keys occasionally. The security team needs to provide the application team’s software process with access to the keys.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Export the KMS key material to an on-premises hardware security module (HSM). Give the application team access to the key material.
- B . Edit the key policy that grants the security team access to the KMS keys by adding the application team as principals. Revert this change when the application team no longer needs access.
- C . Create a key grant to allow the application team to use the KMS keys. Revoke the grant when the application team no longer needs access.
- D . Create a new KMS key by generating key material on premises. Import the key material to AWS KMS whenever the application team needs access. Grant the application team permissions to use the key.
C
Explanation:
AWS KMS key grants are specifically designed to provide temporary, granular permissions to use customer managed keys without modifying key policies. According to the AWS Certified Security - Specialty Study Guide, grants are the preferred mechanism for delegating key usage permissions to AWS principals for short-term or programmatic access scenarios. Grants allow permissions such as Encrypt, Decrypt, or GenerateDataKey and can be created and revoked dynamically.
Using a key grant avoids the operational risk and overhead of editing key policies, which are long-term control mechanisms and should remain stable. AWS documentation emphasizes that frequent key policy changes increase the risk of misconfiguration and accidental privilege escalation. Grants can be revoked immediately when access is no longer required, ensuring strong adherence to the principle of least privilege.
Options A and D violate AWS security best practices because AWS KMS does not allow direct export of key material unless the key was explicitly created as an importable key, and exporting key material increases exposure risk.
Option B requires manual policy changes and rollback, which introduces operational overhead and audit complexity.
AWS recommends key grants as the most efficient and secure way to provide temporary access to KMS keys for applications.
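The grant lifecycle is two API calls. In this sketch the key ARN and grantee role ARN are placeholders, and the boto3 calls are commented out so the snippet runs offline.

```python
# Sketch of temporary KMS access via a grant: create when the application
# team needs the key, revoke when it no longer does.
grant_params = {
    "KeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    "GranteePrincipal": "arn:aws:iam::111122223333:role/app-team-role",
    "Operations": ["Encrypt", "Decrypt", "GenerateDataKey"],
}

# The actual calls (require credentials):
# import boto3
# kms = boto3.client("kms")
# grant = kms.create_grant(**grant_params)            # temporary access
# kms.revoke_grant(KeyId=grant_params["KeyId"],
#                  GrantId=grant["GrantId"])          # immediate revocation
```

The key policy itself never changes, which is the operational advantage over option B: there is nothing to roll back, and the grant's scope is limited to the listed operations.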
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS KMS Key Policies and Grants Documentation
AWS KMS Best Practices
An application is running on an Amazon EC2 instance that has an IAM role attached. The IAM role provides access to an AWS Key Management Service (AWS KMS) customer managed key and an Amazon S3 bucket. The key is used to access 2 TB of sensitive data that is stored in the S3 bucket. A security engineer discovers a potential vulnerability on the EC2 instance that could result in the compromise of the sensitive data. Due to other critical operations, the security engineer cannot immediately shut down the EC2 instance for vulnerability patching.
What is the FASTEST way to prevent the sensitive data from being exposed?
- A . Download the data from the existing S3 bucket to a new EC2 instance. Then delete the data from the S3 bucket. Re-encrypt the data with a client-based key. Upload the data to a new S3 bucket.
- B . Block access to the public range of S3 endpoint IP addresses by using a host-based firewall. Ensure that internet-bound traffic from the affected EC2 instance is routed through the host-based firewall.
- C . Revoke the IAM role’s active session permissions. Update the S3 bucket policy to deny access to the IAM role. Remove the IAM role from the EC2 instance profile.
- D . Disable the current key. Create a new KMS key that the IAM role does not have access to, and re-encrypt all the data with the new key. Schedule the compromised key for deletion.
C
Explanation:
AWS incident response best practices emphasize rapid containment to prevent further data exposure. According to the AWS Certified Security - Specialty Study Guide, the fastest and least disruptive containment method for compromised compute resources is to immediately revoke credentials and permissions rather than modifying data or infrastructure.
Revoking the IAM role’s active sessions prevents the EC2 instance from continuing to access AWS services. Updating the S3 bucket policy to explicitly deny access to the IAM role ensures immediate enforcement, even if temporary credentials remain cached. Removing the IAM role from the instance profile further prevents new credentials from being issued.
Option A and D involve large-scale data movement or re-encryption, which is time-consuming and operationally expensive.
Option B relies on network-level controls that do not prevent access through private AWS endpoints.
AWS guidance explicitly recommends credential revocation and policy-based denial as the fastest containment step during active incidents.
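Mechanically, revoking active sessions attaches an inline deny policy to the role that invalidates any credentials issued before a cutoff time. The sketch below builds that policy with the current time as the cutoff.

```python
import json
from datetime import datetime, timezone

# Inline deny policy attached to the role by the "revoke active sessions"
# action: credentials issued before the cutoff are denied all actions.
cutoff = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

revoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"DateLessThan": {"aws:TokenIssueTime": cutoff}},
        }
    ],
}
print(json.dumps(revoke_policy, indent=2))
```

Because the condition keys off aws:TokenIssueTime rather than deleting the role, the instance can later obtain fresh, uncompromised credentials once it is patched, while the cached credentials the attacker may hold are dead immediately.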
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS Incident Response Best Practices
AWS IAM Role Session Management
A company wants to establish separate AWS Key Management Service (AWS KMS) keys to use for different AWS services. The company’s security engineer created a key policy to allow the infrastructure deployment team to create encrypted Amazon Elastic Block Store (Amazon EBS) volumes by assuming the InfrastructureDeployment IAM role. The security engineer recently discovered that IAM roles other than the InfrastructureDeployment role used this key for other services.
Which change to the policy should the security engineer make to resolve these issues?
- A . In the statement block that contains the Sid "Allow use of the key", under the "Condition" block, change StringEquals to StringLike.
- B . In the policy document, remove the statement block that contains the Sid "Enable IAM User Permissions". Add key management policies to the KMS policy.
- C . In the statement block that contains the Sid "Allow use of the key", under the "Condition" block, change the kms:ViaService value to ec2.us-east-1.amazonaws.com.
- D . In the policy document, add a new statement block that grants the kms:Disable* permission to the security engineer’s IAM role.
C
Explanation:
AWS KMS key policies can restrict how and where a key is used by leveraging condition keys such as kms:ViaService. According to the AWS Certified Security - Specialty documentation, kms:ViaService limits key usage to requests that originate from a specific AWS service in a specific Region. If this condition is overly broad or incorrect, other IAM roles and services may unintentionally use the key.
By explicitly setting the kms:ViaService condition value to ec2.us-east-1.amazonaws.com, the key policy ensures that the KMS key can only be used when requests are made through the Amazon EC2 service in that Region, such as for EBS volume encryption. This prevents other services or unintended IAM roles from using the key.
Option A weakens the condition logic and can broaden access.
Option B removes essential permissions that allow IAM policies to function with KMS keys and is not recommended.
Option D relates to administrative control of the key, not service-level usage restrictions.
AWS best practices recommend using kms:ViaService and precise condition values to enforce service-specific key usage and strong separation of duties.
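The corrected condition block from option C is small enough to show directly; written as a Python dict, it would slot into the "Allow use of the key" statement:

```python
# Condition block restricting key usage to requests made through Amazon EC2
# in us-east-1 (for example, when EC2 encrypts or attaches an EBS volume).
condition = {
    "StringEquals": {"kms:ViaService": "ec2.us-east-1.amazonaws.com"}
}
```

With this value in place, a role that calls KMS directly, or through any other service, fails the condition even if its IAM policy allows kms:Encrypt and kms:Decrypt.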
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
AWS KMS Key Policy Condition Keys
AWS KMS Best Practices
