Practice Free SCS-C03 Exam Online Questions
A company experienced a security incident caused by a vulnerable container image that was pushed from an external CI/CD pipeline into Amazon ECR.
Which solution will prevent vulnerable images from being pushed?
- A . Enable ECR enhanced scanning with Lambda blocking.
- B . Use Amazon Inspector with EventBridge and Lambda.
- C . Integrate Amazon Inspector into the CI/CD pipeline using SBOM generation and fail the pipeline on critical findings.
- D . Enable basic continuous ECR scanning.
C
Explanation:
Amazon Inspector provides native CI/CD integration capabilities that allow security checks to occur before container images are pushed to Amazon ECR. According to AWS Certified Security – Specialty documentation, Inspector does not block image pushes automatically. Instead, prevention must occur inside the CI/CD pipeline itself.
By generating a Software Bill of Materials (SBOM) using the Amazon Inspector SBOM generator and submitting it to Inspector for scanning, the pipeline can detect critical vulnerabilities before the image is uploaded. If vulnerabilities exceed policy thresholds, the pipeline fails, preventing deployment.
Post-push scanning solutions only detect vulnerabilities after exposure. Event-driven blocking does not prevent the initial risk.
AWS best practices require “shift-left” security controls to prevent vulnerable artifacts from entering production.
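The shift-left gate can be sketched as a small check that runs in the pipeline before the image push. The findings shape below is a simplified, hypothetical one; a real pipeline would parse the actual Amazon Inspector scan response format.

```python
# Sketch of a shift-left gate: fail the build when the SBOM scan reports
# findings at or above a severity threshold. The findings structure here is
# hypothetical and simplified for illustration.
SEVERITY_ORDER = ["informational", "low", "medium", "high", "critical"]

def has_blocking_findings(findings, threshold="critical"):
    """True if any finding meets or exceeds the severity threshold."""
    cutoff = SEVERITY_ORDER.index(threshold)
    return any(SEVERITY_ORDER.index(f["severity"]) >= cutoff for f in findings)

# A CI stage would call this before `docker push` and exit non-zero when it
# returns True, so the vulnerable image never reaches Amazon ECR.
sample = [{"id": "CVE-2024-0001", "severity": "critical"}]  # example finding
gate_failed = has_blocking_findings(sample)  # True -> fail the pipeline stage
```

In practice the stage would translate `gate_failed` into a non-zero exit code, which is what actually stops the push.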
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
Amazon Inspector CI/CD Integration
A security engineer wants to evaluate configuration changes to a specific AWS resource to ensure that the resource meets compliance standards. However, the security engineer is concerned about a situation in which several configuration changes are made to the resource in quick succession. The security engineer wants to record only the latest configuration of that resource to indicate the cumulative impact of the set of changes.
Which solution will meet this requirement in the MOST operationally efficient way?
- A . Use AWS CloudTrail to detect the configuration changes by filtering API calls to monitor the changes. Use the most recent API call to indicate the cumulative impact of multiple calls.
- B . Use AWS Config to detect the configuration changes and to record the latest configuration in case of multiple configuration changes.
- C . Use Amazon CloudWatch to detect the configuration changes by filtering API calls to monitor the changes. Use the most recent API call to indicate the cumulative impact of multiple calls.
- D . Use AWS Cloud Map to detect the configuration changes. Generate a report of configuration changes from AWS Cloud Map to track the latest state by using a sliding time window.
B
Explanation:
AWS Config is purpose-built to record resource configuration state and track how that state changes over time. When multiple updates occur close together, AWS Config captures configuration items that represent the resource’s current recorded configuration, which is exactly what a compliance engineer needs to evaluate the final (cumulative) state after a burst of changes. AWS Config also integrates directly with compliance evaluation through Config rules, which can continuously assess whether the latest configuration meets required standards. This is the most operationally efficient approach because it avoids building custom log filtering, state reconstruction, or timing logic.
CloudTrail and CloudWatch-based approaches (Options A and C) capture API events, not authoritative “current configuration state.” Reconstructing the final configuration from a series of API calls can be error-prone, especially when changes are made by different services, via consoles, or through chained automation. CloudTrail is excellent for “who did what and when,” but AWS Config is the service that maintains the latest configuration snapshot suitable for compliance posture evaluation.
AWS Cloud Map (Option D) is for service discovery and naming, not compliance configuration history.
Therefore, AWS Config is the correct and most efficient solution.
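As a sketch of how the latest recorded state would be retrieved: AWS Config's configuration history is a list of configuration items, and only the most recent one is needed. The item shape below is simplified, and the resource type/ID in the comment are examples.

```python
from datetime import datetime, timezone

def latest_configuration_item(items):
    """Pick the most recent configuration item from a Config history.

    Each item is a dict with a 'configurationItemCaptureTime' datetime,
    mirroring (in simplified form) what get_resource_config_history returns.
    """
    return max(items, key=lambda i: i["configurationItemCaptureTime"])

# With boto3 (not executed here), the history for one resource comes from:
#   config = boto3.client("config")
#   resp = config.get_resource_config_history(
#       resourceType="AWS::EC2::SecurityGroup",   # example resource type
#       resourceId="sg-0123456789abcdef0",        # example resource ID
#       limit=1,
#   )
# resp["configurationItems"][0] is then the latest recorded state.

history = [
    {"configurationItemCaptureTime": datetime(2024, 5, 1, 10, 0, tzinfo=timezone.utc), "status": "OK"},
    {"configurationItemCaptureTime": datetime(2024, 5, 1, 10, 5, tzinfo=timezone.utc), "status": "OK"},
]
latest = latest_configuration_item(history)  # the 10:05 item
```

The point of the sketch is that no custom reconstruction logic is needed: Config already materializes the cumulative state as the newest configuration item.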
A security engineer needs to prepare a company’s Amazon EC2 instances for quarantine during a security incident. The AWS Systems Manager Agent (SSM Agent) has been deployed to all EC2 instances. The security engineer has developed a script to install and update forensics tools on the EC2 instances.
Which solution will quarantine EC2 instances during a security incident?
- A . Create a rule in AWS Config to track SSM Agent versions.
- B . Configure Systems Manager Session Manager to deny all connection requests from external IP addresses.
- C . Store the script in Amazon S3 and grant read access to the instance profile.
- D . Configure IAM permissions for the SSM Agent to run the script as a predefined Systems Manager Run Command document.
D
Explanation:
AWS Systems Manager Run Command enables security engineers to remotely and securely execute scripts on EC2 instances without requiring SSH or inbound network access. According to AWS Certified Security – Specialty incident response guidance, Run Command is a foundational tool for instance quarantine and forensic preparation.
By configuring IAM permissions that allow the SSM Agent to execute a predefined Run Command document, the security engineer can rapidly deploy forensic tools, disable services, or modify system configurations across affected EC2 instances during an incident. This approach aligns with AWS best practices for containment and evidence preservation, while maintaining auditability through Systems Manager logs.
Option A only provides visibility, not quarantine capability. Option B restricts access but does not allow forensic tooling. Option C enables access to the script but does not execute it.
AWS documentation emphasizes that Systems Manager Run Command is the recommended mechanism for incident response automation and quarantine actions on EC2 instances.
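A minimal sketch of the Run Command invocation, built as plain data so the request shape is visible. The instance ID and the script URL are hypothetical placeholders.

```python
# Parameters for a quarantine Run Command invocation. Instance ID and the
# forensics script location are hypothetical examples.
QUARANTINE_COMMAND = {
    "InstanceIds": ["i-0123456789abcdef0"],      # affected instances
    "DocumentName": "AWS-RunShellScript",        # predefined SSM document
    "Comment": "Incident quarantine: install forensic tooling",
    "Parameters": {
        "commands": [
            # hypothetical internal script location
            "curl -sS https://example.internal/forensics-install.sh | bash"
        ]
    },
}

# With boto3 (not executed here):
#   ssm = boto3.client("ssm")
#   response = ssm.send_command(**QUARANTINE_COMMAND)
#   command_id = response["Command"]["CommandId"]  # for tracking/audit
```

Because the SSM Agent pulls commands from the service, no inbound SSH access to the quarantined instances is required.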
AWS Certified Security – Specialty Official Study Guide
AWS Systems Manager Run Command Documentation
AWS Incident Response Best Practices
A company uses AWS Organizations. The company has teams that use an AWS CloudHSM hardware security module (HSM) that is hosted in a central AWS account. One of the teams creates its own new dedicated AWS account and wants to use the HSM that is hosted in the central account.
How should a security engineer share the HSM that is hosted in the central account with the new dedicated account?
- A . Use AWS Resource Access Manager (AWS RAM) to share the VPC subnet ID of the HSM that is hosted in the central account with the new dedicated account. Configure the CloudHSM security group to accept inbound traffic from the private IP addresses of client instances in the new dedicated account.
- B . Use AWS Identity and Access Management (IAM) to create a cross-account role to access the CloudHSM cluster that is in the central account. Create a new IAM user in the new dedicated account. Assign the cross-account role to the new IAM user.
- C . Use AWS IAM Identity Center to create an AWS Security Token Service (AWS STS) token to authenticate from the new dedicated account to the central account. Use the cross-account permissions that are assigned to the STS token to invoke an operation on the HSM in the central account.
- D . Use AWS Resource Access Manager (AWS RAM) to share the ID of the HSM that is hosted in the central account with the new dedicated account. Configure the CloudHSM security group to accept inbound traffic from the private IP addresses of client instances in the new dedicated account.
D
Explanation:
AWS CloudHSM is a VPC-scoped service: the HSMs (and the CloudHSM cluster) live inside a VPC in the central account, and clients connect over the network to perform cryptographic operations. When another account needs to use a centrally managed CloudHSM cluster, the right approach is to share the CloudHSM cluster with that account and allow network connectivity from the consuming account’s clients. AWS provides cross-account resource sharing through AWS Resource Access Manager (AWS RAM) for supported resources, including CloudHSM clusters. Sharing the cluster/HSM identifier is what grants the consuming account the visibility and ability to create client configurations against that shared cluster.
After sharing, the consuming account’s EC2 instances (CloudHSM clients) still must be able to reach the HSM ENIs over the network, so the CloudHSM security group in the central account must allow inbound connections from the client sources (typically by security group referencing via VPC connectivity, or by allowing the relevant IP range/ports as appropriate).
Option A is incorrect because sharing a subnet ID does not share the CloudHSM resource itself. Options B and C misuse IAM/STS: CloudHSM cryptographic operations are not granted by assuming an IAM role into the central account; access is based on cluster sharing + network connectivity + CloudHSM user authentication at the HSM level.
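The two-part setup (share the cluster, then open the network path) can be sketched as request shapes. All account IDs, ARNs, CIDRs, and the security group ID are hypothetical, and the CloudHSM client port range shown is an assumption based on the classic 2223-2225/tcp client ports.

```python
# Part 1: share the CloudHSM cluster via AWS RAM (identifiers hypothetical).
RESOURCE_SHARE = {
    "name": "central-cloudhsm-share",
    "resourceArns": [
        # hypothetical cluster ARN in the central account
        "arn:aws:cloudhsm:eu-west-1:111122223333:cluster/cluster-1234567890a"
    ],
    "principals": ["444455556666"],     # the new dedicated account
    "allowExternalPrincipals": False,   # keep sharing inside the organization
}

# Part 2: allow client traffic from the consuming account's instances to the
# HSM ENIs. Port range 2223-2225/tcp is the assumed CloudHSM client range.
INGRESS_RULE = {
    "GroupId": "sg-0123456789abcdef0",  # CloudHSM cluster security group
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 2223,
        "ToPort": 2225,
        "IpRanges": [{"CidrIp": "10.20.0.0/16",
                      "Description": "client instances in new account"}],
    }],
}

# With boto3 (not executed here):
#   boto3.client("ram").create_resource_share(**RESOURCE_SHARE)
#   boto3.client("ec2").authorize_security_group_ingress(**INGRESS_RULE)
```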
A company runs a global ecommerce website that is hosted on AWS. The company uses Amazon CloudFront to serve content to its user base. The company wants to block inbound traffic from a specific set of countries to comply with recent data regulation policies.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an AWS WAF web ACL with an IP match condition to deny the countries’ IP ranges. Associate the web ACL with the CloudFront distribution.
- B . Create an AWS WAF web ACL with a geo match condition to deny the specific countries. Associate the web ACL with the CloudFront distribution.
- C . Use the geo restriction feature in CloudFront to deny the specific countries.
- D . Use geolocation headers in CloudFront to deny the specific countries.
C
Explanation:
Amazon CloudFront includes a native geo restriction (geoblocking) capability that allows content owners to control access to their distributions based on the geographic location of the viewer. The viewer’s country is determined using the IP address from which the request originates. According to the AWS Certified Security – Specialty Official Study Guide and the Amazon CloudFront Developer Guide, geo restriction is specifically designed for scenarios where organizations must comply with regional regulations, licensing requirements, or data sovereignty policies.
From a cost perspective, CloudFront geo restriction is the most cost-effective solution because it is configured directly within the CloudFront distribution and does not require AWS WAF. AWS WAF introduces additional costs for web ACLs, rules, and request processing, which is unnecessary when the requirement is limited strictly to blocking or allowing access based on country.
Option A is incorrect because maintaining IP ranges for entire countries is operationally complex, error-prone, and not scalable. Country-level IP ranges frequently change, making this approach unsuitable and inefficient. Option B, although technically valid, is not the most cost-effective choice because AWS WAF geo match rules incur additional charges and are intended for advanced Layer 7 security controls such as application-layer attacks. Option D is incorrect because geolocation headers provided by CloudFront are informational only and cannot independently enforce access control decisions.
AWS documentation explicitly recommends CloudFront geo restriction when the sole requirement is country-based access control, reserving AWS WAF for advanced security inspection and threat mitigation use cases.
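As a sketch, the geo restriction element of a CloudFront `DistributionConfig` looks like the fragment below; the two blocked country codes are examples only.

```python
# Fragment of a CloudFront DistributionConfig showing the geo restriction
# element. Country codes are ISO 3166-1 alpha-2 and are examples only.
GEO_RESTRICTION = {
    "Restrictions": {
        "GeoRestriction": {
            "RestrictionType": "blacklist",  # deny the listed countries;
                                             # "whitelist" would invert this
            "Quantity": 2,                   # must match len(Items)
            "Items": ["RU", "KP"],
        }
    }
}
```

This fragment would be merged into the distribution configuration passed to `UpdateDistribution`; no AWS WAF web ACL (or its per-request charges) is involved.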
AWS Certified Security – Specialty Official Study Guide
Amazon CloudFront Developer Guide – Geo Restriction
AWS Well-Architected Framework – Security Pillar
AWS Security Best Practices Documentation
A corporate cloud security policy states that communications between the company’s VPC and KMS must travel entirely within the AWS network and not use public service endpoints.
Which combination of the following actions MOST satisfies this requirement? (Select TWO.)
- A . Add the aws:sourceVpce condition to the AWS KMS key policy referencing the company’s VPC endpoint ID.
- B . Remove the VPC internet gateway from the VPC and add a virtual private gateway to the VPC to prevent direct, public internet connectivity.
- C . Create a VPC endpoint for AWS KMS with private DNS enabled.
- D . Use the KMS Import Key feature to securely transfer the AWS KMS key over a VPN.
- E . Add the following condition to the AWS KMS key policy: "aws:SourceIp": "10.0.0.0/16".
A,C
Explanation:
To ensure traffic from a VPC to AWS KMS stays on the AWS network and does not use public endpoints, you should use an interface VPC endpoint (AWS PrivateLink) for KMS. Creating a VPC endpoint for KMS with private DNS enabled (Option C) causes standard KMS DNS names (for example, kms.<region>.amazonaws.com) to resolve to the private endpoint IPs inside the VPC, routing requests over the AWS private network rather than through the internet. This is the core networking control that satisfies “no public service endpoints.”
To enforce that only calls that come through the intended VPC endpoint can use the key, add an authorization guardrail in the KMS key policy using the aws:sourceVpce condition (Option A). This ensures that even if a principal has credentials, KMS will deny usage unless the request is made via the specified VPC endpoint, preventing accidental or malicious use over public paths.
Option B is neither necessary nor sufficient: removing an internet gateway does not prevent all public endpoint use (NAT, other egress paths, or other VPCs could still be involved) and can break workloads. Option D is unrelated to runtime KMS API traffic. Option E is weaker because SourceIp checks can be bypassed via other AWS network paths and does not guarantee PrivateLink usage the way sourceVpce does.
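The two controls can be sketched as request/policy shapes. Every VPC, subnet, security group, and endpoint ID below is a hypothetical placeholder.

```python
# Control 1 (Option C): interface VPC endpoint for KMS with private DNS, so
# standard KMS hostnames resolve to private IPs inside the VPC.
KMS_ENDPOINT = {
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.eu-west-1.kms",   # Region-specific service name
    "VpcEndpointType": "Interface",
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "PrivateDnsEnabled": True,
}

# Control 2 (Option A): key policy statement denying use of the key unless
# the request arrives through the specific endpoint (endpoint ID hypothetical).
DENY_OUTSIDE_VPCE = {
    "Sid": "DenyUnlessThroughVpcEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "kms:*",
    "Resource": "*",
    "Condition": {
        "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
    },
}

# With boto3 (not executed here):
#   boto3.client("ec2").create_vpc_endpoint(**KMS_ENDPOINT)
# DENY_OUTSIDE_VPCE would be appended to the key policy's Statement list.
```

Together these give both the private network path and the policy guarantee that the path is actually used.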
A company is running an application in the eu-west-1 Region. The application uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt sensitive data. The company plans to deploy the application in the eu-north-1 Region. A security engineer needs to implement a key management solution for the application deployment in the new Region. The security engineer must minimize changes to the application code.
Which change should the security engineer make to the AWS KMS configuration to meet these requirements?
- A . Update the key policies in eu-west-1. Point the application in eu-north-1 to use the same customer managed key as the application in eu-west-1.
- B . Allocate a new customer managed key to eu-north-1 to be used by the application that is deployed in that Region.
- C . Allocate a new customer managed key to eu-north-1. Create the same alias name for both keys. Configure the application deployment to use the key alias.
- D . Allocate a new customer managed key to eu-north-1. Create an alias for the key in eu-north-1. Change the application code to point to the alias for eu-north-1.
C
Explanation:
AWS KMS keys are regional resources and cannot be used across Regions. According to AWS Certified Security – Specialty documentation, applications that are deployed in multiple Regions should use region-specific customer managed keys while referencing keys by alias instead of key ID.
By creating a new customer managed key in eu-north-1 and assigning it the same alias as the key in eu-west-1, the application code can continue to reference the alias without modification. Each Region resolves the alias to the correct local key, ensuring encryption continues to function correctly.
Option A is invalid because KMS keys are regional. Option B requires application changes. Option D introduces unsupported alias patterns.
AWS best practices recommend alias-based key references for multi-Region deployments.
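A sketch of the alias-per-Region pattern: the same alias name is created in each Region and later bound to that Region's local key. The alias name and key description are examples.

```python
# Same alias name in every Region, each pointing at a Region-local key, so the
# application code only ever references the alias. Alias name is an example.
ALIAS_NAME = "alias/app-data-key"          # KMS aliases must begin with "alias/"
REGIONS = ["eu-west-1", "eu-north-1"]

def alias_requests(alias_name, regions):
    """Build per-Region create_alias request shapes (key IDs bound later)."""
    return {r: {"AliasName": alias_name, "TargetKeyId": None} for r in regions}

requests = alias_requests(ALIAS_NAME, REGIONS)

# With boto3 (not executed here), per Region:
#   kms = boto3.client("kms", region_name=region)
#   key_id = kms.create_key(Description="app key")["KeyMetadata"]["KeyId"]
#   kms.create_alias(AliasName=ALIAS_NAME, TargetKeyId=key_id)
```

Because each Region resolves `alias/app-data-key` to its own key, the application needs no code change when it is deployed in eu-north-1.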
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
AWS KMS Regional Keys and Aliases
AWS KMS Best Practices
A company uses AWS to run a web application that manages ticket sales in several countries. The company recently migrated the application to an architecture that includes Amazon API Gateway, AWS Lambda, and Amazon Aurora Serverless. The company needs the application to comply with Payment Card Industry Data Security Standard (PCI DSS) v4.0. A security engineer must generate a report that shows the effectiveness of the PCI DSS v4.0 controls that apply to the application. The company’s compliance team must be able to add manual evidence to the report.
Which solution will meet these requirements?
- A . Enable AWS Trusted Advisor. Configure all the Trusted Advisor checks. Manually map the checks against the PCI DSS v4.0 standard to generate the report.
- B . Enable and configure AWS Config. Deploy the Operational Best Practices for PCI DSS conformance pack in AWS Config. Use AWS Config to generate the report.
- C . Enable AWS Security Hub. Enable the Security Hub PCI DSS security standard. Use the AWS Management Console to download the report from the security standard.
- D . Create an AWS Audit Manager assessment that uses the AWS managed PCI DSS v4.0 standard framework. Add all evidence to the assessment. Generate the report in Audit Manager for download.
D
Explanation:
AWS Audit Manager is specifically designed to help organizations continuously audit their AWS usage against compliance frameworks and generate audit-ready reports. According to AWS Certified Security – Specialty documentation, Audit Manager includes AWS managed frameworks for compliance standards, including PCI DSS v4.0.
Audit Manager automatically collects evidence from AWS services such as API Gateway, Lambda, RDS, CloudTrail, and Config, and maps the evidence directly to PCI DSS controls. Importantly, Audit Manager allows compliance teams to upload and attach manual evidence, which is a key requirement in this scenario.
Option C provides visibility into control status but does not support adding manual evidence. Option B evaluates configuration compliance but does not generate formal compliance reports. Option A requires extensive manual effort and is not aligned with PCI reporting workflows.
AWS documentation positions Audit Manager as the authoritative service for compliance reporting and audit evidence management.
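As a rough sketch, the assessment creation request might look like the shape below. This is simplified: the framework ID, bucket, and account ID are placeholders, and the real `create_assessment` call takes additional fields (such as audit owner roles) that are omitted here.

```python
# Simplified, hypothetical request shape for an Audit Manager assessment
# against the AWS managed PCI DSS v4.0 framework. The framework ID must be
# looked up from the managed framework list; all identifiers are placeholders.
ASSESSMENT = {
    "name": "pci-dss-v4-ticketing-app",
    "frameworkId": "framework-id-for-pci-dss-v4",   # placeholder
    "assessmentReportsDestination": {
        "destinationType": "S3",
        "destination": "s3://audit-reports-example-bucket",  # example bucket
    },
    "scope": {"awsAccounts": [{"id": "111122223333"}]},      # example account
}
```

Once the assessment exists, automated evidence is collected continuously, and the compliance team can attach manual evidence to individual controls before generating the downloadable report.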
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
AWS Audit Manager PCI DSS Framework
AWS Compliance Reporting Best Practices
A company is planning to migrate its applications to AWS in a single AWS Region. The company’s applications will use a combination of Amazon EC2 instances, Elastic Load Balancing (ELB) load balancers, and Amazon S3 buckets. The company wants to complete the migration as quickly as possible. All the applications must meet the following requirements:
• Data must be encrypted at rest.
• Data must be encrypted in transit.
• Endpoints must be monitored for anomalous network traffic.
Which combination of steps should a security engineer take to meet these requirements with the LEAST effort? (Select THREE.)
- A . Install the Amazon Inspector agent on EC2 instances by using AWS Systems Manager Automation.
- B . Enable Amazon GuardDuty in all AWS accounts.
- C . Create VPC endpoints for Amazon EC2 and Amazon S3. Update VPC route tables to use only the secure VPC endpoints.
- D . Configure AWS Certificate Manager (ACM). Configure the load balancers to use certificates from ACM.
- E . Use AWS Key Management Service (AWS KMS) for key management. Create an S3 bucket policy to deny any PutObject command with a condition for x-amz-meta-side-encryption.
- F . Use AWS Key Management Service (AWS KMS) for key management. Create an S3 bucket policy to deny any PutObject command with a condition for x-amz-server-side-encryption.
B,D,F
Explanation:
Amazon GuardDuty provides continuous monitoring for anomalous and malicious network activity by analyzing VPC Flow Logs, DNS logs, and CloudTrail events. Enabling GuardDuty across accounts requires minimal configuration and immediately satisfies the requirement to monitor endpoints for anomalous network traffic, as described in the AWS Certified Security – Specialty Study Guide.
Encrypting data in transit for applications behind Elastic Load Balancing is most efficiently achieved by using AWS Certificate Manager (ACM). ACM provisions and manages TLS certificates automatically, and integrating ACM with ELB enables encrypted communication without manual certificate management.
For encryption at rest in Amazon S3, AWS best practices recommend enforcing server-side encryption using AWS KMS. An S3 bucket policy that denies PutObject requests unless the x-amz-server-side-encryption condition is present ensures that all uploaded objects are encrypted at rest using KMS-managed keys. This provides strong encryption guarantees with minimal operational effort.
Option A is unnecessary because Amazon Inspector focuses on vulnerability assessment, not encryption or network anomaly detection. Option C adds network complexity and is not required to meet the stated requirements. Option E is incorrect because x-amz-meta-side-encryption is not a valid enforcement mechanism.
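The enforcement half of option F can be sketched as a bucket policy; the bucket name is an example.

```python
import json

# Bucket policy denying any PutObject that does not request SSE-KMS
# server-side encryption. Bucket name is an example.
BUCKET_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-app-bucket/*",
        "Condition": {
            # matches the x-amz-server-side-encryption request header
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
        },
    }],
}

policy_json = json.dumps(BUCKET_POLICY)

# With boto3 (not executed here):
#   boto3.client("s3").put_bucket_policy(
#       Bucket="example-app-bucket", Policy=policy_json)
```

Any upload that omits the `x-amz-server-side-encryption: aws:kms` header is rejected, which is why the invalid `x-amz-meta-side-encryption` key in option E enforces nothing.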
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
Amazon GuardDuty Threat Detection
AWS Certificate Manager and ELB Integration
Amazon S3 Encryption Best Practices
A security team manages a company’s AWS Key Management Service (AWS KMS) customer managed keys. Only members of the security team can administer the KMS keys. The company’s application team has a software process that needs temporary access to the keys occasionally. The security team needs to provide the application team’s software process with access to the keys.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Export the KMS key material to an on-premises hardware security module (HSM). Give the application team access to the key material.
- B . Edit the key policy that grants the security team access to the KMS keys by adding the application team as principals. Revert this change when the application team no longer needs access.
- C . Create a key grant to allow the application team to use the KMS keys. Revoke the grant when the application team no longer needs access.
- D . Create a new KMS key by generating key material on premises. Import the key material to AWS KMS whenever the application team needs access. Grant the application team permissions to use the key.
C
Explanation:
AWS KMS key grants are specifically designed to provide temporary, granular permissions to use customer managed keys without modifying key policies. According to the AWS Certified Security – Specialty Study Guide, grants are the preferred mechanism for delegating key usage permissions to AWS principals for short-term or programmatic access scenarios. Grants allow permissions such as Encrypt, Decrypt, or GenerateDataKey and can be created and revoked dynamically.
Using a key grant avoids the operational risk and overhead of editing key policies, which are long-term control mechanisms and should remain stable. AWS documentation emphasizes that frequent key policy changes increase the risk of misconfiguration and accidental privilege escalation. Grants can be revoked immediately when access is no longer required, ensuring strong adherence to the principle of least privilege.
Options A and D violate AWS security best practices because AWS KMS does not allow direct export of key material unless the key was explicitly created as an importable key, and exporting key material increases exposure risk. Option B requires manual policy changes and rollback, which introduces operational overhead and audit complexity.
AWS recommends key grants as the most efficient and secure way to provide temporary access to KMS keys for applications.
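The grant lifecycle can be sketched as a request shape; the key ARN and role ARN are hypothetical.

```python
# Request shape for a temporary usage grant (ARNs hypothetical). Only usage
# operations are delegated; key administration stays with the security team.
GRANT_REQUEST = {
    "KeyId": "arn:aws:kms:eu-west-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    "GranteePrincipal": "arn:aws:iam::111122223333:role/app-team-process-role",
    "Operations": ["Encrypt", "Decrypt", "GenerateDataKey"],
}

# With boto3 (not executed here):
#   kms = boto3.client("kms")
#   grant = kms.create_grant(**GRANT_REQUEST)
#   ... application team's process uses the key ...
#   kms.revoke_grant(KeyId=GRANT_REQUEST["KeyId"], GrantId=grant["GrantId"])
```

Creating and later revoking the grant leaves the key policy untouched, which is exactly the low-overhead property the question asks for.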
Referenced AWS Specialty Documents:
AWS Certified Security – Specialty Official Study Guide
AWS KMS Key Policies and Grants Documentation
AWS KMS Best Practices
