Practice Free SCS-C03 Exam Online Questions
A Security Engineer is asked to update an AWS CloudTrail log file prefix for an existing trail. When attempting to save the change in the CloudTrail console, the Security Engineer receives the following error message: `There is a problem with the bucket policy.`
What will enable the Security Engineer to save the change?
- A . Create a new trail with the updated log file prefix, and then delete the original trail. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
- B . Update the existing bucket policy in the Amazon S3 console to allow the Security Engineer’s Principal to perform PutBucketPolicy, and then update the log file prefix in the CloudTrail console.
- C . Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
- D . Update the existing bucket policy in the Amazon S3 console to allow the Security Engineer’s Principal to perform GetBucketPolicy, and then update the log file prefix in the CloudTrail console.
C
Explanation:
The correct answer is C. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
According to the AWS documentation [1], a bucket policy is a resource-based policy that you can use to grant access permissions to your Amazon S3 bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner.
When you create a trail in CloudTrail, you can specify an existing S3 bucket or create a new one to store your log files. CloudTrail automatically creates a bucket policy for your S3 bucket that grants CloudTrail write-only access to deliver log files to your bucket. The bucket policy also grants read-only access to AWS services that you can use to view and analyze your log data, such as Amazon Athena, Amazon CloudWatch Logs, and Amazon QuickSight.
If you want to update the log file prefix for an existing trail, you must also update the existing bucket policy in the S3 console with the new log file prefix. The log file prefix is part of the resource ARN that identifies the objects in your bucket that CloudTrail can access. If you don’t update the bucket policy with the new log file prefix, CloudTrail will not be able to deliver log files to your bucket, and you will receive an error message when you try to save the change in the CloudTrail console.
The other options are incorrect because:
A) Creating a new trail with the updated log file prefix, and then deleting the original trail is not necessary and may cause data loss or inconsistency. You can simply update the existing trail and its associated bucket policy with the new log file prefix.
B) Updating the existing bucket policy in the S3 console to allow the Security Engineer’s Principal to perform PutBucketPolicy is not relevant to this issue. The PutBucketPolicy action allows you to create or replace a policy on a bucket, but it does not affect CloudTrail’s ability to deliver log files to your bucket. You still need to update the existing bucket policy with the new log file prefix.
D) Updating the existing bucket policy in the S3 console to allow the Security Engineer’s Principal to perform GetBucketPolicy is not relevant to this issue. The GetBucketPolicy action allows you to retrieve a policy on a bucket, but it does not affect CloudTrail’s ability to deliver log files to your bucket. You still need to update the existing bucket policy with the new log file prefix.
1: Using bucket policies – Amazon Simple Storage Service
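The fix in answer C comes down to the Resource ARN in the bucket policy statement that grants CloudTrail `s3:PutObject`: the log file prefix is embedded in that ARN, so changing the prefix in CloudTrail without changing the policy breaks delivery. A minimal sketch in Python, where the bucket name, account ID, and prefix are all hypothetical values for illustration:

```python
import json

# Hypothetical values for illustration only
bucket = "example-trail-bucket"
account_id = "111122223333"
new_prefix = "new-prefix"

# The statement that grants CloudTrail write access. The Resource ARN
# embeds the trail's log file prefix, so it must be updated whenever
# the prefix changes in the CloudTrail console.
put_statement = {
    "Sid": "AWSCloudTrailWrite",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "s3:PutObject",
    "Resource": f"arn:aws:s3:::{bucket}/{new_prefix}/AWSLogs/{account_id}/*",
    "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
}

policy = {"Version": "2012-10-17", "Statement": [put_statement]}
print(json.dumps(policy, indent=2))
```

With the old prefix still in the policy, CloudTrail's `PutObject` calls for the new prefix fall outside the allowed Resource and are denied, which is why the console reports a bucket policy problem.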
A System Administrator is unable to start an Amazon EC2 instance in the eu-west-1 Region using an IAM role. The same System Administrator is able to start an EC2 instance in the eu-west-2 and eu-west-3 Regions. The SystemAdministrator access policy attached to the System Administrator IAM role allows unconditional access to all AWS services and resources within the account.
Which configuration caused this issue?
A) An SCP is attached to the account with the following permission statement:

B) A permission boundary policy is attached to the System Administrator role with the following permission statement:

C) A permission boundary is attached to the System Administrator role with the following permission statement:

D) An SCP is attached to the account with the following statement:

- A . Option A
- B . Option B
- C . Option C
- D . Option D
A company purchased a subscription to a third-party cloud security scanning solution that integrates with AWS Security Hub. A security engineer needs to implement a solution that will remediate the findings from the third-party scanning solution automatically.
Which solution will meet this requirement?
- A . Set up an Amazon EventBridge rule that reacts to new Security Hub findings. Configure an AWS Lambda function as the target for the rule to remediate the findings.
- B . Set up a custom action in Security Hub. Configure the custom action to call AWS Systems Manager Automation runbooks to remediate the findings.
- C . Set up a custom action in Security Hub. Configure an AWS Lambda function as the target for the custom action to remediate the findings.
- D . Set up AWS Config rules to use AWS Systems Manager Automation runbooks to remediate the findings.
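For context on how these options wire together: whether findings are matched automatically (option A) or routed through a Security Hub custom action (option C), the glue is an EventBridge event pattern. A sketch of a pattern that matches only findings sent through a custom action, where the custom action ARN is a hypothetical example:

```python
import json

# Hypothetical custom action ARN for illustration
custom_action_arn = "arn:aws:securityhub:us-east-1:111122223333:action/custom/RemediateFinding"

# EventBridge event pattern that matches only findings forwarded
# via this Security Hub custom action (as in option C)
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Custom Action"],
    "resources": [custom_action_arn],
}
print(json.dumps(event_pattern, indent=2))
```

For option A, the `detail-type` would instead be `"Security Hub Findings - Imported"`, which fires on every new finding rather than on analyst-initiated actions.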
A company hosts a public website on an Amazon EC2 instance. HTTPS traffic must be able to access the website. The company uses SSH for management of the web server.
The website is on the subnet 10.0.1.0/24. The management subnet is 192.168.100.0/24. A security engineer must create a security group for the EC2 instance.
Which combination of steps should the security engineer take to meet these requirements in the MOST secure manner? (Select TWO.)
- A . Allow port 22 from source 0.0.0.0/0.
- B . Allow port 443 from source 0.0.0.0/0.
- C . Allow port 22 from 192.168.100.0/24.
- D . Allow port 22 from 10.0.1.0/24.
- E . Allow port 443 from 10.0.1.0/24.
BC
Explanation:
The correct answers are B and C.
B) Allow port 443 from source 0.0.0.0/0.
This is correct because port 443 is used for HTTPS traffic, which must be able to access the website from any source IP address.
C) Allow port 22 from 192.168.100.0/24.
This is correct because port 22 is used for SSH, which is the management protocol for the web server. The management subnet is 192.168.100.0/24, so only this subnet should be allowed to access port 22.
A) Allow port 22 from source 0.0.0.0/0.
This is incorrect because it would allow anyone to access port 22, which is a security risk. SSH should be restricted to the management subnet only.
D) Allow port 22 from 10.0.1.0/24.
This is incorrect because it would allow the website subnet to access port 22, which is unnecessary and a security risk. SSH should be restricted to the management subnet only.
E) Allow port 443 from 10.0.1.0/24.
This is incorrect because it would limit the HTTPS traffic to the website subnet only, which defeats the purpose of having a public website.
Reference: Security groups
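The two permitted rules (B and C) can be sketched as the `IpPermissions` payload that EC2's `authorize_security_group_ingress` call accepts; the actual API call is omitted so this stays a self-contained illustration:

```python
# Ingress rules matching answers B and C
ip_permissions = [
    {   # B: HTTPS from anywhere, since the website is public
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Public HTTPS"}],
    },
    {   # C: SSH only from the management subnet
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "192.168.100.0/24", "Description": "Management SSH"}],
    },
]
# e.g. ec2.authorize_security_group_ingress(GroupId=..., IpPermissions=ip_permissions)
```

Note that security groups are deny-by-default for inbound traffic, so these two allow rules are the entire configuration; nothing needs to be written for the options that were rejected.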
A company’s policy requires that all API keys be encrypted and stored separately from source code in a centralized security account. This security account is managed by the company’s security team. However, an audit revealed that an API key is stored with the source code of an AWS Lambda function in an AWS CodeCommit repository in the DevOps account.
How should the security team securely store the API key?
- A . Create a CodeCommit repository in the security account using AWS Key Management Service (AWS KMS) for encryption. Require the development team to migrate the Lambda source code to this repository.
- B . Store the API key in an Amazon S3 bucket in the security account using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to encrypt the key. Create a presigned URL for the S3 key, and specify the URL in a Lambda environment variable in the AWS CloudFormation template. Update the Lambda function code to retrieve the key using the URL and call the API.
- C . Create a secret in AWS Secrets Manager in the security account to store the API key using AWS Key Management Service (AWS KMS) for encryption. Grant access to the IAM role used by the Lambda function so that the function can retrieve the key from Secrets Manager and call the API.
- D . Create an encrypted environment variable for the Lambda function to store the API key using AWS Key Management Service (AWS KMS) for encryption. Grant access to the IAM role used by the Lambda function so that the function can decrypt the key at runtime.
A company is developing a highly resilient application to be hosted on multiple Amazon EC2 instances. The application will store highly sensitive user data in Amazon RDS tables.
The application must:
• Include migration to a different AWS Region in the application disaster recovery plan.
• Provide a full audit trail of encryption key administration events.
• Allow only company administrators to administer keys.
• Protect data at rest using application-layer encryption.
A Security Engineer is evaluating options for encryption key management.
Why should the Security Engineer choose AWS CloudHSM over AWS KMS for encryption key management in this situation?
- A . The key administration event logging generated by CloudHSM is significantly more extensive than that of AWS KMS.
- B . CloudHSM ensures that only company support staff can administer encryption keys, whereas AWS KMS allows AWS staff to administer keys.
- C . The ciphertext produced by CloudHSM provides more robust protection against brute-force decryption attacks than the ciphertext produced by AWS KMS.
- D . CloudHSM provides the ability to copy keys to a different Region, whereas AWS KMS does not.
A company plans to create individual child accounts within an existing organization in AWS Organizations for each of its DevOps teams. AWS CloudTrail has been enabled and configured on all accounts to write audit logs to an Amazon S3 bucket in a centralized AWS account. A security engineer needs to ensure that DevOps team members are unable to modify or disable this configuration.
How can the security engineer meet these requirements?
- A . Create an IAM policy that prohibits changes to the specific CloudTrail trail and apply the policy to the AWS account root user.
- B . Create an S3 bucket policy in the specified destination account for the CloudTrail trail that prohibits configuration changes from the AWS account root user in the source account.
- C . Create an SCP that prohibits changes to the specific CloudTrail trail and apply the SCP to the appropriate organizational unit or account in Organizations.
- D . Create an IAM policy that prohibits changes to the specific CloudTrail trail and apply the policy to a new IAM group. Have team members use individual IAM users that are members of the new IAM group.
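An SCP of the kind option C describes would deny the CloudTrail mutation APIs for the organization's trail, which binds even account administrators in the child accounts. A sketch, with the trail ARN a hypothetical example:

```python
import json

# Hypothetical trail ARN for illustration
trail_arn = "arn:aws:cloudtrail:us-east-1:111122223333:trail/org-trail"

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCloudTrailChanges",
        "Effect": "Deny",
        # Block the API calls that could disable or reconfigure the trail
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
        ],
        "Resource": trail_arn,
    }],
}
print(json.dumps(scp, indent=2))
```

Unlike an IAM policy (options A and D), an SCP applies to all principals in the targeted accounts, including the root user, so it cannot be bypassed from inside a DevOps account.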
A company needs to create a centralized solution to analyze log files. The company uses an organization in AWS Organizations to manage its AWS accounts.
The solution must aggregate and normalize events from the following sources:
• The entire organization in Organizations
• All AWS Marketplace offerings that run in the company’s AWS accounts
• The company’s on-premises systems
Which solution will meet these requirements?
- A . Configure a centralized Amazon S3 bucket for the logs. Enable VPC Flow Logs, AWS CloudTrail, and Amazon Route 53 logs in all accounts. Configure all accounts to use the centralized S3 bucket. Configure AWS Glue crawlers to parse the log files. Use Amazon Athena to query the log data.
- B . Configure log streams in Amazon CloudWatch Logs for the sources that need monitoring. Create log subscription filters for each log stream. Forward the messages to Amazon OpenSearch Service for analysis.
- C . Set up a delegated Amazon Security Lake administrator account in Organizations. Enable and configure Security Lake for the organization. Add the accounts that need monitoring. Use Amazon Athena to query the log data.
- D . Apply an SCP to configure all member accounts and services to deliver log files to a centralized Amazon S3 bucket. Use Amazon OpenSearch Service to query the centralized S3 bucket for log entries.
C
Explanation:
Amazon Security Lake, when configured with a delegated administrator account in AWS Organizations, provides a centralized solution for aggregating, organizing, and prioritizing security data from multiple sources including AWS services, AWS Marketplace solutions, and on-premises systems. By enabling Security Lake for the organization and adding the necessary AWS accounts, the solution centralizes the collection and analysis of log data. This setup leverages the organization’s structure to streamline log aggregation and normalization, making it an efficient solution for the specified requirements. The use of Amazon Athena for querying the log data further enhances the ability to analyze and respond to security findings across the organization.
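Once Security Lake is enabled, its normalized (OCSF-format) tables are queried from Athena like any Glue-cataloged table. A sketch of the query parameters for `start_query_execution`; the database and table names below are hypothetical, since the real names depend on the Region and sources enabled during Security Lake setup:

```python
# Hypothetical Security Lake database/table names for illustration
query = """
SELECT time, activity_name, actor.user.name
FROM amazon_security_lake_glue_db_us_east_1.cloud_trail_mgmt
WHERE eventday >= '20240101'
LIMIT 10
""".strip()

# Parameters as they would be passed to athena.start_query_execution(**params);
# the results bucket is an assumed example.
start_query_params = {
    "QueryString": query,
    "WorkGroup": "primary",
    "ResultConfiguration": {"OutputLocation": "s3://example-athena-results/"},
}
print(start_query_params["QueryString"])
```

Because Security Lake normalizes every source into the same OCSF schema, one query shape covers AWS service logs, Marketplace products, and on-premises feeds alike, which is what makes option C satisfy all three sources.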
A company has AWS accounts in an organization in AWS Organizations. The company needs to install a corporate software package on all Amazon EC2 instances for all the accounts in the organization.
A central account provides base AMIs for the EC2 instances. The company uses AWS Systems Manager for software inventory and patching operations.
A security engineer must implement a solution that detects EC2 instances that do not have the required software. The solution also must automatically install the software if the software is not present.
Which solution will meet these requirements?
- A . Provide new AMIs that have the required software pre-installed. Apply a tag to the AMIs to indicate that the AMIs have the required software. Configure an SCP that allows new EC2 instances to be launched only if the instances have the tagged AMIs. Tag all existing EC2 instances.
- B . Configure a custom patch baseline in Systems Manager Patch Manager. Add the package name for the required software to the approved packages list. Associate the new patch baseline with all EC2 instances. Set up a maintenance window for software deployment.
- C . Centrally enable AWS Config. Set up the ec2-managedinstance-applications-required AWS Config rule for all accounts. Create an Amazon EventBridge rule that reacts to AWS Config events. Configure the EventBridge rule to invoke an AWS Lambda function that uses Systems Manager Run Command to install the required software.
- D . Create a new Systems Manager Distributor package for the required software. Specify the download location. Select all EC2 instances in the different accounts. Install the software by using Systems Manager Run Command.
C
Explanation:
Utilizing AWS Config with the managed AWS Config rule ec2-managedinstance-applications-required enables detection of EC2 instances lacking the required software across all accounts in an organization. By creating an Amazon EventBridge rule that triggers on AWS Config events, and configuring it to invoke an AWS Lambda function, automated actions can be taken to ensure compliance. The Lambda function can leverage AWS Systems Manager Run Command to install the necessary software on non-compliant instances. This approach ensures continuous compliance and automated remediation, aligning with best practices for cloud security and management.
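The Lambda function in option C would parse the Config compliance-change event and build a Run Command invocation. A minimal sketch, with the SSM call itself omitted; the document name, install command, and package name are assumptions for illustration:

```python
def build_remediation(event):
    """From an AWS Config compliance-change event, build the parameters for
    a Systems Manager send_command call that installs the missing software."""
    detail = event["detail"]
    if detail["newEvaluationResult"]["complianceType"] != "NON_COMPLIANT":
        return None  # nothing to do for compliant instances
    return {
        "InstanceIds": [detail["resourceId"]],
        "DocumentName": "AWS-RunShellScript",  # assumed install mechanism
        "Parameters": {"commands": ["yum install -y corporate-package"]},  # hypothetical package
    }

# Offline demonstration with a minimal fake event; a real handler would call
# ssm.send_command(**params) when params is not None.
fake_event = {"detail": {"resourceId": "i-0123456789abcdef0",
                         "newEvaluationResult": {"complianceType": "NON_COMPLIANT"}}}
params = build_remediation(fake_event)
print(params["InstanceIds"])
```

For cross-account remediation, the Lambda function would additionally need to assume a role in each member account before issuing the Run Command call.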
A company has a group of Amazon EC2 instances in a single private subnet of a VPC with no internet gateway attached. A security engineer has installed the Amazon CloudWatch agent on all instances in that subnet to capture logs from a specific application. To ensure that the logs flow securely, the company’s networking team has created VPC endpoints for CloudWatch monitoring and CloudWatch logs. The networking team has attached the endpoints to the VPC.
The application is generating logs. However, when the security engineer queries CloudWatch, the logs do not appear.
Which combination of steps should the security engineer take to troubleshoot this issue? (Choose three.)
- A . Ensure that the EC2 instance profile that is attached to the EC2 instances has permissions to create log streams and write logs.
- B . Create a metric filter on the logs so that they can be viewed in the AWS Management Console.
- C . Check the CloudWatch agent configuration file on each EC2 instance to make sure that the CloudWatch agent is collecting the proper log files.
- D . Check the VPC endpoint policies of both VPC endpoints to ensure that the EC2 instances have permissions to use them.
- E . Create a NAT gateway in the subnet so that the EC2 instances can communicate with CloudWatch.
- F . Ensure that the security groups allow all the EC2 instances to communicate with each other to aggregate logs before sending.
ACD
Explanation:
The possible steps to troubleshoot this issue are:
A) Ensure that the EC2 instance profile that is attached to the EC2 instances has permissions to create log streams and write logs. This is a necessary step because the CloudWatch agent uses the credentials from the instance profile to communicate with CloudWatch [1].
C) Check the CloudWatch agent configuration file on each EC2 instance to make sure that the CloudWatch agent is collecting the proper log files. This is a necessary step because the CloudWatch agent needs to know which log files to monitor and send to CloudWatch [2].
D) Check the VPC endpoint policies of both VPC endpoints to ensure that the EC2 instances have permissions to use them. This is a necessary step because the VPC endpoint policies control which principals can access the AWS services through the endpoints [3].
The other options are incorrect because:
B) Creating a metric filter on the logs is not a troubleshooting step, but a way to extract metric data from the logs. Metric filters do not affect the visibility of the logs in the AWS Management Console.
E) Creating a NAT gateway in the subnet is not a solution, because the EC2 instances do not need internet access to communicate with CloudWatch through the VPC endpoints. A NAT gateway would also incur additional costs.
F) Ensuring that the security groups allow all the EC2 instances to communicate with each other is not a necessary step, because the CloudWatch agent does not require log aggregation before sending. Each EC2 instance can send its own logs independently to CloudWatch.
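The instance-profile permissions from step A can be sketched as an IAM policy document; this is the minimal action set the CloudWatch agent needs to deliver logs, and in practice the Resource would usually be scoped to specific log groups rather than the wildcard shown:

```python
import json

# Minimal permissions for the CloudWatch agent to create streams and
# write logs (step A). Scope Resource more tightly in production.
cw_logs_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogStreams",
        ],
        "Resource": "arn:aws:logs:*:*:*",
    }],
}
print(json.dumps(cw_logs_policy, indent=2))
```

If the VPC endpoint policies (step D) deny these same actions, delivery fails even with a correct instance profile, which is why both must be checked.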
Reference:
1: IAM Roles for Amazon EC2
2: CloudWatch Agent Configuration File: Logs Section
3: Using Amazon VPC Endpoints
