Practice Free SCS-C02 Exam Online Questions
A company needs to retain data that is stored in Amazon CloudWatch Logs log groups. The company must retain this data for 90 days. The company must receive a notification in AWS Security Hub when log group retention is not compliant with this requirement.
Which solution will provide the appropriate notification?
- A . Create a Security Hub custom action to assess the log group retention period.
- B . Create a data protection policy in CloudWatch Logs to assess the log group retention period.
- C . Create a Security Hub automation rule. Configure the automation rule to assess the log group retention period.
- D . Use the AWS Config managed rule that assesses the log group retention period. Ensure that AWS Config integration is enabled in Security Hub.
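As a sketch of the managed-rule approach in option D, the following builds the request payload for the AWS Config managed rule that checks CloudWatch log group retention. The rule name, source identifier, and MinRetentionTime parameter follow the AWS Config managed rules catalog; no API call is made here.

```python
import json

def build_retention_rule(min_retention_days: int) -> dict:
    """Build a put_config_rule payload that flags log groups whose
    retention is shorter than min_retention_days."""
    return {
        "ConfigRule": {
            "ConfigRuleName": "cw-loggroup-retention-period-check",
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "CW_LOGGROUP_RETENTION_PERIOD_CHECK",
            },
            # Managed-rule parameters are passed as a JSON string.
            "InputParameters": json.dumps({"MinRetentionTime": min_retention_days}),
        }
    }

rule = build_retention_rule(90)
print(rule["ConfigRule"]["ConfigRuleName"])
```

With Security Hub's AWS Config integration enabled, a NON_COMPLIANT evaluation from this rule surfaces as a Security Hub finding.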
A developer operations team uses AWS Identity and Access Management (IAM) to manage user permissions. The team created an Amazon EC2 instance profile role that uses the AWS managed ReadOnlyAccess policy. When an application that is running on Amazon EC2 tries to read a file from an encrypted Amazon S3 bucket, the application receives an Access Denied error.
The team administrator has verified that the S3 bucket policy allows everyone in the account to access the S3 bucket. There is no object ACL that is attached to the file.
What should the administrator do to fix the IAM access issue?
- A . Edit the ReadOnlyAccess policy to add kms:Decrypt actions.
- B . Add the EC2 IAM role as the authorized Principal to the S3 bucket policy.
- C . Attach an inline policy with kms:Decrypt permissions to the IAM role.
- D . Attach an inline policy with s3:* permissions to the IAM role.
A company uses AWS Organizations to manage a multi-account AWS environment in a single AWS Region. The organization’s management account is named management-01. The company has turned on AWS Config in all accounts in the organization. The company has designated an account named security-01 as the delegated administrator for AWS Config.
All accounts report the compliance status of each account’s rules to the AWS Config delegated administrator account by using an AWS Config aggregator. Each account administrator can configure and manage the account’s own AWS Config rules to handle each account’s unique compliance requirements.
A security engineer needs to implement a solution to automatically deploy a set of 10 AWS Config rules to all existing and future AWS accounts in the organization. The solution must turn on AWS Config automatically during account creation.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create an AWS CloudFormation template that contains the 10 required AWS Config rules. Deploy the template by using CloudFormation StackSets in the security-01 account.
- B . Create a conformance pack that contains the 10 required AWS Config rules. Deploy the conformance pack from the security-01 account.
- C . Create a conformance pack that contains the 10 required AWS Config rules. Deploy the conformance pack from the management-01 account.
- D . Create an AWS CloudFormation template that will activate AWS Config. Deploy the template by using CloudFormation StackSets in the security-01 account.
- E . Create an AWS CloudFormation template that will activate AWS Config. Deploy the template by using CloudFormation StackSets in the management-01 account.
A company has an encrypted Amazon Aurora DB cluster in the us-east-1 Region. The DB cluster is encrypted with an AWS Key Management Service (AWS KMS) customer managed key. To meet compliance requirements, the company needs to copy a DB snapshot to the us-west-1 Region. However, when the company tries to copy the snapshot to us-west-1, the company cannot access the key that was used to encrypt the original database.
What should the company do to set up the snapshot in us-west-1 with proper encryption?
- A . Use AWS Secrets Manager to store the customer managed key in us-west-1 as a secret. Use this secret to encrypt the snapshot in us-west-1.
- B . Create a new customer managed key in us-west-1. Use this new key to encrypt the snapshot in us-west-1.
- C . Create an IAM policy that allows access to the customer managed key in us-east-1. Specify "arn:aws:kms:us-west-1:*" as the principal.
- D . Create an IAM policy that allows access to the customer managed key in us-east-1. Specify "arn:aws:rds:us-west-1:*" as the principal.
B
Explanation:
"If you copy an encrypted snapshot across Regions, you must specify a KMS key valid in the destination AWS Region. It can be a Region-specific KMS key, or a multi-Region key." https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-copy-snapshot.html#aurora-copy-sna
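The documented behavior can be sketched as the parameters for the RDS CopyDBClusterSnapshot call: the KmsKeyId must reference a key that is valid in the destination Region. The snapshot identifiers and key ARN below are placeholders, and the boto3 call itself is shown only as a comment.

```python
def build_copy_snapshot_params(source_snapshot_arn: str,
                               target_identifier: str,
                               dest_region_kms_key_arn: str) -> dict:
    """Build CopyDBClusterSnapshot parameters for a cross-Region copy."""
    return {
        "SourceDBClusterSnapshotIdentifier": source_snapshot_arn,
        "TargetDBClusterSnapshotIdentifier": target_identifier,
        # Must be a key in the destination Region (us-west-1),
        # not the us-east-1 key that encrypted the source cluster.
        "KmsKeyId": dest_region_kms_key_arn,
        "SourceRegion": "us-east-1",
    }

params = build_copy_snapshot_params(
    "arn:aws:rds:us-east-1:111122223333:cluster-snapshot:prod-snap",
    "prod-snap-uswest1",
    "arn:aws:kms:us-west-1:111122223333:key/example-key-id",
)
# boto3.client("rds", region_name="us-west-1").copy_db_cluster_snapshot(**params)
print(params["KmsKeyId"].split(":")[3])  # → us-west-1
```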
A company uses Amazon EC2 instances to host frontend services behind an Application Load Balancer. Amazon Elastic Block Store (Amazon EBS) volumes are attached to the EC2 instances. The company uses Amazon S3 buckets to store large files for images and music.
The company has implemented a security architecture on AWS to prevent, identify, and isolate potential ransomware attacks. The company now wants to further reduce risk.
A security engineer must develop a disaster recovery solution that can recover to normal operations if an attacker bypasses preventive and detective controls. The solution must meet an RPO of 1 hour.
Which solution will meet these requirements?
- A . Use AWS Backup to create backups of the EC2 instances and S3 buckets every hour. Create AWS CloudFormation templates that replicate existing architecture components. Use AWS CodeCommit to store the CloudFormation templates alongside application configuration code.
- B . Use AWS Backup to create backups of the EBS volumes and S3 objects every day. Use Amazon Security Lake to create a centralized data lake for AWS CloudTrail logs and VPC flow logs. Use the logs for automated response.
- C . Use Amazon Security Lake to create a centralized data lake for AWS CloudTrail logs and VPC flow logs. Use the logs for automated response. Enable AWS Security Hub to establish a single location for recovery procedures. Create AWS CloudFormation templates that replicate existing architecture components. Use AWS CodeCommit to store the CloudFormation templates alongside application configuration code.
- D . Create EBS snapshots every 4 hours. Enable Amazon GuardDuty Malware Protection. Create automation to immediately restore the most recent snapshot for any EC2 instances that produce an Execution:EC2/MaliciousFile finding in GuardDuty.
A
Explanation:
The correct answer is A because it meets the RPO of 1 hour by creating backups of the EC2 instances and S3 buckets every hour. It also uses AWS CloudFormation templates to replicate the existing architecture components and AWS CodeCommit to store the templates and the application configuration code. This way, the security engineer can quickly restore the environment in case of a ransomware attack.
The other options are incorrect because they do not meet the RPO of 1 hour or they do not provide a complete disaster recovery solution. Option B only creates backups of the EBS volumes and S3 objects every day, which is not frequent enough to meet the RPO. Option C does not create any backups of the EC2 instances or the S3 buckets, which are essential for the frontend services. Option D only creates EBS snapshots every 4 hours, which is also not frequent enough to meet the RPO. Additionally, option D relies on Amazon GuardDuty to detect and respond to ransomware attacks, which may not be effective if the attacker bypasses the preventive and detective controls.
Reference: AWS Backup, AWS CloudFormation, AWS CodeCommit
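As a sketch of the hourly schedule in option A, the following builds an AWS Backup plan definition in the shape that create_backup_plan expects. The plan name, vault name, and lifecycle are placeholders; no API call is made.

```python
def build_hourly_backup_plan(vault_name: str) -> dict:
    """Build a backup plan whose rule runs at the top of every hour,
    matching a 1-hour RPO."""
    return {
        "BackupPlanName": "ransomware-dr-hourly",
        "Rules": [
            {
                "RuleName": "hourly-rpo-1h",
                "TargetBackupVaultName": vault_name,
                # cron(0 * * * ? *) fires at minute 0 of every hour.
                "ScheduleExpression": "cron(0 * * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }

plan = build_hourly_backup_plan("dr-vault")
print(plan["Rules"][0]["ScheduleExpression"])
```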
A company is hosting a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The application has become the target of a DoS attack. Application logging shows that requests are coming from a small number of client IP addresses, but the addresses change regularly.
The company needs to block the malicious traffic with a solution that requires the least amount of ongoing effort.
Which solution meets these requirements?
- A . Create an AWS WAF rate-based rule, and attach it to the ALB.
- B . Update the security group that is attached to the ALB to block the attacking IP addresses.
- C . Update the ALB subnet’s network ACL to block the attacking client IP addresses.
- D . Create an AWS WAF rate-based rule, and attach it to the security group of the EC2 instances.
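A rate-based rule such as the one option A describes could be sketched as an AWS WAFv2 rule definition. The rule name and the 5-minute request limit below are placeholders; the structure follows the shape of WAFv2's CreateWebACL input.

```python
def build_rate_based_rule(limit: int) -> dict:
    """Build a WAFv2 rule that blocks any client IP exceeding `limit`
    requests per 5-minute window."""
    return {
        "Name": "block-dos-clients",
        "Priority": 0,
        "Statement": {
            "RateBasedStatement": {
                # WAF tracks request counts per source IP and blocks
                # addresses above the limit automatically, so changing
                # attacker IPs need no manual updates.
                "Limit": limit,
                "AggregateKeyType": "IP",
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "block-dos-clients",
        },
    }

waf_rule = build_rate_based_rule(1000)
print(waf_rule["Statement"]["RateBasedStatement"]["AggregateKeyType"])
```

Because the rule aggregates by source IP, it keeps blocking the attack as the client addresses rotate, which is why it requires the least ongoing effort compared with hand-maintained security group or network ACL entries.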
A company is evaluating the use of AWS Systems Manager Session Manager to gain access to the company’s Amazon EC2 instances. However, until the company implements the change, the company must protect the key file for the EC2 instances from read and write operations by any other users.
When a security administrator tries to connect to a critical EC2 Linux instance during an emergency, the security administrator receives the following error: "Error: Unprotected private key file. Permissions for 'ssh/my_private_key.pem' are too open."
Which command should the security administrator use to modify the private key file permissions to resolve this error?
- A . chmod 0040 ssh/my_private_key.pem
- B . chmod 0400 ssh/my_private_key.pem
- C . chmod 0004 ssh/my_private_key.pem
- D . chmod 0777 ssh/my_private_key.pem
B
Explanation:
The error message indicates that the private key file permissions are too open, meaning that other users can read or write to the file. This is a security risk, as the private key should be accessible only by the owner of the file. To fix this error, the security administrator should use the chmod command to change the permissions of the private key file to 0400, which means that only the owner can read the file and no one else can read or write to it.
The chmod command takes a numeric argument that represents the permissions for the owner, group, and others in octal notation. Each digit corresponds to a set of permissions: read (4), write (2), and execute (1). The digits are added together to get the final permissions for each category. For example, 0400 means that the owner has read permission (4) and no other permissions (0), and the group and others have no permissions at all (0).
The other options are incorrect because they grant the read permission to the group (A) or to others (C) instead of the owner, or they make the file readable, writable, and executable by everyone (D), which is exactly the condition SSH rejects.
References:
https://superuser.com/questions/215504/permissions-on-private-key-in-ssh-folder
https://www.baeldung.com/linux/ssh-key-permissions
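The permission model in the explanation can be demonstrated directly with a throwaway file instead of a real SSH key:

```python
import os
import stat
import tempfile

# Create a temporary file standing in for ssh/my_private_key.pem.
fd, path = tempfile.mkstemp()
os.close(fd)

# 0o400: owner gets read (4); group and others get nothing (0).
os.chmod(path, 0o400)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # → 0o400

os.remove(path)
```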
A company is using AWS Organizations to manage multiple accounts. The company needs to allow an IAM user to use a role to access resources that are in another organization’s AWS account.
Which combination of steps must the company perform to meet this requirement? (Select TWO.)
- A . Create an identity policy that allows the sts:AssumeRole action in the AWS account that contains the resources. Attach the identity policy to the IAM user.
- B . Ensure that the sts:AssumeRole action is allowed by the SCPs of the organization that owns the resources that the IAM user needs to access.
- C . Create a role in the AWS account that contains the resources. Create an entry in the role’s trust policy that allows the IAM user to assume the role. Attach the trust policy to the role.
- D . Establish a trust relationship between the IAM user and the AWS account that contains the resources.
- E . Create a role in the IAM user’s AWS account. Create an identity policy that allows the sts:AssumeRole action. Attach the identity policy to the role.
A, C
Explanation:
Option A: Create an identity policy that allows the sts:AssumeRole action in the AWS account that contains the resources. Attach the identity policy to the IAM user. This will ensure that the IAM user has the necessary permissions to assume roles in the other account.
Option C: Create a role in the AWS account that contains the resources. Create an entry in the role’s trust policy that allows the IAM user to assume the role. Attach the trust policy to the role. This step is necessary to allow the IAM user from the other account to assume the role in this account.
Explanation of other options:
Option B: This option involves Service Control Policies (SCPs), which are used to define the maximum permissions for account members in AWS Organizations. While ensuring the SCPs allow the sts:AssumeRole action might be necessary, it doesn’t directly allow cross-account role assumption.
Option D: This option seems too vague and doesn’t clearly explain how the trust relationship would be established. Trust relationships are generally established via trust policies, as mentioned in option C.
Option E: This option suggests creating a role in the IAM user’s account and attaching a policy allowing sts:AssumeRole to this role. This wouldn’t be effective since the role that needs to be assumed would be in the other AWS account that contains the resources, not in the IAM user’s own account.
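The two policy documents that options A and C describe could look roughly like the following. The account IDs, role name, and user name are placeholders.

```python
import json

USER_ACCOUNT = "111111111111"      # account that owns the IAM user
RESOURCE_ACCOUNT = "222222222222"  # account that owns the resources and role

# Option A: identity policy attached to the IAM user, permitting it to
# assume the role in the resource account.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": f"arn:aws:iam::{RESOURCE_ACCOUNT}:role/cross-account-read",
    }],
}

# Option C: trust policy on the role in the resource account, naming the
# IAM user as a trusted principal.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{USER_ACCOUNT}:user/example-user"},
        "Action": "sts:AssumeRole",
    }],
}

print(json.dumps(identity_policy["Statement"][0]["Action"]))
```

Both halves are required: the identity policy authorizes the user to call sts:AssumeRole, and the trust policy authorizes the role to be assumed by that user.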
A Security Engineer creates an Amazon S3 bucket policy that denies access to all users. A few days later, the Security Engineer adds an additional statement to the bucket policy to allow read-only access to one other employee. Even after updating the policy, the employee still receives an access denied message.
What is the likely cause of this access denial?
- A . The ACL in the bucket needs to be updated
- B . The IAM policy does not allow the user to access the bucket
- C . It takes a few minutes for a bucket policy to take effect
- D . The allow permission is being overridden by the deny
A Security Engineer is asked to update an AWS CloudTrail log file prefix for an existing trail. When attempting to save the change in the CloudTrail console, the Security Engineer receives the following error message: "There is a problem with the bucket policy."
What will enable the Security Engineer to save the change?
- A . Create a new trail with the updated log file prefix, and then delete the original trail. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
- B . Update the existing bucket policy in the Amazon S3 console to allow the Security Engineer’s Principal to perform PutBucketPolicy, and then update the log file prefix in the CloudTrail console.
- C . Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
- D . Update the existing bucket policy in the Amazon S3 console to allow the Security Engineer’s Principal to perform GetBucketPolicy, and then update the log file prefix in the CloudTrail console.
C
Explanation:
The correct answer is C. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
According to the AWS documentation [1], a bucket policy is a resource-based policy that you can use to grant access permissions to your Amazon S3 bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner.
When you create a trail in CloudTrail, you can specify an existing S3 bucket or create a new one to store your log files. CloudTrail automatically creates a bucket policy for your S3 bucket that grants CloudTrail write-only access to deliver log files to your bucket. The bucket policy also grants read-only access to AWS services that you can use to view and analyze your log data, such as Amazon Athena, Amazon CloudWatch Logs, and Amazon QuickSight.
If you want to update the log file prefix for an existing trail, you must also update the existing bucket policy in the S3 console with the new log file prefix. The log file prefix is part of the resource ARN that identifies the objects in your bucket that CloudTrail can access. If you don’t update the bucket policy with the new log file prefix, CloudTrail will not be able to deliver log files to your bucket, and you will receive an error message when you try to save the change in the CloudTrail console.
The other options are incorrect because:
A) Creating a new trail with the updated log file prefix, and then deleting the original trail is not necessary and may cause data loss or inconsistency. You can simply update the existing trail and its associated bucket policy with the new log file prefix.
B) Updating the existing bucket policy in the S3 console to allow the Security Engineer’s Principal to perform PutBucketPolicy is not relevant to this issue. The PutBucketPolicy action allows you to create or replace a policy on a bucket, but it does not affect CloudTrail’s ability to deliver log files to your bucket. You still need to update the existing bucket policy with the new log file prefix.
D) Updating the existing bucket policy in the S3 console to allow the Security Engineer’s Principal to perform GetBucketPolicy is not relevant to this issue. The GetBucketPolicy action allows you to retrieve a policy on a bucket, but it does not affect CloudTrail’s ability to deliver log files to your bucket. You still need to update the existing bucket policy with the new log file prefix.
[1] Using bucket policies – Amazon Simple Storage Service
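The delivery statement whose Resource must carry the log file prefix could be sketched as follows. The bucket name, account ID, and prefix are placeholders; the statement shape follows the bucket policy CloudTrail generates for log delivery.

```python
def build_cloudtrail_write_statement(bucket: str, account_id: str,
                                     prefix: str) -> dict:
    """Build the s3:PutObject statement that lets CloudTrail deliver
    log files under the given prefix."""
    return {
        "Sid": "AWSCloudTrailWrite",
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "s3:PutObject",
        # The prefix is part of the object key path CloudTrail writes to.
        # Changing the trail's prefix without updating this ARN is what
        # triggers the "problem with the bucket policy" error.
        "Resource": f"arn:aws:s3:::{bucket}/{prefix}/AWSLogs/{account_id}/*",
        "Condition": {
            "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        },
    }

stmt = build_cloudtrail_write_statement(
    "example-trail-bucket", "111122223333", "new-prefix")
print(stmt["Resource"])
```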