Practice Free Professional Cloud Security Engineer Exam Online Questions
Your organization's record data exists in Cloud Storage. You must retain all record data for at least
seven years. This policy must be permanent.
What should you do?
- A .
• 1. Identify buckets with record data.
• 2. Apply a retention policy and set it to retain for seven years.
• 3. Monitor the bucket by using log-based alerts to ensure that no modifications to the retention policy occur.
- B .
• 1. Identify buckets with record data.
• 2. Apply a retention policy and set it to retain for seven years.
• 3. Remove any Identity and Access Management (IAM) roles that contain the storage.buckets.update permission.
- C .
• 1. Identify buckets with record data.
• 2. Enable the Bucket Policy Only setting to ensure that data is retained.
• 3. Enable bucket lock.
- D .
• 1. Identify buckets with record data.
• 2. Apply a retention policy and set it to retain for seven years.
• 3. Enable bucket lock.
D
Explanation:
To ensure that your organization’s record data is retained for at least seven years in Cloud Storage, you need to apply a retention policy and enable bucket lock. This prevents the policy from being altered or the data from being deleted before the retention period ends.
Identify Buckets: Determine which Cloud Storage buckets contain the record data that needs to be retained.
Apply Retention Policy:
Go to the Google Cloud Console and navigate to "Cloud Storage".
Select the bucket you identified.
Go to the "Retention" tab and set a retention policy to retain objects for seven years.
Enable Bucket Lock:
Once the retention policy is set, you need to lock the bucket to make the retention policy permanent.
This is done by enabling the bucket lock. Go to the "Retention" tab and click "Lock".
Confirm and Monitor:
Confirm that the bucket lock is applied.
Monitor the bucket using log-based alerts to ensure compliance.
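As a sketch of steps 2 and 3 on the command line, assuming a hypothetical bucket name (records-bucket); the console workflow above achieves the same result:
```
# Step 2: apply a seven-year retention policy (the "y" suffix means years).
gsutil retention set 7y gs://records-bucket

# Step 3: lock the policy. Locking is irreversible: once locked, the retention
# period can be extended but never reduced or removed.
gsutil retention lock gs://records-bucket
```
Note that gsutil asks for interactive confirmation before locking, since the lock is permanent.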
Reference:
Cloud Storage Retention Policy
Cloud Storage Bucket Lock
You are a member of the security team at an organization. Your team has a single GCP project with credit card payment processing systems alongside web applications and data processing systems. You want to reduce the scope of systems subject to PCI audit standards.
What should you do?
- A . Use multi-factor authentication for admin access to the web application.
- B . Use only applications certified compliant with PA-DSS.
- C . Move the cardholder data environment into a separate GCP project.
- D . Use VPN for all connections between your office and cloud environments.
C
Explanation:
To reduce the scope of systems subject to PCI audit standards, segregate the cardholder data environment (CDE) into a separate GCP project. This ensures that only the project containing the CDE will be subject to PCI DSS compliance, reducing the audit scope for other projects.
Create a Separate GCP Project:
Go to the Cloud Console, navigate to IAM & Admin > Manage Resources.
Click "Create Project" and set up a new project for the CDE.
Migrate CDE:
Transfer the systems processing, storing, or transmitting cardholder data to the new project.
Apply PCI DSS Controls:
Implement PCI DSS required controls on the new project.
Use appropriate security measures such as firewalls, access controls, and encryption.
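A minimal sketch of these steps with gcloud, assuming hypothetical project, organization, and group names:
```
# Create a dedicated project for the cardholder data environment (CDE).
gcloud projects create pci-cde-prod --organization=123456789012

# Keep IAM bindings on the CDE project as narrow as possible; only the
# payment-processing team should have access.
gcloud projects add-iam-policy-binding pci-cde-prod \
    --member="group:pci-team@example.com" \
    --role="roles/viewer"
```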
Reference:
Google Cloud and PCI DSS
Creating and Managing Projects
You plan to deploy your cloud infrastructure using a CI/CD cluster hosted on Compute Engine. You want to minimize the risk of its credentials being stolen by a third party.
What should you do?
- A . Create a dedicated Cloud Identity user account for the cluster. Use a strong self-hosted vault solution to store the user’s temporary credentials.
- B . Create a dedicated Cloud Identity user account for the cluster. Enable the constraints/iam.disableServiceAccountCreation organization policy at the project level.
- C . Create a custom service account for the cluster. Enable the constraints/iam.disableServiceAccountKeyCreation organization policy at the project level.
- D . Create a custom service account for the cluster. Enable the constraints/iam.allowServiceAccountCredentialLifetimeExtension organization policy at the project level.
C
Explanation:
Disable service account key creation: you can use the iam.disableServiceAccountKeyCreation boolean constraint to disable the creation of new external service account keys. This allows you to control the use of unmanaged long-term credentials for service accounts. When this constraint is set, user-managed credentials cannot be created for service accounts in projects affected by the constraint.
Reference:
https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#example_policy_boolean_constraint
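A sketch of the equivalent commands, assuming hypothetical project and service account names:
```
# Create a dedicated service account for the CI/CD cluster.
gcloud iam service-accounts create cicd-cluster-sa \
    --project=cicd-project \
    --display-name="CI/CD cluster"

# Enforce the boolean constraint so user-managed (downloadable) keys cannot
# be created in the project.
gcloud resource-manager org-policies enable-enforce \
    iam.disableServiceAccountKeyCreation --project=cicd-project
```
With key creation disabled, the cluster's VMs authenticate through the attached service account's short-lived tokens instead of long-lived key files.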
Your company must follow industry-specific regulations. Therefore, you need to enforce customer-managed encryption keys (CMEK) for all new Cloud Storage resources in the organization called org1.
What command should you execute?
- A .
• organization policy: constraints/gcp.restrictStorageNonCmekServices
• binding at: org1
• policy type: deny
• policy value: storage.googleapis.com
- B .
• organization policy: constraints/gcp.restrictNonCmekServices
• binding at: org1
• policy type: deny
• policy value: storage.googleapis.com
- C .
• organization policy: constraints/gcp.restrictStorageNonCmekServices
• binding at: org1
• policy type: allow
• policy value: all supported services
- D .
• organization policy: constraints/gcp.restrictNonCmekServices
• binding at: org1
• policy type: allow
• policy value: storage.googleapis.com
B
Explanation:
Requirement:
Enforce the use of customer-managed encryption keys (CMEK) for all new Cloud Storage resources
in the organization.
Policy Constraint:
Use the constraints/gcp.restrictNonCmekServices list constraint, which defines the set of services whose new resources must be protected by CMEK. This constraint accepts denied values only; it cannot be set with allowed values.
Policy Type and Value:
Set the policy type to deny so that the listed services reject new resources that are not CMEK-protected.
The policy value should be storage.googleapis.com to target Cloud Storage.
Command:
Applying the organization policy with the binding at org1 ensures that all new Cloud Storage resources under the organization will require CMEK.
Steps:
Step 1: Go to the Google Cloud Console.
Step 2: Navigate to the Organization Policies page for org1.
Step 3: Edit the constraint constraints/gcp.restrictNonCmekServices and add storage.googleapis.com as a denied value.
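One way to apply this binding from the command line, as a sketch using a policy file and a hypothetical organization ID:
```
# Write the policy: deny non-CMEK resource creation for Cloud Storage.
cat > cmek-policy.yaml <<'EOF'
name: organizations/123456789012/policies/gcp.restrictNonCmekServices
spec:
  rules:
  - values:
      deniedValues:
      - storage.googleapis.com
EOF

# Apply the policy at the organization level.
gcloud org-policies set-policy cmek-policy.yaml
```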
Reference:
Organization Policy Constraints
Customer-Managed Encryption Keys (CMEK)
A customer needs to launch a 3-tier internal web application on Google Cloud Platform (GCP). The customer’s internal compliance requirements dictate that end-user access may only be allowed if the traffic seems to originate from a specific known good CIDR. The customer accepts the risk that their application will only have SYN flood DDoS protection. They want to use GCP’s native SYN flood protection.
Which product should be used to meet these requirements?
- A . Cloud Armor
- B . VPC Firewall Rules
- C . Cloud Identity and Access Management
- D . Cloud CDN
B
Explanation:
To ensure end-user access is only allowed if the traffic originates from a specific known good CIDR and to utilize GCP’s native SYN flood protection, you can use the following product:
VPC Firewall Rules: By configuring VPC firewall rules, you can control traffic to and from your
instances based on IP address, protocol, and port. You can set rules to only allow traffic from a specific CIDR block, ensuring that only authorized traffic can reach your application.
Additionally, Google Cloud Platform provides built-in protections against SYN flood attacks, which are a type of DDoS attack. These protections are part of the underlying infrastructure and do not require additional configuration.
Using VPC firewall rules will help you comply with the internal requirement of allowing access only from a specific CIDR and provide the necessary SYN flood DDoS protection.
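A sketch of such a rule, assuming hypothetical network, tag, and CIDR values (203.0.113.0/24 stands in for the known good range):
```
# Allow HTTPS only from the approved CIDR to instances tagged web-tier.
gcloud compute firewall-rules create allow-known-good-cidr \
    --network=internal-app-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=web-tier
```
Because VPC networks deny ingress by default, traffic from any other source is dropped without further rules.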
Reference:
Google Cloud VPC Firewall Rules
Google Cloud DDoS Protection
You are responsible for managing your company’s identities in Google Cloud. Your company enforces 2-Step Verification (2SV) for all users. You need to reset a user’s access, but the user lost their second factor for 2SV. You want to minimize risk.
What should you do?
- A . On the Google Admin console, select the appropriate user account, and generate a backup code to allow the user to sign in. Ask the user to update their second factor.
- B . On the Google Admin console, temporarily disable the 2SV requirements for all users. Ask the user to log in and add their new second factor to their account. Re-enable the 2SV requirement for all users.
- C . On the Google Admin console, select the appropriate user account, and temporarily disable 2SV for this account. Ask the user to update their second factor, and then re-enable 2SV for this account.
- D . On the Google Admin console, use a super administrator account to reset the user account’s credentials. Ask the user to update their credentials after their first login.
A
Explanation:
If a user loses their second factor for 2-Step Verification (2SV), you can help them regain access with minimal risk by generating a backup code.
Generate a Backup Code (A):
In the Google Admin console, navigate to the user’s account settings.
Generate a backup code for the user. This code allows them to sign in despite not having access to their usual second factor.
Instruct the user to log in using the backup code and then update their second factor in their account settings.
This method ensures that only the affected user’s access is temporarily adjusted, minimizing risk while maintaining overall security policies.
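The same operation is also available through the Admin SDK Directory API; a sketch, assuming a hypothetical user email and an admin access token with the admin.directory.user.security scope:
```
# Generate fresh backup verification codes for the user...
curl -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://admin.googleapis.com/admin/directory/v1/users/alex@example.com/verificationCodes/generate"

# ...then list the codes so one can be handed to the user.
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://admin.googleapis.com/admin/directory/v1/users/alex@example.com/verificationCodes"
```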
Reference:
Google Admin console 2-Step Verification documentation
After completing a security vulnerability assessment, you learned that cloud administrators leave Google Cloud CLI sessions open for days. You need to reduce the risk of attackers who might exploit these open sessions by setting these sessions to the minimum duration.
What should you do?
- A . Set the session duration for the Google session control to one hour.
- B . Set the reauthentication frequency for the Google Cloud session control to one hour.
- C . Set the organization policy constraint constraints/iam.allowServiceAccountCredentialLifetimeExtension to one hour.
- D . Set the organization policy constraint constraints/iam.serviceAccountKeyExpiryHours to one hour and inheritFromParent to false.
B
Explanation:
To mitigate the risk posed by long-running Google Cloud CLI sessions, it is essential to enforce a reauthentication frequency. This ensures that users must periodically reauthenticate, reducing the window of opportunity for an attacker to exploit an open session. Setting the reauthentication frequency to one hour forces users to reauthenticate after this period, thereby limiting the duration an attacker can use a compromised session.
Access Google Cloud Console: Log in to your Google Cloud Console using your admin credentials.
Navigate to Security Settings: Go to the "Security" section of the Cloud Console.
Set Session Control: Under the session management settings, locate the "Reauthentication frequency" setting. This controls how often users must reauthenticate.
Configure Reauthentication Frequency: Set the reauthentication frequency to "1 hour". This configuration will force users to reauthenticate every hour, thus limiting the duration of each session.
Save Changes: Confirm and save your changes. This setting will now apply to all users, ensuring that open sessions are limited to a duration of one hour.
Reference:
Google Cloud IAM Documentation
Google Cloud Security Best Practices
A company is using Google Kubernetes Engine (GKE) with container images of a mission-critical application. The company wants to scan the images for known security issues and securely share the report with the security team without exposing it outside Google Cloud.
What should you do?
- A .
• 1. Enable Container Threat Detection in the Security Command Center Premium tier.
• 2. Upgrade all clusters that are not on a supported version of GKE to the latest possible GKE version.
• 3. View and share the results from the Security Command Center.
- B .
• 1. Use an open source tool in Cloud Build to scan the images.
• 2. Upload reports to publicly accessible buckets in Cloud Storage by using gsutil.
• 3. Share the scan report link with your security department.
- C .
• 1. Enable vulnerability scanning in the Artifact Registry settings.
• 2. Use Cloud Build to build the images.
• 3. Push the images to Artifact Registry for automatic scanning.
• 4. View the reports in Artifact Registry.
- D .
• 1. Get a GitHub subscription.
• 2. Build the images in Cloud Build and store them in GitHub for automatic scanning.
• 3. Download the report from GitHub and share it with the security team.
C
Explanation:
"The service evaluates all changes and remote access attempts to detect runtime attacks in near-real time." : https://cloud.google.com/security-command-center/docs/concepts-container-threat-detection-overview This has nothing to do with KNOWN security Vulns in images
Your organization has implemented synchronization and SAML federation between Cloud Identity and Microsoft Active Directory. You want to reduce the risk of Google Cloud user accounts being compromised.
What should you do?
- A . Create a Cloud Identity password policy with strong password settings, and configure 2-Step Verification with security keys in the Google Admin console.
- B . Create a Cloud Identity password policy with strong password settings, and configure 2-Step Verification with verification codes via text or phone call in the Google Admin console.
- C . Create an Active Directory domain password policy with strong password settings, and configure post-SSO (single sign-on) 2-Step Verification with security keys in the Google Admin console.
- D . Create an Active Directory domain password policy with strong password settings, and configure post-SSO (single sign-on) 2-Step Verification with verification codes via text or phone call in the Google Admin console.
C
Explanation:
Objective: Reduce the risk of Google Cloud user accounts being compromised.
Solution: Implement strong password policies and post-SSO 2-Step Verification using security keys.
Steps:
Step 1: In Active Directory, configure a domain password policy with strong settings (e.g., complexity, length, expiration).
Step 2: In the Google Admin console, navigate to the Security settings.
Step 3: Enable 2-Step Verification and configure it to use security keys for post-SSO verification.
Step 4: Ensure all users enroll in the 2-Step Verification with security keys.
Using strong password policies in Active Directory along with security keys for 2-Step Verification post-SSO provides enhanced security against account compromises.
Reference:
Active Directory Password Policies
Google Admin Console 2-Step Verification
Your organization deploys a large number of containerized applications on Google Kubernetes Engine (GKE). Node updates are currently applied manually. Audit findings show that a critical patch has not been installed due to a missed notification. You need to design a more reliable, cloud-first, and scalable process for node updates.
What should you do?
- A . Migrate the cluster infrastructure to a self-managed Kubernetes environment for greater control over the patching process.
- B . Develop a custom script to continuously check for patch availability, download patches, and apply the patches across all components of the cluster.
- C . Schedule a daily reboot for all nodes to automatically upgrade.
- D . Configure node auto-upgrades for node pools in the maintenance windows.
D
Explanation:
To establish a reliable, cloud-native, and scalable process for updating nodes in your GKE clusters, configuring node auto-upgrades within designated maintenance windows is the most effective approach.
Option A: Migrating to a self-managed Kubernetes environment would increase operational overhead and complexity, as your team would be responsible for managing the entire infrastructure, including patching and updates. This contradicts the goal of adopting a cloud-first strategy and does not inherently provide a more reliable update process.
Option B: Developing custom scripts for patch management introduces potential risks and maintenance burdens. Ensuring the reliability, security, and scalability of such scripts can be challenging, and this approach may not align with best practices for managing GKE environments.
Option C: Scheduling daily reboots does not guarantee that nodes will apply the latest patches or updates. Without a mechanism to manage and apply updates, reboots alone are insufficient to maintain node security and compliance.
Option D: Configuring node auto-upgrades ensures that GKE automatically keeps your nodes up-to-date with the latest stable versions, reducing the risk of missed critical patches. By setting maintenance windows, you can control when these upgrades occur, minimizing disruptions to your workloads. This approach leverages GKE’s managed services to maintain security and compliance efficiently.
Therefore, Option D is the optimal solution, as it aligns with a cloud-first strategy and leverages GKE’s native capabilities to automate and schedule node updates effectively.
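A sketch of both settings, assuming hypothetical cluster, node pool, and location names:
```
# Enable auto-upgrade on the node pool so patches are applied without
# manual intervention.
gcloud container node-pools update default-pool \
    --cluster=prod-cluster --location=us-central1 \
    --enable-autoupgrade

# Restrict upgrades to a weekend window (recurrence uses RFC 5545 RRULE syntax).
gcloud container clusters update prod-cluster --location=us-central1 \
    --maintenance-window-start=2025-01-04T02:00:00Z \
    --maintenance-window-end=2025-01-04T06:00:00Z \
    --maintenance-window-recurrence="FREQ=WEEKLY;BYDAY=SA,SU"
```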
Reference:
Auto-upgrading nodes | Google Kubernetes Engine (GKE)
Maintenance windows and exclusions | Google Kubernetes Engine