Practice Free Professional Cloud Security Engineer Exam Online Questions
You manage your organization’s Security Operations Center (SOC). You currently monitor and detect network traffic anomalies in your Google Cloud VPCs based on packet header information. However, you want the capability to explore network flows and their payload to aid investigations.
Which Google Cloud product should you use?
- A . Marketplace IDS
- B . VPC Flow Logs
- C . VPC Service Controls logs
- D . Packet Mirroring
- E . Google Cloud Armor Deep Packet Inspection
D
Explanation:
Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. It captures all traffic and packet data, including payloads and headers, which is what lets the SOC explore network flows and their payload during investigations.
Reference: https://cloud.google.com/vpc/docs/packet-mirroring
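As a rough setup sketch with the gcloud CLI (not part of the original answer), assuming an internal passthrough load balancer forwarding rule named my-collector-rule that fronts the inspection appliances and a subnet my-subnet whose traffic should be mirrored; all names and the region are placeholders:

```shell
# Create a Packet Mirroring policy that clones traffic (headers and payloads)
# from all instances in my-subnet and forwards it to the collector load balancer.
gcloud compute packet-mirrorings create soc-mirroring \
    --region=us-central1 \
    --network=my-vpc \
    --mirrored-subnets=my-subnet \
    --collector-ilb=my-collector-rule
```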
Use customer-managed encryption keys to encrypt secrets.
Explanation:
- Provide granular access to secrets: enforce access control to secrets using project-level Identity and Access Management (IAM) bindings.
- Give you control over the rotation schedules for the encryption keys that wrap your secrets: use customer-managed encryption keys to encrypt secrets.
- Maintain environment separation: use separate Google Cloud projects to store production and non-production secrets.
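As a hedged sketch of the CMEK step, assuming the secrets live in Secret Manager with automatic replication and that a Cloud KMS key already exists in the global location (all project, key ring, and key names are placeholders):

```shell
# Create a secret whose payload is encrypted with a customer-managed Cloud KMS key,
# stored in a dedicated production project for environment separation.
gcloud secrets create prod-db-password \
    --project=prod-secrets-project \
    --replication-policy=automatic \
    --kms-key-name=projects/prod-secrets-project/locations/global/keyRings/secrets-ring/cryptoKeys/secrets-key
```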
Which international compliance standard provides guidelines for information security controls applicable to the provision and use of cloud services?
- A . ISO 27001
- B . ISO 27002
- C . ISO 27017
- D . ISO 27018
C
Explanation:
ISO/IEC 27017 provides guidelines for information security controls applicable to the provision and use of cloud services, extending the guidance of ISO/IEC 27002 for cloud environments.
Reference: https://cloud.google.com/security/compliance/iso-27017
Your organization wants to publish yearly reports of your website usage analytics. You must ensure that no data with personally identifiable information (PII) is published by using the Cloud Data Loss Prevention (Cloud DLP) API. Data integrity must be preserved.
What should you do?
- A . Encrypt the PII from the report by using the Cloud DLP API.
- B . Discover and transform PII data in your reports by using the Cloud DLP API.
- C . Detect all PII in storage by using the Cloud DLP API. Create a cloud function to delete the PII.
- D . Discover and quarantine your PII data in your storage by using the Cloud DLP API.
B
Explanation:
To ensure that no personally identifiable information (PII) is published in your yearly website usage analytics reports while preserving data integrity, the Cloud Data Loss Prevention (Cloud DLP) API can be utilized to identify and transform PII within your datasets.
Option A: Encrypting PII does not remove it from the reports; it merely obscures it, which may not be sufficient for compliance or privacy requirements.
Option B: Discovering and transforming PII ensures that sensitive information is either masked, tokenized, or otherwise obfuscated, effectively removing PII from the reports while maintaining the overall structure and utility of the data.
Option C: Detecting and deleting PII could lead to loss of valuable data and may disrupt the integrity of the reports.
Option D: Quarantining PII data implies isolating it, which doesn’t address the need to publish reports without PII.
Therefore, Option B is the most appropriate approach, as it leverages the Cloud DLP API to identify and transform PII, ensuring that the published reports are free from sensitive information while preserving data integrity.
Reference:
Cloud DLP Overview
De-identifying Sensitive Data
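As a hedged illustration of the "discover and transform" approach, the sketch below calls the Cloud DLP content:deidentify endpoint to replace detected PII with its infoType name before a record is included in a published report; the project ID and sample value are placeholders:

```shell
# De-identify free-text content: detected EMAIL_ADDRESS and PERSON_NAME values
# are replaced with their infoType names, preserving the rest of the record.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/my-analytics-project/content:deidentify" \
  -d '{
    "item": {"value": "Contact jane.doe@example.com about the Q3 report."},
    "inspectConfig": {"infoTypes": [{"name": "EMAIL_ADDRESS"}, {"name": "PERSON_NAME"}]},
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [
          {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
        ]
      }
    }
  }'
```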
Your organization leverages folders to represent different teams within your Google Cloud environment. To support Infrastructure as Code (IaC) practices, each team receives a dedicated service account upon onboarding. You want to ensure that teams have comprehensive permissions to manage resources within their assigned folders while adhering to the principle of least privilege. You must design the permissions for these team-based service accounts in the most effective way possible.
What should you do?
- A . Grant each service account the folder administrator role on its respective folder.
- B . Grant each service account the project creator role at the organization level and use folder-level IAM conditions to restrict project creation to specific folders.
- C . Assign each service account the project editor role at the organization level and instruct teams to use IAM bindings at the folder level for fine-grained permissions.
- D . Assign each service account the folder IAM administrator role on its respective folder to allow teams to create and manage additional custom roles if needed.
A
Explanation:
To ensure that each team’s service account has the necessary permissions to manage resources within their assigned folders while adhering to the principle of least privilege, the following considerations apply:
Folder Administrator Role: Granting each service account the Folder Administrator role on its respective folder provides comprehensive permissions to manage resources within that folder, including creating, updating, and deleting projects and resources. This approach ensures that teams have the necessary control over their environments without extending permissions beyond their assigned scope.
Principle of Least Privilege: By assigning permissions at the folder level, you limit the service account’s access to only the resources within its designated folder, aligning with the principle of least privilege and reducing the risk of unauthorized access to other parts of the organization.
Therefore, Option A is the most effective approach, as it provides the necessary permissions for teams to manage their resources within their assigned folders while adhering to security best practices.
Reference:
Understanding Roles
Best Practices for Enterprise Organizations
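As a sketch of the onboarding step with the gcloud CLI, assuming a placeholder folder ID and team service account:

```shell
# Grant the team's IaC service account the Folder Admin role on its own folder only,
# so it can manage projects and resources under that folder and nothing else.
gcloud resource-manager folders add-iam-policy-binding 123456789012 \
    --member="serviceAccount:team-a-iac@seed-project.iam.gserviceaccount.com" \
    --role="roles/resourcemanager.folderAdmin"
```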
Your organization wants full control of the keys used to encrypt data at rest in their Google Cloud environments. Keys must be generated and stored outside of Google and integrate with many Google Services including BigQuery.
What should you do?
- A . Create a Cloud Key Management Service (KMS) key with imported key material. Wrap the key for protection during import, and import the key generated on a trusted system into Cloud KMS.
- B . Create a KMS key that is stored on a Google-managed FIPS 140-2 Level 3 Hardware Security Module (HSM). Manage the Identity and Access Management (IAM) permissions settings, and set up the key rotation period.
- C . Use Cloud External Key Management (EKM) that integrates with an external Hardware Security Module (HSM) system from supported vendors.
- D . Use customer-supplied encryption keys (CSEK) with keys generated on trusted external systems. Provide the raw CSEK as part of the API call.
C
Explanation:
Use Cloud External Key Management (EKM) that integrates with an external Hardware Security Module (HSM) system from supported vendors: Cloud EKM allows you to use encryption keys that are managed externally to Google Cloud. This means you can generate and store your keys in an on-premises HSM or another supported external HSM service, and integrate these keys with various Google Cloud services.
Integration with Google Services: Cloud EKM integrates seamlessly with many Google Cloud services, including BigQuery, Cloud Storage, Compute Engine, and more. This provides you with full control over your encryption keys while still taking advantage of Google Cloud’s powerful services.
Reference: Cloud External Key Management (EKM) documentation
External Key Management overview
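As a hedged sketch with the gcloud CLI, assuming the key material already exists in a supported external key manager and its key URI is known; all names, the location, and the URI are placeholders:

```shell
# Create an EXTERNAL protection-level key; the key material never enters Google Cloud.
gcloud kms keys create bq-cmek \
    --keyring=ekm-ring \
    --location=us-central1 \
    --purpose=encryption \
    --protection-level=external \
    --default-algorithm=external-symmetric-encryption \
    --skip-initial-version-creation

# Point the first key version at the key held in the external key manager.
gcloud kms keys versions create \
    --key=bq-cmek \
    --keyring=ekm-ring \
    --location=us-central1 \
    --external-key-uri="https://ekm.example.com/v0/keys/my-external-key" \
    --primary
```

The resulting key can then be referenced as a customer-managed encryption key when creating BigQuery datasets or other supported resources.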
You have numerous private virtual machines on Google Cloud. You occasionally need to manage the servers through Secure Socket Shell (SSH) from a remote location. You want to configure remote access to the servers in a manner that optimizes security and cost efficiency.
What should you do?
- A . Create a site-to-site VPN from your corporate network to Google Cloud.
- B . Configure server instances with public IP addresses. Create a firewall rule to only allow traffic from your corporate IPs.
- C . Create a firewall rule to allow access from the Identity-Aware Proxy (IAP) IP range. Grant the role of an IAP-secured Tunnel User to the administrators.
- D . Create a jump host instance with a public IP. Manage the instances by connecting through the jump host.
C
Explanation:
Using Identity-Aware Proxy (IAP) for managing SSH access to private VMs ensures secure access control and avoids the need for public IPs. IAP allows you to enforce identity-based access control policies.
Enable IAP: Ensure that IAP is enabled for your project. This can be done via the Google Cloud Console under "Security" -> "Identity-Aware Proxy".
Set Up Firewall Rule: Create a firewall rule to allow SSH traffic from the IAP IP ranges.
Navigate to "VPC network" -> "Firewall".
Create a new rule allowing ingress traffic on port 22 (SSH) from the IAP IP ranges.
Assign IAP-Secured Tunnel User Role: Grant the roles/iap.tunnelResourceAccessor role to the administrators who need SSH access.
Go to "IAM & Admin" -> "IAM".
Assign the IAP-Secured Tunnel User role to the relevant users or groups.
SSH Using IAP: Administrators can now use IAP to SSH into the instances. This can be done using the gcloud command:
gcloud compute ssh [INSTANCE_NAME] --tunnel-through-iap
Reference:
Using Identity-Aware Proxy for TCP forwarding
Google Cloud Firewall Rules
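The console steps above can also be scripted; a sketch with the gcloud CLI, assuming placeholder project, network, instance, and user names (35.235.240.0/20 is Google's published IAP TCP forwarding range):

```shell
# Allow SSH only from the IAP TCP forwarding range into the VPC.
gcloud compute firewall-rules create allow-ssh-from-iap \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20

# Let the administrator open IAP tunnels to instances in the project.
gcloud projects add-iam-policy-binding my-project \
    --member="user:admin@example.com" \
    --role="roles/iap.tunnelResourceAccessor"

# Connect to a private instance that has no public IP.
gcloud compute ssh my-private-vm --zone=us-central1-a --tunnel-through-iap
```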
While migrating your organization’s infrastructure to GCP, a large number of users will need to access the GCP Console. The Identity Management team already has a well-established way to manage your users and wants to keep using your existing Active Directory or LDAP server along with the existing SSO password.
What should you do?
- A . Manually synchronize the data in Google domain with your existing Active Directory or LDAP server.
- B . Use Google Cloud Directory Sync to synchronize the data in Google domain with your existing Active Directory or LDAP server.
- C . Users sign in directly to the GCP Console using the credentials from your on-premises Kerberos compliant identity provider.
- D . Users sign in using OpenID (OIDC) compatible IdP, receive an authentication token, then use that token to log in to the GCP Console.
B
Explanation:
To allow a large number of users to access the GCP Console while keeping the existing Active Directory or LDAP server for identity management, use Google Cloud Directory Sync (GCDS).
Install GCDS:
Download and install Google Cloud Directory Sync (GCDS) on a machine that can reach your Active Directory or LDAP server.
Configure GCDS:
Set up the synchronization by specifying the LDAP server details and the Google domain. Map the LDAP attributes to Google attributes to ensure user data is synchronized correctly.
Run Synchronization:
Perform an initial synchronization to populate the Google domain with existing users from the LDAP server.
Schedule regular synchronizations to keep the data up-to-date.
Benefits:
Automated Sync: Ensures that user data is consistently updated without manual intervention.
Secure Access: Users can log in to the GCP Console using their existing credentials, enhancing security and user experience.
Reference:
Google Cloud Directory Sync Documentation
GCDS Administration Guide
You need to centralize your team’s logs for production projects. You want your team to be able to search and analyze the logs using Logs Explorer.
What should you do?
- A . Enable Cloud Monitoring workspace, and add the production projects to be monitored.
- B . Use Logs Explorer at the organization level and filter for production project logs.
- C . Create an aggregate org sink at the parent folder of the production projects, and set the destination to a Cloud Storage bucket.
- D . Create an aggregate org sink at the parent folder of the production projects, and set the destination to a logs bucket.
D
Explanation:
An aggregated sink created at the parent folder of the production projects routes logs from every project under that folder to a single destination. Setting the destination to a Cloud Logging log bucket keeps the entries in Cloud Logging, so the team can continue to search and analyze them with Logs Explorer; routing to a Cloud Storage bucket (option C) would centralize the logs but make them unavailable to Logs Explorer.
Reference: https://cloud.google.com/logging/docs/export/aggregated_sinks#supported-destinations
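As a sketch with the gcloud CLI, assuming a placeholder folder ID and a central logging project that owns the destination log bucket:

```shell
# Create a log bucket in the central logging project to receive the team's logs.
gcloud logging buckets create prod-central-logs \
    --project=central-logging-project \
    --location=global

# Route logs from every project under the production folder into that bucket.
gcloud logging sinks create prod-aggregate-sink \
    logging.googleapis.com/projects/central-logging-project/locations/global/buckets/prod-central-logs \
    --folder=123456789012 \
    --include-children
```

After the sink is created, grant its writer identity the roles/logging.bucketWriter role on the destination project so the routed entries are accepted and the team can query the bucket from Logs Explorer.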