Practice Free Professional Cloud Security Engineer Exam Online Questions
You are a member of your company’s security team. You have been asked to reduce your Linux bastion host external attack surface by removing all public IP addresses. Site Reliability Engineers (SREs) require access to the bastion host from public locations so they can access the internal VPC while off-site.
How should you enable this access?
- A . Implement Cloud VPN for the region where the bastion host lives.
- B . Implement OS Login with 2-step verification for the bastion host.
- C . Implement Identity-Aware Proxy TCP forwarding for the bastion host.
- D . Implement Google Cloud Armor in front of the bastion host.
C

Explanation:
Identity-Aware Proxy (IAP) TCP forwarding lets authorized users reach the bastion host's SSH port without the instance having a public IP address: connections are tunneled through IAP (for example, `gcloud compute ssh bastion --tunnel-through-iap`) and authorized via IAM, so the external attack surface is removed while SREs keep off-site access. Cloud VPN (A) would require VPN infrastructure for every SRE location; OS Login with 2-step verification (B) hardens authentication but still requires network reachability; Google Cloud Armor (D) protects load-balanced services, not SSH access to a bastion.
You are using Security Command Center (SCC) to protect your workloads and receive alerts for suspected security breaches at your company. You need to detect cryptocurrency mining software.
Which SCC service should you use?
- A . Web Security Scanner
- B . Container Threat Detection
- C . Rapid Vulnerability Detection
- D . Virtual Machine Threat Detection
D
Explanation:
The goal is to detect cryptocurrency mining software using Security Command Center (SCC). The SCC Premium and Enterprise tiers offer several specialized threat detection services.
Virtual Machine Threat Detection (VMTD): This service is explicitly designed to scan virtual machines (Compute Engine instances and GKE nodes) for specific threats, including cryptocurrency mining software. It operates at the hypervisor level, performing deep scans of VM memory and disks.
Reference: "Virtual Machine Threat Detection (VMTD) helps you detect potential threats, such as cryptocurrency mining and malware, within your Compute Engine instances and GKE nodes." (Google Cloud Documentation: "Virtual Machine Threat Detection overview | Security Command Center" – https://cloud.google.com/security-command-center/docs/concepts-vm-threat-detection-overview)
Reference: "This service scans virtual machines to detect potentially malicious applications, such as cryptocurrency mining software, kernel-mode rootkits, and malware running in compromised cloud environments." (same page)
Let’s evaluate the other options:
A. Web Security Scanner: This service scans for common web application vulnerabilities such as XSS, Flash injection, and mixed content. It is not designed to detect runtime threats like cryptocurrency mining software.
B. Container Threat Detection: Container Threat Detection (CTD) also detects cryptocurrency mining, but it focuses specifically on runtime threats within GKE containers. The question asks about detecting "cryptocurrency mining software" generally, and VMs are a common target for such activity (GKE nodes are themselves VMs). If the context explicitly mentioned containers or Cloud Run, CTD would be the more specific answer; for general detection of mining software on workloads, VMTD, which explicitly lists cryptocurrency mining among its detections, is the most direct and broadly applicable choice.
C. Rapid Vulnerability Detection: This service actively scans internet-exposed assets for network vulnerabilities and misconfigurations. It focuses on finding known vulnerabilities, not on detecting active malicious processes like cryptocurrency mining.
Given the direct and explicit mention of cryptocurrency mining detection for VMs in its documentation, Virtual Machine Threat Detection is the correct SCC service to use.
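When VMTD findings are exported (for example via Pub/Sub notifications), they can be triaged programmatically. A minimal sketch, assuming the standard finding-notification JSON shape; the category names are taken from the documented VMTD taxonomy but should be checked against your tier's finding list:

```python
import json

# VMTD cryptomining categories (assumed from the SCC docs -- verify against
# the finding taxonomy for your SCC tier).
MINING_CATEGORIES = {
    "Execution: Cryptocurrency Mining Hash Match",
    "Execution: Cryptocurrency Mining YARA Rule",
}

def mining_findings(messages):
    """Filter SCC finding notifications (JSON strings) down to cryptomining hits."""
    hits = []
    for raw in messages:
        finding = json.loads(raw).get("finding", {})
        if finding.get("category") in MINING_CATEGORIES:
            hits.append(finding.get("resourceName", "<unknown resource>"))
    return hits

sample = json.dumps({
    "finding": {
        "category": "Execution: Cryptocurrency Mining YARA Rule",
        "resourceName": "//compute.googleapis.com/projects/p/zones/z/instances/vm-1",
    }
})
print(mining_findings([sample]))
```

In practice the same filter can be expressed server-side as a notification config filter, so only relevant findings reach your alerting pipeline.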
Your organization hosts a financial services application running on Compute Engine instances for a third-party company. The third-party company’s servers that will consume the application also run on Compute Engine in a separate Google Cloud organization. You need to configure a secure network connection between the Compute Engine instances. You have the following requirements:
- The network connection must be encrypted.
- The communication between servers must be over private IP addresses.
What should you do?
- A . Configure a Cloud VPN connection between your organization’s VPC network and the third party’s that is controlled by VPC firewall rules.
- B . Configure a VPC peering connection between your organization’s VPC network and the third party’s that is controlled by VPC firewall rules.
- C . Configure a VPC Service Controls perimeter around your Compute Engine instances, and provide access to the third party via an access level.
- D . Configure an Apigee proxy that exposes your Compute Engine-hosted application as an API, and is encrypted with TLS which allows access only to the third party.
A
Explanation:
To meet the requirements of encrypted communication over private IP addresses between Compute Engine instances in different Google Cloud organizations, a Cloud VPN connection is appropriate.
Cloud VPN: Cloud VPN creates a secure, encrypted tunnel between your organization’s VPC network and the third party’s VPC network. This ensures that data transmitted over the network is encrypted and secure.
Private IP Communication: Cloud VPN allows communication over private IP addresses, which helps maintain security by keeping traffic within the Google Cloud network and not exposing it to the public internet.
Firewall Rules: VPC firewall rules can be configured to control the traffic that flows through the VPN, ensuring that only authorized traffic is allowed, further enhancing security.
Unlike VPC Network Peering, which provides private connectivity but does not itself add an encryption layer you control, Cloud VPN gives you an explicit IPsec-encrypted tunnel. By setting up a Cloud VPN connection, you can achieve secure, encrypted communication over private IP addresses between different Google Cloud organizations.
Reference: Cloud VPN Overview
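As a small illustration of the private-IP requirement, here is a self-contained check that an endpoint address falls in an RFC 1918 range. This is not a Google Cloud API, just standard-library validation of the kind you might use in a connectivity test:

```python
import ipaddress

# The three RFC 1918 private ranges used inside VPC networks.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(ip: str) -> bool:
    """True if `ip` is an IPv4 address inside an RFC 1918 private range."""
    addr = ipaddress.ip_address(ip)
    return addr.version == 4 and any(addr in net for net in RFC1918)

print(is_rfc1918("10.128.0.7"))   # True: a typical private VPC address
print(is_rfc1918("203.0.113.9"))  # False: a public address
```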
A security audit uncovered several inconsistencies in your project’s Identity and Access Management (IAM) configuration. Some service accounts have overly permissive roles, and a few external collaborators have more access than necessary. You need to gain detailed visibility into changes to IAM policies, user activity, service account behavior, and access to sensitive projects.
What should you do?
- A . Deploy the OS Config Management agent to your VMs. Use OS Config Management to create patch management jobs and monitor system modifications.
- B . Enable the metrics explorer in Cloud Monitoring to follow service account authentication events, and build alerts based on them.
- C . Use Cloud Audit Logs. Create log export sinks to send these logs to a security information and event management (SIEM) solution for correlation with other event sources.
- D . Configure Google Cloud Functions to be triggered by changes to IAM policies. Analyze changes by using the policy simulator, send alerts upon risky modifications, and store event details.
C
Explanation:
The problem requires gaining "detailed visibility into changes to IAM policies, user activity, service account behavior, and access to sensitive projects" due to security inconsistencies.
Cloud Audit Logs: Cloud Audit Logs records administrative activities, data access, and system events across Google Cloud. These logs are the primary source of truth for tracking "who did what, where, and when" in your Google Cloud environment.
Reference: "Cloud Audit Logs maintains the following audit logs for each project, folder, and organization: Admin Activity audit logs, Data Access audit logs, System Event audit logs, Policy Denied audit logs."
Reference: "Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. Data Access audit logs record API calls that read the configuration or metadata of resources, as well as user-provided data." (Google Cloud Documentation: "Cloud Audit Logs overview" – https://cloud.google.com/logging/docs/audit)
These logs directly capture:
Changes to IAM policies: Recorded in Admin Activity logs.
User activity: Recorded in Admin Activity and Data Access logs.
Service account behavior: Actions performed by service accounts are logged in the same way as user actions.
Access to sensitive projects: Data Access logs, especially for sensitive data services, record access events.
Log export sinks: To gain "detailed visibility" and enable "correlation with other event sources," these audit logs should be exported to a centralized Security Information and Event Management (SIEM) solution. Log sinks allow you to route logs from Cloud Logging to various destinations, including BigQuery, Cloud Storage, or Pub/Sub (which can then feed into a SIEM).
Reference: "You can use sinks to route some or all of your logs to supported destinations" and "Many security information and event management (SIEM) systems can ingest logs through Cloud Pub/Sub." (Google Cloud Documentation: "Routing and storage overview | Cloud Logging" – https://cloud.google.com/logging/docs/routing-overview)
Let’s evaluate the other options:
A. OS Config Management agent: This service manages operating system configurations, patching, and inventory on VMs. It is not designed to monitor or log IAM policy changes, user activity, or service account behavior within Google Cloud’s IAM system.
B. Metrics Explorer in Cloud Monitoring: While Cloud Monitoring can provide some metrics related to service account authentication, it focuses on time-series data and operational health metrics. It does not provide the detailed, event-level audit records necessary for forensic analysis of IAM policy changes, specific user actions, or granular access to sensitive data that Cloud Audit Logs offers.
D. Cloud Functions triggered by IAM policy changes plus Policy Simulator: This describes a reactive automation pattern for some IAM changes. While useful for immediate alerting on risky modifications, it is a custom solution covering only a subset of the requirements. It does not inherently provide detailed visibility into all user activity or comprehensive service account behavior across all projects, nor does it replace the logging and correlation capabilities of a SIEM ingesting raw audit logs; Cloud Audit Logs are the fundamental data source this approach would rely on.
Therefore, leveraging Cloud Audit Logs and exporting them to a SIEM is the most comprehensive and recommended approach for gaining detailed visibility into IAM-related changes and activities across your Google Cloud organization
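Before (or inside) the SIEM, each audit log entry can be summarized into who/what/where/when. A minimal sketch using the standard Cloud Audit Logs entry fields (`protoPayload.authenticationInfo.principalEmail`, `methodName`, `resourceName`); the sample entry is fabricated for illustration:

```python
import json

def summarize_audit_entry(raw: str) -> dict:
    """Extract the who/what/where/when from a Cloud Audit Logs JSON entry."""
    entry = json.loads(raw)
    payload = entry.get("protoPayload", {})
    return {
        "who": payload.get("authenticationInfo", {}).get("principalEmail"),
        "what": payload.get("methodName"),
        "where": payload.get("resourceName"),
        "when": entry.get("timestamp"),
    }

sample = json.dumps({
    "protoPayload": {
        "authenticationInfo": {"principalEmail": "sa-ci@example.iam.gserviceaccount.com"},
        "methodName": "SetIamPolicy",
        "resourceName": "projects/sensitive-prod",
    },
    "timestamp": "2024-05-01T12:00:00Z",
})
print(summarize_audit_entry(sample)["what"])  # SetIamPolicy
```

Note that `SetIamPolicy` entries are exactly the Admin Activity records that reveal the IAM policy changes the audit is concerned with.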
A retail customer allows users to upload comments and product reviews. The customer needs to make sure the text does not include sensitive data before the comments or reviews are published.
Which Google Cloud Service should be used to achieve this?
- A . Cloud Key Management Service
- B . Cloud Data Loss Prevention API
- C . BigQuery
- D . Cloud Security Scanner
B
Explanation:
To ensure user-uploaded comments and product reviews do not include sensitive data before publication, use the Cloud Data Loss Prevention (DLP) API.
Enable the DLP API:
Go to the Cloud Console and navigate to APIs & Services > Library.
Search for "Data Loss Prevention API" and enable it.
Configure DLP API:
Create an inspection template specifying the types of sensitive data to detect. Set up de-identification templates if you want to redact or mask sensitive data.
Implement DLP in Application:
Use the Google Cloud DLP Client Library for the desired programming language.
Send the text data to the DLP API for inspection before saving or publishing.
```python
from google.cloud import dlp_v2

dlp_client = dlp_v2.DlpServiceClient()
parent = f"projects/{project_id}"  # project_id is set elsewhere in your app
item = {"value": "User comment text here"}
inspect_config = {
    "info_types": [{"name": "PERSON_NAME"}, {"name": "CREDIT_CARD_NUMBER"}]
}
# Current client versions take a single request dict.
response = dlp_client.inspect_content(
    request={"parent": parent, "inspect_config": inspect_config, "item": item}
)
```
Reference:
Cloud Data Loss Prevention API Documentation
DLP API Client Libraries
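The publish gate itself is simple once findings are returned: block the post if any finding exists. The sketch below stands in for the `inspect_content` call with toy regex detectors so it runs self-contained; these patterns are illustrative assumptions, and real detection should rely on the DLP API's infoType detectors:

```python
import re

# Toy stand-ins for DLP infoType detectors -- illustrative only.
PATTERNS = {
    "CREDIT_CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL_ADDRESS": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def safe_to_publish(text: str) -> bool:
    """Return False if any (toy) detector matches, mirroring a DLP findings check."""
    return not any(p.search(text) for p in PATTERNS.values())

print(safe_to_publish("Great product, five stars!"))      # True
print(safe_to_publish("Contact me at jane@example.com"))  # False
```

With the real API, the equivalent check is whether `response.result.findings` is empty.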
Your organization strives to be a market leader in software innovation. You provisioned a large number of Google Cloud environments so developers can test the integration of Gemini in Vertex AI into their existing applications or create new projects. Your organization has 200 developers and a five-person security team. You must enforce proper security policies across the Google Cloud environments and detect violations of them.
What should you do? (Choose 2 answers)
- A . Apply a predefined AI-recommended security posture template for Gemini in Vertex AI in Security Command Center Enterprise or Premium tiers.
- B . Publish internal policies and clear guidelines to securely develop applications.
- C . Implement the least privileged access Identity and Access Management roles to prevent misconfigurations.
- D . Apply organization policy constraints. Detect and monitor drifts by using Security Health Analytics.
- E . Use Cloud Logging to create log filters to detect misconfigurations. Trigger Cloud Run functions to remediate misconfigurations.
C, D
Explanation:
To maintain proper security policies across numerous Google Cloud environments, especially with a large developer base and a small security team, it’s crucial to implement automated and scalable security measures.
Option A: While applying AI-recommended security posture templates can be beneficial, as of now, there isn’t a specific predefined template for Gemini in Vertex AI within the Security Command Center.
Option B: Publishing internal policies and guidelines is essential for promoting secure development practices but may not be sufficient alone to enforce or detect security policies.
Option C: Implementing the principle of least privilege through Identity and Access Management (IAM) roles minimizes the risk of misconfigurations and unauthorized access by ensuring users have only the permissions necessary for their tasks.
Option D: Applying organization policy constraints enforces specific configurations and restrictions across projects. Utilizing Security Health Analytics helps in detecting and monitoring deviations from these policies, providing automated insights into potential security issues.
Option E: Using Cloud Logging to detect misconfigurations and triggering Cloud Run functions for remediation introduces complexity and may require significant maintenance, making it less practical for a small security team.
Therefore, Options C and D are the most effective strategies. They provide automated enforcement and monitoring of security policies, aligning with the need for scalable solutions given the organization’s size and resources.
Reference:
Identity and Access Management (IAM) Overview
Organization Policy Service Overview
Security Health Analytics Overview
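Option C's least-privilege review can also be partially automated. A minimal sketch that flags risky bindings in an IAM policy document (the shape produced by `gcloud projects get-iam-policy --format=json`); the broad-role set and trusted domain are illustrative assumptions:

```python
# Roles considered overly broad for developers, and the workforce identity
# domain -- both are example values, not fixed Google Cloud constants.
BROAD_ROLES = {"roles/owner", "roles/editor"}
TRUSTED_DOMAIN = "example.com"

def risky_bindings(policy: dict):
    """Return (member, role, reason) tuples for bindings worth reviewing."""
    issues = []
    for binding in policy.get("bindings", []):
        role = binding["role"]
        for member in binding.get("members", []):
            if role in BROAD_ROLES:
                issues.append((member, role, "overly broad role"))
            if member.startswith("user:") and not member.endswith("@" + TRUSTED_DOMAIN):
                issues.append((member, role, "external collaborator"))
    return issues

policy = {"bindings": [
    {"role": "roles/editor", "members": ["serviceAccount:ci@p.iam.gserviceaccount.com"]},
    {"role": "roles/viewer", "members": ["user:guest@gmail.com"]},
]}
for issue in risky_bindings(policy):
    print(issue)
```

This mirrors, in miniature, what Security Health Analytics detectors do continuously and at organization scale.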
Your company is using Cloud Dataproc for its Spark and Hadoop jobs. You want to be able to create, rotate, and destroy symmetric encryption keys used for the persistent disks used by Cloud Dataproc. Keys can be stored in the cloud.
What should you do?
- A . Use the Cloud Key Management Service to manage the data encryption key (DEK).
- B . Use the Cloud Key Management Service to manage the key encryption key (KEK).
- C . Use customer-supplied encryption keys to manage the data encryption key (DEK).
- D . Use customer-supplied encryption keys to manage the key encryption key (KEK).
B
Explanation:
This PD and bucket data is encrypted using a Google-generated data encryption key (DEK) and key encryption key (KEK). The CMEK feature allows you to create, use, and revoke the key encryption key (KEK). Google still controls the data encryption key (DEK). For more information on Google data encryption keys, see Encryption at Rest.
https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption
https://codelabs.developers.google.com/codelabs/encrypt-and-decrypt-data-with-cloud-kms#0
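The DEK/KEK split described above is the envelope-encryption pattern. A toy sketch of its structure, with XOR standing in for real AES (never use XOR in practice); under CMEK, the wrap/unwrap steps in the middle become Cloud KMS encrypt/decrypt calls, which is why you control rotation and destruction of the KEK but not the DEK:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy cipher standing in for AES -- never use XOR for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. A data encryption key (DEK) encrypts the disk data locally.
dek = secrets.token_bytes(32)
ciphertext = xor(b"persistent disk block", dek)

# 2. The key encryption key (KEK) wraps the DEK. With CMEK, this wrap/unwrap
#    is a Cloud KMS encrypt/decrypt call against a key you manage.
kek = secrets.token_bytes(32)
wrapped_dek = xor(dek, kek)

# 3. To read: unwrap the DEK with the KEK, then decrypt the data.
assert xor(ciphertext, xor(wrapped_dek, kek)) == b"persistent disk block"
print("round trip OK")
```

Destroying or disabling the KEK makes every wrapped DEK (and therefore the disk data) unrecoverable, which is the control the question asks for.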
A customer implements Cloud Identity-Aware Proxy for their ERP system hosted on Compute Engine. Their security team wants to add a security layer so that the ERP systems only accept traffic from Cloud Identity-Aware Proxy.
What should the customer do to meet these requirements?
- A . Make sure that the ERP system can validate the JWT assertion in the HTTP requests.
- B . Make sure that the ERP system can validate the identity headers in the HTTP requests.
- C . Make sure that the ERP system can validate the x-forwarded-for headers in the HTTP requests.
- D . Make sure that the ERP system can validate the user’s unique identifier headers in the HTTP requests.
A
Explanation:
Use cryptographic verification. If there is a risk of IAP being turned off or bypassed, your app can check that the identity information it receives is valid. This uses a third web request header added by IAP, called X-Goog-IAP-JWT-Assertion. The value of the header is a cryptographically signed object that also contains the user identity data. Your application can verify the digital signature and use the data provided in this object to be certain that it was provided by IAP without alteration.
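The verify-then-decode flow looks like the sketch below. Note the simplifications: IAP actually signs the X-Goog-IAP-JWT-Assertion header with ES256 against Google's published public keys, and production code should use a maintained library such as google-auth; HS256 with a shared secret is used here only to keep the example self-contained:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify_jwt_hs256(token: str, key: bytes, audience: str) -> dict:
    """Illustrative verify-then-decode. IAP really uses ES256; the audience
    claim must match your backend service ID."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig_b64):
        raise ValueError("bad signature")
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    if payload.get("aud") != audience or payload.get("exp", 0) < time.time():
        raise ValueError("wrong audience or expired")
    return payload

# Build a sample token the same way to demonstrate the round trip.
key = b"shared-secret"
claims = {"aud": "/projects/123/global/backendServices/456",
          "email": "sre@example.com", "exp": int(time.time()) + 300}
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
body = b64url(json.dumps(claims).encode())
sig = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
token = f"{header}.{body}.{sig}"
print(verify_jwt_hs256(token, key, claims["aud"])["email"])  # sre@example.com
```

The key point for the exam question: the ERP system must validate the signed JWT assertion (option A), not merely trust unsigned identity or x-forwarded-for headers, which can be spoofed if IAP is bypassed.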
A DevOps team will create a new container to run on Google Kubernetes Engine. As the application will be internet-facing, they want to minimize the attack surface of the container.
What should they do?
- A . Use Cloud Build to build the container images.
- B . Build small containers using small base images.
- C . Delete non-used versions from Container Registry.
- D . Use a Continuous Delivery tool to deploy the application.
B
Explanation:
To minimize the attack surface of the container for an internet-facing application running on Google Kubernetes Engine (GKE), the best practice is to build small containers using small base images. This approach helps in the following ways:
Reduce Vulnerabilities: Smaller base images contain fewer packages and dependencies, which minimizes the potential vulnerabilities that an attacker could exploit.
Improved Security: Using minimal base images such as distroless or Alpine Linux ensures that only the necessary components are included, reducing the attack surface significantly.
Easier Maintenance: Small containers are easier to maintain and update, ensuring that security patches can be applied quickly without dealing with unnecessary components.
Steps to Implement:
Choose a Minimal Base Image:
Use base images like gcr.io/distroless/base or alpine.
```dockerfile
FROM gcr.io/distroless/base
COPY myapp /myapp
CMD ["/myapp"]
```
Optimize Container Image:
Remove unnecessary tools and libraries.
Use multi-stage builds to keep the final image small.
Regularly Update Base Images:
Keep the base images up-to-date with the latest security patches.
Reference:
Distroless Images
Best Practices for Building Containers