Practice Free Professional Cloud Security Engineer Exam Online Questions
How should a customer reliably deliver Stackdriver logs from GCP to their on-premises SIEM system?
- A . Send all logs to the SIEM system via an existing protocol such as syslog.
- B . Configure every project to export all their logs to a common BigQuery DataSet, which will be queried by the SIEM system.
- C . Configure Organizational Log Sinks to export logs to a Cloud Pub/Sub Topic, which will be sent to the SIEM via Dataflow.
- D . Build a connector for the SIEM to query for all logs in real time from the GCP RESTful JSON APIs.
C
Explanation:
Scenarios for exporting Cloud Logging data: Splunk. This scenario shows how to export selected logs from Cloud Logging to Pub/Sub for ingestion into Splunk. Splunk is a security information and event management (SIEM) solution that supports several ways of ingesting data, such as receiving streaming data out of Google Cloud through Splunk HTTP Event Collector (HEC) or fetching data from Google Cloud APIs through the Splunk Add-on for Google Cloud. Using the Pub/Sub to Splunk Dataflow template, you can natively forward logs and events from a Pub/Sub topic into Splunk HEC. If Splunk HEC is not available in your Splunk deployment, you can use the Add-on to collect the logs and events from the Pub/Sub topic. https://cloud.google.com/solutions/exporting-stackdriver-logging-for-splunk
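The export path in option C starts with an organization-level (aggregated) log sink whose destination is a Pub/Sub topic. A minimal sketch of the sink body you might pass to the Cloud Logging v2 `sinks.create` method is shown below; the project, topic, and filter values are hypothetical, and in practice you would create this with `gcloud logging sinks create` or the client library.

```python
# Illustrative sketch only: the JSON body of an aggregated (organization-level)
# log sink pointing at a Pub/Sub topic, in the shape used by the Logging v2 API.
# Project, topic, and filter values below are hypothetical examples.
org_sink = {
    "name": "siem-export-sink",
    "destination": "pubsub.googleapis.com/projects/siem-export/topics/siem-logs",
    # includeChildren makes this an aggregated sink: it also captures logs
    # from every folder and project under the organization.
    "includeChildren": True,
    # Optional filter to limit which log entries are exported to the SIEM.
    "filter": "severity >= WARNING",
}

print(org_sink["destination"])
```

The Pub/Sub to Splunk Dataflow template (or an equivalent pipeline for another SIEM) then reads from that topic and delivers the entries downstream.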
You want to evaluate GCP for PCI compliance. You need to identify Google’s inherent controls.
Which document should you review to find the information?
- A . Google Cloud Platform: Customer Responsibility Matrix
- B . PCI DSS Requirements and Security Assessment Procedures
- C . PCI SSC Cloud Computing Guidelines
- D . Product documentation for Compute Engine
A
Explanation:
To evaluate Google Cloud Platform (GCP) for PCI compliance and identify Google’s inherent controls, you should review the "Google Cloud Platform: Customer Responsibility Matrix". This document provides detailed information about the shared responsibility model, outlining the security controls managed by Google and those that are the responsibility of the customer.
Steps to access and use the document:
Access the Document:
Go to the Google Cloud compliance resource center.
Locate the "Customer Responsibility Matrix" for PCI DSS compliance.
Review Inherent Controls:
The document lists various controls and specifies whether they are managed by Google, the customer, or both.
It covers different aspects such as infrastructure security, data protection, and compliance requirements.
Analyze PCI Compliance:
Use the matrix to understand which PCI DSS requirements are inherently addressed by Google Cloud. Identify the controls you need to implement and manage as a customer to ensure full compliance.
By reviewing this document, you can gain a comprehensive understanding of the inherent controls provided by Google Cloud and the responsibilities you must fulfill to achieve PCI compliance.
Reference:
Google Cloud Compliance Documentation
PCI DSS Compliance on Google Cloud
Your company runs a website that will store PII on Google Cloud Platform. To comply with data privacy regulations, this data can only be stored for a specific amount of time and must be fully deleted after this specific period. Data that has not yet reached the time period should not be deleted. You want to automate the process of complying with this regulation.
What should you do?
- A . Store the data in a single Persistent Disk, and delete the disk at expiration time.
- B . Store the data in a single BigQuery table and set the appropriate table expiration time.
- C . Store the data in a Cloud Storage bucket, and configure the bucket’s Object Lifecycle Management feature.
- D . Store the data in a single BigTable table and set an expiration time on the column families.
C
Explanation:
"To support common use cases like setting a Time to Live (TTL) for objects, retaining noncurrent versions of objects, or "downgrading" storage classes of objects to help manage costs, Cloud Storage offers the Object Lifecycle Management feature. This page describes the feature as well as the options available when using it. To learn how to enable Object Lifecycle Management, and for examples of lifecycle policies, see Managing Lifecycles." https://cloud.google.com/storage/docs/lifecycle
Your organization wants to be continuously evaluated against CIS Google Cloud Computing Foundations Benchmark v1.3.0 (CIS Google Cloud Foundation 1.3). Some of the controls are irrelevant to your organization and must be disregarded in evaluation. You need to create an automated system or process to ensure that only the relevant controls are evaluated.
What should you do?
- A . Mark all security findings that are irrelevant with a tag and a value that indicates a security exception. Select all marked findings and mute them on the console every time they appear. Activate Security Command Center (SCC) Premium.
- B . Activate Security Command Center (SCC) Premium. Create a rule to mute the security findings in SCC so they are not evaluated.
- C . Download all findings from Security Command Center (SCC) to a CSV file. Mark the findings that are part of CIS Google Cloud Foundation 1.3 in the file. Ignore the entries that are irrelevant and out of scope for the company.
- D . Ask an external audit company to provide independent reports including needed CIS benchmarks. In the scope of the audit, clarify that some of the controls are not needed and must be disregarded.
B
Explanation:
Activate Security Command Center (SCC) Premium: Security Command Center (SCC) Premium provides advanced security analytics and best practice recommendations for your Google Cloud environment. It includes functionalities such as asset discovery, vulnerability scanning, and security findings.
Create a Custom Rule to Mute Irrelevant Security Findings:
Navigate to the Security Command Center (SCC) in the Google Cloud Console.
Go to the "Settings" tab and find the "Mute findings" section.
Create a new mute rule by specifying the conditions that match the irrelevant controls you want to disregard. These conditions can be based on attributes such as resource type, finding type, and other metadata.
Apply this mute rule, which will ensure that the specified findings are not evaluated in your security posture assessments.
Ensure Continuous Compliance Monitoring:
The mute rules will automatically filter out the irrelevant findings, ensuring that only relevant controls from the CIS Google Cloud Computing Foundations Benchmark v1.3.0 are evaluated.
Regularly review and update the mute rules to adapt to any changes in your compliance requirements or security posture.
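The steps above can be sketched as the body of an SCC mute config (the `muteConfigs.create` method in the Security Command Center API). The category and severity values below are hypothetical examples of an out-of-scope control; in your environment the filter would name whichever CIS findings you have decided to disregard.

```python
# Illustrative sketch: the body of a Security Command Center mute config
# (muteConfigs.create). Findings matching the filter are muted automatically
# and excluded from future evaluations. The category/severity values are
# hypothetical examples, not a recommendation of what to mute.
mute_config = {
    "description": "Mute CIS 1.3 controls that are out of scope for this org",
    "filter": 'category="FLOW_LOGS_DISABLED" AND severity="LOW"',
}

print(mute_config["filter"])
```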
Reference:
Security Command Center Documentation
Creating and Managing Mute Rules
A customer is running an analytics workload on Google Cloud Platform (GCP) where Compute Engine instances are accessing data stored on Cloud Storage. Your team wants to make sure that this workload will not be able to access, or be accessed from, the internet.
Which two strategies should your team use to meet these requirements? (Choose two.)
- A . Configure Private Google Access on the Compute Engine subnet
- B . Avoid assigning public IP addresses to the Compute Engine cluster.
- C . Make sure that the Compute Engine cluster is running on a separate subnet.
- D . Turn off IP forwarding on the Compute Engine instances in the cluster.
- E . Configure a Cloud NAT gateway.
AB
Explanation:
Objective: Ensure that the analytics workload on Compute Engine instances accessing Cloud Storage does not interact with the public internet.
Solution:
Private Google Access: This allows Compute Engine instances that only have internal IP addresses to reach Google APIs and services through a private connection without the need for a public IP address.
No Public IP Addresses: By avoiding public IP addresses for the instances, you ensure that they are not accessible from the internet and do not initiate internet connections.
Steps:
Step 1: Open the Google Cloud Console.
Step 2: Navigate to the VPC Network page and select the subnet where the Compute Engine instances are located.
Step 3: Enable Private Google Access for the subnet.
Step 4: Ensure that when launching the Compute Engine instances, no public IP addresses are assigned to them.
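The two settings above map to two concrete API shapes, sketched below. The subnet and region names are hypothetical; enabling Private Google Access corresponds to the `setPrivateIpGoogleAccess` call on the subnetwork, and leaving out `accessConfigs` on an instance's network interface is what "no public IP" means in the Compute Engine API.

```python
# Illustrative sketch, assuming hypothetical resource names.
# 1) Private Google Access: the request body for
#    compute.subnetworks.setPrivateIpGoogleAccess on the workload's subnet.
pga_request_body = {"privateIpGoogleAccess": True}

# 2) No public IP: a network interface WITHOUT an "accessConfigs" entry,
#    so no external IP address is assigned to the instance.
network_interface = {
    "subnetwork": "regions/us-central1/subnetworks/analytics-subnet",
    # no "accessConfigs" key -> internal IP only
}

print("accessConfigs" in network_interface)
```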
Reference:
Configuring Private Google Access
Preventing External IP Address Assignment
Your Google Cloud environment has one organization node, one folder named "Apps," and several projects within that folder. The organization node enforces the constraints/iam.allowedPolicyMemberDomains organization policy, which allows members from the terramearth.com organization. The "Apps" folder enforces the constraints/iam.allowedPolicyMemberDomains organization policy, which allows members from the flowlogistic.com organization. It also has the inheritFromParent: false property.
You attempt to grant access to a project in the Apps folder to the user [email protected].
What is the result of your action and why?
- A . The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy must be defined on the current project to deactivate the constraint temporarily.
- B . The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy is in place and only members from the flowlogistic.com organization are allowed.
- C . The action succeeds because members from both organizations, terramearth.com or flowlogistic.com, are allowed on projects in the "Apps" folder.
- D . The action succeeds and the new member is successfully added to the project’s Identity and Access Management (IAM) policy because all policies are inherited by underlying folders and projects.
B
Explanation:
The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy is in place and only members from the flowlogistic.com organization are allowed. The inheritFromParent: false property on the “Apps” folder means that it does not inherit the organization policy from the organization node. Therefore, only the policy set at the folder level applies, which allows only members from the flowlogistic.com organization. As a result, the attempt to grant access to the user [email protected] fails because this user is not a member of the flowlogistic.com organization.
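A sketch of the folder-level policy, in the shape used by the Resource Manager v1 `setOrgPolicy` API, makes the inheritance behavior concrete. Note that in practice `allowedPolicyMemberDomains` takes Cloud Identity customer IDs rather than raw domain names; the ID below is a hypothetical placeholder for flowlogistic.com.

```python
# Illustrative sketch: the org policy set on the "Apps" folder
# (Resource Manager v1 setOrgPolicy shape). "C03xxxx99" is a hypothetical
# Cloud Identity customer ID standing in for flowlogistic.com.
apps_folder_policy = {
    "constraint": "constraints/iam.allowedPolicyMemberDomains",
    "listPolicy": {
        "allowedValues": ["C03xxxx99"],  # flowlogistic.com only
        # inheritFromParent: false discards the organization-level policy,
        # so terramearth.com members are NOT merged into the allowed set.
        "inheritFromParent": False,
    },
}

print(apps_folder_policy["listPolicy"]["allowedValues"])
```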
You are in charge of creating a new Google Cloud organization for your company.
Which two actions should you take when creating the super administrator accounts? (Choose two.)
- A . Create an access level in the Google Admin console to prevent super admin from logging in to Google Cloud.
- B . Disable any Identity and Access Management (IAM) roles for super admin at the organization level in the Google Cloud Console.
- C . Use a physical token to secure the super admin credentials with multi-factor authentication (MFA).
- D . Use a private connection to create the super admin accounts to avoid sending your credentials over the Internet.
- E . Provide non-privileged identities to the super admin users for their day-to-day activities.
CE
Explanation:
Physical Token for MFA: Implement multi-factor authentication (MFA) using physical tokens (such as security keys) for super admin accounts. This adds an extra layer of security to the highest privilege accounts.
Non-Privileged Identities: Provide super admins with separate non-privileged accounts for daily activities. This practice minimizes the risk associated with using highly privileged accounts for routine tasks.
Account Management: Ensure that super admin accounts are only used for tasks requiring elevated privileges, reducing exposure to potential security threats.
These measures enhance the security of super admin accounts, protecting your Google Cloud organization from unauthorized access.
Reference:
Google Cloud – Best Practices for Securing Cloud Identity
Google Cloud – Using Security Keys
Your team sets up a Shared VPC Network where project co-vpc-prod is the host project. Your team has configured the firewall rules, subnets, and VPN gateway on the host project. They need to enable Engineering Group A to attach a Compute Engine instance to only the 10.1.1.0/24 subnet.
What should your team grant to Engineering Group A to meet this requirement?
- A . Compute Network User Role at the host project level.
- B . Compute Network User Role at the subnet level.
- C . Compute Shared VPC Admin Role at the host project level.
- D . Compute Shared VPC Admin Role at the service project level.
B
Explanation:
To enable Engineering Group A to attach a Compute Engine instance to a specific subnet (10.1.1.0/24) in a Shared VPC, you should grant the Compute Network User Role at the subnet level.
This role allows users to use the subnetwork for their instances without giving them broader permissions at the project level.
Step-by-Step:
Identify the Subnet: Locate the subnet (10.1.1.0/24) in the host project.
Grant Role:
Navigate to the GCP Console > VPC network > VPC networks.
Select the Shared VPC host project and locate the specific subnet.
Click on "Edit" and go to the "IAM & Admin" section.
Assign the "Compute Network User" role to Engineering Group A at the subnet level.
Verification: Ensure that Engineering Group A can now attach Compute Engine instances to the specified subnet.
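The steps above amount to attaching a single binding to the subnet's IAM policy (the `compute.subnetworks.setIamPolicy` method), sketched below. The group address is a hypothetical example for Engineering Group A.

```python
# Illustrative sketch: the IAM policy applied at the SUBNET level
# (compute.subnetworks.setIamPolicy), scoping access to one subnet only.
# The group address is hypothetical.
subnet_iam_policy = {
    "bindings": [
        {
            # compute.networkUser lets members attach instances to the
            # subnetwork without broader host-project permissions.
            "role": "roles/compute.networkUser",
            "members": ["group:[email protected]"],
        }
    ]
}

print(subnet_iam_policy["bindings"][0]["role"])
```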
Reference:
Shared VPC Overview
Compute Network User Role