Practice Free Professional Cloud Security Engineer Exam Online Questions
You are setting up a CI/CD pipeline to deploy containerized applications to your production clusters on Google Kubernetes Engine (GKE). You need to prevent containers with known vulnerabilities from being deployed.
You have the following requirements for your solution:
Must be cloud-native
Must be cost-efficient
Minimize operational overhead
How should you accomplish this? (Choose two.)
- A . Create a Cloud Build pipeline that will monitor changes to your container templates in a Cloud Source Repositories repository. Add a step to analyze Container Analysis results before allowing the build to continue.
- B . Use a Cloud Function triggered by log events in Google Cloud’s operations suite to automatically scan your container images in Container Registry.
- C . Use a cron job on a Compute Engine instance to scan your existing repositories for known vulnerabilities and raise an alert if a non-compliant container image is found.
- D . Deploy Jenkins on GKE and configure a CI/CD pipeline to deploy your containers to Container Registry. Add a step to validate your container images before deploying your container to the cluster.
- E . In your CI/CD pipeline, add an attestation on your container image when no vulnerabilities have been found. Use a Binary Authorization policy to block deployments of containers with no attestation in your cluster.
AE
Explanation:
Cloud Build with a Container Analysis gate stops vulnerable images at build time, and Binary Authorization attestations ensure that only images that passed the scan can be deployed to the GKE cluster. Both are managed, cloud-native services, which keeps cost and operational overhead low. Options B, C, and D introduce self-managed components (Cloud Functions glue code, a Compute Engine cron job, or a Jenkins deployment) that add operational overhead.
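As an illustration of the gate in option A, the Cloud Build step that analyzes scan results can be sketched as a small script that fails the build above a severity threshold. The dict format below is a simplified stand-in for real Container Analysis occurrences, not the actual API response shape, and the threshold is an assumption:

```python
# Sketch of a build-gate step: fail the pipeline when the image scan
# reports any vulnerability at or above a chosen severity threshold.
SEVERITY_RANK = {"MINIMAL": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate(occurrences, threshold="HIGH"):
    """Return the blocking vulnerabilities; an empty list means the image passes."""
    limit = SEVERITY_RANK[threshold]
    return [o for o in occurrences
            if SEVERITY_RANK.get(o["severity"], 0) >= limit]

scan = [
    {"cve": "CVE-2024-0001", "severity": "LOW"},
    {"cve": "CVE-2024-0002", "severity": "CRITICAL"},
]
blocking = gate(scan)
print("blocked" if blocking else "ok")
```

In a real pipeline this check runs as a Cloud Build step after the image is scanned, and a non-zero exit code stops the build before any deployment.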
Your team needs to configure their Google Cloud Platform (GCP) environment so they can centralize the control over networking resources like firewall rules, subnets, and routes. They also have an on-premises environment where resources need access back to the GCP resources through a private VPN connection. The networking resources will need to be controlled by the network security team.
Which type of networking design should your team use to meet these requirements?
- A . Shared VPC Network with a host project and service projects
- B . Grant Compute Admin role to the networking team for each engineering project
- C . VPC peering between all engineering projects using a hub and spoke model
- D . Cloud VPN Gateway between all engineering projects using a hub and spoke model
A
Explanation:
Reference: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#centralize_network_control
Use Shared VPC to connect to a common VPC network. Resources in those projects can communicate with each other securely and efficiently across project boundaries using internal IPs. You can manage shared network resources, such as subnets, routes, and firewalls, from a central host project, enabling you to apply and enforce consistent network policies across the projects.
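A sketch of how the host project model is set up from the CLI (project IDs are placeholders):

```shell
# Command sketch; HOST_PROJECT_ID and SERVICE_PROJECT_ID are placeholders.
# Designate the host project that owns the shared network resources:
gcloud compute shared-vpc enable HOST_PROJECT_ID

# Attach an engineering (service) project so it can use the shared subnets:
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project HOST_PROJECT_ID
```

The network security team is then granted network admin roles only in the host project, giving them central control over firewall rules, subnets, and routes.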
Your organization is using GitHub Actions as a continuous integration and delivery (CI/CD) platform.
You must enable access to Google Cloud resources from the CI/CD pipelines in the most secure way.
What should you do?
- A . Create a service account key and add it to the GitHub pipeline configuration file.
- B . Create a service account key and add it to the GitHub repository content.
- C . Configure a Google Kubernetes Engine cluster that uses Workload Identity to supply credentials to GitHub.
- D . Configure workload identity federation to use GitHub as an identity pool provider.
D
Explanation:
Challenge:
Ensuring secure access to Google Cloud resources from GitHub Actions CI/CD pipelines without directly managing service account keys.
Workload Identity Federation:
Allows for the delegation of access to Google Cloud resources based on federated identities, such as those from GitHub.
Benefits:
This approach eliminates the need to manage service account keys, reducing the risk of key leakage.
It leverages GitHub’s identity provider capabilities to authenticate and authorize access.
Steps to Configure Workload Identity Federation:
Step 1: Create a workload identity pool in Google Cloud.
Step 2: Add GitHub as an identity provider within the pool.
Step 3: Configure the necessary permissions and bindings for the identity pool to allow GitHub Actions to access Google Cloud resources.
Step 4: Update the GitHub Actions workflow to use the identity federation for authentication.
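Step 4 can be sketched as a GitHub Actions workflow fragment. The pool, provider, project number, and service account names below are placeholders, and the fragment assumes the google-github-actions/auth action:

```yaml
# Hypothetical workflow fragment; all identifiers are placeholders.
permissions:
  id-token: write   # allow the job to mint a GitHub OIDC token
  contents: read

steps:
  - uses: actions/checkout@v4
  - id: auth
    uses: google-github-actions/auth@v2
    with:
      workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/github-pool/providers/github-provider
      service_account: ci-deployer@my-project.iam.gserviceaccount.com
```

No long-lived service account key is stored anywhere; the job exchanges a short-lived GitHub OIDC token for Google Cloud credentials at run time.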
Reference:
Workload Identity Federation
Configuring Workload Identity Federation with GitHub
Your team needs to prevent users from creating projects in the organization. Only the DevOps team should be allowed to create projects on behalf of the requester.
Which two tasks should your team perform to handle this request? (Choose two.)
- A . Remove all users from the Project Creator role at the organizational level.
- B . Create an Organization Policy constraint, and apply it at the organizational level.
- C . Grant the Project Editor role at the organizational level to a designated group of users.
- D . Add a designated group of users to the Project Creator role at the organizational level.
- E . Grant the billing account creator role to the designated DevOps team.
AD
Explanation:
Objective: Prevent users from creating projects while allowing only the DevOps team to create projects.
Solution: Modify IAM roles and permissions.
Steps:
Step 1: Open the Google Cloud Console.
Step 2: Navigate to the IAM & Admin page.
Step 3: At the organizational level, find and remove all users from the Project Creator role.
Step 4: Create or identify a group for the DevOps team.
Step 5: Assign the Project Creator role to the DevOps team group at the organizational level.
By removing all users from the Project Creator role and granting it only to the DevOps team, you ensure that only the designated team can create projects.
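The two IAM changes can be sketched with gcloud (ORG_ID, the domain, and the group address are placeholders):

```shell
# Command sketch; ORG_ID and member names are placeholders.
# Remove a broad grant of Project Creator at the organization level:
gcloud organizations remove-iam-policy-binding ORG_ID \
    --member="domain:example.com" \
    --role="roles/resourcemanager.projectCreator"

# Grant Project Creator only to the DevOps group:
gcloud organizations add-iam-policy-binding ORG_ID \
    --member="group:devops@example.com" \
    --role="roles/resourcemanager.projectCreator"
```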
Reference:
GCP IAM Documentation
Project Creator Role
Your organization develops software involved in many open source projects and is concerned about software supply chain threats. You need to deliver provenance for the build to demonstrate that the software has not been tampered with.
What should you do?
- A . 1. Generate Supply Chain Levels for Software Artifacts (SLSA) level 3 assurance by using Cloud Build. 2. View the build provenance in the Security insights side panel within the Google Cloud console.
- B . 1. Review the software process. 2. Generate private and public key pairs and use Pretty Good Privacy (PGP) protocols to sign the output software artifacts together with a file containing the address of your enterprise and point of contact. 3. Publish the PGP-signed attestation to your public web page.
- C . 1. Publish the software code on GitHub as open source. 2. Establish a bug bounty program, and encourage the open source community to review, report, and fix the vulnerabilities.
- D . 1. Hire an external auditor to review and provide provenance. 2. Define the scope and conditions. 3. Get support from the Security department or representative. 4. Publish the attestation to your public web page.
A
Explanation:
Generate Supply Chain Levels for Software Artifacts (SLSA) level 3 assurance by using Cloud Build: SLSA is a framework for ensuring the integrity of software artifacts. By using Cloud Build, you can automate the build process and generate SLSA level 3 compliance, which includes verifiable build steps and provenance.
View the build provenance in the Security insights side panel within the Google Cloud console: The build provenance provides a detailed history of how the software was built, including the source code, build process, and any dependencies. This information is accessible through the Security insights side panel in the Google Cloud console, allowing you to verify the integrity and authenticity of your software artifacts.
Reference:
Supply Chain Levels for Software Artifacts (SLSA) documentation
Cloud Build documentation
Security insights in Google Cloud console
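Assuming the image was built by Cloud Build and pushed to Artifact Registry, the provenance can also be inspected from the CLI (the image path is a placeholder, and the --show-provenance flag assumes a current gcloud version):

```shell
# Command sketch; the image path is a placeholder.
gcloud artifacts docker images describe \
    us-docker.pkg.dev/PROJECT_ID/my-repo/my-image:latest \
    --show-provenance
```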
You are migrating an application into the cloud. The application will need to read data from a Cloud Storage bucket. Due to local regulatory requirements, you need to hold the key material used for encryption fully under your control, and you require a valid rationale for accessing the key material.
What should you do?
- A . Encrypt the data in the Cloud Storage bucket by using Customer Managed Encryption Keys. Configure an IAM deny policy for unauthorized groups.
- B . Encrypt the data in the Cloud Storage bucket by using Customer Managed Encryption Keys backed by a Cloud Hardware Security Module (HSM). Enable data access logs.
- C . Generate a key in your on-premises environment and store it in a Hardware Security Module (HSM) that is managed on-premises. Use this key as an external key in the Cloud Key Management Service (KMS). Activate Key Access Justifications (KAJ) and set the external key system to reject unauthorized accesses.
- D . Generate a key in your on-premises environment to encrypt the data before you upload the data to the Cloud Storage bucket. Upload the key to the Cloud Key Management Service (KMS). Activate Key Access Justifications (KAJ) and have the external key system reject unauthorized accesses.
C
Explanation:
By generating a key in your on-premises environment and storing it in an HSM that you manage, you’re ensuring that the key material is fully under your control. Using the key as an external key in Cloud KMS allows you to use the key with Google Cloud services without having the key stored on Google Cloud. Activating Key Access Justifications (KAJ) provides a reason every time the key is accessed, and you can configure the external key system to reject unauthorized access attempts.
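Registering such an externally held key with Cloud KMS via Cloud External Key Manager can be sketched as follows (the key ring, location, and EKM URI are placeholders, and the flag names assume the current gcloud kms surface):

```shell
# Command sketch; all names and the EKM URI are placeholders.
# Create a key shell with external protection and no Google-held material:
gcloud kms keys create my-external-key \
    --keyring=my-keyring --location=us-east1 \
    --purpose=encryption \
    --protection-level=external \
    --skip-initial-version-creation

# Link a version to the key held in the on-premises external key manager:
gcloud kms keys versions create \
    --key=my-external-key --keyring=my-keyring --location=us-east1 \
    --external-key-uri="https://ekm.example.com/v0/keys/abc123"
```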
Your organization is using Vertex AI Workbench Instances. You must ensure that newly deployed instances are automatically kept up-to-date and that users cannot accidentally alter settings in the operating system.
What should you do?
- A . Enable the VM Manager and ensure the corresponding Google Compute Engine instances are added.
- B . Enforce the disableRootAccess and requireAutoUpgradeSchedule organization policies for newly deployed instances.
- C . Assign the AI Notebooks Runner and AI Notebooks Viewer roles to the users of the AI Workbench Instances.
- D . Implement a firewall rule that prevents Secure Shell access to the corresponding Google Compute Engine instances by using tags.
B
Explanation:
To ensure that Vertex AI Workbench Instances are automatically kept up-to-date and that users cannot alter operating system settings, implementing specific organization policies is essential.
Option A: Enabling VM Manager and adding Compute Engine instances assists in managing and monitoring VM instances but does not enforce automatic updates or restrict user modifications to the operating system.
Option B: Enforcing the disableRootAccess organization policy prevents users from gaining root access, thereby restricting unauthorized changes to the operating system. Additionally, the requireAutoUpgradeSchedule policy ensures that instances are automatically updated according to a defined schedule. Together, these policies maintain system integrity and compliance with update requirements.
Option C: Assigning AI Notebooks Runner and AI Notebooks Viewer roles controls user permissions related to running and viewing notebooks but does not directly influence operating system settings or update mechanisms.
Option D: Implementing firewall rules to prevent SSH access limits direct access to instances but does not ensure automatic updates or prevent alterations through other means.
Therefore, Option B is the most appropriate action, as it directly addresses both the enforcement of automatic updates and the prevention of unauthorized operating system modifications.
Reference:
Organization Policy Constraints
VM Manager Overview
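Enforcing the boolean constraint can be sketched with gcloud (ORG_ID is a placeholder; the constraint name assumes the Vertex AI Workbench ainotebooks constraints):

```shell
# Command sketch; ORG_ID is a placeholder.
# Block root access on newly created Workbench instances:
gcloud resource-manager org-policies enable-enforce \
    ainotebooks.disableRootAccess --organization=ORG_ID

# Confirm the policy is in effect:
gcloud resource-manager org-policies describe \
    ainotebooks.disableRootAccess --organization=ORG_ID
```

The requireAutoUpgradeSchedule constraint is configured similarly via an organization policy, with the desired upgrade schedule supplied as an allowed value.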
An organization’s typical network and security review consists of analyzing application transit routes, request handling, and firewall rules. They want to enable their developer teams to deploy new applications without the overhead of this full review.
How should you advise this organization?
- A . Use Forseti with Firewall filters to catch any unwanted configurations in production.
- B . Mandate use of infrastructure as code and provide static analysis in the CI/CD pipelines to enforce policies.
- C . Route all VPC traffic through customer-managed routers to detect malicious patterns in production.
- D . All production applications will run on-premises. Allow developers free rein in GCP as their dev and QA platforms.
B
Explanation:
To enable developer teams to deploy new applications without the extensive overhead of network and security reviews, it’s recommended to mandate the use of infrastructure as code (IaC) and enforce policies through static analysis in CI/CD pipelines. This approach ensures that security and compliance policies are checked automatically during the development process.
Step-by-Step:
Adopt IaC: Use tools like Terraform or Google Cloud Deployment Manager to manage infrastructure as code.
CI/CD Pipeline Integration: Integrate static analysis tools such as TFLint or Checkov in the CI/CD pipeline to enforce security policies.
Policy Definition: Define security policies and best practices that need to be adhered to in the code.
Automated Checks: Configure automated checks in the CI/CD pipeline to review code against these policies before deployment.
Monitor and Audit: Continuously monitor and audit deployed applications to ensure ongoing compliance.
Reference:
Infrastructure as Code on Google Cloud
Static Analysis for Terraform
Checkov for IaC
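A minimal sketch of such a pipeline step, assuming Terraform configs under infra/ and the open-source scanners tflint and checkov are installed:

```shell
# CI step sketch; the infra/ directory layout is an assumption.
terraform -chdir=infra init -backend=false
terraform -chdir=infra validate

# Lint for Terraform-specific issues:
tflint --chdir infra

# Scan for security policy violations; a non-zero exit fails the build:
checkov -d infra
```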
Which two implied firewall rules are defined on a VPC network? (Choose two.)
- A . A rule that allows all outbound connections
- B . A rule that denies all inbound connections
- C . A rule that blocks all inbound port 25 connections
- D . A rule that blocks all outbound connections
- E . A rule that allows all inbound port 80 connections
AB
Explanation:
Implied IPv4 allow egress rule. An egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination
Implied IPv4 deny ingress rule. An ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming connections to them.
https://cloud.google.com/vpc/docs/firewalls?hl=en#default_firewall_rules
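The implied rules cannot be listed or deleted, but their effect is equivalent to the following explicit rules at the lowest user-assignable priority (65534; the implied rules themselves sit at 65535). NETWORK is a placeholder:

```shell
# Command sketch; NETWORK is a placeholder.
# Equivalent of the implied allow-egress rule:
gcloud compute firewall-rules create allow-all-egress \
    --network=NETWORK --direction=EGRESS --action=ALLOW \
    --rules=all --destination-ranges=0.0.0.0/0 --priority=65534

# Equivalent of the implied deny-ingress rule:
gcloud compute firewall-rules create deny-all-ingress \
    --network=NETWORK --direction=INGRESS --action=DENY \
    --rules=all --source-ranges=0.0.0.0/0 --priority=65534
```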