Practice Free Professional Cloud Network Engineer Exam Online Questions
You are developing an HTTP API hosted on a Compute Engine virtual machine instance that must be invoked only by clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service.
What should you do?
- A . Reserve a static external IP address and assign it to an HTTP(S) load balancing service’s forwarding rule. Clients should use this IP address to connect to the service.
- B . Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.
- C . Reserve a static external IP address and assign it to an HTTP(S) load balancing service’s forwarding rule. Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service.
- D . Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[API_NAME]/[API_VERSION]/.
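For reference, Compute Engine zonal internal DNS names follow the pattern shown in option B. Assuming a hypothetical instance `api-vm` in zone `us-central1-a` of project `my-project` (all placeholder names, not from the question), a client in the same VPC could reach the service like this:

```shell
# Zonal internal DNS name format: [INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal
# The instance name, zone, and project ID below are illustrative placeholders.
curl http://api-vm.us-central1-a.c.my-project.internal/
```

Internal DNS resolution is automatic within the VPC; no Cloud DNS zone needs to be created for these names.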
You are migrating a three-tier application architecture from on-premises to Google Cloud. As a first step in the migration, you want to create a new Virtual Private Cloud (VPC) with an external HTTP(S) load balancer. This load balancer will forward traffic back to the on-premises compute resources that run the presentation tier. You need to stop malicious traffic from entering your VPC and consuming resources at the edge, so you must configure a policy that filters IP addresses and stops cross-site scripting (XSS) attacks.
What should you do?
- A . Create a Google Cloud Armor policy, and apply it to a backend service that uses an unmanaged instance group backend.
- B . Create a hierarchical firewall ruleset, and apply it to the VPC’s parent organization resource node.
- C . Create a Google Cloud Armor policy, and apply it to a backend service that uses an internet network endpoint group (NEG) backend.
- D . Create a VPC firewall ruleset, and apply it to all instances in unmanaged instance groups.
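Whichever backend type is chosen, a Cloud Armor policy that filters IP addresses and blocks XSS could be sketched with gcloud roughly as follows (the policy name, IP range, and backend service name are illustrative assumptions, not from the question):

```shell
# Create an edge security policy (name is a placeholder).
gcloud compute security-policies create edge-policy \
    --description="Filter IPs and block XSS at the edge"

# Deny traffic from a specific IP range (range is illustrative).
gcloud compute security-policies rules create 1000 \
    --security-policy=edge-policy \
    --src-ip-ranges=203.0.113.0/24 \
    --action=deny-403

# Block cross-site scripting using a preconfigured WAF rule.
gcloud compute security-policies rules create 2000 \
    --security-policy=edge-policy \
    --expression="evaluatePreconfiguredExpr('xss-stable')" \
    --action=deny-403

# Attach the policy to the load balancer's backend service (name is a placeholder).
gcloud compute backend-services update web-backend \
    --security-policy=edge-policy --global
```

The preconfigured `xss-stable` rule set is what enables WAF-style XSS filtering at the edge, which VPC or hierarchical firewall rules cannot do.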
You created a new VPC for your development team. You want to allow access to the resources in this VPC via SSH only.
How should you configure your firewall rules?
- A . Create two firewall rules: one to block all traffic with priority 0, and another to allow port 22 with priority 1000.
- B . Create two firewall rules: one to block all traffic with priority 65536, and another to allow port 3389 with priority 1000.
- C . Create a single firewall rule to allow port 22 with priority 1000.
- D . Create a single firewall rule to allow port 3389 with priority 1000.
C
Explanation:
Reference: https://geekflare.com/gcp-firewall-configuration/
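Answer C can be sketched as a single gcloud command (the network name and source range are illustrative assumptions):

```shell
# Allow SSH (TCP port 22) into the development VPC. The VPC's implied
# deny-ingress rule already blocks everything else, so one allow rule
# at the default priority is sufficient.
gcloud compute firewall-rules create allow-ssh \
    --network=dev-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=0.0.0.0/0 \
    --priority=1000
```

Because ingress is denied by default, the explicit block-all rules in options A and B are unnecessary, and port 3389 (RDP) is irrelevant to SSH access.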
Your organization is deploying a single project for 3 separate departments. Two of these departments require network connectivity between each other, but the third department should remain in isolation. Your design should create separate network administrative domains between these departments. You want to minimize operational overhead.
How should you design the topology?
- A . Create a Shared VPC Host Project and the respective Service Projects for each of the 3 separate departments.
- B . Create 3 separate VPCs, and use Cloud VPN to establish connectivity between the two appropriate VPCs.
- C . Create 3 separate VPCs, and use VPC peering to establish connectivity between the two appropriate VPCs.
- D . Create a single project, and deploy specific firewall rules. Use network tags to isolate access between the departments.
C
Explanation:
https://cloud.google.com/vpc/docs/vpc-peering
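Peering the two departments' VPCs (answer C) takes one command from each side; the network and project names below are illustrative placeholders:

```shell
# VPC Network Peering must be created from both networks before the
# peering becomes ACTIVE.
gcloud compute networks peerings create dept-a-to-dept-b \
    --network=dept-a-vpc \
    --peer-project=my-project \
    --peer-network=dept-b-vpc

gcloud compute networks peerings create dept-b-to-dept-a \
    --network=dept-b-vpc \
    --peer-project=my-project \
    --peer-network=dept-a-vpc
```

Unlike Cloud VPN, peering adds no gateways or tunnels to operate, which is why it minimizes operational overhead while keeping the third VPC fully isolated.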
You need to migrate multiple PostgreSQL databases from your on-premises data center to Google Cloud. You want to significantly improve the performance of your databases while minimizing changes to your data schema and application code. You expect to exceed 150 TB of data per geographical region. You want to follow Google-recommended practices and minimize your operational costs.
What should you do?
- A . Migrate your data to AlloyDB.
- B . Migrate your data to Spanner.
- C . Migrate your data to Firebase.
- D . Migrate your data to Bigtable.
A
Explanation:
Let’s analyze each option against the requirements (PostgreSQL compatibility, significant performance improvement, minimal schema/code changes, large data volumes, Google-recommended practices, and cost minimization): AlloyDB is fully PostgreSQL-compatible, so existing schemas and application code carry over with minimal changes, and it delivers substantially higher transactional and analytical performance than standard PostgreSQL. Spanner’s PostgreSQL interface is not a drop-in replacement and would require schema and application changes. Firebase and Bigtable are NoSQL services, so migrating to either would mean a complete data model redesign.
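Provisioning the AlloyDB target for such a migration might be sketched as follows (the cluster, instance, region, network, and password values are all illustrative assumptions):

```shell
# Create an AlloyDB cluster and its primary instance (all names and
# values below are placeholders, not from the question).
gcloud alloydb clusters create pg-cluster \
    --region=us-central1 \
    --password=CHANGE_ME \
    --network=default

gcloud alloydb instances create pg-primary \
    --cluster=pg-cluster \
    --region=us-central1 \
    --instance-type=PRIMARY \
    --cpu-count=8
```

In practice, Database Migration Service is the Google-recommended path for moving the on-premises PostgreSQL data into the new cluster.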
You want to configure load balancing for an internet-facing, standard voice-over-IP (VOIP) application.
Which type of load balancer should you use?
- A . HTTP(S) load balancer
- B . Network load balancer
- C . Internal TCP/UDP load balancer
- D . TCP/SSL proxy load balancer
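VOIP traffic is typically UDP (for example SIP on port 5060), and of the external options only a passthrough Network Load Balancer forwards UDP. A sketch of such a forwarding rule (the rule name, region, port, and backend service are illustrative assumptions):

```shell
# External passthrough Network Load Balancer forwarding rule for UDP
# VOIP traffic (names, region, and port are placeholders).
gcloud compute forwarding-rules create voip-fr \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --ip-protocol=UDP \
    --ports=5060 \
    --backend-service=voip-backend
```

HTTP(S) and TCP/SSL proxy load balancers terminate connections and cannot carry UDP, which rules them out for this workload.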
You are managing an application deployed on Cloud Run. The development team has released a new version of the application. You want to deploy and redirect traffic to this new version of the application. To ensure traffic to the new version of the application is served with no startup time, you want to ensure that there are two idle instances available for incoming traffic before adjusting the traffic flow. You also want to minimize administrative overhead.
What should you do?
- A . Ensure the checkbox "Serve this revision immediately" is unchecked when deploying the new revision. Before changing the traffic rules, use a traffic simulation tool to send load to the new revision.
- B . Configure service autoscaling and set the minimum number of instances to 2.
- C . Configure revision autoscaling for the new revision and set the minimum number of instances to 2.
- D . Configure revision autoscaling for the existing revision and set the minimum number of instances to 2.
C
Explanation:
Let’s analyze each option against the requirements (no startup latency for new traffic, two idle instances ready, minimal administrative overhead): setting a minimum of two instances on the new revision (revision autoscaling) keeps two instances warm before any traffic is routed to it, so the cutover is instant. Service-level autoscaling (option B) applies to the revisions currently serving traffic, not to an idle new revision; configuring the existing revision (option D) warms the wrong revision; and manually generating load with a simulation tool (option A) adds unnecessary administrative overhead.
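Answer C maps to two gcloud steps; the service name, image, and region below are illustrative assumptions:

```shell
# Deploy the new revision with revision-level minimum instances, without
# shifting any traffic to it yet.
gcloud run deploy my-service \
    --image=gcr.io/my-project/app:v2 \
    --region=us-central1 \
    --min-instances=2 \
    --no-traffic

# Once the warm instances are up, route all traffic to the new revision.
gcloud run services update-traffic my-service \
    --region=us-central1 \
    --to-latest
```

Deploying with `--no-traffic` lets the two minimum instances start before the traffic switch, eliminating cold-start latency at cutover.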
Your organization is developing a landing zone architecture with the following requirements:
No communication between production and non-production environments.
Communication between applications within an environment may be necessary.
Network administrators should centrally manage all network resources, including subnets, routes, and firewall rules.
Each application should be billed separately.
Developers of an application within a project should have the autonomy to create their compute
resources.
Up to 1000 applications are expected per environment.
What should you do?
- A . Create a design that has a Shared VPC for each project. Implement hierarchical firewall policies to apply micro-segmentation between VPCs.
- B . Create a design where each project has its own VPC. Ensure all VPCs are connected by a Network Connectivity Center hub that is centrally managed by the network team.
- C . Create a design that implements a single Shared VPC. Use VPC firewall rules with secure tags to enforce micro-segmentation between environments.
- D . Create a design that has one host project with a Shared VPC for the production environment, another host project with a Shared VPC for the non-production environment, and a service project that is associated with the corresponding host project for each initiative.
D
Explanation:
Using separate Shared VPCs for production and non-production environments in different host projects (Option D) meets all requirements. This design allows network administrators to centrally manage resources within each Shared VPC while ensuring isolation between environments and separate billing. By associating service projects with each host project, developers can manage resources within their project without affecting the overall VPC network structure.
Reference: Google Cloud – Best Practices for Shared VPC
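Setting up one of the two Shared VPCs in answer D can be sketched as follows (the project IDs are illustrative placeholders):

```shell
# Enable Shared VPC on the production host project (IDs are placeholders).
gcloud compute shared-vpc enable prod-host-project

# Attach an application's service project so its developers can create
# compute resources in the centrally managed network, while the project
# keeps its own billing.
gcloud compute shared-vpc associated-projects add app1-service-project \
    --host-project=prod-host-project
```

Repeating the association for each application project scales to the expected 1000 applications per environment without adding networks to manage.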
You are deploying GKE clusters in your organization’s Google Cloud environment. The pods in these clusters need to egress directly to the internet for a majority of their communications. You need to deploy the clusters and associated networking features using the most cost-efficient approach, and following Google-recommended practices.
What should you do?
- A . Deploy the GKE cluster with public cluster nodes. Do not deploy Cloud NAT or Secure Web Proxy for the cluster.
- B . Deploy the GKE cluster with private cluster nodes. Deploy Secure Web Proxy, and configure the pods to use Secure Web Proxy as an HTTP(S) proxy.
- C . Deploy the GKE cluster with public cluster nodes. Deploy Secure Web Proxy, and configure the pods to use Secure Web Proxy as an HTTP(S) proxy.
- D . Deploy the GKE cluster with private cluster nodes. Deploy Cloud NAT for the primary subnet of the cluster.
A
Explanation:
For GKE pods that need to egress directly to the internet for most of their communications, the most cost-efficient and straightforward approach is to deploy a GKE cluster with public cluster nodes. Public nodes have external IP addresses, allowing pods to directly reach the internet. This eliminates the need for additional services like Cloud NAT or Secure Web Proxy for outbound internet access, which would incur extra costs and management overhead.
Exact Extract:
"Public clusters have nodes with external IP addresses, allowing them to directly initiate connections to the internet. This is the simplest configuration for clusters that require direct internet egress for their workloads."
"When using public clusters, Cloud NAT is not required for outbound internet connectivity from the nodes or pods, as they can use their external IP addresses. This can reduce operational overhead and cost compared to private clusters that need NAT."
Reference: Google Kubernetes Engine Documentation – Cluster network configuration, Public clusters vs Private clusters
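Answer A requires no NAT or proxy setup at all; a minimal sketch of such a cluster (the cluster name, zone, and node count are illustrative assumptions):

```shell
# Without private-node options, nodes get external IP addresses by
# default, so pods can egress directly to the internet with no Cloud NAT
# or Secure Web Proxy (names and values are placeholders).
gcloud container clusters create egress-cluster \
    --zone=us-central1-a \
    --num-nodes=2
```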