Practice Free Professional Cloud Network Engineer Exam Online Questions
You are configuring the firewall endpoints as part of the Cloud Next Generation Firewall (Cloud NGFW) intrusion prevention service in Google Cloud. You have configured a threat prevention security profile, and you now need to create an endpoint for traffic inspection.
What should you do?
- A . Attach the profile to the VPC network, create a firewall endpoint within the zone, and use a firewall policy rule to apply the L7 inspection.
- B . Create a firewall endpoint within the zone, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.
- C . Create a firewall endpoint within the region, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.
- D . Create a Private Service Connect endpoint within the zone, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.
C
Explanation:
For Cloud NGFW in Google Cloud, firewall endpoints are typically created at the regional level, allowing you to associate these with your VPC network for Layer 7 traffic inspection. This regional setup ensures high availability and scales the inspection service across the network.
Reference: Google Cloud – Cloud NGFW
In your project my-project, you have two subnets in a Virtual Private Cloud (VPC): subnet-a with IP range 10.128.0.0/20 and subnet-b with IP range 172.16.0.0/24. You need to deploy database servers in subnet-a. You will also deploy the application servers and web servers in subnet-b. You want to configure firewall rules that only allow database traffic from the application servers to the database servers.
What should you do?
- A . Create network tag app-server and service account sa-db. Add the tag to the application servers, and associate the service account with the database servers.
Run the following command:
gcloud compute firewall-rules create app-db-firewall-rule
--action allow
--direction ingress
--rules tcp:3306
--source-tags app-server
--target-service-accounts sa-db@my-project.iam.gserviceaccount.com
- B . Create service accounts sa-app and sa-db. Associate service account sa-app with the application servers, and associate the service account sa-db with the database servers.
Run the following command:
gcloud compute firewall-rules create app-db-firewall-rule
--allow tcp:3306
--source-service-accounts sa-app@democloud-idp-demo.iam.gserviceaccount.com
--target-service-accounts sa-db@my-project.iam.gserviceaccount.com
- C . Create service accounts sa-app and sa-db. Associate the service account sa-app with the application servers, and associate the service account sa-db with the database servers.
Run the following command:
gcloud compute firewall-rules create app-db-firewall-rule
--allow tcp:3306
--source-ranges 10.128.0.0/20
--source-service-accounts sa-app@my-project.iam.gserviceaccount.com
--target-service-accounts sa-db@my-project.iam.gserviceaccount.com
- D . Create network tags app-server and db-server. Add the app-server tag to the application servers, and add the db-server tag to the database servers.
Run the following command:
gcloud compute firewall-rules create app-db-firewall-rule
--action allow
--direction ingress
--rules tcp:3306
--source-ranges 10.128.0.0/20
--source-tags app-server
--target-tags db-server
Your company has provisioned 2000 virtual machines (VMs) in the private subnet of your Virtual Private Cloud (VPC) in the us-east1 region. You need to configure each VM to have a minimum of 128 TCP connections to a public repository so that users can download software updates and packages over the internet. You need to implement a Cloud NAT gateway so that the VMs are able to perform outbound NAT to the internet. You must ensure that all VMs can simultaneously connect to the public repository and download software updates and packages.
Which two methods can you use to accomplish this? (Choose two.)
- A . Configure the NAT gateway in manual allocation mode, allocate 2 NAT IP addresses, and update the minimum number of ports per VM to 256.
- B . Create a second Cloud NAT gateway with the default minimum number of ports configured per VM to 64.
- C . Use the default Cloud NAT gateway’s NAT proxy to dynamically scale using a single NAT IP address.
- D . Use the default Cloud NAT gateway to automatically scale to the required number of NAT IP addresses, and update the minimum number of ports per VM to 128.
- E . Configure the NAT gateway in manual allocation mode, allocate 4 NAT IP addresses, and update the minimum number of ports per VM to 128.
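The arithmetic behind these options can be checked directly. A minimal sketch, assuming the documented limit of 64,512 usable source ports per NAT IP address (the VM count and per-VM port minimum come from the question):

```shell
# Cloud NAT port arithmetic: how many NAT IPs are needed for
# 2000 VMs at a minimum of 128 ports each, given 64512 usable
# ports per NAT IP address.
vms=2000
ports_per_vm=128
ports_per_ip=64512
needed=$((vms * ports_per_vm))                        # total ports required
ips=$(( (needed + ports_per_ip - 1) / ports_per_ip )) # ceiling division
echo "total ports needed: $needed"   # 256000
echo "NAT IPs required:   $ips"      # 4
```

Rerunning the same arithmetic with 256 ports per VM gives 512,000 ports, which two NAT IPs (129,024 ports combined) cannot provide; this is the kind of check that separates the workable options from the distractors.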
Your company has recently installed a Cloud VPN tunnel between your on-premises data center and your Google Cloud Virtual Private Cloud (VPC). You need to configure access to the Cloud Functions API for your on-premises servers. The configuration must meet the following requirements:
Certain data must stay in the project where it is stored and not be exfiltrated to other projects.
Traffic from servers in your data center with RFC 1918 addresses must not use the internet to access Google Cloud APIs.
All DNS resolution must be done on-premises.
The solution should only provide access to APIs that are compatible with VPC Service Controls.
What should you do?
- A . Create an A record for private.googleapis.com using the 199.36.153.8/30 address range.
Create a CNAME record for *.googleapis.com that points to the A record.
Configure your on-premises routers to use the Cloud VPN tunnel as the next hop for the addresses you used in the A record.
Remove the default internet gateway from the VPC where your Cloud VPN tunnel terminates.
- B . Create an A record for restricted.googleapis.com using the 199.36.153.4/30 address range.
Create a CNAME record for *.googleapis.com that points to the A record.
Configure your on-premises routers to use the Cloud VPN tunnel as the next hop for the addresses you used in the A record.
Configure your on-premises firewalls to allow traffic to the restricted.googleapis.com addresses.
- C . Create an A record for restricted.googleapis.com using the 199.36.153.4/30 address range.
Create a CNAME record for *.googleapis.com that points to the A record.
Configure your on-premises routers to use the Cloud VPN tunnel as the next hop for the addresses you used in the A record.
Remove the default internet gateway from the VPC where your Cloud VPN tunnel terminates.
- D . Create an A record for private.googleapis.com using the 199.36.153.8/30 address range.
Create a CNAME record for *.googleapis.com that points to the A record.
Configure your on-premises routers to use the Cloud VPN tunnel as the next hop for the addresses you used in the A record.
Configure your on-premises firewalls to allow traffic to the private.googleapis.com addresses.
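Both VIP ranges in these options are /30 blocks, so each resolves to exactly four addresses. A quick sketch of the subnet arithmetic for the restricted.googleapis.com range given in the question (199.36.153.4/30):

```shell
# A /30 holds 2^(32-30) = 4 addresses; enumerate the VIPs in
# 199.36.153.4/30 (the restricted.googleapis.com range from the question).
prefix=30
count=$((1 << (32 - prefix)))
echo "addresses in a /$prefix: $count"   # 4
for i in $(seq 0 $((count - 1))); do
  echo "199.36.153.$((4 + i))"           # 199.36.153.4 through .7
done
```

These four addresses are what the on-premises A record must return and what the routers must send over the Cloud VPN tunnel.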
Your company acquired a new division. The new division’s network team requires complete control over their networking infrastructure. You need to extend your existing Google Cloud network infrastructure, which consists of a single VPC, to allow workloads from all divisions to communicate with each other. You want to avoid incurring extra costs and granting unnecessary permissions to the new division’s networking team.
What should you do?
- A . • Create a new project for the new division’s network team.
• Create a new VPC within the new project.
• Establish VPC peering between your existing VPC and the new division’s VPC.
• Grant roles/compute.networkAdmin on the newly created project to the new division’s network team group.
- B . • Create a new project for the new division’s network team.
• Create a new VPC within the new project.
• Establish VPC peering between your existing VPC and the new division’s VPC.
• Create a new subnet dedicated to the new division’s workloads.
• Grant roles/compute.networkUser on the new project to the new division’s network team group.
- C . • Create a new project for the new division’s network team.
• Create a new VPC within the new project.
• Establish a VPN connection between your existing VPC and the new division’s VPC.
• Grant roles/compute.networkAdmin on the newly created project to the new division’s network team group.
- D . • Ensure that the project hosting the existing network infrastructure is enabled as a host project.
• Create a new subnet dedicated to the new division’s workloads in the existing VPC.
• Grant roles/compute.networkUser on the newly created subnet to the new division’s network team group.
A
Explanation:
The requirement for the new division’s network team to have "complete control over their networking infrastructure," while still allowing communication between divisions and avoiding unnecessary permissions, points directly to VPC Network Peering. This approach lets each division manage its own VPC independently in its own project, gives the new division’s network team full control within that project, and enables private communication between the VPCs without traversing the public internet. Granting roles/compute.networkAdmin on the newly created project gives the team the control they need over their dedicated VPC. Shared VPC (option D) would centralize network administration under your existing project, which contradicts the requirement for "complete control." Cloud VPN (option C) would incur extra costs and add more complexity than VPC peering for connectivity within Google Cloud. Option B is flawed because creating a dedicated subnet is not relevant to establishing the peering, and the roles/compute.networkUser role alone would not give the team complete network control.
Exact extract:
"VPC Network Peering allows you to connect two VPC networks so that resources in each network can communicate with each other using internal IP addresses. Traffic stays within Google’s network." "Each side of a VPC Network Peering connection is configured independently. This means that each network administrator retains full control over their own network, including routes, firewalls, and network services."
"VPC Network Peering is ideal for scenarios where different organizations or divisions want to maintain separate network administrative domains while still allowing their resources to communicate privately.”
Reference: Google Cloud VPC Network Peering Documentation – Overview, Use cases
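As an illustrative sketch of the peering setup the answer describes (all project and network names below are placeholders, not from the question):

```
# Run from the existing VPC's project:
gcloud compute networks peerings create to-new-division \
    --network=existing-vpc \
    --peer-project=new-division-project \
    --peer-network=new-division-vpc

# Run from the new division's project (peering must be created on both sides):
gcloud compute networks peerings create to-existing \
    --network=new-division-vpc \
    --peer-project=existing-project \
    --peer-network=existing-vpc
```

The peering becomes active only once both sides have created their half, which is exactly why each team retains independent administrative control.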
You have configured a Compute Engine virtual machine instance as a NAT gateway.
You execute the following command:
gcloud compute routes create no-ip-internet-route
--network custom-network1
--destination-range 0.0.0.0/0
--next-hop-instance nat-gateway
--next-hop-instance-zone us-central1-a
--tags no-ip --priority 800
You want existing instances to use the new NAT gateway.
Which command should you execute?
- A . sudo sysctl -w net.ipv4.ip_forward=1
- B . gcloud compute instances add-tags [existing-instance] --tags no-ip
- C . gcloud builds submit --config=cloudbuild.yaml --substitutions=TAG_NAME=no-ip
- D . gcloud compute instances create example-instance --network custom-network1 --subnet subnet-us-central
--no-address
--zone us-central1-a
--image-family debian-9
--image-project debian-cloud --tags no-ip
B
Explanation:
https://cloud.google.com/sdk/gcloud/reference/compute/routes/create
The route was created with the tag no-ip, so it applies only to instances that carry that tag. Adding the tag to existing instances binds the route to them.
Reference: https://cloud.google.com/vpc/docs/special-configurations
You are the network administrator responsible for hybrid connectivity at your organization. Your developer team wants to use Cloud SQL in the us-west1 region in your Shared VPC. You configured a Dedicated Interconnect connection and a Cloud Router in us-west1, and the connectivity between your Shared VPC and on-premises data center is working as expected. You just created the private services access connection required for Cloud SQL using the reserved IP address range and default settings. However, your developers cannot access the Cloud SQL instance from on-premises. You want to resolve the issue.
What should you do?
- A . Modify the VPC Network Peering connection used for Cloud SQL, and enable the import and export of routes.
Create a custom route advertisement in your Cloud Router to advertise the Cloud SQL IP address range.
- B . Change the VPC routing mode to global.
Create a custom route advertisement in your Cloud Router to advertise the Cloud SQL IP address range.
- C . Create an additional Cloud Router in us-west2.
Create a new Border Gateway Protocol (BGP) peering connection to your on-premises data center.
Modify the VPC Network Peering connection used for Cloud SQL, and enable the import and export of routes.
- D . Change the VPC routing mode to global.
Modify the VPC Network Peering connection used for Cloud SQL, and enable the import and export of routes.
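A sketch of the two configuration steps these options combine (router name, region, and the reserved range are placeholders; the peering name servicenetworking-googleapis-com is the one private services access creates):

```
# Advertise the reserved Cloud SQL range toward on-premises
# (10.100.0.0/16 here stands in for the actual reserved range):
gcloud compute routers update my-router \
    --region=us-west1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=10.100.0.0/16

# Exchange custom routes with the service producer network:
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=shared-vpc \
    --export-custom-routes \
    --import-custom-routes
```

Without the custom advertisement, the Cloud Router only advertises subnet routes, so the peered Cloud SQL range never reaches the on-premises routers.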
You are deploying an application that runs on Compute Engine instances. You need to determine how to expose your application to a new customer.
You must ensure that your application meets the following requirements:
• Maps multiple existing reserved external IP addresses to the instance
• Processes IP Encapsulating Security Payload (ESP) traffic
What should you do?
- A . Configure a target pool, and create protocol forwarding rules for each external IP address.
- B . Configure a backend service, and create an external network load balancer for each external IP address
- C . Configure a target instance, and create a protocol forwarding rule for each external IP address to be mapped to the instance.
- D . Configure the Compute Engine instance’s network interface external IP address from None to Ephemeral. Add as many external IP addresses as required.
C
Explanation:
The correct answer is C. Configure a target instance, and create a protocol forwarding rule for each external IP address to be mapped to the instance.
This answer is based on the following facts:
A target instance is a Compute Engine instance that handles traffic from one or more forwarding rules. You can use target instances to forward traffic to a single VM instance from one or more external IP addresses.
A protocol forwarding rule specifies the IP protocol and port range for the traffic that you want to forward. You can use protocol forwarding rules to forward traffic of any IP protocol, including ESP.
The other options are not correct because:
Option A is not possible. You cannot create protocol forwarding rules for a target pool. A target pool is a group of instances that receives traffic from a network load balancer.
Option B is not suitable. You do not need to create an external network load balancer for each external IP address. An external network load balancer distributes traffic among multiple backend instances based on the destination IP address and port. You can use a single load balancer with multiple forwarding rules to map multiple external IP addresses to the same backend service.
Option D is not feasible. You cannot add multiple external IP addresses to a single network interface of a Compute Engine instance. Each network interface can have only one external IP address, either ephemeral or static. You can use alias IP ranges to assign multiple internal IP addresses to a single network interface, but not external IP addresses.
You work for a university that is migrating to Google Cloud.
These are the cloud requirements:
On-premises connectivity with 10 Gbps
Lowest latency access to the cloud
Centralized Networking Administration Team
New departments are asking for on-premises connectivity to their projects. You want to deploy the most cost-efficient interconnect solution for connecting the campus to Google Cloud.
What should you do?
- A . Use Shared VPC, and deploy the VLAN attachments and Dedicated Interconnect in the host project.
- B . Use Shared VPC, and deploy the VLAN attachments in the service projects. Connect the VLAN attachment to the Shared VPC’s host project.
- C . Use standalone projects, and deploy the VLAN attachments in the individual projects. Connect the VLAN attachment to the standalone projects’ Dedicated Interconnects.
- D . Use standalone projects and deploy the VLAN attachments and Dedicated Interconnects in each of the individual projects.
Your company has 10 separate Virtual Private Cloud (VPC) networks, with one VPC per project in a single region in Google Cloud. Your security team requires each VPC network to have private connectivity to the main on-premises location via a Partner Interconnect connection in the same region. To optimize cost and operations, the same connectivity must be shared with all projects. You must ensure that all traffic between different projects, on-premises locations, and the internet can be inspected using the same third-party appliances.
What should you do?
- A . Configure the third-party appliances with multiple interfaces and specific Partner Interconnect VLAN attachments per project. Create the relevant routes on the third-party appliances and VPC networks.
- B . Configure the third-party appliances with multiple interfaces, with each interface connected to a separate VPC network. Create separate VPC networks for on-premises and internet connectivity. Create the relevant routes on the third-party appliances and VPC networks.
- C . Consolidate all existing projects’ subnetworks into a single VPC. Create separate VPC networks for on-premises and internet connectivity. Configure the third-party appliances with multiple interfaces, with each interface connected to a separate VPC network. Create the relevant routes on the third-party appliances and VPC networks.
- D . Configure the third-party appliances with multiple interfaces. Create a hub VPC network for all projects, and create separate VPC networks for on-premises and internet connectivity. Create the relevant routes on the third-party appliances and VPC networks. Use VPC Network Peering to connect all projects’ VPC networks to the hub VPC. Export custom routes from the hub VPC and import on all projects’ VPC networks.
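The hub-and-spoke peering described in option D hinges on custom-route exchange, since subnet routes alone would not carry the appliance default route to the spokes. A sketch with placeholder names (one spoke shown; repeat per project):

```
# From the hub project: export the custom routes that point at the appliances.
gcloud compute networks peerings create hub-to-spoke1 \
    --network=hub-vpc \
    --peer-project=spoke1-project \
    --peer-network=spoke1-vpc \
    --export-custom-routes

# From the spoke project: import those routes.
gcloud compute networks peerings create spoke1-to-hub \
    --network=spoke1-vpc \
    --peer-project=hub-project \
    --peer-network=hub-vpc \
    --import-custom-routes
```

Note that VPC Network Peering is non-transitive, which is why all inter-project, on-premises, and internet traffic must hairpin through the appliances in the hub.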