Practice Free Professional Cloud Network Engineer Exam Online Questions
You are configuring a new HTTP application that will be exposed externally behind both IPv4 and IPv6 virtual IP addresses, using ports 80, 8080, and 443. You will have backends in two regions: us-west1 and us-east1. You want to serve the content with the lowest possible latency while ensuring high availability and autoscaling, and you want to create native content-based routing rules using the HTTP hostname and request path. The IP addresses of the clients that connect to the load balancer need to be visible to the backends.
Which configuration should you use?
- A . Use Network Load Balancing
- B . Use TCP Proxy Load Balancing with PROXY protocol enabled
- C . Use External HTTP(S) Load Balancing with URL Maps and custom headers
- D . Use External HTTP(S) Load Balancing with URL Maps and an X-Forwarded-For header
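Whichever option is selected, the requirement that backends see the original client IP behind a proxy-based load balancer comes down to reading the X-Forwarded-For header the proxy appends to each request. Below is a minimal, hypothetical backend sketch (the handler name and port 8080 are illustrative, not part of the question) showing how a web server could recover that address:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class EchoClientIP(BaseHTTPRequestHandler):
    """Hypothetical backend that recovers the original client IP."""

    def do_GET(self):
        xff = self.headers.get("X-Forwarded-For", "")
        parts = [p.strip() for p in xff.split(",") if p.strip()]
        # Google's external HTTP(S) load balancer appends "<client-ip>, <lb-ip>"
        # to any values the client already sent, so the trusted client IP is the
        # second-to-last entry when at least two entries are present.
        if len(parts) >= 2:
            client_ip = parts[-2]
        elif parts:
            client_ip = parts[0]
        else:
            # No proxy in front: fall back to the TCP peer address.
            client_ip = self.client_address[0]
        body = f"client ip: {client_ip}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EchoClientIP).serve_forever()
```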
You successfully provisioned a single Dedicated Interconnect. The physical connection is at a colocation facility closest to us-west2. Seventy-five percent of your workloads are in us-east4, and the remaining twenty-five percent of your workloads are in us-central1. All workloads have the same network traffic profile. You need to minimize data transfer costs when deploying VLAN attachments.
What should you do?
- A . Keep the existing Dedicated Interconnect. Deploy a VLAN attachment to a Cloud Router in us-west2, and use VPC global routing to access workloads in us-east4 and us-central1.
- B . Keep the existing Dedicated Interconnect. Deploy a VLAN attachment to a Cloud Router in us-east4, and deploy another VLAN attachment to a Cloud Router in us-central1.
- C . Order a new Dedicated Interconnect for a colocation facility closest to us-east4, and use VPC global routing to access workloads in us-central1.
- D . Order a new Dedicated Interconnect for a colocation facility closest to us-central1, and use VPC global routing to access workloads in us-east4.
You are designing a hub-and-spoke network architecture for your company’s cloud-based environment. You need to make sure that all spokes are peered with the hub. The spokes must use the hub’s virtual appliance for internet access.
The virtual appliance is configured in high-availability mode with two instances using an internal load balancer with IP address 10.0.0.5.
What should you do?
- A . Create a default route in the hub VPC that points to IP address 10.0.0.5. Delete the default internet gateway route in the hub VPC, and create a new higher-priority route that is tagged only to the appliances with a next hop of the default internet gateway. Export the custom routes in the hub. Import the custom routes in the spokes.
- B . Create a default route in the hub VPC that points to IP address 10.0.0.5. Delete the default internet gateway route in the hub VPC, and create a new higher-priority route that is tagged only to the appliances with a next hop of the default internet gateway. Export the custom routes in the hub. Import the custom routes in the spokes. Delete the default internet gateway route of the spokes.
- C . Create two default routes in the hub VPC that point to the next hop instances of the virtual appliances. Delete the default internet gateway route in the hub VPC, and create a new higher-priority route that is tagged only to the appliances with a next hop of the default internet gateway. Export the custom routes in the hub. Import the custom routes in the spokes.
- D . Create a default route in the hub VPC that points to IP address 10.0.0.5. Delete the default internet gateway route in the hub VPC, and create a new higher-priority route that is tagged only to the appliances with a next hop of the default internet gateway. Create a new route in the spoke VPC that points to IP address 10.0.0.5.
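Every option above starts from the same two building blocks: a default route in the hub whose next hop is the appliances' internal load balancer at 10.0.0.5, and a higher-priority, tag-restricted route that keeps the appliances themselves pointed at the default internet gateway. The sketch below is only an illustration, assuming the google-cloud-compute Python client, an internal passthrough load balancer whose frontend IP can be used as a route next hop, and placeholder project, network, and tag names:

```python
from google.cloud import compute_v1

PROJECT = "hub-project"                                    # placeholder project ID
NETWORK = "projects/hub-project/global/networks/hub-vpc"   # placeholder hub VPC

# Default route sending all egress traffic to the appliances'
# internal load balancer frontend (10.0.0.5 from the scenario).
default_via_ilb = compute_v1.Route()
default_via_ilb.name = "default-via-appliance-ilb"
default_via_ilb.network = NETWORK
default_via_ilb.dest_range = "0.0.0.0/0"
default_via_ilb.priority = 1000
default_via_ilb.next_hop_ilb = "10.0.0.5"

# Higher-priority (lower number) route, tagged so it applies only to the
# appliance instances, giving them a direct path to the internet gateway.
appliance_egress = compute_v1.Route()
appliance_egress.name = "appliance-direct-egress"
appliance_egress.network = NETWORK
appliance_egress.dest_range = "0.0.0.0/0"
appliance_egress.priority = 100
appliance_egress.tags = ["nva"]  # placeholder network tag on the appliances
appliance_egress.next_hop_gateway = (
    f"projects/{PROJECT}/global/gateways/default-internet-gateway"
)

client = compute_v1.RoutesClient()
for route in (default_via_ilb, appliance_egress):
    client.insert(project=PROJECT, route_resource=route)
```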
You are in the process of deploying an internal HTTP(S) load balancer for your web server virtual machine (VM) Instances.
Which two prerequisite tasks must be completed before creating the load balancer? Choose 2 answers
- A . Choose a region.
- B . Create firewall rules for health checks.
- C . Reserve a static IP address for the load balancer.
- D . Determine the subnet mask for a proxy-only subnet.
- E . Determine the subnet mask for Serverless VPC Access.
BC
Explanation:
The correct answer is B and C. You must create firewall rules for health checks and reserve a static IP address for the load balancer before creating the internal HTTP(S) load balancer.
The other options are not correct because:
Option A is not a separate prerequisite: you choose a region as part of creating the load balancer, so it does not need to be decided beforehand.
Option D is not a separate prerequisite: you determine the subnet mask when you create the proxy-only subnet, so it does not need to be decided beforehand.
Option E is not related to the internal HTTP(S) load balancer. Serverless VPC Access connects serverless applications to your VPC network and is not required for the load balancer.
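For context on option B: health-check probes for Google Cloud load balancers originate from the documented ranges 130.211.0.0/22 and 35.191.0.0/16, so an ingress allow rule for those ranges must exist before backends can be marked healthy. A minimal sketch, assuming the google-cloud-compute Python client and placeholder project, network, tag, and port values:

```python
from google.cloud import compute_v1

PROJECT = "my-project"                                    # placeholder project ID
NETWORK = "projects/my-project/global/networks/my-vpc"    # placeholder VPC

# Documented health-check probe ranges for Google Cloud load balancing.
HEALTH_CHECK_RANGES = ["130.211.0.0/22", "35.191.0.0/16"]

allowed = compute_v1.Allowed()
allowed.I_p_protocol = "tcp"
allowed.ports = ["80"]  # port the backend health check targets (placeholder)

rule = compute_v1.Firewall()
rule.name = "allow-health-checks"
rule.network = NETWORK
rule.direction = "INGRESS"
rule.source_ranges = HEALTH_CHECK_RANGES
rule.target_tags = ["web-server"]  # placeholder tag on the backend VMs
rule.allowed = [allowed]

# Returns a long-running operation; let it complete before creating the
# load balancer so the health checks can succeed immediately.
operation = compute_v1.FirewallsClient().insert(
    project=PROJECT, firewall_resource=rule
)
```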
After a network change window, one of your company’s applications stops working. The application uses an on-premises database server that no longer receives any traffic from the application. The database server IP address is 10.2.1.25. You examine the change request, and the only change is that three additional VPC subnets were created: 10.1.0.0/16, 10.2.0.0/16, and 10.3.1.0/24. The on-premises router is advertising 10.0.0.0/8.
What is the most likely cause of this problem?
- A . The less specific VPC subnet route is taking priority.
- B . The more specific VPC subnet route is taking priority.
- C . The on-premises router is not advertising a route for the database server.
- D . A cloud firewall rule that blocks traffic to the on-premises database server was created during the change.
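This scenario comes down to longest-prefix route selection: among overlapping destination ranges, the most specific route wins. A quick standard-library check of which of the ranges from the question contain the database server's address, ordered from most to least specific:

```python
import ipaddress

db_server = ipaddress.ip_address("10.2.1.25")

routes = {
    "on-prem advertisement": ipaddress.ip_network("10.0.0.0/8"),
    "new VPC subnet 1": ipaddress.ip_network("10.1.0.0/16"),
    "new VPC subnet 2": ipaddress.ip_network("10.2.0.0/16"),
    "new VPC subnet 3": ipaddress.ip_network("10.3.1.0/24"),
}

# Longest-prefix match: among routes whose range contains the destination,
# the most specific one (largest prefix length) is selected.
matches = {name: net for name, net in routes.items() if db_server in net}
winner = max(matches, key=lambda name: matches[name].prefixlen)

for name, net in sorted(matches.items(), key=lambda kv: -kv[1].prefixlen):
    print(f"{name}: {net} (prefix /{net.prefixlen})")
print(f"selected route: {winner}")
# The /16 VPC subnet route covering 10.2.1.25 is preferred over the /8
# advertised from on-premises.
```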
You manage two VPCs: VPC1 and VPC2, each with resources spread across two regions. You connected the VPCs with HA VPN in both regions to ensure redundancy. You’ve observed that when one VPN gateway fails, workloads that are located within the same region but different VPCs lose communication with each other. After further debugging, you notice that VMs in VPC2 receive traffic but their replies never get to the VMs in VPC1. You need to quickly fix the issue.
What should you do?
- A . Enable regional dynamic routing mode in VPC2.
- B . Enable global dynamic routing mode in VPC1.
- C . Enable global dynamic routing mode in VPC2.
- D . Enable regional dynamic routing mode in VPC1.
C
Explanation:
With HA VPN between two VPCs, the routes learned over each tunnel are installed by the Cloud Router in that region. Under regional dynamic routing, those learned routes are made available only to resources in the Cloud Router’s own region. When the VPN gateway serving one region fails, VPC1 can still deliver traffic to VPC2 over the surviving tunnel, but the VMs in VPC2’s affected region have no route back to VPC1’s subnets, because the routes learned in the surviving region are not propagated across regions. Enabling global dynamic routing mode in VPC2 makes the routes learned by its remaining Cloud Router available in every region of VPC2, restoring the return path without redeploying any gateways. Option A keeps the regional behavior that causes the problem, and changing the routing mode in VPC1 (options B and D) does not address it, because traffic from VPC1 is already reaching VPC2; the missing routes are on VPC2’s side.
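A rough sketch of the fix, assuming the google-cloud-compute Python client (the project ID and VPC name below are placeholders), switching a network’s dynamic routing mode to GLOBAL:

```python
from google.cloud import compute_v1

PROJECT = "my-project"  # placeholder project ID
NETWORK = "vpc2"        # the VPC that is missing return routes

# Switch dynamic routing from REGIONAL to GLOBAL so routes learned by any
# Cloud Router in the VPC are usable in every region.
routing = compute_v1.NetworkRoutingConfig()
routing.routing_mode = "GLOBAL"

patch_body = compute_v1.Network()
patch_body.routing_config = routing

client = compute_v1.NetworksClient()
operation = client.patch(
    project=PROJECT, network=NETWORK, network_resource=patch_body
)
# The patch is applied asynchronously; confirm the operation completes
# before re-testing cross-VPC connectivity.
```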
Your company’s streaming application, deployed on Google Cloud, supports multiple languages. The application development team has asked you how they should split audio and video traffic and route it to different backend Cloud Storage buckets. They want to use URL maps and minimize operational overhead.
They are currently using the following directory structure:
/fr/video
/en/video
/es/video
/../video
/fr/audio
/en/audio
/es/audio
/../audio
Which solution should you recommend?
- A . Rearrange the directory structure, create a URL map and leverage a path rule such as /video/* and /audio/*.
- B . Rearrange the directory structure, create DNS hostname entries for video and audio and leverage a path rule such as /video/* and /audio/*.
- C . Leave the directory structure as-is, create a URL map and leverage a path rule such as /[a-z]{2}/video and /[a-z]{2}/audio.
- D . Leave the directory structure as-is, create a URL map and leverage a path rule such as /*/video and /*/audio.
A
Explanation:
https://cloud.google.com/load-balancing/docs/url-map#configuring_url_maps
Path matchers and path rules have the following constraints: a path rule can only include a wildcard character (*) after a forward slash character (/). For example, /videos/* and /videos/hd/* are valid path rules, but /videos* and /videos/hd* are not. Path rules do not use regular expression or substring matching. For example, path rules for either /videos/hd or /videos/hd/* do not apply to a URL with the path /video/hd-abcd, but a path rule for /video/* does apply to that path.
https://cloud.google.com/load-balancing/docs/url-map-concepts#pm-constraints
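To see why answer A works within the constraints quoted above, here is a small, self-contained sketch (an approximation, not the load balancer's actual matching code) of the documented path-rule semantics: a * is only valid immediately after a /, and there is no regex or substring matching. The rule list and test paths are illustrative.

```python
def path_rule_matches(rule: str, path: str) -> bool:
    """Approximate the documented URL map path-rule semantics.

    Only a trailing "/*" wildcard is supported (no regex, no substrings),
    mirroring the constraint that * may only follow a "/".
    """
    if rule.endswith("/*"):
        prefix = rule[:-1]  # keep the trailing "/" of the prefix
        return path.startswith(prefix)
    return path == rule


rules = ["/video/*", "/audio/*"]  # path rules after rearranging the tree
tests = ["/video/fr/film.mp4", "/audio/es/track.mp3", "/fr/video/film.mp4"]

for path in tests:
    matched = [r for r in rules if path_rule_matches(r, path)]
    print(path, "->", matched or ["default backend"])
# The original layout (/fr/video, ...) cannot be expressed with these
# rules, because patterns like /*/video or regex-style rules are invalid.
```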
You decide to set up Cloud NAT. After completing the configuration, you find that one of your instances is not using the Cloud NAT for outbound NAT.
What is the most likely cause of this problem?
- A . The instance has been configured with multiple interfaces.
- B . An external IP address has been configured on the instance.
- C . You have created static routes that use RFC1918 ranges.
- D . The instance is accessible by a load balancer external IP address.
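Cloud NAT only provides outbound NAT for interfaces that have no external IP address of their own, so a quick first check is whether the affected instance’s NIC carries an access config (an external IP). A rough sketch, assuming the google-cloud-compute Python client and placeholder project and zone values:

```python
from google.cloud import compute_v1

PROJECT = "my-project"   # placeholder project ID
ZONE = "us-central1-a"   # placeholder zone

client = compute_v1.InstancesClient()
for instance in client.list(project=PROJECT, zone=ZONE):
    for nic in instance.network_interfaces:
        # An access config on the NIC means the VM has its own external IP,
        # so outbound traffic from that interface bypasses Cloud NAT.
        if nic.access_configs:
            print(f"{instance.name} ({nic.name}): has an external IP; "
                  "Cloud NAT is not used for this interface")
        else:
            print(f"{instance.name} ({nic.name}): no external IP; "
                  "eligible for Cloud NAT")
```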