Practice Free Professional Cloud Network Engineer Exam Online Questions
You are developing an internet of things (IoT) application that captures sensor data from multiple devices that have already been set up. You need to identify the global data storage product your company should use to store this data. You must ensure that the storage solution you choose meets your requirements of sub-millisecond latency.
What should you do?
- A . Store the IoT data in Spanner. Use caches to speed up the process and avoid latencies.
- B . Store the IoT data in Bigtable.
- C . Capture IoT data in BigQuery datasets.
- D . Store the IoT data in Cloud Storage. Implement caching by using Cloud CDN.
B
Explanation:
Evaluating each option against the requirement of very low latency for globally stored IoT data: Bigtable is Google Cloud's wide-column NoSQL database built for high-throughput, low-latency time-series workloads such as IoT sensor telemetry, and it offers the lowest read/write latency of the listed products. Spanner (A) is a relational database with a higher latency baseline, and bolting on caches works around rather than meets the requirement. BigQuery (C) is an analytics warehouse optimized for large scans, not low-latency point reads. Cloud Storage with Cloud CDN (D) is object storage and cannot serve sensor reads at sub-millisecond latency.
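As a minimal sketch of the Bigtable approach, the commands below provision an instance, create a table with a column family, and write one sensor reading with the `cbt` CLI. All project, instance, table, and row-key names are placeholders, not values from the question.

```shell
# Provision a Bigtable instance for IoT telemetry (names are placeholders).
gcloud bigtable instances create iot-sensors \
    --display-name="IoT sensors" \
    --cluster-config=id=iot-sensors-c1,zone=us-central1-b,nodes=1

# Create a table and column family, then write one sample reading.
cbt -project my-project -instance iot-sensors createtable sensor-data
cbt -project my-project -instance iot-sensors createfamily sensor-data readings
cbt -project my-project -instance iot-sensors set sensor-data \
    device42#20240101T000000 readings:temp_c=21.7
```

Row keys that combine device ID and timestamp, as above, are the common Bigtable pattern for time-series lookups.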
Your organization has Compute Engine instances in us-east1, us-west2, and us-central1. Your organization also has an existing Cloud Interconnect physical connection in the East Coast of the United States with a single VLAN attachment and Cloud Router in us-east1. You need to provide a design with high availability and ensure that if a region goes down, you still have access to all your other Virtual Private Cloud (VPC) subnets. You need to accomplish this in the most cost-effective manner possible.
What should you do?
- A . Configure your VPC routing in regional mode. Add an additional Cloud Interconnect VLAN attachment in the us-east1 region, and configure a Cloud Router in us-east1.
- B . Configure your VPC routing in global mode. Add an additional Cloud Interconnect VLAN attachment in the us-east1 region, and configure a Cloud Router in us-east1.
- C . Configure your VPC routing in global mode. Add an additional Cloud Interconnect VLAN attachment in the us-west2 region, and configure a Cloud Router in us-west2.
- D . Configure your VPC routing in regional mode. Add additional Cloud Interconnect VLAN attachments in the us-west2 and us-central1 regions, and configure Cloud Routers in us-west2 and us-central1.
You have applications running in the us-west1 and us-east1 regions. You want to build a highly available VPN that provides 99.99% availability to connect your applications from your project to the cloud services provided by your partner’s project while minimizing the amount of infrastructure required. Your partner’s services are also in the us-west1 and us-east1 regions. You want to implement the simplest solution.
What should you do?
- A . Create one Cloud Router and one HA VPN gateway in each region of your VPC and your partner’s VPC. Connect your VPN gateways to the partner’s gateways. Enable global dynamic routing in each VPC.
- B . Create one Cloud Router and one HA VPN gateway in the us-west1 region of your VPC. Create one OpenVPN Access Server in each region of your partner’s VPC. Connect your VPN gateway to your partner’s servers.
- C . Create one OpenVPN Access Server in each region of your VPC and your partner’s VPC. Connect your servers to the partner’s servers.
- D . Create one Cloud Router and one HA VPN gateway in the us-west1 region of your VPC and your partner’s VPC. Connect your VPN gateways to the partner’s gateways with a pair of tunnels. Enable global dynamic routing in each VPC.
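The building blocks the options refer to (an HA VPN gateway, a Cloud Router, and global dynamic routing) can be sketched with gcloud as follows. The network name, region, and ASN are placeholders, not values from the question.

```shell
# Enable global dynamic routing on the VPC (placeholder network name).
gcloud compute networks update my-vpc --bgp-routing-mode=global

# Create an HA VPN gateway in one region of the VPC.
gcloud compute vpn-gateways create my-ha-gw \
    --network=my-vpc --region=us-west1

# Create the Cloud Router that will run BGP for the VPN tunnels.
gcloud compute routers create my-router \
    --network=my-vpc --region=us-west1 --asn=65001
```

An HA VPN gateway paired with two tunnels to a matching peer gateway is what qualifies the connection for the 99.99% availability SLA.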
You are creating an instance group and need to create a new health check for HTTP(s) load balancing.
Which two methods can you use to accomplish this? (Choose two.)
- A . Create a new health check using the gcloud command line tool.
- B . Create a new health check using the VPC Network section in the GCP Console.
- C . Create a new health check, or select an existing one, when you complete the load balancer’s backend configuration in the GCP Console.
- D . Create a new legacy health check using the gcloud command line tool.
- E . Create a new legacy health check using the Health checks section in the GCP Console.
AC
Explanation:
https://cloud.google.com/load-balancing/docs/health-checks#creating_and_modifying_health_checks
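The two supported methods can be sketched as follows: creating a health check directly with gcloud (answer A), then attaching it, or an existing one, to the load balancer's backend service as in the Console backend-configuration flow (answer C). Names, port, and path are placeholders.

```shell
# Answer A: create a non-legacy HTTP health check with gcloud.
gcloud compute health-checks create http my-hc \
    --port=80 --request-path=/healthz

# Answer C (CLI equivalent): reference the new or an existing health check
# while configuring the load balancer's backend service.
gcloud compute backend-services create my-backend \
    --protocol=HTTP --health-checks=my-hc --global
```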
You want to use Partner Interconnect to connect your on-premises network with your VPC. You already have an Interconnect partner.
What should you do first?
- A . Log in to your partner’s portal and request the VLAN attachment there.
- B . Ask your Interconnect partner to provision a physical connection to Google.
- C . Create a Partner Interconnect type VLAN attachment in the GCP Console and retrieve the pairing key.
- D . Run gcloud compute interconnects attachments partner update <attachment> --region <region> --admin-enabled.
B
Explanation:
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-overview?hl=En#provisioning "To provision a Partner Interconnect connection with a service provider, you start by connecting your on-premises network to a supported service provider. Work with the service provider to establish connectivity."
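Once the provider has provisioned the physical connection, the next step (option C) is to create the Partner VLAN attachment and retrieve its pairing key. As a sketch, with placeholder attachment, router, and region names:

```shell
# Create a Partner Interconnect VLAN attachment (placeholder names).
gcloud compute interconnects attachments partner create my-attachment \
    --region=us-east4 --router=my-router \
    --edge-availability-domain=availability-domain-1

# Retrieve the pairing key to hand to the service provider.
gcloud compute interconnects attachments describe my-attachment \
    --region=us-east4 --format="value(pairingKey)"
```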
Your company has separate Virtual Private Cloud (VPC) networks in a single region for two departments: Sales and Finance. The Sales department’s VPC network already has connectivity to on-premises locations using HA VPN, and you have confirmed that the subnet ranges do not overlap. You plan to peer both VPC networks to use the same HA VPN tunnels for on-premises connectivity, while providing internet connectivity for the Google Cloud workloads through Cloud NAT. Internet access from the on-premises locations should not flow through Google Cloud. You need to propagate all routes between the Finance department and on-premises locations.
What should you do?
- A . Peer the two VPCs, and use the default configuration for the Cloud Routers.
- B . Peer the two VPCs, and use Cloud Router’s custom route advertisements to announce the peered VPC network ranges to the on-premises locations.
- C . Peer the two VPCs. Configure VPC Network Peering to export custom routes from Sales and import custom routes on Finance’s VPC network. Use Cloud Router’s custom route advertisements to announce a default route to the on-premises locations.
- D . Peer the two VPCs. Configure VPC Network Peering to export custom routes from Sales and import custom routes on Finance’s VPC network. Use Cloud Router’s custom route advertisements to announce the peered VPC network ranges to the on-premises locations.
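The peering and Cloud Router settings that the options describe can be sketched with gcloud. Network, router, region, and range values below are placeholders, not values from the question.

```shell
# Peer Sales and Finance, exchanging custom routes across the peering.
gcloud compute networks peerings create sales-to-finance \
    --network=sales-vpc --peer-network=finance-vpc \
    --export-custom-routes --import-custom-routes

# Custom route advertisements on the Sales Cloud Router: announce its own
# subnets plus the peered Finance range toward on-premises.
gcloud compute routers update sales-router --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=10.20.0.0/16
```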
You are designing an IP address scheme for new private Google Kubernetes Engine (GKE) clusters.
Due to IP address exhaustion of the RFC 1918 address space in your enterprise, you plan to use privately used public IP (PUPI) space for the new clusters. You want to follow Google-recommended practices.
What should you do after designing your IP scheme?
- A . Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the pods across multiple private GKE clusters.
- B . Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the services across multiple private GKE clusters.
- C . Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected and
- D . Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --disable-default-snat, --enable-ip-alias, and --enable-private-nodes.
D
Explanation:
This answer follows the Google-recommended practices for using privately used public IP (PUPI) addresses for GKE Pod address blocks [1]. The benefits of this approach are:
It allows you to use any public IP addresses that are not owned by Google or your organization for your Pods, which can help mitigate address exhaustion in your enterprise.
It prevents any external traffic from reaching your Pods, as Google Cloud does not route PUPI addresses to the internet or to other VPC networks by default.
It enables you to use VPC Network Peering to connect your GKE cluster to other VPC networks that use different PUPI addresses, as long as you enable the export and import of custom routes for the peering connection.
It preserves the fully integrated network model of GKE, where Pods can communicate with nodes and other resources in the same VPC network without NAT.
The options that you need to select when creating a private GKE cluster with PUPI addresses are:
--disable-default-snat: This option disables source NAT for outbound traffic from Pods to destinations outside the cluster’s VPC network. This is necessary to prevent Pods from using RFC 1918 addresses as their source IP addresses, which could cause conflicts with other networks that use the same address space [2].
--enable-ip-alias: This option enables alias IP ranges for Pods and Services, which allows you to use separate subnet ranges for them. This is required to use PUPI addresses for Pods [1].
--enable-private-nodes: This option creates a private cluster, where nodes do not have external IP addresses and can only communicate with the control plane through a private endpoint. This enhances the security and privacy of your cluster [3].
Option A is incorrect because it does not use PUPI addresses for Pods, but rather RFC 1918 addresses. This does not solve the problem of address exhaustion in your enterprise. Option B is incorrect because it reuses the secondary address range for Services across multiple private GKE clusters, which could cause IP conflicts and routing issues. Option C is incorrect because it does not specify the options that are needed to create a private GKE cluster with PUPI addresses.
[1] Configuring privately used public IPs for GKE | Kubernetes Engine | Google Cloud
[2] Using Cloud NAT with GKE | Kubernetes Engine | Google Cloud
[3] Private clusters | Kubernetes Engine | Google Cloud
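Putting the three flags from answer D together, a private cluster using a PUPI secondary range for Pods might be created as below. The cluster name, region, network, subnet, secondary-range names, and control-plane CIDR are all placeholders.

```shell
# Private GKE cluster using a PUPI secondary range for Pods (placeholder names).
gcloud container clusters create pupi-cluster \
    --region=us-central1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --disable-default-snat \
    --network=my-vpc --subnetwork=my-subnet \
    --cluster-secondary-range-name=pupi-pods \
    --services-secondary-range-name=services \
    --master-ipv4-cidr=172.16.0.0/28
```

The secondary ranges must already exist on the subnet; `--cluster-secondary-range-name` is where the PUPI block is consumed.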
You are configuring a new application that will be exposed behind an external load balancer with both IPv4 and IPv6 addresses and support TCP pass-through on port 443. You will have backends in two regions: us-west1 and us-east1. You want to serve the content with the lowest possible latency while ensuring high availability and autoscaling.
Which configuration should you use?
- A . Use global SSL Proxy Load Balancing with backends in both regions.
- B . Use global TCP Proxy Load Balancing with backends in both regions.
- C . Use global external HTTP(S) Load Balancing with backends in both regions.
- D . Use Network Load Balancing in both regions, and use DNS-based load balancing to direct traffic to the closest region.
You created a VPC network named Retail in auto mode. You want to create a VPC network named Distribution and peer it with the Retail VPC.
How should you configure the Distribution VPC?
- A . Create the Distribution VPC in auto mode. Peer both the VPCs via network peering.
- B . Create the Distribution VPC in custom mode. Use the CIDR range 10.0.0.0/9. Create the necessary subnets, and then peer them via network peering.
- C . Create the Distribution VPC in custom mode. Use the CIDR range 10.128.0.0/9. Create the necessary subnets, and then peer them via network peering.
- D . Rename the default VPC as "Distribution" and peer it via network peering.
B
Explanation:
https://cloud.google.com/vpc/docs/vpc#ip-ranges
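Because an auto-mode VPC like Retail consumes subnets from 10.128.0.0/9, the custom-mode Distribution VPC must draw its ranges from outside that block, e.g. within 10.0.0.0/9. A sketch with placeholder names and an example subnet:

```shell
# Custom-mode VPC avoiding the auto-mode 10.128.0.0/9 block.
gcloud compute networks create distribution --subnet-mode=custom
gcloud compute networks subnets create dist-subnet-1 \
    --network=distribution --region=us-central1 --range=10.0.0.0/20

# Peering must be created from both sides to become ACTIVE.
gcloud compute networks peerings create retail-to-dist \
    --network=retail --peer-network=distribution
gcloud compute networks peerings create dist-to-retail \
    --network=distribution --peer-network=retail
```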
You are adding steps to a working automation that uses a service account to authenticate. You need to give the automation the ability to retrieve files from a Cloud Storage bucket. Your organization requires using the least privilege possible.
What should you do?
- A . Grant the compute.instanceAdmin to your user account.
- B . Grant the iam.serviceAccountUser to your user account.
- C . Grant the read-only privilege to the service account for the Cloud Storage bucket.
- D . Grant the cloud-platform privilege to the service account for the Cloud Storage bucket.
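The read-only, bucket-scoped grant described in option C can be sketched as a bucket-level IAM binding; roles/storage.objectViewer is Cloud Storage's read-only object role. The bucket and service-account names below are placeholders.

```shell
# Grant the automation's service account read-only access to one bucket.
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
    --member="serviceAccount:automation@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```

Scoping the role to the single bucket, rather than the project, keeps the grant at least privilege.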