Practice Free Professional Cloud Network Engineer Exam Online Questions
You have the following firewall ruleset applied to all instances in your Virtual Private Cloud (VPC):
You need to update the firewall rule to add the following rule to the ruleset:
You are using a new user account. You must assign the appropriate Identity and Access Management (IAM) roles to this new user account before updating the firewall rule. The new user account must be able to apply the update and view firewall logs.
What should you do?
- A . Assign the compute.securityAdmin and logging.viewer roles to the new user account. Apply the new firewall rule with a priority of 50.
- B . Assign the compute.securityAdmin and logging.bucketWriter roles to the new user account. Apply the new firewall rule with a priority of 150.
- C . Assign the compute.orgSecurityPolicyAdmin and logging.viewer roles to the new user account. Apply the new firewall rule with a priority of 50.
- D . Assign the compute.orgSecurityPolicyAdmin and logging.bucketWriter roles to the new user account. Apply the new firewall rule with a priority of 150.
You want Cloud CDN to serve the https://www.example.com/images/spacetime.png static image file that is hosted in a private Cloud Storage bucket. You are using the USE_ORIGIN_HEADERS cache mode. You receive an HTTP 403 error when opening the file in your browser, and you see that the HTTP response has a Cache-Control: private, max-age=0 header.
How should you correct this issue?
- A . Configure a Cloud Storage bucket permission that gives the Storage Legacy Object Reader role
- B . Change the cache mode to cache all content.
- C . Increase the default time-to-live (TTL) for the backend service.
- D . Enable negative caching for the backend bucket
A
Explanation:
The HTTP 403 comes from the origin itself: the object lives in a private Cloud Storage bucket, so Cloud Storage denies anonymous reads and Cloud CDN has nothing cacheable to serve. Granting the Storage Legacy Object Reader role on the bucket (for example, to allUsers) makes the object readable, after which the origin returns the image and Cloud CDN caches it according to the origin's Cache-Control headers under the USE_ORIGIN_HEADERS cache mode.
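As an illustrative sketch (the bucket name is a placeholder), granting the legacy object reader role to allUsers on the backend bucket could look like this:

```
gcloud storage buckets add-iam-policy-binding gs://example-images-bucket \
    --member=allUsers \
    --role=roles/storage.legacyObjectReader
```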
Your company uses Compute Engine instances that are exposed to the public internet. Each compute instance has a single network interface with a single public IP address. You need to block any connection attempt that originates from internet clients with IP addresses that belong to the bgp_asn_toblock BGP ASN.
What should you do?
- A . Create a new Cloud Armor edge security policy, and use the --network-src-asns parameter.
- B . Create a new Cloud Armor network edge security policy, and use the --network-src-asns parameter.
- C . Create a new firewall policy ingress rule, and use the --network-src-asns parameter.
- D . Create a new Cloud Armor backend security policy, and use the --network-src-asns parameter.
B
Explanation:
To block traffic based on BGP ASN, you need to use Cloud Armor network edge security policies. Backend security policies protect backend services (like those behind an external HTTP(S) Load Balancer), but for traffic directly to Compute Engine instances with public IPs, you need a network edge security policy. Firewall policies operate at the VPC level and do not have the capability to filter based on BGP ASN. The --network-src-asns parameter is specifically used with Cloud Armor network edge security policies to filter traffic based on the source ASN.
Exact Extract:
"Cloud Armor network edge security policies can protect external IP addresses of virtual machine (VM) instances and load balancers. They support rules that allow or deny traffic based on various criteria, including source autonomous system numbers (ASNs) using the –network-src-asns parameter."
"Network edge security policies are designed for traffic destined for external IP addresses of Google Cloud resources that are not behind an external HTTP(S) load balancer, such as Compute Engine instances with public IP addresses.”
Reference: Google Cloud Armor Documentation – Network edge security policies overview
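As a rough sketch with placeholder names, region, and ASN (and assuming Cloud Armor Enterprise is active, which network edge security policies require), the policy and a deny rule keyed on the source ASN could be created along these lines:

```
# Create a regional network edge security policy (placeholder name and region).
gcloud compute security-policies create block-asn-policy \
    --type=CLOUD_ARMOR_NETWORK \
    --region=us-central1

# Deny traffic whose source address belongs to the given BGP ASN (placeholder ASN).
gcloud compute security-policies rules create 1000 \
    --security-policy=block-asn-policy \
    --region=us-central1 \
    --network-src-asns=64512 \
    --action=deny
```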
Your company is working with a partner to provide a solution for a customer. Both your company and the partner organization are using GCP. There are applications in the partner’s network that need access to some resources in your company’s VPC. There is no CIDR overlap between the VPCs.
Which two solutions can you implement to achieve the desired results without compromising the security? (Choose two.)
- A . VPC peering
- B . Shared VPC
- C . Cloud VPN
- D . Dedicated Interconnect
- E . Cloud NAT
AC
Explanation:
Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization, and it works here because there is no CIDR overlap between the VPCs. Cloud VPN is the other valid option: an IPsec tunnel between the two networks provides encrypted, private connectivity without exposing the resources to the public internet.
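As a minimal sketch with placeholder project and network names, one side of the peering could be created as follows; the partner organization must create the matching peering from their VPC for the connection to become active:

```
gcloud compute networks peerings create peer-with-partner \
    --network=company-vpc \
    --peer-project=partner-project-id \
    --peer-network=partner-vpc
```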
Your organization is running out of private IPv4 IP addresses. You need to create a new design pattern to reduce IP usage in your Google Kubernetes Engine clusters. Each new GKE cluster should have a unique /24 range of routable RFC1918 IP addresses.
What should you do?
- A . Configure NAT by using the IP masquerading agent in the GKE cluster.
- B . Share the primary and secondary ranges between multiple clusters.
- C . Use dual stack IPv4/IPv6 clusters, and assign IPv6 ranges for Pods and Services.
- D . Configure the secondary ranges outside the RFC1918 space, or use privately used public IPs.
C
Explanation:
The most effective long-term solution to address IPv4 address exhaustion in GKE clusters, while still ensuring routability and unique ranges per cluster, is to transition to dual-stack IPv4/IPv6 clusters and leverage IPv6 for Pods and Services. This allows you to conserve IPv4 addresses for critical use cases while providing a vast address space with IPv6 for Pods and Services, significantly reducing the pressure on your private IPv4 ranges. Google Cloud GKE fully supports dual-stack networking.
Exact Extract:
"Dual-stack clusters enable you to assign both IPv4 and IPv6 addresses to Pods and Services. This approach helps conserve IPv4 address space by shifting a significant portion of the network communication to IPv6, particularly for internal cluster communication or communication with other IPv6-enabled services."
"When you enable dual-stack networking, GKE assigns an IPv6 address range to your Pods and can also assign IPv6 addresses to Services. This significantly expands the available addressing capacity within your cluster."
Reference: Google Kubernetes Engine Documentation – Dual-stack networking
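As a hedged sketch with placeholder names, a dual-stack VPC-native cluster is created by setting the stack type, assuming the chosen subnet already has an IPv6 range configured:

```
# Create a VPC-native, dual-stack GKE cluster (names and region are placeholders;
# the subnet must have an IPv6 range assigned).
gcloud container clusters create dual-stack-cluster \
    --region=us-central1 \
    --enable-ip-alias \
    --network=company-vpc \
    --subnetwork=dual-stack-subnet \
    --stack-type=ipv4-ipv6
```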
You have an application that is running in a managed instance group. Your development team has released an updated instance template which contains a new feature which was not heavily tested. You want to minimize impact to users if there is a bug in the new template.
How should you update your instances?
- A . Manually patch some of the instances, and then perform a rolling restart on the instance group.
- B . Using the new instance template, perform a rolling update across all instances in the instance group. Verify the new feature once the rollout completes.
- C . Deploy a new instance group and canary the updated template in that group. Verify the new feature in the new canary instance group, and then update the original instance group.
- D . Perform a canary update by starting a rolling update and specifying a target size for your instances to receive the new template. Verify the new feature on the canary instances, and then roll forward to the rest of the instances.
D
Explanation:
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#starting_a_canary_update
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
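As a sketch of option D with placeholder names, a canary rolling update pins most of the group to the original template while a target-size fraction receives the new one; rolling forward afterwards is the same command with only --version=template=new-template:

```
# Canary the new template on roughly 10% of the managed instance group
# (group, template, and zone names are placeholders).
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --zone=us-central1-a \
    --version=template=original-template \
    --canary-version=template=new-template,target-size=10%
```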
You have an application running on Compute Engine that uses BigQuery to generate some results that are stored in Cloud Storage. You want to ensure that none of the application instances have external IP addresses.
Which two methods can you use to accomplish this? (Choose two.)
- A . Enable Private Google Access on all the subnets.
- B . Enable Private Google Access on the VPC.
- C . Enable Private Services Access on the VPC.
- D . Create network peering between your VPC and BigQuery.
- E . Create a Cloud NAT, and route the application traffic via NAT gateway.
A,E
Explanation:
https://cloud.google.com/nat/docs/overview#interaction-pga
https://cloud.google.com/vpc/docs/configure-private-google-access#specifications
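As a sketch with placeholder names and region: Private Google Access is enabled per subnet (covering the BigQuery and Cloud Storage APIs), and Cloud NAT is attached to a Cloud Router for any other outbound traffic:

```
# Enable Private Google Access on a subnet (placeholder names and region).
gcloud compute networks subnets update app-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access

# Cloud NAT for remaining egress, via a Cloud Router.
gcloud compute routers create nat-router \
    --network=company-vpc \
    --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```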
You are using a 10-Gbps direct peering connection to Google together with the gsutil tool to upload files to Cloud Storage buckets from on-premises servers. The on-premises servers are 100 milliseconds away from the Google peering point. You notice that your uploads are not using the full 10-Gbps bandwidth available to you. You want to optimize the bandwidth utilization of the connection.
What should you do on your on-premises servers?
- A . Tune TCP parameters on the on-premises servers.
- B . Compress files using utilities like tar to reduce the size of data being sent.
- C . Remove the -m flag from the gsutil command to enable single-threaded transfers.
- D . Use the perfdiag parameter in your gsutil command to enable faster performance: gsutil perfdiag gs://[BUCKET NAME].
A
Explanation:
https://cloud.google.com/solutions/tcp-optimization-for-network-performance-in-gcp-and-hybrid
https://cloud.google.com/blog/products/gcp/5-steps-to-better-gcp-network-performance?hl=ml
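The underlying issue is the bandwidth-delay product: at 10 Gbps with a 100 ms round-trip time, roughly 125 MB of data must be in flight to keep the link full, which far exceeds default Linux TCP buffer sizes. A hedged sketch of the kind of tuning involved (the values and paths are illustrative, not prescriptive):

```
# Raise TCP socket buffer limits toward the bandwidth-delay product
# (10 Gbps x 0.1 s RTT ~ 125 MB); values below are illustrative.
sudo sysctl -w net.core.rmem_max=134217728
sudo sysctl -w net.core.wmem_max=134217728
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"

# Keep gsutil's -m flag so multiple parallel streams share the tuned link
# (source directory and bucket name are placeholders).
gsutil -m cp -r /data/uploads gs://example-bucket/
```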