Practice Free Associate Cloud Engineer Exam Online Questions
You want to configure autohealing for network load balancing for a group of Compute Engine instances that run in multiple zones, using the fewest possible steps. You need to configure re-creation of VMs if they are unresponsive after 3 attempts of 10 seconds each.
What should you do?
- A . Create an HTTP load balancer with a backend configuration that references an existing instance group. Set the health check to healthy (HTTP).
- B . Create an HTTP load balancer with a backend configuration that references an existing instance group. Define a balancing mode and set the maximum RPS to 10.
- C . Create a managed instance group. Set the Autohealing health check to healthy (HTTP).
- D . Create a managed instance group. Verify that the autoscaling setting is on.
C
Explanation:
https://cloud.google.com/compute/docs/instance-groups https://cloud.google.com/load-balancing/docs/network/transition-to-backend-services#console
In order to enable auto-healing, you need to group the instances into a managed instance group. Managed instance groups (MIGs) maintain the high availability of your applications by proactively keeping your virtual machine (VM) instances available. An auto-healing policy on the MIG relies on an application-based health check to verify that an application is responding as expected. If the auto-healer determines that an application isn't responding, the managed instance group automatically recreates that instance.
It is important to use separate health checks for load balancing and for auto-healing. Health checks for load balancing can and should be more aggressive because these health checks determine whether an instance receives user traffic. You want to catch non-responsive instances quickly, so you can redirect traffic if necessary. In contrast, health checking for auto-healing causes Compute Engine to proactively replace failing instances, so this health check should be more conservative than a load balancing health check.
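As a minimal sketch (names are hypothetical, not from the question), the "3 attempts of 10 seconds each" requirement maps to a health check with a 10-second check interval and an unhealthy threshold of 3, attached to a regional managed instance group as its autohealing policy:

```bash
# Sketch only; health-check, template, and group names are illustrative assumptions.
# Health check: a VM is recreated after 3 consecutive failed probes, 10 seconds apart.
gcloud compute health-checks create http autohealing-check \
    --check-interval=10s \
    --timeout=5s \
    --unhealthy-threshold=3 \
    --healthy-threshold=2 \
    --port=80

# Regional (multi-zone) managed instance group with the autohealing policy attached.
gcloud compute instance-groups managed create my-mig \
    --region=us-central1 \
    --size=3 \
    --template=my-instance-template \
    --health-check=autohealing-check \
    --initial-delay=300
```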
Your company has workloads running on Compute Engine and on-premises. The Google Cloud Virtual Private Cloud (VPC) is connected to your WAN over a Virtual Private Network (VPN). You need to deploy a new Compute Engine instance and ensure that no public Internet traffic can be routed to it.
What should you do?
- A . Create the instance without a public IP address.
- B . Create the instance with Private Google Access enabled.
- C . Create a deny-all egress firewall rule on the VPC network.
- D . Create a route on the VPC to route all traffic to the instance over the VPN tunnel.
A
Explanation:
A VM without an external (public) IP address cannot be reached from the public internet; it remains reachable from on-premises over the VPN through its internal IP. Private Google Access only permits instances without external IP addresses to reach Google APIs and services; it does not control inbound internet traffic. https://cloud.google.com/vpc/docs/private-google-access
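A minimal sketch of option A with gcloud, assuming a hypothetical instance name, zone, and subnet:

```bash
# Sketch only; names are illustrative assumptions.
# --no-address creates the VM without an external IP, so no public internet
# traffic can be routed to it; it stays reachable from on-premises over the
# VPN through its internal IP.
gcloud compute instances create private-vm \
    --zone=us-central1-a \
    --subnet=my-subnet \
    --no-address
```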
You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods.
What should you do?
- A . Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
- B . Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.
- C . Create a GKE node pool with a sandbox type configured to gvisor. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.
- D . Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.
C
Explanation:
Reference: https://cloud.google.com/kubernetes-engine/sandbox/
GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes when containers in the Pod execute unknown or untrusted code. Multi-tenant clusters and clusters whose containers run untrusted workloads are more exposed to security vulnerabilities than other clusters. Examples include SaaS providers, web-hosting providers, or other organizations that allow their users to upload and run code. When you enable GKE Sandbox on a node pool, a sandbox is created for each Pod running on a node in that node pool. In addition, nodes running sandboxed Pods are prevented from accessing other Google Cloud services or cluster metadata. Each sandbox uses its own userspace kernel. With this in mind, you can make decisions
about how to group your containers into Pods, based on the level of isolation you require and the characteristics of your applications.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods
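A minimal sketch of the two steps in option C, assuming hypothetical cluster, node-pool, project, and image names:

```bash
# Sketch only; names are illustrative assumptions.
# 1. Create a node pool with GKE Sandbox (gVisor) enabled.
gcloud container node-pools create gvisor-pool \
    --cluster=my-cluster \
    --sandbox type=gvisor

# 2. Request the sandboxed runtime for a customer Pod via runtimeClassName.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: customer-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: gcr.io/my-project/customer-app:latest
EOF
```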
Add CPU and network Charts for each of the three projects.
D. 1. Create a fourth Google Cloud project
2 Create a Cloud Workspace from the fourth project and add the other three projects
After a recent security incident, your startup company wants better insight into what is happening in the Google Cloud environment. You need to monitor unexpected firewall changes and instance creation. Your company prefers simple solutions.
What should you do?
- A . Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts.
- B . Install Kibana on a compute instance. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to push messages to the Kibana instance. Analyze the logs on Kibana in real time.
- C . Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.
- D . Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage. Use BigQuery to periodically analyze log events in the storage bucket.
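For reference, the log-based-metric approach described in option A could be sketched roughly as follows; the metric names and filters are assumptions, not part of the question:

```bash
# Sketch only; metric names and filters are illustrative assumptions.
# Count firewall rule changes recorded in Cloud Audit Logs.
gcloud logging metrics create firewall-changes \
    --description="Firewall rule inserts, patches, and deletes" \
    --log-filter='resource.type="gce_firewall_rule" AND protoPayload.methodName:("firewalls.insert" OR "firewalls.patch" OR "firewalls.delete")'

# Count Compute Engine instance creation events.
gcloud logging metrics create instance-creation \
    --description="Compute Engine instance inserts" \
    --log-filter='resource.type="gce_instance" AND protoPayload.methodName:"compute.instances.insert"'
```

Alerting policies can then be created on these metrics in Cloud Monitoring.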
You deployed a new application inside your Google Kubernetes Engine cluster using the YAML file specified below.
You check the status of the deployed pods and notice that one of them is still in PENDING status:
You want to find out why the pod is stuck in pending status.
What should you do?
- A . Review details of the myapp-service Service object and check for error messages.
- B . Review details of the myapp-deployment Deployment object and check for error messages.
- C . Review details of myapp-deployment-58ddbbb995-lp86m Pod and check for warning messages.
- D . View logs of the container in myapp-deployment-58ddbbb995-lp86m pod and check for warning messages.
C
Explanation:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods
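A Pod in Pending has not started any containers yet, so its scheduling events (rather than container logs) explain the problem. A minimal sketch using the Pod name from the question:

```bash
# Sketch only. The Events section at the end of the output reports warnings
# such as "FailedScheduling ... Insufficient cpu" or image-pull errors.
kubectl describe pod myapp-deployment-58ddbbb995-lp86m

# Events can also be listed for the whole namespace.
kubectl get events --sort-by=.metadata.creationTimestamp
```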
Configure the instance’s crontab to execute these scripts daily at 1:00 AM.
Explanation:
Creating scheduled snapshots for persistent disks: you can create a snapshot schedule to regularly and automatically back up your zonal and regional persistent disks. Use snapshot schedules as a best practice to back up your Compute Engine workloads. After creating a snapshot schedule, you can apply it to one or more persistent disks. https://cloud.google.com/compute/docs/disks/scheduled-snapshots
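A minimal gcloud sketch of this approach, with hypothetical schedule, disk, and location names:

```bash
# Sketch only; names, region, and retention are illustrative assumptions.
# Snapshot schedule that runs daily at 01:00 UTC and keeps snapshots for 14 days.
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 \
    --start-time=01:00 \
    --daily-schedule \
    --max-retention-days=14

# Attach the schedule to an existing persistent disk.
gcloud compute disks add-resource-policies my-data-disk \
    --zone=us-central1-a \
    --resource-policies=daily-backup
```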
You create a Deployment with 2 replicas in a Google Kubernetes Engine cluster that has a single preemptible node pool. After a few minutes, you use kubectl to examine the status of your Pod and observe that one of them is still in Pending status:
What is the most likely cause?
- A . The pending Pod’s resource requests are too large to fit on a single node of the cluster.
- B . Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.
- C . The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.
- D . The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods’ status. It is currently being rescheduled on a new node.
B
Explanation:
Option B, "Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod," is the right answer.
When a Deployment has some Pods running and others stuck in Pending, the cause is most often insufficient resources on the nodes. Describing the pending Pod typically shows a FailedScheduling event such as "Insufficient cpu", so you either enable cluster autoscaling or manually scale up the node pool.
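A minimal sketch of confirming the cause and adding capacity, with hypothetical cluster and node-pool names:

```bash
# Sketch only; names are illustrative assumptions.
# The pending Pod's Events section typically shows
# "FailedScheduling ... Insufficient cpu" (or memory).
kubectl describe pod <pending-pod-name>

# Fix by resizing the node pool manually ...
gcloud container clusters resize my-cluster \
    --node-pool=default-pool \
    --num-nodes=3

# ... or by enabling the cluster autoscaler on the pool.
gcloud container clusters update my-cluster \
    --node-pool=default-pool \
    --enable-autoscaling --min-nodes=1 --max-nodes=5
```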
You are using multiple configurations for gcloud. You want to review the configured Kubernetes Engine cluster of an inactive configuration using the fewest possible steps.
What should you do?
- A . Use gcloud config configurations describe to review the output.
- B . Use gcloud config configurations activate and gcloud config list to review the output.
- C . Use kubectl config get-contexts to review the output.
- D . Use kubectl config use-context and kubectl config view to review the output.
D
Explanation:
Reference: https://medium.com/google-cloud/kubernetes-engine-kubectl-config-b6270d2b656c
kubectl config view -o jsonpath='{.users[].name}' # display the first user
kubectl config view -o jsonpath='{.users[*].name}' # get a list of users
kubectl config get-contexts # display list of contexts
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
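A minimal sketch of the option D workflow, with a hypothetical context name:

```bash
# Sketch only; the context name is an illustrative assumption.
# List the contexts stored in kubeconfig (one per cluster you have credentials for).
kubectl config get-contexts

# Switch to the context that belongs to the inactive configuration's cluster ...
kubectl config use-context gke_other-project_us-central1-a_other-cluster

# ... and review only that cluster's kubeconfig details.
kubectl config view --minify
```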