Practice Free Associate Cloud Engineer Exam Online Questions
You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost.
What should you do?
- A . Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
- B . Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
- C . Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs.
- C . Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
- D . Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
D
Explanation:
This is the most optimal solution. Rather than recreating all nodes, you add a new node pool with GPUs enabled. Your ML team then targets those GPU nodes by adding the cloud.google.com/gke-accelerator nodeSelector to their Pod specifications. You still have a single cluster, so you pay the Kubernetes cluster management fee for only one cluster, minimizing cost.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus
Ref: https://cloud.google.com/kubernetes-engine/pricing
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-gpu-container
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["/bin/bash"]
    resources:
      limits:
        nvidia.com/gpu: 2
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-p100 # or nvidia-tesla-k80, nvidia-tesla-p4, nvidia-tesla-v100, nvidia-tesla-t4
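For completeness, the GPU-enabled node pool itself can be added with a single gcloud command. This is a minimal sketch; the cluster name, zone, pool name, and node count are hypothetical placeholders.
# Add a GPU node pool to the existing cluster without touching the other node pools
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --accelerator=type=nvidia-tesla-p100,count=1 \
    --num-nodes=1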
You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable for specific users. You want to complete this ad hoc request as efficiently as possible.
What should you do?
- A . Create a dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.
- B . Create a dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.
- C . Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.
- D . Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.
D
Explanation:
"The Cloud Spanner to Cloud Storage Text template is a batch pipeline that reads in data from a
Cloud Spanner table, optionally transforms the data via a JavaScript User Defined Function (UDF) that you provide, and writes it to Cloud Storage as CSV text files."
https://cloud.google.com/dataflow/docs/guides/templates/provided-batch#cloudspannertogcstext
"The Dataflow connector for Cloud Spanner lets you read data from and write data to Cloud Spanner in a Dataflow pipeline"
https://cloud.google.com/spanner/docs/dataflow-connector
https://cloud.google.com/bigquery/external-data-sources
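The answer describes doing this in the BigQuery console; an equivalent sketch with the bq CLI is shown below. The dataset, table, bucket, Bigtable resource, and column names are hypothetical, and the Spanner export is assumed to be CSV files in Cloud Storage.
# External table over the Cloud Spanner export in Cloud Storage
bq mkdef --autodetect --source_format=CSV \
    "gs://my-backup-bucket/spanner-export/users-*.csv" > spanner_def.json
bq mk --external_table_definition=spanner_def.json analytics.spanner_users

# External table over the Bigtable events table; bigtable_def.json is a
# hand-written table definition file (sourceFormat BIGTABLE plus the
# column family mappings) as described in the BigQuery documentation
bq mk --external_table_definition=bigtable_def.json analytics.user_events

# Ad hoc join for specific users
bq query --use_legacy_sql=false '
SELECT u.*, e.*
FROM analytics.spanner_users AS u
JOIN analytics.user_events AS e ON u.user_id = e.user_id
WHERE u.user_id IN ("user-123", "user-456")'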
You have a Google Cloud Platform account with access to both production and development projects. You need to create an automated process to list all compute instances in development and production projects on a daily basis.
What should you do?
- A . Create two configurations using gcloud config. Write a script that sets configurations as active, individually. For each configuration, use gcloud compute instances list to get a list of compute resources.
- B . Create two configurations using gsutil config. Write a script that sets configurations as active, individually. For each configuration, use gsutil compute instances list to get a list of compute resources.
- C . Go to Cloud Shell and export this information to Cloud Storage on a daily basis.
- D . Go to GCP Console and export this information to Cloud SQL on a daily basis.
A
Explanation:
You can create two configurations, one for the development project and another for the production project, by running the "gcloud config configurations create" command.
https://cloud.google.com/sdk/gcloud/reference/config/configurations/create
In your custom script, you can load these configurations one at a time and execute gcloud compute instances list to list Google Compute Engine instances in the project that is active in the gcloud configuration.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
Once you have this information, you can export it in a suitable format to a suitable target, e.g. as CSV files or to Cloud Storage, BigQuery, or Cloud SQL.
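A minimal sketch of such a script, assuming the configurations are named dev and prod and the project IDs are placeholders:
# One-time setup: create a configuration per project
gcloud config configurations create dev
gcloud config set project my-dev-project
gcloud config configurations create prod
gcloud config set project my-prod-project

# Daily script (e.g. run from cron): list instances in both projects
for cfg in dev prod; do
  gcloud config configurations activate "$cfg"
  gcloud compute instances list --format="csv(name,zone,status)" > "instances-$cfg-$(date +%F).csv"
done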
You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly.
How should you upload the file?
- A . Use the GCP Console to transfer the file instead of gsutil.
- B . Enable parallel composite uploads using gsutil on the file transfer.
- C . Decrease the TCP window size on the machine initiating the transfer.
- D . Change the storage class of the bucket from Nearline to Multi-Regional.
B
Explanation:
https://cloud.google.com/storage/docs/parallel-composite-uploads
https://cloud.google.com/storage/docs/uploads-downloads#parallel-composite-uploads
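A sketch of the upload with parallel composite uploads enabled; the file and bucket names are placeholders:
# Files above the threshold are split into components uploaded in parallel,
# then composed into a single object in the Nearline bucket.
# Note: composed objects carry a crc32c checksum rather than an MD5 hash.
gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" \
    cp my-32gb-file.dat gs://my-nearline-bucket/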
You need to create a new billing account and then link it with an existing Google Cloud Platform project.
What should you do?
- A . Verify that you are Project Billing Manager for the GCP project. Update the existing project to link it to the existing billing account.
- B . Verify that you are Project Billing Manager for the GCP project. Create a new billing account and link the new billing account to the existing project.
- C . Verify that you are Billing Administrator for the billing account. Create a new project and link the new project to the existing billing account.
- D . Verify that you are Billing Administrator for the billing account. Update the existing project to link it to the existing billing account.
B
Explanation:
A Billing Administrator cannot create a new billing account, and the project already exists, so options C and D do not fit. The Project Billing Manager role lets you link a billing account to the project. The question is vague about who creates the billing account, but by process of elimination B is the best answer.
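Once the billing account exists, the link can also be made from the command line. A minimal sketch with hypothetical project and billing account IDs (older SDK versions use "gcloud beta billing" instead of "gcloud billing"):
# Requires the Project Billing Manager role on the project
# (and permission to use the target billing account)
gcloud billing accounts list
gcloud billing projects link my-existing-project \
    --billing-account=0X0X0X-0X0X0X-0X0X0X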
You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number of steps.
What should you do?
- A . Install a RDP client on your desktop. Verify that a firewall rule for port 3389 exists.
- B . Install an RDP client on your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance.
- C . Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in.
- D . Set a Windows username and password in the GCP Console. Verify that a firewall rule for port 3389 exists. Click the RDP button in the GCP Console, and supply the credentials to log in.
D
Explanation:
https://cloud.google.com/compute/docs/instances/connecting-to-windows#remote-desktop-connection-app
https://cloud.google.com/compute/docs/instances/windows/generating-credentials
https://cloud.google.com/compute/docs/instances/connecting-to-windows#before-you-begin
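A sketch of the same two steps from the command line (the Console's "Set Windows password" and RDP buttons do the equivalent); the instance name, zone, and username are placeholders:
# Generate a Windows username and password for the instance
gcloud compute reset-windows-password sql-server-2017-vm \
    --zone=us-central1-a --user=admin_user

# Verify a rule allowing RDP (TCP 3389) exists; create one only if it is missing
gcloud compute firewall-rules list
gcloud compute firewall-rules create allow-rdp --allow=tcp:3389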
Your company uses a multi-cloud strategy that includes Google Cloud. You want to centralize application logs from all environments in a third-party software-as-a-service (SaaS) tool. You need to integrate logs originating from Cloud Logging, and you want to ensure the export occurs with the least amount of delay possible.
What should you do?
- A . Use a Cloud Scheduler cron job to trigger a Cloud Function that queries Cloud Logging and sends the logs to the SaaS tool.
- B . Create a Cloud Logging sink and configure Pub/Sub as the destination. Configure the SaaS tool to subscribe to the Pub/Sub topic to retrieve the logs.
- C . Create a Cloud Logging sink and configure Cloud Storage as the destination. Configure the SaaS tool to read the Cloud Storage bucket to retrieve the logs.
- D . Create a Cloud Logging sink and configure BigQuery as the destination. Configure the SaaS tool to query BigQuery to retrieve the logs.
B
Explanation:
The requirement is to export logs from Cloud Logging to a third-party SaaS tool with the least amount of delay possible. A log sink with a Pub/Sub topic as the destination streams log entries as they arrive, so the SaaS tool can consume them in near real time by subscribing to the topic; this is also the recommended pattern for integrating Cloud Logging with third-party tools. A Cloud Scheduler-triggered function (A) only runs at intervals, and Cloud Storage (C) and BigQuery (D) sinks require the SaaS tool to poll or query for new data, all of which add delay.
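A minimal sketch of the sink setup; the project, topic, and filter below are hypothetical placeholders:
# Topic the SaaS tool will subscribe to
gcloud pubsub topics create app-logs

# Route matching log entries to the topic as they arrive
gcloud logging sinks create saas-log-sink \
    pubsub.googleapis.com/projects/my-project/topics/app-logs \
    --log-filter='resource.type="gce_instance"'

# Grant the sink's writer identity permission to publish to the topic
# (the identity is printed when the sink is created)
gcloud pubsub topics add-iam-policy-binding app-logs \
    --member="serviceAccount:SINK_WRITER_IDENTITY" \
    --role="roles/pubsub.publisher"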
You have an application on a general-purpose Compute Engine instance that is experiencing excessive disk read throttling on its Zonal SSD Persistent Disk. The application primarily reads large files from disk. The disk size is currently 350 GB. You want to provide the maximum amount of throughput while minimizing costs.
What should you do?
- A . Increase the size of the disk to 1 TB.
- B . Increase the allocated CPU to the instance.
- C . Migrate to use a Local SSD on the instance.
- D . Migrate to use a Regional SSD on the instance.
C
Explanation:
Standard persistent disks are efficient and economical for handling sequential read/write operations, but they aren’t optimized to handle high rates of random input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD persistent disks. SSD persistent disks are designed for single-digit millisecond latencies. Observed latency is application specific.
Reference: https://cloud.google.com/compute/docs/disks/performance
Local SSDs
Local SSDs are physically attached to the server that hosts your VM instance. Local SSDs have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. Each local SSD is 375 GB in size, but you can attach a maximum of 24 local SSD partitions for a total of 9 TB per instance.
Performance
Local SSDs are designed to offer very high IOPS and low latency. Unlike persistent disks, you must manage the striping on local SSDs yourself. Combine multiple local SSD partitions into a single logical volume to achieve the best local SSD performance per instance, or format local SSD partitions individually.
Local SSD performance depends on which interface you select. Local SSDs are available through both SCSI and NVMe interfaces.
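A sketch of attaching local SSDs and striping them into one logical volume; the instance name, zone, machine type, and NVMe device paths are placeholders and may differ on your machine:
# Create the instance with two NVMe local SSDs attached
gcloud compute instances create my-read-heavy-vm \
    --zone=us-central1-a --machine-type=n1-standard-8 \
    --local-ssd=interface=NVME --local-ssd=interface=NVME

# On the instance: stripe the two local SSDs into a single RAID 0 volume
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme0n2
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/disks/localssd
sudo mount /dev/md0 /mnt/disks/localssd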
