Practice Free Associate Cloud Engineer Exam Online Questions
You have deployed an application on a Compute Engine instance. An external consultant needs to access the Linux-based instance. The consultant is connected to your corporate network through a VPN connection, but the consultant has no Google account.
What should you do?
- A . Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-Aware Proxy to access the instance.
- B . Instruct the external consultant to use the gcloud compute ssh command line tool by using the public IP address of the instance to access it.
- C . Instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key.
- D . Instruct the external consultant to generate an SSH key pair, and request the private key from the consultant. Add the private key to the instance yourself, and have the consultant access the instance through SSH with their public key.
C
Explanation:
The best option is to instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Then, add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key. This way, you can grant the consultant access to the instance without requiring a Google account or exposing the instance's public IP address. This option also follows the best practice of using user-managed SSH keys instead of service account keys for SSH access [1].
Option A is not feasible because the external consultant does not have a Google account, and therefore cannot use Identity-Aware Proxy (IAP) to access the instance. IAP requires the user to authenticate with a Google account and have the appropriate IAM permissions to access the instance [2]. Option B is not secure because it exposes the instance's public IP address, which can increase the risk of unauthorized access or attacks. Option D is not correct because it reverses the roles of the public and private keys. The public key should be added to the instance, and the private key should be kept by the consultant. Sharing the private key with anyone else can compromise the security of the SSH connection [3].
Reference:
1: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
2: https://cloud.google.com/iap/docs/using-tcp-forwarding
3: https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances
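As a sketch, the key-exchange flow for option C might look like the following; the instance name, zone, username, and internal IP are placeholders, not values from the question.

```shell
# On the consultant's machine: generate a key pair.
# The private key never leaves the consultant's machine.
ssh-keygen -t rsa -f ~/.ssh/consultant_key -C consultant

# On your machine: add the consultant's PUBLIC key to the instance metadata.
# The file must use the "username:ssh-rsa AAAA... comment" format.
# Note: --metadata-from-file replaces the instance's existing ssh-keys value,
# so include any previously authorized keys in the file as well.
gcloud compute instances add-metadata my-instance \
    --zone us-central1-a \
    --metadata-from-file ssh-keys=consultant_pubkey.txt

# The consultant then connects over the corporate VPN using the
# instance's internal IP, authenticating with the private key.
ssh -i ~/.ssh/consultant_key consultant@10.128.0.2
```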
You are migrating a production-critical on-premises application that requires 96 vCPUs to perform its task. You want to make sure the application runs in a similar environment on GCP.
What should you do?
- A . When creating the VM, use machine type n1-standard-96.
- B . When creating the VM, use Intel Skylake as the CPU platform.
- C . Create the VM using Compute Engine default settings. Use gcloud to modify the running instance to have 96 vCPUs.
- D . Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing Recommendations.
A
Explanation:
Ref: https://cloud.google.com/compute/docs/machine-types#n1_machine_type
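A minimal sketch of creating such a VM; the instance name and zone are hypothetical.

```shell
# n1-standard-96 provides 96 vCPUs, matching the on-premises requirement
gcloud compute instances create my-app-vm \
    --zone us-central1-a \
    --machine-type n1-standard-96
```

Note that the vCPU count of a running instance cannot be changed in place; the machine type must be chosen at creation time (or the instance stopped first), which is why option C does not work.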
You have a website hosted on App Engine standard environment. You want 1% of your users to see a new test version of the website. You want to minimize complexity.
What should you do?
- A . Deploy the new version in the same application and use the --migrate option.
- B . Deploy the new version in the same application and use the --splits option to give a weight of 99 to the current version and a weight of 1 to the new version.
- C . Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.
- D . Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.
B
Explanation:
https://cloud.google.com/appengine/docs/standard/python/splitting-traffic#gcloud
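A sketch of the deploy-and-split flow; the service and version names (default, v1, v2) are placeholders.

```shell
# Deploy the new version without routing any traffic to it yet
gcloud app deploy --version v2 --no-promote

# Send 99% of traffic to the current version and 1% to the new one
gcloud app services set-traffic default --splits v1=99,v2=1
```

Because traffic splitting is built into App Engine, no extra application, proxy, or load balancer is needed, which keeps complexity to a minimum.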
You have developed an application that consists of multiple microservices, with each microservice packaged in its own Docker container image. You want to deploy the entire application on Google Kubernetes Engine so that each microservice can be scaled individually.
What should you do?
- A . Create and deploy a Custom Resource Definition per microservice.
- B . Create and deploy a Docker Compose File.
- C . Create and deploy a Job per microservice.
- D . Create and deploy a Deployment per microservice.
D
Explanation:
A Deployment manages a set of replicated Pods for a long-running, stateless workload. Creating one Deployment per microservice gives each microservice its own replica count, so each can be scaled independently (for example, with a Horizontal Pod Autoscaler). Custom Resource Definitions extend the Kubernetes API but do not run workloads, Docker Compose files are not a native Kubernetes object, and Jobs are intended for run-to-completion tasks rather than continuously running services.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
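As a sketch, one Deployment per microservice could be created and scaled independently like this; the service names and image paths are hypothetical.

```shell
# One Deployment per microservice
kubectl create deployment users-svc --image=gcr.io/my-project/users:v1
kubectl create deployment orders-svc --image=gcr.io/my-project/orders:v1

# Each Deployment scales on its own, independent of the others
kubectl scale deployment users-svc --replicas=5
kubectl autoscale deployment orders-svc --min=2 --max=10 --cpu-percent=80
```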
Your company wants to migrate their on-premises workloads to Google Cloud.
The current on-premises workloads consist of:
• A Flask web application
• A backend API
• A scheduled long-running background job for ETL and reporting.
You need to keep operational costs low. You want to follow Google-recommended practices to migrate these workloads to serverless solutions on Google Cloud.
What should you do?
- A . Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
- B . Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
- C . Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
- D . Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
B
Explanation:
App Engine is the Google-recommended serverless platform for web applications, and Cloud Run suits containerized APIs and background jobs. A Flask application is dynamic and therefore cannot be served from a Cloud Storage bucket, which only hosts static content; running the background job on Compute Engine is not serverless and adds operational cost.
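A sketch of deploying the web app to App Engine and the API to Cloud Run, with a Cloud Tasks queue for the background job; all names, regions, and image paths are placeholders.

```shell
# Web application to App Engine (run from the app's source directory)
gcloud app deploy

# Backend API to Cloud Run
gcloud run deploy backend-api \
    --image gcr.io/my-project/backend-api \
    --region us-central1

# Cloud Tasks queue that dispatches HTTP tasks to the job's Cloud Run endpoint
gcloud tasks queues create etl-queue --location us-central1
```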
You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment.
What should you do?
- A . Use service account credentials in your on-premises application.
- B . Use gcloud to create a key file for the service account that has appropriate permissions.
- C . Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
- D . Go to the IAM & admin console, grant a user account permissions similar to the service account
permissions, and use this user account for authentication from your data center.
B
Explanation:
Reference: https://cloud.google.com/vision/automl/docs/before-you-begin
To use a service account outside of Google Cloud, such as on other platforms or on-premises, you must first establish the identity of the service account. Public/private key pairs provide a secure way of accomplishing this goal. You can create a service account key using the Cloud Console, the gcloud tool, the serviceAccounts.keys.create() method, or one of the client libraries.
Ref: https://cloud.google.com/iam/docs/creating-managing-service-account-keys
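A sketch of creating the key file and using it on premises; the service account and project names are hypothetical.

```shell
# Create a JSON key file for the existing service account
gcloud iam service-accounts keys create key.json \
    --iam-account automl-sa@my-project.iam.gserviceaccount.com

# In the on-premises environment, point the client libraries at the key file;
# Application Default Credentials will pick it up automatically.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
```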
You use Cloud Logging to capture application logs. You now need to use SQL to analyze the application logs in Cloud Logging, and you want to follow Google-recommended practices.
What should you do?
- A . Develop SQL queries by using Gemini for Google Cloud.
- B . Enable Log Analytics for the log bucket and create a linked dataset in BigQuery.
- C . Create a schema for the storage bucket and run SQL queries for the data in the bucket.
- D . Export logs to a storage bucket and create an external view in BigQuery.
B
Explanation:
Log Analytics upgrades a log bucket so its logs can be queried with SQL directly in Cloud Logging, and creating a linked BigQuery dataset makes the same data available in BigQuery without exporting or duplicating it. This is the Google-recommended approach; exporting logs to a storage bucket or defining your own schema adds unnecessary management overhead.
Ref: https://cloud.google.com/logging/docs/log-analytics
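A sketch of enabling Log Analytics and linking a BigQuery dataset; the bucket and link names are placeholders, and the exact flags may vary by gcloud version.

```shell
# Upgrade an existing log bucket to use Log Analytics
gcloud logging buckets update my-bucket \
    --location global \
    --enable-analytics

# Create the linked BigQuery dataset for that bucket
gcloud logging links create my-link \
    --bucket my-bucket \
    --location global
```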
You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost.
What should you do?
- A . Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
- B . Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
- C . Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
- D . Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
D
Explanation:
This is the optimal solution. Rather than recreating all nodes, you add a new node pool with GPUs enabled, and the ML team targets those nodes by adding a nodeSelector to their workloads' Pod specifications. You still have a single cluster, so you pay the Kubernetes cluster management fee for just one cluster, minimizing cost.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus
Ref: https://cloud.google.com/kubernetes-engine/pricing
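Creating such a node pool might look like this; the cluster name, zone, and node count are placeholders.

```shell
# Add a GPU-enabled node pool to the existing cluster
gcloud container node-pools create gpu-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --accelerator type=nvidia-tesla-p100,count=1 \
    --num-nodes 1
```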
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-gpu-container
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["/bin/bash"]
    resources:
      limits:
        nvidia.com/gpu: 2
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-p100 # or nvidia-tesla-k80, nvidia-tesla-p4, nvidia-tesla-v100, nvidia-tesla-t4