Practice Free Associate Cloud Engineer Exam Online Questions
You are working in a team that has developed a new application that needs to be deployed on Kubernetes. The production application is business critical and should be optimized for reliability. You need to provision a Kubernetes cluster and want to follow Google-recommended practices.
What should you do?
- A . Create a GKE Autopilot cluster. Enroll the cluster in the rapid release channel.
- B . Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.
- C . Create a zonal GKE standard cluster. Enroll the cluster in the stable release channel.
- D . Create a regional GKE standard cluster. Enroll the cluster in the rapid release channel.
B
Explanation:
Autopilot clusters are managed by Google and optimized for reliability, and the stable release channel receives GKE versions only after they have been validated in the rapid and regular channels, which minimizes the risk of disruptive changes in a business-critical production cluster.
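As a quick illustration, a cluster matching option B could be created with a single command (a minimal sketch; the cluster name and region are placeholders):

```
# Create a GKE Autopilot cluster enrolled in the stable release channel
gcloud container clusters create-auto prod-cluster \
    --region=us-central1 \
    --release-channel=stable
```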
Your organization has decided to deploy all its compute workloads to Kubernetes on Google Cloud and two other cloud providers. You want to build an infrastructure-as-code solution to automate the provisioning process for all cloud resources.
What should you do?
- A . Build the solution by using YAML manifests, and provision the resources.
- B . Build the solution by using Terraform, and provision the resources.
- C . Build the solution by using Python and the cloud SDKs from all providers to provision the resources.
- D . Build the solution by using Config Connector, and provision the resources.
B
Explanation:
The requirement is for an infrastructure-as-code (IaC) solution that can manage Kubernetes resources and other cloud resources across multiple cloud providers (Google Cloud and two others).
Option A (YAML manifests): YAML manifests are primarily used for defining Kubernetes objects, not for provisioning general cloud resources (like VPCs, IAM policies, databases) across different cloud providers.
Option C (Python + SDKs): While possible, writing custom scripts using each provider’s SDK requires significant development effort to handle state management, dependencies, and provider differences. It essentially reinvents much of what dedicated IaC tools provide and is not a standard IaC approach.
Option D (Config Connector): Config Connector allows managing Google Cloud resources using Kubernetes-style manifests and APIs. It is specific to Google Cloud and cannot manage resources in other cloud providers.
Option B (Terraform): Terraform is an open-source IaC tool explicitly designed for building, changing, and versioning infrastructure safely and efficiently across multiple cloud providers and on-premises data centers. It uses providers for different platforms (GCP, AWS, Azure, Kubernetes, etc.), allowing a unified workflow to manage diverse resources across the required environments (Google Cloud, other clouds, Kubernetes).
Terraform is the standard tool for multi-cloud IaC automation as described in the scenario.
Reference: Terraform on Google Cloud: "Terraform is an open source infrastructure as code (IaC) tool… Terraform lets you manage Google Cloud resources with declarative configuration files…" – https://cloud.google.com/docs/terraform
Terraform Providers (General): Terraform supports numerous providers for various cloud platforms and services. – https://registry.terraform.io/browse/providers
Config Connector Overview: "Config Connector is a Kubernetes addon that allows you to manage Google Cloud resources through Kubernetes." (Google Cloud specific) – https://cloud.google.com/config-connector/docs/overview
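For illustration, the Terraform workflow itself is the same regardless of provider (a minimal sketch; it assumes a root module with google, aws, and azurerm provider blocks already written):

```
terraform init    # download the provider plugins declared in the configuration
terraform plan    # preview the resources to be created across all clouds
terraform apply   # provision everything in a single, unified run
```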
Your preview application, deployed on a single-zone Google Kubernetes Engine (GKE) cluster in us-central1, has gained popularity. You are now ready to make the application generally available. You need to deploy the application to production while ensuring high availability and resilience. You also want to follow Google-recommended practices.
What should you do?
- A . Use the gcloud container clusters create command with the options --enable-multi-networking and --enable-autoscaling to create an autoscaling zonal cluster and deploy the application to it.
- B . Use the gcloud container clusters create-auto command to create an Autopilot cluster and deploy the application to it.
- C . Use the gcloud container clusters update command with the option --region us-central1 to update the cluster and deploy the application to it.
- D . Use the gcloud container clusters update command with the option --node-locations us-central1-a,us-central1-b to update the cluster and deploy the application to the nodes.
B
Explanation:
Google recommends Autopilot as the default way to run production workloads on GKE, and clusters created with create-auto are regional, so both the control plane and the nodes are resilient to zonal outages. A zonal cluster (option A) remains a single point of failure, a cluster's control-plane location cannot be changed after creation (option C), and option D only spreads nodes across zones while the control plane stays in a single zone.
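A minimal sketch of option B (the cluster name and region are placeholders):

```
# create-auto provisions a regional Autopilot cluster with a highly available
# control plane and nodes spread across multiple zones
gcloud container clusters create-auto app-cluster --region=us-central1
```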
You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services.
What should you do?
- A . Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.
- B . Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.
- C . With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.
- D . In the cluster’s definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.
A
Explanation:
Adding an API as a type provider
This page describes how to add an API to Google Cloud Deployment Manager as a type provider. To learn more about types and type providers, read the Types overview documentation.
A type provider exposes all of the resources of a third-party API to Deployment Manager as base types that you can use in your configurations. These types must be directly served by a RESTful API that supports Create, Read, Update, and Delete (CRUD).
If you want to use an API that is not automatically provided by Google with Deployment Manager, you must add the API as a type provider.
https://cloud.google.com/deployment-manager/docs/configuration/type-providers/creating-type-provider
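A hedged sketch of option A (the provider name, descriptor URL, and options file below are hypothetical):

```
# Register the cluster's Kubernetes API as a Deployment Manager type provider;
# its resources (including DaemonSets) then become usable in configurations
gcloud beta deployment-manager type-providers create gke-cluster-provider \
    --descriptor-url="https://${CLUSTER_ENDPOINT}/openapi/v2" \
    --api-options-file=options.yaml
```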
You are assigned to maintain a Google Kubernetes Engine (GKE) cluster named dev that was deployed on Google Cloud. You want to manage the GKE configuration using the command line interface (CLI). You have just downloaded and installed the Cloud SDK. You want to ensure that future CLI commands by default address this specific cluster.
What should you do?
- A . Use the command gcloud config set container/cluster dev.
- B . Use the command gcloud container clusters update dev.
- C . Create a file called gke.default in the ~/.gcloud folder that contains the cluster name.
- D . Create a file called defaults.json in the ~/.gcloud folder that contains the cluster name.
A
Explanation:
To set a default cluster for gcloud commands, run the following command: gcloud config set container/cluster CLUSTER_NAME
https://cloud.google.com/kubernetes-engine/docs/how-to/managing-clusters?hl=en
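For context, this default sits alongside the other gcloud configuration properties (the project ID and zone below are placeholders):

```
gcloud config set project my-project          # default project
gcloud config set compute/zone us-central1-a  # default compute zone
gcloud config set container/cluster dev       # default cluster for container commands
gcloud config list                            # verify the active configuration
```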
You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment.
What should you do?
- A . Use service account credentials in your on-premises application.
- B . Use gcloud to create a key file for the service account that has appropriate permissions.
- C . Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
- D . Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from your data center.
B
Explanation:
Reference: https://cloud.google.com/vision/automl/docs/before-you-begin
To use a service account outside of Google Cloud, such as on other platforms or on-premises, you must first establish the identity of the service account. Public/private key pairs provide a secure way of accomplishing this goal. You can create a service account key using the Cloud Console, the gcloud tool, the serviceAccounts.keys.create() method, or one of the client libraries.
Ref: https://cloud.google.com/iam/docs/creating-managing-service-account-keys
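A minimal sketch of option B (the service account email and key path are placeholders):

```
# Create a key file for the service account and point client libraries at it
gcloud iam service-accounts keys create ~/automl-key.json \
    --iam-account=automl-sa@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=~/automl-key.json
```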
You are configuring service accounts for an application that spans multiple projects. Virtual machines (VMs) running in the web-applications project need access to BigQuery datasets in crm-databases-proj. You want to follow Google-recommended practices to give access to the service account in the web-applications project.
What should you do?
- A . Give “project owner” for web-applications appropriate roles to crm-databases-proj
- B . Give “project owner” role to crm-databases-proj and the web-applications project.
- C . Give “project owner” role to crm-databases-proj and bigquery.dataViewer role to web-applications.
- D . Give bigquery.dataViewer role to crm-databases-proj and appropriate roles to web-applications.
D
Explanation:
Reference: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-audit-logging
The bigquery.dataViewer role grants permissions to read a dataset's metadata, list its tables, and read data and metadata from those tables. Granting this role on crm-databases-proj to the service account used by the VMs in web-applications provides exactly the access required and follows the least-privilege principle, which the broad “project owner” role in the other options does not.
Ref: https://cloud.google.com/iam/docs/understanding-roles#bigquery-roles
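A hedged sketch of the recommended grant (the service account email is a placeholder):

```
# Grant the web-applications service account read-only access to BigQuery
# data in crm-databases-proj, following least privilege
gcloud projects add-iam-policy-binding crm-databases-proj \
    --member="serviceAccount:web-app@web-applications.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"
```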
Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible.
What should you do?
- A . Download and deploy the Jenkins Java WAR to App Engine Standard.
- B . Create a new Compute Engine instance and install Jenkins through the command line interface.
- C . Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image.
- D . Use GCP Marketplace to launch the Jenkins solution.
D
Explanation:
Reference: https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine
You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses this service account instead of the default Compute Engine service account.
What should you do?
- A . When creating the VM via the web console, specify the service account under the ‘Identity and API Access’ section.
- B . Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.
- C . Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.
- D . Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.
A
Explanation:
Reference:
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances
Changing the service account and access scopes for an instance: if you want to run the VM as a different identity, or you determine that the instance needs a different set of scopes to call the required APIs, you can change the service account and the access scopes of an existing instance. For example, you can change the access scopes to grant access to a new API, or change an instance so that it runs as a service account that you created instead of the Compute Engine default service account. However, Google recommends that you use fine-grained IAM policies instead of relying on access scopes to control resource access for the service account.
To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance. Use one of the following methods to change the service account or access scopes of the stopped instance.
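A minimal sketch of that flow (the instance name, zone, and service account email are placeholders):

```
# The instance must be stopped before its service account can be changed
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-service-account my-vm --zone=us-central1-a \
    --service-account=cloudsql-client@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform
gcloud compute instances start my-vm --zone=us-central1-a
```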
