Practice Free Associate Cloud Engineer Exam Online Questions
Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and must be able to determine who accessed a given instance.
What should you do?
- A . Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance.
- B . Ask each member of the team to generate a new SSH key pair and to send you their public key. Use a configuration management tool to deploy those keys on each instance.
- C . Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the “compute.osAdminLogin” role to the Google group corresponding to this team.
- D . Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance.
C
Explanation:
With OS Login, each team member adds their own SSH key to their Google account, and granting the roles/compute.osAdminLogin role to the team’s Google group gives every member administrative (sudo) access under their own identity. No shared keys need to be distributed, and the security team can attribute access to a given instance to a specific user.
https://cloud.google.com/compute/docs/instances/managing-instance-access
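As a rough sketch of what this looks like in practice (the project ID, group address, and key path below are placeholders), each team member uploads their own key and an administrator binds the role once at the project level:

```bash
# Each team member adds their own public SSH key to their Google account (OS Login).
gcloud compute os-login ssh-keys add --key-file=$HOME/.ssh/id_rsa.pub

# An administrator grants admin-level OS Login to the whole team via its Google group.
gcloud projects add-iam-policy-binding my-project \
    --member="group:gcp-ops-team@example.com" \
    --role="roles/compute.osAdminLogin"

# OS Login must be enabled on the instances (or project-wide) for the role to take effect.
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE
```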
You are assigned to maintain a Google Kubernetes Engine (GKE) cluster named dev that was deployed on Google Cloud. You want to manage the GKE configuration using the command line interface (CLI). You have just downloaded and installed the Cloud SDK. You want to ensure that future CLI commands by default address this specific cluster.
What should you do?
- A . Use the command gcloud config set container/cluster dev.
- B . Use the command gcloud container clusters update dev.
- C . Create a file called gke.default in the ~/.gcloud folder that contains the cluster name.
- D . Create a file called defaults.json in the ~/.gcloud folder that contains the cluster name.
A
Explanation:
To set a default cluster for gcloud commands, run the following command: gcloud config set container/cluster CLUSTER_NAME
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/managing-clusters?hl=en
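A minimal illustration of setting and then verifying the default cluster property (the cluster name dev comes from the question; the configuration commands are standard gcloud usage):

```bash
# Set the default cluster used by gcloud container commands.
gcloud config set container/cluster dev

# Confirm the property in the active configuration.
gcloud config get-value container/cluster
```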
You are managing a project for the Business Intelligence (BI) department in your company. A data pipeline ingests data into BigQuery via streaming. You want the users in the BI department to be able to run the custom SQL queries against the latest data in BigQuery.
What should you do?
- A . Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.
- B . Create a Service Account for the BI team and distribute a new private key to each member of the BI team.
- C . Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team’s internal data warehouse.
- D . Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.
D
Explanation:
When applied to a dataset, this role provides the ability to read the dataset’s metadata and list tables in the dataset. When applied to a project, this role also provides the ability to run jobs, including queries, within the project. A member with this role can enumerate their own jobs, cancel their own jobs, and enumerate datasets within a project. Additionally, allows the creation of new datasets within the project; the creator is granted the BigQuery Data Owner role (roles/bigquery.dataOwner) on these new datasets.
https://cloud.google.com/bigquery/docs/access-control
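As an illustrative sketch (the project ID and group address are placeholders), the role can be bound once at the project level so every member of the group can run queries:

```bash
# Grant the BigQuery User role to the BI team's Google group for the whole project.
gcloud projects add-iam-policy-binding bi-project \
    --member="group:bi-team@example.com" \
    --role="roles/bigquery.user"
```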
Your company is modernizing its applications and refactoring them to containerized microservices. You need to deploy the infrastructure on Google Cloud so that teams can deploy their applications. The applications cannot be exposed publicly. You want to minimize management and operational overhead.
What should you do?
- A . Provision a Standard zonal Google Kubernetes Engine (GKE) cluster.
- B . Provision a fleet of Compute Engine instances and install Kubernetes.
- C . Provision a Google Kubernetes Engine (GKE) Autopilot cluster.
- D . Provision a Standard regional Google Kubernetes Engine (GKE) cluster.
C
Explanation:
GKE Autopilot is a mode of operation in GKE where Google manages the underlying infrastructure, including nodes, node pools, and their upgrades. This significantly reduces the management and operational overhead for the user, allowing teams to focus solely on deploying and managing their containerized applications. Since the applications are not exposed publicly, the zonal or regional nature of the cluster primarily impacts availability within Google Cloud, and Autopilot is available for both. Autopilot minimizes the operational burden, which is a key requirement.
Option A: A Standard zonal GKE cluster requires you to manage the nodes yourself, including sizing, scaling, and upgrades, increasing operational overhead compared to Autopilot.
Option B: Manually installing and managing Kubernetes on a fleet of Compute Engine instances involves the highest level of management overhead, which contradicts the requirement to minimize it.
Option D: A Standard regional GKE cluster provides higher availability than a zonal cluster by replicating the control plane and nodes across multiple zones within a region. However, it still requires you to manage the underlying nodes, unlike Autopilot.
Reference (Google Cloud Certified – Associate Cloud Engineer documentation):
The Google Kubernetes Engine documentation clearly outlines the different modes of GKE operation, including Standard and Autopilot, and their respective management responsibilities and benefits; this is a core topic for the Associate Cloud Engineer certification. The emphasis on reduced operational overhead with Autopilot is a key differentiator.
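A hedged sketch of provisioning such a cluster (the cluster name and region are placeholders; the private-nodes flag is shown only to illustrate keeping workloads off the public internet):

```bash
# Create an Autopilot cluster; Google manages the nodes, node pools, and upgrades.
gcloud container clusters create-auto my-autopilot-cluster \
    --region=us-central1 \
    --enable-private-nodes   # nodes get no external IPs, suiting non-public workloads
```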
You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly analyze the log contents. You want to follow Google-recommended practices to obtain the combined logs for all projects.
What should you do?
- A . Navigate to Stackdriver Logging and select resource.labels.project_id="*"
- B . Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days.
- C . Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days.
- D . Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days.
B
Explanation:
Navigate to Stackdriver Logging and select resource.labels.project_id=*. is not right.
Log entries are held in Stackdriver Logging for a limited time known as the retention period, which is 30 days by default. After that, the entries are deleted. To keep log entries longer, you need to export them outside of Stackdriver Logging by configuring log sinks.
Ref: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-audit-logging
Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days. is not right.
While this works, it makes no sense to use a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery when Google provides a feature (export sinks) that does exactly the same thing and works out of the box.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days. is not right.
You can export logs by creating one or more sinks that include a logs query and an export destination. Supported destinations for exported log entries are Cloud Storage, BigQuery, and Pub/Sub.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was created: a Google Cloud project, organization, folder, or billing account. To make it easier to export from all projects of an organization, you can create an aggregated sink that can export log entries from all the projects, folders, and billing accounts of a Google Cloud organization.
Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in Cloud Storage, but querying log data in Cloud Storage is harder than querying a BigQuery dataset. For this reason, we should prefer BigQuery over Cloud Storage.
Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days. is the right answer.
You can export logs by creating one or more sinks that include a logs query and an export destination. Supported destinations for exported log entries are Cloud Storage, BigQuery, and Pub/Sub.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was created: a Google Cloud project, organization, folder, or billing account. To make it easier to export from all projects of an organization, you can create an aggregated sink that can export log entries from all the projects, folders, and billing accounts of a Google Cloud organization.
Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in a BigQuery dataset. Querying a BigQuery dataset is easier and quicker than analyzing the contents of a Cloud Storage bucket. As our requirement is to quickly analyze the log contents, we should prefer BigQuery over Cloud Storage.
Also, you can control storage costs and optimize storage usage by setting the default table expiration for newly created tables in a dataset. If you set the property when the dataset is created, any table created in the dataset is deleted after the expiration period. If you set the property after the dataset is created, only new tables are deleted after the expiration period.
For example, if you set the default table expiration to 7 days, older data is automatically deleted after 1 week.
Ref: https://cloud.google.com/bigquery/docs/best-practices-storage
Reference: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-audit-logging
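As a sketch of the recommended setup (the organization ID, project, dataset, and sink name below are placeholders), an aggregated sink exports logs from all projects to BigQuery, and a default table expiration enforces the 60-day window:

```bash
# Create an aggregated sink at the organization level that exports logs from all
# projects to a BigQuery dataset.
gcloud logging sinks create all-projects-logs \
    bigquery.googleapis.com/projects/central-logging/datasets/all_logs \
    --organization=123456789012 \
    --include-children

# Set a default table expiration of 60 days (in seconds) on the destination dataset
# so exported log tables are deleted automatically.
bq update --default_table_expiration 5184000 central-logging:all_logs
```

After creating the sink, grant its writer identity (printed in the command output) the BigQuery Data Editor role on the destination dataset so the exported entries can be written.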
You want to host your video encoding software on Compute Engine. Your user base is growing rapidly, and users need to be able to encode their videos at any time without interruption or CPU limitations. You must ensure that your encoding solution is highly available, and you want to follow Google-recommended practices to automate operations.
What should you do?
- A . Deploy your solution on multiple standalone Compute Engine instances, and increase the number of existing instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
- B . Deploy your solution on multiple standalone Compute Engine instances, and replace existing instances with high-CPU instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
- C . Deploy your solution to an instance group, and increase the number of available instances whenever you see high CPU utilization in Cloud Monitoring.
- D . Deploy your solution to an instance group, and set the autoscaling based on CPU utilization.
D
Explanation:
Instance groups are collections of virtual machine (VM) instances that you can manage as a single entity. Instance groups can help you simplify the management of multiple instances, reduce operational costs, and improve the availability and performance of your applications.
Instance groups support autoscaling, which automatically adds or removes instances from the group based on increases or decreases in load. Autoscaling helps your applications gracefully handle increases in traffic and reduces cost when the need for resources is lower. You can set the autoscaling policy based on CPU utilization, load balancing capacity, Cloud Monitoring metrics, or a queue-based workload. In this case, since the video encoding software is CPU-intensive, setting the autoscaling based on CPU utilization is the best option to ensure high availability and optimal performance.
Reference: Instance groups
Autoscaling groups of instances
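A minimal sketch of what option D looks like with gcloud (the group name, template, region, size limits, and utilization target are placeholders):

```bash
# Create a regional managed instance group from an existing instance template.
gcloud compute instance-groups managed create video-encoder-mig \
    --template=video-encoder-template \
    --size=2 \
    --region=us-central1

# Enable autoscaling on CPU utilization so capacity follows encoding demand.
gcloud compute instance-groups managed set-autoscaling video-encoder-mig \
    --region=us-central1 \
    --min-num-replicas=2 \
    --max-num-replicas=20 \
    --target-cpu-utilization=0.75 \
    --cool-down-period=90
```

Using a regional managed instance group spreads instances across zones, which also addresses the high-availability requirement.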
You have created an application that is packaged into a Docker image. You want to deploy the Docker image as a workload on Google Kubernetes Engine.
What should you do?
- A . Upload the image to Cloud Storage and create a Kubernetes Service referencing the image.
- B . Upload the image to Cloud Storage and create a Kubernetes Deployment referencing the image.
- C . Upload the image to Container Registry and create a Kubernetes Service referencing the image.
- D . Upload the image to Container Registry and create a Kubernetes Deployment referencing the image.
D
Explanation:
GKE pulls container images from a container image registry such as Container Registry (or its successor, Artifact Registry); Cloud Storage is not an image registry, so a cluster cannot pull an image from it. A Deployment is responsible for keeping a set of Pods running, which is what you need to run the application, while a Service is responsible for enabling network access to a set of Pods.
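A brief sketch of the workflow (the project ID, image name, tag, and ports are placeholders):

```bash
# Tag and push the local image to Container Registry.
docker tag my-app:v1 gcr.io/my-project/my-app:v1
docker push gcr.io/my-project/my-app:v1

# Create a Kubernetes Deployment that references the pushed image.
kubectl create deployment my-app --image=gcr.io/my-project/my-app:v1

# If the workload must also be reachable inside the cluster, expose it with a
# Service afterwards, for example:
# kubectl expose deployment my-app --port=80 --target-port=8080
```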