Practice Free Associate Cloud Engineer Exam Online Questions
You are managing a stateful application deployed on Google Kubernetes Engine (GKE) that can only have one replica. You recently discovered that the application becomes unstable at peak times. You have identified that the application needs more CPU than what has been configured in the manifest at these peak times. You want Kubernetes to allocate the application sufficient CPU resources during these peak times, while ensuring cost efficiency during off-peak periods.
What should you do?
- A . Enable cluster autoscaling on the GKE cluster.
- B . Configure a Vertical Pod Autoscaler on the Deployment.
- C . Configure a Horizontal Pod Autoscaler on the Deployment.
- D . Enable node auto-provisioning on the GKE cluster.
B
Explanation:
The Vertical Pod Autoscaler (VPA) in Kubernetes automatically adjusts the CPU and memory requests and limits of the containers within a pod based on historical and real-time resource usage. In this scenario, where a single-replica stateful application needs more CPU during peak times, VPA can dynamically increase the CPU allocated to the pod when needed and potentially decrease it during off-peak periods to optimize resource utilization and cost efficiency.
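As a rough sketch, assuming the workload is a Deployment named my-app (the cluster and zone names are placeholders as well), VPA could be enabled on the cluster and targeted at the Deployment like this:
```
# Enable Vertical Pod Autoscaling on an existing GKE cluster
# (cluster and zone names are placeholders).
gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --enable-vertical-pod-autoscaling

# Create a VPA object that manages CPU for the single-replica Deployment.
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"   # Let VPA apply recommendations automatically
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      controlledResources: ["cpu"]
EOF
```
Note that in Auto mode, VPA applies new resource requests by evicting and recreating the pod, so a single-replica workload will see a brief restart when recommendations change.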
Option A: Cluster autoscaling adds or removes nodes in your GKE cluster based on the resource requests of your pods. While it can help with overall cluster capacity, it doesn’t directly address the need for more CPU for a specific pod.
Option C: Horizontal Pod Autoscaler (HPA) scales the number of pod replicas based on observed CPU utilization or other select metrics. Since the application can only have one replica, HPA is not suitable.
Option D: Node auto-provisioning is similar to cluster autoscaling, automatically creating and deleting node pools based on workload demands. It doesn’t directly manage the resources of individual pods.
Reference to Google Cloud Certified – Associate Cloud Engineer Documents:
The functionality and use cases of the Vertical Pod Autoscaler (VPA) are detailed in the Google Kubernetes Engine documentation, specifically within the resource management and autoscaling sections. Understanding how VPA can dynamically adjust pod resources is relevant to the Associate Cloud Engineer certification.
Create an egress firewall rule with the following settings:
• Targets: all instances
• Source filter: IP ranges (with the range set to 10.0.1.0/24)
• Protocols: allow TCP: 8080
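For reference, a rough gcloud equivalent of these settings (rule and network names are illustrative; in the gcloud CLI, the IP-range filter on an egress rule is expressed with --destination-ranges):
```
# Omitting target tags or target service accounts applies the rule
# to all instances in the network.
gcloud compute firewall-rules create allow-tcp-8080-egress \
    --network=default \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:8080 \
    --destination-ranges=10.0.1.0/24
```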
Your organization is a financial company that needs to store audit log files for 3 years. Your organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention.
What should you do?
- A . Create an export to the sink that saves logs from Cloud Audit to BigQuery.
- B . Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
- C . Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.
- D . Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.
B
Explanation:
Coldline Storage is a very low-cost, highly durable storage class for infrequently accessed data (read or modified less than once a quarter), which makes it a cost-effective place to retain audit logs for 3 years. Because the organization has hundreds of projects, an aggregated sink at the organization level can export Cloud Audit Logs from every project to a single Coldline bucket.
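A minimal sketch of this setup, with placeholder bucket, organization, and sink names:
```
# Create a Coldline bucket to hold the exported audit logs
# (bucket name and location are placeholders).
gcloud storage buckets create gs://example-audit-logs \
    --location=us-central1 \
    --default-storage-class=coldline

# Create an organization-level aggregated sink so audit logs from
# all child projects are exported to the bucket.
gcloud logging sinks create audit-logs-sink \
    storage.googleapis.com/example-audit-logs \
    --organization=123456789012 \
    --include-children \
    --log-filter='logName:"cloudaudit.googleapis.com"'
```
After the sink is created, grant its writer identity (printed by the command) the roles/storage.objectCreator role on the bucket so it can write the exported logs.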
You need to grant access for three users so that they can view and edit table data on a Cloud Spanner instance.
What should you do?
- A . Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to the role.
- B . Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to a new group. Add the group to the role.
- C . Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to the role.
- D . Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to a new group. Add the group to the role.
B
Explanation:
https://cloud.google.com/spanner/docs/iam#spanner.databaseUser
Run gcloud iam roles describe roles/spanner.databaseUser in Cloud Shell to confirm that the role includes the permissions to read and write table data. Then add the three users to a newly created Google group and grant the group the role; managing access through groups rather than individual users is the recommended IAM practice.
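A minimal sketch of the grant, assuming a hypothetical group spanner-editors@example.com already exists and the instance is named my-instance:
```
# Inspect the permissions carried by the role
gcloud iam roles describe roles/spanner.databaseUser

# Grant the role to the group on the Spanner instance
gcloud spanner instances add-iam-policy-binding my-instance \
    --member='group:spanner-editors@example.com' \
    --role='roles/spanner.databaseUser'
```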
You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also occasionally updated at month-end.
What should you do?
- A . Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.
- B . Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.
- C . Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.
- D . Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.
B
Explanation:
Reference: https://cloud.google.com/storage/docs/managing-lifecycles
Nearline Storage is ideal for data you plan to read or modify on average once per month or less, which matches the monthly reporting access and occasional month-end updates. This option also transitions only the noncurrent (previous) versions, which is what we want.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
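A minimal sketch of such a lifecycle rule, using a placeholder bucket name; the "isLive": false condition limits the rule to noncurrent (previous) versions:
```
# Lifecycle rule: move noncurrent object versions older than 30 days
# to Nearline (bucket name is a placeholder).
cat > lifecycle.json <<EOF
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30, "isLive": false}
    }
  ]
}
EOF

gsutil lifecycle set lifecycle.json gs://example-archive-bucket
```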
Your organization has a dedicated person who creates and manages all service accounts for Google Cloud projects. You need to assign this person the minimum role for projects.
What should you do?
- A . Add the user to roles/iam.roleAdmin role.
- B . Add the user to roles/iam.securityAdmin role.
- C . Add the user to roles/iam.serviceAccountUser role.
- D . Add the user to roles/iam.serviceAccountAdmin role.
D
Explanation:
Reference: https://cloud.google.com/iam/docs/creating-managing-service-accounts
Service Account User (roles/iam.serviceAccountUser): includes permissions to list service accounts, get details about a service account, and impersonate a service account.
Service Account Admin (roles/iam.serviceAccountAdmin): includes permissions to list service accounts and get details about a service account, plus permissions to create, update, and delete service accounts and to view or change the IAM policy on a service account.
Because this person creates and manages all service accounts, Service Account Admin is the minimum role that covers those duties; Service Account User (option C) only allows using and impersonating existing service accounts.
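A minimal sketch of the grant, with placeholder project ID and user email:
```
# Grant the dedicated person the Service Account Admin role
# at the project level (project ID and email are placeholders).
gcloud projects add-iam-policy-binding example-project \
    --member='user:sa-manager@example.com' \
    --role='roles/iam.serviceAccountAdmin'
```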
You have deployed an application on a Compute Engine instance. An external consultant needs to access the Linux-based instance. The consultant is connected to your corporate network through a VPN connection, but the consultant has no Google account.
What should you do?
- A . Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-Aware Proxy to access the instance.
- B . Instruct the external consultant to use the gcloud compute ssh command line tool by using the public IP address of the instance to access it.
- C . Instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key.
- D . Instruct the external consultant to generate an SSH key pair, and request the private key from the consultant. Add the private key to the instance yourself, and have the consultant access the instance through SSH with their public key.
C
Explanation:
The best option is to instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Then, add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key. This way, you can grant the consultant access to the instance without requiring a Google account or exposing the instance’s public IP address. This option also follows the best practice of using user-managed SSH keys instead of service account keys for SSH access [1].
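A minimal sketch of that workflow, with placeholder key, instance, and user names:
```
# On the consultant's machine: generate a key pair and share only
# the public key (consultant_key.pub) with the instance owner.
ssh-keygen -t rsa -f ~/.ssh/consultant_key -C consultant

# On the owner's side: add the public key to the instance metadata.
# Note: add-metadata replaces the existing ssh-keys value, so merge
# any keys already present into this file first.
echo "consultant:$(cat consultant_key.pub)" > keys.txt
gcloud compute instances add-metadata my-instance \
    --zone=us-central1-a \
    --metadata-from-file ssh-keys=keys.txt

# The consultant then connects over the VPN using the private key.
ssh -i ~/.ssh/consultant_key consultant@INTERNAL_IP
```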
Option A is not feasible because the external consultant does not have a Google account, and therefore cannot use Identity-Aware Proxy (IAP) to access the instance. IAP requires the user to authenticate with a Google account and have the appropriate IAM permissions to access the instance [2].
Option B is not secure because it exposes the instance’s public IP address, which can increase the risk of unauthorized access or attacks.
Option D is not correct because it reverses the roles of the public and private keys. The public key should be added to the instance, and the private key should be kept by the consultant. Sharing the private key with anyone else can compromise the security of the SSH connection [3].
Reference:
[1] https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
[2] https://cloud.google.com/iap/docs/using-tcp-forwarding
[3] https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances