Free Associate Cloud Engineer Practice Exam Questions (Online)
You are asked to set up application performance monitoring on Google Cloud projects A, B, and C as a single pane of glass. You want to monitor CPU, memory, and disk.
What should you do?
- A . Enable API and then share charts from project A, B, and C.
- B . Enable API and then give the metrics.reader role to projects A, B, and C.
- C . Enable API and then use default dashboards to view all projects in sequence.
- D . Enable API, create a workspace under project A, and then add project B and C.
D
Explanation:
A Cloud Monitoring metrics scope (formerly a Workspace) created under project A can monitor multiple projects: adding B and C as monitored projects gives a single pane of glass for CPU, memory, and disk metrics across all three.
https://cloud.google.com/monitoring/settings/multiple-projects
https://cloud.google.com/monitoring/workspaces
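For illustration, once projects B and C are added to project A's metrics scope, a single API call against the scoping project returns time series from every monitored project. A minimal sketch using the google-cloud-monitoring Python client (the project ID is hypothetical):

```python
# Sketch: read CPU utilization across a metrics scope. Assumes projects B and C
# have already been added as monitored projects of scoping project "project-a".
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
scoping_project = "projects/project-a"  # hypothetical scoping project

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

# The query fans out to every project in the metrics scope, not just project A.
for series in client.list_time_series(
    request={
        "name": scoping_project,
        "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
):
    print(series.resource.labels["project_id"], series.points[0].value.double_value)
```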
Your company was recently impacted by a service disruption that caused multiple Dataflow jobs to get stuck, resulting in significant downtime in downstream applications and revenue loss. You were able to resolve the issue by identifying and fixing an error you found in the code. You need to design a solution with minimal management effort to identify when jobs are stuck in the future to ensure that this issue does not occur again.
What should you do?
- A . Set up Error Reporting to identify stack traces that indicate slowdowns in Dataflow jobs. Set up alerts based on these log entries.
- B . Use the Personalized Service Health dashboard to identify issues with Dataflow jobs across regions.
- C . Update the Dataflow job configurations to send messages to a Pub/Sub topic when there are delays. Configure a backup Dataflow job to process jobs that are delayed. Use Cloud Tasks to trigger an alert when messages are pushed to the Pub/Sub topic.
- D . Set up Cloud Monitoring alerts on the data freshness metric for the Dataflow jobs to receive a notification when a certain threshold is reached.
D
Explanation:
The goal is to proactively identify stuck Dataflow jobs with minimal management effort. Dataflow exports a data freshness (data watermark age) metric to Cloud Monitoring; when a job is stuck, the watermark stops advancing and the metric grows, so a Cloud Monitoring alert on a freshness threshold notifies you as soon as a job stalls, with no custom code to maintain. Error Reporting (A) surfaces exceptions, not slowdowns; the Personalized Service Health dashboard (B) covers Google-side incidents, not errors in your own job code; and option C requires building and operating additional Pub/Sub, Dataflow, and Cloud Tasks components, which is far from minimal management effort.
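As a sketch of option D, the alert can also be created programmatically with the google-cloud-monitoring client. The metric below is Dataflow's data watermark age (its data freshness signal); the project ID, threshold, and display names are hypothetical:

```python
# Sketch: alert when a Dataflow job's data watermark age (data freshness)
# exceeds 30 minutes for 5 minutes, indicating the job may be stuck.
from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()

condition = monitoring_v3.AlertPolicy.Condition(
    {
        "display_name": "Dataflow data freshness above threshold",
        "condition_threshold": {
            "filter": (
                'metric.type = "dataflow.googleapis.com/job/data_watermark_age" '
                'AND resource.type = "dataflow_job"'
            ),
            "comparison": monitoring_v3.ComparisonType.COMPARISON_GT,
            "threshold_value": 1800,  # hypothetical threshold: 30 minutes, in seconds
            "duration": {"seconds": 300},
        },
    }
)
policy = monitoring_v3.AlertPolicy(
    display_name="Stuck Dataflow job",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
)
client.create_alert_policy(name="projects/my-project", alert_policy=policy)
```

In practice you would also attach notification channels to the policy so the alert reaches the on-call team.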
You want to deploy a new containerized application into Google Cloud by using a Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the same time, you want to minimize configuring infrastructure.
What should you do?
- A . Deploy the application on GKE Autopilot.
- B . Deploy the application on GKE Standard.
- C . Deploy the application on Cloud Functions.
- D . Deploy the application on Cloud Run.
A
Explanation:
GKE Autopilot accepts standard Kubernetes manifests, so you keep full control over the Kubernetes deployment, while Google provisions and manages the nodes and underlying infrastructure for you. Cloud Run and Cloud Functions do not use Kubernetes manifests, and GKE Standard requires you to configure and manage node pools yourself.
You recently discovered that your developers are using many service account keys during their development process. While you work on a long term improvement, you need to quickly implement a process to enforce short-lived service account credentials in your company.
You have the following requirements:
• All service accounts that require a key should be created in a centralized project called pj-sa.
• Service account keys should only be valid for one day.
You need a Google-recommended solution that minimizes cost.
What should you do?
- A . Implement a Cloud Run job to rotate all service account keys periodically in pj-sa. Enforce an org policy to deny service account key creation with an exception to pj-sa.
- B . Implement a Kubernetes Cronjob to rotate all service account keys periodically. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
- C . Enforce an org policy constraint allowing the lifetime of service account keys to be 24 hours.
Enforce an org policy constraint denying service account key creation with an exception on pj-sa.
- D . Enforce a DENY org policy constraint over the lifetime of service account keys for 24 hours. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
C
Explanation:
According to the Google Cloud documentation, you can use organization policy constraints to control the creation and expiration of service account keys.
The constraints are:
constraints/iam.disableServiceAccountKeyCreation: This boolean constraint, when enforced, prevents the creation of service account keys. By enforcing it at the organization level and setting a non-enforced override on the pj-sa project, you prevent developers from creating service account keys anywhere except pj-sa.
constraints/iam.serviceAccountKeyExpiryHours: This list constraint specifies the maximum lifetime of newly created service account keys. By allowing only the value 24h at the organization level, you ensure that all new service account keys expire after one day.
These constraints are recommended by Google Cloud as best practices to minimize the risk of service account key misuse or compromise. They also help you reduce the cost of managing service account keys, as you do not need to implement a custom solution to rotate or delete them.
Reference: Create and delete service account keys – https://cloud.google.com/iam/docs/keys-create-delete
Reference: Organization policy constraints for service accounts – https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts
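A rough sketch of answer C using the google-cloud-org-policy Python client; the organization ID is hypothetical, and the constraint values mirror the explanation above:

```python
# Sketch: enforce the two constraints from the explanation. Org ID is hypothetical.
from google.cloud import orgpolicy_v2

client = orgpolicy_v2.OrgPolicyClient()
org = "organizations/123456789012"

# Deny service account key creation everywhere in the organization...
client.create_policy(
    parent=org,
    policy=orgpolicy_v2.Policy(
        name=f"{org}/policies/iam.disableServiceAccountKeyCreation",
        spec=orgpolicy_v2.PolicySpec(
            rules=[orgpolicy_v2.PolicySpec.PolicyRule(enforce=True)]
        ),
    ),
)

# ...but override the constraint in pj-sa so keys can still be created there.
client.create_policy(
    parent="projects/pj-sa",
    policy=orgpolicy_v2.Policy(
        name="projects/pj-sa/policies/iam.disableServiceAccountKeyCreation",
        spec=orgpolicy_v2.PolicySpec(
            rules=[orgpolicy_v2.PolicySpec.PolicyRule(enforce=False)]
        ),
    ),
)

# Limit the lifetime of newly created keys to 24 hours org-wide.
client.create_policy(
    parent=org,
    policy=orgpolicy_v2.Policy(
        name=f"{org}/policies/iam.serviceAccountKeyExpiryHours",
        spec=orgpolicy_v2.PolicySpec(
            rules=[
                orgpolicy_v2.PolicySpec.PolicyRule(
                    values=orgpolicy_v2.PolicySpec.PolicyRule.StringValues(
                        allowed_values=["24h"]
                    )
                )
            ]
        ),
    ),
)
```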
Your company uses BigQuery for data warehousing. Over time, many different business units in your company have created 1000+ datasets across hundreds of projects. Your CIO wants you to examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort in performing this task.
What should you do?
- A . Go to Data Catalog and search for employee_ssn in the search box.
- B . Write a shell script that uses the bq command line tool to loop through all the projects in your organization.
- C . Write a script that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find the employee_ssn column.
- D . Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find employee_ssn column.
A
Explanation:
Data Catalog automatically indexes the metadata of BigQuery tables, including column names, across all projects in your organization. A single search for the column (for example, the query column:employee_ssn) finds every table that contains it, with no scripting required.
https://cloud.google.com/data-catalog/docs
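For illustration, the same search can be run from code with the google-cloud-datacatalog Python client; the organization ID below is hypothetical:

```python
# Sketch: find every table with an employee_ssn column across an organization.
from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()

scope = datacatalog_v1.SearchCatalogRequest.Scope()
scope.include_org_ids.append("123456789012")  # hypothetical organization ID

# "column:" is Data Catalog's search qualifier for column names.
for result in client.search_catalog(
    request={"scope": scope, "query": "column:employee_ssn"}
):
    print(result.linked_resource)
```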
Your company has developed a new application that consists of multiple microservices. You want to deploy the application to Google Kubernetes Engine (GKE), and you want to ensure that the cluster can scale as more applications are deployed in the future. You want to avoid manual intervention when each new application is deployed.
What should you do?
- A . Deploy the application on GKE, and add a HorizontalPodAutoscaler to the deployment.
- B . Deploy the application on GKE, and add a VerticalPodAutoscaler to the deployment.
- C . Create a GKE cluster with autoscaling enabled on the node pool. Set a minimum and maximum for the size of the node pool.
- D . Create a separate node pool for each application, and deploy each application to its dedicated node pool.
C
Explanation:
With cluster autoscaling enabled on the node pool and a minimum and maximum size set, GKE automatically adds nodes when newly deployed applications need capacity and removes them when they are no longer needed, so no manual intervention is required per deployment. The Horizontal and Vertical Pod Autoscalers (A, B) scale pods, not the cluster's node capacity, and per-application node pools (D) require manual work for every new application.
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler#adding_a_node_pool_with_autoscaling
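As a sketch of option C with the google-cloud-container Python client (project, location, and cluster names are hypothetical):

```python
# Sketch: add a node pool that autoscales between 1 and 10 nodes.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

node_pool = container_v1.NodePool(
    name="autoscaled-pool",
    initial_node_count=1,
    autoscaling=container_v1.NodePoolAutoscaling(
        enabled=True,
        min_node_count=1,
        max_node_count=10,
    ),
)

client.create_node_pool(
    parent="projects/my-project/locations/us-central1/clusters/my-cluster",
    node_pool=node_pool,
)
```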
In Google Kubernetes Engine, what is the role of a node pool?
- A . To manage the type of storage used by cluster pods
- B . To group nodes of the same configuration within a cluster
- C . To exclusively manage network configurations
- D . To schedule jobs on the cluster
B
Explanation:
A node pool is a group of nodes within a cluster that all share the same configuration (machine type, disk, labels, and so on), which lets you run workloads with different resource needs on appropriately sized nodes.
You are deploying an application to Google Kubernetes Engine (GKE) that needs to call an external third-party API. You need to provide the external API vendor with a list of IP addresses for their firewall to allow traffic from your application. You want to follow Google-recommended practices and avoid any risk of interrupting traffic to the API due to IP address changes.
What should you do?
- A . Configure your GKE cluster with one node, and set the node to have a static external IP address. Ensure that the GKE cluster autoscaler is off. Send the external IP address of the node to the vendor to be added to the allowlist.
- B . Configure your GKE cluster with private nodes. Configure a Cloud NAT instance with static IP addresses. Provide these IP addresses to the vendor to be added to the allowlist.
- C . Configure your GKE cluster with public nodes. Write a Cloud Function that pulls the public IP addresses of each node in the cluster. Trigger the function to run every day with Cloud Scheduler. Send the list to the vendor by email every day.
- D . Configure your GKE cluster with private nodes. Configure a Cloud NAT instance with dynamic IP addresses. Provide these IP addresses to the vendor to be added to the allowlist.
B
Explanation:
The requirement is for a stable set of egress IP addresses from a GKE cluster for allowlisting by a third party, following best practices.
Option A is not recommended: Using a single node lacks scalability and high availability. Relying on a single node’s static IP creates a single point of failure and doesn’t align with GKE’s design principles. Disabling autoscaling hinders elasticity.
Option C is complex and unreliable: Public nodes typically have ephemeral external IPs (unless manually configured per node, which is difficult to manage with autoscaling). Dynamically tracking and emailing IPs daily is operationally burdensome and prone to race conditions where the allowlist might lag behind IP changes.
Option D uses Cloud NAT but with dynamic IPs. Dynamic IPs change over time, making them unsuitable for stable firewall allowlists.
Option B is the Google-recommended practice: Configuring the GKE cluster with private nodes enhances security as nodes don’t have direct external IPs. Cloud NAT provides managed network address translation for these private nodes to access the internet. By configuring Cloud NAT with a static allocation of external IP addresses, all egress traffic from the private GKE nodes will appear to originate from this stable, predictable set of IPs. This set can be given to the vendor for allowlisting without worrying about node IP changes due to scaling or maintenance.
This approach decouples the application’s egress IP from the individual nodes, providing stability and adhering to the principle of least privilege (private nodes).
Reference: Cloud NAT Overview: "Cloud NAT lets certain resources without external IP addresses create outbound connections to the internet." – https://cloud.google.com/nat/docs/overview
Cloud NAT IP Addresses: "When you configure a NAT gateway… You can configure the NAT gateway to automatically allocate regional external IP addresses… Alternatively, you can manually assign a fixed number of static external IP addresses to the gateway." – https://cloud.google.com/nat/docs/overview#ip-addresses
GKE and Cloud NAT: "Configure Cloud NAT with GKE… Use Case: You want a GKE pod to deterministically egress traffic from a static set of IP addresses that you control." – https://cloud.google.com/nat/docs/gke-example
Private Clusters: "Private nodes do not have endpoint-accessible external IP addresses." – https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
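A rough sketch of option B's network pieces with the google-cloud-compute Python client; resource names, project, and region are hypothetical, and the GKE cluster is assumed to already run with private nodes on my-vpc:

```python
# Sketch: reserve a static external IP and attach it to a Cloud NAT gateway so
# all egress from the private GKE nodes uses a stable, allowlistable address.
from google.cloud import compute_v1

project, region = "my-project", "us-central1"  # hypothetical

# 1. Reserve a static regional external IP address for NAT.
addresses = compute_v1.AddressesClient()
addresses.insert(
    project=project,
    region=region,
    address_resource=compute_v1.Address(name="nat-egress-ip-1"),
).result()
nat_ip = addresses.get(project=project, region=region, address="nat-egress-ip-1")

# 2. Create a Cloud Router with a NAT gateway that uses only that reserved IP.
routers = compute_v1.RoutersClient()
routers.insert(
    project=project,
    region=region,
    router_resource=compute_v1.Router(
        name="nat-router",
        network=f"projects/{project}/global/networks/my-vpc",
        nats=[
            compute_v1.RouterNat(
                name="gke-egress-nat",
                nat_ip_allocate_option="MANUAL_ONLY",  # static IPs only
                nat_ips=[nat_ip.self_link],
                source_subnetwork_ip_ranges_to_nat="ALL_SUBNETWORKS_ALL_IP_RANGES",
            )
        ],
    ),
).result()

print("Allowlist this address with the vendor:", nat_ip.address)
```

Because the gateway uses manual (static) IP allocation, the printed address never changes as the cluster scales, so the vendor's allowlist stays valid.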
