Practice Free Associate Cloud Engineer Exam Online Questions
You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company’s security policies only allow the use of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a Cloud Storage bucket within your project.
What should you do?
- A . Enable Private Service Access on the Cloud Storage Bucket.
- B . Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.
- C . Enable Private Google Access on the subnet within the custom VPC.
- D . Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.
C
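Option C (enabling Private Google Access on the subnet) lets VMs that have only internal IP addresses reach Google APIs and services such as Cloud Storage. As a sketch, with a placeholder subnet name and region:

```shell
# Allow internal-only VMs on this subnet to reach Google APIs and
# services (including storage.googleapis.com)
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access
```

No external IPs or internet egress are needed; traffic to Google APIs stays on Google's network.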
You need to configure IAM access audit logging in BigQuery for external auditors. You want to follow Google-recommended practices.
What should you do?
- A . Add the auditors group to the ‘logging.viewer’ and ‘bigquery.dataViewer’ predefined IAM roles.
- B . Add the auditors group to two new custom IAM roles.
- C . Add the auditor user accounts to the ‘logging.viewer’ and ‘bigquery.dataViewer’ predefined IAM roles.
- D . Add the auditor user accounts to two new custom IAM roles.
A
Explanation:
https://cloud.google.com/iam/docs/job-functions/auditing#scenario_external_auditors
If you grant roles directly to individual user accounts, then whenever an auditor leaves the organization you have to revoke their access in multiple places. Granting the roles to a group instead makes this much easier to manage: when an auditor leaves, you simply remove the user from the group and all of their access is revoked at once.
The organization creates a Google group for these external auditors and adds the current auditor to the group. This group is monitored and is typically granted access to the dashboard application. During normal access, the auditors’ Google group is only granted access to view the historic logs stored in BigQuery. If any anomalies are discovered, the group is granted permission to view the actual Cloud Logging Admin Activity logs via the dashboard’s elevated access mode. At the end of each audit period, the group’s access is then revoked. Data is redacted using Cloud DLP before being made accessible for viewing via the dashboard application. The table below explains IAM logging roles that an Organization Administrator can grant to the service account used by the dashboard, as well as the resource level at which the role is granted.
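Granting the two predefined roles to the auditors' group rather than to individual accounts might look like the following (the project ID and group address are placeholders):

```shell
# Grant audit-log viewing and BigQuery read access to the group,
# not to individual auditor accounts
gcloud projects add-iam-policy-binding my-project \
    --member="group:auditors@example.com" \
    --role="roles/logging.viewer"
gcloud projects add-iam-policy-binding my-project \
    --member="group:auditors@example.com" \
    --role="roles/bigquery.dataViewer"
```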
The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to create or update any other resources in the project. You want to follow Google’s recommendations for setting permissions for the DevOps group.
What should you do?
- A . Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
- B . Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.
- C . Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.
- D . Grant the basic role roles/editor to the DevOps group.
A
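Option A's combination of the basic roles/viewer role and the predefined roles/compute.admin role could be granted to the group as follows (the project ID and group address are placeholders):

```shell
# Full control of Compute Engine resources only
gcloud projects add-iam-policy-binding dev-project \
    --member="group:devops@example.com" \
    --role="roles/compute.admin"

# Read-only visibility into everything else in the project
gcloud projects add-iam-policy-binding dev-project \
    --member="group:devops@example.com" \
    --role="roles/viewer"
```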
You have one GCP account running in your default region and zone and another account running in a non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface.
What should you do?
- A . Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances.
- B . Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.
- C . Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.
- D . Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.
A
Explanation:
Options B, C, and D all end with a listing command: gcloud configurations list and gcloud config list only display configuration information and cannot start a Compute Engine instance.
Each gcloud configuration has a one-to-one relationship with a region (if a region is defined). Since we have two different regions, we need to create two separate configurations using gcloud config configurations create.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/create
Secondly, you can activate each configuration independently by running gcloud config configurations activate [NAME]
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate
Finally, while each configuration is active, you can run the gcloud compute instances start [NAME] command to start the instance in the configurations region.
https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
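The flow described above can be sketched with gcloud; the configuration names, account emails, project IDs, zones, and instance names below are placeholders:

```shell
# Configuration for the first account (default region/zone)
gcloud config configurations create account-one
gcloud config set account user-one@example.com
gcloud config set project project-one
gcloud config set compute/zone us-central1-a

# Configuration for the second account (non-default region/zone)
gcloud config configurations create account-two
gcloud config set account user-two@example.com
gcloud config set project project-two
gcloud config set compute/zone europe-west1-b

# Switch between configurations and start an instance under each
gcloud config configurations activate account-one
gcloud compute instances start instance-one
gcloud config configurations activate account-two
gcloud compute instances start instance-two
```

Note that gcloud config configurations create activates the new configuration automatically, so the gcloud config set commands apply to the configuration just created.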
You are deploying a production application on Compute Engine. You want to prevent anyone from accidentally destroying the instance by clicking the wrong button.
What should you do?
- A . Disable the flag “Delete boot disk when instance is deleted.”
- B . Enable delete protection on the instance.
- C . Disable Automatic restart on the instance.
- D . Enable Preemptibility on the instance.
B
Explanation:
Preventing Accidental VM Deletion: this document describes how to protect specific VM instances from deletion by setting the deletionProtection property on an Instance resource. To learn more about VM instances, read the Instances documentation. As part of your workload, there might be certain VM instances that are critical to running your application or services, such as an instance running a SQL server or a server used as a license manager. These VM instances might need to stay running indefinitely, so you need a way to protect them from being deleted. By setting the deletionProtection flag, a VM instance can be protected from accidental deletion. If a user attempts to delete a VM instance for which you have set the deletionProtection flag, the request fails. Only a user that has been granted a role with compute.instances.create permission can reset the flag to allow the resource to be deleted.
https://cloud.google.com/compute/docs/instances/preventing-accidental-vm-deletion
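Delete protection can be set when creating the instance or toggled on an existing one; the instance name and zone below are placeholders:

```shell
# Enable delete protection at creation time
gcloud compute instances create prod-app-vm \
    --zone=us-central1-a \
    --deletion-protection

# Or enable it on an existing instance
gcloud compute instances update prod-app-vm \
    --zone=us-central1-a \
    --deletion-protection

# Remove the protection later if the instance really must be deleted
gcloud compute instances update prod-app-vm \
    --zone=us-central1-a \
    --no-deletion-protection
```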
You deployed an application on a managed instance group in Compute Engine. The application accepts Transmission Control Protocol (TCP) traffic on port 389 and requires you to preserve the IP address of the client who is making a request. You want to expose the application to the internet by using a load balancer.
What should you do?
- A . Expose the application by using an internal passthrough Network Load Balancer.
- B . Expose the application by using an external passthrough Network Load Balancer.
- C . Expose the application by using a global external proxy Network Load Balancer.
- D . Expose the application by using a regional external proxy Network Load Balancer.
B
Explanation:
An external passthrough Network Load Balancer operates at layer 4 (TCP/UDP) and preserves the original client IP address. This is a key requirement stated in the question. It’s also designed for external-facing applications that need to handle TCP or UDP traffic directly without proxying at the application layer.
Option A: An internal passthrough Network Load Balancer is used for load balancing traffic within a Virtual Private Cloud (VPC) network, not for exposing applications to the public internet.
Options C & D: Global and regional external proxy Network Load Balancers terminate the client’s connection at the load balancer and establish a new connection to the backend instances. Because the traffic is proxied rather than passed through, the original client IP address is not preserved by default (it can be conveyed to the backends via the PROXY protocol, but that is not a direct passthrough).
Reference to Google Cloud Certified – Associate Cloud Engineer Documents:
The different types of Google Cloud Load Balancers, their operating layers (L4 vs. L7), and their capabilities, including client IP preservation, are thoroughly documented in the Google Cloud Load Balancing documentation, a critical area for the Associate Cloud Engineer certification. The distinction between passthrough and proxy load balancers is essential.
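As a sketch, a regional external passthrough Network Load Balancer for TCP port 389 might be set up as follows; the resource names, region, zone, and instance group are placeholders:

```shell
# Regional TCP health check for the backends
gcloud compute health-checks create tcp ldap-hc \
    --region=us-central1 --port=389

# External passthrough (layer 4) backend service backed by the MIG
gcloud compute backend-services create ldap-backend \
    --region=us-central1 --protocol=TCP \
    --load-balancing-scheme=EXTERNAL \
    --health-checks=ldap-hc --health-checks-region=us-central1
gcloud compute backend-services add-backend ldap-backend \
    --region=us-central1 \
    --instance-group=ldap-mig --instance-group-zone=us-central1-a

# Forwarding rule: client IPs are preserved because traffic is not proxied
gcloud compute forwarding-rules create ldap-fr \
    --region=us-central1 --load-balancing-scheme=EXTERNAL \
    --ports=389 --backend-service=ldap-backend
```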
You are managing a stateful application deployed on Google Kubernetes Engine (GKE) that can only have one replica. You recently discovered that the application becomes unstable at peak times. You have identified that the application needs more CPU than what has been configured in the manifest at these peak times. You want Kubernetes to allocate the application sufficient CPU resources during these peak times, while ensuring cost efficiency during off-peak periods.
What should you do?
- A . Enable cluster autoscaling on the GKE cluster.
- B . Configure a Vertical Pod Autoscaler on the Deployment.
- C . Configure a Horizontal Pod Autoscaler on the Deployment.
- D . Enable node auto-provisioning on the GKE cluster.
B
Explanation:
The Vertical Pod Autoscaler (VPA) in Kubernetes automatically adjusts the CPU and memory requests and limits of the containers within a pod based on historical and real-time resource usage. In this scenario, where a single-replica stateful application needs more CPU during peak times, VPA can dynamically increase the CPU allocated to the pod when needed and potentially decrease it during off-peak periods to optimize resource utilization and cost efficiency.
Option A: Cluster autoscaling adds or removes nodes in your GKE cluster based on the resource requests of your pods. While it can help with overall cluster capacity, it doesn’t directly address the need for more CPU for a specific pod.
Option C: Horizontal Pod Autoscaler (HPA) scales the number of pod replicas based on observed CPU utilization or other select metrics. Since the application can only have one replica, HPA is not suitable.
Option D: Node auto-provisioning is similar to cluster autoscaling, automatically creating and deleting node pools based on workload demands. It doesn’t directly manage the resources of individual pods.
Reference to Google Cloud Certified – Associate Cloud Engineer Documents:
The functionality and use cases of the Vertical Pod Autoscaler (VPA) are detailed in the Google Kubernetes Engine documentation, specifically within the resource management and autoscaling sections. Understanding how VPA can dynamically adjust pod resources is relevant to the Associate Cloud Engineer certification.
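A sketch of enabling VPA on GKE and attaching it to the Deployment follows (the cluster, region, and workload names are placeholders); in "Auto" mode the single replica is recreated with updated resource requests when recommendations change:

```shell
# Vertical Pod Autoscaler must be enabled at the cluster level on GKE
gcloud container clusters update my-cluster \
    --region=us-central1 --enable-vertical-pod-autoscaling

# Attach a VPA object to the single-replica Deployment
kubectl apply -f - <<EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: stateful-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stateful-app
  updatePolicy:
    updateMode: "Auto"
EOF
```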
Your company has workloads running on Compute Engine and on-premises. The Google Cloud Virtual Private Cloud (VPC) is connected to your WAN over a Virtual Private Network (VPN). You need to deploy a new Compute Engine instance and ensure that no public Internet traffic can be routed to it.
What should you do?
- A . Create the instance without a public IP address.
- B . Create the instance with Private Google Access enabled.
- C . Create a deny-all egress firewall rule on the VPC network.
- D . Create a route on the VPC to route all traffic to the instance over the VPN tunnel.
A
Explanation:
VMs cannot communicate over the internet without a public IP address. Private Google Access permits access to Google APIs and services in Google’s production infrastructure. https://cloud.google.com/vpc/docs/private-google-access
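Creating the instance without an external IP comes down to a single flag at creation time (the instance, zone, and subnet names are placeholders):

```shell
# --no-address creates the VM with only an internal IP, so no public
# internet traffic can be routed to it
gcloud compute instances create internal-only-vm \
    --zone=us-central1-a \
    --subnet=my-subnet \
    --no-address
```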
You are analyzing Google Cloud Platform service costs from three separate projects. You want to use this information to create service cost estimates by service type, daily and monthly, for the next six months using standard query syntax.
What should you do?
- A . Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.
- B . Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.
- C . Export your transactions to a local file, and perform analysis with a desktop tool.
- D . Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.
D
Explanation:
"…we recommend that you enable Cloud Billing data export to BigQuery at the same time that you create a Cloud Billing account."
https://cloud.google.com/billing/docs/how-to/export-data-bigquery
https://medium.com/google-cloud/analyzing-google-cloud-billing-data-with-big-query-30bae1c2aae4
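Once billing export to BigQuery is enabled, daily and monthly cost-per-service aggregates can be produced with standard SQL. The dataset and table names below follow the export's naming pattern but are placeholders:

```shell
bq query --use_legacy_sql=false '
SELECT
  service.description AS service,
  DATE(usage_start_time) AS usage_day,
  DATE_TRUNC(DATE(usage_start_time), MONTH) AS usage_month,
  SUM(cost) AS total_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
GROUP BY service, usage_day, usage_month
ORDER BY usage_month, usage_day, total_cost DESC'
```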
You are migrating a business critical application from your local data center into Google Cloud. As part of your high-availability strategy, you want to ensure that any data used by the application will be immediately available if a zonal failure occurs.
What should you do?
- A . Store the application data on a zonal persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
- B . Store the application data on a zonal persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
- C . Store the application data on a regional persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
- D . Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
D
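Option D's approach can be sketched with gcloud: a regional persistent disk is synchronously replicated across two zones, and --force-attach lets a VM in the surviving zone take it over after a zonal failure (the disk, VM, region, and zone names are placeholders):

```shell
# Regional persistent disk replicated across two zones
gcloud compute disks create app-data-disk \
    --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b \
    --size=200GB

# After a zonal failure, attach the disk to a VM in the surviving zone
gcloud compute instances attach-disk failover-vm \
    --zone=us-central1-b \
    --disk=app-data-disk --disk-scope=regional \
    --force-attach
```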
