Practice Free Associate Cloud Engineer Exam Online Questions
Configure your email address in the notification channel.
Explanation:
Specifying conditions for alerting policies: the conditions for an alerting policy define what is monitored and when to trigger an alert. For example, suppose you want to define an alerting policy that emails you if the CPU utilization of a Compute Engine VM instance is above 80% for more than 3 minutes. You use the conditions dialog to specify that you want to monitor the CPU utilization of a Compute Engine VM instance, and that you want an alerting policy to trigger when that utilization is above 80% for 3 minutes.
https://cloud.google.com/monitoring/alerts/ui-conditions-ga
https://cloud.google.com/monitoring/alerts/using-alerting-ui
https://cloud.google.com/monitoring/support/notification-options
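A minimal sketch of creating the email notification channel referenced in this answer, using a placeholder display name and address (the channels commands sit under the beta surface in many SDK releases, so the exact command group may vary):

```
# Create an email notification channel that alerting policies can reference.
gcloud beta monitoring channels create \
    --display-name="On-call email" \
    --type=email \
    --channel-labels=email_address=oncall@example.com
```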
You used the gcloud container clusters command to create two Google Kubernetes Engine (GKE) clusters: prod-cluster and dev-cluster.
• prod-cluster is a Standard cluster.
• dev-cluster is an Autopilot cluster.
When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?
A) [command screenshot not reproduced]
B) [command screenshot not reproduced]
C) [command screenshot not reproduced]
D) [command screenshot not reproduced]
- A . Option A
- B . Option B
- C . Option C
- D . Option D
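The option screenshots are not reproduced above, but whichever lettered option is correct, seeing dev-cluster's nodes requires pointing kubectl at that cluster's credentials before querying it. A minimal sketch, assuming dev-cluster is an Autopilot (regional) cluster in the placeholder region us-central1:

```
# Fetch credentials for the Autopilot cluster so kubectl targets it.
gcloud container clusters get-credentials dev-cluster --region us-central1

# kubectl now queries dev-cluster instead of prod-cluster.
kubectl get nodes
```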
You are working for a startup that was officially registered as a business 6 months ago. As your customer base grows, your use of Google Cloud increases. You want to allow all engineers to create new projects without asking them for their credit card information.
What should you do?
- A . Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
- B . Grant all engineers permission to create their own billing accounts for each new project.
- C . Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.
- D . Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.
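For context on option A, the mechanism described is a single billing account that project creators are allowed to link to their projects. A minimal sketch with placeholder project and billing account IDs (the caller needs the Billing Account User role on the billing account; in older SDK releases the command sits under gcloud beta billing):

```
# Link a newly created project to the shared billing account.
gcloud billing projects link my-new-project \
    --billing-account=0X0X0X-0X0X0X-0X0X0X
```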
You have developed a web application that serves traffic for a local event and are expecting unpredictable traffic. You have containerized the application, and you now want to deploy the application on Google Cloud. You also want to minimize costs.
What should you do?
- A . Deploy the web application as a Cloud Run service.
- B . Deploy the web application on Google Kubernetes Engine in Standard mode.
- C . Deploy the web application as a Cloud Run job.
- D . Deploy the web application on Google Kubernetes Engine in Autopilot mode.
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The key requirements are a containerized web application that handles unpredictable traffic and must minimize costs.
Cloud Run is the ideal solution because it is a fully managed, serverless platform for containers that automatically scales to zero instances when there is no traffic. This directly fulfills the cost requirement by eliminating charges for idle resources. It also excels at handling unpredictable, bursty traffic.
GKE Standard and Autopilot (Options B and D) incur costs for the cluster or nodes even when not serving traffic (unless carefully scaled down), making them less cost-efficient than Cloud Run’s native scale-to-zero for unpredictable, non-constant workloads.
Reference: Google Cloud Documentation – Cloud Run (Overview):
"Cloud Run is a managed compute platform that enables you to run containers that are invocable via HTTP requests… Because Cloud Run is serverless, it abstracts away all infrastructure management. It scales up or down automatically, including scaling to zero to minimize your cost."
Your company has a rapidly growing social media platform and a user base primarily located in North America. Due to increasing demand, your current on-premises PostgreSQL database, hosted in your United States headquarters data center, no longer meets your needs. You need to identify a cloud-based database solution that offers automatic scaling, multi-region support for future expansion, and maintains low latency.
- A . Use Bigtable.
- B . Use BigQuery.
- C . Use Spanner.
- D . Use Cloud SQL for PostgreSQL.
C
Explanation:
Comprehensive and Detailed Explanation:
Evaluating each database option against the requirements (automatic scaling, multi-region support for future expansion, and low latency):
Spanner (C) is a fully managed relational database that scales horizontally and automatically, offers multi-region configurations, and provides strongly consistent, low-latency reads and writes, so it satisfies all three requirements.
Cloud SQL for PostgreSQL (D) is the closest lift-and-shift target but does not scale automatically or support multi-region writes. BigQuery (B) is an analytics data warehouse rather than an operational database. Bigtable (A) is a NoSQL wide-column store and is not a relational replacement for PostgreSQL.
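As an illustration of the multi-region capability, a Spanner instance can be created directly in a multi-region configuration. A minimal sketch with a placeholder instance name and node count (nam3 is one of the North America multi-region configurations):

```
# Create a Spanner instance in a North America multi-region configuration.
gcloud spanner instances create social-db \
    --config=nam3 \
    --description="Social platform database" \
    --nodes=1
```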
You have a developer laptop with the Cloud SDK installed on Ubuntu. The Cloud SDK was installed from the Google Cloud Ubuntu package repository. You want to test your application locally on your laptop with Cloud Datastore.
What should you do?
- A . Export Cloud Datastore data using gcloud datastore export.
- B . Create a Cloud Datastore index using gcloud datastore indexes create.
- C . Install the google-cloud-sdk-datastore-emulator component using the apt-get install command.
- D . Install the cloud-datastore-emulator component using the gcloud components install command.
D
Explanation:
The Datastore emulator provides local emulation of the production Datastore environment. You can use the emulator to develop and test your application locally.
Ref: https://cloud.google.com/datastore/docs/tools/datastore-emulator
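A minimal sketch of option D in practice, assuming the SDK's component manager is enabled in your installation (the env-init step exports the environment variables that point client libraries at the local emulator):

```
# Install the emulator component and start a local Datastore emulator.
gcloud components install cloud-datastore-emulator
gcloud beta emulators datastore start &

# Export DATASTORE_* environment variables for the application under test.
$(gcloud beta emulators datastore env-init)
```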
You want to host your video encoding software on Compute Engine. Your user base is growing rapidly, and users need to be able to encode their videos at any time without interruption or CPU limitations. You must ensure that your encoding solution is highly available, and you want to follow Google-recommended practices to automate operations.
What should you do?
- A . Deploy your solution on multiple standalone Compute Engine instances, and increase the number of existing instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
- B . Deploy your solution on multiple standalone Compute Engine instances, and replace existing instances with high-CPU instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
- C . Deploy your solution to an instance group, and increase the number of available instances whenever you see high CPU utilization in Cloud Monitoring.
- D . Deploy your solution to an instance group, and set the autoscaling based on CPU utilization.
D
Explanation:
Instance groups are collections of virtual machine (VM) instances that you can manage as a single entity. Instance groups can help you simplify the management of multiple instances, reduce operational costs, and improve the availability and performance of your applications. Instance groups support autoscaling, which automatically adds or removes instances from the group based on increases or decreases in load. Autoscaling helps your applications gracefully handle increases in traffic and reduces cost when the need for resources is lower. You can set the autoscaling policy based on CPU utilization, load balancing capacity, Cloud Monitoring metrics, or a queue-based workload. In this case, since the video encoding software is CPU-intensive, setting the autoscaling based on CPU utilization is the best option to ensure high availability and optimal performance.
Reference: Instance groups
Autoscaling groups of instances
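A minimal sketch of option D, assuming an existing instance template named encoder-template and placeholder group name, region, and thresholds:

```
# Create a regional managed instance group from an existing template.
gcloud compute instance-groups managed create encoder-mig \
    --template=encoder-template \
    --size=3 \
    --region=us-central1

# Autoscale on CPU utilization: add instances when average CPU exceeds ~60%.
gcloud compute instance-groups managed set-autoscaling encoder-mig \
    --region=us-central1 \
    --min-num-replicas=3 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.60 \
    --cool-down-period=90
```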
Your company requires that Google Cloud products are created with a specific configuration to comply with your company’s security policies. You need to implement a mechanism that will allow software engineers at your company to deploy and update Google Cloud products in a preconfigured and approved manner.
What should you do?
- A . Create Java packages that utilize the Google Cloud Client Libraries for Java to configure Google Cloud products. Store and share the packages in a source code repository.
- B . Create bash scripts that utilize the Google Cloud CLI to configure Google Cloud products. Store and share the bash scripts in a source code repository.
- C . Create Terraform modules that utilize the Google Cloud Terraform Provider to configure Google Cloud products. Store and share the modules in a source code repository.
- D . Use the Google Cloud APIs by using curl to configure Google Cloud products. Store and share the curl commands in a source code repository.
You are building an application that processes data files uploaded from thousands of suppliers. Your primary goals for the application are data security and the expiration of aged data.
You need to design the application to:
• Restrict access so that suppliers can access only their own data.
• Give suppliers write access to data only for 30 minutes.
• Delete data that is over 45 days old.
You have a very short development cycle, and you need to make sure that the application requires minimal maintenance.
Which two strategies should you use? (Choose two.)
- A . Build a lifecycle policy to delete Cloud Storage objects after 45 days.
- B . Use signed URLs to allow suppliers limited time access to store their objects.
- C . Set up an SFTP server for your application, and create a separate user for each supplier.
- D . Build a Cloud function that triggers a timer of 45 days to delete objects that have expired.
- E . Develop a script that loops through all Cloud Storage buckets and deletes any buckets that are older than 45 days.
A,B
Explanation:
(A) Object Lifecycle Management Delete
The Delete action deletes an object when the object meets all conditions specified in the lifecycle rule.
Exception: In buckets with Object Versioning enabled, deleting the live version of an object causes it to become a noncurrent version, while deleting a noncurrent version deletes that version permanently.
https://cloud.google.com/storage/docs/lifecycle#delete
(B) Signed URLs
This page provides an overview of signed URLs, which you use to give time-limited resource access to anyone in possession of the URL, regardless of whether they have a Google account.
https://cloud.google.com/storage/docs/access-control/signed-urls
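A minimal sketch of the two chosen strategies, with a placeholder bucket name and a service account key file used for signing (the 30-minute write window maps to the signed URL expiry, and the 45-day expiration maps to the lifecycle rule's age condition):

```
# Lifecycle rule: delete objects older than 45 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 45}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://supplier-uploads

# Signed URL: give one supplier 30 minutes of write access to one object.
gsutil signurl -m PUT -d 30m sa-key.json gs://supplier-uploads/supplier-123/data.csv
```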
You need to update a deployment in Deployment Manager without any resource downtime in the deployment.
Which command should you use?
- A . gcloud deployment-manager deployments create --config <deployment-config-path>
- B . gcloud deployment-manager deployments update --config <deployment-config-path>
- C . gcloud deployment-manager resources create --config <deployment-config-path>
- D . gcloud deployment-manager resources update --config <deployment-config-path>
B
Explanation:
Reference: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/update
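A minimal sketch of option B, assuming an existing deployment named my-deployment and a config file config.yaml; the optional --preview flag stages the change so it can be reviewed before any resources are modified:

```
# Stage the update without changing any resources yet.
gcloud deployment-manager deployments update my-deployment \
    --config config.yaml --preview

# Apply the previewed update.
gcloud deployment-manager deployments update my-deployment
```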
