Practice Free Professional Cloud DevOps Engineer Exam Online Questions
As part of your company’s initiative to shift left on security, the InfoSec team is asking all teams to implement guardrails on all Google Kubernetes Engine (GKE) clusters to only allow the deployment of trusted and approved images. You need to determine how to satisfy the InfoSec team’s goal of shifting left on security.
What should you do?
- A . Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods
- B . Configure Identity and Access Management (IAM) policies to create a least privilege model on your GKE clusters
- C . Use Binary Authorization to attest images during your CI/CD pipeline
- D . Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images
You use Spinnaker to deploy your application and have created a canary deployment stage in the pipeline. Your application has an in-memory cache that loads objects at start time. You want to automate the comparison of the canary version against the production version.
How should you configure the canary analysis?
- A . Compare the canary with a new deployment of the current production version.
- B . Compare the canary with a new deployment of the previous production version.
- C . Compare the canary with the existing deployment of the current production version.
- D . Compare the canary with the average performance of a sliding window of previous production versions.
Your team is preparing to launch a new API in Cloud Run. The API uses an OpenTelemetry agent to send distributed tracing data to Cloud Trace to monitor the time each request takes. The team has noticed inconsistent trace collection. You need to resolve the issue.
What should you do?
- A . Increase the CPU limit in Cloud Run from 2 to 4.
- B . Use an HTTP health check.
- C . Configure CPU to be allocated only during request processing.
- D . Configure CPU to be always-allocated.
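As background for the CPU-allocation options above: on Cloud Run, CPU can be allocated only during request processing (the default) or kept always allocated, which lets background threads such as a tracing agent keep running between requests. A rough sketch of switching between the two modes (the service name `my-api` and region are placeholders):

```shell
# Always allocate CPU, so background work such as an OpenTelemetry
# agent exporting spans keeps running between requests.
gcloud run services update my-api \
  --region=us-central1 \
  --no-cpu-throttling

# Revert to the default: CPU allocated only while a request is processed.
gcloud run services update my-api \
  --region=us-central1 \
  --cpu-throttling
```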
You support a user-facing web application. When analyzing the application’s error budget over the previous six months, you notice that the application has never consumed more than 5% of its error budget in any given time window. You hold a Service Level Objective (SLO) review with business stakeholders and confirm that the SLO is set appropriately. You want your application’s SLO to more closely reflect its observed reliability.
What steps can you take to further that goal while balancing velocity, reliability, and business needs? (Choose two.)
- A . Add more serving capacity to all of your application’s zones.
- B . Have more frequent or potentially risky application releases.
- C . Tighten the SLO to match the application’s observed reliability.
- D . Implement and measure additional Service Level Indicators (SLIs) for the application.
- E . Announce planned downtime to consume more error budget, and ensure that users are not depending on a tighter SLO.
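For context on the error-budget arithmetic behind this question: the error budget is simply 1 minus the SLO target, and "consumed 5% of the budget" compares observed bad events against that allowance. A minimal sketch (the figures below are illustrative, not from the question):

```python
def error_budget_consumed(slo_target: float, total_requests: int, bad_requests: int) -> float:
    """Fraction of the error budget consumed in a window.

    slo_target: e.g. 0.999 means 99.9% of requests must succeed.
    """
    # The error budget, expressed in allowed bad requests for the window.
    allowed_bad = (1.0 - slo_target) * total_requests
    return bad_requests / allowed_bad

# Example: 99.9% SLO over 1,000,000 requests allows 1,000 failures,
# so 50 observed failures consume about 5% of the budget.
print(error_budget_consumed(0.999, 1_000_000, 50))  # ≈ 0.05
```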
You are configuring your CI/CD pipeline natively on Google Cloud. You want builds in a pre-production Google Kubernetes Engine (GKE) environment to be automatically load-tested before being promoted to the production GKE environment. You need to ensure that only builds that have passed this test are deployed to production. You want to follow Google-recommended practices.
How should you configure this pipeline with Binary Authorization?
- A . Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using a key stored in Cloud Key Management Service (Cloud KMS).
- B . Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) authenticated through Workload Identity.
- C . Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) with a service account JSON key stored as a Kubernetes Secret.
- D . Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using their personal private key.
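For reference, an attestation created from a CI/CD step with a signing key held in Cloud KMS looks roughly like this (project, attestor, repository, and key names are all placeholders; the pipeline would run this as its own service account rather than handling key material directly):

```shell
# Sign and create a Binary Authorization attestation for an image digest,
# using a key that never leaves Cloud KMS.
gcloud container binauthz attestations sign-and-create \
  --project=my-project \
  --artifact-url="us-docker.pkg.dev/my-project/my-repo/my-app@sha256:<digest>" \
  --attestor=load-test-passed \
  --attestor-project=my-project \
  --keyversion-project=my-project \
  --keyversion-location=us-central1 \
  --keyversion-keyring=binauthz \
  --keyversion-key=load-test-signer \
  --keyversion=1
```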
You are developing reusable infrastructure as code modules. Each module contains integration tests that launch the module in a test project. You are using GitHub for source control. You need to Continuously test your feature branch and ensure that all code is tested before changes are accepted. You need to implement a solution to automate the integration tests.
What should you do?
- A . Use a Jenkins server for CI/CD pipelines. Periodically run all tests in the feature branch.
- B . Use Cloud Build to run the tests. Trigger all tests to run after a pull request is merged.
- C . Ask the pull request reviewers to run the integration tests before approving the code.
- D . Use Cloud Build to run tests in a specific folder. Trigger Cloud Build for every GitHub pull request.
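As a sketch of the Cloud Build approach described in the options, a build config that runs a module's integration tests from its own folder might look like the following (the folder layout and Terraform tooling are assumptions); the Cloud Build trigger itself would be configured to fire on every GitHub pull request:

```yaml
# cloudbuild.yaml -- run one module's integration tests in a test project.
steps:
  - id: integration-tests
    name: hashicorp/terraform:1.7
    dir: modules/network   # only this module's folder is tested
    entrypoint: sh
    args:
      - -c
      - |
        terraform init -input=false
        terraform test
timeout: 1800s
```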
Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application’s frontend using an appropriate Service Level Indicator (SLI).
What should you do?
- A . Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.
- B . Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.
- C . Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.
- D . Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.
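For reference, once a Cloud Monitoring metrics adapter is installed in the cluster, a horizontal pod autoscaler can scale on the load balancer's request count as an external metric. A rough sketch (names and target values are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          # GCLB request count, surfaced through the metrics adapter
          name: loadbalancing.googleapis.com|https|request_count
        target:
          type: AverageValue
          averageValue: "100"   # illustrative: target requests per pod
```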
You need to run a business-critical workload on a fixed set of Compute Engine instances for several months. The workload is stable with the exact amount of resources allocated to it. You want to lower the costs for this workload without any performance implications.
What should you do?
- A . Purchase Committed Use Discounts.
- B . Migrate the instances to a Managed Instance Group.
- C . Convert the instances to preemptible virtual machines.
- D . Create an Unmanaged Instance Group for the instances used to run the workload.
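For reference, a committed use discount for a stable, fixed footprint is purchased per region as a resource commitment, along these lines (all values are placeholders):

```shell
# Commit to 16 vCPUs and 64 GB of memory in one region for 12 months.
gcloud compute commitments create stable-workload-commitment \
  --region=us-central1 \
  --plan=12-month \
  --resources=vcpu=16,memory=64GB
```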
You support a high-traffic web application and want to ensure that the home page loads in a timely manner. As a first step, you decide to implement a Service Level Indicator (SLI) to represent home page request latency with an acceptable page load time set to 100 ms.
What is the Google-recommended way of calculating this SLI?
- A . Bucketize the request latencies into ranges, and then compute the percentile at 100 ms.
- B . Bucketize the request latencies into ranges, and then compute the median and 90th percentiles.
- C . Count the number of home page requests that load in under 100 ms, and then divide by the total number of home page requests.
- D . Count the number of home page requests that load in under 100 ms, and then divide by the total number of all web application requests.
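The good-events-over-total-events calculation described in the options can be sketched as follows (a hypothetical helper for illustration, not part of any Google library):

```python
def latency_sli(latencies_ms: list[float], threshold_ms: float = 100.0) -> float:
    """SLI = good events / total events.

    A home page request counts as 'good' if it loads in under threshold_ms.
    """
    if not latencies_ms:
        return 1.0  # no traffic: treat as fully within SLO
    good = sum(1 for latency in latencies_ms if latency < threshold_ms)
    return good / len(latencies_ms)

# 4 of 5 requests load in under 100 ms, so the SLI is 0.8.
print(latency_sli([35.0, 80.0, 99.0, 120.0, 60.0]))  # -> 0.8
```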