Practice Free Professional Cloud Developer Exam Online Questions
Your development team has been asked to refactor an existing monolithic application into a set of composable microservices.
Which design aspects should you implement for the new application? (Choose two.)
- A . Develop the microservice code in the same programming language used by the microservice caller.
- B . Create an API contract agreement between the microservice implementation and microservice caller.
- C . Require asynchronous communications between all microservice implementations and microservice callers.
- D . Ensure that sufficient instances of the microservice are running to accommodate the performance requirements.
- E . Implement a versioning scheme to permit future changes that could be incompatible with the current interface.
Your organization has recently begun an initiative to replatform its legacy applications onto Google Kubernetes Engine. You need to decompose a monolithic application into microservices. Multiple instances have read and write access to a configuration file, which is stored on a shared file system. You want to minimize the effort required to manage this transition, and you want to avoid rewriting the application code.
What should you do?
- A . Create a new Cloud Storage bucket, and mount it via FUSE in the container.
- B . Create a new persistent disk, and mount the volume as a shared PersistentVolume.
- C . Create a new Filestore instance, and mount the volume as an NFS PersistentVolume.
- D . Create a new ConfigMap and volumeMount to store the contents of the configuration file.
D
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/configmap
ConfigMaps bind non-sensitive configuration artifacts such as configuration files, command-line arguments, and environment variables to your Pod containers and system components at runtime.
A ConfigMap separates your configurations from your Pod and components, which helps keep your workloads portable. This makes their configurations easier to change and manage, and prevents hardcoding configuration data to Pod specifications.
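As a rough sketch of how option D could be carried out programmatically, the snippet below uses the official Python kubernetes client to load the existing configuration file into a ConfigMap. The file name app.conf, the ConfigMap name app-config, and the default namespace are placeholder assumptions:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
core = client.CoreV1Api()

# Read the configuration file that currently lives on the shared file system.
with open("app.conf") as f:
    contents = f.read()

# Store its contents in a ConfigMap so Pods no longer depend on the shared file system.
config_map = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="app-config"),
    data={"app.conf": contents},
)
core.create_namespaced_config_map(namespace="default", body=config_map)
```

The Pod template then references the ConfigMap through a configMap volume and a volumeMount at the path the application already reads.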
You need to deploy a new European version of a website hosted on Google Kubernetes Engine. The current and new websites must be accessed via the same HTTP(S) load balancer’s external IP address, but have different domain names.
What should you do?
- A . Define a new Ingress resource with a host rule matching the new domain
- B . Modify the existing Ingress resource with a host rule matching the new domain
- C . Create a new Service of type LoadBalancer specifying the existing IP address as the loadBalancerIP
- D . Generate a new Ingress resource and specify the existing IP address as the kubernetes.io/ingress.global-static-ip-name annotation value
B
Explanation:
https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
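To illustrate option B, here is a rough sketch using the Python kubernetes client to append a host rule to an existing Ingress. The Ingress name web, the host eu.example.com, and the Service name website-eu are placeholder assumptions:

```python
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

# New host rule for the European site (all names are placeholders).
eu_rule = client.V1IngressRule(
    host="eu.example.com",
    http=client.V1HTTPIngressRuleValue(paths=[
        client.V1HTTPIngressPath(
            path="/",
            path_type="Prefix",
            backend=client.V1IngressBackend(
                service=client.V1IngressServiceBackend(
                    name="website-eu",
                    port=client.V1ServiceBackendPort(number=80),
                )
            ),
        )
    ]),
)

# Read the existing Ingress, append the rule, and update it in place.
ingress = networking.read_namespaced_ingress(name="web", namespace="default")
ingress.spec.rules.append(eu_rule)
networking.replace_namespaced_ingress(name="web", namespace="default", body=ingress)
```

Because the existing Ingress, and therefore its load balancer, is reused, the external IP address stays the same while traffic for the new domain is routed to the EU Service.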
Your company’s product team has a new requirement based on customer demand to autoscale your stateless and distributed service running in a Google Kubernetes Engine (GKE) cluster. You want to find a solution that minimizes changes because this feature will go live in two weeks.
What should you do?
- A . Deploy a Vertical Pod Autoscaler, and scale based on the CPU load.
- B . Deploy a Vertical Pod Autoscaler, and scale based on a custom metric.
- C . Deploy a Horizontal Pod Autoscaler, and scale based on the CPU load.
- D . Deploy a Horizontal Pod Autoscaler, and scale based on a custom metric.
C
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload’s CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster.
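As a minimal sketch of option C, the Python kubernetes client can create a CPU-based HorizontalPodAutoscaler for the existing workload. The Deployment name frontend and the replica and utilization numbers are placeholder assumptions:

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# CPU-based HPA targeting an existing Deployment (names and numbers are placeholders).
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="frontend-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="frontend"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=60,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```

Because the service is stateless and distributed, CPU-based horizontal scaling requires no changes to the application itself, which fits the two-week timeline.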
Your company has a data warehouse that keeps your application information in BigQuery. The BigQuery data warehouse stores 2 PB of user data.
Recently, your company expanded your user base to include EU users and needs to comply with these requirements:
Your company must be able to delete all user account information upon user request.
All EU user data must be stored in a single region specifically for EU users.
Which two actions should you take? (Choose two.)
- A . Use BigQuery federated queries to query data from Cloud Storage.
- B . Create a dataset in the EU region that will keep information about EU users only.
- C . Create a Cloud Storage bucket in the EU region to store information for EU users only.
- D . Re-upload your data using a Cloud Dataflow pipeline, filtering out your user records.
- E . Use DML statements in BigQuery to update/delete user records based on their requests.
B,E
Explanation:
Reference: https://cloud.google.com/solutions/bigquery-data-warehouse
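As a hedged sketch of how these requirements map to the google-cloud-bigquery Python client, the snippet below creates an EU-located dataset and deletes a single user's records with a parameterized DML statement. The project, dataset, table, and column names, the region, and the user ID are placeholder assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Keep EU user data in a dataset pinned to a single EU region (placeholder IDs).
dataset = bigquery.Dataset("my-project.eu_users")
dataset.location = "europe-west1"
client.create_dataset(dataset, exists_ok=True)

# Delete one user's records on request with a parameterized DML statement.
job = client.query(
    "DELETE FROM `my-project.eu_users.accounts` WHERE user_id = @user_id",
    job_config=bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("user_id", "STRING", "12345")]
    ),
)
job.result()  # wait for the DML job to finish
```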
You work on an application that relies on Cloud Spanner as its main datastore. New application features have occasionally caused performance regressions. You want to prevent performance issues by running an automated performance test with Cloud Build for each commit made. If multiple commits are made at the same time, the tests might run concurrently.
What should you do?
- A . Create a new project with a random name for every build. Load the required data. Delete the project after the test is run.
- B . Create a new Cloud Spanner instance for every build. Load the required data. Delete the Cloud Spanner instance after the test is run.
- C . Create a project with a Cloud Spanner instance and the required data. Adjust the Cloud Build build file to automatically restore the data to its previous state after the test is run.
- D . Start the Cloud Spanner emulator locally. Load the required data. Shut down the emulator after the test is run.
B
Explanation:
Because multiple commits might trigger tests that run concurrently, each build needs an isolated, independent environment so that test runs cannot interfere with one another. Creating a dedicated Cloud Spanner instance per build, and deleting it once the test finishes, provides that isolation against the real service with far less overhead than provisioning an entire new project per build. The Cloud Spanner emulator is not a good fit here: it is intended for local development, unit, and integration testing, and it does not replicate Cloud Spanner's performance characteristics at scale or under load, which is exactly what these tests must measure.
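A rough sketch of option B inside a Cloud Build step, using the google-cloud-spanner Python client; the project ID, instance configuration, and build ID are placeholder assumptions:

```python
from google.cloud import spanner

# Placeholder: in Cloud Build the build ID would come from the $BUILD_ID substitution.
build_id = "abc123"
client = spanner.Client()

# Create an isolated instance for this build so concurrent test runs don't interfere.
instance = client.instance(
    f"perf-test-{build_id}",
    configuration_name="projects/my-project/instanceConfigs/regional-europe-west1",
    display_name=f"perf test {build_id}",
    node_count=1,
)
instance.create().result()  # create() returns a long-running operation; wait for it

# ... load the required data and run the performance test ...

# Tear the instance down once the test completes.
instance.delete()
```

Each build gets its own instance, so concurrent test runs stay isolated, and deleting the instance at the end keeps costs bounded.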
Your application is logging to Stackdriver. You want to get the count of all requests on all /api/alpha/* endpoints.
What should you do?
- A . Add a Stackdriver counter metric for path:/api/alpha/.
- B . Add a Stackdriver counter metric for endpoint:/api/alpha/*.
- C . Export the logs to Cloud Storage and count lines matching /api/alpha.
- D . Export the logs to Cloud Pub/Sub and count lines matching /api/alpha.
You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices of auto-rotating your security keys and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service.
What should you do next?
- A . Assign the Google Cloud service account to your GKE Pod using Workload Identity.
- B . Export the Google Cloud service account, and share it with the Pod as a Kubernetes Secret.
- C . Export the Google Cloud service account, and embed it in the source code of the application.
- D . Export the Google Cloud service account, and upload it to HashiCorp Vault to generate a dynamic service account for your application.
A
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
Applications running on GKE might need access to Google Cloud APIs such as Compute Engine API, BigQuery Storage API, or Machine Learning APIs.
Workload Identity allows a Kubernetes service account in your GKE cluster to act as an IAM service account. Pods that use the configured Kubernetes service account automatically authenticate as the IAM service account when accessing Google Cloud APIs. Using Workload Identity allows you to assign distinct, fine-grained identities and authorization for each application in your cluster.
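As a minimal sketch of the one-time setup in option A, assuming Workload Identity is already enabled on the cluster and node pool, the Kubernetes service account used by the Pod is annotated so it impersonates the IAM service account. The names app-ksa, the default namespace, and app-sa@my-project.iam.gserviceaccount.com are placeholder assumptions:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Annotate the Kubernetes service account so Workload Identity maps it to the
# IAM service account (all names here are placeholders).
core.patch_namespaced_service_account(
    name="app-ksa",
    namespace="default",
    body={"metadata": {"annotations": {
        "iam.gke.io/gcp-service-account": "app-sa@my-project.iam.gserviceaccount.com"
    }}},
)
```

The IAM service account also needs a binding granting roles/iam.workloadIdentityUser to that Kubernetes service account; after that, Google Cloud client libraries in the Pod authenticate through Application Default Credentials, so no key files have to be exported, stored, or rotated by hand.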
You are deploying your application to a Compute Engine virtual machine instance. Your application is configured to write its log files to disk. You want to view the logs in Stackdriver Logging without changing the application code.
What should you do?
- A . Install the Stackdriver Logging Agent and configure it to send the application logs.
- B . Use a Stackdriver Logging Library to log directly from the application to Stackdriver Logging.
- C . Provide the log file folder path in the metadata of the instance to configure it to send the application logs.
- D . Change the application to log to /var/log so that its logs are automatically sent to Stackdriver Logging.