Practice Free Associate Cloud Engineer Exam Online Questions
While working with a received document using Intelligent Document Automation, which three capabilities should a consultant leverage with Health Cloud out-of-the-box? Choose 3 answers
- A . Record Type Association
- B . Document Rotation
- C . Automated Document Checklist Item Creation
- D . eFax Connection
- E . Barcode Scanning
A, C, E
Explanation:
Salesforce Health Cloud’s Intelligent Document Automation streamlines document management and integrates directly into workflows.
Below are the key out-of-the-box capabilities leveraged with Intelligent Document Automation:
Record Type Association (A):
Automatically associates received documents with the appropriate record type, such as insurance claims, prescriptions, or referrals.
This ensures accurate categorization and alignment with organizational data models.
Automated Document Checklist Item Creation (C):
Automatically generates checklist items based on received documents, ensuring that follow-up tasks or reviews are created and assigned to appropriate users.
Barcode Scanning (E):
Supports barcode scanning to quickly identify and associate documents with the correct patient or case, improving operational efficiency and reducing errors.
Why Other Options Are Incorrect:
B (Document Rotation): Document rotation is not a built-in feature of Intelligent Document Automation in Health Cloud.
D (eFax Connection): eFax integration requires additional setup or third-party tools and is not an out-of-the-box Health Cloud feature.
Reference: Intelligent Document Automation Overview
Salesforce Document Management Features
You deployed an App Engine application using gcloud app deploy, but it did not deploy to the intended project. You want to find out why this happened and where the application deployed.
What should you do?
- A . Check the app.yaml file for your application and check project settings.
- B . Check the web-application.xml file for your application and check project settings.
- C . Go to Deployment Manager and review settings for deployment of applications.
- D . Go to Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment.
D
Explanation:
C:\GCP\appeng>gcloud config list
[core]
account = [email protected]
disable_usage_reporting = False
project = my-first-demo-xxxx
https://cloud.google.com/endpoints/docs/openapi/troubleshoot-gce-deployment
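As a quick check, you can confirm which project the CLI is targeting and point it at the intended project before redeploying; the project ID below is a placeholder.
$ gcloud config list project # show the project the CLI is currently configured to use
$ gcloud config set project my-intended-project-id # switch the active configuration to the intended project (placeholder ID)
$ gcloud app deploy --project=my-intended-project-id # or override the target project for a single deployment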
You are developing an internet of things (IoT) application that captures sensor data from multiple devices that have already been set up. You need to identify the global data storage product your company should use to store this data. You must ensure that the storage solution you choose meets your requirements of sub-millisecond latency.
What should you do?
- A . Store the IoT data in Spanner. Use caches to speed up the process and avoid latencies.
- B . Store the IoT data in Bigtable.
- C . Capture IoT data in BigQuery datasets.
- D . Store the IoT data in Cloud Storage. Implement caching by using Cloud CDN.
B
Explanation:
Let’s evaluate each option against the requirement of sub-millisecond latency for globally stored IoT data:
A (Spanner): Spanner is a globally distributed relational database, but it is not optimized for this access pattern, and bolting on a caching layer to chase latency adds cost and operational complexity.
B (Bigtable): Bigtable is Google's wide-column NoSQL store built for time-series and IoT sensor data; it delivers consistently very low-latency reads and writes at massive scale and supports replication across regions, so it meets the requirement directly.
C (BigQuery): BigQuery is an analytics data warehouse; query latency is measured in seconds, not milliseconds, so it is unsuitable as the primary low-latency store.
D (Cloud Storage + Cloud CDN): Cloud Storage is object storage and Cloud CDN only caches HTTP(S) content at the edge; neither is designed for low-latency lookups of structured sensor data.
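For reference, provisioning a Bigtable instance and table for this kind of workload could look roughly like the sketch below; the project, instance, cluster, zone, and table names are placeholders, and the cbt tool is installed with gcloud components install cbt.
$ gcloud bigtable instances create iot-sensors --display-name="IoT sensor data" --cluster-config=id=iot-sensors-c1,zone=us-central1-a,nodes=3 # placeholder names and sizing
$ cbt -project=my-project -instance=iot-sensors createtable sensor-readings # create a table for the sensor readings
$ cbt -project=my-project -instance=iot-sensors createfamily sensor-readings metrics # add a column family for the metric values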
You have been asked to set up Object Lifecycle Management for objects stored in storage buckets. The objects are written once and accessed frequently for 30 days. After 30 days, the objects are not read again unless there is a special need. The object should be kept for three years, and you need to minimize cost.
What should you do?
- A . Set up a policy that uses Nearline storage for 30 days and then moves to Archive storage for three years.
- B . Set up a policy that uses Standard storage for 30 days and then moves to Archive storage for three years.
- C . Set up a policy that uses Nearline storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.
- D . Set up a policy that uses Standard storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.
B
Explanation:
The key to understanding the requirement is: "The objects are written once and accessed frequently for 30 days."
Standard Storage
Standard Storage is best for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time.
Archive Storage
Archive Storage is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Unlike the "coldest" storage services offered by other Cloud providers, your data is available within milliseconds, not hours or days. Archive Storage is the best choice for data that you plan to access less than once a year.
https://cloud.google.com/storage/docs/storage-classes#standard
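A lifecycle policy matching option B could look roughly like the sketch below; the bucket name is a placeholder, and the delete rule (about 30 days in Standard plus three years in Archive) assumes the objects can be removed once the retention period ends.
$ cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
     "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}},
    {"action": {"type": "Delete"}, "condition": {"age": 1125}}
  ]
}
EOF
$ gsutil lifecycle set lifecycle.json gs://my-example-bucket # apply the policy to the bucket (placeholder name)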
Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a Google Kubernetes Engine cluster while optimizing for cost.
What should you do?
- A . Create a cluster with a single node-pool by using standard VMs. Label the fault-tolerant Deployments as spot-true.
- B . Create a cluster with a single node-pool by using Spot VMs. Label the critical Deployments as spot-false.
- C . Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool by using standard VMs.
- D . Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.
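For context on the mechanics these options refer to: a cluster can combine a standard (on-demand) node pool with a Spot VM node pool, and Deployments are steered onto Spot nodes with the cloud.google.com/gke-spot node label. A rough sketch with placeholder cluster and pool names:
$ gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=3 # default node pool on standard VMs
$ gcloud container node-pools create spot-pool --cluster=my-cluster --zone=us-central1-a --spot --num-nodes=3 # additional node pool on Spot VMs
# A Deployment can then target the Spot nodes with a nodeSelector such as:
#   nodeSelector:
#     cloud.google.com/gke-spot: "true"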
You have an on-premises data analytics set of binaries that processes data files in memory for about 45 minutes every midnight. The sizes of those data files range from 1 gigabyte to 16 gigabytes. You want to migrate this application to Google Cloud with minimal effort and cost.
What should you do?
- A . Upload the code to Cloud Functions. Use Cloud Scheduler to start the application.
- B . Create a container for the set of binaries. Use Cloud Scheduler to start a Cloud Run job for the container.
- C . Create a container for the set of binaries. Deploy the container to Google Kubernetes Engine (GKE) and use the Kubernetes scheduler to start the application.
- D . Lift and shift to a VM on Compute Engine. Use an instance schedule to start and stop the instance.
Your company has multiple projects linked to a single billing account in Google Cloud. You need to visualize the costs with specific metrics that should be dynamically calculated based on company-specific criteria. You want to automate the process.
What should you do?
- A . In the Google Cloud console, visualize the costs related to the projects in the Reports section.
- B . In the Google Cloud console, visualize the costs related to the projects in the Cost breakdown section.
- C . In the Google Cloud console, use the export functionality of the Cost table. Create a Looker Studio dashboard on top of the CSV export.
- D . Configure Cloud Billing data export to BigQuery for the billing account. Create a Looker Studio dashboard on top of the BigQuery export.
D
Explanation:
Cloud Billing export to BigQuery enables you to export detailed Google Cloud billing data (such as usage, cost estimates, and pricing data) automatically throughout the day to a BigQuery dataset that you specify. Then you can access your Cloud Billing data from BigQuery for detailed analysis, or use a tool like Looker Studio to visualize your data.
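Once the export is enabled, the billing data lands in a BigQuery table that Looker Studio can query directly, which is where the company-specific metrics can be calculated. A rough illustration of such a query; the project and dataset names are placeholders, and the export table name ends with your billing account ID:
$ bq query --use_legacy_sql=false '
SELECT
  project.id AS project_id,
  SUM(cost) + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
GROUP BY project_id
ORDER BY net_cost DESC'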
You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in the gcloud CLI logs. You want to prevent your proxy credentials from being logged.
What should you do?
- A . Configure username and password by using gcloud config set proxy/username and gcloud config set proxy/password commands.
- B . Encode username and password in sha256 encoding, and save it to a text file. Use filename as a value in the gcloud configure set core/custom_ca_certs_file command.
- C . Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configuration file.
- D . Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.
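For reference, any gcloud property can be overridden through a CLOUDSDK_SECTION_PROPERTY environment variable, which keeps secret values out of gcloud's configuration file; a rough sketch for a Linux or macOS shell with placeholder values:
$ export CLOUDSDK_PROXY_USERNAME=proxyuser # placeholder credentials, supplied only through the environment
$ export CLOUDSDK_PROXY_PASSWORD=proxypass
$ gcloud config set proxy/type http # the non-secret proxy settings can stay in the configuration
$ gcloud config set proxy/address proxy.example.com
$ gcloud config set proxy/port 3128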
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution.
What should you do?
- A . Deploy the monitoring pod in a StatefulSet object.
- B . Deploy the monitoring pod in a DaemonSet object.
- C . Reference the monitoring pod in a Deployment object.
- D . Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.
B
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed. So, this is a perfect fit for our monitoring pod.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. For example, you could have DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use different configurations for different hardware types and resource needs.
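A minimal DaemonSet manifest for such a monitoring agent might look like the sketch below; the agent image and label names are placeholders for whatever the third-party monitoring vendor provides.
$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:latest # placeholder third-party agent image
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
EOF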
You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod.
You noticed the pod got recreated.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-nqqmt 1/1 Running 0 9m41s
$ kubectl delete pod nginx-84748895c4-nqqmt
pod "nginx-84748895c4-nqqmt" deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-k6bzl 1/1 Running 0 25s
What should you do to delete the deployment and avoid the pod getting recreated?
- A . kubectl delete deployment nginx
- B . kubectl delete --deployment=nginx
- C . kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2
- D . kubectl delete inginx
A
Explanation:
This command correctly deletes the deployment. Pods are managed by Kubernetes workloads (Deployments). When a pod is deleted, the Deployment detects that the pod is unavailable and brings up another pod to maintain the replica count. The only way to remove the pods permanently is to delete the Deployment itself using the kubectl delete deployment command.
$ kubectl delete deployment nginx
deployment.apps "nginx" deleted
Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources
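To confirm that nothing will recreate the pods, you can check that the Deployment and its ReplicaSet are gone after deletion:
$ kubectl delete deployment nginx
$ kubectl get deployments # the nginx Deployment is no longer listed
$ kubectl get replicasets # its ReplicaSet is garbage-collected along with it
$ kubectl get pods # the remaining pods terminate and are not recreated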