Practice Free EX267 Exam Online Questions
Question #31
Which resource type represents a data science project in OpenShift AI?
- A . Project
- B . Namespace
- C . DataProject
- D . ClusterProject
Correct Answer: B
Explanation:
In OpenShift AI, each data science project is backed by a Kubernetes Namespace, which isolates the project's workbenches, pipelines, and storage in a dedicated environment.
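As a minimal sketch, the namespace behind a data science project is typically labeled so the OpenShift AI dashboard recognizes it; the project name below is hypothetical and the label key is an assumption based on Open Data Hub conventions:

```yaml
# Sketch of a namespace backing a data science project.
# The opendatahub.io/dashboard label (assumed key) is what lets the
# dashboard treat this namespace as a data science project.
apiVersion: v1
kind: Namespace
metadata:
  name: fraud-detection            # hypothetical project name
  labels:
    opendatahub.io/dashboard: "true"
```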
Question #32
What command installs the scikit-learn library in Python?
- A . pip install numpy
- B . pip install sklearn
- C . pip install scikit-learn
- D . pip install tensorflow
Correct Answer: C
Explanation:
pip install scikit-learn installs the scikit-learn library, which provides tools for machine learning, including algorithms for classification, regression, and clustering. Note that scikit-learn, not sklearn, is the package name on PyPI; the sklearn shorthand is a deprecated alias.
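A quick way to install and verify from a workbench terminal:

```bash
# Install scikit-learn into the active Python environment
pip install scikit-learn

# Smoke test: import the library and print its version
python -c "import sklearn; print(sklearn.__version__)"
```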
Question #33
Which two commands help diagnose issues with a workbench pod?
- A . oc logs <pod-name>
- B . oc describe pod <pod-name>
- C . oc debug notebook <name>
- D . oc inspect notebook <name>
Correct Answer: A, B
Explanation:
oc logs <pod-name> streams the container logs, while oc describe pod <pod-name> shows the pod's status, container states, and recent events, which is useful for diagnosing scheduling or image-pull failures.
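For example, to inspect a failing workbench pod (the pod and namespace names here are hypothetical):

```bash
# Stream the workbench container's logs
oc logs jupyter-nb-user1-0 -n my-data-science-project

# Show status, container states, and recent events
# (useful for image-pull or scheduling failures)
oc describe pod jupyter-nb-user1-0 -n my-data-science-project
```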
Question #34
What is the default namespace for DataScienceCluster objects?
- A . openshift-ai
- B . datascience
- C . default
- D . odh-system
Correct Answer: A
Explanation:
openshift-ai is the default namespace where DataScienceCluster objects and associated Open Data Hub components are deployed.
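A sketch of how to confirm this on a cluster; namespace names can vary between OpenShift AI releases, so treat openshift-ai as this question's assumption:

```bash
# List DataScienceCluster objects
oc get datasciencecluster -n openshift-ai

# Inspect the Open Data Hub component pods deployed alongside it
oc get pods -n openshift-ai
```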
Question #35
What is the purpose of the DataScienceCluster object in OpenShift AI?
- A . To manage AI workloads
- B . To configure OpenShift nodes
- C . To manage Open Data Hub components
- D . To handle cluster networking
Correct Answer: C
Explanation:
The DataScienceCluster object manages the deployment and configuration of Open Data Hub components within OpenShift AI environments.
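A minimal sketch of a DataScienceCluster that enables selected components; the field names follow the upstream Open Data Hub CRD and should be checked against the installed operator version:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    dashboard:
      managementState: Managed      # deploy the dashboard
    workbenches:
      managementState: Managed      # deploy notebook workbenches
    datasciencepipelines:
      managementState: Managed
    modelmeshserving:
      managementState: Removed      # keep unused components disabled
```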
Question #36
What is the purpose of a validation dataset during model training?
- A . To train the model
- B . To visualize predictions
- C . To deploy the model
- D . To evaluate performance
Correct Answer: D
Explanation:
A validation dataset helps assess the model’s performance on unseen data during training. It is used to tune hyperparameters and detect overfitting before final deployment.
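A minimal scikit-learn sketch of holding out a validation split (the 80/20 ratio is a common convention, not a requirement):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real training set
X, y = make_classification(n_samples=1000, random_state=42)

# Reserve 20% of the data for validation
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validation accuracy guides hyperparameter tuning and flags overfitting
print("validation accuracy:", model.score(X_val, y_val))
```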
Question #37
Which two storage backends are commonly used for PVCs in OpenShift AI?
- A . Kafka
- B . Redis
- C . AWS EBS
- D . Azure Disk
Correct Answer: C, D
Explanation:
AWS EBS and Azure Disk are frequently used as storage backends for PVCs, providing reliable, scalable block storage. Kafka and Redis are messaging and caching systems, not PVC storage backends.
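A sketch of a PVC that would bind to such a backend; the storage class name is hypothetical and depends on the CSI driver installed in the cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workbench-storage
spec:
  accessModes:
    - ReadWriteOnce          # single-node read-write, typical for workbenches
  resources:
    requests:
      storage: 20Gi
  storageClassName: gp3-csi  # hypothetical AWS EBS-backed class
```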
Question #38
How do you create a .gitignore file to exclude .ipynb_checkpoints from Git tracking?
- A . echo "*.ipynb_checkpoints" > .gitignore
- B . git ignore *.ipynb_checkpoints
- C . git add-ignore .ipynb_checkpoints
- D . echo ".ipynb_checkpoints" >> .gitignore
Correct Answer: A
Explanation:
echo "*.ipynb_checkpoints" > .gitignore creates a .gitignore file and excludes .ipynb_checkpoints folders.
This prevents temporary notebook data from being tracked.
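For example, from a repository root:

```bash
# Create .gitignore with the checkpoint pattern
# (> overwrites an existing file; use >> to append instead)
echo "*.ipynb_checkpoints" > .gitignore

# Verify Git now ignores a checkpoint directory
mkdir -p .ipynb_checkpoints
git check-ignore -v .ipynb_checkpoints
```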
Question #39
Which Elyra feature allows scheduling of pipeline runs?
- A . Elyra Scheduler
- B . Task Runner
- C . Workflow Orchestrator
- D . Pipeline Scheduler
Correct Answer: D
Explanation:
The Pipeline Scheduler in Elyra lets users set automated schedules for running pipelines directly within JupyterLab.
Question #40
What field in ConfigMap sets the notebook cull timeout?
- A . data.cull_timeout
- B . data.cull_idle_timeout
- C . spec.cull_time
- D . spec.idleTimeout
Correct Answer: B
Explanation:
The data.cull_idle_timeout field in the ConfigMap sets the duration a notebook can stay idle before being culled to save resources.
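A sketch of such a ConfigMap using the field named in this question; the ConfigMap name, namespace, key, and units vary between OpenShift AI releases, so treat all of them as assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: notebook-culler-config   # assumed name
  namespace: openshift-ai        # per Question #34
data:
  cull_idle_timeout: "60"        # idle time before the notebook is culled
```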