Practice Free KCNA Exam Online Questions
Question #21
What is the goal of load balancing?
- A . Automatically measure request performance across instances of an application.
- B . Automatically distribute requests across different versions of an application.
- C . Automatically distribute instances of an application across the cluster.
- D . Automatically distribute requests across instances of an application.
Correct Answer: D
Explanation:
The core goal of load balancing is to distribute incoming requests across multiple instances of a service so that no single instance becomes overloaded and so that the overall service is more available and responsive. That matches option D, which is the correct answer.
In Kubernetes, load balancing commonly appears through the Service abstraction. A Service selects a set of Pods using labels and provides stable access via a virtual IP (ClusterIP) and DNS name. Traffic sent to the Service is then forwarded to one of the healthy backend Pods. This spreads load across replicas and provides resilience: if one Pod fails, it is removed from endpoints (or becomes NotReady) and traffic shifts to remaining replicas. The actual traffic distribution mechanism depends on the networking implementation (kube-proxy using iptables/IPVS or an eBPF dataplane), but the intent remains consistent: distribute requests across multiple backends.
Option A describes monitoring/observability, not load balancing.
Option B describes progressive delivery patterns like canary or A/B routing; that can be implemented with advanced routing layers (Ingress controllers, service meshes), but it’s not the general definition of load balancing.
Option C describes scheduling/placement of instances (Pods) across cluster nodes, which is the role of the scheduler and controllers, not load balancing.
In cloud environments, load balancing may also be implemented by external load balancers (cloud LBs) in front of the cluster, then forwarded to NodePorts or ingress endpoints, and again balanced internally to Pods. At each layer, the objective is the same: spread request traffic across multiple service instances to improve performance and availability.
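As a concrete illustration, here is a minimal Service manifest (names and ports are hypothetical) that load-balances requests across all Pods carrying a matching label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web             # hypothetical Service name
spec:
  type: ClusterIP        # stable virtual IP inside the cluster
  selector:
    app: web             # requests are spread across all healthy Pods with this label
  ports:
    - port: 80           # port exposed by the Service
      targetPort: 8080   # container port on each backend Pod
```

Requests sent to the ClusterIP (or to the Service's DNS name) are forwarded to one of the matching Pods by the cluster's dataplane, which is exactly the "distribute requests across instances" behavior described in option D.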
Question #23
What is the main role of the Kubernetes DNS within a cluster?
- A . Acts as a DNS server for virtual machines that are running outside the cluster.
- B . Provides a DNS as a Service, allowing users to create zones and registries for domains that they own.
- C . Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
- D . Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
Correct Answer: D
Explanation:
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral―IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
Option A is not Kubernetes DNS’s purpose; it serves cluster workloads rather than external VMs.
Option B describes a managed DNS hosting product (creating zones/registries), which is outside the scope of cluster DNS.
Option C describes protocol translation, which is not the role of DNS. Dual-stack support relates to IP families and networking configuration, not DNS translating IPv6 to IPv4.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
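As a sketch of how those names arise, consider a Service in a namespace (both names are hypothetical); the comments list the DNS names workloads could use to reach it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db              # hypothetical Service name
  namespace: prod       # hypothetical namespace
spec:
  selector:
    app: db
  ports:
    - port: 5432
# Typically resolvable inside the cluster as:
#   db                          (from Pods in the prod namespace)
#   db.prod                     (from any namespace, via search domains)
#   db.prod.svc.cluster.local   (fully qualified)
```

The exact cluster domain suffix (cluster.local by default) depends on cluster configuration, but the pattern service-name.namespace.svc.<cluster-domain> is the standard form.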
Question #24
Which group of container runtimes provides additional sandboxed isolation and elevated security?
- A . rune, cgroups
- B . docker, containerd
- C . runsc, kata
- D . crun, cri-o
Correct Answer: C
Explanation:
The runtimes most associated with sandboxed isolation are gVisor’s runsc and Kata Containers, making C correct. Standard container runtimes (like containerd with runc) rely primarily on Linux namespaces and cgroups for isolation. That isolation is strong for many use cases, but it shares the host kernel, which can be a concern for multi-tenant or high-risk workloads.
gVisor (runsc) provides a user-space kernel-like layer that intercepts and mediates system calls, reducing the container’s direct interaction with the host kernel. Kata Containers takes a different approach: it runs containers inside lightweight virtual machines, providing hardware-virtualization boundaries (or VM-like isolation) while still integrating into container workflows. Both are used to increase isolation compared to traditional containers, and both can be integrated with Kubernetes through compatible CRI/runtime configurations.
The other options are incorrect for the question’s intent. “rune, cgroups” is not a meaningful pairing here (cgroups is a Linux resource mechanism, not a runtime). “docker, containerd” are commonly used container platforms/runtimes but are not specifically the “sandboxed isolation” category (containerd typically uses runc for standard isolation). “crun, cri-o” represents a low-level OCI runtime (crun) and a CRI implementation (CRI-O), again not specifically a sandboxed-isolation grouping.
So, when the question asks for the group that provides additional sandboxing and elevated security, the correct, well-established answer is runsc + Kata.
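In Kubernetes, a sandboxed runtime is typically selected per Pod through a RuntimeClass. A minimal sketch, assuming the node's CRI runtime has already been configured with a matching handler (handler and class names vary by installation):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata             # class name that Pods reference
handler: kata            # must match a handler configured in the CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app    # hypothetical Pod name
spec:
  runtimeClassName: kata # run this Pod's containers under the sandboxed runtime
  containers:
    - name: app
      image: nginx
```

The same pattern applies to gVisor, with a RuntimeClass whose handler points at runsc.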
Question #25
Which command lists the running containers in the current Kubernetes namespace?
- A . kubectl get pods
- B . kubectl ls
- C . kubectl ps
- D . kubectl show pods
Correct Answer: A
Explanation:
The correct answer is A: kubectl get pods. Kubernetes does not manage “containers” as standalone top-level objects; the primary schedulable unit is the Pod, and containers run inside Pods. Therefore, the practical way to list what’s running in a namespace is to list the Pods in that namespace. kubectl get pods shows Pods and their readiness, status, restarts, and age―giving you the canonical view of running workloads.
If you need the container-level details (images, container names), you typically use additional commands and output formatting:
kubectl describe pod <pod> to view container specs, images, states, and events
kubectl get pods -o jsonpath=… or -o wide to surface more fields
kubectl get pods -o=json to inspect .spec.containers and .status.containerStatuses
But among the provided options, kubectl get pods is the only real kubectl command that lists the running workload objects in the current namespace.
The other options are not valid kubectl subcommands: kubectl ls, kubectl ps, and kubectl show pods are not standard Kubernetes CLI operations. Kubernetes intentionally centers around the API resource model, so listing resources uses kubectl get <resource>. This also aligns with Kubernetes’ declarative nature: you observe and manage the state via API objects, not by directly enumerating OS-level processes.
So while the question says “running containers,” the Kubernetes-correct interpretation is “containers in running Pods,” and the appropriate listing command in the namespace is kubectl get pods, option A.
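A few example invocations (these assume access to a running cluster; the jsonpath expression is one common way to surface container names and is shown as an illustration):

```shell
# List Pods in the current namespace
kubectl get pods

# Add node and Pod IP columns
kubectl get pods -o wide

# Show the container names inside each Pod
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```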
Question #26
What feature must a CNI support to control specific traffic flows for workloads running in Kubernetes?
- A . Border Gateway Protocol
- B . IP Address Management
- C . Pod Security Policy
- D . Network Policies
Correct Answer: D
Explanation:
To control which workloads can communicate with which other workloads in Kubernetes, you use NetworkPolicy resources, but enforcement depends on the cluster’s networking implementation. Therefore, for traffic-flow control, the CNI plugin must support Network Policies, making D correct.
Kubernetes defines the NetworkPolicy API as a declarative way to specify allowed ingress and egress traffic based on selectors (Pod labels, namespaces, IP blocks) and ports/protocols. However, Kubernetes itself does not enforce NetworkPolicy rules; enforcement is provided by the network plugin (or associated dataplane components). If your CNI does not implement NetworkPolicy, the objects may exist in the API but have no effect―Pods will communicate freely by default.
Option B (IP Address Management) is often part of CNI responsibilities, but IPAM is about assigning addresses, not enforcing L3/L4 security policy.
Option A (BGP) is used by some CNIs to advertise routes (for example, in certain Calico deployments), but BGP is not the general requirement for policy enforcement.
Option C (Pod Security Policy) is a deprecated/removed Kubernetes admission feature related to Pod security settings, not network flow control.
From a Kubernetes security standpoint, NetworkPolicies are a key tool for implementing least privilege at the network layer―limiting lateral movement, reducing blast radius, and segmenting environments. But they only work when the chosen CNI supports them. Thus, the correct answer is D: Network Policies.
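A minimal NetworkPolicy sketch (labels, names, namespace, and port are hypothetical) that allows only frontend Pods to reach backend Pods on one TCP port:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: prod          # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend         # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only these Pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Once a Pod is selected by any policy, ingress traffic not explicitly allowed is denied; without a NetworkPolicy-capable CNI, this object exists in the API but has no effect.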
Question #27
What are the characteristics for building every cloud-native application?
- A . Resiliency, Operability, Observability, Availability
- B . Resiliency, Containerd, Observability, Agility
- C . Kubernetes, Operability, Observability, Availability
- D . Resiliency, Agility, Operability, Observability
Correct Answer: D
Explanation:
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability, making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases―often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry (metrics, logs, and traces), so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform).
Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
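As one resiliency-related illustration, health probes let the platform restart an unhealthy container and withhold traffic from an instance that is not ready (image, paths, and ports here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resilient-app      # hypothetical Pod name
spec:
  containers:
    - name: app
      image: example/app:1.0   # hypothetical image
      livenessProbe:           # restart the container if this check fails
        httpGet:
          path: /healthz
          port: 8080
      readinessProbe:          # remove the Pod from Service endpoints while failing
        httpGet:
          path: /ready
          port: 8080
```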
Question #28
Which statement about Ingress is correct?
- A . Ingress provides a simple way to track network endpoints within a cluster.
- B . Ingress is a Service type like NodePort and ClusterIP.
- C . Ingress is a construct that allows you to specify how a Pod is allowed to communicate.
- D . Ingress exposes routes from outside the cluster to Services in the cluster.
Correct Answer: D
Explanation:
Ingress is the Kubernetes API resource for defining external HTTP/HTTPS routing into the cluster, so D is correct. An Ingress object specifies rules such as hostnames (e.g., app.example.com), URL paths (e.g., /api), and TLS configuration, mapping those routes to Kubernetes Services. This provides Layer 7 routing capabilities beyond what a basic Service offers.
Ingress is not a Service type (so B is wrong). Service types (ClusterIP, NodePort, LoadBalancer, ExternalName) are part of the Service API and operate at Layer 4. Ingress is a separate API object that depends on an Ingress Controller to actually implement routing. The controller watches Ingress resources and configures a reverse proxy/load balancer (like NGINX, HAProxy, or a cloud load balancer integration) to enforce the desired routing. Without an Ingress Controller, creating an Ingress object alone will not route traffic.
Option A describes endpoint tracking (that’s closer to Endpoints/EndpointSlice).
Option C describes NetworkPolicy, which controls allowed network flows between Pods/namespaces. Ingress is about exposing and routing incoming application traffic from outside the cluster to internal Services.
So the verified correct statement is D: Ingress exposes routes from outside the cluster to Services in the cluster.
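A minimal Ingress sketch (host, path, and Service name are hypothetical) showing the host- and path-based routing described above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com     # hypothetical hostname
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service  # hypothetical backend Service
                port:
                  number: 80
```

Remember that an Ingress controller must be running in the cluster for this object to have any effect.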
Question #29
Which component of the Kubernetes architecture is responsible for integration with the CRI container runtime?
- A . kubeadm
- B . kubelet
- C . kube-apiserver
- D . kubectl
Correct Answer: B
Explanation:
The correct answer is B: kubelet. The Container Runtime Interface (CRI) defines how Kubernetes interacts with container runtimes in a consistent, pluggable way. The component that speaks CRI is the kubelet, the node agent responsible for running Pods on each node. When the kube-scheduler assigns a Pod to a node, the kubelet reads the PodSpec and makes the runtime calls needed to realize that desired state―pull images, create a Pod sandbox, start containers, stop containers, and retrieve status and logs. Those calls are made via CRI to a CRI-compliant runtime such as containerd or CRI-O.
Why not the others:
kubeadm bootstraps clusters (init/join/upgrade workflows) but does not run containers or speak CRI for workload execution.
kube-apiserver is the control plane API frontend; it stores and serves cluster state and does not directly integrate with runtimes.
kubectl is just a client tool that sends API requests; it is not involved in runtime integration on nodes.
This distinction matters operationally. If the runtime is misconfigured or the CRI endpoint is unreachable, the kubelet will report errors and Pods can get stuck in ContainerCreating, hit image pull failures, or show runtime errors. Debugging often involves checking kubelet logs and runtime service health, because the kubelet is the integration point bridging Kubernetes scheduling/state with actual container execution.
So, the node-level component responsible for CRI integration is the kubelet―option B.
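The CRI connection is configured on each node. As a sketch, a KubeletConfiguration fragment pointing the kubelet at a runtime socket (the socket path depends on the runtime; containerd's common default is shown as an assumption):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# gRPC endpoint of the CRI-compliant runtime (containerd shown; CRI-O uses its own socket)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```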
Question #30
Which resource do you use to attach a volume in a Pod?
- A . StorageVolume
- B . PersistentVolume
- C . StorageClass
- D . PersistentVolumeClaim
Correct Answer: D
Explanation:
In Kubernetes, Pods typically attach persistent storage by referencing a PersistentVolumeClaim (PVC), making D correct. A PVC is a user’s request for storage with specific requirements (size, access mode, storage class). Kubernetes then binds the PVC to a matching PersistentVolume (PV) (either pre-provisioned statically or created dynamically via a StorageClass and CSI provisioner). The Pod does not directly attach a PV; it references the PVC, and Kubernetes handles the binding and mounting.
This design separates responsibilities: administrators (or CSI drivers) manage PV provisioning and backend storage details, while developers consume storage via PVCs. In a Pod spec, you define a volume of type persistentVolumeClaim and set claimName: <pvc-name>, then mount that volume into containers at a path. The kubelet coordinates with the CSI driver (or in-tree plugin depending on environment) to attach/mount the underlying storage to the node and then into the Pod.
Option B (PersistentVolume) is not directly referenced by Pods; PVs are cluster resources that represent actual storage. Pods don’t “pick” PVs; claims do.
Option C (StorageClass) defines provisioning parameters (e.g., disk type, replication, binding mode) but is not what a Pod references to mount a volume.
Option A is not a Kubernetes resource type.
Operationally, using PVCs enables dynamic provisioning and portability: the same Pod spec can be deployed across clusters where the StorageClass name maps to appropriate backend storage. It also supports lifecycle controls like reclaim policies (Delete/Retain) and snapshot/restore workflows depending on CSI capabilities.
So the Kubernetes resource you use in a Pod to attach a persistent volume is PersistentVolumeClaim, option D.
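A minimal PVC-plus-Pod sketch (names, size, StorageClass, and mount path are hypothetical) showing how the Pod references the claim rather than the PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc           # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard  # depends on the cluster's available classes
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc   # the Pod references the claim, not the PV
```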
