Practice Free KCNA Exam Online Questions
What does the "nodeSelector" within a PodSpec use to place Pods on the target nodes?
- A . Annotations
- B . IP Addresses
- C . Hostnames
- D . Labels
D
Explanation:
nodeSelector is a simple scheduling constraint that matches node labels, so the correct answer is D (Labels). In Kubernetes, nodes have key/value labels (for example, disktype=ssd, topology.kubernetes.io/zone=us-east-1a, kubernetes.io/os=linux). When you set spec.nodeSelector in a Pod template, you provide a map of required label key/value pairs. The kube-scheduler will then only consider nodes that have all those labels with matching values as eligible placement targets for that Pod.
This is different from annotations: annotations are also key/value metadata, but they are not intended for selection logic and are not used by the scheduler for nodeSelector. IP addresses and hostnames are not the mechanism used by nodeSelector either. While Kubernetes nodes do have hostnames and IPs, nodeSelector specifically operates on labels because labels are designed for selection, grouping, and placement constraints.
Operationally, nodeSelector is the most basic form of node placement control. It is commonly used to pin workloads to specialized hardware (GPU nodes), compliance zones, or certain OS/architecture pools. However, it has limitations: it only supports exact match on labels and cannot express more complex rules (like “in this set of zones” or “prefer but don’t require”). For that, Kubernetes offers node affinity (requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution) which supports richer expressions.
Still, the underlying mechanism is the same concept: the scheduler evaluates your Pod’s placement requirements against node metadata, and for nodeSelector, that metadata is labels. Therefore, the verified correct answer is D.
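As a concrete sketch, assuming a node has already been labeled `disktype=ssd` (for example with `kubectl label nodes <node-name> disktype=ssd`), a Pod pinned to such nodes might look like this (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  # The scheduler only considers nodes carrying ALL of these labels.
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: nginx
```

If no node carries the `disktype=ssd` label, the Pod stays Pending until one does.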
Which of the following systems is NOT compatible with the CRI runtime interface standard?
- A . CRI-O
- B . dockershim
- C . systemd
- D . containerd
C
Explanation:
Kubernetes uses the Container Runtime Interface (CRI) to support pluggable container runtimes. The kubelet talks to a CRI-compatible runtime via gRPC, and that runtime is responsible for pulling images and running containers. In this context, containerd and CRI-O are CRI-compatible container runtimes (or runtime stacks) used widely with Kubernetes, and dockershim historically served as a compatibility layer that allowed kubelet to talk to Docker Engine as if it were CRI (before dockershim was removed from kubelet in newer Kubernetes versions). That leaves systemd as the correct “NOT compatible with CRI” answer, so C is correct.
systemd is an init system and service manager for Linux. While it can be involved in how services (like kubelet) are started and managed on the host, it is not a container runtime implementing CRI. It does not provide CRI gRPC endpoints for kubelet, nor does it manage containers in the CRI sense.
The deeper Kubernetes concept here is separation of responsibilities: kubelet is responsible for Pod lifecycle at the node level, but it delegates “run containers” to a runtime via CRI. Runtimes like containerd and CRI-O implement that contract; Kubernetes can swap them without changing kubelet logic. Historically, dockershim translated kubelet’s CRI calls into Docker Engine calls. Even though dockershim is no longer part of kubelet, it was still “CRI-adjacent” in purpose and often treated as compatible in older curricula.
Therefore, among the provided options, systemd is the only one that is clearly not a CRI-compatible runtime system, making C correct.
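For illustration, the kubelet is pointed at the runtime's CRI gRPC socket rather than at systemd. In recent kubelet versions this can be set in the kubelet configuration file (older versions used the `--container-runtime-endpoint` flag); the socket path below assumes a default containerd install:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# CRI gRPC endpoint exposed by the container runtime (containerd here);
# CRI-O typically exposes unix:///var/run/crio/crio.sock instead.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```

systemd may start and supervise the kubelet and containerd services themselves, but it never sits behind this endpoint.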
In a cloud native environment, how do containerization and virtualization differ in terms of resource management?
- A . Containerization uses hypervisors to manage resources, while virtualization does not.
- B . Containerization shares the host OS, while virtualization runs a full OS for each instance.
- C . Containerization consumes more memory than virtualization by default.
- D . Containerization allocates resources per container, virtualization does not isolate them.
B
Explanation:
The fundamental difference between containerization and virtualization in a cloud native environment lies in how they manage and isolate resources, particularly with respect to the operating system. The correct description is that containerization shares the host operating system, while virtualization runs a full operating system for each instance, making option B the correct answer.
In virtualization, each virtual machine (VM) includes its own complete guest operating system running on top of a hypervisor. The hypervisor virtualizes hardware resources (CPU, memory, storage, and networking) and allocates them to each VM. Because every VM runs a full OS, virtualization introduces significant overhead in terms of memory usage, disk space, and startup time. However, it provides strong isolation between workloads, which is useful for running different operating systems or untrusted workloads on the same physical hardware.
In contrast, containerization operates at the operating system level rather than the hardware level. Containers share the host OS kernel and isolate applications using kernel features such as namespaces and control groups (cgroups). This design makes containers much lighter weight than virtual machines. Containers start faster, consume fewer resources, and allow higher workload density on the same infrastructure. Resource limits and isolation are still enforced, but without duplicating the entire operating system for each application instance.
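A minimal sketch of that OS-level resource control in Kubernetes: the requests and limits below are translated into cgroup settings on the shared host kernel, with no guest OS involved (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bounded-app
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:        # used by the scheduler for placement
        cpu: 250m
        memory: 128Mi
      limits:          # enforced at runtime via cgroups
        cpu: 500m
        memory: 256Mi
```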
Option A is incorrect because hypervisors are a core component of virtualization, not containerization.
Option C is incorrect because containers generally consume less memory than virtual machines due to the absence of a full guest OS.
Option D is incorrect because virtualization does isolate resources very strongly, while containers rely on OS-level isolation rather than hardware-level isolation.
In cloud native architectures, containerization is preferred for microservices and scalable workloads because of its efficiency and portability. Virtualization is still valuable for stronger isolation and heterogeneous operating systems. Therefore, Option B accurately captures the key resource management distinction between the two models.
What components are common in a service mesh?
- A . Tracing and log storage
- B . Circuit breaking and Pod scheduling
- C . Data plane and runtime plane
- D . Service proxy and control plane
D
Explanation:
A service mesh is an architectural pattern that manages service-to-service communication in a microservices environment by inserting a dedicated networking layer. The two most common building blocks you’ll see across service mesh implementations are (1) a data plane of proxies and (2) a control plane that configures and manages those proxies; this aligns best with “service proxy and control plane,” option D.
In practice, the data plane is usually implemented via sidecar proxies (or sometimes node/ambient proxies) that sit “next to” workloads and handle traffic functions such as mTLS encryption, retries, timeouts, load balancing policies, traffic splitting, and telemetry generation. These proxies can capture inbound and outbound traffic without requiring changes to application code, which is one of the defining benefits of a mesh.
The control plane provides the management layer: it distributes policy and configuration to the proxies (routing rules, security policies, identities/certificates), discovers services/endpoints, and often coordinates certificate rotation and workload identity. In Kubernetes environments, meshes typically integrate with the Kubernetes API for service discovery and configuration.
Option C is close in spirit but uses non-standard wording (“runtime plane” is not a typical service mesh term; “control plane” is).
Options A and B describe capabilities that may exist in a mesh ecosystem (telemetry, circuit breaking), but they are not the universal “core components” across meshes. Tracing/log storage, for example, is usually handled by external observability backends (e.g., Jaeger, Tempo, Loki) rather than being intrinsic “mesh components.”
So, the most correct and broadly accepted answer is D: service proxy and control plane.
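As one illustrative example (Istio; other meshes use different mechanics), the control plane can be instructed to inject a sidecar proxy, i.e. the data plane, into every Pod of a namespace via a single label:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    # Istio's control plane injects an Envoy sidecar proxy into
    # every Pod created in this namespace.
    istio-injection: enabled
```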
What is the purpose of the CRI?
- A . To provide runtime integration control when multiple runtimes are used.
- B . Support container replication and scaling on nodes.
- C . Provide an interface allowing Kubernetes to support pluggable container runtimes.
- D . Allow the definition of dynamic resource criteria across containers.
C
Explanation:
The Container Runtime Interface (CRI) exists so Kubernetes can support pluggable container runtimes behind a stable interface, which makes C correct. In Kubernetes, the kubelet is responsible for managing Pods on a node, but it does not implement container execution itself. Instead, it delegates container lifecycle operations (pull images, create pod sandbox, start/stop containers, fetch logs, exec/attach streaming) to a container runtime through a well-defined API. CRI is that API contract.
Because of CRI, Kubernetes can run with different container runtimes, commonly containerd or CRI-O, without changing kubelet core logic. This improves portability and keeps Kubernetes modular: runtime innovation can happen independently while Kubernetes retains a consistent operational model. CRI is accessed via gRPC and defines the services and message formats kubelet uses to communicate with runtimes.
Option B is incorrect because replication and scaling are handled by controllers (Deployments/ReplicaSets) and schedulers, not by CRI.
Option D is incorrect because resource criteria (requests/limits) are expressed in Pod specs and enforced via OS mechanisms (cgroups) and kubelet/runtime behavior; CRI does not exist to “define dynamic resource criteria.” Option A is vague and misses the main point: while CRI does enable runtime integration, its key purpose is explicitly to make runtimes pluggable and interoperable.
This design became even more important as Kubernetes moved away from Docker Engine integration (dockershim removal from kubelet). With CRI, Kubernetes focuses on orchestrating Pods, while runtimes focus on executing containers. That separation of responsibilities is a core container orchestration principle and is exactly what the question is testing.
So the verified answer is C.
In which framework do developers no longer have to deal with capacity, deployments, scaling, fault tolerance, and the OS?
- A . Docker Swarm
- B . Kubernetes
- C . Mesos
- D . Serverless
D
Explanation:
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they still require you to manage the underlying capacity and OS-level concerns, so they do not free developers from those responsibilities.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the pure framework that removes the most operational burden from developers is serverless.
In a cloud native world, what does the IaC abbreviation stand for?
- A . Infrastructure and Code
- B . Infrastructure as Code
- C . Infrastructure above Code
- D . Infrastructure across Code
B
Explanation:
IaC stands for Infrastructure as Code, which is option B. In cloud native environments, IaC is a core operational practice: infrastructure (networks, clusters, load balancers, IAM roles, storage classes, DNS records, and more) is defined using declarative, code-like configuration rather than manual, click-driven changes. This approach mirrors Kubernetes’ own declarative model, where you define desired state in manifests and controllers reconcile the cluster to match.
IaC improves reliability and velocity because it makes infrastructure repeatable, version-controlled, reviewable, and testable. Teams can store infrastructure definitions in Git, use pull requests for change review, and run automated checks to validate formatting, policies, and safety constraints. If an environment must be recreated (disaster recovery, test environments, regional expansion), IaC enables consistent reproduction with fewer human errors.
In Kubernetes-centric workflows, IaC often covers both the base platform and the workloads layered on top. For example, provisioning might include the Kubernetes control plane, node pools, networking, and identity integration, while Kubernetes manifests (or Helm/Kustomize) define Deployments, Services, RBAC, Ingress, and storage resources. GitOps extends this further by continuously reconciling cluster configuration from a Git source of truth.
The incorrect options (Infrastructure and Code / above / across) are not standard terms. The key idea is “infrastructure treated like software”: changes are made through code commits, go through CI checks, and are rolled out in controlled ways. This aligns with cloud native goals: faster iteration, safer operations, and easier auditing. In short, IaC is the operational backbone that makes Kubernetes and cloud platforms manageable at scale, enabling consistent environments and reducing configuration drift.
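A minimal Terraform-style sketch of the idea (resource values are hypothetical and not tied to any specific setup): the desired infrastructure lives in version-controlled code, and `terraform apply` reconciles reality to match it.

```hcl
# Declaratively describe a managed Kubernetes cluster; the provider
# (here Google Cloud, as an example) creates or updates it to match.
resource "google_container_cluster" "primary" {
  name               = "demo-cluster"
  location           = "us-east1"
  initial_node_count = 3
}
```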
What is the core metric type in Prometheus used to represent a single numerical value that can go up and down?
- A . Summary
- B . Counter
- C . Histogram
- D . Gauge
D
Explanation:
In Prometheus, a Gauge represents a single numerical value that can increase and decrease over time, which makes D the correct answer. Gauges are used for values like current memory usage, number of in-flight requests, queue depth, temperature, or CPU usage: anything that can move up and down.
This contrasts with a Counter, which is strictly monotonically increasing (it only goes up, except for resets when a process restarts). Counters are ideal for cumulative totals like total HTTP requests served, total errors, or bytes transmitted. Histograms and Summaries are used to capture distributions (often latency distributions), providing bucketed counts (histogram) or quantile approximations (summary), and are not the “single value that goes up and down” primitive the question asks for.
In Kubernetes observability, metrics are a primary signal for understanding system health and performance. Prometheus is widely used to scrape metrics from Kubernetes components (kubelet, API server, controller-manager), cluster add-ons, and applications. Gauges are common for resource utilization metrics and for instantaneous states, such as container_memory_working_set_bytes or go_goroutines.
When you build alerting and dashboards, selecting the right metric type matters. For example, if you want to alert on the current memory usage, a gauge is appropriate. If you want to compute request rates, you typically use counters with Prometheus functions like rate() to derive per-second rates. Histograms and summaries are used when you need latency percentiles or distribution analysis.
So, for “a single numerical value that can go up and down,” the correct Prometheus metric type is Gauge (D).
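For illustration, metric types are declared in the text exposition format that targets expose on their /metrics endpoint; the metric names and values below are hypothetical:

```text
# TYPE inflight_requests gauge
inflight_requests 23
# TYPE http_requests_total counter
http_requests_total{code="200"} 1027
```

A gauge is read directly in dashboards and alerts, while a counter is usually wrapped in a function like `rate(http_requests_total[5m])` to derive a per-second rate.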
Which of the following sentences is true about container runtimes in Kubernetes?
- A . If you let iptables see bridged traffic, you don’t need a container runtime.
- B . If you enable IPv4 forwarding, you don’t need a container runtime.
- C . Container runtimes are deprecated, you must install CRI on each node.
- D . You must install a container runtime on each node to run pods on it.
D
Explanation:
A Kubernetes node must have a container runtime to run Pods, so D is correct. Kubernetes schedules Pods to nodes, but the actual execution of containers is performed by a runtime such as containerd or CRI-O. The kubelet communicates with that runtime via the Container Runtime Interface (CRI) to pull images, create sandboxes, and start/stop containers. Without a runtime, the node cannot launch container processes, so Pods cannot transition into running state.
Options A and B confuse kernel networking settings with runtime requirements. Letting iptables see bridged traffic and enabling IPv4 forwarding can be relevant for node networking, but they do not replace the need for a container runtime. Networking and container execution are separate layers: you need networking for connectivity, and you need a runtime for running containers.
Option C is also incorrect and muddled. Container runtimes are not deprecated; rather, Kubernetes removed the built-in Docker shim integration from kubelet in favor of CRI-native runtimes. CRI is an interface, not “something you install instead of a runtime.” In practice you install a CRI-compatible runtime (containerd/CRI-O), which implements CRI endpoints that kubelet talks to.
Operationally, the runtime choice affects node behavior: image management, logging integration, performance characteristics, and compatibility. Kubernetes installation guides explicitly list installing a container runtime as a prerequisite for worker nodes. If a cluster has nodes without a properly configured runtime, workloads scheduled there will fail to start (often stuck in ContainerCreating/ImagePullBackOff/Runtime errors).
Therefore, the only fully correct statement is D: each node needs a container runtime to run Pods.
Imagine you’re releasing open-source software for the first time.
Which of the following is a valid semantic version?
- A . 1.0
- B . 2021-10-11
- C . 0.1.0-rc
- D . v1beta1
C
Explanation:
Semantic Versioning (SemVer) follows the pattern MAJOR.MINOR.PATCH with optional pre-release identifiers (e.g., -rc, -alpha.1) and build metadata. Among the options, 0.1.0-rc matches SemVer rules, so C is correct.
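A short sketch of how each option fares against a simplified SemVer pattern (the official grammar constrains pre-release and build identifiers more tightly; this regex is illustrative only):

```python
import re

# Simplified SemVer shape: MAJOR.MINOR.PATCH with optional
# -prerelease and +build suffixes; no leading zeros in the numbers.
SEMVER = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-[0-9A-Za-z][0-9A-Za-z.-]*)?"
    r"(?:\+[0-9A-Za-z][0-9A-Za-z.-]*)?$"
)

def is_semver(version: str) -> bool:
    return SEMVER.match(version) is not None

print(is_semver("0.1.0-rc"))    # option C, valid: True
print(is_semver("1.0"))         # option A, missing PATCH: False
print(is_semver("2021-10-11"))  # option B, a date: False
print(is_semver("v1beta1"))     # option D, a Kubernetes API version: False
```

Only option C has all three numeric components plus a well-formed pre-release identifier.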
