Practice Free KCNA Exam Online Questions
Which tool is used to streamline installing and managing Kubernetes applications?
- A . apt
- B . helm
- C . service
- D . brew
B
Explanation:
Helm is the Kubernetes package manager used to streamline installing and managing applications, so B is correct. Helm packages Kubernetes resources into charts, which contain templates, default values, and metadata. When you install a chart, Helm renders templates into concrete manifests and applies them to the cluster. Helm also tracks a “release,” enabling upgrades, rollbacks, and consistent lifecycle operations across environments.
This is why Helm is widely used for complex applications that require multiple Kubernetes objects (Deployments/StatefulSets, Services, Ingresses, ConfigMaps, RBAC, CRDs). Rather than manually maintaining many YAML files per environment, teams can parameterize configuration with values and reuse the same chart across dev/stage/prod with different overrides.
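As a hedged sketch of that parameterization (the chart, file name, and keys below are hypothetical, not a specific published chart), a per-environment override file might look like:

```yaml
# values-prod.yaml -- hypothetical override file for an example chart
replicaCount: 3          # more replicas in prod than the chart default
image:
  tag: "1.4.2"           # pin a specific image tag per environment
ingress:
  enabled: true
  host: app.example.com  # placeholder hostname
```

The same chart can then be reused across environments with different overrides, e.g. `helm upgrade --install myapp ./chart -f values-prod.yaml`.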
Option A (apt) and option D (brew) are OS-level package managers (Debian/Ubuntu and Homebrew on macOS/Linux, respectively), not Kubernetes application managers.
Option C (service) is a Linux command for managing system services (init/systemd) and is not relevant here.
In cloud-native delivery pipelines, Helm often integrates with GitOps and CI/CD: the pipeline builds an image, updates chart values (image tag/digest), and deploys via Helm or via GitOps controllers that render/apply Helm charts. Helm also supports chart repositories and versioning, making it easier to standardize deployments and manage dependencies.
So, the verified tool for streamlined Kubernetes app install/management is Helm (B).
Which storage operator in Kubernetes can help the system to self-scale, self-heal, etc?
- A . Rook
- B . Kubernetes
- C . Helm
- D . Container Storage Interface (CSI)
A
Explanation:
Rook is a Kubernetes storage operator that helps manage and automate storage systems in a Kubernetes-native way, so A is correct. The key phrase in the question is “storage operator … self-scale, self-heal.” Operators extend Kubernetes by using controllers to reconcile a desired state. Rook applies that model to storage, commonly by managing storage backends like Ceph (and other systems depending on configuration).
With an operator approach, you declare how you want storage to look (cluster size, pools, replication, placement, failure domains), and the operator works continuously to maintain that state. That includes operational behaviors that feel “self-healing” such as reacting to failed storage Pods, rebalancing, or restoring desired replication counts (the exact behavior depends on the backend and configuration). The important KCNA-level idea is that Rook uses Kubernetes controllers to automate day-2 operations for storage in a way consistent with Kubernetes’ reconciliation loops.
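That declarative intent can be sketched with Rook's CephCluster custom resource; treat the specific values below as illustrative (the field names follow upstream Rook Ceph examples, but the image tag and sizing are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # illustrative Ceph image reference
  mon:
    count: 3                       # desired monitor quorum size
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true              # let the operator discover nodes/devices
    useAllDevices: true
```

The operator continuously reconciles this desired state; for example, if a monitor Pod fails, it works to restore the declared count of 3.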
The other options do not match the question:
“Kubernetes” is the orchestrator itself, not a storage operator. “Helm” is a package manager for Kubernetes apps; it can install storage software, but it is not an operator that continuously reconciles and self-manages. “CSI” (Container Storage Interface) is an interface specification that enables pluggable storage drivers; CSI drivers provision and attach volumes, but CSI itself is not a “storage operator” with the broader self-managing operator semantics described here.
So, for “storage operator that can help with self-* behaviors,” Rook is the correct choice.
What is the Kubernetes object used for running a recurring workload?
- A . Job
- B . Batch
- C . DaemonSet
- D . CronJob
D
Explanation:
A recurring workload in Kubernetes is implemented with a CronJob, so the correct choice is D. A CronJob is a controller that creates Jobs on a schedule defined in standard cron format (minute, hour, day of month, month, day of week). This makes CronJobs ideal for periodic tasks like backups, report generation, log rotation, and cleanup tasks.
A Job (option A) is run-to-completion but is typically a one-time execution; it ensures that a specified number of Pods successfully terminate. You can use a Job repeatedly, but something else must create it each time; CronJob is that built-in scheduler.
Option B (“Batch”) is not a standard workload resource type (batch is an API group, not the object name used here).
Option C (DaemonSet) ensures one Pod runs on every node (or selected nodes), which is not “recurring,” it’s “always present per node.”
CronJobs include operational controls that matter in real clusters. For example, concurrencyPolicy controls what happens if a scheduled run overlaps with a previous run (Allow, Forbid, Replace). startingDeadlineSeconds can handle missed schedules (e.g., if the controller was down). History limits (successfulJobsHistoryLimit, failedJobsHistoryLimit) help manage cleanup and troubleshooting. Each scheduled execution results in a Job with its own Pods, which can be inspected with kubectl get jobs and kubectl logs.
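The controls above fit together in a single manifest; here is a minimal sketch (name, image, and args are placeholders, the field names are standard batch/v1 CronJob fields):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup            # hypothetical name
spec:
  schedule: "0 2 * * *"           # every day at 02:00
  concurrencyPolicy: Forbid       # skip a run if the previous one is still going
  startingDeadlineSeconds: 300    # tolerate up to 5 min of missed-schedule delay
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: example.com/backup:latest    # placeholder image
              args: ["--target", "s3://backups"]  # placeholder args
```

Each run that this schedule triggers appears as its own Job, which you can inspect with `kubectl get jobs` and `kubectl logs`.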
So the correct Kubernetes object for a recurring workload is CronJob (D): it provides native scheduling and creates Jobs automatically according to the defined cadence.
Which one of the following is an open source runtime security tool?
- A . lxd
- B . containerd
- C . falco
- D . gVisor
C
Explanation:
The correct answer is C: Falco. Falco is a widely used open-source runtime security tool (originally created by Sysdig and now a CNCF project) designed to detect suspicious behavior at runtime by monitoring system calls and other kernel-level signals. In Kubernetes environments, Falco helps identify threats such as unexpected shell access in containers, privilege escalation attempts, access to sensitive files, anomalous network tooling, crypto-mining patterns, and other behaviors that indicate compromise or policy violations.
The other options are not primarily “runtime security tools” in the detection/alerting sense:
containerd is a container runtime responsible for executing containers; it’s not a security detection tool.
lxd is a system container and VM manager; again, not a runtime threat detection tool.
gVisor is a sandboxed container runtime that improves isolation by interposing a user-space kernel; it’s a security mechanism, but the question asks for a runtime security tool (monitoring/detection). Falco fits that definition best.
In cloud-native security practice, Falco typically runs as a DaemonSet so it can observe activity on every node. It uses rules to define what “bad” looks like and can emit alerts to SIEM systems, logging backends, or incident response workflows. This complements preventative controls like RBAC, Pod Security Admission, seccomp, and least privilege configurations. Preventative controls reduce risk; Falco provides visibility and detection when something slips through.
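A detection rule of the kind described above can be sketched in Falco's YAML rule format; the condition fields and output placeholders below follow the style of Falco's default ruleset, but treat the specifics as illustrative rather than an exact upstream rule:

```yaml
- rule: Terminal Shell in Container
  desc: Detect an interactive shell spawned inside a container
  condition: >
    spawned_process and container and
    proc.name in (bash, sh) and proc.tty != 0
  output: "Shell opened in container (user=%user.name container=%container.name command=%proc.cmdline)"
  priority: WARNING
```

When the condition matches observed system-call activity, Falco emits the formatted output as an alert that can be routed to logging or incident-response tooling.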
Therefore, among the provided choices, the verified runtime security tool is Falco (C).
What’s the difference between a security profile and a security context?
- A . Security Contexts configure Clusters and Namespaces at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
- B . Security Contexts configure Pods and Containers at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
- C . Security Profiles configure Pods and Containers at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
- D . Security Profiles configure Clusters and Namespaces at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
B
Explanation:
The correct answer is B. In Kubernetes, a security context is part of the Pod and container specification that configures runtime security settings for that workload: things like runAsUser, runAsNonRoot, Linux capabilities, readOnlyRootFilesystem, allowPrivilegeEscalation, SELinux options, seccomp profile selection, and filesystem group (fsGroup). These settings directly affect how the Pod’s containers run on the node.
A security profile, in contrast, is a higher-level policy/standard enforced by the cluster control plane (typically via admission control) to ensure workloads meet required security constraints. In modern Kubernetes, this concept aligns with mechanisms like Pod Security Standards (Privileged, Baseline, Restricted) enforced through Pod Security Admission. The “profile” defines what is allowed or forbidden (for example, disallow privileged containers, disallow hostPath mounts, require non-root, restrict capabilities). The control plane enforces these constraints by validating or rejecting Pod specs that do not comply, ensuring consistent security posture across namespaces and teams.
Options A and D are incorrect because security contexts do not “configure clusters and namespaces at runtime”; security contexts apply to Pods and containers.
Option C reverses the relationship: security profiles don’t configure Pods at runtime; they constrain what security context settings (and other fields) are acceptable.
Practically, you can think of it as:
Security Context = workload-level configuration knobs (declared in manifests, applied at runtime).
Security Profile/Standards = cluster-level guardrails that determine which knobs/settings are permitted.
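The two levels above can be sketched side by side; the names and image are placeholders, but the securityContext fields and the Pod Security Admission label are standard Kubernetes:

```yaml
# Workload-level knobs: securityContext, declared in the Pod spec
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                 # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true
    fsGroup: 2000
  containers:
    - name: app
      image: example.com/app:1.0     # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
---
# Cluster-level guardrail: Pod Security Admission enforcing the
# "restricted" profile for every Pod created in this namespace
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                       # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
```

A Pod that violated the restricted profile (say, by requesting a privileged container) would be rejected at admission, before it ever runs.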
This separation supports least privilege: developers declare needed runtime settings, and cluster governance ensures those settings stay within approved boundaries. Therefore, B is the verified answer.
Which of the following options is true about considerations for large Kubernetes clusters?
- A . Kubernetes supports up to 1000 nodes and recommends no more than 1000 containers per node.
- B . Kubernetes supports up to 5000 nodes and recommends no more than 500 Pods per node.
- C . Kubernetes supports up to 5000 nodes and recommends no more than 110 Pods per node.
- D . Kubernetes supports up to 50 nodes and recommends no more than 1000 containers per node.
C
Explanation:
The correct answer is C: Kubernetes scalability guidance cites support for up to 5,000 nodes and recommends no more than 110 Pods per node (alongside cluster-wide limits of 150,000 total Pods and 300,000 total containers). The 110-Pod figure is also the kubelet’s default maxPods setting; it is a practical limit based on kubelet, networking, and IP-addressing constraints, as well as performance characteristics for scheduling, service routing, and node-level resource management. Common CNI/IPAM defaults assign each node a /24 Pod CIDR (256 addresses), which leaves headroom above 110 for address churn.
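The per-node ceiling surfaces as the kubelet’s maxPods setting; a minimal KubeletConfiguration fragment showing the default looks like this:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110   # the default; raising it needs matching Pod CIDR sizing and resource headroom
```

Operators can raise this on large nodes, but doing so without replanning IP addressing and node resources tends to surface the pressures described above.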
Why the other options are incorrect: A and D reference “containers per node,” which is not the standard sizing guidance (Kubernetes typically discusses Pods per node). B’s “500 Pods per node” is far above typical recommended limits for many environments and would stress IPAM, kubelet, and node resources significantly.
In large clusters, several considerations matter beyond the headline limits: API server and etcd performance, watch/list traffic, controller reconciliation load, CoreDNS scaling, and metrics/observability overhead. You must also plan for IP addressing (cluster CIDR sizing), node sizes (CPU/memory), and autoscaling behavior. On each node, kubelet and the container runtime must handle churn (starts/stops), logging, and volume operations. Networking implementations (kube-proxy, eBPF dataplanes) also have scaling characteristics.
Kubernetes provides patterns to keep systems stable at scale: request/limit discipline, Pod disruption budgets, topology spread constraints, namespaces and quotas, and careful observability sampling. But the exam-style fact this question targets is the published scalability figure and per-node Pod recommendation.
Therefore, the verified true statement among the options is C.
What’s the most adopted way of conflict resolution and decision-making for the open-source projects under the CNCF umbrella?
- A . Financial Analysis
- B . Discussion and Voting
- C . Flipism Technique
- D . Project Founder Say
B
Explanation:
B (Discussion and Voting) is correct. CNCF-hosted open-source projects generally operate with open governance practices that emphasize transparency, community participation, and documented decision-making. While each project can have its own governance model (maintainers, technical steering committees, SIGs, TOC interactions, etc.), a very common and widely adopted approach to resolving disagreements and making decisions is to first pursue discussion (often on GitHub issues/PRs, mailing lists, or community meetings) and then use voting/consensus mechanisms when needed.
This approach is important because open-source communities are made up of diverse contributors across companies and geographies. “Project Founder Say” (D) is not a sustainable or typical CNCF governance norm for mature projects; CNCF explicitly encourages neutral, community-led governance rather than single-person control. “Financial Analysis” (A) is not a conflict resolution mechanism for technical decisions, and “Flipism Technique” (C) is not a real governance practice.
In Kubernetes specifically, community decisions are often made within structured groups (e.g., SIGs) using discussion and consensus-building, sometimes followed by formal votes where governance requires it. The goal is to ensure decisions are fair, recorded, and aligned with the project’s mission and contributor expectations. This also reduces risk of vendor capture and builds trust: anyone can review the rationale in meeting notes, issues, or PR threads, and decisions can be revisited with new evidence.
Therefore, the most adopted conflict resolution and decision-making method across CNCF open-source projects is discussion and voting, making B the verified correct answer.
What is ephemeral storage?
- A . Storage space that need not persist across restarts.
- B . Storage that may grow dynamically.
- C . Storage used by multiple consumers (e.g., multiple Pods).
- D . Storage that is always provisioned locally.
A
Explanation:
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime, such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts: data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage.
Option C is incorrect because multiple consumers is about access semantics (ReadWriteMany etc.) and shared volumes, not ephemerality.
Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not necessarily physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
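Those best practices combine in a Pod spec like the following sketch (name and image are placeholders; the ephemeral-storage resource keys and emptyDir sizeLimit are standard fields):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-worker               # hypothetical name
spec:
  containers:
    - name: worker
      image: example.com/worker:1.0  # placeholder image
      resources:
        requests:
          ephemeral-storage: "1Gi"   # scheduler accounts for node disk
        limits:
          ephemeral-storage: "2Gi"   # exceeding this can get the Pod evicted
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: "1Gi"             # bounded scratch space, deleted with the Pod
```

Everything written under /tmp/scratch lives only as long as the Pod, which is exactly the lifecycle guarantee the question is testing.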
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
Which of the following statements is correct concerning Open Policy Agent (OPA)?
- A . The policies must be written in Python language.
- B . Kubernetes can use it to validate requests and apply policies.
- C . Policies can only be tested when published.
- D . It cannot be used outside Kubernetes.
B
Explanation:
Open Policy Agent (OPA) is a general-purpose policy engine used to define and enforce policy across different systems. In Kubernetes, OPA is commonly integrated through admission control (often via Gatekeeper or custom admission webhooks) to validate and/or mutate requests before they are persisted in the cluster. This makes B correct: Kubernetes can use OPA to validate API requests and apply policy decisions.
Kubernetes’ admission chain is where policy enforcement naturally fits. When a user or controller submits a request (for example, to create a Pod), the API server can call external admission webhooks. Those webhooks can evaluate the request against policy, such as “no privileged containers,” “images must come from approved registries,” “labels must include cost-center,” or “Ingress must enforce TLS.” OPA’s policy language (Rego) allows expressing these rules in a declarative form, and the decision (“allow/deny” and sometimes patches) is returned to the API server. This enforces governance consistently and centrally.
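A “no privileged containers” rule of that kind can be sketched in Rego; this assumes the common pattern where the webhook passes a standard AdmissionReview payload to OPA as `input` (the package name is illustrative):

```rego
package kubernetes.admission

# Deny Pods whose containers request privileged mode.
deny[msg] {
  input.request.kind.kind == "Pod"
  container := input.request.object.spec.containers[_]
  container.securityContext.privileged
  msg := sprintf("privileged container not allowed: %v", [container.name])
}
```

Because Rego policies are plain data-driven rules, they can be unit-tested locally (for example with `opa test`) before they ever reach the cluster, which is the point option C gets wrong.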
Option A is incorrect because OPA policies are written in Rego, not Python.
Option C is incorrect because policies can be tested locally and in CI pipelines before deployment; in fact, testability is a key advantage.
Option D is incorrect because OPA is designed to be platform-agnostic; it can be used with APIs, microservices, CI/CD pipelines, service meshes, and infrastructure tools, not only Kubernetes.
From a Kubernetes fundamentals view, OPA complements RBAC: RBAC answers “who can do what to which resources,” while OPA-style admission policies answer “even if you can create this resource, does it meet our organizational rules?” Together they help implement defense in depth: authentication + authorization + policy admission + runtime security controls. That is why OPA is widely used to enforce security and compliance requirements in Kubernetes environments.
Which control plane component is responsible for updating the node Ready condition if a node becomes unreachable?
- A . The kube-proxy
- B . The node controller
- C . The kubectl
- D . The kube-apiserver
B
Explanation:
The correct answer is B: the node controller. In Kubernetes, node health is monitored and reflected through Node conditions such as Ready. The Node Controller (a controller that runs as part of the control plane, within the controller-manager) is responsible for monitoring node heartbeats and updating node status when a node becomes unreachable or unhealthy.
Nodes periodically report status (including kubelet heartbeats) to the API server. The Node Controller watches these updates. If it detects that a node has stopped reporting within expected time windows, it marks the node condition Ready as Unknown (or otherwise updates conditions) to indicate the control plane can’t confirm node health. This status change then influences higher-level behaviors such as Pod eviction and rescheduling: after grace periods and eviction timeouts, Pods on an unhealthy node may be evicted so the workload can be recreated on healthy nodes (assuming a controller manages replicas).
Option A (kube-proxy) is a node component for Service traffic routing and does not manage node health conditions.
Option C (kubectl) is a CLI client; it does not participate in control plane health monitoring.
Option D (kube-apiserver) stores and serves Node status, but it doesn’t decide when a node is unreachable; it persists what controllers and kubelets report. The “decision logic” for updating the Ready condition in response to missing heartbeats is the Node Controller’s job.
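The result of that decision logic is visible in the Node’s status conditions. A trimmed sketch of what `kubectl get node <name> -o yaml` might show after heartbeats stop (timestamp illustrative, message paraphrased from the usual controller output):

```yaml
status:
  conditions:
    - type: Ready
      status: "Unknown"              # set by the node controller, not the kubelet
      reason: NodeStatusUnknown
      message: Kubelet stopped posting node status.
      lastHeartbeatTime: "2024-01-01T12:00:00Z"   # illustrative timestamp
```

Once Ready flips to Unknown and the eviction timeouts elapse, Pods on the node become candidates for eviction and rescheduling elsewhere.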
So, the component that updates the Node Ready condition when a node becomes unreachable is the node controller, which is option B.
