Practice Free KCNA Exam Online Questions
What Linux namespace is shared by default by containers running within a Kubernetes Pod?
- A . Host Network
- B . Network
- C . Process ID
- D . Process Name
B
Explanation:
By default, containers in the same Kubernetes Pod share the network namespace, which means they share the same IP address and port space. Therefore, the correct answer is B (Network).
This shared network namespace is a key part of the Pod abstraction. Because all containers in a Pod share networking, they can communicate with each other over localhost and coordinate tightly, which is the basis for patterns like sidecars (service mesh proxies, log shippers, config reloaders). It also means containers must coordinate port usage: if two containers try to bind the same port on 0.0.0.0, they’ll conflict because they share the same network namespace and therefore the same port space.
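As a sketch, a minimal two-container Pod illustrates this shared networking (the names and images here are purely illustrative):

```yaml
# Both containers run in one network namespace: "web" can reach "cache"
# at localhost:6379, and both must avoid binding the same port.
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx:1.25      # listens on port 80
  - name: cache
    image: redis:7         # listens on port 6379; reachable from "web" via localhost
```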
Option A (“Host Network”) is different: hostNetwork: true is an optional Pod setting that puts the Pod into the node’s network namespace, not the Pod’s shared namespace. It is not the default and is generally used sparingly due to security and port-collision risks.
Option C (“Process ID”) is not shared by default in Kubernetes; PID namespace sharing requires explicitly enabling process namespace sharing (e.g., shareProcessNamespace: true).
Option D (“Process Name”) is not a Linux namespace concept.
The Pod model also commonly implies shared storage volumes (if defined) and shared IPC namespace in some configurations, but the universally shared-by-default namespace across containers in the same Pod is the network namespace. This default behavior is why Kubernetes documentation explains a Pod as a “logical host” for one or more containers: the containers are co-located and share certain namespaces as if they ran on the same host.
So, the correct, verified answer is B: containers in the same Pod share the Network namespace by default.
Which mechanism can be used to automatically adjust the amount of resources for an application?
- A . Horizontal Pod Autoscaler (HPA)
- B . Kubernetes Event-driven Autoscaling (KEDA)
- C . Cluster Autoscaler
- D . Vertical Pod Autoscaler (VPA)
A
Explanation:
The verified answer in the PDF is A (HPA), and that aligns with the common Kubernetes meaning of “adjust resources for an application” by scaling replicas. The Horizontal Pod Autoscaler automatically
changes the number of Pod replicas for a workload (typically a Deployment) based on observed metrics such as CPU utilization, memory (in some configurations), or custom/external metrics. By increasing replicas under load, the application gains more total CPU/memory capacity available across Pods; by decreasing replicas when load drops, it reduces resource consumption and cost.
It’s important to distinguish what each mechanism adjusts:
- HPA adjusts the replica count (horizontal scaling).
- VPA adjusts Pod resource requests/limits (vertical scaling), which is literally the “amount of CPU/memory per Pod,” but depending on its mode it often requires Pod restarts to apply changes.
- Cluster Autoscaler adjusts the number of nodes in the cluster, not application replicas.
- KEDA provides event-driven autoscaling that often drives HPA behavior using external event sources (queues, streams), but it is not the primary built-in mechanism referenced in many foundational Kubernetes questions.
Given the wording and the provided answer key, the intended interpretation is: “automatically adjust the resources available to the application” by scaling out/in the number of replicas. That’s exactly HPA’s role. For example, if CPU utilization exceeds a target (say 60%), HPA computes a higher desired replica count and updates the workload. The Deployment then creates more Pods, distributing load and increasing available compute.
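A CPU-target HPA of this kind can be sketched as follows (workload name and bounds are illustrative):

```yaml
# Hypothetical HPA: keep the "my-app" Deployment between 2 and 10 replicas,
# targeting 60% average CPU utilization across its Pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```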
So, within this question set, the verified correct choice is A (Horizontal Pod Autoscaler).
At which layer would distributed tracing be implemented in a cloud native deployment?
- A . Network
- B . Application
- C . Database
- D . Infrastructure
B
Explanation:
Distributed tracing is implemented primarily at the application layer, so B is correct. The reason is simple: tracing is about capturing the end-to-end path of a request as it traverses services, libraries, queues, and databases. That “request context” (trace ID, span ID, baggage) must be created, propagated, and enriched as code executes. While infrastructure components (proxies, gateways, service meshes) can generate or augment trace spans, the fundamental unit of tracing is still tied to application operations (an HTTP handler, a gRPC call, a database query, a cache lookup).
In Kubernetes-based microservices, distributed tracing typically uses standards like OpenTelemetry for instrumentation and context propagation. Application frameworks emit spans for key operations, attach attributes (route, status code, tenant, retry count), and propagate context via headers (e.g., W3C Trace Context). This is what lets you reconstruct “Service A → Service B → Service C” for one user request and identify the slow or failing hop.
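The header propagation itself can be sketched in plain Python. This is a simplified illustration of the W3C Trace Context `traceparent` format, not a real tracing SDK; the function names are made up for this sketch:

```python
import re
import secrets

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    """Build a W3C Trace Context 'traceparent' header value.
    Format: version-traceid-spanid-flags, i.e. 00-<32 hex>-<16 hex>-<2 hex>."""
    trace_id = trace_id or secrets.token_hex(16)  # 16 random bytes -> 32 hex chars
    span_id = span_id or secrets.token_hex(8)     # 8 random bytes  -> 16 hex chars
    flags = "01" if sampled else "00"             # 01 = sampled
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    """Split a traceparent header into (version, trace_id, span_id, flags);
    return None if the header is malformed."""
    m = re.fullmatch(r"([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    return m.groups() if m else None

# A downstream service keeps the incoming trace ID but mints a new span ID,
# which is what stitches one request path together across services.
incoming = make_traceparent()
version, trace_id, span_id, flags = parse_traceparent(incoming)
outgoing = make_traceparent(trace_id=trace_id)
```

The key property shown here is that the trace ID is carried unchanged hop to hop while each service contributes its own span, which is exactly what lets a backend reconstruct the full request path.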
Why other layers are not the best answer:
- Network focuses on packets and flows, but tracing is not a packet-capture problem; it is a causal request-path problem across services.
- Database spans are part of traces, but tracing is not “implemented in the database layer” overall; DB spans are one component.
- Infrastructure provides the platform and can observe traffic, but without application context it cannot fully represent business operations (and many useful attributes live in app code).
So the correct layer for “where tracing is implemented” is the application layer: even when a mesh or proxy helps, it is still describing application request execution across components.
What is the default deployment strategy in Kubernetes?
- A . Rolling update
- B . Blue/Green deployment
- C . Canary deployment
- D . Recreate deployment
A
Explanation:
For Kubernetes Deployments, the default update strategy is RollingUpdate, which corresponds to “Rolling update” in option A. Rolling updates replace old Pods with new Pods gradually, aiming to maintain availability during the rollout. Kubernetes does this by creating a new ReplicaSet for the updated Pod template and then scaling the new ReplicaSet up while scaling the old one down.
The pace and safety of a rolling update are controlled by parameters like maxUnavailable and maxSurge. maxUnavailable limits how many replicas can be unavailable during the update, protecting availability. maxSurge controls how many extra replicas can be created temporarily above the desired count, helping speed up rollouts while maintaining capacity. If readiness probes fail, Kubernetes will pause progression because new Pods aren’t becoming Ready, helping prevent a bad version from fully replacing a good one.
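These knobs live under the Deployment’s spec.strategy; an illustrative excerpt (the replica count and values are arbitrary examples):

```yaml
# Excerpt of a Deployment spec: tune rollout pace and safety.
spec:
  replicas: 4
  strategy:
    type: RollingUpdate      # the default for Deployments
    rollingUpdate:
      maxUnavailable: 1      # never more than 1 of the 4 replicas down mid-rollout
      maxSurge: 1            # at most 1 temporary extra replica above desired count
```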
Options B (Blue/Green) and C (Canary) are popular progressive delivery patterns, but they are not the default built-in Deployment strategy. They are typically implemented using additional tooling (service mesh routing, traffic splitting controllers, or specialized rollout controllers) or by operating multiple Deployments/Services.
Option D (Recreate) is a valid strategy but not the default; it terminates all old Pods before creating new ones, causing downtime unless you have external buffering or multi-tier redundancy.
From an application delivery perspective, RollingUpdate aligns with Kubernetes’ declarative model: you update the desired Pod template and let the controller converge safely. kubectl rollout status is commonly used to monitor progress. Rollbacks are also supported because the Deployment tracks history. Therefore, the verified correct answer is A: Rolling update.
How do you deploy a workload to Kubernetes without additional tools?
- A . Create a Bash script and run it on a worker node.
- B . Create a Helm Chart and install it with helm.
- C . Create a manifest and apply it with kubectl.
- D . Create a Python script and run it with kubectl.
C
Explanation:
The standard way to deploy workloads to Kubernetes using only built-in tooling is to create Kubernetes manifests (YAML/JSON definitions of API objects) and apply them with kubectl, so C is correct. Kubernetes is a declarative system: you describe the desired state of resources (e.g., a Deployment, Service, ConfigMap, Ingress) in a manifest file, then submit that desired state to the API server. Controllers reconcile the actual cluster state to match what you declared.
A manifest typically includes mandatory fields like apiVersion, kind, and metadata, and then a spec describing desired behavior. For example, a Deployment manifest declares replicas and the Pod template (containers, images, ports, probes, resources). Applying the manifest with kubectl apply -f <file> creates or updates the resources. kubectl apply is also designed to work well with iterative changes: you update the file, re-apply, and Kubernetes performs a controlled rollout based on controller logic.
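A minimal sketch of such a manifest, using illustrative names and a public image:

```yaml
# deployment.yaml -- declare the desired state, then:
#   kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Re-applying the same file after editing it (for example, bumping the image tag) triggers a controlled rollout rather than a destructive replace.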
Option B (Helm) is indeed a popular deployment tool, but Helm is explicitly an “additional tool” beyond kubectl and the Kubernetes API. The question asks “without additional tools,” so Helm is excluded by definition.
Option A (running Bash scripts on worker nodes) bypasses Kubernetes’ desired-state control and is not how Kubernetes workload deployment is intended; it also breaks portability and operational safety.
Option D is not a standard Kubernetes deployment mechanism; kubectl does not “run Python scripts” to deploy workloads (though scripts can automate kubectl, that’s still not the primary mechanism).
From a cloud native delivery standpoint, manifests support GitOps, reviewable changes, and repeatable deployments across environments. The Kubernetes-native approach is: declare resources in manifests and apply them to the cluster. Therefore, C is the verified correct answer.
In Kubernetes, which command is the most efficient way to check the progress of a Deployment rollout and confirm if it has completed successfully?
- A . kubectl get deployments --show-labels -o wide
- B . kubectl describe deployment my-deployment --namespace=default
- C . kubectl logs deployment/my-deployment --all-containers=true
- D . kubectl rollout status deployment/my-deployment
D
Explanation:
When performing rolling updates in Kubernetes, it is important to have a clear and efficient way to track the progress of a Deployment rollout and determine whether it has completed successfully. The most direct and purpose-built command for this task is kubectl rollout status deployment/my-deployment, making option D the correct answer.
The kubectl rollout status command is specifically designed to monitor the state of rollouts for resources such as Deployments, StatefulSets, and DaemonSets. It provides real-time feedback on the rollout process, including whether new Pods have been created, old Pods are being terminated, and if the desired number of updated replicas has become available. The command blocks until the rollout either completes successfully or fails, which makes it especially useful in automation and CI/CD pipelines.
Option A is incorrect because kubectl get deployments only provides a snapshot view of deployment status fields and does not actively track rollout progress.
Option B can provide detailed information and events, but it is verbose and not optimized for quickly confirming rollout completion.
Option C is incorrect because Deployment objects themselves do not produce logs; logs are generated by Pods and containers, not higher-level workload resources.
The rollout status command also integrates with Kubernetes’ revision history, ensuring that it accurately reflects the current state of the Deployment’s update strategy. If a rollout is stuck due to failed Pods, readiness probe failures, or resource constraints, the command will indicate that the rollout is not progressing, helping operators quickly identify issues.
In summary, kubectl rollout status deployment/my-deployment is the most efficient and reliable way to check rollout progress and confirm success. It is purpose-built for rollout tracking, easy to interpret, and widely used in production Kubernetes workflows, making Option D the correct and verified answer.
Which of these components is part of the Kubernetes Control Plane?
- A . CoreDNS
- B . cloud-controller-manager
- C . kube-proxy
- D . kubelet
B
Explanation:
The Kubernetes control plane is the set of components responsible for making cluster-wide decisions (like scheduling) and detecting and responding to cluster events (like starting new Pods when they fail). In upstream Kubernetes architecture, the canonical control plane components include kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, and, when running on a cloud provider, the cloud-controller-manager. That makes option B the correct answer: cloud-controller-manager is explicitly a control plane component that integrates Kubernetes with the underlying cloud.
The cloud-controller-manager runs controllers that talk to cloud APIs for infrastructure concerns such as node lifecycle, routes, and load balancers. For example, when you create a Service of type LoadBalancer, a controller in this component is responsible for provisioning a cloud load balancer and updating the Service status. This is clearly control-plane behavior: reconciling desired state into real infrastructure state.
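The trigger for that behavior is just an ordinary Service manifest; a sketch with illustrative names and ports:

```yaml
# "type: LoadBalancer" asks the cloud-controller-manager's service controller
# to provision a cloud load balancer and record its address in the
# Service's status (status.loadBalancer).
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```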
Why the others are not control plane components (in the classic classification): kubelet is a node component (agent) responsible for running and managing Pods on a specific node. kube-proxy is also a node component that implements Service networking rules on nodes. CoreDNS is usually deployed as a cluster add-on for DNS-based service discovery; it’s critical, but it’s not a control plane component in the strict architectural list.
So, while many clusters run CoreDNS in kube-system, the Kubernetes component that is definitively “part of the control plane” among these choices is cloud-controller-manager (B).
What can be used to create a job that will run at specified times/dates or on a repeating schedule?
- A . Job
- B . CalendarJob
- C . BatchJob
- D . CronJob
D
Explanation:
The correct answer is D: CronJob. A Kubernetes CronJob is specifically designed for creating Jobs on a schedule, either at specified times/dates (expressed via cron syntax) or on a repeating interval (hourly, daily, weekly). When the schedule triggers, the CronJob controller creates a Job, and the Job controller creates the Pods that execute the workload to completion.
Option A (Job) is not inherently scheduled. A Job runs when you create it, and it continues until it completes successfully or fails according to its retry/backoff behavior. If you want it to run periodically, you need something else to create the Job each time. CronJob is the built-in mechanism for that scheduling.
Options B and C are not standard Kubernetes workload objects. Kubernetes does not include “CalendarJob” or “BatchJob” as official API kinds. The scheduling primitive is CronJob.
CronJobs also include important operational controls: concurrency policies prevent overlapping runs, deadlines control missed schedules, and history limits manage old Job retention. This makes CronJobs more robust than ad-hoc scheduling approaches and keeps the workload lifecycle visible in the Kubernetes API (status/events/logs). It also means you can apply standard Kubernetes patterns: use a service account with least privilege, mount Secrets/ConfigMaps, run in specific namespaces, and manage resource requests/limits so that scheduled workloads don’t destabilize the cluster.
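Those controls appear directly in the CronJob spec; a sketch with illustrative names and values:

```yaml
# Run a cleanup Job every day at 02:30.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup
spec:
  schedule: "30 2 * * *"           # cron syntax: minute hour day-of-month month day-of-week
  concurrencyPolicy: Forbid        # skip a run if the previous one is still active
  startingDeadlineSeconds: 300     # give up on a run missed by more than 5 minutes
  successfulJobsHistoryLimit: 3    # keep a few finished Jobs for inspection
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: busybox:1.36
            command: ["sh", "-c", "echo cleaning up"]
```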
So the correct Kubernetes resource for scheduled and repeating job execution is CronJob (D).
Kubernetes Secrets are specifically intended to hold confidential data.
Which API object should be used to hold non-confidential data?
- A . CNI
- B . CSI
- C . ConfigMaps
- D . RBAC
C
Explanation:
In Kubernetes, different API objects are designed for different categories of configuration and operational data. Secrets are used to store sensitive information such as passwords, API tokens, and encryption keys. For data that is not confidential, Kubernetes provides the ConfigMap resource, making option C the correct answer.
ConfigMaps are intended to hold non-sensitive configuration data that applications need at runtime. Examples include application configuration files, feature flags, environment-specific settings, URLs, port numbers, and command-line arguments. ConfigMaps allow developers to decouple configuration from application code, which aligns with cloud-native and twelve-factor app principles. This separation makes applications more portable, easier to manage, and simpler to update without rebuilding container images.
ConfigMaps can be consumed by Pods in several ways: as environment variables, as command-line arguments, or as files mounted into a container’s filesystem. Because they are not designed for confidential data, ConfigMaps store values in plaintext and do not provide encryption by default. This is why sensitive data must always be stored in Secrets instead.
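Both consumption styles can be sketched together (names, keys, and mount path are illustrative):

```yaml
# A ConfigMap plus a Pod consuming it as an env var and as a mounted file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  app.properties: |
    feature.flag=true
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: LOG_LEVEL          # injected as an environment variable
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
    volumeMounts:
    - name: config
      mountPath: /etc/app      # app.properties appears as a file here
  volumes:
  - name: config
    configMap:
      name: app-config
```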
Option A, CNI (Container Network Interface), is a networking specification used to configure Pod networking and is unrelated to data storage.
Option B, CSI (Container Storage Interface), is used for integrating external storage systems with Kubernetes and does not store configuration data.
Option D, RBAC, defines authorization policies and access controls within the cluster and is not a data storage mechanism.
While both Secrets and ConfigMaps can technically be accessed in similar ways by Pods, Kubernetes clearly distinguishes their intended use cases based on data sensitivity. Using ConfigMaps for non-confidential data improves clarity, security posture, and maintainability of Kubernetes configurations.
Therefore, the correct and verified answer is Option C: ConfigMaps, which are explicitly designed to hold non-confidential configuration data in Kubernetes.
What is the difference between a Deployment and a ReplicaSet?
- A . With a Deployment, you can’t control the number of pod replicas.
- B . A ReplicaSet does not guarantee a stable set of replica pods running.
- C . A Deployment is basically the same as a ReplicaSet with annotations.
- D . A Deployment is a higher-level concept that manages ReplicaSets.
D
Explanation:
A Deployment is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, so D is correct. A ReplicaSet’s primary job is to ensure that a specified number of Pod replicas are running at any time, based on a label selector and Pod template. It’s a fundamental “keep N Pods alive” controller.
Deployments build on that by managing the lifecycle of ReplicaSets over time. When you update a Deployment (for example, changing the container image tag or environment variables), Kubernetes creates a new ReplicaSet for the new Pod template and gradually shifts replicas from the old ReplicaSet to the new one according to the rollout strategy (RollingUpdate by default). Deployments also retain revision history, making it possible to roll back to a previous ReplicaSet if a rollout fails.
Why the other options are incorrect:
- A is false: Deployments absolutely control the number of replicas via spec.replicas, and the count can also be managed by an HPA.
- B is false: a ReplicaSet’s core purpose is precisely to guarantee that a stable number of replicas is running.
- C is false: a Deployment is not “a ReplicaSet with annotations.” It is a distinct API resource with additional controller logic for declarative updates, rollouts, and revision tracking.
Operationally, most teams create Deployments rather than ReplicaSets directly because Deployments are safer and more feature-complete for application delivery. ReplicaSets still appear in real clusters because Deployments create them automatically; you’ll commonly see multiple ReplicaSets during rollout transitions. Understanding the hierarchy is crucial for troubleshooting: if Pods aren’t behaving as expected, you often trace from Deployment → ReplicaSet → Pod, checking selectors, events, and rollout status.
So the key difference is: ReplicaSet maintains replica count; Deployment manages ReplicaSets and orchestrates updates. Therefore, D is the verified answer.
