Practice Free KCNA Exam Online Questions
Question #11
What native runtime is Open Container Initiative (OCI) compliant?
- A . runC
- B . runV
- C . kata-containers
- D . gvisor
Correct Answer: A
Explanation:
The Open Container Initiative (OCI) publishes open specifications for container images and container runtimes so that tools across the ecosystem remain interoperable. When a runtime is “OCI-compliant,” it means it implements the OCI Runtime Specification (how to run a container from a filesystem bundle and configuration) and/or works cleanly with OCI image formats through the usual layers (image → unpack → runtime). runC is the best-known, widely used reference implementation of the OCI runtime specification and is the low-level runtime underneath many higher-level systems. In Kubernetes, you typically interact with a higher-level container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). That higher-level runtime then uses a low-level OCI runtime to actually create Linux namespaces/cgroups, set up the container process, and start it. In many default installations, containerd delegates to runC for this low-level “create/start” work.
The other options are related but differ in what they are: Kata Containers uses lightweight VMs to provide stronger isolation while still presenting a container-like workflow; gVisor provides a user-space kernel for sandboxing containers. Both can be used with Kubernetes via compatible integrations, but the canonical "native OCI runtime" answer in most curricula is runC. Finally, runV was an early hypervisor-based runtime whose work was later merged into the Kata Containers project, so it is not a common choice today. The most correct, standards-based answer here is therefore A (runC), because it directly implements the OCI runtime spec and is commonly used as the default low-level runtime behind CRI implementations.
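To make the distinction concrete, sandboxed runtimes such as Kata Containers or gVisor are typically selected per Pod through a RuntimeClass object, while runC remains the default handler. This is a sketch only: the handler name, Pod name, and image are illustrative assumptions, and the real handler must match what the node's container runtime is configured with.

```yaml
# Hypothetical RuntimeClass: the handler must correspond to a runtime
# configured on the node (e.g. in containerd's configuration).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-containers
handler: kata                        # assumed handler name; node-dependent
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod                # example name
spec:
  runtimeClassName: kata-containers  # opt this Pod into the sandboxed runtime
  containers:
    - name: app
      image: nginx:1.25              # example image
```

Pods without a runtimeClassName continue to run under the cluster's default low-level runtime, typically runC.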
Question #12
What is the role of a NetworkPolicy in Kubernetes?
- A . The ability to cryptic and obscure all traffic.
- B . The ability to classify the Pods as isolated and non isolated.
- C . The ability to prevent loopback or incoming host traffic.
- D . The ability to log network security events.
Correct Answer: B
Explanation:
A Kubernetes NetworkPolicy defines which traffic is allowed to and from Pods by selecting Pods and specifying ingress/egress rules. A key conceptual effect is that it can make Pods “isolated” (default deny except what is allowed) versus “non-isolated” (default allow). This aligns best with option B, so B is correct.
By default, Kubernetes networking is permissive: Pods can typically talk to any other Pod. When you apply a NetworkPolicy that selects a set of Pods, those selected Pods become “isolated” for the direction(s) covered by the policy (ingress and/or egress). That means only traffic explicitly allowed by the policy is permitted; everything else is denied (again, for the selected Pods and direction). This classification concept―isolated vs non-isolated―is a common way the Kubernetes documentation explains NetworkPolicy behavior.
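As a sketch of the isolation concept, the following assumed policy selects every Pod in its namespace and allows no ingress traffic, turning all of them into "isolated" Pods (the namespace name is illustrative):

```yaml
# Hypothetical default-deny policy: the empty podSelector matches every Pod
# in the namespace; with Ingress listed but no ingress rules, all inbound
# traffic to those Pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo        # example namespace
spec:
  podSelector: {}        # empty selector = all Pods in the namespace
  policyTypes:
    - Ingress            # no ingress rules follow, so all ingress is denied
```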
Option A is incorrect: NetworkPolicy does not encrypt (“cryptic and obscure”) traffic. Encryption is typically handled by mTLS via a service mesh or application-layer TLS.
Option C is not the primary role; loopback and host traffic handling depend on the network plugin and node configuration, and NetworkPolicy is not a “prevent loopback” mechanism.
Option D is incorrect because NetworkPolicy is not a logging system; while some CNIs can produce logs about policy decisions, logging is not NetworkPolicy’s role in the API.
One critical Kubernetes detail: NetworkPolicy enforcement is performed by the CNI/network plugin. If your CNI doesn’t implement NetworkPolicy, creating these objects won’t change runtime traffic. In CNIs that do support it, NetworkPolicy becomes a foundational security primitive for segmentation and least privilege: restricting database access to app Pods only, isolating namespaces, and reducing lateral movement risk.
So, in the language of the provided answers, NetworkPolicy’s role is best captured as the ability to classify Pods into isolated/non-isolated by applying traffic-allow rules―option B.
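A minimal sketch of such an allow rule, with assumed labels and port, might look like this: Pods labeled app=database become isolated, and only Pods labeled app=backend may reach them on the database port.

```yaml
# Hypothetical policy: labels and port are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
spec:
  podSelector:
    matchLabels:
      app: database          # the Pods being isolated
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend   # only these Pods may connect
      ports:
        - protocol: TCP
          port: 5432         # assumed database port
```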
Question #13
When a Kubernetes Secret is created, how is the data stored by default in etcd?
- A . As Base64-encoded strings that provide simple encoding but no actual encryption.
- B . As plain text values that are directly stored without any obfuscation or additional encoding.
- C . As compressed binary objects that are optimized for space but not secured against access.
- D . As encrypted records automatically protected using the Kubernetes control plane master key.
Correct Answer: A
Explanation:
By default, Kubernetes Secrets are stored in etcd as Base64-encoded values, which makes option A the correct answer. This is a common point of confusion because Base64 encoding is often mistaken for encryption, but in reality, it provides no security―only a reversible text encoding.
When a Secret is defined in a Kubernetes manifest or created via kubectl, its data fields are Base64-encoded before being persisted in etcd. This encoding ensures that binary data (such as certificates or keys) can be safely represented in JSON and YAML formats, which require text-based values.
However, anyone with access to etcd or the Secret object via the Kubernetes API can easily decode these values.
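A minimal Secret sketch illustrates the point; the name is hypothetical, and the data value is plain Base64, trivially reversible:

```yaml
# Sketch of a Secret manifest: values under "data" are Base64-encoded,
# not encrypted. "cGFzc3dvcmQ=" is simply "password" encoded.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # example name
type: Opaque
data:
  password: cGFzc3dvcmQ=     # base64("password"); decodable by anyone who can read it
```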
Option B is incorrect because Secrets are not stored as raw plaintext; they are encoded using Base64 before storage.
Option C is incorrect because Kubernetes does not compress Secret data by default.
Option D is incorrect because Secrets are not encrypted at rest by default. Encryption at rest must be explicitly configured using an encryption provider configuration in the Kubernetes API server.
Because of this default behavior, Kubernetes strongly recommends additional security measures when handling Secrets. These include enabling encryption at rest for etcd, restricting access to Secrets using RBAC, using short-lived ServiceAccount tokens, and integrating with external secret management systems such as HashiCorp Vault or cloud provider key management services.
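Encryption at rest is configured through an EncryptionConfiguration file passed to the API server via --encryption-provider-config. The following is a sketch with placeholder key material, not a production configuration:

```yaml
# Sketch only: the key material below is a placeholder, not a real key.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                # encrypt Secret objects at rest in etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}           # fallback so existing unencrypted data stays readable
```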
Understanding how Secrets are stored is critical for designing secure Kubernetes clusters. While Secrets provide a convenient abstraction for handling sensitive data, they rely on cluster-level security controls to ensure confidentiality. Without encryption at rest and proper access restrictions, Secret data remains vulnerable to unauthorized access.
Therefore, the correct and verified answer is Option A: Kubernetes stores Secrets as Base64-encoded strings in etcd by default, which offers encoding but not encryption.
Question #14
A user has scheduled a CronJob to run every hour.
What happens in the cluster when it’s time for this CronJob to run?
- A . Kubelet watches API Server for CronJob objects. When it’s time for a Job to run, it runs the Pod directly.
- B . Kube-scheduler watches API Server for CronJob objects, and this is why it’s called kube-scheduler.
- C . CronJob controller component creates a Pod and waits until it finishes to run.
- D . CronJob controller component creates a Job. Then the Job controller creates a Pod and waits until it finishes to run.
Correct Answer: D
Explanation:
CronJobs are implemented through Kubernetes controllers that reconcile desired state. When the scheduled time arrives, the CronJob controller (part of the controller-manager set of control plane controllers) evaluates the CronJob object’s schedule and determines whether a run should be started. Importantly, CronJob does not create Pods directly as its primary mechanism. Instead, it creates a Job object for each scheduled execution. That Job object then becomes the responsibility of the Job controller, which creates one or more Pods to complete the Job’s work and monitors them until completion. This separation of concerns is why option D is correct.
This design has practical benefits. Jobs encapsulate “run-to-completion” semantics: retries, backoff limits, completion counts, and tracking whether the work has succeeded. CronJob focuses on the temporal triggering aspect (schedule, concurrency policy, starting deadlines, history limits), while Job focuses on the execution aspect (create Pods, ensure completion, retry on failure).
Option A is incorrect because kubelet is a node agent; it does not watch CronJob objects and doesn’t decide when a schedule triggers. Kubelet reacts to Pods assigned to its node and ensures containers run there.
Option B is incorrect because kube-scheduler schedules Pods to nodes after they exist (or are created by controllers); it does not trigger CronJobs.
Option C is incorrect because CronJob does not usually create a Pod and wait directly; it delegates via a Job, which then manages Pods and completion.
So, at runtime: CronJob controller creates a Job; Job controller creates the Pod(s); scheduler assigns those Pods to nodes; kubelet runs them; Job controller observes success/failure and updates status; CronJob controller manages run history and concurrency rules.
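The chain described above can be sketched with an hourly CronJob manifest; the name, image, and command are illustrative assumptions:

```yaml
# Sketch: on each schedule tick the CronJob controller creates a Job from
# jobTemplate; the Job controller then creates the Pod that runs to completion.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hourly-report              # example name
spec:
  schedule: "0 * * * *"            # top of every hour
  concurrencyPolicy: Forbid        # skip a run if the previous one is still going
  jobTemplate:
    spec:
      backoffLimit: 3              # Job-level retry semantics
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: report
              image: busybox:1.36  # example image
              command: ["sh", "-c", "echo generating report"]
```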
Question #15
Which tools enable Kubernetes HorizontalPodAutoscalers to use custom, application-generated metrics to trigger scaling events?
- A . Prometheus and the prometheus-adapter.
- B . Graylog and graylog-autoscaler metrics.
- C . Graylog and the kubernetes-adapter.
- D . Grafana and Prometheus.
Correct Answer: A
Explanation:
To scale on custom, application-generated metrics, the Horizontal Pod Autoscaler (HPA) needs those metrics exposed through the Kubernetes custom metrics (or external metrics) API. A common and Kubernetes-documented approach is Prometheus + prometheus-adapter, making A correct. Prometheus scrapes application metrics (for example, request rate, queue depth, in-flight requests) from /metrics endpoints. The prometheus-adapter then translates selected Prometheus time series into the Kubernetes Custom Metrics API so the HPA controller can fetch them and make scaling decisions.
Why not the other options: Grafana is a visualization tool; it does not provide the metrics API translation layer required by the HPA, so "Grafana and Prometheus" is incomplete. Graylog is primarily a log management system and is not the standard way to feed custom metrics into the HPA via the Kubernetes metrics APIs. The "kubernetes-adapter" named in option C is not a standard ecosystem component; the recognized adapter for Prometheus-backed custom metrics is prometheus-adapter.
This matters operationally because HPA is not limited to CPU/memory. CPU and memory use resource metrics (often from metrics-server), but modern autoscaling often needs application signals: message queue length, requests per second, latency, or business metrics. With Prometheus and prometheus-adapter, you can define HPA rules such as “scale to maintain queue depth under X” or “scale based on requests per second per pod.” This can produce better scaling behavior than CPU-based scaling alone, especially for I/O-bound services or workloads with uneven CPU profiles.
So the correct tooling combination in the provided choices is Prometheus and the prometheus-adapter, option A.
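A sketch of an HPA that consumes such a custom metric might look like the following; the Deployment name and the metric name http_requests_per_second are assumptions, and the metric would have to be exposed through prometheus-adapter's configuration:

```yaml
# Sketch: scale the "worker" Deployment to keep average requests/sec per Pod
# at or below 100, using a custom metric served by prometheus-adapter.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker                          # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods                          # per-Pod custom metric
      pods:
        metric:
          name: http_requests_per_second  # assumed metric name
        target:
          type: AverageValue
          averageValue: "100"
```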
Question #16
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
- A . To allocate persistent storage volumes and manage distributed data replication for Pods.
- B . To manage cluster state information and handle all scheduling decisions for workloads.
- C . To ensure that containers defined in Pod specifications are running and remain healthy on the node.
- D . To provide internal DNS resolution and route service traffic between Pods and nodes.
Correct Answer: C
Explanation:
The kubelet is the primary node-level agent in Kubernetes and plays a critical role in ensuring that workloads run correctly on each worker node. Its main responsibility is to ensure that the containers described in Pod specifications are running and remain healthy on that node, which makes option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over execution responsibilities. It watches the API server for Pod specifications that are scheduled to its node and then interacts with the container runtime to start, stop, and manage the containers defined in those Pods. The kubelet continuously monitors container health and reports Pod and node status back to the API server, enabling Kubernetes to make informed decisions about restarts, rescheduling, or remediation.
Health checks are another key responsibility of the kubelet. It executes liveness, readiness, and startup probes as defined in the Pod specification. Based on probe results, the kubelet may restart containers or update Pod status to reflect whether the application is ready to receive traffic. This behavior directly supports Kubernetes’ self-healing capabilities.
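A Pod sketch with all three probe types shows what the kubelet executes; the probe endpoints and image are assumptions:

```yaml
# Sketch: the kubelet on the assigned node runs these probes.
apiVersion: v1
kind: Pod
metadata:
  name: web                # example name
spec:
  containers:
    - name: app
      image: nginx:1.25    # example image
      livenessProbe:       # kubelet restarts the container if this fails
        httpGet:
          path: /healthz   # assumed health endpoint
          port: 80
        periodSeconds: 10
      readinessProbe:      # failing removes the Pod from Service endpoints
        httpGet:
          path: /ready     # assumed readiness endpoint
          port: 80
        periodSeconds: 5
      startupProbe:        # gives slow starters time before liveness applies
        httpGet:
          path: /healthz
          port: 80
        failureThreshold: 30
        periodSeconds: 2
```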
Option A is incorrect because persistent storage allocation and data replication are handled by storage systems, CSI drivers, and controllers―not by the kubelet itself.
Option B is incorrect because cluster state management and scheduling decisions are the responsibility of control plane components such as the API server, controller manager, and kube-scheduler.
Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet acts as the “node supervisor” for Kubernetes workloads. By ensuring containers are running as specified and continuously reporting their status, the kubelet forms the essential link between the Kubernetes control plane and the actual execution of applications on worker nodes. This clearly aligns with Option C as the correct and verified answer.
Question #17
What is CRD?
- A . Custom Resource Definition
- B . Custom Restricted Definition
- C . Customized RUST Definition
- D . Custom RUST Definition
Correct Answer: A
Explanation:
A CRD is a CustomResourceDefinition, making A correct. Kubernetes is built around an API-driven model: resources like Pods, Services, and Deployments are all objects served by the Kubernetes API. CRDs allow you to extend the Kubernetes API by defining your own resource types. Once a CRD is installed, the API server can store and serve custom objects (Custom Resources) of that new type, and Kubernetes tooling (kubectl, RBAC, admission, watch mechanisms) can interact with them just like built-in resources.
CRDs are a core building block of the Kubernetes ecosystem because they enable operators and platform extensions. A typical pattern is: define a CRD that represents the desired state of some higher-level concept (for example, a database cluster, a certificate request, an application release), and then run a controller (often called an “operator”) that watches those custom resources and reconciles the cluster to match. That controller may create Deployments, StatefulSets, Services, Secrets, or cloud resources to implement the desired state encoded in the custom resource.
The incorrect answers are made-up expansions. CRDs are not related to Rust in Kubernetes terminology, and “custom restricted definition” is not the standard meaning.
So the verified meaning is: CRD = CustomResourceDefinition, used to extend Kubernetes APIs and enable Kubernetes-native automation via controllers/operators.
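A minimal CRD sketch, closely following the pattern used in the Kubernetes documentation (the group and kind here are illustrative):

```yaml
# Sketch: registers a new "CronTab" resource type with the API server.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com           # example API group
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:              # validation schema for custom objects
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```

Once applied, `kubectl get crontabs` works like any built-in resource, and an operator can watch and reconcile these objects.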
Question #18
What is the name of the Kubernetes resource used to expose an application?
- A . Port
- B . Service
- C . DNS
- D . Deployment
Correct Answer: B
Explanation:
To expose an application running on Pods so that other components can reliably reach it, Kubernetes uses a Service, making B the correct answer. Pods are ephemeral: they can be recreated, rescheduled, and scaled, which means Pod IPs change. A Service provides a stable endpoint (virtual IP and DNS name) and load-balances traffic across the set of Pods selected by its label selector.
Services come in multiple forms. The default is ClusterIP, which exposes the application inside the cluster. NodePort exposes the Service on a static port on each node, and LoadBalancer (in supported clouds) provisions an external load balancer that routes traffic to the Service. ExternalName maps a Service name to an external DNS name. But across these variants, the abstraction is consistent: a Service defines how to access a logical group of Pods.
Option A (Port) is not a Kubernetes resource type; ports are fields within resources.
Option C (DNS) is a supporting mechanism (CoreDNS creates DNS entries for Services), but DNS is not the resource you create to expose the app.
Option D (Deployment) manages Pod replicas and rollouts, but it does not directly provide stable networking access; you typically pair a Deployment with a Service to expose it.
This is a core cloud-native pattern: controllers manage compute, Services manage stable connectivity, and higher-level gateways like Ingress provide L7 routing for HTTP/HTTPS. So, the Kubernetes resource used to expose an application is Service (B).
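A minimal ClusterIP Service sketch, with assumed labels and ports:

```yaml
# Sketch: a ClusterIP Service giving Pods labeled app=web a stable endpoint.
apiVersion: v1
kind: Service
metadata:
  name: web              # stable DNS name: web.<namespace>.svc.cluster.local
spec:
  type: ClusterIP        # default; NodePort/LoadBalancer expose it externally
  selector:
    app: web             # traffic load-balances across Pods with this label
  ports:
    - port: 80           # port the Service listens on
      targetPort: 8080   # port the container serves (assumed)
```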
Question #19
Which of the following characteristics is associated with container orchestration?
- A . Application message distribution
- B . Dynamic scheduling
- C . Deploying application JAR files
- D . Virtual machine distribution
Correct Answer: B
Explanation:
A core capability of container orchestration is dynamic scheduling, so B is correct. Orchestration platforms (like Kubernetes) are responsible for deciding where containers (packaged as Pods in Kubernetes) should run, based on real-time cluster conditions and declared requirements.
“Dynamic” means the system makes placement decisions continuously as workloads are created, updated, or fail, and as cluster capacity changes.
In Kubernetes, the scheduler evaluates Pods that have no assigned node, filters nodes that don’t meet requirements (resources, taints/tolerations, affinity/anti-affinity, topology constraints), and then scores remaining nodes to pick the best target. This scheduling happens at runtime and adapts to the current state of the cluster. If nodes go down or Pods crash, controllers create replacements and the scheduler places them again, another aspect of dynamic orchestration.
The other options don’t define container orchestration: “application message distribution” is more about messaging systems or service communication patterns, not orchestration. “Deploying application JAR files” is a packaging/deployment detail relevant to Java apps but not a defining orchestration capability. “Virtual machine distribution” refers to VM management rather than container orchestration; Kubernetes focuses on containers and Pods (even if those containers sometimes run in lightweight VMs via sandbox runtimes).
So, the defining trait here is that an orchestrator automatically and continuously schedules and reschedules workloads, rather than relying on static placement decisions.
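The constraints the scheduler filters and scores on are declared in the Pod spec itself. A hedged sketch of such a spec, with illustrative names and values, showing the kinds of fields the scheduler evaluates at placement time:

```yaml
# Hypothetical Pod spec fragment: fields the kube-scheduler considers.
apiVersion: v1
kind: Pod
metadata:
  name: worker                     # illustrative name
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # illustrative image
      resources:
        requests:
          cpu: "500m"              # only nodes with this much free CPU pass filtering
          memory: "256Mi"
  tolerations:
    - key: "dedicated"             # permits placement on nodes tainted dedicated=batch:NoSchedule
      operator: "Equal"
      value: "batch"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch  # hard requirement: amd64 nodes only
                operator: In
                values: ["amd64"]
```

If this Pod is evicted or its node fails, a controller recreates it and the scheduler re-evaluates these same constraints against the cluster's current state, which is exactly the "dynamic" part of dynamic scheduling.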
Question #20
How are ReplicaSets and Deployments related?
- A . Deployments manage ReplicaSets and provide declarative updates to Pods.
- B . ReplicaSets manage stateful applications, Deployments manage stateless applications.
- C . Deployments are runtime instances of ReplicaSets.
- D . ReplicaSets are subsets of Jobs and CronJobs which use imperative Deployments.
Correct Answer: A
A
Explanation:
In Kubernetes, a Deployment is a higher-level controller that manages ReplicaSets, and ReplicaSets in turn manage Pods. That is exactly what option A states, making it the correct answer.
A ReplicaSet’s job is straightforward: ensure that a specified number of Pod replicas matching a selector are running. It continuously reconciles actual state to desired state by creating new Pods when replicas are missing or removing Pods when there are too many. However, ReplicaSets alone do not provide the richer application rollout lifecycle features most teams need.
A Deployment adds those features by managing ReplicaSets across versions of your Pod template. When you update a Deployment (for example, change the container image tag), Kubernetes creates a new ReplicaSet with the new Pod template and then gradually scales the new ReplicaSet up and the old one down according to the Deployment strategy (RollingUpdate by default). Deployments also maintain rollout history, support rollback (kubectl rollout undo), and allow pause/resume of rollouts. This is why the common guidance is: you almost always create Deployments rather than ReplicaSets directly for stateless apps.
Option B is incorrect because stateful workloads are typically handled by StatefulSets, not ReplicaSets. Deployments can run stateless apps, but ReplicaSets also serve under Deployments and are not “for stateful only.” Option C has the relationship backwards: Deployments are not runtime instances of ReplicaSets; rather, Deployments create and manage ReplicaSets.
Option D is incorrect because Jobs/CronJobs are separate controllers for run-to-completion workloads and do not define ReplicaSets as subsets.
So the accurate relationship is: Deployment → manages ReplicaSets → which manage Pods, enabling declarative updates and controlled rollouts.
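A sketch of that relationship, with illustrative names: the Deployment below carries a Pod template, and each change to that template causes Kubernetes to derive a new ReplicaSet from it.

```yaml
# Hypothetical Deployment: Kubernetes derives a ReplicaSet from the Pod template below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                     # illustrative name
spec:
  replicas: 3                   # desired Pod count, enforced by the current ReplicaSet
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate         # default; new ReplicaSet scales up as the old scales down
  template:                     # editing this template (e.g. the image tag) creates a new ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: example.com/app:1.0   # illustrative image
```

After applying this, `kubectl get replicasets` shows the ReplicaSet the Deployment created (named with a hash suffix such as `web-<hash>`), and `kubectl rollout undo deployment/web` rolls the Deployment back to the previous ReplicaSet, illustrating the Deployment → ReplicaSet → Pod chain.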
