Important Kubernetes Interview Questions
Table of contents
- What is Kubernetes and why is it important?
- What is the difference between Docker Swarm and Kubernetes?
- How does Kubernetes handle network communication between containers?
- How does Kubernetes handle the scaling of applications?
- What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
- Can you explain the concept of rolling updates in Kubernetes?
- How does Kubernetes handle network security and access control?
- Can you give an example of how Kubernetes can be used to deploy a highly available application?
- What is a namespace in Kubernetes? Which namespace does a Pod use if none is specified?
- How does ingress help in Kubernetes?
- Explain different types of services in Kubernetes.
- Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
- How does Kubernetes handle storage management for containers?
- How does the NodePort service work?
- What is a multinode cluster and a single-node cluster in Kubernetes?
- Difference between creating and applying in Kubernetes?
What is Kubernetes and why is it important?
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
Kubernetes helps manage clusters of containers, ensuring that applications run reliably across different environments, whether on-premises or in the cloud.
Why Kubernetes is important:
Automated Scaling – Kubernetes can automatically scale applications up or down based on demand, optimizing resource usage.
High Availability – It ensures application uptime by automatically restarting failed containers and distributing workloads across nodes.
Efficient Resource Management – Kubernetes intelligently schedules workloads based on CPU, memory, and other resource requirements.
Multi-Cloud & Hybrid Support – It allows applications to run seamlessly across different cloud providers (AWS, Azure, GCP) or on-premises data centers.
Self-Healing – If a container crashes, Kubernetes automatically restarts it or replaces it to maintain system stability.
Load Balancing – Kubernetes automatically distributes network traffic to ensure no single container is overwhelmed.
Declarative Configuration – Uses YAML/JSON configuration files to define infrastructure and application states, making it easier to automate deployments using Infrastructure as Code (IaC).
Rolling Updates & Rollbacks – Ensures zero-downtime deployments by gradually updating applications and rolling back in case of failure.
Microservices & CI/CD Integration – Kubernetes integrates well with DevOps and Continuous Integration/Continuous Deployment (CI/CD) pipelines, facilitating agile development.
Extensibility – Supports a vast ecosystem of plugins, extensions, and integrations (e.g., Helm, Istio, Prometheus).
What is the difference between Docker Swarm and Kubernetes?
Ease of Use
Docker Swarm: Simple setup, easy to learn.
Kubernetes: Steeper learning curve, more complex.
Installation
Docker Swarm: Lightweight, built into Docker.
Kubernetes: Requires manual setup or managed services like EKS, GKE, or AKS.
Scalability
Docker Swarm: Scales well but is limited to Docker environments.
Kubernetes: Highly scalable; supports large-scale enterprise applications.
Load Balancing
Docker Swarm: Built-in, simpler internal load balancing.
Kubernetes: Advanced; supports external load balancing through Ingress and cloud load balancers.
Networking
Docker Swarm: Uses overlay networking for container communication.
Kubernetes: More advanced networking using Container Network Interface (CNI).
Storage Management
Docker Swarm: Limited storage options.
Kubernetes: Supports persistent storage with dynamic provisioning.
Auto-Scaling
Docker Swarm: No built-in auto-scaling, requires manual intervention.
Kubernetes: Native auto-scaling based on CPU/Memory usage.
Self-Healing
Docker Swarm: Can restart failed containers but lacks advanced features.
Kubernetes: Automatically replaces unhealthy nodes and pods.
Rolling Updates & Rollbacks
Docker Swarm: Supports rolling updates but offers less control over rollback.
Kubernetes: More granular control with rolling updates, blue-green deployments, and canary releases.
Multi-Cloud & Hybrid Support
Docker Swarm: Primarily Docker ecosystem, less cloud-native.
Kubernetes: Strong multi-cloud and hybrid-cloud support.
Security
Docker Swarm: Basic role-based access control (RBAC).
Kubernetes: Advanced RBAC, secrets management, and network policies.
Ecosystem & Community Support
Docker Swarm: Smaller community, fewer third-party integrations.
Kubernetes: Large ecosystem, widely adopted, with many integrations.
Monitoring & Logging
Docker Swarm: Limited built-in tools, needs third-party solutions.
Kubernetes: Rich ecosystem of monitoring tools such as Prometheus and Grafana, with logging support.
How does Kubernetes handle network communication between containers?
Kubernetes handles network communication between containers using a flat network model where each Pod is assigned a unique IP address. Containers within the same Pod share the same network namespace and communicate via `localhost`. For inter-Pod communication, Kubernetes ensures that all Pods can reach each other directly without NAT, regardless of which node they are running on. To manage dynamic Pod IPs, Kubernetes provides Services, which act as stable network endpoints. Services use DNS so that Pods can communicate without relying on direct IP addresses.
For cross-node communication, Kubernetes relies on Container Network Interface (CNI) plugins such as Flannel, Calico, or Cilium to establish an overlay network. Additionally, Kube Proxy is responsible for maintaining network rules and load-balancing traffic to the correct Pods within a Service.
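For example, a Pod can reach another workload through a Service's DNS name rather than a Pod IP. A quick illustrative check, assuming a Pod named `my-pod` with `curl` installed and a Service named `my-service` in the `default` namespace (both names are placeholders):
kubectl exec -it my-pod -- curl http://my-service.default.svc.cluster.local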
How does Kubernetes handle the scaling of applications?
Kubernetes handles application scaling through three primary mechanisms:
Manual Scaling – Users can manually scale the number of Pod replicas using the `kubectl scale` command or by updating the `replicas` field in a Deployment, ReplicaSet, or StatefulSet YAML file.
Horizontal Pod Autoscaler (HPA) – HPA automatically scales the number of Pods based on CPU/memory usage or custom metrics. It continuously monitors resource utilization and adjusts the replica count accordingly.
Vertical Pod Autoscaler (VPA) – VPA automatically adjusts the CPU and memory requests/limits of existing Pods instead of adding new ones, which is useful for optimizing resource allocation.
Cluster Autoscaler – This dynamically adjusts the number of worker nodes in the cluster based on demand. If Pods cannot be scheduled due to insufficient resources, Cluster Autoscaler provisions new nodes, and it removes underutilized nodes when demand decreases.
These mechanisms ensure that Kubernetes applications scale efficiently, maintaining optimal performance and resource utilization.
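As a sketch, both manual scaling and a basic HPA can be driven from the command line, assuming a Deployment named `my-app` (the name and thresholds are placeholders):
kubectl scale deployment/my-app --replicas=5
kubectl autoscale deployment/my-app --cpu-percent=50 --min=2 --max=10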
What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
A Kubernetes Deployment is a higher-level abstraction that manages the lifecycle of application Pods, ensuring declarative updates, rollbacks, and scaling. It provides self-healing capabilities by automatically replacing failed Pods and allows for rolling updates and rollbacks to minimize downtime.
A ReplicaSet, on the other hand, is a lower-level controller that ensures a specified number of Pod replicas are running at all times. It only handles scaling and maintaining the desired state of Pods but does not directly support rolling updates or rollbacks.
Key Differences Between Deployment and ReplicaSet
Purpose
Deployment: Manages application updates and scaling.
ReplicaSet: Ensures a fixed number of Pod replicas are running.
Rolling Updates & Rollbacks
Deployment: Supports rolling updates and rollbacks.
ReplicaSet: Does not support rolling updates directly.
Declarative Management
Deployment: Allows defining update strategies like Recreate and RollingUpdate.
ReplicaSet: Only maintains the specified number of Pod replicas.
Self-Healing
Deployment: Ensures Pods are running and replaces failed ones.
ReplicaSet: Ensures the desired number of replicas but does not handle application updates.
Usage
Deployment: Preferred for managing applications that require frequent updates.
ReplicaSet: Used internally by Deployments but is rarely managed directly.
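A minimal Deployment sketch makes the relationship concrete — applying it causes Kubernetes to create and manage a ReplicaSet on your behalf (the names and image tag are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:1.25
Running `kubectl get replicaset` afterwards shows the ReplicaSet that the Deployment created and manages.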
Can you explain the concept of rolling updates in Kubernetes?
A rolling update in Kubernetes is a deployment strategy that gradually replaces old application versions with new ones without downtime. Instead of stopping all existing Pods at once, Kubernetes updates them in a controlled manner, ensuring that the application remains available throughout the update process.
How Rolling Updates Work:
New Pods are created using the updated container image.
Old Pods are gradually terminated as new ones become ready.
The process continues until all old Pods are replaced with the new version.
Kubernetes ensures that a minimum number of Pods remain available at all times.
Key Features of Rolling Updates:
Zero Downtime: Ensures that some instances of the application are always running.
Controlled Rollout: Uses `maxUnavailable` and `maxSurge` settings to control how many Pods are replaced at a time (see the strategy sketch after this list).
Automatic Rollback: If a failure occurs, Kubernetes can revert to the previous stable version.
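A sketch of the strategy block that sits under a Deployment's spec and controls the rollout pace (the values shown are illustrative):
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1  # At most one Pod below the desired replica count during the update
    maxSurge: 1        # At most one extra Pod above the desired replica count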
Command to Trigger a Rolling Update:
kubectl set image deployment/my-app my-container=my-image:v2
or update the container image in the Deployment YAML and apply the changes:
kubectl apply -f deployment.yaml
Checking Update Status:
kubectl rollout status deployment/my-app
Rolling Back an Update (If Needed):
kubectl rollout undo deployment/my-app
How does Kubernetes handle network security and access control?
Kubernetes provides multiple mechanisms to ensure network security and access control at different levels, including Pod-to-Pod communication, API access control, and external traffic security.
1. Network Security (Pod-to-Pod and External Traffic Security)
a) Network Policies
Kubernetes uses Network Policies to control which Pods can communicate with each other.
Defined as YAML manifests and applied to specific Pods.
Uses labels and selectors to enforce rules.
Example: Allow traffic only from a specific namespace or Pod label:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-app-traffic
spec:
podSelector:
matchLabels:
app: my-app
ingress:
- from:
- podSelector:
matchLabels:
role: frontend
Note: Network Policies are enforced only by CNI plugins (e.g., Calico or Cilium) that support them.
b) Role-Based Access Control (RBAC) for API Security
Kubernetes RBAC defines who can access what in the cluster.
Uses Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings to grant permissions.
Example: Granting a user read-only access to Pods:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
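A Role grants nothing on its own; it must be bound to a subject. A minimal RoleBinding sketch (the user name `jane` is a placeholder):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane  # Placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io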
c) Service Accounts and Secrets Management
Each Pod can use a Service Account for API authentication.
Kubernetes manages Secrets for storing sensitive data such as passwords, API keys, and certificates.
Example: Storing a database password as a Secret:
apiVersion: v1
kind: Secret
metadata:
name: db-secret
type: Opaque
data:
password: cGFzc3dvcmQ= # Base64-encoded "password"
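The Secret can then be consumed by a container, for example as an environment variable — a sketch, with the Pod and variable names being illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD  # Illustrative variable name
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password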
d) Ingress Security and TLS Encryption
Ingress Controllers (e.g., NGINX, Traefik) handle external traffic.
TLS certificates can be used to encrypt traffic.
Integration with Cert-Manager for automatic SSL certificate management.
Example: Enforcing HTTPS with TLS in an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
tls:
- hosts:
- example.com
secretName: tls-secret
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
2. Access Control (Authentication & Authorization)
a) Authentication
Kubernetes supports multiple authentication methods:
Certificate-based authentication (TLS client certs)
Token-based authentication (Service Accounts, OAuth, OIDC)
Static token file authentication (legacy, for basic setups)
b) Authorization
Kubernetes uses RBAC, ABAC (Attribute-Based Access Control), and Webhook authorization to control access.
RBAC is the most commonly used method to restrict actions based on user roles.
3. Pod Security (Pod Security Standards & Policies)
Kubernetes Pod Security Standards (PSS) define three security levels:
Privileged – No restrictions (for trusted workloads).
Baseline – Minimal security restrictions (for general workloads).
Restricted – Strictest security (for highly secure workloads).
Kubernetes also supports Pod Security Admission (PSA) to enforce these policies at the namespace level.
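For example, a PSA level can be enforced by labeling a namespace (the namespace name is a placeholder):
kubectl label namespace my-namespace pod-security.kubernetes.io/enforce=restricted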
Can you give an example of how Kubernetes can be used to deploy a highly available application?
A highly available (HA) application in Kubernetes is designed to minimize downtime and ensure continuous operation, even if some Pods or nodes fail. Kubernetes provides several built-in mechanisms to achieve high availability.
Example: Deploying a Highly Available Web Application
Key Components for High Availability:
ReplicaSet (Multiple Pods for Redundancy) – Ensures multiple instances of the application are running.
Load Balancing (Service with ClusterIP or LoadBalancer) – Distributes traffic across multiple Pods.
Rolling Updates & Rollbacks (Deployment Strategy) – Ensures smooth updates without downtime.
Auto-healing (Liveness & Readiness Probes) – Automatically restarts unhealthy Pods.
Persistent Storage (Stateful Applications) – Uses Persistent Volume Claims (PVCs) for data durability.
Multi-Node Deployment (Kubernetes Cluster on Multiple Nodes) – Ensures availability if a node fails.
Step-by-Step YAML for a Highly Available Web Application
1. Define the Deployment (Multiple Pods for Redundancy)
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
spec:
replicas: 3 # Ensures high availability with multiple instances
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: web-container
image: nginx:latest
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 3
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
periodSeconds: 10
2. Create a Service to Load Balance Traffic Across Pods
apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
type: LoadBalancer # Exposes service externally, use ClusterIP for internal traffic
selector:
app: web
ports:
- protocol: TCP
port: 80
targetPort: 80
3. Add a Horizontal Pod Autoscaler (HPA)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: web-app-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: web-app
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
How Kubernetes Ensures High Availability:
Multiple Pods (`replicas: 3`) ensure the application remains available even if one Pod fails.
Liveness and Readiness Probes monitor and restart unhealthy Pods automatically.
Service (`LoadBalancer` or `ClusterIP`) distributes traffic evenly across running Pods.
Horizontal Pod Autoscaler (HPA) scales the number of Pods based on CPU usage to handle traffic spikes.
Kubernetes Scheduler ensures Pods are distributed across multiple nodes to prevent single-point failures.
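To make that spreading explicit rather than best-effort, a topologySpreadConstraints block can be added to the Pod template — a sketch, reusing the `app: web` label from the Deployment above:
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname  # Spread Pods across distinct nodes
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app: web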
What is a namespace in Kubernetes? Which namespace does a Pod use if none is specified?
A namespace in Kubernetes is a logical isolation mechanism that allows multiple teams or applications to share a cluster without interfering with each other. It helps organize and manage cluster resources by grouping related objects, such as Pods, Services, and Deployments, within separate environments.
Key Features of Namespaces:
Provide resource isolation within a cluster.
Allow setting resource quotas and access control policies per namespace.
Useful for multi-tenant environments and separating development, staging, and production environments.
Default Namespace:
If no namespace is specified when creating a Pod (or any other resource), Kubernetes places it in the default namespace.
Checking Available Namespaces:
kubectl get namespaces
Creating a Pod in a Specific Namespace:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
namespace: my-namespace
spec:
containers:
- name: my-container
image: nginx
Creating a New Namespace:
kubectl create namespace my-namespace
Listing Resources in a Specific Namespace:
kubectl get pods -n my-namespace
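To avoid passing `-n` on every command, the default namespace of the current context can be switched (the namespace name is a placeholder):
kubectl config set-context --current --namespace=my-namespace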
How does ingress help in Kubernetes?
Ingress in Kubernetes manages external access to services inside a cluster. It provides routing rules to expose HTTP and HTTPS traffic from outside the cluster to specific Services within the cluster.
Key Benefits of Ingress:
Single Entry Point: Acts as a centralized access point for multiple services, reducing the need for multiple LoadBalancers or NodePorts.
Path-Based & Host-Based Routing: Routes requests based on URL paths (`/api`, `/dashboard`) or hostnames (`app.example.com`).
TLS/SSL Termination: Handles HTTPS traffic and terminates TLS before passing requests to backend services.
Load Balancing: Distributes traffic across multiple Pods to ensure availability and performance.
Authentication & Authorization: Can integrate with authentication mechanisms for security.
Rewrite and Redirect Rules: Allows URL rewriting and redirection to optimize request handling.
Example: Defining an Ingress Resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: example.com
http:
paths:
- path: /app
pathType: Prefix
backend:
service:
name: my-app-service
port:
number: 80
How It Works:
Traffic coming to `http://example.com/app` is routed to `my-app-service`.
Uses the NGINX Ingress Controller (or another controller such as Traefik) to manage the traffic.
Installing an Ingress Controller (Example for NGINX Ingress Controller)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
Checking the Ingress Configuration:
kubectl get ingress
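Routing can then be verified by sending a request with the expected Host header — a quick check, where `<Ingress-IP>` stands in for the Ingress controller's external address:
curl -H "Host: example.com" http://<Ingress-IP>/app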
Explain different types of services in Kubernetes.
A Service in Kubernetes is an abstraction that defines a stable network endpoint to expose a set of Pods. Since Pods are ephemeral and can be created or destroyed dynamically, Services ensure consistent communication between components inside and outside the cluster.
Types of Kubernetes Services:
1. ClusterIP (Default)
Use Case: Internal communication between Pods within the cluster.
Behavior: Exposes the service on a cluster-internal IP that is not accessible externally.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Access Command:
kubectl port-forward svc/my-clusterip-service 8080:80
2. NodePort
Use Case: Exposes the service externally via a static port on each node.
Behavior: Assigns a port in the range 30000-32767 on every node, allowing external access.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 31000
Access URL:
http://<Node-IP>:31000
3. LoadBalancer
Use Case: Exposes the service externally using a cloud provider’s load balancer (e.g., AWS ELB, GCP LB).
Behavior: Creates an external load balancer and directs traffic to the service.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Checking the assigned external IP:
kubectl get svc my-loadbalancer-service
(Displays an external IP once the load balancer is provisioned)
4. ExternalName
Use Case: Maps a Kubernetes service to an external DNS name instead of forwarding traffic to Pods.
Behavior: Returns a CNAME record to the external service instead of an internal cluster IP.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: api.example.com
Usage: When Pods query `my-external-service`, they are redirected to `api.example.com`.
Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
Self-healing in Kubernetes refers to the ability of the cluster to automatically detect and recover from failures, ensuring that applications remain highly available and resilient without manual intervention. Kubernetes continuously monitors the health of Pods and nodes and takes corrective actions if issues arise.
Self-Healing Mechanisms in Kubernetes:
1. Pod Restart with Liveness Probe
If a container becomes unresponsive or crashes, Kubernetes restarts it automatically using Liveness Probes.
Example: A container that stops responding to health checks will be restarted.
Implementation:
apiVersion: v1
kind: Pod
metadata:
  name: self-healing-pod
spec:
  containers:
  - name: my-container
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
How it works:
- If the `/` endpoint of the container stops responding, Kubernetes restarts the container.
2. Automatic Pod Replacement with ReplicaSet
If a Pod crashes or is manually deleted, ReplicaSet automatically creates a new Pod to maintain the desired state.
Example: If a ReplicaSet is configured with 3 replicas and one Pod fails, Kubernetes recreates it.
Implementation:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx
How it works:
- If any of the 3 Pods fail, the ReplicaSet ensures a new one is created to maintain the desired count.
3. Node Failure Recovery with Scheduler
If a node goes offline, Kubernetes reschedules its Pods to a healthy node.
Example: If a node crashes due to hardware failure, Kubernetes moves affected Pods to available nodes.
How it works:
The control plane monitors node health through heartbeats reported by each node's kubelet.
If a node becomes unresponsive, the scheduler places affected Pods on a different node.
4. Stateful Application Recovery with StatefulSet & Persistent Volumes
For databases or stateful applications, Kubernetes ensures that Pods retain their persistent storage when restarted.
Example: A database Pod (MySQL, PostgreSQL) restarts but keeps its stored data intact using a Persistent Volume Claim (PVC).
Implementation:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
How it works:
- When a stateful application crashes and restarts, it attaches to the same Persistent Volume.
5. Rolling Updates & Rollbacks for Safe Deployments
If a new deployment version causes failures, Kubernetes allows automatic rollback to the last working version.
Example: If an update fails, Kubernetes rolls back to the previous stable version.
Rollback Command:
kubectl rollout undo deployment/my-app
How it works:
- Kubernetes tracks deployment history and reverts if needed.
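The revision history that makes these rollbacks possible can be inspected directly (the Deployment name is illustrative):
kubectl rollout history deployment/my-app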
How does Kubernetes handle storage management for containers?
Kubernetes provides a flexible and scalable storage management system that allows containers to use storage dynamically while ensuring persistence, portability, and data availability across Pods and nodes. It abstracts the underlying storage systems and provides various mechanisms to handle persistent and ephemeral storage.
Key Concepts in Kubernetes Storage Management:
1. Volumes (Ephemeral & Persistent)
Kubernetes uses Volumes to attach storage to containers.
Volumes persist data only as long as the Pod exists (except for Persistent Volumes).
Example of an ephemeral volume:
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-pod
spec:
  containers:
  - name: app-container
    image: nginx
    volumeMounts:
    - mountPath: "/data"
      name: temp-storage
  volumes:
  - name: temp-storage
    emptyDir: {}  # Temporary storage deleted when the Pod stops
Use Case: Caching, temporary logs.
2. Persistent Volumes (PVs)
A Persistent Volume (PV) is a cluster-wide storage resource that remains independent of Pod lifecycles.
PVs are pre-provisioned by administrators or dynamically provisioned.
Example of a Persistent Volume using NFS:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: "/mnt/data"
    server: "192.168.1.100"
Use Case: Database storage, shared storage across Pods.
3. Persistent Volume Claims (PVCs)
A Persistent Volume Claim (PVC) allows Pods to request storage from a PV dynamically.
Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Use Case: Applications that need persistent storage without depending on a specific PV.
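A PVC only claims storage; a Pod consumes it by referencing the claim. A minimal sketch using the claim above (the Pod name and mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: "/data"  # Illustrative mount path
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: my-pvc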
4. Storage Classes (Dynamic Provisioning)
StorageClass automates PV provisioning based on cloud providers like AWS EBS, GCP Persistent Disks, or on-premises storage.
Example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
Use Case: Automatically provisioning volumes for cloud-based applications.
5. StatefulSets (For Stateful Applications)
StatefulSets ensure that storage is retained even if a Pod is rescheduled.
Works with Persistent Volume Claims (PVCs) to maintain consistent data.
Example (MySQL with StatefulSet and PVCs):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-db
spec:
  serviceName: "mysql"
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
Use Case: Databases, distributed applications requiring persistent identity.
How does the NodePort service work?
A NodePort service in Kubernetes exposes a Pod to external traffic by opening a specific port on each node in the cluster. This allows users to access a service from outside the cluster using any node’s IP address and the assigned port.
How NodePort Works:
Assigns a Port:
Kubernetes allocates a port in the range 30000-32767 on all worker nodes.
This port is used to route traffic to the underlying service.
Routes Traffic:
- Incoming requests to `<Node-IP>:<NodePort>` are forwarded to the appropriate Pod through a ClusterIP.
Works on All Nodes:
Even if the Pod runs on a single node, the NodePort is opened on all nodes in the cluster.
Kubernetes automatically routes traffic to the correct node where the Pod is running.
Example of a NodePort Service
apiVersion: v1
kind: Service
metadata:
name: my-nodeport-service
spec:
type: NodePort # Exposes the service externally
selector:
app: my-app
ports:
- protocol: TCP
port: 80 # Service port
targetPort: 8080 # Port inside the Pod
nodePort: 31000 # Exposed port (should be between 30000-32767)
Accessing the NodePort Service
Once the service is created, you can access it using:
http://<Node-IP>:31000
Node-IP: The external IP of any Kubernetes node.
NodePort (31000): The static port exposed on all nodes.
Internally, traffic is forwarded to Pods on the container's `targetPort: 8080`.
Checking NodePort Services
List services:
kubectl get svc
Get details of a specific service:
kubectl describe svc my-nodeport-service
Use Cases of NodePort:
Exposing a service without a cloud provider’s load balancer.
Allowing external access to an application for testing.
Enabling manual load balancing using external tools.
Limitations of NodePort:
Limited port range (30000-32767) may cause conflicts.
Not suitable for production due to lack of advanced traffic management.
Exposing nodes directly can pose security risks.
What is a multinode cluster and a single-node cluster in Kubernetes?
Kubernetes clusters can be categorized into single-node clusters and multinode clusters, depending on the number of nodes in the cluster. Nodes are the physical or virtual machines that run the containerized applications.
Single-Node Cluster
A single-node cluster consists of only one node, which acts as both the control plane and the worker node.
Characteristics:
The same node runs both control plane components (API Server, Controller Manager, Scheduler) and worker node components (Kubelet, Kube Proxy, container runtime).
Simpler to set up and manage.
Used mainly for local development, testing, and learning.
Limited scalability and not suitable for production environments.
Example:
- Minikube and Kind (Kubernetes in Docker) are tools that create a single-node cluster for local development.
Use Case:
Developers testing Kubernetes applications on their local machine.
Running small-scale, non-production workloads.
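For example, either tool can spin up a local single-node cluster with one command (the kind cluster name is a placeholder):
minikube start
kind create cluster --name dev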
Multinode Cluster
A multinode cluster consists of multiple nodes, typically divided into control plane nodes and worker nodes.
Characteristics:
The control plane nodes manage cluster operations, scheduling, and API requests.
The worker nodes run containerized applications and handle workloads.
Supports high availability, scalability, and fault tolerance.
Used in production environments.
Example Architecture:
One or more control plane nodes (formerly called master nodes)
Multiple worker nodes running application Pods
Use Case:
Deploying large-scale applications in production.
High availability and redundancy to prevent single points of failure.
Difference between creating and applying in Kubernetes?
Both `kubectl create` and `kubectl apply` are used to manage Kubernetes resources, but they differ in how they handle resource creation and updates.
1. `kubectl create`
Used to create a new resource only if it does not exist.
If the resource already exists, the command will fail with an error.
Primarily used for one-time resource creation.
Does not support updating an existing resource.
Example Usage:
kubectl create -f deployment.yaml
Behavior:
Creates the resource defined in `deployment.yaml`.
Running the same command again will result in an error:
Error from server (AlreadyExists): deployments.apps "my-app" already exists
Use Case:
When creating a resource for the first time.
Not ideal for managing updates to resources.
2. `kubectl apply`
Used to create or update a resource declaratively.
If the resource does not exist, it will be created.
If the resource already exists, it will be updated with the new configuration.
Works well with YAML manifests and GitOps workflows.
Example Usage:
kubectl apply -f deployment.yaml
Behavior:
If the deployment does not exist, it will be created.
If the deployment already exists, Kubernetes will apply only the changes from `deployment.yaml` without affecting other configurations.
Use Case:
When managing resources declaratively.
Updating existing resources without manual intervention.
Useful in CI/CD pipelines.
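Before applying, pending changes can be previewed with `kubectl diff`, a useful companion to `kubectl apply` in pipelines:
kubectl diff -f deployment.yaml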