Mastering ConfigMaps and Secrets in Kubernetes
Table of contents
- What are ConfigMaps and Secrets in k8s:
- 1. ConfigMaps
- Use Cases of ConfigMaps:
- Creating a ConfigMap:
- Using a ConfigMap in a Pod:
- 2. Secrets
- Use Cases of Secrets:
- Creating a Secret:
- Using a Secret in a Pod:
- Key Differences:
- Task 1:
- Step 1: Create a ConfigMap
- Option 1: Using a YAML File
- Option 2: Using the Command Line
- Step 2: Update deployment.yml to Use the ConfigMap
- Step 3: Apply the Updated Deployment
- Step 4: Verify the ConfigMap
- Task 2:
- Step 1: Create a Secret
- Option 1: Using a YAML File
- Option 2: Using the Command Line
- Step 2: Update deployment.yml to Use the Secret
- Step 3: Apply the Updated Deployment
- Step 4: Verify the Secret
- Managing Persistent Volumes in Your Deployment
- What are Persistent Volumes in k8s:
- 1. Understanding Persistent Volumes
- 2. Persistent Volume Components
- 3. Persistent Volume Lifecycle
- 4. Persistent Volume Example
- Step 1: Create a Persistent Volume (pv.yml)
- Step 2: Create a Persistent Volume Claim (pvc.yml)
- Step 3: Mount the PVC in a Pod (pod.yml)
- Step 4: Apply the Configurations
- Step 5: Verify
- 5. Access Modes in PV
- 6. Reclaim Policies
- 7. Dynamic Provisioning with StorageClass
- 8. Benefits of Persistent Volumes
- Task 3:
- Step 1: Create a Persistent Volume (PV)
- Step 2: Create a Persistent Volume Claim (PVC)
- Step 3: Update deployment.yml to Use the Persistent Volume
- Step 4: Apply the Updated Deployment
- Step 5: Verify the Persistent Volume in the Deployment
- Additional Verification
- Task 4:
- Step 1: Find the Running Pod
- Step 2: Connect to the Pod
- Step 3: Navigate to the Mounted Volume
- Step 4: Create a Test File (Optional)
- Step 5: Verify Data Persistence
What are ConfigMaps and Secrets in k8s:
In Kubernetes, ConfigMaps and Secrets manage configuration data for applications running in a cluster.
1. ConfigMaps
A ConfigMap is an API object used to store non-sensitive configuration data in key-value pairs. It helps keep configuration separate from the application code, making it easier to modify it without rebuilding container images.
Use Cases of ConfigMaps:
Storing environment variables
Configuring application settings
Managing command-line arguments or configuration files
Creating a ConfigMap:
There are multiple ways to create a ConfigMap:
1. From a literal key-value pair:
kubectl create configmap my-config --from-literal=APP_MODE=production
2. From a file:
kubectl create configmap my-config --from-file=config.properties
3. Using a YAML manifest:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
APP_MODE: "production"
LOG_LEVEL: "debug"
Using a ConfigMap in a Pod:
ConfigMaps can be consumed as environment variables, as command-line arguments, or mounted as a volume.
containers:
- name: my-app
image: my-app-image
env:
- name: APP_MODE
valueFrom:
configMapKeyRef:
name: my-config
key: APP_MODE
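The same ConfigMap can also be mounted as a volume, in which case each key becomes a file in the mount directory. A minimal sketch (the /etc/config path is an arbitrary choice, not part of the example above):

```yaml
containers:
  - name: my-app
    image: my-app-image
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config   # e.g. /etc/config/APP_MODE will contain "production"
volumes:
  - name: config-volume
    configMap:
      name: my-config
```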
2. Secrets
A Secret is similar to a ConfigMap but is designed to store sensitive information like passwords, API keys, or certificates. Data in Secrets is base64 encoded, though not encrypted by default.
Use Cases of Secrets:
Storing database credentials
API keys, tokens, SSH keys
TLS certificates
Creating a Secret:
1. From a literal value:
kubectl create secret generic my-secret --from-literal=DB_PASSWORD=mysecurepassword
2. From a YAML manifest:
apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
DB_PASSWORD: bXlzZWN1cmVwYXNzd29yZA== # Base64 encoded
You can encode a value manually using:
echo -n "mysecurepassword" | base64
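One pitfall worth knowing: without -n, echo appends a trailing newline, and that newline gets encoded into the value, silently corrupting the stored secret. You can see the difference locally:

```shell
# Correct: -n suppresses the trailing newline
echo -n "mysecurepassword" | base64
# bXlzZWN1cmVwYXNzd29yZA==

# Wrong: the encoded value now includes an encoded '\n' at the end
echo "mysecurepassword" | base64
# bXlzZWN1cmVwYXNzd29yZAo=
```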
Using a Secret in a Pod:
Secrets can be used as environment variables or mounted as volumes.
containers:
- name: my-app
image: my-app-image
env:
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: my-secret
key: DB_PASSWORD
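To mount the Secret as a volume instead, the pattern mirrors the ConfigMap case; a sketch (the /etc/secrets path is an arbitrary choice, and each key appears as a file):

```yaml
containers:
  - name: my-app
    image: my-app-image
    volumeMounts:
      - name: secret-volume
        mountPath: /etc/secrets   # e.g. /etc/secrets/DB_PASSWORD holds the decoded value
        readOnly: true
volumes:
  - name: secret-volume
    secret:
      secretName: my-secret
```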
Key Differences:
| Feature | ConfigMap | Secret |
| --- | --- | --- |
| Purpose | Stores non-sensitive config data | Stores sensitive data (passwords, API keys) |
| Data Encoding | Plain text | Base64 encoded |
| Storage in etcd | Unencrypted | Base64 encoded (not encrypted by default) |
| Use Cases | Application settings, env variables | Credentials, API keys, certificates |
To improve security, you should use Kubernetes secrets with encryption at rest, RBAC restrictions, and external secret managers (e.g., HashiCorp Vault, AWS Secrets Manager).
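Encryption at rest, for instance, is enabled by pointing the API server's --encryption-provider-config flag at an EncryptionConfiguration file. A sketch (the key name and placeholder value are illustrative, not from this tutorial):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <32-byte-base64-encoded-key>
      - identity: {}   # fallback so data written before encryption was enabled can still be read
```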
Task 1:
Create a ConfigMap for your Deployment
Create a ConfigMap for your Deployment using a file or the command line
Update the deployment.yml file to include the ConfigMap
Apply the updated deployment using the command:
kubectl apply -f deployment.yml -n <namespace-name>
Verify that the ConfigMap has been created by checking the status of the ConfigMaps in your Namespace.
Step 1: Create a ConfigMap
You can create a ConfigMap using a YAML file or the command line.
Option 1: Using a YAML File
Create a file named configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
namespace: <namespace-name>
data:
APP_ENV: "production"
LOG_LEVEL: "info"
DATABASE_URL: "mysql://db-service:3306/mydb"
Apply the ConfigMap:
kubectl apply -f configmap.yml -n <namespace-name>
Option 2: Using the Command Line
Alternatively, you can create a ConfigMap directly:
kubectl create configmap my-configmap --from-literal=APP_ENV=production --from-literal=LOG_LEVEL=info --from-literal=DATABASE_URL="mysql://db-service:3306/mydb" -n <namespace-name>
Verify its creation:
kubectl get configmaps -n <namespace-name>
Step 2: Update deployment.yml to Use the ConfigMap
Modify your deployment.yml to include environment variables from the ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
namespace: <namespace-name>
spec:
replicas: 2
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:latest
envFrom:
- configMapRef:
name: my-configmap
Step 3: Apply the Updated Deployment
Run the following command to apply the changes:
kubectl apply -f deployment.yml -n <namespace-name>
Step 4: Verify the ConfigMap
Check if the ConfigMap has been created successfully:
kubectl get configmaps -n <namespace-name>
To check its details:
kubectl describe configmap my-configmap -n <namespace-name>
You can also verify that the environment variables are correctly injected into the running pods:
kubectl exec -it <pod-name> -n <namespace-name> -- printenv | grep APP_ENV
This should return APP_ENV=production, confirming the ConfigMap is applied correctly.
Task 2:
Create a Secret for your Deployment
Create a Secret for your Deployment using a file or the command line
Update the deployment.yml file to include the Secret
Apply the updated deployment using the command:
kubectl apply -f deployment.yml -n <namespace-name>
Verify that the Secret has been created by checking the status of the Secrets in your Namespace.
Step 1: Create a Secret
You can create a Kubernetes Secret using a YAML file or the command line.
Option 1: Using a YAML File
Create a file named secret.yml:
apiVersion: v1
kind: Secret
metadata:
name: my-secret
namespace: <namespace-name>
type: Opaque
data:
DB_PASSWORD: cGFzc3dvcmQ= # Base64 encoded value of 'password'
API_KEY: dXNlcmFwaWtleQ== # Base64 encoded value of 'userapikey'
Note: Use echo -n 'your-secret-value' | base64 to generate the base64-encoded values.
Apply the Secret:
kubectl apply -f secret.yml -n <namespace-name>
Option 2: Using the Command Line
Alternatively, create the Secret directly:
kubectl create secret generic my-secret \
--from-literal=DB_PASSWORD='password' \
--from-literal=API_KEY='userapikey' \
-n <namespace-name>
Verify its creation:
kubectl get secrets -n <namespace-name>
Step 2: Update deployment.yml to Use the Secret
Modify your deployment.yml to include the Secret keys as environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
namespace: <namespace-name>
spec:
replicas: 2
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:latest
env:
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: my-secret
key: DB_PASSWORD
- name: API_KEY
valueFrom:
secretKeyRef:
name: my-secret
key: API_KEY
Step 3: Apply the Updated Deployment
Run the following command to apply the changes:
kubectl apply -f deployment.yml -n <namespace-name>
Step 4: Verify the Secret
Check if the Secret has been created successfully:
kubectl get secrets -n <namespace-name>
To check its details (without revealing the values):
kubectl describe secret my-secret -n <namespace-name>
To verify the Secret is accessible inside a running pod:
kubectl exec -it <pod-name> -n <namespace-name> -- printenv | grep DB_PASSWORD
Since Secrets are sensitive, their values are not printed directly by kubectl get secrets. If you need to decode a secret value, use:
kubectl get secret my-secret -n <namespace-name> -o jsonpath="{.data.DB_PASSWORD}" | base64 --decode
This should return the original DB_PASSWORD value (password), confirming that the Secret is applied correctly.
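The decode step itself can be sanity-checked locally, without a cluster; the value below is the DB_PASSWORD entry from the secret.yml example above:

```shell
# Decode the base64 value stored in the Secret manifest
echo -n "cGFzc3dvcmQ=" | base64 --decode
# password
```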
Managing Persistent Volumes in Your Deployment
What are Persistent Volumes in k8s:
A Persistent Volume (PV) in Kubernetes is a storage resource that exists independently of pods. It allows data to persist beyond the lifecycle of individual pods, providing durable storage and enabling stateful applications.
1. Understanding Persistent Volumes
In Kubernetes, storage is ephemeral by default. If a pod is deleted or rescheduled, any data stored within its local storage is lost. Persistent Volumes (PVs) solve this problem by providing a persistent storage solution that outlives pods.
2. Persistent Volume Components
Persistent Volumes in Kubernetes involve two key components:
Persistent Volume (PV)
A physical storage resource in the cluster.
Can be backed by local disks, cloud storage (AWS EBS, Azure Disk, GCE Persistent Disk), or network storage (NFS, Ceph, GlusterFS, etc.).
Created by administrators.
Persistent Volume Claim (PVC)
A storage request made by a user or pod.
Users specify storage size, access mode, and storage class.
Kubernetes binds the claim to a suitable PV.
3. Persistent Volume Lifecycle
Provisioning
Static Provisioning: Admins manually create PVs.
Dynamic Provisioning: Kubernetes automatically provisions PVs based on StorageClass.
Binding
- A PVC is bound to a suitable PV based on requested storage requirements.
Using the Volume
- Pods mount the PV using the PVC.
Releasing
- When a PVC is deleted, the PV is released.
Reclaiming
- PVs can be Retained, Recycled, or Deleted based on the reclaimPolicy.
4. Persistent Volume Example
Step 1: Create a Persistent Volume (pv.yml)
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data"
Step 2: Create a Persistent Volume Claim (pvc.yml)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
Step 3: Mount the PVC in a Pod (pod.yml)
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
volumes:
- name: storage
persistentVolumeClaim:
claimName: my-pvc
containers:
- name: my-container
image: nginx
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: storage
Step 4: Apply the Configurations
kubectl apply -f pv.yml
kubectl apply -f pvc.yml
kubectl apply -f pod.yml
Step 5: Verify
Check PV and PVC status:
kubectl get pv
kubectl get pvc
5. Access Modes in PV
| Access Mode | Description |
| --- | --- |
| ReadWriteOnce (RWO) | The volume can be mounted as read-write by a single node. |
| ReadOnlyMany (ROX) | The volume can be mounted as read-only by multiple nodes. |
| ReadWriteMany (RWX) | The volume can be mounted as read-write by multiple nodes. |
6. Reclaim Policies
| Reclaim Policy | Description |
| --- | --- |
| Retain | Keeps the PV and its data even after PVC deletion. |
| Recycle | Performs a basic cleanup (deprecated in newer Kubernetes versions). |
| Delete | Deletes the PV and its underlying storage. |
7. Dynamic Provisioning with StorageClass
Instead of requiring admins to create PVs manually, a StorageClass lets Kubernetes provision storage automatically based on predefined configurations.
Example StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: my-storage-class
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
PVC requesting dynamic storage:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dynamic-pvc
spec:
storageClassName: my-storage-class
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
8. Benefits of Persistent Volumes
Decouples storage from pods, allowing stateful applications.
Supports various storage backends, including cloud and on-premise solutions.
Automated provisioning with StorageClass simplifies management.
Improves availability and reliability of data.
Task 3:
Add a Persistent Volume to your Deployment todo app.
Create a Persistent Volume using a file on your node.
Create a Persistent Volume Claim that references the Persistent Volume.
Update your deployment.yml file to include the Persistent Volume Claim (after applying pv.yml and pvc.yml).
Apply the updated deployment using the command:
kubectl apply -f deployment.yml
Verify that the Persistent Volume has been added to your Deployment by checking the status of the Pods and Persistent Volumes in your cluster, using these commands:
kubectl get pods
kubectl get pv
Step 1: Create a Persistent Volume (PV)
Create a file named pv.yml with the following content:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-todo-app
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/tmp/data"
Apply the PV file:
kubectl apply -f pv.yml
Verify that the Persistent Volume is created:
kubectl get pv
Step 2: Create a Persistent Volume Claim (PVC)
Create a file named pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-todo-app
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Apply the PVC file:
kubectl apply -f pvc.yml
Verify that the PVC is created and bound to the PV:
kubectl get pvc
Step 3: Update deployment.yml to Use the Persistent Volume
Modify your deployment.yml file to include the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
name: todo-app
spec:
replicas: 1
selector:
matchLabels:
app: todo-app
template:
metadata:
labels:
app: todo-app
spec:
containers:
- name: todo-app-container
image: my-todo-app:latest
volumeMounts:
- mountPath: "/app/data"
name: todo-storage
volumes:
- name: todo-storage
persistentVolumeClaim:
claimName: pvc-todo-app
Step 4: Apply the Updated Deployment
Run the following command:
kubectl apply -f deployment.yml
Step 5: Verify the Persistent Volume in the Deployment
Check if the Pods are running:
kubectl get pods
Check if the Persistent Volume is bound:
kubectl get pv
Check if the Persistent Volume Claim is bound:
kubectl get pvc
Additional Verification
To ensure the volume is mounted correctly in the pod, you can exec into the pod and check the storage:
kubectl exec -it <pod-name> -- ls /app/data
This confirms that the Persistent Volume is correctly attached and used in the Todo App Deployment.
Task 4:
Accessing data in the Persistent Volume,
Connect to a Pod in your Deployment using the command: `kubectl exec -it <pod-name> -- /bin/bash`
Verify that you can access the data stored in the Persistent Volume from within the Pod.
Step 1: Find the Running Pod
First, list the running pods in your cluster:
kubectl get pods
Identify the pod running the Todo App, and note its name.
Step 2: Connect to the Pod
Use the following command to enter the pod’s terminal:
kubectl exec -it <pod-name> -- /bin/bash
Replace <pod-name> with the actual name of your running pod.
Step 3: Navigate to the Mounted Volume
Inside the pod, navigate to the mounted Persistent Volume directory:
cd /app/data
List the files to check if data exists:
ls -la
If files exist, you should see them listed.
Step 4: Create a Test File (Optional)
To confirm that data persists, you can create a file inside the PV:
echo "Hello, Persistent Volume!" > testfile.txt
Now, verify that the file was created:
cat testfile.txt
Exit the pod:
exit
Step 5: Verify Data Persistence
Since the Persistent Volume is mounted, the file should persist even if the pod is restarted.
To test this:
Delete the pod (Kubernetes will recreate it):
kubectl delete pod <pod-name>
Check running pods again:
kubectl get pods
Re-enter the new pod and check if testfile.txt still exists:
kubectl exec -it <new-pod-name> -- /bin/bash
cd /app/data
ls -la
cat testfile.txt
If the file is still there, it confirms that the Persistent Volume is working correctly and preserving data across pod restarts.
This verifies that your Persistent Volume is successfully mounted and storing data for your Todo App Deployment.