Thursday, December 4, 2025

Kubernetes: Complete Feature Summary for Executive Decision‑Makers

Kubernetes is a full‑scale platform that modernizes how applications are deployed, scaled, secured, and operated. It delivers value across eight major capability areas, each directly tied to business outcomes.

1. Reliability & High Availability

Self‑healing containers
Automatic failover
Rolling updates & instant rollbacks
Health checks (liveness/readiness probes)
Multi‑node clustering
ReplicaSets for redundancy
Business impact: Keeps applications online, reduces outages, and improves customer experience.

2. Scalability & Performance

Horizontal Pod Autoscaling (HPA)
Vertical Pod Autoscaling (VPA)
Cluster Autoscaler
Built‑in load balancing
Resource quotas & limits
Business impact: Handles traffic spikes automatically and optimizes resource usage.

3. Security & Compliance

Role‑Based Access Control (RBAC)
Network Policies
Secrets encryption
Pod Security Standards
Image scanning & signing
Namespace isolation
Audit logging
Business impact: Strengthens security posture and supports compliance requirements.

4. Automation & DevOps Enablement

CI/CD integration
GitOps workflows
Automated deployments & rollbacks
Declarative configuration
Infrastructure as Code (IaC)
Business impact: Accelerates delivery, reduces manual errors, and standardizes operations.

5. Environment Standardization

Namespaces for dev/test/prod
Consistent container images
ConfigMaps & Secrets for environment configs
Multi‑OS container support (CentOS, Ubuntu, Debian, etc.)
Business impact: Eliminates “works on my machine” issues and improves developer productivity.

6. Cost Optimization

Efficient bin‑packing
Autoscaling to reduce idle resources
Spot instance support
Multi‑cloud flexibility
High container density
Business impact: Lowers infrastructure costs and prevents over‑provisioning.

7. Multi‑Cloud & Hybrid Cloud Flexibility

Runs on AWS, Azure, GCP, on‑prem, or hybrid
No vendor lock‑in
Disaster recovery across regions
Edge computing support
Business impact: Future‑proofs the organization and enables global deployments.

8. Observability & Monitoring

Metrics (Prometheus, Metrics Server)
Logging (ELK, Loki)
Tracing (Jaeger, OpenTelemetry)
Dashboards (Grafana, Lens)
Business impact: Improves visibility, speeds up troubleshooting, and supports data‑driven decisions.

Wednesday, November 26, 2025

Kubernetes Knowledge Transfer Pack

1. Namespaces
Definition: Logical partitions in a cluster, used to separate environments or teams.
Commands:
kubectl get namespaces
kubectl create namespace dev-team
kubectl delete namespace dev-team

2. Pods
Definition: Smallest deployable unit in Kubernetes, wraps one or more containers.

Commands:
kubectl get pods
kubectl get pods --all-namespaces
kubectl describe pod <pod-name>
kubectl delete pod <pod-name>

3. Containers
Definition: The running processes inside a pod, created from container images (run by Docker/containerd).

Commands:
kubectl logs <pod-name> -c <container-name>
kubectl exec -it <pod-name> -c <container-name> -- /bin/sh

4. Deployments
Definition: Controller that manages pods, scaling, and rolling updates.

Commands:
kubectl create deployment nginx-deploy --image=nginx
kubectl scale deployment nginx-deploy --replicas=5
kubectl get deployments
kubectl delete deployment nginx-deploy
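
The same deployment can also be written declaratively and applied with kubectl apply -f nginx-deploy.yaml. A minimal sketch (the name, replica count, and image tag are just illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3                    # desired number of pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx               # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.25        # changing this triggers a rolling update
        ports:
        - containerPort: 80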

5. Services
Definition: Provides stable networking to pods.
Types: ClusterIP, NodePort, LoadBalancer.

Commands:
kubectl expose deployment nginx-deploy --port=80 --target-port=80 --type=ClusterIP
kubectl get svc
kubectl delete svc nginx-deploy
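
For reference, the declarative equivalent of the expose command above. A minimal sketch (names mirror the deployment example; swap type to NodePort or LoadBalancer as needed):

apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
spec:
  type: ClusterIP            # or NodePort / LoadBalancer
  selector:
    app: nginx               # routes traffic to pods with this label
  ports:
  - port: 80                 # port the service listens on
    targetPort: 80           # port the container listens on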

6. ConfigMaps
Definition: Store non‑confidential configuration data.

Commands:
kubectl create configmap app-config --from-literal=ENV=prod
kubectl get configmaps
kubectl describe configmap app-config
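
To actually consume the ConfigMap, a pod can load it as environment variables. A minimal sketch (the pod and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config     # exposes ENV=prod as an environment variable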

7. Secrets
Definition: Store sensitive data (passwords, tokens).

Commands:
kubectl create secret generic db-secret --from-literal=DB_PASSWORD=banking123
kubectl get secrets
kubectl describe secret db-secret
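
A Secret is consumed the same way, as environment variables or mounted files. A minimal env-var sketch (pod and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: db-client-pod
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret      # the secret created above
          key: DB_PASSWORD     # the key inside that secret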

8. Volumes & Storage
Definition: Persistent storage for pods.

Commands:
kubectl get pv
kubectl get pvc --all-namespaces
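
A typical workflow is to request storage with a PersistentVolumeClaim and then mount it into a pod. A minimal PVC sketch (size and access mode are assumptions; adjust for your cluster's storage class):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce            # single-node read/write
  resources:
    requests:
      storage: 1Gi           # requested capacity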

9. StatefulSets
Definition: Manage stateful apps (databases, Kafka).

Commands:
kubectl apply -f redis-statefulset.yaml
kubectl get statefulsets
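
For reference, a minimal StatefulSet sketch (not the exact redis-statefulset.yaml referenced above; names, image, and sizes are illustrative, and a headless Service named redis is assumed to exist):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis             # headless service providing stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:          # each replica gets its own PVC
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi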

10. DaemonSets
Definition: Ensures a copy of a pod runs on every node (e.g., logging and monitoring agents).

Commands:
kubectl get daemonsets -n kube-system
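
A minimal DaemonSet sketch just to show the shape (busybox is a placeholder image; a real node agent such as a log collector would go here):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: node-agent
        image: busybox                     # one copy runs on every node
        command: ["sh", "-c", "sleep 3600"]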

11. Jobs & CronJobs
Job: Runs pods until completion.
CronJob: Runs jobs on a schedule.

Commands:
kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
kubectl get jobs
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"
kubectl get cronjobs

12. Ingress
Definition: Manages external HTTP/HTTPS access to services.

Commands:
kubectl apply -f ingress.yaml
kubectl get ingress
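
For reference, a minimal ingress.yaml sketch (the host, path, and backend service names are assumptions, and an ingress controller such as NGINX must already be installed in the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx          # matches the installed ingress controller
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deploy     # service created earlier
            port:
              number: 80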

🏗 Kubernetes Architecture

Control Plane Components

API Server → Entry point for all requests.

etcd → Cluster state database.

Controller Manager → Ensures desired state.

Scheduler → Assigns pods to nodes.

Node Components

Kubelet → Agent ensuring containers run.

Kube-proxy → Networking rules.

Container Runtime → Runs containers (Docker, containerd).

Add‑ons
CoreDNS → DNS service discovery.

CNI Plugin (Flannel/Calico) → Pod networking.

Metrics Server → Resource monitoring.

📊 Monitoring & Health Commands

kubectl get nodes -o wide
kubectl get pods --all-namespaces -w
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
kubectl top nodes
kubectl top pods
systemctl status kubelet
systemctl status containerd

Start and Stop Kubernetes Services

Proper Stop/Start Cycle

1. Stop services:

sudo systemctl stop kubelet
sudo systemctl stop containerd

2. Verify stopped:

systemctl status kubelet
systemctl status containerd

How to Confirm They’re Really Down

# Check kubelet process
ps -ef | grep kubelet

# Check container runtime process
ps -ef | grep containerd

# List running containers (if using containerd)
sudo crictl ps

# If using Docker runtime
sudo docker ps

3. Start services again:

sudo systemctl start containerd
sudo systemctl start kubelet

4. Verify Recovery

After a minute or two, check again:

kubectl get nodes
kubectl get pods -n kube-system
kubectl get componentstatuses


How to Build Multi‑Environment Pods with CentOS, Ubuntu, Debian, and More

Step 1: Create Namespaces

kubectl create namespace mqmdev
kubectl create namespace mqmtest
kubectl create namespace mqmprod

Step 2: Create Pods (example with 7 OS containers)

Save YAML files (mqmdev.yaml, mqmtest.yaml, mqmprod.yaml) with multiple containers inside each pod.

Example for mqmtest:

apiVersion: v1
kind: Pod
metadata:
  name: mqmtest-pod
  namespace: mqmtest
spec:
  containers:
  - name: centos-container
    image: centos:7
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: redhat-container
    image: registry.access.redhat.com/ubi8/ubi
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: ubuntu-container
    image: ubuntu:22.04
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: debian-container
    image: debian:stable
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: fedora-container
    image: fedora:latest
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: oraclelinux-container
    image: oraclelinux:8
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: alpine-container
    image: alpine:latest
    command: ["/bin/sh", "-c", "sleep infinity"]

Apply:

kubectl apply -f mqmdev.yaml
kubectl apply -f mqmtest.yaml
kubectl apply -f mqmprod.yaml

Step 3: Check Pod Status

kubectl get pods -n mqmdev
kubectl get pods -n mqmtest
kubectl get pods -n mqmprod

Step 4: List Container Names Inside a Pod

kubectl get pod mqmtest-pod -n mqmtest -o jsonpath="{.spec.containers[*].name}"

Output example:

centos-container redhat-container ubuntu-container debian-container fedora-container oraclelinux-container alpine-container

Step 5: Connect to a Particular Container

Use kubectl exec with -c <container-name>:

# CentOS
kubectl exec -it mqmtest-pod -n mqmtest -c centos-container -- /bin/bash

# Red Hat UBI
kubectl exec -it mqmtest-pod -n mqmtest -c redhat-container -- /bin/bash

# Ubuntu
kubectl exec -it mqmtest-pod -n mqmtest -c ubuntu-container -- /bin/bash

# Debian
kubectl exec -it mqmtest-pod -n mqmtest -c debian-container -- /bin/bash

# Fedora
kubectl exec -it mqmtest-pod -n mqmtest -c fedora-container -- /bin/bash

# Oracle Linux
kubectl exec -it mqmtest-pod -n mqmtest -c oraclelinux-container -- /bin/bash

# Alpine (use sh instead of bash)
kubectl exec -it mqmtest-pod -n mqmtest -c alpine-container -- /bin/sh

Step 6: Verify OS Inside Container

Once inside, run:
cat /etc/os-release

This confirms which OS environment you’re connected to.

✅ Summary

Create namespaces → mqmdev, mqmtest, mqmprod.
Apply pod YAMLs with 7 containers each.
Check pod status → kubectl get pods -n <namespace>.
List container names → kubectl get pod <pod> -n <namespace> -o jsonpath=....
Connect to container → kubectl exec -it ... -c <container-name> -- /bin/bash.
Verify OS → cat /etc/os-release.

Nodes in Kubernetes: The Unsung Heroes of Container Orchestration

What is a Node in Kubernetes?
A Node is a worker machine in Kubernetes. It can be a physical server or a virtual machine in the cloud. Nodes are where your pods (and therefore your containers) actually run.

Think of it like this:

Cluster = a team of machines.
Node = one machine in that team.
Pod = a unit of work scheduled onto a node.
Container = the actual application process inside the pod.


🔹 Node Components
Each node runs several critical services:

Kubelet → Agent that talks to the control plane and ensures pods are running.
Container Runtime → Runs containers (Docker, containerd, CRI‑O).
Kube‑proxy → Manages networking rules so pods can communicate with each other and with services.

🔹 Types of Nodes

Control Plane Node → Runs cluster management components (API server, etcd, scheduler, controller manager).
Worker Node → Runs user workloads (pods and containers).

🔹 Example: Checking Nodes

When you run:

[root@centosmqm ~]# kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
centosmqm   Ready    control-plane   28h   v1.30.14

Explanation of Output:
NAME → centosmqm → the hostname of your node.
STATUS → Ready → the node is healthy and can accept pods.
ROLES → control-plane → this node is acting as the master/control plane, not a worker.
AGE → 28h → the node has been part of the cluster for 28 hours.
VERSION → v1.30.14 → the Kubernetes version running on this node.

👉 In this example, your cluster currently has one node (centosmqm), and it is the control plane node. If you added worker nodes, they would also appear in this list with roles like <none> or worker.

🔹 Commands to Work with Nodes

# List all nodes
kubectl get nodes

# Detailed info about a node
kubectl describe node centosmqm

# Show nodes with more details (IP, OS, version)
kubectl get nodes -o wide

✅ Summary

A Node is the machine (VM or physical) that runs pods.
Nodes can be control plane (managing the cluster) or worker nodes (running workloads).
Your example shows a single control-plane node named centosmqm, which is healthy and running Kubernetes v1.30.14.

Thursday, November 13, 2025

How to Copy an AMI Across AWS Regions: A Step-by-Step Guide

Step-by-Step Guide: Copying an AMI to a Different AWS Region

Step 1: Create an AMI from Your EC2 Instance

Go to the EC2 Dashboard in the AWS Console.
Select your instance → click Actions → Image and templates → Create image.
Enter a name and description.
Choose whether to reboot the instance (enable the No reboot option to avoid downtime).
Click Create image.

Step 2: Wait for the AMI to Become Available

Navigate to AMIs in the EC2 Dashboard.
Monitor the status until it changes to Available.

Step 3: Copy the AMI to Another Region

In the AMIs section, select your AMI.
Click Actions → Copy AMI.
Choose the destination region.
Optionally rename the AMI and configure encryption.
Click Copy AMI.
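
The same copy can also be scripted with the AWS CLI. A minimal sketch (the image ID, regions, and name below are placeholders):

# Copy an AMI from us-east-1 into us-west-2
aws ec2 copy-image \
  --source-image-id ami-0123456789abcdef0 \
  --source-region us-east-1 \
  --region us-west-2 \
  --name "my-app-ami-copy"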

Step 4: Switch to the Destination Region

Change your region in the AWS Console to the target region.
Go to AMIs and confirm the copied AMI is listed and available.

Step 5: Launch an EC2 Instance from the Copied AMI

Select the copied AMI → click Launch instance.
Configure instance details, storage, and security groups.
Launch the instance.