Wednesday, November 26, 2025

Kubernetes Knowledge Transfer Pack

1. Namespaces
Definition: Logical partitions in a cluster, used to separate environments or teams.
Commands:
kubectl get namespaces
kubectl create namespace dev-team
kubectl delete namespace dev-team

2. Pods
Definition: Smallest deployable unit in Kubernetes, wraps one or more containers.

Commands:
kubectl get pods
kubectl get pods --all-namespaces
kubectl describe pod <pod-name>
kubectl delete pod <pod-name>

3. Containers
Definition: The actual running processes inside pods, created from container images and run by a container runtime (Docker, containerd).

Commands:
kubectl logs <pod-name> -c <container-name>
kubectl exec -it <pod-name> -c <container-name> -- /bin/sh

4. Deployments
Definition: Controller that manages pods, scaling, and rolling updates.

Commands:
kubectl create deployment nginx-deploy --image=nginx
kubectl scale deployment nginx-deploy --replicas=5
kubectl get deployments
kubectl delete deployment nginx-deploy
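The same deployment can also be expressed declaratively; a minimal manifest sketch matching the imperative commands above, written to a file so it can later be applied with kubectl apply -f nginx-deploy.yaml:

```shell
# Declarative equivalent of the create/scale commands above (a sketch).
cat > nginx-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
```

The declarative form is usually preferred for anything beyond quick tests, because the file can be version-controlled and re-applied.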

5. Services
Definition: Provides stable networking to pods.
Types: ClusterIP, NodePort, LoadBalancer, ExternalName.

Commands:
kubectl expose deployment nginx-deploy --port=80 --target-port=80 --type=ClusterIP
kubectl get svc
kubectl delete svc nginx-deploy
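The kubectl expose command above generates a Service roughly like the following; a declarative sketch (the app: nginx-deploy selector assumes the deployment was created with kubectl create deployment, which applies that label automatically):

```shell
# Declarative equivalent of the expose command above (a sketch).
cat > nginx-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
spec:
  type: ClusterIP
  selector:
    app: nginx-deploy
  ports:
  - port: 80
    targetPort: 80
EOF
```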

6. ConfigMaps
Definition: Store non‑confidential configuration data.

Commands:
kubectl create configmap app-config --from-literal=ENV=prod
kubectl get configmaps
kubectl describe configmap app-config
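A ConfigMap is only useful once a pod consumes it; a minimal sketch (the pod name app-config-demo is hypothetical) that injects every key of the app-config ConfigMap created above as environment variables:

```shell
# Sketch: pod consuming the app-config ConfigMap via envFrom.
# The pod name "app-config-demo" is hypothetical.
cat > app-config-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-config-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo ENV=$ENV && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config
EOF
```

With ENV=prod in the ConfigMap, the container sees an ENV environment variable at startup.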

7. Secrets
Definition: Store sensitive data (passwords, tokens).

Commands:
kubectl create secret generic db-secret --from-literal=DB_PASSWORD=banking123
kubectl get secrets
kubectl describe secret db-secret

8. Volumes & Storage
Definition: Persistent storage for pods.

Commands:
kubectl get pv
kubectl get pvc --all-namespaces

9. StatefulSets
Definition: Manage stateful apps (databases, Kafka).

Commands:
kubectl apply -f redis-statefulset.yaml
kubectl get statefulsets
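The referenced redis-statefulset.yaml is not shown above; a minimal sketch of what it might contain (a matching headless Service named redis is assumed but not shown here):

```shell
# Sketch of a minimal redis-statefulset.yaml.
# Assumes a headless Service named "redis" exists (not shown).
cat > redis-statefulset.yaml <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7
        ports:
        - containerPort: 6379
EOF
```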

10. DaemonSets
Definition: Ensures one pod runs on every node (logging, monitoring).

Commands:
kubectl get daemonsets -n kube-system

11. Jobs & CronJobs
Job: Runs pods until completion.
CronJob: Runs jobs on a schedule.

Commands:
kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
kubectl get jobs
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"
kubectl get cronjobs

12. Ingress
Definition: Manages external HTTP/HTTPS access to services.

Commands:
kubectl apply -f ingress.yaml
kubectl get ingress
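The ingress.yaml file applied above is not shown; a minimal sketch routing a hypothetical hostname to the nginx-deploy Service (assumes an ingress controller such as ingress-nginx is installed in the cluster):

```shell
# Sketch of a minimal ingress.yaml.
# Hostname "demo.example.com" and name "demo-ingress" are hypothetical.
cat > ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deploy
            port:
              number: 80
EOF
```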

🏗 Kubernetes Architecture

Control Plane Components

API Server → Entry point for all requests.

etcd → Cluster state database.

Controller Manager → Ensures desired state.

Scheduler → Assigns pods to nodes.

Node Components

Kubelet → Agent ensuring containers run.

Kube-proxy → Networking rules.

Container Runtime → Runs containers (Docker, containerd).

Add‑ons
CoreDNS → DNS service discovery.

CNI Plugin (Flannel/Calico) → Pod networking.

Metrics Server → Resource monitoring.

📊 Monitoring & Health Commands

kubectl get nodes -o wide
kubectl get pods --all-namespaces -w
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
kubectl top nodes
kubectl top pods
systemctl status kubelet
systemctl status containerd

Start and Stop Kubernetes Services

Proper Stop/Start Cycle

1. Stop services:

sudo systemctl stop kubelet
sudo systemctl stop containerd

2. Verify stopped:

systemctl status kubelet
systemctl status containerd

How to Confirm They’re Really Down

# Check kubelet process
ps -ef | grep kubelet

# Check container runtime process
ps -ef | grep containerd

# List running containers (if using containerd)
sudo crictl ps

# If using Docker runtime
sudo docker ps

3. Start services again:

sudo systemctl start containerd
sudo systemctl start kubelet

4. Verify Recovery

After a minute or two, check again:

kubectl get nodes
kubectl get pods -n kube-system
kubectl get componentstatuses   # deprecated since Kubernetes v1.19; output may be limited


How to Build Multi‑Environment Pods with CentOS, Ubuntu, Debian, and More

Step 1: Create Namespaces

kubectl create namespace mqmdev
kubectl create namespace mqmtest
kubectl create namespace mqmprod

Step 2: Create Pods (example with 7 OS containers)

Save YAML files (mqmdev.yaml, mqmtest.yaml, mqmprod.yaml) with multiple containers inside each pod.

Example for mqmtest:

apiVersion: v1
kind: Pod
metadata:
  name: mqmtest-pod
  namespace: mqmtest
spec:
  containers:
  - name: centos-container
    image: centos:7
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: redhat-container
    image: registry.access.redhat.com/ubi8/ubi
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: ubuntu-container
    image: ubuntu:22.04
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: debian-container
    image: debian:stable
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: fedora-container
    image: fedora:latest
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: oraclelinux-container
    image: oraclelinux:8
    command: ["/bin/bash", "-c", "sleep infinity"]
  - name: alpine-container
    image: alpine:latest
    command: ["/bin/sh", "-c", "sleep infinity"]

Apply:

kubectl apply -f mqmdev.yaml
kubectl apply -f mqmtest.yaml
kubectl apply -f mqmprod.yaml

Step 3: Check Pod Status

kubectl get pods -n mqmdev
kubectl get pods -n mqmtest
kubectl get pods -n mqmprod

Step 4: List Container Names Inside a Pod

kubectl get pod mqmtest-pod -n mqmtest -o jsonpath="{.spec.containers[*].name}"

Output example:

centos-container redhat-container ubuntu-container debian-container fedora-container oraclelinux-container alpine-container
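The space-separated jsonpath output lends itself to a loop; a sketch that prints the exec command for each container (names hard-coded here in place of live kubectl output):

```shell
# Iterate over the container names and print the exec command for each.
# The list below is a stand-in for the jsonpath output shown above.
containers="centos-container redhat-container ubuntu-container debian-container fedora-container oraclelinux-container alpine-container"
for c in $containers; do
  echo "kubectl exec -it mqmtest-pod -n mqmtest -c $c -- cat /etc/os-release"
done
```

Note that alpine-container needs /bin/sh rather than /bin/bash for an interactive shell, as shown in Step 5 below.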

Step 5: Connect to a Particular Container

Use kubectl exec with -c <container-name>:

# CentOS
kubectl exec -it mqmtest-pod -n mqmtest -c centos-container -- /bin/bash

# Red Hat UBI
kubectl exec -it mqmtest-pod -n mqmtest -c redhat-container -- /bin/bash

# Ubuntu
kubectl exec -it mqmtest-pod -n mqmtest -c ubuntu-container -- /bin/bash

# Debian
kubectl exec -it mqmtest-pod -n mqmtest -c debian-container -- /bin/bash

# Fedora
kubectl exec -it mqmtest-pod -n mqmtest -c fedora-container -- /bin/bash

# Oracle Linux
kubectl exec -it mqmtest-pod -n mqmtest -c oraclelinux-container -- /bin/bash

# Alpine (use sh instead of bash)
kubectl exec -it mqmtest-pod -n mqmtest -c alpine-container -- /bin/sh

Step 6: Verify OS Inside Container

Once inside, run:
cat /etc/os-release

This confirms which OS environment you’re connected to.

✅ Summary

Create namespaces → mqmdev, mqmtest, mqmprod.
Apply pod YAMLs with 7 containers each.
Check pod status → kubectl get pods -n <namespace>.
List container names → kubectl get pod <pod> -n <namespace> -o jsonpath=....
Connect to container → kubectl exec -it ... -c <container-name> -- /bin/bash.
Verify OS → cat /etc/os-release.

Nodes in Kubernetes: The Unsung Heroes of Container Orchestration

What is a Node in Kubernetes?
A Node is a worker machine in Kubernetes. It can be a physical server or a virtual machine in the cloud. Nodes are where your pods (and therefore your containers) actually run.

Think of it like this:

Cluster = a team of machines.
Node = one machine in that team.
Pod = a unit of work scheduled onto a node.
Container = the actual application process inside the pod.


🔹 Node Components
Each node runs several critical services:

Kubelet → Agent that talks to the control plane and ensures pods are running.
Container Runtime → Runs containers (Docker, containerd, CRI‑O).
Kube‑proxy → Manages networking rules so pods can communicate with each other and with services.

🔹 Types of Nodes

Control Plane Node → Runs cluster management components (API server, etcd, scheduler, controller manager).
Worker Node → Runs user workloads (pods and containers).

🔹 Example: Checking Nodes

When you run:

[root@centosmqm ~]# kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
centosmqm   Ready    control-plane   28h   v1.30.14

Explanation of Output:
NAME → centosmqm → the hostname of your node.
STATUS → Ready → the node is healthy and can accept pods.
ROLES → control-plane → this node is acting as the master/control plane, not a worker.
AGE → 28h → the node has been part of the cluster for 28 hours.
VERSION → v1.30.14 → the Kubernetes version running on this node.

👉 In this example, your cluster currently has one node (centosmqm), and it is the control plane node. If you added worker nodes, they would also appear in this list with roles like <none> or worker.

🔹 Commands to Work with Nodes

# List all nodes
kubectl get nodes

# Detailed info about a node
kubectl describe node centosmqm

# Show nodes with more details (IP, OS, version)
kubectl get nodes -o wide

✅ Summary

A Node is the machine (VM or physical) that runs pods.
Nodes can be control plane (managing the cluster) or worker nodes (running workloads).
Your example shows a single control-plane node named centosmqm, which is healthy and running Kubernetes v1.30.14.

Thursday, November 13, 2025

How to Copy an AMI Across AWS Regions: A Step-by-Step Guide

Copying an AMI to a Different AWS Region

Step 1: Create an AMI from Your EC2 Instance

Go to the EC2 Dashboard in the AWS Console.
Select your instance → click Actions → Image and templates → Create image.
Enter a name and description.
Choose whether to reboot the instance (select No reboot to avoid downtime).
Click Create image.

Step 2: Wait for the AMI to Become Available

Navigate to AMIs in the EC2 Dashboard.
Monitor the status until it changes to Available.

Step 3: Copy the AMI to Another Region

In the AMIs section, select your AMI.
Click Actions → Copy AMI.
Choose the destination region.
Optionally rename the AMI and configure encryption.
Click Copy AMI.
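The console steps above can also be scripted with the AWS CLI's copy-image command. All values below are placeholders, and the command is echoed rather than executed so this remains a dry sketch:

```shell
# Sketch: copy an AMI across regions with the AWS CLI.
SOURCE_REGION="us-east-1"               # region that currently holds the AMI
DEST_REGION="us-west-2"                 # region to copy into
AMI_ID="ami-0123456789abcdef0"          # hypothetical AMI ID

# Build the command and echo it instead of running it (dry sketch).
CMD="aws ec2 copy-image --source-region $SOURCE_REGION --source-image-id $AMI_ID --region $DEST_REGION --name webserver-copy"
echo "$CMD"
```

Note that copy-image is run against the destination region (--region), while --source-region points at where the AMI lives today.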

Step 4: Switch to the Destination Region

Change your region in the AWS Console to the target region.
Go to AMIs and confirm the copied AMI is listed and available.

Step 5: Launch an EC2 Instance from the Copied AMI

Select the copied AMI → click Launch instance.
Configure instance details, storage, and security groups.
Launch the instance.

Friday, November 7, 2025

Creating EC2 Launch Template for Auto Scaling Web Server

Step 1: Create a Launch Template

Go to EC2 Dashboard → Launch Templates → Create launch template

Fill in:

Name: WebServerTemplate
AMI: Amazon Linux 2 (or your preferred AMI)
Instance Type: t2.micro/t3.micro (Free Tier Eligible)
Key Pair: Select your key
Security Group: Must allow HTTP (port 80)

User Data:

#!/bin/bash
# User data runs as root, so sudo/su is not required.

# Update the package index
yum update -y

# Install Apache HTTP server
yum install -y httpd

# Install the stress tool (for load-testing Auto Scaling later)
yum install -y stress

# Start the httpd service
systemctl start httpd

# Optional: create a simple index.html page
echo "<h1>Hello from EC201</h1>" > /var/www/html/index.html

# Enable httpd to start on boot
systemctl enable httpd

✅ Enable Auto Scaling guidance
At the bottom of the form, check the box labeled “Provide guidance for Auto Scaling”. This ensures the template is optimized for use with Auto Scaling Groups.

Click Create launch template