
Sunday, June 18, 2023

What are Cgroups in Containers?


Cgroups (control groups) limit resource usage so that a single container cannot consume all of the resources available on the host. They allow managing various system resources such as:

- CPU - limit CPU utilization.
- Memory - limit memory usage.
- Disk I/O - limit disk I/O.
- Network - limit network bandwidth.

With the help of cgroups, the Docker engine shares the available hardware resources with containers and enforces a limit on how much of each resource a container can use.

Example:

$ docker run -it -d --name CPUTEST --cpus=0.5 centos
38a4235cc66a9d8f9bb2a4c5e1d1e7146ffeb1c63d4e2a17653f174ee725be29

$ docker run -it -d --name MEMTEST --memory=100m centos
1c7609f234d3d51ae7f29fc52a54f1202d33d4a213fd51a28bd7c46c8809145a

$ docker run -it -d --name MEM_CPU_TEST --memory=100m --cpus=0.5 --hostname server01 centos
09214e7eb0ff3ca12f81d4ae6f4de89f545910b9ce5e70037df204ce4b563543

$ docker create --privileged=true --restart=always --memory 6m --ip 172.17.0.29 -u root -v data1:/MQMAPP -w /MQMAPP -h server01.mqm.com --dns 8.8.8.8 --name MYMQM1 -it --init centos
71831091f69b0034fd03a619885088afc14e6aee614769eedb6d3b2eaace3061

-w  Working directory inside the container
-v  Volume name / bind mount a volume
-h  Hostname of the container
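
To confirm that the limits were actually applied, the values can be read back from the container configuration. A minimal check, assuming the CPUTEST and MEMTEST containers created above are still present (the expected values correspond to --memory=100m and --cpus=0.5):

# Memory limit in bytes (100m = 104857600)
$ docker inspect --format '{{.HostConfig.Memory}}' MEMTEST
104857600

# CPU limit in NanoCPUs (0.5 CPU = 500000000)
$ docker inspect --format '{{.HostConfig.NanoCpus}}' CPUTEST
500000000

# Live CPU/memory usage of running containers
$ docker stats --no-stream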

Docker Basic Commands


# To start and enable docker 
$ sudo systemctl start docker 
$ sudo systemctl enable docker
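
On systemd-based distributions the two steps can also be combined into one command:

# Start docker now and enable it at boot in a single step
$ sudo systemctl enable --now docker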

# To check docker status 
$ sudo systemctl status docker

# To check docker Version 
$ docker --version

# To see docker info 
$ docker info
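
docker info also accepts a Go-template --format flag when only a single field is needed (the field names below come from the standard docker info output):

# Print only the server version and the storage driver
$ docker info --format '{{.ServerVersion}}'
$ docker info --format '{{.Driver}}'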

# To see docker images
$ docker images

# Pulling hello-world docker image 
$ docker pull hello-world

# To verify the pulled image is now listed
$ docker images
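
If no tag is given, docker pulls the latest tag by default. A specific tag can be pulled explicitly (httpd:2.4 is just an example tag):

# Pull a specific image tag instead of latest
$ docker pull httpd:2.4

# List only the images of one repository
$ docker images httpd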

# Running hello-world docker image 
$ docker run hello-world

# Display Running Docker containers
$ docker ps

# Displaying Running + stopped containers
$ docker ps -a
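
docker ps also supports filters and quiet output, which is useful in scripts (status=exited is one of the standard filter values):

# Show only stopped (exited) containers
$ docker ps -a --filter "status=exited"

# Print only the IDs of running containers
$ docker ps -q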

# Inspect docker image
$ docker inspect <image-id>
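
docker inspect prints a large JSON document; a --format Go template can extract a single field (the fields shown are standard image-inspect fields):

# Show only the OS and architecture of an image
$ docker inspect --format '{{.Os}}/{{.Architecture}}' <image-id>

# Show the default command baked into the image
$ docker inspect --format '{{.Config.Cmd}}' <image-id>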

# Remove Docker image
$ docker rmi <image-name / image-id>

# Remove docker image forcefully
$ docker rmi -f <image-name / image-id>

# Remove docker container
$ docker rm <container-id/container-name>

# To remove all images from the server:
$ docker rmi -f $(docker images -q)

# Remove all stopped containers + un-used images + un-used networks 
$ docker system prune -a
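
Note that docker system prune -a does not remove volumes unless asked. The --volumes flag includes un-used volumes as well (use with care, this deletes data):

# Also remove un-used volumes along with containers, images and networks
$ docker system prune -a --volumes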

# To see docker images location:

[root@localhost sha256]# pwd
/var/lib/docker/image/overlay2/imagedb/content/sha256

# To remove all stopped containers
$ docker container prune

To start a container using an image:

[root@localhost sha256]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
docker.io/httpd     latest              d1676199e605        13 days ago         145 MB

Using a single image, we can create multiple containers:

$ docker run -it --name abcd httpd:latest
$ docker run -it --name abcde httpd:latest
$ docker run -it -d --name abcdef httpd:latest
$ docker run -it -d --name abcdefg httpd:latest

-d ------> detached mode (run the container in the background)
-i ------> interactive mode (keep STDIN open)
-t ------> allocate a pseudo-TTY terminal
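
Once a container is running in detached mode, a shell can be opened inside it with docker exec. A small sketch, assuming the abcdef container created above is still running and that the httpd image provides bash:

# Open an interactive shell inside the running container
$ docker exec -it abcdef bash

# Or re-attach to the container's main process
$ docker attach abcdef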

Introduction To Docker Containers

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. 

A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Container images become containers at runtime; in the case of Docker, images become containers when they run on Docker Engine. They are available for both Linux and Windows-based applications.
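
As a small illustration of an image becoming a container, a minimal Dockerfile might look like this (the file index.html and the tag mysite are placeholders for this sketch; /usr/local/apache2/htdocs is the document root of the official httpd image):

# Dockerfile - package a static page on top of the official httpd image
FROM httpd:latest
COPY index.html /usr/local/apache2/htdocs/index.html

# Build the image and run it as a container
$ docker build -t mysite .
$ docker run -it -d -p 8080:80 --name mysite1 mysite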

Containerized software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences between, for instance, development and staging environments.

Containerization is increasingly popular because containers are:

- Flexible: Even the most complex applications can be containerized.

- Lightweight: Containers leverage and share the host kernel, making them much more efficient in terms of system resources than virtual machines (a quick check of this is shown after the list).

- Portable: You can build locally, deploy to the cloud, and run anywhere.

- Loosely coupled: Containers are highly self-sufficient and encapsulated, allowing you to replace or upgrade one without disrupting others.

- Scalable: You can increase and automatically distribute container replicas across a datacenter.

- Secure: Containers apply aggressive constraints and isolations to processes without any configuration required on the part of the user.
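
A quick way to see the "share the host kernel" point from the Lightweight item above: the kernel version reported inside a container is the host's kernel, because no guest kernel is booted (centos is the same image used in the earlier examples):

# Kernel version on the host
$ uname -r

# Kernel version inside a throw-away container - same value, no guest kernel
$ docker run --rm centos uname -r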