Table of contents
- What is a Docker Volume?
- What is a Docker Network?
- 01. Create a multi-container docker-compose file that will bring containers UP and DOWN in a single shot (Example: create an application and a database container)
- 02. How to use Docker Volumes and Named Volumes to share files and directories between multiple containers?
- 03. How to create two or more containers that read and write data to the same volume using the docker run --mount command?
- 04. How to verify that the data is the same in all containers by using the docker exec command to run commands inside each container?
- 05. How to use the docker volume ls command to list all volumes and the docker volume rm command to remove the volume when you’re done?
- That’s all about today’s task of the DevOps journey.
What is a Docker Volume?
A Docker volume is a feature of the Docker containerization platform that enables data to persist beyond a container’s lifetime and to be shared among containers. When you create a Docker container, any data that is generated or used by the container is stored inside the container itself. However, when the container is deleted, that data is deleted along with it.
Docker volumes solve this problem. A volume stores the data in a separate location outside the container, making it independent of the container’s lifecycle. This way, even if the container is deleted, the data remains accessible and can be used by other containers as well.
Docker volumes can be used to manage the storage requirements of your containers. They let you easily manage the data for your applications, such as databases, log files, and other persistent data. Volumes can also be used to store configuration files, templates, and other files that the container needs.
Overall, Docker volumes are a powerful feature that allows for flexible and scalable data management in Docker containers.
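For a quick feel of the workflow, here is a minimal sketch (the volume name app_data and the container names db1 and db2 are placeholders, not part of any particular setup):
# Create a named volume managed by Docker
docker volume create app_data

# Start a database container that keeps its data on the volume
docker run -d --name db1 -e POSTGRES_PASSWORD=password -v app_data:/var/lib/postgresql/data postgres:13

# Remove the container; the data survives on the volume
docker rm -f db1

# A new container can pick up the same data
docker run -d --name db2 -e POSTGRES_PASSWORD=password -v app_data:/var/lib/postgresql/data postgres:13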
What is a Docker Network?
Docker networking is a feature of the Docker containerization platform that enables communication between containers running on the same host or across multiple hosts. It provides virtual networks that connect containers to each other and to external networks, allowing them to communicate securely and efficiently.
When you create a Docker container, it is isolated from the host system and other containers by default. To enable communication between containers, you can create a Docker network and attach containers to it. Once the containers are attached to the same network, they can communicate with each other by their container names or IP addresses.
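For example, a minimal sketch of that workflow (the network name mynet and the container names web and api are placeholders):
# Create a user-defined bridge network
docker network create mynet

# Attach two containers to the same network
docker run -d --name web --network mynet nginx
docker run -d --name api --network mynet nginx

# Containers on a user-defined network can resolve each other by name
docker exec web getent hosts api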
Docker provides several network drivers, including bridge, host, overlay, macvlan, and none. bridge is the default driver and allows containers on the same host to communicate with each other. host lets a container use the host’s network stack directly, while overlay enables communication between containers running on different hosts. macvlan gives containers their own MAC addresses so they appear as physical devices on the network, and none disables networking for the container.
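As a rough illustration of choosing a driver (the network name mybridge is a placeholder; the host example assumes a Linux host, and the overlay driver additionally requires Swarm mode):
# Create a network with an explicit driver (bridge is the default anyway)
docker network create --driver bridge mybridge

# Use the host's network stack directly; no port mapping is needed
docker run -d --network host nginx

# Disable networking entirely for a container
docker run -d --network none alpine sleep 300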
Docker network also supports network segmentation and isolation, allowing you to create multiple networks and assign containers to specific networks based on their function or security requirements. This helps to improve the security and performance of your Docker environment.
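A small sketch of that idea, with separate frontend and backend networks (the network and container names, and the myapp:latest image, are placeholders): the web server sits only on the frontend network, the database only on the backend network, and the application joins both.
# Two isolated networks for different tiers
docker network create frontend
docker network create backend

# The web container is reachable only on the frontend network
docker run -d --name web --network frontend nginx

# The database lives only on the backend network
docker run -d --name db --network backend -e POSTGRES_PASSWORD=password postgres:13

# The application starts on the frontend network and then joins the backend network too
docker run -d --name app --network frontend myapp:latest
docker network connect backend app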
Overall, Docker networking is a powerful feature that enables communication between containers and provides flexible and secure networking options for your Docker environment.
Here are today’s tasks:
01. Create a multi-container docker-compose file that will bring containers UP and DOWN in a single shot (Example: create an application and a database container)
Here’s an example docker-compose.yml file that creates two containers, one for an application and one for a database, and can bring them up and down in a single shot. It also includes an example of how to scale the application container using the docker-compose scale command:
version: "3.9"
services:
  app:
    image: myapp:latest
    container_name: myapp
    ports:
      - "80:80"
    depends_on:
      - db
    environment:
      DB_HOST: db
      DB_PORT: 5432
      DB_NAME: myapp
      DB_USER: admin
      DB_PASSWORD: password
  db:
    image: postgres:13
    container_name: mydb
    volumes:
      - data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
volumes:
  data:
# Optional section to demonstrate scaling the application container.
# The deploy block below belongs under the app service and is honoured when
# deploying with Docker Swarm (docker stack deploy) or a recent docker compose.
# Uncomment it to use, and run `docker-compose up -d` to start the containers.
# Then run `docker-compose scale app=3` to scale the application to 3 replicas.
# Then run `docker-compose ps` to view the status of the containers.
# And run `docker-compose logs app` to view the logs of the application containers.
# And run `docker-compose down` to stop and remove all containers, networks, and volumes associated with the application.
#   deploy:
#     replicas: 1
#     resources:
#       limits:
#         cpus: "0.5"
#         memory: "256M"
#     restart_policy:
#       condition: on-failure
#       delay: 5s
#       max_attempts: 3
#       window: 120s
To bring up both containers, navigate to the directory containing the docker-compose.yml file and run the following command:
docker-compose up -d
This will start both containers in the background and create a network for them to communicate with each other. The depends_on option in the app service ensures that the database container is started before the application container.
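Keep in mind, though, that depends_on only controls start order; it does not wait until the database is actually ready to accept connections. One common approach, supported by newer versions of Docker Compose, is to combine a healthcheck on the db service with the long form of depends_on. The snippet below is a minimal sketch under that assumption (pg_isready ships with the postgres image):
services:
  app:
    image: myapp:latest
    depends_on:
      db:
        # Wait until db's healthcheck reports healthy before starting app
        condition: service_healthy
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: password
    healthcheck:
      # pg_isready returns success once PostgreSQL accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5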
To scale the application container to 3 replicas, run the following command:
docker-compose scale app=3
This will create two additional replicas of the application container. Note that docker-compose scale is deprecated in newer releases; docker-compose up -d --scale app=3 is the preferred equivalent. Also, to scale the app service you must remove its fixed container_name and host port mapping, since multiple replicas cannot share the same name or host port. You can view the status of all containers by running the following command:
docker-compose ps
And you can view the logs of the application containers by running the following command:
docker-compose logs app
To bring down both containers, run the following command:
docker-compose down
This will stop and remove both containers, along with the network created by the docker-compose.yml file. Named volumes are not removed by default; if you want to remove the volume associated with the database container as well, add the -v option:
docker-compose down -v
I hope this helps! Let me know if you have any questions in the comments.
02. How to use Docker Volumes and Named Volumes to share files and directories between multiple containers?
Docker Volumes and Named Volumes are two ways to share files and directories between multiple containers. Here’s how to use both:
- Docker Volumes:
Docker Volumes are a way to persist data generated by Docker containers. They are stored outside the container, which means the data can be shared across containers and persists even if the container is deleted.
To use Docker Volumes, you can use the -v flag when you start a container. For example:
docker run -v /host/path:/container/path myimage
This maps the directory at /host/path on the host machine to the directory at /container/path inside the container (strictly speaking, a host path like this is a bind mount rather than a volume). If the directory does not exist on the host machine, Docker will create it.
To use the same Docker Volume across multiple containers, you can use the same -v flag when starting each container. For example:
docker run -v myvolume:/data myimage
This will create a Docker Volume named myvolume and mount it at /data inside the container. You can then share the same myvolume volume with other containers by passing the same -v flag to each of them.
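As a minimal sketch of that sharing (the container names writer and reader and the use of the alpine image are illustrative assumptions):
# Both containers mount the same named volume at /data
docker run -d --name writer -v myvolume:/data alpine sleep 3600
docker run -d --name reader -v myvolume:/data alpine sleep 3600

# A file written by one container is immediately visible in the other
docker exec writer sh -c "echo hello > /data/hello.txt"
docker exec reader cat /data/hello.txt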
- Named Volumes:
Named Volumes are volumes that Docker creates and manages under a name you choose; because they are fully managed by Docker, they are easy to reference from multiple containers and to back up and restore.
To use Named Volumes, you can specify them in your docker-compose.yml file. For example:
version: "3.9"
services:
  db:
    image: postgres:13
    environment:
      # The postgres image requires a password to start
      POSTGRES_PASSWORD: password
    volumes:
      - myvolume:/var/lib/postgresql/data
volumes:
  myvolume:
This will create a Named Volume named myvolume and mount it at /var/lib/postgresql/data inside the container. You can then use the same myvolume volume with other services by specifying it in their volumes sections. To share the same Named Volume across multiple containers, use the same volume name in the volumes section of each service:
version: "3.9"
services:
  app:
    image: myapp:latest
    volumes:
      - myvolume:/app/data
  db:
    image: postgres:13
    environment:
      # The postgres image requires a password to start
      POSTGRES_PASSWORD: password
    volumes:
      - myvolume:/var/lib/postgresql/data
volumes:
  myvolume:
This will create a Named Volume named myvolume and mount it at /app/data inside the app service and at /var/lib/postgresql/data inside the db service. You can then use the same myvolume volume with other services by specifying it in the volumes section.
I hope this helps! Let me know if you have any questions about this topic.
03. How to create two or more containers that read and write data to the same volume using the docker run --mount command?
To create two or more containers that read and write data to the same volume using the docker run --mount command, follow these steps:
1. Create a Docker volume using the docker volume create command. For example:
docker volume create myvolume
2. Start the first container and mount the volume using the --mount option with the docker run command. For example:
docker run -d --name container1 --mount source=myvolume,target=/data myimage
This will start a container named container1 from the myimage image, with the Docker volume myvolume mounted at /data inside the container.
3. Start the second container and mount the same volume using the same --mount option. For example:
docker run -d --name container2 --mount source=myvolume,target=/data myimage
This will start a container named container2 from the same myimage image, with the same myvolume volume mounted at /data inside the container.
4. Now both containers can read and write data to the same volume at /data. Any changes made by one container will be immediately visible to the other container.
Note that you can also use the --mount option to specify additional settings, such as read-only access (readonly), the mount type (type=volume, type=bind, or type=tmpfs), and driver-specific options (volume-opt).
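For example, a read-only mount might look like this (the container names here are placeholders; myimage is the same placeholder image used above, and a bind mount with --mount requires the host path to already exist):
# Mount the shared volume read-only, so this container cannot modify the data
docker run -d --name consumer --mount source=myvolume,target=/data,readonly myimage

# Mount a host directory instead of a named volume
docker run -d --name config-reader --mount type=bind,source=/host/path,target=/data,readonly myimage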
I hope this helps! Let me know if you have any questions in the comments.
04. How to verify that the data is the same in all containers by using the docker exec command to run commands inside each container?
To verify that the data is the same in all containers, you can use the docker exec command to run commands inside each container. Here are the steps:
1. Start the two or more containers that share the same volume using the docker run --mount command, as described in the previous answer.
2. Use the docker exec command to run a command inside one of the containers that writes data to the shared volume. For example, you can run the following command inside container1:
docker exec container1 sh -c "echo 'data from container1' > /data/data.txt"
This will create a file named data.txt in the /data directory inside container1 and write the text "data from container1" to it.
3. Use the docker exec command to run a command inside another container to verify that the data is the same. For example, you can run the following command inside container2:
docker exec container2 cat /data/data.txt
This will output the contents of the data.txt file in the /data directory inside container2. If everything is working correctly, the output should be "data from container1".
4. Repeat step 3 for all containers that are sharing the same volume to verify that the data is consistent across all containers.
By using the docker exec command, you can run commands inside each container and verify that the data is the same in all containers.
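If you are sharing the volume between more than a couple of containers, a small shell loop keeps the check manageable (this assumes the container names from the earlier examples):
# Print the shared file from every container that mounts the volume
for c in container1 container2; do
  echo "--- $c ---"
  docker exec "$c" cat /data/data.txt
done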
05. How to use the docker volume ls command to list all volumes and the docker volume rm command to remove the volume when you’re done?
To list all volumes using the docker volume ls command and remove a volume using the docker volume rm command, follow these steps:
1. Open a terminal or command prompt and run the following command to list all volumes:
docker volume ls
This will show a list of all Docker volumes on your system.
2. Find the name of the volume you want to remove from the list.
3. To remove the volume, run the following command:
docker volume rm <volume_name>
Replace <volume_name> with the name of the volume you want to remove.
4. To verify that the volume has been removed, run the docker volume ls command again and check that the volume is no longer in the list.
Note that if the volume is currently in use by a container, you will need to stop and remove that container before you can remove the volume. You can use the docker ps command to list all running containers, and the docker stop and docker rm commands to stop and remove containers, respectively.
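For example, a rough sketch of cleaning up a volume that is still in use (using the myvolume, container1, and container2 names from the earlier examples):
# Find containers, running or stopped, that mount the volume
docker ps -a --filter volume=myvolume

# Stop and remove them, then remove the volume
docker stop container1 container2
docker rm container1 container2
docker volume rm myvolume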
It’s important to note that removing a volume will also delete all data stored in that volume. Therefore, make sure to only remove volumes that are no longer needed and that you have a backup of any important data stored in the volume.
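One common way to take such a backup is with a throwaway container that mounts the volume and archives its contents to the host (the alpine image and the backup file name here are just illustrative):
# Archive the contents of myvolume into a tarball in the current directory
docker run --rm -v myvolume:/data -v "$(pwd)":/backup alpine tar czf /backup/myvolume-backup.tar.gz -C /data .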
I hope this helps! Let me know if you have any questions.
That’s all about today’s task of the DevOps journey.
I am Sunil Kumar. Please do follow me here and support #devOps #trainwithshubham #github #devopscommunity #devops #cloud #devoparticles #trainwithshubham
Connect with me on LinkedIn: linkedin.com/in/sunilkumar2807