Let’s go back in time. When computers were still novelties, a company that wanted to run an application had to purchase a server. The catch was that each server could only run one application, so running multiple applications meant purchasing multiple servers. Costs soared, which hurt both businesses and, more broadly, the environment.
IBM resolved this problem by bringing virtual machines into the game. Virtual machines let us run numerous applications on a single server. You may have used one yourself, running Windows on a Mac or Ubuntu inside a Windows computer. But virtual machines faced a challenge of their own: each one requires its own operating system, which consumes a significant amount of memory and disk space. That made them slow and inefficient.
Enter our messiah: containers.
What exactly are containers?
Imagine relocating from India to Scotland. Would it be more convenient to ship your belongings one at a time, or all at once in one big box? The latter, of course.
Simply put, that big box is a container.
Using the same analogy, suppose you built a website that works flawlessly on your system but runs into issues when your friend tries it on their computer. To avoid that hassle, we can use a container to ship the entire website along with all of its dependencies: the database, front end, back end, source code, and so on. This guarantees that the website runs without a hitch on any machine.
Containerization is a productive way to run, deploy, and scale applications in the context of computing.
Virtual Machines (VMs) vs. Containers
Any application that runs inside a virtual machine needs a guest operating system, which in turn calls for a hypervisor. The hypervisor creates the virtual machines on top of the host operating system and manages them. Each guest OS claims its own slice of the hardware, which is effectively partitioned between them.
With containers, you only need one operating system plus a container engine, which runs the applications. The idea is to isolate your application from the main operating system.
To put it simply: virtual machines use multiple operating systems to run multiple applications, while containers run every application on the host operating system alone, through the container engine.
In practice, this makes containers the far lighter option of the two.
So, what is Docker?
Docker is one of many container platforms. It lets you create containers so you can test, develop, and scale applications quickly and easily by running them in isolated environments.
This leads us to our next point.
Why should you use Docker?
- Expedites the transfer of code
- Facilitates scaling and deploying applications and identifying issues
- Saves disk space, since the entire application doesn’t have to be installed locally
Let’s take a moment to become familiar with a few Docker terminologies that will simplify our lives.
Terminologies Used by Docker
Container Runtime
The container runtime is what lets us start and stop containers. It comes in two kinds:
- Low-level runtime (runc)
- High-level runtime (containerd)
Docker Client
The client lets users issue commands to the Docker Daemon. Docker uses a client-server architecture.
Docker Engine
The engine is in charge of keeping the Docker platform as a whole running smoothly. It is a client-server application made up of three parts:
- Server – which runs the daemon
- REST API – handles the interaction between applications and the server
- Client – which is nothing but the command line interface (CLI)
Docker Daemon
The daemon is the central component of the Docker architecture, responsible for creating, running, and distributing containers. It also manages the containers and Docker images.
Docker Image
A Docker image is the file that packages the operating system files, the source code, and all the dependencies needed to run the application inside the container. A container, in turn, is a running instance of an image: the containerized application. Docker images are immutable; once built, they cannot be changed.
Dockerfile
A Dockerfile contains the list of instructions for building a Docker image.
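As an illustration, here is a minimal Dockerfile sketch for a hypothetical Node.js website (the base image, port, and file names are assumptions for the example, not something from this article):

```dockerfile
# Start from an official base image that already contains Node.js
FROM node:18-alpine

# Install dependencies first so they are cached between builds
WORKDIR /app
COPY package.json .
RUN npm install

# Copy the rest of the source code into the image
COPY . .

# Document the port the app listens on and define the startup command
EXPOSE 3000
CMD ["node", "server.js"]
```

Running `docker build -t mywebsite .` in the same directory would turn these instructions into an image.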
Docker Registry
This is where Docker images are kept. Docker Hub, Docker’s official public online registry, is where you can find the images of well-known programs. We can also push images of our own applications to Docker Hub.
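To make this concrete, a typical session with Docker Hub looks roughly like the transcript below (this assumes Docker is installed and you have a Docker Hub account; `yourusername` is a placeholder):

```shell
$ docker pull nginx                        # download an official image
$ docker tag nginx yourusername/my-nginx   # re-tag it under your own namespace
$ docker login                             # authenticate against Docker Hub
$ docker push yourusername/my-nginx        # publish it to your repository
```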
An Introduction to Docker Commands
To get started, you can consult the official documentation.
Visit Play With Docker, an engaging online playground for Docker, if you want to skip installation.
Back to some hands-on work now : )
docker run hello-world
This command creates the container and then starts it. It first checks whether the latest official hello-world image is available on the Docker Host (the local machine on which the Docker Engine is running). If it is not, as in the case above, the image is automatically downloaded from Docker Hub before the container is created.
The container is created by running an image; the image name in this case is hello-world.
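If everything is set up correctly, the session looks roughly like this (output abbreviated; the exact wording can vary between Docker versions):

```shell
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
```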
docker images
This command lists every Docker image available on the Docker Host.
Let’s break down each column in the output:
- REPOSITORY – the name of the Docker image.
- TAG – the version of the image being used (for example, latest).
- IMAGE ID – an alphanumeric string associated with every image.
- CREATED – when the image was created.
- SIZE – the size of the image.
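For example, after running hello-world, the listing might look something like this (the image ID, date, and size below are illustrative values, not real ones):

```shell
$ docker images
REPOSITORY    TAG       IMAGE ID       CREATED        SIZE
hello-world   latest    a1b2c3d4e5f6   4 weeks ago    13.3kB
```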
docker ps
It lets us see each and every container currently running on the Docker Host.
docker inspect [image_name]
It provides detailed information about the image.
docker container prune -f
It purges all stopped containers from Docker; the -f flag skips the confirmation prompt.
docker rmi [image_name]
It enables us to delete an image or images from the host.
To delete all images at once, we feed the output of docker images -q (which prints only the image IDs) into the rmi command:
docker rmi $(docker images -q)
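The `$(...)` part is plain shell command substitution: the inner command runs first, and its output is spliced into the outer command line. A minimal sketch of the mechanism without Docker, where the printf line stands in for `docker images -q`:

```shell
# Stand-in for `docker images -q`, which prints one image ID per line
ids=$(printf 'id1\nid2\nid3\n')

# Unquoted expansion splits on whitespace, so each ID becomes a separate
# argument -- exactly how `docker rmi $(docker images -q)` receives them
echo $ids   # -> prints "id1 id2 id3"
```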
I sincerely appreciate you reading this and spending the time that you did.