- Creating Dockerfile
- Building Docker image
- Run as Container
- Common facilities
- Other facilities
- Single Page Guide
This post explains the steps involved in containerizing an application with Docker, and also gives a brief overview of related areas.
A container is a running instance of a Docker image. Containers are an abstraction at the application layer that packages code and dependencies together, shipping the application along with its run-time environment.
Containers run on the Docker engine, behaving almost like VMs, but not exactly. This has a lot of merits in production, and containerization is currently booming in the IT industry; containerizing an application with Docker or Kubernetes, for example, provides enormous facilities.
In simple terms, instead of releasing our application without its environment and dependencies, we release it as an image that runs as a bounded, isolated OS. This largely eliminates the environment and dependency problems we often face.
Overall, we are just going to follow these three steps:
|Step|Artifact|Description|
|---|---|---|
|Build|Dockerfile|Packaging the application with required dependencies and custom files|
|Ship|Docker image|Releasing it as an image file globally using a Docker registry|
|Run|Container|Running containers from the image, which act as your application inside an isolated, VM-like environment|
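The three steps above can be sketched with the Docker CLI like this (the image name, tag, and registry user here are placeholders, not part of our example):

```shell
# Build: package the app described by the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Ship: tag the image for a registry (here Docker Hub, user "myuser") and push it
docker tag myapp:1.0 myuser/myapp:1.0
docker push myuser/myapp:1.0

# Run: start a container from the image
docker run -d --name myapp-instance myuser/myapp:1.0
```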
At a very basic level, to understand containers, I am just going to containerize the simple file printer shown below,
```
filename = "log.txt"
myfile = open(filename, 'w')
myfile.write("Hi Here we are ..!" + '\n')
myfile.close()
```
This code just creates a file named log.txt and saves the content in it. Let's containerize it and see what happens!
A Dockerfile is similar to a Makefile; it is used to build the Docker image. This file normally contains three types of instruction sets,
- Fundamental instruction
- Configuration instruction
- Execution instruction
For our scenario, the Dockerfile would look like this,

```
FROM python:2
COPY test.py /usr/
WORKDIR /usr/
RUN python test.py
CMD bash
```
- FROM determines the base image. You can use scratch if you have a fully cross-compiled system.
- COPY copies your local file into the container's filesystem.
- WORKDIR changes the current directory for subsequent instructions.
- RUN executes a command inside the container at build time.
- CMD is the container's launch command; the container lives as long as this command runs.
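As a side note, CMD is often written in exec form, which runs the command directly instead of wrapping it in /bin/sh -c; an equivalent variant of the Dockerfile above would be:

```
FROM python:2
COPY test.py /usr/
WORKDIR /usr/
RUN python test.py
# exec form: runs bash directly, without an intermediate shell
CMD ["bash"]
```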
Building Docker image
Go to the build directory and make sure you have the Dockerfile in your current directory. To build the Docker image, use the command,
```
docker build -t testimage .
```
```
Sending build context to Docker daemon  3.072kB
Step 1/5 : FROM python:2
 ---> 92c086fc9702
Step 2/5 : COPY test.py /usr/
 ---> Using cache
 ---> 1b9aa6ce04cd
Step 3/5 : WORKDIR /usr/
 ---> Using cache
 ---> 6dcef3ef8785
Step 4/5 : RUN python test.py
 ---> Using cache
 ---> 6edf4708cf6c
Step 5/5 : CMD bash
 ---> Using cache
 ---> 21303e2891d6
Successfully built 21303e2891d6
Successfully tagged testimage:latest
```
This command creates an image with the name and tag you provide. When you build,
- If the base image is not available locally, it will be pulled from the Docker Hub registry.
- Docker creates a layer for each instruction in the Dockerfile and caches it.
- If anything changes after a build, new layers are created only for the changed instructions.
- You can't use capital letters in an image name.
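For instance, if you want an explicit version tag instead of the default latest (the tag 1.0 here is an assumption for illustration):

```shell
# build with an explicit, lowercase name:tag
docker build -t testimage:1.0 .

# re-running the same build hits the layer cache and finishes almost instantly
docker build -t testimage:1.0 .
```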
You can check the created image by listing available images using the command,
```
docker image ls
```
or you can also specify the image name,
```
docker image ls testimage
```
```
REPOSITORY    TAG      IMAGE ID       CREATED      SIZE
testimage     latest   21303e2891d6   3 days ago   914MB
```
Run as Container
Method 1 :
We can run this image as a container in two steps.

```
docker container create --name testcontainer testimage
docker container start testcontainer
```
Method 2 :
Or it can be done in a single step by,

```
docker run -itd --name testcontainer testimage
```
The flags -itd stand for interactive, tty and detach. See the help for more details. We can check the container status using the command
```
docker container ls
```
```
CONTAINER ID   IMAGE       COMMAND             CREATED         STATUS         PORTS   NAMES
c706bd3fe268   testimage   "/bin/sh -c bash"   5 seconds ago   Up 3 seconds           testcontainer
```
We can enter the container using the exec facility with the bash command, as shown below,
```
docker exec -it testcontainer bash
```
Inside the container environment,
```
root@c706bd3fe268:/usr# ls
bin  games  include  lib  local  log.txt  sbin  share  src  test.py
```
Here we can see the file log.txt, which was generated by the RUN python test.py step when the image was built.
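Assuming the testcontainer from above is still running, one way to retrieve that file onto the host is docker cp (the paths follow our example):

```shell
# copy log.txt out of the running container to the current host directory
docker cp testcontainer:/usr/log.txt ./log.txt
cat log.txt
```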
I always suggest referring to the official documentation for commands. But here I have shared some often-used commands that I felt useful for a quick view :).
Some basic commands,
- Docker inspect - Inspect detailed information about the given object.
- Docker rm - Remove the object given as an argument.
- Docker info - Print system-wide Docker information.
- Docker rename - Rename a container.
- Docker login - Log in to a registry.
- Docker logout - Log out from a registry.
- Docker image
- Docker search - Search for images in the configured Docker registry.
- Docker pull - Pull images from a registry.
- Docker push - Push images to a registry.
- Docker save - Save an image as a compressed file.
- Docker load - Load a compressed file as an image.
- Docker rmi - Remove images.
- Docker run - Create + start a container in a single command.
- Docker container
- Docker ps - List the currently running containers.
- Docker start - Start a container.
- Docker restart - Restart an already started container.
- Docker cp - Copy files between the local filesystem and a container path.
- Docker port - List port mappings of a running container.
- Docker pause - Suspend all processes in a container.
- Docker unpause - Resume a paused container.
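A few of these in action, using the testcontainer and testimage from our example:

```shell
# detailed JSON description of the container
docker inspect testcontainer

# suspend and resume the container's processes
docker pause testcontainer
docker unpause testcontainer

# stop and remove the container, then remove the image
docker container stop testcontainer
docker rm testcontainer
docker rmi testimage
```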
Some important facilities, such as,
- Docker volume - Mount a local directory as a volume object into a container.
- Docker network - Customize the network configuration for bridge, vlan and overlay networks.
- Docker stats - Live resource-usage statistics of the containers on the machine.
- Docker top - The usual top-style view of a container's processes.
- Docker secret - Manage sensitive data for services (swarm mode).
- Docker plugin - Manage Docker plugins.
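A small volume sketch, reusing our testimage; the host path /tmp/data and the names mydata, voltest and bindtest are assumptions for illustration:

```shell
# create a named volume and run a container with it mounted at /data
docker volume create mydata
docker run -itd --name voltest -v mydata:/data testimage

# or bind-mount a host directory directly into the container
docker run -itd --name bindtest -v /tmp/data:/data testimage
```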
Docker has many more awesome facilities, such as,
- Docker Compose - composing or managing multiple containers through a single configuration file, docker-compose.yaml.
- Swarm mode - managing Docker clusters using a manager/worker scenario.
- Docker service - used in swarm mode for deploying the application as a service, with facilities such as rollback, scale and update.
- Docker stack - also in swarm mode, for managing a collection of multiple services.

These facilities are used for high availability, distributed processing, scaling and (almost) zero-downtime deployment.
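As an illustration, a minimal docker-compose.yaml for our test image might look like this (the service name testapp is an assumption):

```yaml
# one service, built from the Dockerfile in this directory
version: "3"
services:
  testapp:
    build: .
    image: testimage
```

Running docker-compose up would then build the image if needed and start the container.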
Also, I have created a single-page guide which helps you to quickly get going with Docker. It consists of a simple 4-step guide to get dockerization done quickly.
Here I have shared the slides of a Docker overview which explains the concepts behind this technology. Have a look at it to understand quickly and move on … :)