This post covers some rarely shared but frequently asked-about techniques in containerisation with Docker and Kubernetes. You can enhance this post by contributing to the Docker-tutorial GitHub repo; I have added this kind of material under ## Quick Details in the README.md of every chapter.

###### What are the Dockerfile Instructions ?

Fundamental Instructions

• FROM Sets the base image for subsequent instructions.
• ARG Defines a build-time variable.
• MAINTAINER (Deprecated - use LABEL instead) Sets the Author field of the generated images.
• ENV Sets an environment variable.
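As a sketch, the four fundamental instructions might appear together like this (the base image, variable, and values are illustrative placeholders):

```dockerfile
# Base image for all subsequent instructions
FROM ubuntu:22.04

# Build-time variable, overridable with: docker build --build-arg APP_VERSION=2.0 .
ARG APP_VERSION=1.0

# LABEL replaces the deprecated MAINTAINER instruction
LABEL maintainer="you@example.com"

# Environment variable available at build time and inside the running container
ENV APP_HOME=/opt/app
```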

Configuration Instructions

• RUN Executes any commands in a new layer on top of the current image and commits the result.
• ADD Copies new files, directories, or remote file URLs to the container. Invalidates caches. Avoid ADD and use COPY instead.
• COPY Copies new files or directories to the container. Note that, like ADD, it copies as root by default regardless of your USER / WORKDIR settings, so you have to chown manually (or use the --chown flag).
• VOLUME Creates a mount point for externally mounted volumes or other containers.
• USER Sets the user name for the following RUN / CMD / ENTRYPOINT instructions.
• WORKDIR Sets the working directory.
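A hedged sketch of how these configuration instructions combine (the paths and user name are illustrative, not prescriptive):

```dockerfile
FROM ubuntu:22.04

# RUN executes in a new layer and commits the result
RUN useradd --create-home appuser

# Sets the working directory for the following instructions
WORKDIR /opt/app

# COPY runs as root by default; --chown avoids a separate chown step
COPY --chown=appuser:appuser . /opt/app

# Declares a mount point for externally mounted volumes
VOLUME /opt/app/data

# Subsequent RUN / CMD / ENTRYPOINT instructions run as this user
USER appuser
```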

Execution Instructions

• CMD Provides defaults for an executing container.
• EXPOSE Informs Docker that the container listens on the specified network ports at runtime. NOTE: it does not actually publish the ports.
• ONBUILD Adds a trigger instruction that runs when the image is used as the base for another build.
• STOPSIGNAL Sets the system call signal that will be sent to the container to make it exit.
• ENTRYPOINT Configures a container to run as an executable.
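One way these execution instructions fit together (the ping example is purely illustrative): ENTRYPOINT fixes the executable, while CMD supplies overridable default arguments.

```dockerfile
FROM alpine:3.19

# Documents the listening port; does not publish it (use docker run -p for that)
EXPOSE 8080

# Signal sent on docker stop (SIGTERM is also the default)
STOPSIGNAL SIGTERM

# The container runs as this executable...
ENTRYPOINT ["ping"]

# ...with these default arguments, overridable at docker run time
CMD ["localhost"]
```

Running `docker run thisImage 8.8.8.8` replaces only the CMD part, so the container executes `ping 8.8.8.8`.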
###### How to update port mapping on a running Container ?

• Stop the container and commit it as an image.
• Start the snapshot image of the container with the new port mapping.

docker stop containerName
docker commit containerName tempImageName
docker run -p 8080:8080 --name newContainerName -td tempImageName

Reference: an answer on Stack Overflow by Fujimoto Youichi

###### How to connect and disconnect containers in a docker bridge network ?

Follow these simple steps to connect and disconnect multiple containers with a bridge network.

• Create a bridge network with the default settings
docker network create --driver bridge testbridge
• Connect containers to this network
docker network connect testbridge containerOne
docker network connect testbridge containerTwo
• Check the network for the containers
docker network inspect testbridge

You can verify the connected containers in the JSON output of the inspect command.
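As a rough sketch, that JSON can also be checked programmatically. The snippet below parses a trimmed, hypothetical `docker network inspect testbridge` output (the container IDs and IPs are made up):

```python
import json

# Trimmed, hypothetical output of: docker network inspect testbridge
inspect_output = """
[
  {
    "Name": "testbridge",
    "Driver": "bridge",
    "Containers": {
      "abc123": {"Name": "containerOne", "IPv4Address": "172.18.0.2/16"},
      "def456": {"Name": "containerTwo", "IPv4Address": "172.18.0.3/16"}
    }
  }
]
"""

# docker network inspect returns a list with one entry per network
network = json.loads(inspect_output)[0]

# The Containers map is keyed by container ID; pull out the names
connected = sorted(c["Name"] for c in network["Containers"].values())
print(connected)  # ['containerOne', 'containerTwo']
```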

###### How to share the images locally ?

Follow these two steps to share an image as a .tar archive.

• Save the image as a .tar file.
• Load the image from the .tar file.

docker save --output imageName.tar imageName
docker load --input imageName.tar

###### What is a dangling image ?

Dangling images are created when you build a new version of an image without renaming it or updating its version tag, so the old image becomes a dangling image, as shown below.

• List all dangling images

$ docker images -f dangling=true

REPOSITORY   TAG      IMAGE ID       CREATED      SIZE
<none>       <none>   3f4ae2ddf543   4 days ago   1.37GB
<none>       <none>   b4c8cecab3bc   8 days ago   655MB

• To remove all dangling images, run this command:

$ docker rmi $(docker images -f dangling=true -q)

Reference: What is a dangling image? on Stack Overflow

###### How to import images inside a kubernetes cluster in a simple way ?

It can be done by configuring a registry, but I found the following way simpler. First go to the directory of the build path and make sure you have the Dockerfile in your directory.

currentdirectory
|--- Dockerfile
|--- Other-Project-Files

• Start minikube

minikube start

• Set the docker environment by eval in the current shell

eval $(minikube docker-env)

• Now build your image

docker build -t imageName:version .

• Access the image by setting the image pull policy

kubectl run hello-foo --image=foo:0.0.1 --image-pull-policy=Never

Or it can be set inside the yaml config file as shown below:

- image: imageName:latest
  name: imageName
  imagePullPolicy: Never
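For context, that container snippet sits under spec.containers in a full Pod manifest; a minimal sketch (all names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-foo
spec:
  containers:
    - image: imageName:latest
      name: imageName
      # Never pull from a registry; use the locally built image
      imagePullPolicy: Never
```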

###### How is Deployment used over Pod and ReplicaSet in kubernetes, and why ?

The need for a ReplicaSet:

• A Pod basically contains one or more containers.
• The Pod is the most basic entity in kubernetes.
• A ReplicaSet is like a manager of Pods which ensures the Pods stay active. It always treats a Pod as a replica, hence the name ReplicaSet (a set of replicas, i.e. a set of Pods).

Running a Pod alone is risky because:

• When the machine crashes or something else goes wrong with it, the Pod will be deleted.
• That's why ReplicaSets are used to guarantee the Pod's life.

Deployment has these merits:

• It can update the replicas with zero downtime.
• The Deployment controller manages Deployment objects, which can create new replicas, remove old ones, and adopt their resources into the updated replicas.
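A minimal Deployment sketch showing replicas and a rolling-update strategy (the name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # Desired number of Pod replicas
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate        # Replace Pods gradually for zero-downtime updates
    rollingUpdate:
      maxUnavailable: 1        # At most one Pod down during the rollout
      maxSurge: 1              # At most one extra Pod created during the rollout
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: imageName:1.0
```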

If you find any typos, inaccuracies, or doubtful content in my post, feel free to comment.

Relevant and useful comments are always welcome. Let's make this tech community wonderful...!

Share and Thanks