Docker
Contents:
What is Docker?
What is the need for Docker?
Who is Docker for?
Virtualization vs. containerization
Benefits of Docker containers
Installation of Docker
Images and containers
Some useful Docker basic commands
Dockerfile instructions: FROM, MAINTAINER, RUN, CMD, LABEL, EXPOSE, ENV, COPY, ADD, ENTRYPOINT, VOLUME, USER, WORKDIR, ONBUILD
Swarm mode overview
Joining as Worker Node
Joining as Manager Node
Adding Worker Nodes to our Swarm
Create a Service
Accessing the Service
Docker-machine
1. What is Docker?
Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud. Today's businesses are under pressure to digitally transform but are constrained by existing applications and infrastructure while rationalizing an increasingly diverse portfolio of clouds, datacenters and application architectures. Docker enables true independence between applications and infrastructure, allowing developers and IT ops to unlock their potential, and creates a model for better collaboration and innovation.
2. What is the need for Docker?
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they're running on and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.

And importantly, Docker is open source. This means that anyone can contribute to Docker and extend it to meet their own needs if they need additional features that aren't available out of the box.
3. Who is Docker for?
Docker is a tool that is designed to benefit both developers and system administrators, making it a part of many DevOps (developers + operations) toolchains. For developers, it means that they can focus on writing code without worrying about the system that it will ultimately be running on. It also allows them to get a head start by using one of thousands of programs already designed to run in a Docker container as a part of their application. For operations staff, Docker gives flexibility and potentially reduces the number of systems needed because of its small footprint and lower overhead.
4. Virtualization vs. containerization
Virtual machines (VMs):-
As server processing power and capacity increased, bare-metal applications weren't able to utilize the new abundance in resources. Thus VMs were born, created by running software on top of physical servers in order to emulate a particular hardware system. A hypervisor, or virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the OS and is necessary to virtualize the server.

Within each virtual machine runs a unique operating system. VMs with different operating systems can run on the same physical server – a Unix VM can sit alongside a Linux-based VM, etc. Each VM has its own binaries/libraries and the application(s) that it services, and a VM may be many gigabytes large.
Containers:-
Operating system (OS) virtualization has grown in popularity over the last decade as a means to enable software to run predictably and well when moved from one server environment to another. Containers provide a way to run these isolated systems on a single server/host OS.

Containers sit on top of a physical server and its host OS, e.g. Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only, with each container able to be written to through a unique mount. This makes containers exceptionally "light" – containers are only megabytes in size and take just seconds to start, versus minutes for a VM.
Conclusion
VMs and containers differ on quite a few dimensions, but primarily because containers provide a way to virtualize an OS so that multiple workloads can run on a single OS instance, whereas with VMs, the hardware is being virtualized to run multiple OS instances. Containers' speed, agility and portability make them yet another tool to help streamline software development.
5. Benefits of Docker containers
· Build once, run everywhere.
· The environment is self-contained – no dependency issues.
· Existing tools make containers work together: linking, discovery, orchestration.
· Sharing of containerized components.
· Runs on any Linux server today: physical, virtual, cloud, etc.
6. Installation of Docker
For RHEL or CentOS
Step 1:- Download the Docker EPEL package
Step 2:- yum localinstall epel*
Step 3:- yum install -y yum-utils device-mapper-persistent-data lvm2
Step 4:- yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Step 5:- yum install docker-engine (or yum install docker-ce)
Step 6:- systemctl enable docker
Step 7:- systemctl start docker
For Ubuntu
Step 1:- apt-get install apt-transport-https ca-certificates curl software-properties-common
Step 2:- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Step 3:- apt-key fingerprint 0EBFCD88
Step 4:- add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Step 5:- apt-get install docker-ce
Step 6:- service docker start
Step 7:- systemctl enable docker (enable Docker at boot)
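To verify the installation (a quick sanity check; hello-world is Docker's standard test image):
docker --version
docker run hello-world
If both commands succeed, the Docker daemon is running and able to pull and run images.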
Images and containers
A container is launched by running an image. An image is an executable package that includes everything needed to run an application – the code, a runtime, libraries, environment variables, and configuration files.

A container is a runtime instance of an image – what the image becomes in memory when executed (that is, an image with state, or a user process). You can see a list of your running containers with the command docker ps, just as you would in Linux.
Some Useful Docker Basic Commands
docker version => Returns the Docker version installed.
docker info => Returns detailed information on the Docker service installed.
docker search "Image Name" => Searches for an image.
docker pull "Image Name" => Downloads an image.
docker images => Displays the Docker images on the host.
docker rmi "Image Name" => Deletes a Docker image.
docker run -it "Image Name" => Runs a Docker container in interactive mode.
docker run -d "Image Name" => Runs a container in detached mode.
docker start -ai "Container id" => Starts a stopped container.
docker stop "Container id" => Stops a container.
docker rm "Container id" => Deletes a container.
docker ps -a -q => Lists the IDs of all containers, running or stopped.
docker attach "Container id" => Attaches to the shell of a running container.
docker logs "Container id" => Shows the logs of a container.
docker diff "Container id" => Shows the changes a container has made to its filesystem relative to its image.
docker commit "Container id" => Creates a new image from a container's changes.
docker commit "Container id" "name" => Creates the new image under the given name.
docker tag "image id" tag name => Gives a tag name to an image.
docker tag "image id" "image name:tag name" => Changes the Docker image name or tag name.
docker rename "container id" new_name => Changes the container name.
docker history "image id" => Shows the previous history of an image.
docker top "Container id" => Executes the top command on a running container.
docker pause "container id" => Pauses a container.
docker unpause "container id" => Unpauses a container.
docker login => Logs in to Docker Hub through the CLI.
docker tag "image id" "repository-name" => Tags an image into a repository.
docker push "repository name" => Pushes the repository to Docker Hub.
docker inspect "Container name" => Shows the details of an image or container.
docker kill "Container name" => Kills the processes in a running container.
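As a quick illustrative session tying these commands together (a sketch; the container name web is an arbitrary choice):
docker pull nginx                  # download the image
docker run -d --name web nginx     # start a container in the background
docker logs web                    # check its output
docker stop web && docker rm web   # stop and remove it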
How to run an image
docker run -it -d -p 8080:8080 -p 50000:50000 jenkins
-p defines the port mapping. 8080:8080 => host port : container port.
-it => interactive mode, and -d => detached mode.
Other keywords:
--hostname => set the hostname of the container.
--name => set the container name.
/bin/bash is used to run the bash shell once CentOS is up and running.
For ex-
docker run -it --hostname=temp.localhost.local --name=abc centos /bin/bash
How to execute commands inside the container
docker exec "Container ID" Command
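For example, to list /tmp inside a running container, or to open an interactive shell in it (web is an arbitrary container name for illustration):
docker exec web ls /tmp
docker exec -it web /bin/bash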
Docker-Container lifecycle –
How to save your container from being destroyed:-
When we create a new container with the docker run -it imagename command and then exit with Ctrl+D, the container stops: it will no longer exist when you run docker ps.

Now there is an easier way to attach to containers and exit them cleanly without the need of destroying them. One way of achieving this is by using the nsenter command.

Before we run the nsenter command, you need to first install the nsenter image. It can be done by using the following command:
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter

Before we use the nsenter command, we need to get the process ID of the container, because this is required by the nsenter command. We can get the process ID via the docker inspect command, filtering it via the Pid:
docker inspect "container id" | grep Pid
nsenter -m -u -n -p -i -t containerPid /bin/bash
Ex-
nsenter -m -u -n -p -i -t 2948 /bin/bash
Options
-u is used to enter the UTS namespace
-m is used to enter the mount namespace
-n is used to enter the network namespace
-p is used to enter the process namespace
-i is used to enter the IPC namespace
-t specifies the target process ID (the container's Pid)
containerPid – This is the Pid of the container.
Command – This is the command to run within the container.
Now when you press Ctrl+D and check docker ps, the container will still exist.
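As a compact variant of the two steps above (a sketch, assuming a running container named web), docker inspect can extract the Pid directly:
PID=$(docker inspect --format '{{.State.Pid}}' web)
nsenter -m -u -n -p -i -t $PID /bin/bash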
Docker-File
Create a Dockerfile:-
Docker gives you the capability to create your own Docker images, and it can be done with the help of Docker Files. A Docker File is a simple text file with instructions on how to build your images.
Step 1:- Create a file called Docker File and edit it using vim. Please note that the name of the file has to be "Dockerfile" with "D" as capital.
Step 2:- Build your Docker File using the following instructions:
#This is a sample Image
FROM ubuntu
MAINTAINER deepakkhandelwalji13@gmail.com
RUN apt-get update
RUN apt-get install -y nginx
CMD ["echo", "Image created"]
The following points need to be noted about the above file:
The first line "#This is a sample Image" is a comment. You can add comments to the Docker File with the help of the # character.
The next line has to start with the FROM keyword. It tells Docker which base image you want to base your image on. In our example, we are creating an image from the ubuntu image.
The next command names the person who is going to maintain this image. Here you specify the MAINTAINER keyword and just mention the email ID.
The RUN command is used to run instructions against the image. In our case, we first update our Ubuntu system and then install the nginx server on our ubuntu image.
The last command is used to display a message to the user.
Step 3:- Save this file.
Build the Dockerfile:-
The Docker File can be built with the following command:
docker build
Ex- docker build -t ImageName:TagName dir
-t is used to mention a tag for the image
ImageName – This is the name you want to give to your image
TagName – This is the tag you want to give to your image
dir – The directory where the Docker File is present.
You will then see the successfully built message and the ID of the new image. When you run the docker images command, you will be able to see your new image.
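For example, to build the Dockerfile above from the current directory (myimage:0.1 is an arbitrary name and tag for illustration):
docker build -t myimage:0.1 .
docker images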
Some Important Dockerfile Instruction Commands:
FROM
This instruction is used to set the base image for subsequent instructions. It is mandatory to set this in the first line of a Dockerfile. You can use it any number of times though.
Example:
FROM ubuntu
MAINTAINER
This is a non-executable instruction used to indicate the author of the Dockerfile.
Example:
MAINTAINER <name>
RUN
This instruction lets you execute a command on top of an existing layer and create a new layer with the results of the command execution. For example, if there is a pre-condition to install PHP before running an application, you can run appropriate commands to install PHP on top of the base image (say Ubuntu) like this:
FROM ubuntu
RUN apt-get update && apt-get install -y php5
CMD
The major difference between CMD and RUN is that CMD doesn't execute anything during build time. It just specifies the intended default command for the image, whereas RUN actually executes its command during build time.
Note: there can be only one CMD instruction in a Dockerfile; if you add more, only the last one takes effect.
Example:
CMD ["echo", "Hello World!"]
LABEL
You can assign metadata in the form of key-value pairs to the image using this instruction. It is important to note that each LABEL instruction creates a new layer in the image, so it is best to use as few LABEL instructions as possible.
Example:
LABEL version="1.0" description="This is a sample desc"
EXPOSE
While running your service in the container you may want your container to listen on specified ports. The EXPOSE instruction helps you do this.
Example:
EXPOSE 6456
ENV
This instruction can be used to set the environment variables in the container.
Example:
ENV var_home="/var/etc"
COPY
This instruction is used to copy files and directories from a specified source to a destination (in the file system of the container).
Example:
COPY preconditions.txt /usr/temp
ADD
This instruction is similar to the COPY instruction, with a few added features like remote URL support in the source field and local-only tar extraction.
Example:
ADD http://www.site.com/downloads/sample.tar.xz /usr/src
ENTRYPOINT
You can use this instruction to set the primary command for the image. For example, if you have installed only one application in your image and want it to run whenever the image is executed, ENTRYPOINT is the instruction for you.
Note: arguments are optional, and you can pass them at runtime with something like docker run <image-name> <arguments>. The elements specified using CMD are not run as a separate command; they are passed as default arguments to the command specified in ENTRYPOINT.
Example:
ENTRYPOINT ["echo"]
CMD ["Hello World!"]
VOLUME
You can use the VOLUME instruction to enable access to a location on the host system from a container. Just pass the path of the location to be accessed.
Example:
VOLUME /data
USER
This is used to set the UID (or username) to use when running the image.
Example:
USER daemon
WORKDIR
This is used to set the currently active directory for other instructions such as RUN, CMD, ENTRYPOINT, COPY and ADD. Note that if a relative path is provided, the next WORKDIR instruction will take it as relative to the path of the previous WORKDIR instruction.
Example:
WORKDIR /user
WORKDIR home
RUN pwd
This will output the path as /user/home.
ONBUILD
This instruction adds a trigger instruction to be executed when the image is used as the base for some other image. It behaves as if a RUN instruction were inserted immediately after the FROM instruction of the downstream Dockerfile. This is typically helpful in cases where you need a static base image with a dynamic config value that changes whenever a new image has to be built (on top of the base image).
Example:
ONBUILD RUN rm -rf /usr/temp
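Pulling several of these instructions together, a minimal complete Dockerfile might look like this (an illustrative sketch; the copied file, port and paths are arbitrary):
# Base image and author
FROM ubuntu
MAINTAINER deepakkhandelwalji13@gmail.com
# Install nginx at build time
RUN apt-get update && apt-get install -y nginx
# Copy content into the image and declare the listening port
COPY index.html /usr/share/nginx/html/
EXPOSE 80
# Run nginx in the foreground when a container starts
ENTRYPOINT ["nginx", "-g", "daemon off;"]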
How to create Public Repositories:
Public repositories can be used to host Docker images which can be used by everyone else. An example is the images which are available in Docker Hub. Most of the images such as CentOS, Ubuntu, and Jenkins are publicly available for all. We can also make our images available by publishing them to a public repository on Docker Hub.
Step 1:- Log into Docker Hub and create your repository. This is the repository where your image will be stored. Go to https://hub.docker.com/ and log in with your credentials.
Step 2:- Click the button "Create Repository" on the above screen and create a repository with the name demorep. Make sure that the visibility of the repository is public.
Step 3:- Now go back to the Docker host. Here we need to tag our myimage to the new repository created in Docker Hub. We can do this via the docker tag command.
docker tag imageID Repositoryname
Ex- docker tag ab0c1d3744dd demousr/demorep:1.0
Step 4:- Issue the docker login command to log in to the Docker Hub repository from the command prompt. The docker login command will prompt you for the username and password to the Docker Hub repository.
Ex- docker login
Username:
Password:
Step 5:- Once the image has been tagged, it's now time to push the image to the Docker Hub repository. We can do this via the docker push command. We will learn more about this command later in this chapter.
docker push Repositoryname
Ex- docker push demousr/demorep:1.0
Now let's try to pull the repository we uploaded onto our Docker host. Let's first delete the images, myimage:0.1 and demousr/demorep:1.0, from the local Docker host, then use the docker pull command to pull the repository from Docker Hub.
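For example, deleting the local copy and pulling it back (repository name taken from the steps above):
docker rmi demousr/demorep:1.0
docker pull demousr/demorep:1.0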
How to create Private Repositories:
You might have the need to have your own private repositories. You may not want to host the repositories on Docker Hub. For this, there is a registry container from Docker itself. Let's see how we can download and use the container for the registry.
Step 1: Use the docker run command to download the private registry. This can be done using the following command:
docker run -d -p 5000:5000 --name registry registry:2
The following points need to be noted about the above command:
Registry is the container managed by Docker which can be used to host private repositories.
The port number exposed by the container is 5000. Hence with the -p option, we are mapping the same port number to the 5000 port number on our localhost.
We are just tagging the registry container as "2", to differentiate it on the Docker host.
The -d option is used to run the container in detached mode. This is so that the container can run in the background.
Step 2: Now let's tag one of our existing images so that we can push it to our local repository. In our example, since we have the centos image available locally, we are going to tag it to our private repository and add a tag name of centos.
docker tag 67591570dd29 localhost:5000/centos
The following points need to be noted about the above command:
67591570dd29 refers to the image ID for the centos image.
localhost:5000 is the location of our private repository.
We are tagging the repository name as centos in our private repository.
Step 3: Now let's use the docker push command to push the repository to our private repository.
docker push localhost:5000/centos
Here, we are pushing the centos image to the private repository hosted at localhost:5000.
Step 4: Now let's delete the local images we have for centos using the docker rmi command. We can then download the required centos image from our private repository.
docker rmi centos:latest
docker rmi 67591570dd29
Step 5: Now that we don't have any centos images on our local machine, we can use the docker pull command to pull the centos image from our private repository hosted at localhost:5000.
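For example (assuming the registry container from Step 1 is still running):
docker pull localhost:5000/centos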
Docker-Container Linking
Container linking allows multiple containers to link with each other. It is a better option than exposing ports. Let's go step by step and learn how it works.
Step 1: Download the Jenkins image, if it is not already present, using the docker pull command.
docker pull jenkins
Step 2: Once the image is available, run the container, but this time, you can specify a name for the container by using the --name option. This will be our source container.
docker run --name jenkins -d jenkins
Step 3: Next, it is time to launch the destination container, but this time, we will link it with our source container. For our destination container, we will use the standard Ubuntu image.
docker run --name reca --link jenkins:alias-src -it ubuntu:latest /bin/bash
Step 4: Now, attach to the receiving container (use docker ps to find it), then run the env command. You will notice new variables for linking with the source container.
Docker Storage-
Storage Drivers
Docker has multiple storage drivers that allow one to work with the underlying storage devices. The following table shows the different storage drivers along with the technology used for the storage drivers.

Technology : Storage Driver
OverlayFS : overlay or overlay2
AUFS : aufs
Btrfs : btrfs
Device Mapper : devicemapper
VFS : vfs
ZFS : zfs

Let us now discuss some of the instances in which you would use the various storage drivers:
AUFS
This is a stable driver; it can be used for production-ready applications.
It has good memory usage and is good for ensuring a smooth Docker experience for containers.
There is high write activity associated with this driver which should be considered.
It's good for systems doing Platform-as-a-Service type work.
Devicemapper
This is a stable driver; it ensures a smooth Docker experience.
This driver is good for testing applications in the lab.
This driver is in line with the main Linux kernel functionality.
Btrfs
This driver is in line with the main Linux kernel functionality.
There is high write activity associated with this driver which should be considered.
This driver is good for instances where you maintain multiple build pools.
Overlay
This is a stable driver and it is in line with the main Linux kernel functionality.
It has good memory usage.
This driver is good for testing applications in the lab.
ZFS
This is a stable driver and it is good for testing applications in the lab.
It's good for systems doing Platform-as-a-Service type work.
To see the storage driver being used, issue the docker info command.
Create and manage volumes:
docker volume create my-vol
List volumes:
docker volume ls
Inspect a volume:
docker volume inspect my-vol
Remove a volume:
docker volume rm my-vol
Volumes:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes have several advantages over bind mounts:
· Volumes are easier to back up or migrate than bind mounts.
· You can manage volumes using Docker CLI commands or the Docker API.
· Volumes work on both Linux and Windows containers.
· Volumes can be more safely shared among multiple containers.
· Volume drivers allow you to store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
· A new volume's contents can be pre-populated by a container.
Start a container with a volume
If you start a container with a volume that does not yet exist, Docker creates the volume for you. The following example mounts the volume myvol2 into /app/ in the container.
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
Start a service with volumes
When you start a service and define a volume, each service container uses its own local volume. None of the containers can share this data if you use the local volume driver, but some volume drivers do support shared storage. Docker for AWS and Docker for Azure both support persistent storage using the Cloudstor plugin.
docker service create -d --replicas=4 --name devtest-service --mount source=myvol2,target=/app nginx:latest
Use a read-only volume
For some development applications, the container needs to write into the bind mount so that changes are propagated back to the Docker host. At other times, the container only needs read access to the data. Remember that multiple containers can mount the same volume, and it can be mounted read-write for some of them and read-only for others, at the same time.
docker run -d \
  --name=nginxtest \
  --mount source=nginx-vol,destination=/usr/share/nginx/html,readonly \
  nginx:latest
Use Bind Mounts:
Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine. By contrast, when you use a volume, a new directory is created within Docker's storage directory on the host machine, and Docker manages that directory's contents.
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist. Bind mounts are very performant, but they rely on the host machine's filesystem having a specific directory structure available. If you are developing new Docker applications, consider using named volumes instead. You can't use Docker CLI commands to directly manage bind mounts.
Start a container with a bind mount
Consider a case where you have a directory source and that when you build the source code, the artifacts are saved into another directory, source/target/. You want the artifacts to be available to the container at /app/, and you want the container to get access to a new build each time you build the source on your development host. Use the following command to bind-mount the target/ directory into your container at /app/. Run the command from within the source directory. The $(pwd) sub-command expands to the current working directory on Linux or macOS hosts.
docker run -d \
  -it \
  --name devtest \
  --mount type=bind,source="$(pwd)"/target,target=/app \
  nginx:latest
Mounting into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory's existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising, and this behavior differs from that of Docker volumes.
docker run -d \
  -it \
  --name broken-container \
  --mount type=bind,source=/tmp,target=/usr \
  nginx:latest
Use a read-only bind mount
For some development applications, the container needs to write into the bind mount, so changes are propagated back to the Docker host. At other times, the container only needs read access.
docker run -d \
  -it \
  --name devtest \
  --mount type=bind,source="$(pwd)"/target,target=/app,readonly \
  nginx:latest
Use tmpfs mounts
Volumes and bind mounts are mounted into the container's filesystem by default, and their contents are stored on the host machine.
There may be cases where you do not want to store a container's data on the host machine, but you also don't want to write the data into the container's writable layer, for performance or security reasons, or if the data relates to non-persistent application state. An example might be a temporary one-time password that the container's application creates and uses as needed.
To give the container access to the data without writing it anywhere permanently, you can use a tmpfs mount, which is only stored in the host machine's memory (or swap, if memory is low). When the container stops, the tmpfs mount is removed. If a container is committed, the tmpfs mount is not saved.
Use a tmpfs mount in a container
To use a tmpfs mount in a container, use the --tmpfs flag, or use the --mount flag with type=tmpfs and destination options. There is no source for tmpfs mounts. The following example creates a tmpfs mount at /app in an nginx container.
docker run -d \
  -it \
  --name tmptest \
  --mount type=tmpfs,destination=/app \
  nginx:latest
Docker Networking:
Docker takes care of the networking aspects so that the containers can communicate with other containers and also with the Docker host. If you do an ifconfig on the Docker host, you will see the Docker Ethernet adapter. This adapter is created when Docker is installed on the Docker host.
Listing all Docker networks-
docker network ls
Creating your own network-
docker network create --driver drivername name
drivername – This is the name of the network driver to use.
name – This is the name given to the network.
Ex- docker network create --driver bridge new_nw
Run a container with a different network
docker run -it --net="networkname" "imagename"
Ex- docker run -it --net=new_nw centos:latest
Network driver summary
User-defined bridge networks are best when you need multiple containers to communicate on the same Docker host.
Ex- docker network create -d bridge "name"
Host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
Overlay networks are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
Ex- docker network create -d overlay my-overlay
Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
Ex- docker network create -d macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.1 --ip-range 10.0.0.125/25 -o parent eth0 macvlan0
Third-party network plugins allow you to integrate Docker with specialized network stacks.
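As a quick check that containers on a user-defined bridge can reach each other by name (a sketch; mynet and web are arbitrary names, and busybox is used here because it ships with ping):
docker network create -d bridge mynet
docker run -d --name web --net mynet nginx
docker run -it --net mynet busybox ping web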
DOCKER-COMPOSE
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.
Compose works in all environments: production, staging, development, testing, as well as CI workflows. You can learn more about each case in Common Use Cases.
Using Compose is basically a three-step process:
1. Define your app's environment with a Dockerfile so it can be reproduced anywhere.
2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
3. Run docker-compose up and Compose starts and runs your entire app.
A docker-compose.yml looks like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
Build and run the Compose file:
docker-compose up
If you want to change something in docker-compose.yml:
docker-compose down
Make the changes that you want, then run docker-compose up again.
docker-compose up -d (for detached mode)
Docker stacks and distributed application bundles
A Dockerfile can be built into an image, and containers can be created from that image. Similarly, a docker-compose.yml can be built into a distributed application bundle, and stacks can be created from that bundle. In that sense, the bundle is a multi-services distributable image format.
Produce a bundle –
The easiest way to produce a bundle is to generate it using docker-compose from an existing docker-compose.yml. Of course, that's just one possible way to proceed, in the same way that docker build isn't the only way to produce a Docker image.
$ docker-compose bundle
WARNING: Unsupported key 'network_mode' in services.nsqd - ignoring
WARNING: Unsupported key 'links' in services.nsqd - ignoring
WARNING: Unsupported key 'volumes' in services.nsqd - ignoring
[...]
Wrote bundle to vossibility-stack.dab
Create a stack from a bundle
# docker deploy vossibility-stack
Loading bundle from vossibility-stack.dab
Creating service vossibility-stack_elasticsearch
Creating service vossibility-stack_kibana
Creating service vossibility-stack_logstash
Creating service vossibility-stack_lookupd
Creating service vossibility-stack_nsqd
Creating service vossibility-stack_vossibility-collector
# docker service ls
Bundle file format
Distributed application bundles are described in a JSON format. When bundles are persisted as files, the file extension is .dab.
A bundle has two top-level fields: version and services. The version used by Docker 1.12 tools is 0.1. services in the bundle are the services that comprise the app. They correspond to the new Service object introduced in the 1.12 Docker Engine API.
A service has the following fields:
Image (required, string) – The image that the service runs. Docker images should be referenced with a full content hash to fully specify the deployment artifact for the service. Example: postgres@sha256:e0a230a9f5b4e1b8b03bb3e8cf7322b0e42b7838c5c87f4545edb48f5eb8f077
Command ([]string) – Command to run in service containers.
Args ([]string) – Arguments passed to the service containers.
Env ([]string) – Environment variables.
Labels (map[string]string) – Labels used for setting metadata on services.
Ports ([]Port) – Service ports (composed of Port (int) and Protocol (string)). A service description can only specify the container port to be exposed. These ports can be mapped on runtime hosts at the operator's discretion.
WorkingDir (string) – Working directory inside the service containers.
User (string) – Username or UID (format: <name|uid>[:<group|gid>]).
Networks ([]string) – Networks that the service containers should be connected to. An entity deploying a bundle should create networks as needed.
Swarm mode overview
To use Docker in swarm mode, install Docker. See installation instructions for all operating systems and platforms.
Current versions of Docker include swarm mode for natively managing a cluster of Docker Engines called a swarm. Use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior.
If you are using a Docker version prior to 1.12.0, you can use standalone swarm, but we recommend updating.
Feature highlights
Cluster management integrated with Docker Engine: Use the Docker Engine CLI to create a swarm of Docker Engines where you can deploy application services. You don't need additional orchestration software to create or manage a swarm.
Decentralized design: Instead of handling differentiation between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the Docker Engine. This means you can build an entire swarm from a single disk image.
Declarative service model: Docker Engine uses a declarative approach to let you define the desired state of the various services in your application stack. For example, you might describe an application comprised of a web front end service with message queueing services and a database backend.
Scaling: For each service, you can declare the number of tasks you want to run. When you scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
Desired state reconciliation: The swarm manager node constantly monitors the cluster state and reconciles any differences between the actual state and your expressed desired state. For example, if you set up a service to run 10 replicas of a container, and a worker machine hosting two of those replicas crashes, the manager creates two new replicas to replace the replicas that crashed. The swarm manager assigns the new replicas to workers that are running and available.
Multi-host networking: You can specify an overlay network for your services. The swarm manager automatically assigns addresses to the containers on the overlay network when it initializes or updates the application.
Service discovery: Swarm manager nodes assign each service in the swarm a unique DNS name and load balance running containers. You can query every container running in the swarm through a DNS server embedded in the swarm.
Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
Secure by default: Each node in the swarm enforces TLS mutual authentication and encryption to secure communications between itself and all other nodes. You have the option to use self-signed root certificates or certificates from a custom root CA.
Rolling updates: At rollout time you can apply service updates to nodes incrementally. The swarm manager lets you control the delay between service deployment to different sets of nodes. If anything goes wrong, you can roll back a task to a previous version of the service.
Swarm Management
A swarm consists of one or more manager nodes and several worker nodes. The manager node is used to dispatch tasks to worker nodes. The manager also performs all of the orchestration and cluster management functions to maintain the state of the swarm. A single manager node is elected as the leader manager node and other manager nodes remain on standby so that they are ready to take on the role of leader at any point in time. Manager nodes are elected to the leader role through node consensus. Although it is possible to run an environment with a single manager node, ideally three or more nodes should run as manager nodes.
Worker nodes receive tasks from manager nodes and execute required actions for the swarm, such as starting or stopping a container. By default, manager nodes also behave as worker nodes, but this behavior is configurable. The leader manager node tracks the state of the cluster and, in the event that a worker node becomes unavailable, the manager ensures that any containers that were running on the unavailable worker node are started on an alternative worker node.
How to configure swarm?
On my machine, it looks like this:
docker@manager1:~$ docker swarm init --advertise-addr 192.168.1.8
Swarm initialized: current node (5oof62fetd4gry7o09jd9e0kf) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-5mgyf6ehuc5pfbmar00njd3oxv8nmjhteejaald3yzbef7osl1-ad7b1k8k3bl3aa3k3q13zivqd \
192.168.1.8:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
docker@manager1:~$
Great!
You will also notice that the output mentions the docker swarm join command to use in case you want another node to join as a worker. Keep in mind that you can have a node join as a worker or as a manager. At any point in time, there is only one LEADER, and the other manager nodes act as backup in case the current LEADER opts out.
At this point you can see your swarm status by firing the following command, as shown below:
docker@manager1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
5oof62fetd..* manager1 Ready Active Leader
This shows that there is a single node so far, i.e. manager1, and it has the value of Leader for the MANAGER column.
Stay in the SSH session itself for manager1.
Joining as Worker Node
To find out what docker swarm command to use to join as a node, you will need to use the join-token <role> command.
To find out the join command for a worker, fire the following command:
docker@manager1:~$ docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-5mgyf6ehuc5pfbmar00njd3oxv8nmjhteejaald3yzbef7osl1-ad7b1k8k3bl3aa3k3q13zivqd \
192.168.1.8:2377
docker@manager1:~$
Joining as Manager Node
To find out the join command for a manager, fire the following command:
docker@manager1:~$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-5mgyf6ehuc5pfbmar00njd3oxv8nmjhteejaald3yzbef7osl1-8xo0cmd6bryjrsh6w7op4enos \
192.168.1.8:2377
docker@manager1:~$
Notice in both the above cases that you are provided a token and that the join targets the manager node (you will be able to identify that the IP address is the same as the MANAGER_IP address).
Keep the SSH to manager1 open, and fire up other command terminals for working with the other worker docker machines.
Adding Worker Nodes to our Swarm
Now that we know how to check the command to join as a worker, we can use that to do an SSH into each of the worker Docker machines and then fire the respective join command in them.
In my case, I have 5 worker machines (worker1/2/3/4/5). For the first worker1 Docker machine, I do the following:
· SSH into the worker1 machine, i.e. docker-machine ssh worker1
· Then fire the respective command that I got for joining as a worker. In my case the output is shown below:
docker@worker1:~$ docker swarm join \
--token SWMTKN-1-5mgyf6ehuc5pfbmar00njd3oxv8nmjhteejaald3yzbef7osl1-ad7b1k8k3bl3aa3k3q13zivqd \
192.168.1.8:2377
This node joined a swarm as a worker.
docker@worker1:~$
I do the same thing by launching SSH sessions for worker2/3/4/5 and then pasting the same command, since I want all of them to be worker nodes.
After making all my worker nodes join the swarm, I go back to my manager1 SSH session and fire the following command to check on the status of my swarm, i.e. see the nodes participating in it:
docker@manager1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
1ndqsslh7fpquc7fi35leig54 worker4 Ready Active
1qh4aat24nts5izo3cgsboy77 worker5 Ready Active
25nwmw5eg7a5ms4ch93aw0k03 worker3 Ready Active
5oof62fetd4gry7o09jd9e0kf * manager1 Ready Active Leader
5pm9f2pzr8ndijqkkblkgqbsf worker2 Ready Active
9yq4lcmfg0382p39euk8lj9p4 worker1 Ready Active
docker@manager1:~$
As expected, you can see that I have 6 nodes, one as the manager (manager1) and the other 5 as workers.
We can also execute the standard docker info command here and zoom into the Swarm section to check out the details for our swarm.
Swarm: active
NodeID: 5oof62fetd4gry7o09jd9e0kf
Is Manager: true
ClusterID: 6z3sqr1aqank2uimyzijzapz3
Managers: 1
Nodes: 6
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 192.168.1.8
Notice a few of the properties:
· The swarm is marked as active. It has 6 nodes in total and 1 manager among them.
· Since I am running the docker info command on manager1 itself, it shows Is Manager as true.
· The Raft section refers to the Raft consensus algorithm that is used.
Create a Service
Now that we have our swarm up and running, it is time to schedule our containers on it. This is the whole beauty of the orchestration layer. We are going to focus on the app and not worry about where the application is going to run.
All we are going to do is tell the manager to run the containers for us, and it will take care of scheduling out the containers, sending the commands to the nodes and distributing them.
To start a service, you would need to have the following:
· The Docker image that you want to run. In our case, we will run the standard nginx image that is officially available from the Docker hub.
· The port to expose our service on; we will use port 80.
· The number of containers (or instances) to launch. This is specified via the replicas parameter.
· The name for our service. Decide on it and keep it handy.
What I am going to do then is to launch 5 replicas of the nginx container. To do that, I am again in the SSH session for my manager1 node, and I give the following docker service create command:
docker service create --replicas 5 -p 80:80 --name web nginx
ctolq1t4h2o859t69j9pptyye
What has happened is that the orchestration layer has now got to work.
You can find out the status of the service by giving the following command:
docker@manager1:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
ctolq1t4h2o8 web 0/5 nginx
This shows that the replicas are not yet ready. You will need to give that command a few times.
In the meanwhile, you can also see the status of the service and how it is getting orchestrated to the different nodes by using the following command:
docker@manager1:~$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
7i* web.1 nginx worker3 Running Preparing 2 minutes ago
17* web.2 nginx manager1 Running Running 22 seconds ago
ey* web.3 nginx worker2 Running Running 2 minutes ago
bd* web.4 nginx worker5 Running Running 45 seconds ago
dw* web.5 nginx worker4 Running Running 2 minutes ago
This shows that the nodes are getting set up. It could take a while.
But notice a few things. In the list of nodes above, you can see that the 5 containers are being scheduled by the orchestration layer on manager1, worker2, worker3, worker4 and worker5. There is no container scheduled for the worker1 node, and that is fine.
A few executions of docker service ls show the following responses:
docker@manager1:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
ctolq1t4h2o8 web 3/5 nginx
docker@manager1:~$
and then finally:
docker@manager1:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
ctolq1t4h2o8 web 5/5 nginx
docker@manager1:~$
If we look at the service processes at this point, we can see the following:
docker@manager1:~$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
7i* web.1 nginx worker3 Running Running 4 minutes ago
17* web.2 nginx manager1 Running Running 7 minutes ago
ey* web.3 nginx worker2 Running Running 9 minutes ago
bd* web.4 nginx worker5 Running Running 8 minutes ago
dw* web.5 nginx worker4 Running Running 9 minutes ago
If you do a docker ps on the manager1 node right now, you will find that the nginx daemon has been launched.
docker@manager1:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
933309b04630 nginx:latest "nginx -g 'daemon off" 2 minutes ago Up 2 minutes 80/tcp, 443/tcp web.2.17d502y6qjhd1wqjle13nmjvc
docker@manager1:~$
Accessing the Service
You can access the service by hitting any of the manager or worker nodes. It does not matter if the particular node does not have a container scheduled on it. That is the whole idea of the swarm.
Try out a curl to any of the Docker Machine IPs (manager1 or worker1/2/3/4/5) or hit the URL (http://<machine-ip>) in the browser. You should be able to get the standard NGINX home page.
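For example, using the manager IP from the swarm init output above:
curl http://192.168.1.8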
Scaling up and Scaling down
This is done via the docker service scale command. We currently have 5 containers running. Let us bump it up to 8, as shown below, by executing the command on the manager1 node.
$ docker service scale web=8
web scaled to 8
Now, we can check the status of the service and the process tasks via the same commands as shown below:
docker@manager1:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
ctolq1t4h2o8 web 5/8 nginx
In the ps web command below, you will find that it has decided to schedule the new containers on worker1 (2 of them) and manager1 (one of them).
docker@manager1:~$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
7i* web.1 nginx worker3 Running Running 14 minutes ago
17* web.2 nginx manager1 Running Running 17 minutes ago
ey* web.3 nginx worker2 Running Running 19 minutes ago
bd* web.4 nginx worker5 Running Running 17 minutes ago
dw* web.5 nginx worker4 Running Running 19 minutes ago
8t* web.6 nginx worker1 Running Starting about a minute ago
b8* web.7 nginx manager1 Running Ready less than a second ago
0k* web.8 nginx worker1 Running Starting about a minute ago
We wait for a while and then everything looks good, as shown below:
docker@manager1:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
ctolq1t4h2o8 web 8/8 nginx
docker@manager1:~$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
7i* web.1 nginx worker3 Running Running 16 minutes ago
17* web.2 nginx manager1 Running Running 19 minutes ago
ey* web.3 nginx worker2 Running Running 21 minutes ago
bd* web.4 nginx worker5 Running Running 20 minutes ago
dw* web.5 nginx worker4 Running Running 21 minutes ago
8t* web.6 nginx worker1 Running Running 4 minutes ago
b8* web.7 nginx manager1 Running Running 2 minutes ago
0k* web.8 nginx worker1 Running Running 3 minutes ago
docker@manager1:~$
Inspecting nodes
You can inspect the nodes anytime via the docker node inspect command.
For example, if you are already on the node (for example manager1) that you want to check, you can use the name self for the node.
$ docker node inspect self
Or if you want to check up on the other nodes, give the node name. For e.g.
$ docker node inspect worker1
Draining a node
If the node is ACTIVE, it is ready to accept tasks from the Master, i.e. the Manager. For e.g. we can see the list of nodes and their status by firing the following command on the manager1 node.
docker@manager1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
1ndqsslh7fpquc7fi35leig54 worker4 Ready Active
1qh4aat24nts5izo3cgsboy77 worker5 Ready Active
25nwmw5eg7a5ms4ch93aw0k03 worker3 Ready Active
5oof62fetd4gry7o09jd9e0kf * manager1 Ready Active Leader
5pm9f2pzr8ndijqkkblkgqbsf worker2 Ready Active
9yq4lcmfg0382p39euk8lj9p4 worker1 Ready Active
docker@manager1:~$
You can see that their AVAILABILITY is set to Active.
As per the documentation, when the node is active, it can receive new tasks:
· during a service update to scale up
· during a rolling update
· when you set another node to Drain availability
· when a task fails on another active node
But sometimes, we have to bring the node down for some maintenance reason. This is done by setting the availability to Drain mode. Let us try that with one of our nodes.
But first, let us check the status of our processes for the web service and see on which nodes they are running:
docker@manager1:~$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
7i* web.1 nginx worker3 Running Running 54 minutes ago
17* web.2 nginx manager1 Running Running 57 minutes ago
ey* web.3 nginx worker2 Running Running 59 minutes ago
bd* web.4 nginx worker5 Running Running 57 minutes ago
dw* web.5 nginx worker4 Running Running 59 minutes ago
8t* web.6 nginx worker1 Running Running 41 minutes ago
b8* web.7 nginx manager1 Running Running 39 minutes ago
0k* web.8 nginx worker1 Running Running 41 minutes ago
You find that we have 8 replicas of our service:
· 2 on manager1
· 2 on worker1
· 1 each on worker2, worker3, worker4 and worker5
Now, let us use another command to check what is going on in node worker1.
docker@manager1:~$ docker node ps worker1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
8t* web.6 nginx worker1 Running Running 44 minutes ago
0k* web.8 nginx worker1 Running Running 44 minutes ago
docker@manager1:~$
We can also use the docker node inspect command to check the availability of the node and, as expected, you will find a section in the output as follows:
$ docker node inspect worker1
…..
"Spec": {
    "Role": "worker",
    "Availability": "active"
},
…
or
docker@manager1:~$ docker node inspect --pretty worker1
ID: 9yq4lcmfg0382p39euk8lj9p4
Hostname: worker1
Joined at: 2016-09-16 08:32:24.5448505 +0000 utc
Status:
 State: Ready
 Availability: Active
Platform:
 Operating System: linux
 Architecture: x86_64
Resources:
 CPUs: 1
 Memory: 987.2 MiB
Plugins:
 Network: bridge, host, null, overlay
 Volume: local
Engine Version: 1.12.1
Engine Labels:
 - provider = hyperv
docker@manager1:~$
We can see that it is "Active" for its Availability attribute.
Now, let us set the Availability to DRAIN. When we give that command, the Manager will stop the tasks running on that node and launch the replicas on other nodes with ACTIVE availability.
So what we are expecting is that the Manager will bring down the 2 containers running on worker1 and schedule them on the other nodes (manager1 or worker2 or worker3 or worker4 or worker5). This is done by updating the node, setting its availability to "drain".
docker@manager1:~$ docker node update --availability drain worker1
worker1
Now, if we do a process status for the service, we see an interesting output (I have trimmed the output for proper formatting):
docker@manager1:~$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
7i* web.1 nginx worker3 Running Running about an hour ago
17* web.2 nginx manager1 Running Running about an hour ago
ey* web.3 nginx worker2 Running Running about an hour ago
bd* web.4 nginx worker5 Running Running about an hour ago
dw* web.5 nginx worker4 Running Running about a hour ago
2u* web.6 nginx worker4 Running Preparing about a min ago
8t* \_ web.6 nginx worker1 Shutdown Shutdown about a min ago
b8* web.7 nginx manager1 Running Running 49 minutes ago
7a* web.8 nginx worker3 Running Preparing about a min ago
0k* \_ web.8 nginx worker1 Shutdown Shutdown about a min ago
docker@manager1:~$
You can see that the containers on worker1 (which we have asked to be drained) are being rescheduled on other workers. In our scenario above, they got scheduled to worker4 and worker3 respectively. This is required because we asked for 8 replicas to be running in an earlier scaling exercise.
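If you want to confirm that the desired count is still 8, the service spec reports it (a quick check; output omitted here):
# Look for "Replicas: 8" in the Mode section of the output
docker@manager1:~$ docker service inspect --pretty web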
You can see that the two rescheduled containers are still in the “Preparing” state; if you run the command again after a while, they are all running, as shown below:
docker@manager1:~$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
7i* web.1 nginx worker3 Running Running about an hour ago
17* web.2 nginx manager1 Running Running about an hour ago
ey* web.3 nginx worker2 Running Running about an hour ago
bd* web.4 nginx worker5 Running Running about an hour ago
dw* web.5 nginx worker4 Running Running about an hour ago
2u* web.6 nginx worker4 Running Running 8 minutes ago
8t* \_ web.6 nginx worker1 Shutdown Shutdown 8 minutes ago
b8* web.7 nginx manager1 Running Running 56 minutes ago
7a* web.8 nginx worker3 Running Running 8 minutes ago
0k* \_ web.8 nginx worker1 Shutdown Shutdown 8 minutes ago
This makes for a cool demo, doesn’t it?
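Once you are done with the demo, you can bring worker1 back into the scheduling pool by setting its availability back to active. Note that Swarm does not automatically rebalance already-running tasks onto it; only new or rescheduled tasks will land there.
docker@manager1:~$ docker node update --availability active worker1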
Remove the Service
You can simply use the service rm command, as shown below:
docker@manager1:~$ docker service rm web
web
docker@manager1:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
docker@manager1:~$ docker service inspect web
[]
Error: no such service: web
Applying Rolling Updates
This is straightforward. If you have an updated Docker image to roll out to the nodes, all you need to do is fire a service update command.
For example:
$ docker service update --image <imagename>:<version> web
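By default, Swarm updates one task at a time. docker service update also lets you control the rollout with --update-parallelism (how many tasks to update at once) and --update-delay (how long to pause between batches). A minimal sketch; the nginx:1.13 tag is just an illustrative version, not from the walkthrough above:
# Update 2 tasks at a time, pausing 10 seconds between batches
$ docker service update --update-parallelism 2 --update-delay 10s --image nginx:1.13 web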
Docker Machine is a
tool that lets you install Docker Engine on virtual hosts, and manage the hosts
with docker-machine commands. You can use Machine to create Docker hosts on
your local Mac or Windows box, on your company network, in your data center, or
on cloud providers like Azure, AWS, or Digital Ocean.
Using
docker-machine commands, you can start, inspect, stop, and restart a managed
host, upgrade the Docker client and daemon, and configure a Docker client to
talk to your host.
Point the Machine
CLI at a running, managed host, and you can run docker commands directly on
that host. For example, run docker-machine env default to point to a host
called default, follow on-screen instructions to complete env setup, and run
docker ps, docker run hello-world, and so forth.
Machine was the
only way to run Docker on Mac or Windows previous to Docker v1.12. Starting
with the beta program and Docker v1.12, Docker for Mac and Docker for Windows
are available as native apps and the better choice for this use case on newer
desktops and laptops. We encourage you to try out these new apps. The
installers for Docker for Mac and Docker for Windows include Docker Machine,
along with Docker Compose.
If you aren’t sure
where to begin, see Get Started with Docker, which guides you through a brief
end-to-end tutorial on Docker.
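To make that workflow concrete, here is a minimal sketch of creating and using a local machine (the VirtualBox driver and the machine name “default” are just the common local setup; any supported driver works):
# Provision a VirtualBox VM named "default" with Docker Engine installed
$ docker-machine create --driver virtualbox default
# Point the local docker client at the new host
$ eval $(docker-machine env default)
# This now runs on the "default" machine, not on your local daemon
$ docker run hello-world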
What’s the difference between Docker Engine and Docker Machine?
When people say
“Docker” they typically mean Docker Engine, the client-server application made
up of the Docker daemon, a REST API that specifies interfaces for interacting
with the daemon, and a command line interface (CLI) client that talks to the
daemon (through the REST API wrapper). Docker Engine accepts docker commands
from the CLI, such as docker run <image>, docker ps to list running
containers, docker image ls to list images, and so on.
Docker Machine is a
tool for provisioning and managing your Dockerized hosts (hosts with Docker
Engine on them). Typically, you install Docker Machine on your local system.
Docker Machine has its own command line client docker-machine and the Docker
Engine client, docker. You can use Machine to install Docker Engine on one or
more virtual systems. These virtual systems can be local (as when you use
Machine to install and run Docker Engine in VirtualBox on Mac or Windows) or
remote (as when you use Machine to provision Dockerized hosts on cloud
providers). The Dockerized hosts themselves can be thought of, and are
sometimes referred to as, managed “machines”.
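In practice, the split is easy to see from the two clients (the machine name “default” follows the example above):
$ docker-machine ls            # Machine: list managed hosts and their state
$ docker-machine ssh default   # Machine: open a shell on a managed host
$ docker ps                    # Engine: list containers on whichever host the client points at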
