Docker commands

Most useful Docker commands

docker cp config.conf dataContainer:/config/

docker search <app image>

docker run -d redis:latest

docker run -d redis:3.2

docker run busybox ls

docker run busybox echo hi there

docker stop <containerID>

docker kill <containerID>

docker ps

docker ps --all

docker ps --filter "label=user=scrapbook"

docker ps --format '{{.Names}} container is using {{.Image}} image'

docker ps --format 'table {{.Names}}\t{{.Image}}'

docker ps -q | xargs docker inspect --format '{{ .Id }} - {{ .Name }} - {{ .NetworkSettings.IPAddress }}'

docker inspect <containerID>

docker inspect -f "{{json .Config.Labels }}" rd

docker logs <containerID>

docker create busybox

docker create -v /config --name dataContainer busybox

Connect via Networks

docker network create backend-network

docker network connect frontend-network redis

docker run -d --name=redis --net=backend-network redis

docker run --net=backend-network alpine env

docker run -d -p 3000:3000 --net=frontend-network katacoda/redis-node-docker-example

Network aliases

docker network connect --alias db frontend-network2 redis

docker run --net=frontend-network2 alpine ping -c1 db

docker network ls

docker network inspect frontend-network

docker network disconnect frontend-network redis

Starting a container

docker start -a <containerID>

docker system df

docker system info

docker system prune

docker run -d --name redisHostPort -p 6379:6379 redis:latest

docker exec -it ubuntu bash

docker run -d --name redisMapped -v /opt/docker/data/redis:/data redis

docker run --volumes-from dataContainer ubuntu ls /config/config.conf

docker run -d -p 3000:3000 --link redis-server:redis katacoda/redis-node-docker-example

docker run -d --name redis-syslog --log-driver=syslog redis

Run with Restart Policies

docker run -d --name restart-default scrapbook/docker-restart-example

docker run -d --name restart-3 --restart=on-failure:3 scrapbook/docker-restart-example

docker run -d --name restart-always --restart=always scrapbook/docker-restart-example

Connect via links

docker run --link redis-server:redis alpine env

docker run --link redis-server:redis alpine ping -c 1 redis

docker run --net=backend-network alpine cat /etc/hosts

docker exec -it <containerID> bash

Run with Labels

docker run -l user=12345 -d redis

docker run --label-file=labels -d redis

docker ps --filter "label=user=scrapbook"

docker inspect -f "{{json .Config.Labels }}" rd

docker -d -H unix:///var/run/docker.sock \
  --label com.katacoda.environment="production" \
  --label com.katacoda.storage="ssd"
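The file passed to --label-file contains one label per line in key=value form; a minimal sketch of such a labels file (values assumed for illustration):

user=12345
com.katacoda.environment=staging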

Run with nginx-proxy (round-robin load balancing)

docker run -d -p 80:80 -e DEFAULT_HOST=proxy.example -v /var/run/docker.sock:/tmp/docker.sock:ro --name nginx jwilder/nginx-proxy

Run the command below multiple times to add backend containers for round-robin load balancing

docker run -d -p 80 -e VIRTUAL_HOST=proxy.example katacoda/docker-http-server

curl http://docker
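nginx-proxy routes each request by its HTTP Host header, matching it against the containers' VIRTUAL_HOST (DEFAULT_HOST catches requests with no match), so a specific virtual host can also be targeted explicitly; for example:

curl --header "Host: proxy.example" http://docker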

Building Dockerfile images

FROM nginx:alpine
COPY . /usr/share/nginx/html

docker build -t webserver-image:v1 .

docker images

docker run -d -p 80:80 webserver-image:v1

DataContainer

docker export dataContainer > dataContainer.tar

docker import dataContainer.tar
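docker import can also tag the imported filesystem as an image in one step; for example (image name assumed for illustration):

docker import dataContainer.tar datacontainer:latest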

.dockerignore: list of files and directories to exclude from the build context when building the image
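A minimal .dockerignore sketch (entries assumed for illustration):

.git
node_modules
*.log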

Orchestration using Docker Compose

docker-compose up -d

docker-compose ps

docker-compose logs

docker-compose scale web=3

docker-compose stop

docker-compose rm
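The commands above assume a docker-compose.yml in the current directory; a minimal sketch matching the scale example's web service (image choices assumed for illustration):

version: '3'
services:
  web:
    image: katacoda/docker-http-server
    ports:
      - "80"
  redis:
    image: redis:alpine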

Docker Stats

docker stats nginx

docker ps -q | xargs docker stats

Docker build

docker build -f Dockerfile.multi -t golang-app .

docker images
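Dockerfile.multi refers to a multi-stage build; a minimal sketch of what such a file could look like for a Go app serving on port 3000 (contents assumed, not from the source):

# build stage: compile the binary
FROM golang:alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /app .

# runtime stage: ship only the compiled binary
FROM alpine
COPY --from=builder /app /app
EXPOSE 3000
CMD ["/app"]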

Testing app

curl docker:3000

Docker Swarm

host1:
docker swarm init

host2:
token=$(docker -H 172.17.0.31:2345 swarm join-token -q worker) && echo $token
SWMTKN-1-5f73718ktyw958x6izae5pa2rkj10xx9kl2ph5itgkufj5ezio-52k5o571whzu6tcth9qbq62iz

host2:
docker swarm join 172.17.0.31:2377 --token $token

docker node ls

docker network create -d overlay skynet

VXLAN

docker network create --attachable -d overlay eg1

docker run --name=dig --network eg1 benhall/dig dig http

docker network ls

docker service create --name http --network skynet --replicas 2 -p 80:80 katacoda/docker-http-server

docker service ls

docker service ps http

docker service inspect --pretty http

docker service inspect http --format="{{.Endpoint.VirtualIPs}}"

docker inspect --format="{{.NetworkSettings.Networks.eg1.IPAddress}}" <containerID>

docker node ps self

docker node ps $(docker node ls -q | head -n1)

docker service scale http=5

Load Balancing and Service Discovery

host1:
docker swarm init

host2:
docker swarm join 172.17.0.16:2377 --token $(docker -H 172.17.0.16:2345 swarm join-token -q worker)

host1:
docker service create --name lbapp1 --replicas 2 -p 81:80 katacoda/docker-http-server

host1:
curl docker:81

host2:
curl docker:81

VXLAN Virtual IP

host1:
docker network create --attachable -d overlay eg1

host1:
docker service create --name http --network eg1 --replicas 2 katacoda/docker-http-server

host1:
docker run --name=dig --network eg1 benhall/dig dig http

host1: (ping the virtual IP address)
docker run --name=ping --network eg1 alpine ping -c5 http

host1:
docker service inspect http --format="{{.Endpoint.VirtualIPs}}"

host1:
docker inspect --format="{{.NetworkSettings.Networks.eg1.IPAddress}}" $(docker ps | grep docker-http-server | head -n1 | awk '{print $1}')

Multi-host LB and Service Discovery

host1:
docker network create -d overlay app1-network

host1:
docker service create --name redis --network app1-network redis:alpine

host1:
docker service create --name app1-web --network app1-network --replicas 4 -p 80:3000 katacoda/redis-node-docker-example

host1:
curl docker

Apply Rolling Updates Across a Swarm Cluster (no downtime)

Service creation

docker swarm init && docker service create --name http --replicas 2 -p 80:80 katacoda/docker-http-server:v1

Update an environment key/value pair (applied to the replicas on both nodes)

docker service update --env-add KEY=VALUE http

docker service update --limit-cpu 2 --limit-memory 512mb http

docker service inspect --pretty http

Scaling up replicas

docker service update --replicas=6 http

Updating image

docker service update --image katacoda/docker-http-server:v2 http

Rolling updates with delays

docker service update --update-delay=10s --update-parallelism=1 --image katacoda/docker-http-server:v3 http

Healthcheck for Containers

Dockerfile

FROM katacoda/docker-http-server:health
HEALTHCHECK --timeout=1s --interval=1s --retries=3 \
  CMD curl -s --fail http://localhost:80/ || exit 1

Build and run a healthy container

docker build -t http .

docker run -d -p 80:80 --name srv http

docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e845cbcc4b0b http "/app" 3 minutes ago Up 3 minutes (healthy) 0.0.0.0:80->80/tcp srv

Force an unhealthy state

curl http://docker/unhealthy

docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e845cbcc4b0b http "/app" 4 minutes ago Up 4 minutes (unhealthy) 0.0.0.0:80->80/tcp srv

Inspect container health

docker inspect --format "{{json .State.Health.Status }}" srv
"unhealthy"

docker inspect --format "{{json .State.Health }}" srv

{"Status":"unhealthy","FailingStreak":129,"Log":[{"Start":"2018-09-18T17:14:35.171859133Z","End":"2018-09-18T17:14:35.221743801Z","ExitCode":1,"Output":""},{"Start":"2018-09-18T17:14:36.225694953Z","End":"2018-09-18T17:14:36.275208213Z","ExitCode":1,"Output":""},{"Start":"2018-09-18T17:14:37.279296018Z","End":"2018-09-18T17:14:37.328703724Z","ExitCode":1,"Output":""},{"Start":"2018-09-18T17:14:38.332690443Z","End":"2018-09-18T17:14:38.382850669Z","ExitCode":1,"Output":""},{"Start":"2018-09-18T17:14:39.387127992Z","End":"2018-09-18T17:14:39.437102605Z","ExitCode":1,"Output":""}]}

Revert to healthy status

curl http://docker/healthy

docker ps

Healthcheck with Swarm and automatic restart

docker swarm init

docker service create --name http --replicas 2 -p 80:80 http

curl http://docker/unhealthy

docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
17054256276c http:latest "/app" 10 seconds ago Up 5 seconds (healthy) 80/tcp http.1.nd77cbzx1dz3a4p1jzlybvh8x
19d1c4332f68 http:latest "/app" 4 minutes ago Up 4 minutes (healthy) 80/tcp http.2.iz3x7sqq83q53iuz3tsk0p8pk
