Docker | Docs | Hub | Wikipedia

Docker (Linux)
Docker for Windows (DFW)
Docker for Mac (DFM)

Install (Docker.sh)

Background

Docker's predecessor was dotCloud, a Linux container technology startup.

Docker accesses the Linux kernel's virtualization features, either directly using the runC/libcontainer library, or indirectly using libvirt, LXC or systemd-nspawn.

Architecture

Software

Editions

Objects

Networking | Configure | Reference Architecture | Tutorials

Batteries included, but removable.
docker network ls
ip addr show

Container Network Drivers/Options

  1. Bridge Networking a.k.a. Single-host Networks
    docker0; the original/default driver (bridge); a Layer 2 network; containers are isolated, even if on the same host; routes through a NAT firewall on the host IP; external comms only via port mapping (host port to container port). Containers connect to the Docker bridge (docker0) network by default.
    docker run --name web -p 1234:80 nginx
    docker port web
  2. Overlay Networking a.k.a. Multi-host Networks
    Layer 2 network spanning multiple hosts, e.g., connects all containers across all nodes of the swarm.
    docker network create ...
    • Control Plane encrypted by default.
    • Data Plane encrypted per cmdline option
      docker network create --opt encrypted ...
  3. MACVLAN
    Each container (MAC) given its own IP Address on an existing VLAN. Requires promiscuous mode on host NIC; typically not available @ cloud providers.
  4. IPVLAN
    Experimental; does not require promiscuous mode.
  5. Containers on the same network communicate with each other sans port mapping (-p). External ports are closed by default; put frontend/backend on the same network for inter-container comms. Best practice is to create a new virtual network for each app. E.g.,
    • Network web_app_1 for mysql and php/apache containers.
    • Network api_1 for mongo and nodejs containers.
  6. Containers can attach to more than one virtual network (or none).
  7. Network is selectable and configurable:
    • "--network none" adds the container to a container-specific network stack.
    • "--network host" adds the container to the host's network stack; to use the host IP instead of a virtual network's.
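The one-virtual-network-per-app best practice above can be sketched as follows; the network and container names (web_app_1, db, web) and the images are hypothetical examples, not from the notes:

```shell
# Hedged sketch: one user-defined bridge network per app.
# Names (web_app_1, db, web) are hypothetical.
NET='web_app_1'
docker network create "$NET"
docker run -d --name db  --network "$NET" mysql:8
docker run -d --name web --network "$NET" -p 8080:80 php:apache
# On a user-defined network, Docker's embedded DNS resolves container
# names, so "web" reaches "db" by name; no -p needed between them.
echo "containers on $NET reach each other by name"
```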

List selected keys of "docker inspect ..." across all networks, refactored into another valid JSON object:

docker network ls -q |xargs docker network inspect \
    |jq -Mr '.[] | select(.Name != "none") | {Name: .Name, Driver: .Driver, Address: .IPAM.Config}' \
    |jq --slurp .

Network Services

@ Swarm Mode | Control Plane

Docker supports IPSec encryption for overlay networks between Linux hosts out-of-the-box. The Swarm & UCP managed IPSec tunnels encrypt network traffic as it leaves the source container and decrypts it as it enters the destination container. This ensures that your application traffic is highly secure when it's in transit regardless of the underlying networks.

@ Swarm Mode | Data Plane

Extend Docker's IPSec encryption to the data plane. (The control plane is automatically encrypted on overlay networks.) In a hybrid, multi-tenant, or multi-cloud environment, it is crucial to ensure data is secure as it traverses networks you might not have control over.

For services on such networks, when two tasks are created on two different hosts, an IPsec tunnel is created between them; traffic is encrypted as it leaves the source host and decrypted as it enters the destination host. The Swarm leader periodically regenerates a symmetric key and distributes it securely to all cluster nodes. This key is used by IPsec to encrypt and decrypt data plane traffic. The encryption is implemented via IPSec in host-to-host transport mode using AES-GCM.
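A minimal sketch of enabling data-plane encryption, assuming a swarm is already initialized; the network name (app_net), service name (api), and image are hypothetical:

```shell
# Hedged sketch (assumes "docker swarm init" has already been run).
NET='app_net'
# Data plane encrypted per the --opt; control plane is encrypted regardless.
docker network create --driver overlay --opt encrypted "$NET"
# Tasks of this service on different hosts talk through IPsec tunnels:
docker service create --name api --network "$NET" --replicas 2 nginx:alpine
```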

Tools

Images :: Docker Hub

The Explore tab lists all Official images (docker-library)

  1. [ACCTNAME/]REPONAME
  2. REPONAME
    • The "official" images are further distinguished by their REPONAME sans "ACCTNAME/" prefix. These are high-quality images: well documented, versioned (per :TAG), and widely adopted. E.g., …

      # The official Nginx image; "1.11.9" is the Tag (version).
      docker pull nginx:1.11.9  
      

The ubiquitous "latest" Tag refers to the most recently published (stable) version of a repo; not necessarily the latest commit. E.g., …

docker pull nginx:latest
# ... equivalent ...
docker pull nginx         

Tags / Tagging

ACCTNAME/REPONAME:TAG

The entirety, "ACCTNAME/REPONAME:TAG", is often referred to as the "tag".

One image may have many tags. To change the image tag, and optionally rename an image, …

docker image tag SRC_IMG[:TAG-old] TGT_IMG[:TAG-new]
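E.g., a common use of retagging is to point a local image at a private registry before pushing; the SRC and REG values below are hypothetical:

```shell
# Hedged sketch: add a second tag to a local image, prefixed with a
# (hypothetical) private-registry host, then push to that registry.
SRC='nginx:1.19.3-alpine'
REG='127.0.0.1:5000'
TGT="${REG}/${SRC}"
docker image tag "$SRC" "$TGT"   # same IMAGE ID, new name:tag
# docker push "$TGT"
echo "$TGT"
```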

The TAG of <none> (a dangling image; its tag was reassigned to a newer image)

☩ di  # (shell alias for "docker image ls")
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
gd9h/prj3.api-amd64   dev                 ce6b736c74c1        2 hours ago         31.2MB
gd9h/prj3.pwa-amd64   dev                 414f405cee2c        19 hours ago        21.7MB
gd9h/prj3.rds-amd64   dev                 b1350eefb9b9        2 days ago          31.3MB
gd9h/prj3.exv-amd64   dev                 96594cfee5fa        2 days ago          17.5MB
postgres              <none>              baf3b665e5d3        4 days ago          158MB
postgres              12.6-alpine         c88a384583bb        3 weeks ago         158MB
golang                1.15.8              7185d074e387        4 weeks ago         839MB
nginx                 1.19.3-alpine       4efb29ff172a        5 months ago        21.8MB

Image Layers / Cache

Images are built of filesystem changes and metadata. The image build process implements the Union FS/Mount concept (per OverlayFS); files and directories of separate file systems are transparently overlaid, forming a single coherent file system.

Each image layer is hashed and cached, and so can be incorporated into multiple images; successive layers are nothing but changes (diffs) from the prior layer. This strategy accounts for the radically lighter weight of images relative to virtual machines (VMs). Changes to a container are recorded per the Copy-on-Write (CoW) process.
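The layer stack of any cached image can be listed directly; the image name below assumes one of the images from the cache listing above:

```shell
# Hedged sketch: list an image's layers, newest first, with per-layer sizes.
IMG='nginx:1.19.3-alpine'
docker image history "$IMG"
# Each row is one build step (diff); 0B rows made no filesystem change
# (metadata-only instructions such as ENV, EXPOSE, CMD).
```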

Local image cache @ Docker Engine host (on Linux, under /var/lib/docker/, per storage driver, e.g., overlay2):

Docker references each layer, and each image containing them, by multiple unique identifiers (digests and other IDs), per Docker tool and context. Additionally, the image manifest file (JSON) itself is hashed, and that too is an image reference.

This is the source of much confusion, since the image digests reported on pull or run don't match those reported elsewhere; while "IMAGE ID" at "docker image ls" is something else entirely. And there is no easy way to match a cached layer (folders & files) to its Registry (layer digest).

Example: docker image inspect 'alpine'
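One way to see the mismatch of identifiers for yourself, assuming 'alpine' is in the local cache; --format takes a Go template:

```shell
# Hedged sketch: several identifiers for one cached image.
IMG='alpine'
docker image ls --no-trunc --format '{{.ID}}' "$IMG"   # "IMAGE ID" (config hash)
docker image inspect --format '{{.Id}}' "$IMG"         # same config digest, full form
docker image inspect --format '{{json .RepoDigests}}' "$IMG"  # registry manifest digest(s)
```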

Publishing

docker login
Stores auth key @ ~/.docker/config.json, until
docker logout

docker push ACCTNAME/REPONAME:TAG

Image Registry (v2)

The image registry is integral to Docker's tools and its entire container ecosystem. Docker's Distribution toolset handles this; "… pack, ship, store, and deliver content."

Docker Registry

Build Process

image => container => image => container => ...

Images are immutable; containers are modified while running; new images are built from prior images plus the changes made while running. This is iterated as necessary …
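The image => container => image cycle can be sketched with "docker container commit"; the container and image names (work, alpine-greeting:v1) are hypothetical:

```shell
# Hedged sketch of the image => container => image loop via commit.
NEW_IMG='alpine-greeting:v1'
docker run --name work alpine sh -c 'echo hello > /greeting'  # image -> container
docker container commit work "$NEW_IMG"                       # container -> new image
docker run --rm "$NEW_IMG" cat /greeting                      # new image -> new container
docker rm work
```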

Dockerfile | Best Practices

A Dockerfile is the recipe for an image build; the instruction set; it has its own language/syntax/format. Each "stanza" (FROM, ENV, RUN, …) creates an image layer; each layer is hashed and cached, so future builds (per modification) are fast, especially if layered judiciously. Order is important: place frequently modified layers below (after) those infrequently modified; any content change (e.g., per COPY) breaks the cache at that layer and at all subsequent layers.

# Base image; every Dockerfile must have one.
# (debian matches the apt-get calls and "~stretch" version below.)
FROM debian:stretch-slim
ENV NGINX_VERSION 1.13.6-1~stretch

# Chaining puts all commands into one cacheable layer;
# remove unnecessary dependencies & the pkg-mgr cache.
RUN apt-get update \
    && apt-get -y install --no-install-recommends ... \
    && rm -rf /var/lib/apt/lists/*

# Docker handles logging; need only map output ...
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
	&& ln -sf /dev/stderr /var/log/nginx/error.log

# open ports 80 & 443 (to other containers @ bridge network) 
EXPOSE 80 443  
# ... map to host port(s) per run option ... -p|-P 

# Change working directories; use instead of RUN cd ...
WORKDIR /usr/share/nginx/html
# Copy from host to container 
COPY . .
# Copy dir to dir (from host to container)
# Any changes to the files being copied will break the cache, 
# so copy ONLY WHAT IS NEEDED.
COPY index.html index.html

# Volume; outlives container; must manually delete; UNNAMED ONLY !
VOLUME /the/volume/path/at/container

# Command/script always run @ cntnr launch
# (may be inherited from the base image)
ENTRYPOINT ["entrypoint.sh"]        
# then this; overridden on any arg @ `docker run ... IMAGE`
CMD ["command","param1","param2"]  
# or default param(s) for ENTRYPOINT command or script
CMD ["param1","param2"]   
# ... JSON Array syntax    
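To build and run an image from a Dockerfile like the one above; the tag name (web:demo) and ports are hypothetical:

```shell
# Hedged sketch: build from the Dockerfile in the current dir, then run.
TAG='web:demo'
docker build -t "$TAG" .
docker run -d --name web -p 8080:80 "$TAG"
docker logs web   # stdout/stderr, per the Dockerfile's log symlinks
```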

Storage — Data in containers

Such data is destroyed upon container deletion (rm); it survives stop/start only. Files created inside a container are stored on a thin writable container layer on top of its read-only image layers.
  • Difficult to access from outside the container.
  • Requires a (kernel process) driver to manage the Union filesystem.

Storage Driver a.k.a. Graph Driver (older) a.k.a. Snapshotter (newer)

Storage — Data Volumes

Separation of Concerns
Immutable design patterns treat containers as ephemeral, if not entirely stateless, and so persistent a.k.a. unique data best resides outside the container.

Data Volumes a.k.a. Volumes
Docker offers 3 options for persistent container storage, which is to say storage outside the container. All mount some kind of host (or remote) storage as a path in the container. Each has its use cases.

Named volumes

docker container run ... -v NAME:/cntnr_path
Volumes survive container deletion, yet contain no metadata regarding whence the volume came; docker volume ls ... merely lists per ID; no other info, even @ inspect. Hence Named Volumes.
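A named-volume sketch; the volume name (pgdata) and container name are hypothetical, and the postgres image is taken from the cache listing earlier:

```shell
# Hedged sketch: a named volume outlives its container.
VOL='pgdata'
docker volume create "$VOL"
docker run -d --name db -v "$VOL":/var/lib/postgresql/data postgres:12.6-alpine
docker volume inspect "$VOL"   # shows Name, Driver, host Mountpoint
docker rm -f db                # the volume (and its data) remain
```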

Swarm Mode (SwarmKit)

Container orchestration; Docker's clustering solution; a secure Control Plane; all handled internally. Swarm Managers use Raft (algo+protocol+database).

Services

In Swarm Mode, application components are replicated and distributed across the nodes, which communicate through the overlay network. Each instance of a replicated component is a Task. The sum of all identical Tasks is a Service. So, applications are deployed, per component, as Services.

docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]

docker service update [OPTIONS] SERVICE
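A concrete sketch of the create/update syntax above, assuming swarm mode; the service name (web) and images are examples:

```shell
# Hedged sketch (assumes an initialized swarm; names are examples).
SVC='web'
docker service create --name "$SVC" --replicas 3 -p 8080:80 nginx:1.19.3-alpine
docker service ps "$SVC"                            # one task per replica
docker service update --image nginx:alpine "$SVC"   # rolling update
docker service scale "$SVC"=5
```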

Stacks

Production-grade Compose.

docker stack deploy -c "app1.yml" "app1"

docker stack ls

docker stack ps "app1"

docker stack services "app1"
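A minimal stack file for the commands above, written as a here-doc; the filename and stack name (app1) follow the example, while the service definition itself is an assumed sketch:

```shell
# Hedged sketch: a minimal Compose v3 file for "docker stack deploy".
cat > app1.yml <<'EOF'
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2
EOF
docker stack deploy -c app1.yml app1
```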

Configs

echo "This is a config" |docker config create foo-bar -

Stored as file @ container: /

cat /foo-bar

Secrets

echo "This is a secret" |docker secret create foo-bar -

Stored as file @ container: /run/secrets/

cat /run/secrets/foo-bar

CI/CD :: Dev ⇔ Test ⇔ Prod

Docker Hub

CVE :: Security Vulnerabilities @ CVEdetails.com

Docker Store ($)

  1. Docker SW.
  2. Quality 3rd party images.

Docker Cloud ($)

SaaS Registries (3rd Party)

Docker Registry 2.0 (GitHub)

The code, an HTTP server, that runs Docker Hub; "The Docker toolset to pack, ship, store, and deliver content." A web API and storage system for storing and distributing Docker images.

The de facto standard for running a local (private) container registry. Not as full-featured as Docker Hub; no web GUI; basic auth only. Storage drivers support local, S3, Azure, Alibaba, GCP, and OpenStack Swift.

Run a Private Registry Server

Build it with persistent storage (-v) at host.

docker container run -d -p 5000:5000 --name 'registry' \
    -v $(pwd)/registry-data:/var/lib/registry 'registry' 
    # Bind Mount

Set Registry Domain

_REPO='127.0.0.1:5000'  # localhost:5000

Test it

# Pull/Tag/Push   
docker pull hello-world
docker tag hello-world ${_REPO}/hello-world
docker push ${_REPO}/hello-world
# Delete cached container & image
docker image remove hello-world
docker container rm $_CONTAINER
docker image remove ${_REPO}/hello-world
docker image ls  # verify it's gone (from cache) 
# Pull it from local registry 
docker pull ${_REPO}/hello-world
# View the image @ cache
docker image ls
# Run it (delete cntnr on exit)
docker run --rm ${_REPO}/hello-world

Query it

# List per name, in JSON
curl -X GET $_REPO/v2/_catalog
# or (same; GET is curl's default method)
curl $_REPO/v2/_catalog
# List tags of an image
curl $_REPO/v2/$_IMG/tags/list
# inspect (full info)
docker inspect $_REPO/ubuntu:18.04

Delete Image(s)/Repo(s)
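Deletion goes through the Registry HTTP API and requires the manifest digest; the registry must also be started with deletes enabled. A hedged sketch, with the repo name (hello-world) as an example:

```shell
# Hedged sketch: deletes must be enabled when the registry starts.
docker run -d -p 5000:5000 --name registry \
    -e REGISTRY_STORAGE_DELETE_ENABLED=true registry:2
_REPO='127.0.0.1:5000'
# Get the manifest digest from the Docker-Content-Digest response header;
# the Accept header requests the v2 manifest format.
DIGEST=$(curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    "$_REPO/v2/hello-world/manifests/latest" \
    | awk 'tolower($1)=="docker-content-digest:" {print $2}' | tr -d '\r')
# DELETE by digest (returns 202 Accepted); storage is reclaimed by GC.
curl -X DELETE "$_REPO/v2/hello-world/manifests/$DIGEST"
```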

Private Docker Registry with Swarm

Run a Registry @ Play with Docker

Templates > "5 Managers and no workers".

docker node ls
docker service create --name registry --publish 5000:5000 registry
docker service ps registry

Pull/Tag (127.0.0.1:5000)/Push the hello-world image again, then view the Registry catalog @ "5000" URL (endpoint); root is empty, but root/v2/_catalog shows Registry content per JSON.

Advanced Configs

# @ TLS
$ docker run -d \
    --restart=always \
    --name registry \
    -v "$(pwd)"/certs:/certs \
    -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
    -p 443:443 \
    registry:2

# @ TLS +Basic Auth
$ docker run -d \
    -p 5000:5000 \
    --restart=always \
    --name registry \
    -v "$(pwd)"/auth:/auth \
    -e "REGISTRY_AUTH=htpasswd" \
    -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
    -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
    -v "$(pwd)"/certs:/certs \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
    registry:2

# @ Swarm Service +TLS
$ docker node update --label-add registry=true node1
$ docker secret create domain.crt certs/domain.crt
$ docker secret create domain.key certs/domain.key
$ docker service create \
    --name registry \
    --secret domain.crt \
    --secret domain.key \
    --constraint 'node.labels.registry==true' \
    --mount type=bind,src=/mnt/registry,dst=/var/lib/registry \
    -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/run/secrets/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/run/secrets/domain.key \
    --publish published=443,target=443 \
    --replicas 1 \
    registry:2


Load-Balancer Considerations

Distribution Recipes : NGINX