Docker delivers software in containers, which simplifies deployment by packaging everything an application needs to run. With Docker growing in popularity, it is worth understanding it before using it on your own or for a business. After all, PayPal, Netflix, AT&T, Oracle, Microsoft, and other major businesses use Docker as their preferred tool. That’s why we created this Docker cheat sheet for you. It will be of use before a job interview, to refresh your memory on certain commands, or simply to support you as you learn Docker on your own.
Start your 30-day FREE TRIAL to learn more about our Docker courses here at CloudInstitute.io.
Check Version
It’s good to know the current version of Docker. This will help you know which features are compatible with what you’re currently running, and which containers you can run when you pull template containers. Here’s how to figure out what version you’re running:
- docker version shows the Docker version you are running.
Find server version:
$ docker version --format '{{.Server.Version}}'
1.8.0
Here’s how to dump raw JSON data:
$ docker version --format '{{json .}}'
{"Client":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f5bae0a","GoVersion":"go1.4.2","Os":"linux","Arch":"am"}
Containers
Docker is all about containers. It’s simply a package of software that can include code, tools, libraries, settings and anything else you need. It’s all in one place, one container.
Lifecycle
- docker create creates a container yet doesn’t start it.
- docker rename renames the container.
- docker run both creates and runs a container in a single operation.
- docker rm simply deletes a container.
- docker update updates a container's resource limits.
If you run a container without options, it will start and stop immediately. To keep it running, use the command docker run -td image_id
The -t option allocates a pseudo-TTY session and -d detaches the container (runs it in the background and prints the container ID). For a temporary container, docker run --rm removes the container after it stops.
To delete the volumes associated with a container, include the -v switch when deleting it, as in docker rm -v.
To run a container with a custom log driver (e.g., syslog), use docker run --log-driver=syslog. Another useful option is docker run --name yourname docker_image: when you specify --name in the run command, you can start and stop the container using the name you gave it.
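For example, here is a rough sketch of these options together (the name web and the images are illustrative):

docker run -d --name web nginx      # run in the background with a memorable name
docker stop web                     # stop it by name
docker start web                    # start it again by name
docker run --rm -it ubuntu bash     # throwaway container, removed when the shell exits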
Starting and Stopping
- docker start starts a stopped container.
- docker stop stops a running container.
- docker restart stops and restarts a container.
- docker pause pauses a running container.
- docker unpause will unpause a running container.
- docker wait blocks until a running container stops.
- docker kill sends a SIGKILL to a running container.
- docker attach will connect to a running container.
If you want to detach from a running container, use Ctrl + p, Ctrl + q. To integrate a container with a host process manager, start the daemon with -r=false and then use docker start -a.
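As a quick illustration (the name sandbox and the image are arbitrary):

docker run -dit --name sandbox ubuntu bash   # start a container in the background with a TTY
docker attach sandbox                        # attach to it; press Ctrl + p, Ctrl + q to detach again without stopping it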
CPU Constraints
Limit CPU by using a percentage of all CPUs or by using specific cores.
For instance, you can use the cpu-shares setting. The scale is a little unusual: 1024 means 100% of the CPU, so to have the container take at most 50%, specify 512.
docker run -it -c 512 agileek/cpuset-test
You can also restrict a container to specific CPU cores with cpuset-cpus, as in the sketch below.
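For example, pinning the same test image to two specific cores (cores 0 and 1 here are arbitrary):

docker run -it --cpuset-cpus=0,1 agileek/cpuset-test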
Memory Constraints
Setting memory constraints on Docker:
docker run -it -m 300M ubuntu:14.04 /bin/bash
Capabilities
Linux capabilities can be added or dropped with --cap-add and --cap-drop for better security.
In order to mount a FUSE based filesystem, you need to combine both --cap-add and --device:
docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs
Give access to a single device:
docker run -it --device=/dev/ttyUSB0 debian bash
Give access to all devices:
docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb debian bash
Info
- docker ps shows running containers.
- docker logs retrieves logs from a container.
- docker inspect looks at all the info on a container (including IP address).
- docker events gets events from container.
- docker port shows public facing port of container.
- docker top shows running processes in container.
- docker stats shows containers' resource usage statistics.
- docker diff shows changed files in the container's FS.
docker ps -a shows running as well as stopped containers.
docker stats --all shows stats for all containers (by default only running ones are shown). A few of these commands in practice are shown below.
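A quick sketch of a few of these (the container name my_container is illustrative):

docker logs -f --tail 100 my_container   # follow the last 100 log lines
docker top my_container                  # processes running inside the container
docker port my_container                 # port mappings for the container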
Import / Export
- docker cp copies files/folders between a container and local filesystem.
- docker export turns container filesystem into tarball archive stream to STDOUT.
Executing Commands
- docker exec to execute a command in a container.
To enter a running container, attach a new shell process to it. For a container named foo, use: docker exec -it foo /bin/bash.
Images
Images are templates for Docker containers.
Lifecycle
- docker images shows all images.
- docker import creates an image from tarball.
- docker build creates an image from a Dockerfile (see the example after this list).
- docker commit creates an image from a container, pausing it temporarily if it’s running.
- docker rmi removes an image.
- docker load loads an image from a tar archive as STDIN, including images and tags.
- docker save saves images to a tar archive stream to STDOUT with all parent layers, tags and versions.
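A minimal sketch of the typical build-and-clean-up flow (the tag my_image:1.0 is illustrative):

docker build -t my_image:1.0 .   # build an image from the Dockerfile in the current directory
docker images my_image           # list it
docker rmi my_image:1.0          # remove it again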
Info
- docker history shows the history of the image.
- docker tag tags an image to a name (local or registry).
Cleaning up
You can always use the docker rmi command to remove images, but a tool called docker-gc can safely clean up images that aren’t used by any containers. As of docker 1.13, docker image prune can also remove unused images.
Load/Save Image
Load an image from file:
docker load < my_image.tar.gz
Save an existing image:
docker save my_image:my_tag | gzip > my_image.tar.gz
Import/Export Container
Import a container (as an image from file):
cat my_container.tar.gz | docker import - my_image:my_tag
Export an existing container:
docker export my_container | gzip > my_container.tar.gz
Difference Between Loading Saved Image and Importing an Exported Container as an Image
The load command creates a new image, including its history.
Importing a container as an image with the import command also creates a new image, but without the history, which results in a smaller image than loading a saved image.
Networks
Docker automatically creates three networks when you install it: bridge, host and none. By default, a new container is launched into the bridge network. To enable communication between multiple containers, create a user-defined network and launch the containers in it. The containers can then communicate with one another while staying isolated from containers that are not connected to the network. On a user-defined network, containers can also reach each other by name, as in the sketch below.
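As a minimal sketch (the network name, container name and image are illustrative), two containers on the same user-defined network can reach each other by name:

docker network create my_net
docker run -d --name db --net my_net redis
docker run --rm -it --net my_net redis redis-cli -h db ping   # "db" resolves via Docker's embedded DNS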
Lifecycle
- docker network create NAME Create a new network (default type: bridge).
- docker network rm NAME Remove one or more networks by name or identifier. A network can only be removed when no containers are connected to it.
Info
- docker network ls List networks
- docker network inspect NAME Display detailed information on one or more networks.
Connection
- docker network connect NETWORK CONTAINER Connect a container to a network
- docker network disconnect NETWORK CONTAINER Disconnect a container from a network
Here is how you can assign a unique IP address for a container:
# create a new bridge network with your subnet and gateway for your ip block
docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic

# run a nginx container with a specific ip in that block
docker run --rm -it --net iptastic --ip 203.0.113.2 nginx

# curl the ip from anywhere else (assuming the ip block is reachable)
curl 203.0.113.2
Registry & Repository
A repository is a hosted collection of tagged images that together make up the file system for a container.
A registry is a host: a server that stores repositories and provides an HTTP API for managing, uploading and downloading them.
Docker Hub hosts a massive number of repositories. However, images there may not be secure, as the registry contains some unverified images.
- docker login to login to a registry.
- docker logout to logout from a registry.
- docker search searches registry for image.
- docker pull pulls an image from registry to local machine.
- docker push pushes an image to the registry from the local machine (see the example after this list).
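A rough sketch of the push/pull flow against a private registry (the registry address and image names are illustrative):

docker login registry.example.com
docker tag my_image:1.0 registry.example.com/myteam/my_image:1.0
docker push registry.example.com/myteam/my_image:1.0
docker pull registry.example.com/myteam/my_image:1.0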
Dockerfile
A Dockerfile is the configuration file that builds a Docker image when you run docker build on it. This is preferred over creating images with docker commit. The instructions below are the building blocks; a minimal example follows the list.
Instructions
- .dockerignore lists files and directories to exclude from the build context.
- FROM sets the base image for subsequent instructions.
- MAINTAINER (deprecated - use LABEL instead) sets the Author field of the generated images.
- RUN executes commands in a new layer on top of the current image and commits the result.
- CMD provides defaults for an executing container.
- EXPOSE informs Docker that the container listens on the specified network ports at runtime. This doesn’t make the ports accessible by itself.
- ENV sets an environment variable.
- ADD copies new files, directories or remote file URLs to the container. It invalidates caches. Avoid ADD and use COPY instead.
- COPY copies new files or directories to container. The default is to copy as a root despite any of the USER/WORKDIR settings. Use --chown=<user>:<group> to give ownership to another user/group. (Same for ADD.)
- ENTRYPOINT configures a container that will run as an executable.
- VOLUME creates a mount point for externally mounted volumes or other containers.
- USER sets the user name for following RUN / CMD / ENTRYPOINT commands.
- WORKDIR sets the working directory.
- ARG defines a build-time variable.
- ONBUILD adds a trigger instruction when the image is used as the base for another build.
- STOPSIGNAL sets the system call signal that will be sent to the container to exit.
- LABEL apply key/value metadata to your images, containers, or daemons.
- SHELL overrides the default shell used by Docker to run commands.
- HEALTHCHECK tells docker how to test a container to check that it is still working.
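Pulling several of these instructions together, here is a minimal sketch built from an inline Dockerfile (the same heredoc trick used later in this sheet); the tag, base image and package are illustrative:

docker build -t tiny-http - << 'EOF'
FROM alpine:3.18
LABEL maintainer="you@example.com"
ENV SRV_DIR=/srv
WORKDIR $SRV_DIR
RUN apk add --no-cache python3
EXPOSE 8000
USER nobody
CMD ["python3", "-m", "http.server", "8000"]
EOF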
Layers
The filesystem in Docker is based on layers.
Links
Links are a way for Docker containers to communicate with one another through TCP/IP ports; user-defined networks are now the preferred approach.
Volumes
Docker volumes are not tied to any particular container. Data-only containers were once used to make volumes portable, but Docker now has named volumes, which replace data-only containers; it’s best to use named volumes instead. A sketch follows.
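A minimal sketch with a named volume (the volume name, container name and image are illustrative):

docker volume create app_data
docker run -d --name cache -v app_data:/data redis   # mount the named volume at /data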
Lifecycle
- docker volume create creates a volume.
- docker volume rm removes a volume.
Info
- docker volume ls lists volumes.
- docker volume inspect displays detailed information on a volume.
Volumes are useful when you can't use links (which are TCP/IP only), for example when you want two Docker containers to share data by leaving files on the filesystem.
One solution is to mount the same volume in several Docker containers at once, using docker run --volumes-from, as in the sketch below.
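A rough sketch of --volumes-from (the names and images are illustrative):

docker run --name datastore -v /shared busybox true              # container with an anonymous volume at /shared
docker run --rm -it --volumes-from datastore alpine ls /shared   # mounts all of datastore's volumes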
Because volumes are isolated filesystems, they are often used to store state from computations between transient containers.
Here’s how to map MacOS host directories as volumes:
docker run -v /Users/wsargent/myapp/src:/src some_image
You can also run data-only containers as described to provide data portability.
Finally, you can also mount files as volumes.
Exposing Ports
Exposing incoming ports is done by mapping the container port to a host port (here bound only to the localhost interface) using -p:
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage
You can tell Docker that the container listens on the specified network ports at runtime by using the EXPOSE instruction:
EXPOSE <CONTAINERPORT>
EXPOSE doesn’t expose the port itself -- only -p will do that. Here’s how you can expose the container's port:
iptables -t nat -A DOCKER -p tcp --dport <LOCALHOSTPORT> -j DNAT --to-destination <CONTAINERIP>:<PORT>
Running Docker in VirtualBox? Then you’ll have to forward the port there too, using forwarded_port. Define a port range in your Vagrantfile like this so you can map ports dynamically:
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  ...
  (49000..49900).each do |port|
    config.vm.network :forwarded_port, :host => port, :guest => port
  end
  ...
end
If you’ve forgotten what you mapped the port to on the host, use docker port to show it:
docker port CONTAINER $CONTAINERPORT
Docker-Compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services, and a single command creates and starts all the services from your configuration.
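As a minimal sketch, a compose file can be written inline like this (the service name, image and port mapping are illustrative):

cat > docker-compose.yml << 'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF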
You can use the following command to start your application:
docker-compose -f <docker-compose-file> up
You can also run docker-compose in detached mode using the -d flag, and then stop the services whenever needed with the following command:
docker-compose stop
You can remove containers with the down command. Pass --volumes to also remove the data volume.
Security
Docker runs as root. If you’re in the docker group, you effectively have root access. If you expose the Docker unix socket to a container, you’re giving that container root access to the host. Docker shouldn’t be your only line of defense, so you should still make sure the rest of your security setup is solid.
Security Tips
For the best security, you want to run Docker inside a virtual machine. This is the recommendation from the Docker Security Team Lead.
Image IDs are sensitive information and shouldn’t be given to anyone. Treat them like passwords and don’t share them.
Since Docker 1.11, you can limit the number of active processes running inside a container to prevent fork bombs. This requires a Linux kernel >= 4.3 with CGROUP_PIDS=y in the kernel configuration.
docker run --pids-limit=64
Another addition since Docker 1.11 is the ability to prevent processes from gaining new privileges. This feature has been in the Linux kernel since version 3.5.
docker run --security-opt=no-new-privileges
Here are some container hardening tips from Container Solutions:
Turn off interprocess communication with:
docker -d --icc=false --iptables
Set the container to be read-only:
docker run --read-only
Verify images with a hashsum:
docker pull debian@sha256:a25306f3850e1bd44541976aa7b5fd0a29be
Set volumes to be read only:
docker run -v $(pwd)/secrets:/secrets:ro debian
Define/run a user in your Dockerfile. This will help you not run as root inside the container:
RUN groupadd -r user && useradd -r -g user user
USER user
Security Roadmap
The Docker roadmap talks about seccomp support. There is an AppArmor policy generator called bane, and work is in progress on security profiles.
Tips
Some tips:
Prune
Data management commands are available as of Docker 1.13 (a combined example follows the list):
- docker system prune
- docker volume prune
- docker network prune
- docker container prune
- docker image prune
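Combining these, a quick example that reclaims most unused resources at once (both flags are optional and widen the scope):

docker system prune --all            # stopped containers, unused networks, build cache, and all unused images
docker system prune --all --volumes  # additionally remove unused volumes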
df
docker system df is a basic summary of space currently used by docker objects.
Heredoc Docker Container
docker build -t htop - << EOF
FROM alpine
RUN apk --no-cache add htop
EOF
Last Ids
alias dl='docker ps -l -q'
docker run ubuntu echo hello world
docker commit $(dl) helloworld
Commit with Command (Requires Dockerfile)
docker commit -run='{"Cmd":["postgres", "-too -many -opts"]}' $(dl) postgres
Get IP Address
docker inspect $(dl) | grep -wm1 IPAddress | cut -d '"' -f 4
or with jq installed:
docker inspect $(dl) | jq -r '.[0].NetworkSettings.IPAddress'
or using a go template:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container_name>
or, when building an image from a Dockerfile and you want to pass in a build argument:
DOCKER_HOST_IP=`ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1`
echo DOCKER_HOST_IP = $DOCKER_HOST_IP
docker build \
  --build-arg ARTIFACTORY_ADDRESS=$DOCKER_HOST_IP -t sometag \
  some-directory/
Get port mapping
docker inspect -f '{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' <containername>
Find containers by regular expression
for i in $(docker ps -a | grep "REGEXP_PATTERN" | cut -f1 -d" "); do echo $i; done
Get Environment Settings
docker run --rm ubuntu env
Kill running containers
docker kill $(docker ps -q)
Delete all containers (force!! running or stopped containers)
docker rm -f $(docker ps -qa)
Delete old containers
docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs docker rm
Delete stopped containers
docker rm -v $(docker ps -a -q -f status=exited)
Delete containers after stopping
docker stop $(docker ps -aq) && docker rm -v $(docker ps -aq)
Delete dangling images
docker rmi $(docker images -q -f dangling=true)
Delete all images
docker rmi $(docker images -q)
Delete Dangling Volumes
As of Docker 1.9:
docker volume rm $(docker volume ls -q -f dangling=true)
In 1.9.0, the filter dangling=false does not work - it is ignored and will list all volumes.
Show Image Dependencies
docker images -viz | dot -Tpng -o docker.png
Slimming Down Docker Containers
- Cleaning APT in a RUN layer
This should be done in the same layer as other apt commands. Otherwise, the previous layers still persist the original information and your images will still be fat.
RUN {apt commands} \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
- Flatten an image
ID=$(docker run -d image-name /bin/bash)
docker export $ID | docker import - flat-image-name
- For backup
ID=$(docker run -d image-name /bin/bash)
(docker export $ID | gzip -c > image.tgz)
gzip -dc image.tgz | docker import - flat-image-name
Monitor System Resource Utilization for Running Containers
To check the CPU, memory, and network I/O usage of a single container, you can use:
docker stats <container>
For all containers listed by id:
docker stats $(docker ps -q)
For all containers listed by name:
docker stats $(docker ps --format '{{.Names}}')
For all containers listed by image:
docker ps -a -f ancestor=ubuntu
Remove all untagged images:
docker rmi $(docker images | grep "^<none>" | awk '{split($0,a," "); print a[3]}')
Remove container by a regular expression:
docker ps -a | grep wildfly | awk '{print $1}' | xargs docker rm -f
Remove all exited containers:
docker rm -f $(docker ps -a | grep Exit | awk '{ print $1 }')
Volumes Can Be Files
Be aware that you can mount files as volumes. For example you can inject a configuration file like this:
# copy file from container
docker run --rm httpd cat /usr/local/apache2/conf/httpd.conf > httpd.conf

# edit file
vim httpd.conf

# start container with modified configuration
docker run --rm -it -v "$PWD/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro" -p "80:80" httpd
Docker Training
We hope you found this short cheat sheet useful. We can provide even more explanations, from the basics of Docker all the way up to advanced concepts and practical training. Start your 30-day FREE TRIAL to gain access to over 200 self-paced courses.