Docker cheat sheet

What is it and why

The idea behind Docker is that applications can be packaged into isolated containers together with their dependencies - the required OS, binaries, libraries, source files, databases and so on. New developers can then start working on a project faster than before, when every dependency had to be managed manually to set up a local environment.

In reality, this is not quite the case. Managing Docker is not as simple as it seems, and it can quickly get messy with all the commands and buzzwords you need to know. The time spent setting up a local environment is instead spent setting up the local Docker environment. Images, containers, Dockerfiles, Docker Compose, docker-compose.yml, to name a few of all the things that need to be known - and yes, you need to know ALL of these and much more.

Docker is a client-server solution, where the client is the docker command and the Docker Engine is the daemon behind it. Both must run on the host machine for Docker to work.
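The split can be seen with docker version, which reports the client and the daemon separately (output abbreviated, version numbers are just examples):

```
$ docker version
Client:
 Version:           24.0.7
 ...
Server: Docker Engine
 Engine:
  Version:          24.0.7
  ...
```

If only the Client part is printed followed by an error, the daemon is not running or not reachable.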

Docker project files do not contain source code and databases

Usually, when introduced to a new project to work on, you will receive some kind of compressed file with a Dockerfile, a docker-compose.yml and some additional configuration files.

These project packages do NOT contain the source code and databases, despite the claim that images contain everything needed to run the application. The source code is most often found in some git repository and has to be downloaded separately and added through the volumes: property in docker-compose.yml. The databases are on some external server, and credentials have to be entered in the source code.

Where to put Docker project files

Where to put these is up to you. Note that this is not the same as the code repository location - that can live somewhere else.

It could be placed into /opt, /var/www, /srv, ~/, ~/docker and so on.

Images

Images contain what is needed to run the application - a reduced OS, a runtime environment, your application files, libraries, environment variables and so on. Docker then uses the images to start containers. In OOP terms, images are like classes.

Docker creates images of applications based on Dockerfiles, which contain the recipes for images. An image is most often based on another image. For example, if you want to run a web server you might start with a Linux image and write a Dockerfile that customizes it and adds your source code, which in turn results in a new image.
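A minimal sketch of such a layered Dockerfile - the base image, package and paths are made-up examples:

```dockerfile
# Start from an official base image
FROM debian:bookworm-slim

# Customize it: install a web server
RUN apt-get update && apt-get install -y nginx

# Add your own files on top - the result is a new image
COPY ./site /var/www/html
```

Each instruction adds a layer on top of the previous one, and the final stack of layers is the new image.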


Containers

Containers are processes with their own file systems, provided by their image. In OOP terms, containers are like objects. Applications are loaded into containers, which then run the application. Containers are isolated environments.

Containers are NOT virtual machines. They share the kernel with the host OS; no virtual machine is involved. It is more like a chroot environment. Windows containers cannot run on Linux or Mac hosts, and the same goes for Mac, but Windows 10 can run both Windows and Linux containers because it can provide a Linux kernel through WSL 2.

Docker uses images to create containers, and the application then runs inside these containers. One image can be used to start any number of containers.

Containers created by Docker Compose are prefixed with the project name, which defaults to the name of the current directory - for example, a service named web in a directory called myproject becomes a container named myproject-web-1. This is why container names can change if you move or rename the project. Someone thought it was a good idea, just to make the mess a little bit bigger.

Relation between Docker, images and containers

It can get messy if you do not grasp the difference between all these things, so here is a tree to explain:

Host machine
 |- Docker Engine
     |- application 1
     |    |- image 1
     |         |- container(s) started from image 1
     |- application 2
          |- image 2
               |- container(s) started from image 2

Docker Compose / docker-compose.yml

Docker Compose is a part of Docker used to run multiple containers, based on docker-compose.yml files which contain the recipes for how to configure these containers - open ports, set up shared directories with the host and so on.

The Dockerfile and docker-compose.yml could have been combined into one file even though they do different things - the Dockerfile could have been a section inside docker-compose.yml. The naming is not straightforward either; the file could just as well have been called Docker-Setup.yml or something.



Installation

Docker is in the Debian repository, but since this is bleeding edge technology you might want to use the official installer.

You do NOT need or want Docker Desktop on Linux, it will only mess up things.

You do NOT need to install Docker Compose separately anymore. The standalone docker-compose command has been deprecated and folded into the main command; it is now just docker compose.

However, you might still come across docker-compose commands to run. A plain symlink to the docker binary will not work, since docker would then be called without the compose subcommand - use a small wrapper script instead:
printf '#!/bin/sh\nexec docker compose "$@"\n' | sudo tee /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

In Docker Desktop it can be re-enabled: go to the settings cog, General, and check Enable Docker Compose V1/V2 compatibility mode.


Dockerfile

Here is a list of the instructions found inside Dockerfiles.

To base the image on another image:
FROM <image - OS or base image to use, search on Docker Hub for names>:<tag - version to use, also listed on Docker Hub>

! Beware, Docker images are binaries that are very hard to audit, for safety reasons use only images from official sources or build them yourself.

To copy files from host to image:
COPY <path on host> <path in image>

To change working directory in image:
WORKDIR <path in image>

To run command in image to do setup:
RUN <command> [arguments]

To set an environment variable:
ENV <key>=<value>
To launch the application program at the end of the Dockerfile:
CMD <command> [arguments]
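Putting the instructions above together - a minimal sketch of a Dockerfile for a hypothetical Node.js application (image name, paths and commands are assumptions):

```dockerfile
# Base the image on an official Node.js image
FROM node:20-slim

# Set the working directory inside the image
WORKDIR /app

# Copy the application files from the host into the image
COPY . /app

# Run a command while building the image, to do setup
RUN npm install

# Set an environment variable
ENV NODE_ENV=production

# Launch the application when a container starts
CMD ["node", "server.js"]
```

Build it with docker build -t myapp . in the same directory.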


docker-compose.yml

This file is a setup file for the Docker environment. It is written in the YAML format for some reason, but without the leading three hyphens (---), which are optional in YAML anyway.

This file is formatted differently depending on the version of Docker it was created for; the version used to be set with the version: property inside it, although newer versions of Compose ignore that property. The versions and the properties used in the file are documented on Docker's website.

Use services: to define the images to build for each service in the application to deploy. Below each service come the properties to use when running its container. Services seem to be images and containers combined.

  <name for the service or image>:
     <properties to use when running the container for this image>

The build: and context: properties set where the Dockerfile is located for the container:

      build:
        context: ./dir

Use image: to set an image without using a Dockerfile:

    image: <image name from Docker Hub>:<tag from Docker Hub>

Use ports: to share ports between host and container:

      ports:
        - <host port>:<container port>

Use environment: to set environment variables:

      environment:
        - <KEY>=<value>

Use volumes: to share files and directories between the host and the container environment - there are several optional parameters to put after the container path:

      - <./host/path/to/file/or/folder>:</host/path/to/file/or/folder>
      - <./host/path/to/file/or/folder>:</host/path/to/file/or/folder>:<optional parameters>
      - </host/path/to/file/or/folder>:</host/path/to/file/or/folder>:rw
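A minimal sketch of a docker-compose.yml combining the properties above - service names, images, ports and paths are made-up examples:

```yaml
services:
  web:
    build:
      context: ./web          # directory containing the Dockerfile
    ports:
      - 8080:80               # host port 8080 -> container port 80
    environment:
      - APP_ENV=development
    volumes:
      - ./src:/var/www/html:rw
  db:
    image: mariadb:10.11      # image from Docker Hub, no Dockerfile needed
    environment:
      - MARIADB_ROOT_PASSWORD=example
```

Each top-level key under services: becomes one container, named after the service.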


Commands

Before starting, you have to understand that you can manage images and containers independently, or use Docker Compose to manage several at once.

There are also a lot of aliases for the commands, but in some places commands are missing (!) and you have to write quite complex things just to do simple tasks - like stopping all containers.

  List images, add -a to list all:
    docker image ls
    docker images

  Fetch an image uploaded to Docker Hub:
    docker pull <user/image>

  Build image:
    docker build -t <name> <path to Dockerfile or use . for current directory>

  Remove image, use -f to force if containers are running for instance: 
    docker rmi [options] <image>

  Create new container:
    docker create <image to base it on>     

  Stop container:
    docker container stop <container ID>

  Stop all containers (note the clumsy use of docker ps):
    docker stop $(docker ps -q)

  List running containers, add -a to list all:
    docker ps   
    docker container list
    docker container ls

  Start/stop/restart containers:
    docker start/stop/restart <container>

  Start a shell inside a container:
    docker exec -it <container ID> /bin/bash

  Pull the image if needed, create a container and start it (it stops when the command exits); add -it to interact and connect a terminal:
    docker run [options] <image> [commands] [arguments]

  Remove container:
    docker rm <container>       

  Copy files and folders between the host and a container's file system; note that wildcards are NOT (!) supported:
    docker cp <host source path> <container>:<destination path>
    docker cp <container>:<source path> <host destination path>
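Since wildcards are not supported, one workaround is to stream a tar archive through docker exec instead - a sketch, assuming a running container named mycontainer:

```shell
# Copy all .log files from /var/log in the container into the current host directory
docker exec mycontainer sh -c 'cd /var/log && tar cf - *.log' | tar xf -
```

The wildcard is expanded by the shell inside the container, where it works as usual.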

  Images and Containers:
    Remove all stopped containers, networks not used by at least one
    container, images without at least one container associated to them,
    build cache:
      docker system prune -a     

Docker Compose - run these in the same folder as docker-compose.yml:

  Build images using docker-compose.yml, add --no-cache to not use cache:
    docker compose build

  Run docker compose detached:
    docker compose up -d
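A typical round trip with Compose might look like this, run in the project directory - docker compose down is the counterpart to up:

```shell
docker compose build      # build the images from docker-compose.yml
docker compose up -d      # create and start the containers, detached
docker compose logs -f    # follow the logs of all services
docker compose down       # stop and remove the containers and the network
```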


Networks

  Services can communicate with each other over the internal network that
  Docker Compose creates; the hostnames are the service names. It is for
  instance possible to ping one service from another by name.
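For example, assuming web and db services from a docker-compose.yml, you can reach db from inside web by its service name:

```shell
docker compose exec web ping -c 1 db
```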

  List networks:
    docker network list

  Create a network:
    docker network create <your network> --subnet <subnet in CIDR notation>

Error - The path x is not shared from the host and is not known to Docker.

Edit the volumes: paths in docker-compose.yml to match the current file structure on the host.

If you have Docker Desktop, open that, then go to Docker -> Preferences... -> Resources -> File Sharing and add the directory.

Or you can edit /home/<user>/.docker/desktop/settings.json and add the path to filesharingDirectories:
  "filesharingDirectories": [
    "/<add path here>"
  ],

Error - Cannot connect to the Docker daemon at unix:///home/<user>/.docker/desktop/docker.sock. Is the docker daemon running?

This error can appear when running docker compose up -d.

To fix it, run this and note where the sock file actually is:
ps ax | grep dockerd

sudo mv /home/$USER/.docker/desktop/docker.sock /home/$USER/.docker/desktop/docker.sock.old
sudo ln -s <path to sock file> /home/$USER/.docker/desktop/docker.sock

To get permission to use the socket, run this to add yourself to the docker group, then log out and log in again:

sudo usermod -a -G docker $USER

You can switch to the group without logging out and in using newgrp docker, but this is only valid in the current terminal.

After logging out and in, run docker compose up -d again. If it fails because of a missing network, add it as described above.


This is a personal note. Last updated: 2022-11-27 03:13:04.






