
Docker

Revision as of 17:12, 19 December 2023 by Admin (talk | contribs) (→‎Docker in Docker: more security focus)

Docker Architecture

Linux container (LXC)[1] technology took off with Docker https://www.docker.com/ [2][3], which was released as open source in March 2013. Red Hat and others have collaborated with Docker's corporate backer, seemingly to compete with Canonical's Juju https://juju.ubuntu.com/ and its Charm technology, which is also based on Linux containers. Container primitives are built into the Linux kernel, so containers offer a lightweight, native method of virtualization compared to heavier traditional virtualization stacks such as VMware or VirtualBox (often managed via Vagrant).

Essentially, the difference is the hypervisor and OS. Whereas containers are implemented with kernel features like namespaces, cgroups and chroots, a full VM requires a hypervisor plus a complete guest operating system. Docker runs a daemon (dockerd) on the Docker host. (By comparison, Podman is daemonless: it manages containers from the parent process using a fork-and-exec model.)


It is important to realize that Docker is inherently a Linux technology.


Docker Engine is an open source containerization technology for building and containerizing your applications. Docker Engine acts as a client-server application with:

  • A server with a long-running daemon process dockerd.
  • APIs which specify interfaces that programs can use to talk to and instruct the Docker daemon.
  • A command line interface (CLI) client docker.

The CLI uses the Docker APIs to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI. The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.
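Each of the object types the daemon manages has a matching CLI management command, and the client-server split is visible in the CLI itself. A quick tour (requires a running Docker daemon):

```shell
# "docker version" prints separate Client and Server (Engine) sections,
# illustrating the client-server architecture described above
docker version

# One management command per Docker object type managed by dockerd
docker image ls         # images
docker container ls -a  # containers (including stopped ones)
docker network ls       # networks
docker volume ls        # volumes
```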

For more details, see Docker Architecture.

Volumes and Mounts

From Data management in Docker

By default all files created inside a container are stored on a writable container layer. This means that:

The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.

A container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.

Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.

Docker has two options for containers to store files on the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts.

Docker also supports containers storing files in-memory on the host machine. Such files are not persisted. If you’re running Docker on Linux, a tmpfs mount is used to store files in the host’s system memory. If you’re running Docker on Windows, a named pipe is used to store files in the host’s system memory.
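On Linux a tmpfs mount can be requested per container. A minimal sketch (the mount path and size are illustrative):

```shell
# Mount an in-memory tmpfs at /app/cache inside the container;
# its contents vanish when the container stops
docker run --rm --tmpfs /app/cache:rw,size=64m alpine \
    sh -c 'mount | grep /app/cache'
```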


That is all you need to know[4] about Docker when it comes to sharing files between the host and the container.

Volumes are the preferred way to persist data in Docker containers and services.

Bind mounts are a way to share a directory from the host (typically your app source code) into a running container so that the container responds immediately to changes made to files on the host. If your containerized app is a Node.js app that needs to be restarted to "see" changes, you can wrap node with nodemon, which monitors files for changes and restarts the Node app when they change. PHP apps execute the newest code on each request (via the web server) and so do not require any special wrapper.
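A sketch of the two approaches (container names, paths, and images are illustrative; the nodemon invocation assumes it can be fetched via npx):

```shell
# Volume: Docker-managed storage that survives container removal
docker volume create app-data
docker run -d --name db -v app-data:/var/lib/mysql mysql:8

# Bind mount: share host source code into the container;
# edits made on the host are visible inside immediately
docker run -d --name web \
    --mount type=bind,source="$PWD"/src,target=/app \
    node:20 npx nodemon /app/server.js
```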


Note: Docker Desktop is simply a fancy GUI client application that uses virtualization (a Linux Virtual Machine) to bundle the Docker Engine into your host OS.

  • Volumes are the right choice when your application requires fully native file system behavior on Docker Desktop. For example, a database engine requires precise control over disk flushing to guarantee transaction durability. Volumes are stored in the Linux VM and can make these guarantees, whereas bind mounts are remoted to macOS or Windows OS, where the file systems behave slightly differently.


When you use either bind mounts or volumes, keep the following in mind:

  • If you mount an empty volume into a directory in the container in which files or directories exist, these files or directories are propagated (copied) into the volume. Similarly, if you start a container and specify a volume which does not already exist, an empty volume is created for you. This is a good way to pre-populate data that another container needs.
  • If you mount a bind mount or non-empty volume into a directory in the container in which some files or directories exist, these files or directories are obscured by the mount, just as if you saved files into /mnt on a Linux host and then mounted a USB drive into /mnt. The contents of /mnt would be obscured by the contents of the USB drive until the USB drive were unmounted. The obscured files are not removed or altered, but are not accessible while the bind mount or volume is mounted.
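Both behaviors in the list above can be observed directly (using the alpine image for brevity; the volume name and host path are placeholders):

```shell
# Propagation: files the image ships at /etc are copied into the
# newly created, empty volume "seed-vol" on first mount
docker run --rm -v seed-vol:/etc alpine true
docker run --rm -v seed-vol:/data alpine ls /data/hostname

# Obscuring: an empty bind mount hides the image's files at that path
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/usr/share alpine ls /usr/share  # empty listing
```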

VSCode with Docker

Each time you create a Docker container and connect VSCode to it, VSCode installs a server into the container so that it can communicate with it. There is therefore a short delay (a minute or two) before you can edit or do anything. It is best to stop a container and re-build it only when necessary, to avoid this delay.

Docker Images

Bitnami has a Docker image for MediaWiki. Don't use Bitnami. You will thank me later.

Security

Docker apparently doesn't respect your host firewall by default, leading to the potential for a gaping security hole. This has been a reported bug since 2018. One fix is to set the DOCKER_OPTS configuration parameter; another is to add a jump rule to UFW. The bug report links to docs and multiple references.
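As a sketch of the DOCKER_OPTS approach: on Debian/Ubuntu systems using /etc/default/docker, you can tell the daemon not to program iptables at all. Be warned that this also disables Docker's own NAT and forwarding rules, so container networking must then be handled by you:

```
# /etc/default/docker — stop Docker from bypassing the host firewall
# (caution: also disables Docker's own NAT/forwarding setup)
DOCKER_OPTS="--iptables=false"
```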

Docker Downsides

One major negative of Docker's system architecture is that it relies on a server daemon. Unlike Podman, Docker's Engine can use up to 4GB of RAM just sitting idle. A similar thing happens with WSL2 on Windows.[5]

Future Reading

  1. The compose application model https://docs.docker.com/compose/compose-file/02-model/
  2. Understand how moby buildkit is integrated with buildx (or docker) and use it.
  3. Interesting read about docker commit https://adamtheautomator.com/docker-commit/

Inspect your running container based on its container name:

docker inspect $(docker container ls | awk '/app2/ {print $1}')

Docker in Docker

Before you get 'fancy' with Docker, be sure to read and understand the Security best practices for Docker https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html

Containers (unlike virtual machines) share the kernel with the host, therefore kernel exploits executed inside a container will directly hit the host kernel. For example, a kernel privilege-escalation exploit (like Dirty COW) executed inside a well-isolated container will still result in root access on the host.

That said, https://devopscube.com/run-docker-in-docker/ presents several use cases and techniques for DinD.

Docker In Docker

The DinD method runs Docker inside Docker. Docker provides a special container image with the dind tag which is pre-configured to run Docker inside the container.
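A minimal DinD invocation looks like the following. Note that the dind image requires --privileged, which is itself a significant security trade-off (see the OWASP cheat sheet above):

```shell
# Start a Docker daemon inside a container
docker run -d --name dind --privileged docker:dind

# Run docker commands against the inner daemon
docker exec dind docker version
```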

An example of Docker In Docker would be running Docker inside a Docker image on Windows Subsystem for Linux (WSL):

  1. Install Docker Desktop for Windows.
  2. Enable the WSL 2 backend in Docker Desktop settings.
  3. Set WSL 2 as your default version: wsl --set-default-version 2 in PowerShell.
  4. Install a Linux distribution from the Microsoft Store (Ubuntu, for example).
  5. Open your Linux distribution (WSL 2 instance), and install Docker.
  6. Add your user to the Docker group to manage Docker as a non-root user: sudo usermod -aG docker $USER.
  7. Test Docker with: docker run hello-world.

Running Docker inside Docker can lead to some security and functionality issues. It's often better to use the Docker daemon of the host machine. You can do this by mounting the Docker socket from the host into the container. See the "Docker outside of Docker" section.

Docker outside of Docker

The DooD method, Docker outside of Docker, uses the Docker socket of the host system from inside containers by mounting the host socket into the container's filesystem.
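A sketch of the DooD pattern using the official docker CLI image: the inner CLI talks to the *host* daemon, so containers it starts are siblings of the outer container, not nested inside it.

```shell
# Mount the host's Docker socket into a container; "docker ps" inside
# lists the host's containers, proving the inner CLI controls the host daemon
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:cli docker ps
```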

Try curl to see how different processes can communicate through a socket on the same host:

curl --unix-socket /var/run/docker.sock http://localhost/version

While there are benefits to using the DooD method, you are giving your containers full control over the Docker daemon, which effectively is root on the host system. To mitigate these risks, again refer to the Security model of Docker, the Docker Security Cheat Sheet from OWASP and be sure to run your Docker daemon as an unprivileged user.


For security, use the Nestybox Sysbox runtime.

See https://github.com/nestybox/sysbox
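With Sysbox installed, DinD no longer needs --privileged: you select the runtime per container with the --runtime flag. A sketch based on the Sysbox README (the image choice is illustrative):

```shell
# Run an unprivileged "system container" capable of running Docker inside,
# using the sysbox-runc runtime instead of --privileged
docker run -d --name syscont --runtime=sysbox-runc docker:dind
```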

References

  1. https://help.ubuntu.com/lts/serverguide/lxc.html
  2. See the interview on opensource.com
  3. more info from Wikipedia wp:Docker_(software)
Not really. In practice you need to fully understand how volumes and mounts work to avoid very common pitfalls like the host-filesystem owner-matching problem. In a nutshell, the best approach is to run your container with a UID/GID that matches the host's UID/GID. This can be hard to implement while addressing all the caveats; Hongli Lai wrote a tool, MatchHostFSOwner, to solve it.
  5. https://news.ycombinator.com/item?id=26897095