Docker in Docker (also known as dind) lets developers run a Docker container inside an already running Docker container, typically to support CI/CD pipelines or to create sandboxed container environments. Running containers this way extends Docker's already robust capabilities and gives developers more flexibility in how they use it. There are three common methods to accomplish this. Here, we'll walk through each one and weigh its risks and tradeoffs.
Why Would You Run Docker In Docker?
Running Docker in Docker gives developers more flexibility, letting them accomplish tasks that would be difficult to achieve by other means.
Using Docker in Docker in a CI/CD Pipeline
In some DevOps environments, the Continuous Integration/Continuous Delivery (CI/CD) pipeline uses a Jenkins or GitLab agent that itself runs in a container. That means all the commands in the pipeline's stages execute on that containerized agent. If one of those commands is a Docker command, you'd need to run it from within a container.
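As a hedged sketch, a pipeline stage on a containerized agent often boils down to ordinary Docker CLI commands; the image name and registry below are placeholders:

```shell
# Hypothetical CI stage script running inside a containerized agent.
# For these commands to work, the agent container needs access to a
# Docker daemon -- which is exactly the problem the methods in this
# article solve (dind, a mounted socket, or Sysbox).
docker build -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest
```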
Using Docker in Docker as a Sandbox Environment
Another common use case for running Docker in Docker is using the containerization platform as a sandbox. This way, you can isolate Docker from the host environment. It's easy to set up, and the entire environment can be destroyed simply by removing the container.
Packagecloud can hold all of your packages in one place, allowing you to control exactly which packages you are using and easily integrate them into your CI/CD workflow. Check out our free trial to get your packages set up quickly.
Method #1 - Docker in Docker Using DinD
To use this method, grab the docker:dind image from Docker Hub. This is a self-contained image: starting it launches a Docker daemon inside the new container, and that daemon runs independently of the host's. As such, if you run $ docker ps on the host, you will not see the inner daemon or any containers running on it. Likewise, running $ docker ps inside the inner container won't show any containers running on the host.
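A minimal sketch of this method (the container name is a placeholder, and setting DOCKER_TLS_CERTDIR to an empty string disables the image's TLS setup just to keep the example short):

```shell
# Start a nested Docker daemon; --privileged is required for dind.
docker run -d --privileged --name dind -e DOCKER_TLS_CERTDIR="" docker:dind

# Talk to the inner daemon: it has its own image cache and container list.
docker exec dind docker ps   # empty -- the host's containers are invisible
docker exec dind docker run --rm alpine echo "hello from the inner daemon"
```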
The Downsides To This Approach
This approach requires more than root access: it requires privileged mode ($ docker run --privileged). Why is that worse than plain root? Here's the difference:
Root vs. Privileged
Docker's daemon runs as root on the host machine. This isn't normally an issue, because each container is isolated in its own namespace: a process running as root inside a container can't access files outside its namespace. The issue with this Docker in Docker approach is the potential for privilege escalation attacks. One mitigation is to remap the container's root user to a less-privileged user on the host, denying it real root access. Privileged mode, however, is a flag set at runtime that allows a container to escape its namespace and access the entire host, including the host's hardware such as network devices and disk partitions.
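The remapping mentioned above can be done with Docker's user-namespace support; here's a hedged sketch (the daemon.json change requires a daemon restart):

```shell
# Host-side setup (requires a daemon restart):
#   /etc/docker/daemon.json:  { "userns-remap": "default" }
#
# With remapping enabled, UID 0 inside a container maps to an
# unprivileged subordinate UID (user "dockremap") on the host, so a
# container escape lands in an account with few privileges.
docker run --rm alpine id -u   # still reports 0 inside the container
```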
Method #2 - Mounting the Host's Docker Socket
This approach is sometimes called the Docker Outside of Docker approach. Here, you mount the host's Docker socket into a container, which allows Docker commands run inside the container to execute against the host's existing daemon. Containers created this way are siblings that live alongside the container that created them, not children. Using the socket mitigates the risks dind poses, but it still has security implications: anything with access to the socket can send instructions to the daemon, which runs as root on the host.
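A minimal sketch, using the official docker:cli image (any image with the Docker CLI installed works the same way; the container name is a placeholder):

```shell
# Mount the host's socket; the CLI inside talks to the *host* daemon.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps   # lists the host's containers, including this one

# Containers launched this way become siblings on the host, not children:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker run -d --name sibling alpine sleep 300
docker ps --filter name=sibling   # visible directly from the host
```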
Method #3 - Using Sysbox To Run Docker in Docker
Sysbox is an open-source dedicated container runtime that can nest containers without requiring privileged mode.
The biggest benefit of this approach, as mentioned above, is that it eliminates the security risks of running in privileged mode. Running Docker in Docker with Sysbox also comes with a host of other benefits that make it a compelling option for running nested containers.
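Assuming Sysbox is installed and registered with Docker as the sysbox-runc runtime, launching a system container looks like ordinary unprivileged Docker usage; nestybox/ubuntu-focal-systemd-docker is one of the images Nestybox publishes with Docker preinstalled:

```shell
# No --privileged flag needed: Sysbox provides the isolation.
docker run --runtime=sysbox-runc -d --name syscont \
  nestybox/ubuntu-focal-systemd-docker

# The inner Docker daemon runs inside the system container.
docker exec syscont docker ps
```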
Preload Container Images: Using Sysbox for a Docker in Docker setup allows you to easily preload inner container images into a system container image. The benefits of this approach are:
Improved Performance: This approach is ideal if your system container deploys a common set of inner containers. With preloaded images, your system container won't need to pull them from the network each time it starts.
Easier Deployments: Running a preloaded container means you won't need to pull those inner containers into the system container at runtime.
Air-Gapped Environments: In environments with no network connection, preloading the system container with its inner container images is a must-have.
Hardened Security: The tool enables you to run Docker, Kubernetes, K3s, systemd, and more in containers using secure rootless containers. This effectively protects against container escape attacks.
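One way to preload inner images, sketched under the assumption that my-syscont is a placeholder for a system container image with Docker installed, is to pull the images inside a running system container and commit the result:

```shell
# Start a system container and pull the inner images it will need...
docker run --runtime=sysbox-runc -d --name prep my-syscont
docker exec prep docker pull alpine:latest
docker exec prep docker pull nginx:latest

# ...then snapshot it as a new, preloaded system container image.
docker stop prep
docker commit prep my-syscont:preloaded
docker rm prep
```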
Containerized Dev Environments
Developers sometimes want sandboxed environments where they can quickly provision and work with containers in isolation on their system. Traditionally, they used virtual machines (VMs) to accomplish this. With Sysbox, they can spin up sandboxed environments quickly and easily.
So Which Docker in Docker Approach Is Best?
According to Jérôme Petazzoni, the creator of the dind implementation, the socket-based approach should generally be your preferred solution: bind-mounting your host's daemon socket is safer, more flexible, and just as feature-complete as starting a nested Docker daemon. He also notes, however, that the more modern and safer option is the Sysbox project.
Packagecloud is a cloud-based service for distributing software packages to your machines and environments. Packagecloud enables users to store all of the packages that are required by their organization, regardless of OS or programming language, and repeatedly distribute them to their destination machines.
Packagecloud scans for vulnerabilities, supply chain poisonings, and trojan-horse attacks to make sure the packages you use are safe. Packagecloud compares your packages to all known cybersecurity threats, ensuring nothing inside of your packages is vulnerable. Check out our free trial to see how easy it is to package your machines securely without needing to manage infrastructure.