DEVing in a Container

I do a fair bit of development work for various side projects that I contribute to or work on myself.  For a very long time, I used a single dev virtual machine that I would try to build out to fit all my needs.  While this works fine in theory, there are several disadvantages to it.  The first is dependency version conflicts.  It is not at all unusual for one project to require one set of packages while another project needs a different set.  Usually the versions don’t actually conflict, but on occasion they do.  Yes, you can work around this with Python virtual environments, but to be completely honest, I hate the damn things and avoid them whenever possible.

Another problem is having a pristine environment.  It is way too easy (relatively speaking) to make something work in my environment.  It’s a lot harder to make sure it works in a generic user’s environment, because that means accounting for what they may or may not already have installed on their system.  If you start from a clean environment each time on each project, it’s a lot easier to identify exactly what is needed, because you have to install it yourself.

This sounds like a great use case for a container.  Containers offer lots of advantages, and one of them is a pristine environment every time one is instantiated.  Fortunately, it appears that the people at Microsoft agree, because they came up with the idea of the devcontainer for Visual Studio Code.

I won’t go into creating the VM.  I’m just using a Rocky 9 install running on VMware Workstation Pro.  If you didn’t hear the news, you can now use Workstation Pro for free.  It’s one of the few decent things Broadcom has done since they purchased VMware.  Yes, I know turning off SELinux and FirewallD isn’t the best idea, but this is a relatively isolated VM (sitting behind a firewall on my home network, and then behind a NAT on my computer itself), so I’m not too worried about it and it just makes life a lot easier.

yum install -y epel-release
yum update -y
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
yum install git -y
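
One note: setenforce 0 only flips SELinux into permissive mode until the next reboot.  To make it stick (again, not a best practice, but this is an isolated dev VM), you also need to edit /etc/selinux/config.  A one-liner like this does it:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config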

To install Docker (I’m not a fan of Podman), I generally follow the walk-through found at https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-rocky-linux-9

dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install -y docker-ce docker-ce-cli containerd.io
systemctl start docker
systemctl enable docker
docker version
Client: Docker Engine - Community
 Version:           27.1.1
 API version:       1.46
 Go version:        go1.21.12
 Git commit:        6312585
 Built:             Tue Jul 23 19:58:57 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          27.1.1
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.21.12
  Git commit:       cc13f95
  Built:            Tue Jul 23 19:57:11 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.19
  GitCommit:        2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa920
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
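
If you plan on running Docker as a non-root user, you’ll also want to add that user to the docker group so you aren’t typing sudo in front of every command.  I’m doing everything as root on this throwaway VM, so I skip it, but it would look something like this (swap in your own username):

usermod -aG docker youruser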

Now that Docker is up, I go in and manually add the SSH keys that I’ve already created and associated with my GitHub account.  Then I configure git with my name and email address.

git config --global user.name "My Name"
git config --global user.email "myemail@gmail.com"
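
To confirm the keys are in place and GitHub is happy with them, a quick test never hurts.  You should get back something like this:

ssh -T git@github.com
Hi pyrodie18! You've successfully authenticated, but GitHub does not provide shell access.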

The last prep step is to create a root directory to do my development work out of.  I keep it simple and just create one called devel.

mkdir ~/devel

Now, there are a few ways you can handle things.  You could define a development environment for each individual project you’re working on, and in some cases that may well make sense.  For me though, not so much.  Most of my work is in Ansible or Python, and the workflow and tools I use with Ansible, for example, are always the same, so it would be great if I could define the environment once and have it used for all of my projects.  To do that, I create an `ansible` subdirectory within my devel folder and then put my .devcontainer folder in there.

mkdir ~/devel/ansible
mkdir ~/devel/ansible/.devcontainer
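
For reference, once everything in the next few steps is in place, the layout will look something like this (project-one and project-two are just stand-ins for whatever repos end up in here):

~/devel
└── ansible
    ├── .devcontainer
    │   ├── devcontainer.json
    │   ├── Dockerfile
    │   ├── requirements.txt
    │   └── ansible-requirements.yml
    ├── project-one
    └── project-two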

Because I want to use the same container definition for all of my Ansible projects, I place it at the ansible level rather than inside any one project.  VS Code will create the devcontainer environment for you if you want it to, but we’re going to do it the manual way, because this is a tutorial and I want to make changes to what they are doing anyway.  I start by defining my Dockerfile.

FROM geerlingguy/docker-ubuntu2204-ansible:latest

LABEL maintainer="Troy Ward"
# Install dependencies.
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    openssh-client \
    && rm -Rf /var/lib/apt/lists/* \
    && rm -Rf /usr/share/doc && rm -Rf /usr/share/man \
    && apt-get clean

COPY requirements.txt /tmp/requirements.txt
COPY ansible-requirements.yml /tmp/ansible-requirements.yml

# Install Ansible via Pip.
RUN pip3 install -r /tmp/requirements.txt
RUN ansible-galaxy install -r /tmp/ansible-requirements.yml

I won’t spend time going into this other than a short description.  I am basing my image off of geerlingguy’s Ubuntu 22.04 image for two reasons.  First, his images already have most of what is needed to run Ansible, which minimizes what I have to do.  Second, I’m using his Ubuntu image (I prefer Rocky myself) because, as you’ll see in a bit, the devcontainer features are written to run on Ubuntu.  You’ll notice that I am copying two files over: one is my pip requirements for this particular environment, and the other is my Ansible Galaxy requirements.
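
Just so it’s clear what those two files might look like, here’s a minimal sketch.  These exact packages and collections are only examples; the real contents are whatever your projects call for:

# requirements.txt
ansible
ansible-lint
molecule
molecule-plugins[docker]

# ansible-requirements.yml
---
roles:
  - name: geerlingguy.docker
collections:
  - name: community.docker

Next we’ll look at the devcontainer.json file.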

{
    "name": "Ansible Dev",
    "build": {
        "dockerfile": "Dockerfile"
    },
    "features": {
        "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {
            "version": "latest",
            "enableNonRootDocker": "true",
            "moby": "true"
        },
        "ghcr.io/devcontainers/features/git:1": {}
    },
    "remoteEnv": {
        "LOCAL_WORKSPACE_FOLDER": "${localWorkspaceFolder}"
    },
    "mounts": [
        "type=bind,source=/root/.ssh,target=/root/.ssh,readonly"
    ]
}

This is the file that actually does the magic. On lines 3 and 4, I am telling the container to build itself from my Dockerfile (I could also point it at a prebuilt Docker image). On lines 6-13 I am having devcontainer install two “features”. A feature is just a pre-built set of commands and resources that runs in the container to do something. The first feature is docker-outside-of-docker. Basically, what this does is expose the Docker socket from my actual host to the inside of the container, which means that when I run a docker command inside the container, it is actually executed by the host’s Docker engine. This is really important for my Ansible work because I use Molecule to test all of my stuff, which in my case at least, uses Docker.
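
As an aside, if you would rather pull a prebuilt image than build from a Dockerfile, you just swap the build block for an image key.  Something like this (the image name here is made up for illustration):

{
    "name": "Ansible Dev",
    "image": "ghcr.io/pyrodie18/ansible-dev:latest"
}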

The second feature I am installing is git, which simply installs and configures git within the container. If you want to look at the actual source code of what each of these features does, you can find it at https://github.com/devcontainers/features/tree/main/src.

On lines 14-16, I am passing the path of the folder I am running out of (whatever it is) into the container as the LOCAL_WORKSPACE_FOLDER environment variable. The devcontainer mounts that folder into the container as my workspace automatically; the environment variable matters because of docker-outside-of-docker, since any docker commands I run inside the container are executed by the host and therefore need host paths. Finally, on line 18, I’m adding one more mount point within the container with all of my SSH stuff in it so that I can use git. What isn’t in here, because it happens automagically, is anything that connects my personal git settings (like the name and email we configured above) to the container; devcontainer will actually do that for me. You can find my entire build config on my GitHub at https://github.com/pyrodie18/ansible-devcontainer.
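
To make the LOCAL_WORKSPACE_FOLDER piece concrete, here’s a contrived example you could run from a terminal inside the container once it’s up.  Because the docker command is actually being executed by the host’s engine, the source of a bind mount has to be a path the host understands, which is exactly what that variable holds:

docker run --rm -v "$LOCAL_WORKSPACE_FOLDER":/work alpine ls /work

If you pointed the mount at the container-side path instead, the host’s daemon would have no idea what you meant.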

Now to use the container. Within VS Code, I open the Command Palette (View > Command Palette) and run “Dev Containers: Reopen in Container”. In the background, VS Code will have Docker build the container image and then instantiate it. Next it installs the various features and drops me (and my code) inside. From there I can clone or create new git repos under my folder. The first time I open a new folder, I simply go to File > Open Folder and navigate to the folder I want. Because I’m doing this from within the context of the container, it will reopen the folder still inside my container.
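
And that’s really the payoff.  From a terminal inside the container, testing a role with Molecule works just like it does anywhere else, except the containers Molecule spins up are actually running on the host thanks to docker-outside-of-docker.  Assuming a role with a standard Molecule scenario (my-role here is just a placeholder):

cd my-role
molecule test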