Building multi-architecture Docker images for ARM

Posted on 1 February 2022, updated on 8 December 2023.

Building multi-architecture Docker container images was a niche consideration two years ago. With Apple's new M1 laptops, running on ARM CPUs, a new generation of developers discovered that Docker - or OCI - images are built for a specific CPU architecture and OS.


Let me show you how to do it!

Why is supporting multiple CPU architectures more important than ever?

Two years ago, running a light Kubernetes distribution such as k3s on ARM processors was the concern of "Raspberry Pi cluster" enthusiasts like me or Jeff Geerling. There were (and still are) some companies also trying to push for a distributed edge cloud computing universe, with small ARM servers hidden everywhere. While the idea of truly decentralized/distributed computing sounds great, I'm not convinced that adding billions more servers to the world is sustainable, and the current *crypto* environmental impact confirms it.

However, a true achievement of ARM CPUs is their reduced power consumption, and that is one of the reasons Apple's M1 laptops use such chips. Apple designed powerful laptops with great battery life, and with a lot of investment into automatic translation of amd64 apps to arm64 with Rosetta 2, the transition to this new architecture was transparent for most users.

However, developers did encounter issues when using Docker which, while adding a layer of abstraction to "build and ship everywhere" applications, still needs to talk to the CPU with the correct instruction set. Many developers switching to their new daily laptops discovered that they could not run some local environments using containers, and had to rebuild some images locally.

While it was painful for a few weeks, it proved that the migration to ARM was not impossible. Solutions like AWS Graviton, which runs VMs on ARM CPUs, now seem easier to migrate to, promising lower energy consumption, and therefore a lower price, for the same performance!

With all these new adopters, the community invested in making the build and distribution of images from a Dockerfile for multiple CPU architectures easier than ever!

Building ARM images is way easier!

If you want to follow along with this article, I have updated my stupid "Time as a Service Python project" on GitHub. It is a practical application of everything explained here for a modern Python app.

More than a year ago, as described in this previous article, you needed to set up some special emulation tooling such as QEMU on your computer. However, with the release of BuildKit in recent Docker versions (19.03+), you can build images for another architecture transparently.

BuildKit is a total rewrite of the Docker image building engine, with a focus on speed. It preserves all previous functionality but is packed with some great new features for better OCI images, illustrated in the sketch below this list:

  • Choosing permissions for files added with the COPY instruction
  • Pulling some files from a distant Docker image
  • One of my favorites: truly parallel multi-stage image builds, which only wait when a dependency is actually needed
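Here is a minimal sketch of the first two features; the file names (Dockerfile.example, entrypoint.sh) and base images are arbitrary examples:

cat <<'EOF' > Dockerfile.example
# syntax=docker/dockerfile:1
FROM alpine:3.15
# BuildKit only: choose the file permissions at COPY time
COPY --chmod=755 entrypoint.sh /entrypoint.sh
# BuildKit only: copy a single file from a distant image
COPY --from=busybox:latest /bin/busybox /usr/local/bin/busybox
ENTRYPOINT ["/entrypoint.sh"]
EOF
DOCKER_BUILDKIT=1 docker build -f Dockerfile.example -t buildkit-demo .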

I advise you to always activate BuildKit, even for a single-architecture build, since it will at least improve the speed of your builds.

You can enable BuildKit by setting the following environment variable:

export DOCKER_BUILDKIT=1 # activate for all future commands in the current shell
# or just for one command
DOCKER_BUILDKIT=1 docker build ...
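If you prefer to enable BuildKit for the whole daemon rather than per shell, Docker also supports a feature flag in its daemon configuration. A sketch for a typical Linux setup (the path and restart command may differ on your system, and any existing configuration should be merged, not overwritten):

# /etc/docker/daemon.json should contain:
#   { "features": { "buildkit": true } }
sudo systemctl restart docker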

However, to use the multi-arch feature, you need the buildx command, which enables advanced features only available with BuildKit. The command should be available on Docker Desktop for Windows and Mac, and on Linux distributions if you installed Docker from the DEB/RPM packages. While it should work out of the box on Mac, on Linux you'll need to install the qemu-user-static package.

docker buildx --help
# Optional: make 'docker build' an alias of 'docker buildx build'
# docker buildx install

# On Linux only, to support building for other architectures through emulation, install qemu-user-static
# sudo apt-get install -y qemu-user-static

# The first time you try a multi-platform build, you may encounter:
# error: multiple platforms feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")
docker buildx create --use
docker buildx ls
NAME/NODE          DRIVER/ENDPOINT             STATUS   PLATFORMS
loving_mccarthy *  docker-container
  loving_mccarthy0 unix:///var/run/docker.sock inactive
default            docker
  default          default                     running  linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/arm/v7, linux/arm/v6
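If you want something more reproducible than the auto-generated builder name above, you can also create and bootstrap a named builder; a small sketch (the name multiarch is an arbitrary example):

docker buildx create --name multiarch --driver docker-container --use
docker buildx inspect multiarch --bootstrap # starts the builder and lists its supported platforms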

Now that your environment is set up, you can build your app for several platforms with the --platform flag, for example here for amd64 and arm64. To push, you only need to add --push; it will reuse the previous build cache.

# Only build
docker buildx build --platform linux/amd64,linux/arm64 -t <image-tag> .
# When you need to push
docker buildx build --platform linux/amd64,linux/arm64 -t <image-tag> --push .
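Once pushed, you can verify that the manifest list really contains both platforms, without pulling anything, thanks to the imagetools subcommand:

docker buildx imagetools inspect <image-tag>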

Now that you are able to build a multi-arch Docker image and push it from your local machine, it is time to automate it in your CI/CD process.

A generic setup for GitHub: Actions + Registry

For this part, not much has changed since last year, except that crazy-max, the maintainer of the GitHub Actions I recommended, is now maintaining the official Docker GitHub Actions repositories.

Since we are on GitHub, it is a good occasion to use the associated package registry, ghcr.io, for which you won't have to create any secret when running in a GitHub Action. Moreover, GitHub Actions can cache Docker layers for very fast consecutive builds.

Also, I discovered the docker-metadata action, which automatically generates smart tags for your release depending on how the workflow was triggered, which is very handy for tools such as ArgoCD Image Updater:

  • a pull request --> pr-2
  • a commit on the main branch --> main
  • a tag --> v1.2.3

This gives us a very compact workflow that can work on any GitHub project with a Dockerfile!

name: Docker build and release

on:
  push:
    branches:
      - "main"
    tags:
      - "v*"
  pull_request:
    branches:
      - "main"

env:
  TARGET_PLATFORMS: linux/amd64,linux/arm64
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build_and_release:
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Generate automatically tags <https://github.com/docker/metadata-action>
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=schedule
            type=ref,event=branch
            type=ref,event=tag
            type=ref,event=pr
            type=sha

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Log in to the Container registry
        uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Build the images
      - name: Build and push docker
        uses: docker/build-push-action@v2
        with:
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: ${{ env.TARGET_PLATFORMS }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

You can see it running on the repository.
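Once the workflow has run, anyone can pull the resulting image:

# Docker transparently selects the manifest matching your CPU architecture
docker pull ghcr.io/dixneuf19/pytime-v2:main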

Now our Docker/OCI image is stored in a registry, but how does it work for a client running docker pull? How does it get the correct image?

How are Docker images distributed for multiple platforms?

While docker pull should magically find the correct image for your CPU architecture if one is available, an issue I encountered while installing ArgoCD Image Updater on ARM made me look into the OCI image distribution process.

A Docker (or now OCI) image registry is a REST API, with a layer of authentication and library management in front of an actual storage solution such as an S3 bucket. That is, for example, how the GitLab Docker Registry is implemented: it uses the official distribution/registry binary maintained by Docker, with several choices for the actual storage backend and for integration with GitLab authentication.
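You can even run this binary yourself to get a feel for it; a quick sketch using the official registry:2 image (the local-registry container name is an arbitrary example):

# Start a throwaway local registry listening on port 5000
docker run -d -p 5000:5000 --name local-registry registry:2
# Push any image to it by re-tagging it with the registry host
docker tag <image-tag> localhost:5000/<image-tag>
docker push localhost:5000/<image-tag>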

Let's learn how to interact with a registry using curl and jq (thanks to this blog post for demystifying the API). If you want, you can also dig into the details with the official specification.

First, we need to fetch a token to be able to query the actual registry. This is mandatory even to read public images, but in that case you don't need to supply any credentials. For a private image, you just need to add the --user argument. The token is scoped to a specific repository; here we will first query the ArgoCD Docker repository, which does not (as of now) publish images for ARM, only the amd64 format.

export REPOSITORY=argoproj/argocd
export TAG="v2.2.3"
export TOKEN=$(curl "https://auth.docker.io/token?scope=repository:$REPOSITORY:pull&service=registry.docker.io" | jq -r '.token')
# for a private repository: curl -u '<username>:<password>' ...
curl -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/$REPOSITORY/manifests/$TAG" | jq

We received a JSON object like this:

{
  "schemaVersion": 1,
  "name": "argoproj/argocd",
  "tag": "v2.2.3",
  "architecture": "amd64",
  "fsLayers": [
    {
      "blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
    },
// ...
  ],
  "history": [
   // ...
    {
      "v1Compatibility": "{\\"id\\":\\"a9499d07bd54b477e4b0a77b6c30e4934e365bfc1ad215911bad573692d5854c\\",\\"parent\\":\\"310b6806d386569e3da2236c27aa1cc1abe8b8d99d1b41681c59c419146492e6\\",\\"created\\":\\"2022-01-07T02:25:38.369481037Z\\",\\"container_config\\":{\\"Cmd\\":[\\"/bin/sh -c #(nop)  CMD [\\\\\\"bash\\\\\\"]\\"]},\\"throwaway\\":true}"
    },
    {
      "v1Compatibility": "{\\"id\\":\\"310b6806d386569e3da2236c27aa1cc1abe8b8d99d1b41681c59c419146492e6\\",\\"created\\":\\"2022-01-07T02:25:37.861169092Z\\",\\"container_config\\":{\\"Cmd\\":[\\"/bin/sh -c #(nop) ADD file:cfcb96e25bf4af2949d0c04953666b16dca08216ded8040ddfeedd0e782c6ddc in / \\"]}}"
    }
  ],
  "signatures": [
   // ...
  ]
}

If we look at the headers we received (with curl -v for example), we see content-type: application/vnd.docker.distribution.manifest.v1+prettyjws, which is one of the oldest formats used by Docker. In fact, several versions of these manifests exist, and we can nicely ask the registry for another version with the correct Accept header.

For example, we can ask for application/vnd.docker.distribution.manifest.v2+json, which is compatible with the OCI specification. The Docker registry does not seem to support the official OCI media type application/vnd.oci.image.manifest.v1+json, but the format is almost the same.

curl -H "Authorization: Bearer $TOKEN" -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' "https://registry-1.docker.io/v2/$REPOSITORY/manifests/$TAG" | jq
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 7362,
    "digest": "sha256:19ab951d44b1cb93d067f761584f70fb70efaafc3a69bc0d01f9c0d8ecfaf6e5"
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 31703311,
      "digest": "sha256:318226705d6bf4f94a70707b917c9bea2190a83f512b1c257dffbb691859831e"
    },
    // ...
  ]
}

Another interesting format is the manifest list, application/vnd.docker.distribution.manifest.list.v2+json, which displays all available architectures (the OCI equivalent is application/vnd.oci.image.index.v1+json, but the Docker registry does not support it). However, since ArgoCD only publishes one architecture, this list is not available for its repository on the Docker registry (I think this is a bug: we should be able to get a list with one element).

We'll move on to the official Python Docker image. Note that this kind of image, which you can pull directly without a "namespace" (image:tag instead of username/image:tag), is aliased to library/image. So here we will need a token for library/python.

export REPOSITORY="library/python"
export TAG="3.9"
export TOKEN=$(curl "https://auth.docker.io/token?scope=repository:$REPOSITORY:pull&service=registry.docker.io" | jq -r '.token')
# for a private repository: curl -u '<username>:<password>' ...
curl -H 'Accept: application/vnd.docker.distribution.manifest.list.v2+json' -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/$REPOSITORY/manifests/$TAG" | jq

Now we have several manifests to choose from. Note that if you don't specifically ask for application/vnd.docker.distribution.manifest.list.v2+json, you will end up with a default image, probably the linux/amd64 one, which will not work on ARM or Windows. Note also that fetching this manifest list first is the only way of knowing which platforms are actually supported.

{
  "manifests": [
// Here the manifest for a classic amd64 linux
    {
      "digest": "sha256:2518e00b368b533389f0b0934c123d15b364bea00270c25855714afaf7948078",
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      },
      "size": 2218
    },
// ...
// Here for arm64
    {
      "digest": "sha256:6eed89f4a82c64d6dbee2c4f56abd18932ccf12021fe824e1a1f6568caaf3651",
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "platform": {
        "architecture": "arm64",
        "os": "linux",
        "variant": "v8"
      },
      "size": 2218
    },
// ...
// Here an image for a specific version of Windows
    {
      "digest": "sha256:d45e425d57e35b22503d857ee0a195d475849e192cd412ee191a536e4c74c778",
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "platform": {
        "architecture": "amd64",
        "os": "windows",
        "os.version": "10.0.17763.2458"
      },
      "size": 3397
    }
  ],
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "schemaVersion": 2
}

Now your Docker client can check whether a compatible image for its system exists, and then query the specific manifest using its digest entry instead of the tag. For example, to fetch the ARM image manifest:

curl -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/$REPOSITORY/manifests/sha256:6eed89f4a82c64d6dbee2c4f56abd18932ccf12021fe824e1a1f6568caaf3651" | jq

Then your client can download each layer of the selected manifest by using the same syntax with each layer's digest.
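For example, a sketch of downloading a single layer blob; <layer-digest> stands for one of the digest values listed in the manifest above, and -L follows the redirect to the underlying storage backend:

curl -s -L -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/$REPOSITORY/blobs/<layer-digest>" -o layer.tar.gz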

The API on ghcr.io is almost the same; however, it does not seem to handle the Accept header correctly. That was the cause of my earlier bug: ArgoCD Image Updater was expecting a simple manifest (asking for application/vnd.docker.distribution.manifest.v2+json) but was receiving a list of manifests. Thankfully, a fix is now in place.

To conclude this dive into Docker images, we can look at the manifest list of our pytime-v2 application.

export REPOSITORY="dixneuf19/pytime-v2"
export TAG="main"
export TOKEN=$(curl "https://ghcr.io/token?scope=repository:$REPOSITORY:pull" | jq -r '.token')
# for a private repository: curl -u '<username>:<access_token>' ...
curl -H "Authorization: Bearer $TOKEN" "https://ghcr.io/v2/$REPOSITORY/manifests/$TAG" | jq
{
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:4ab16977fda652254a29800f9b72761b6b8696cc46e00ce2e4158f7d9882f70a",
      "size": 1788,
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:d986cec22a57d1263b1095370e51ee9d4e498ee1a59987376da51aaa37e99f02",
      "size": 1788,
      "platform": {
        "architecture": "arm64",
        "os": "linux"
      }
    }
  ]
}
# Fetch the actual manifest for arm64
# curl -H "Authorization: Bearer $TOKEN" "https://ghcr.io/v2/$REPOSITORY/manifests/sha256:d986cec22a57d1263b1095370e51ee9d4e498ee1a59987376da51aaa37e99f02"

We do have both the arm64 and amd64 images built in our CI/CD.


The Docker ecosystem has evolved a lot in the last two years, and we now have no excuse not to support several architectures in our continuous delivery workflows. The whole build and distribution system should now be clearer to you; however, it is still an evolving topic. As I found while researching this article, the OCI specification, which should be the norm, is seldom fully supported, and each registry service has small differences which can cause issues in our applications.

The same remark holds for Kubernetes, which works fine on arm64 even if etcd is still not 100% officially supported. You can still encounter some issues, such as the kubelet trying to use an image for another architecture when it doesn't find a correct one, instead of simply failing to launch the pod.
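If you run a mixed-architecture cluster, the standard kubernetes.io/arch node label can help you check, and if needed constrain, where workloads are scheduled; a quick sketch:

# Display each node with its CPU architecture label
kubectl get nodes -L kubernetes.io/arch
# Pods can then be pinned with a nodeSelector on kubernetes.io/arch
# when an image only exists for one architecture.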

We know that the journey to fully containerized and deploying an application can still be a rough ride, but at Padok we thrive on such challenging projects!