Multi-architecture builds with docker buildx

Last reviewed on 2026-05-02

How to build a single image that runs on both linux/amd64 and linux/arm64 (and beyond), with practical guidance on emulation, native runners, and CI cost.

Why multi-arch matters now

Apple Silicon developer machines are arm64. AWS Graviton, Ampere-based GCE instances, and Azure ARM machines are arm64. Most CI runners are amd64. A single-architecture image either makes life painful for half your engineers or makes deployment painful in production. A manifest-list image with both architectures resolves the right variant automatically based on the platform pulling it.

The mental model

What's pushed to a registry under a single tag is not always a single image. A manifest list (also called a "multi-arch manifest" or, in OCI vocabulary, an "image index") points at one image per architecture. When you run docker pull myimage:latest, the daemon picks the manifest entry matching the host's os/arch. The FROM line in your Dockerfile resolves the same way at build time.
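
You can see an index for yourself by inspecting any multi-arch tag. The output below is trimmed to the relevant fields and the digests are placeholders; depending on how the image was pushed, the media type may be the OCI index shown here or Docker's older manifest-list equivalent:

docker manifest inspect alpine:latest

{
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    { "digest": "sha256:...", "platform": { "architecture": "amd64", "os": "linux" } },
    { "digest": "sha256:...", "platform": { "architecture": "arm64", "os": "linux" } }
  ]
}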

The job of docker buildx build --platform linux/amd64,linux/arm64 ... is to (a) run your Dockerfile once per platform, (b) produce one image per platform, and (c) push them under a single tag with a manifest list on top.

Set up buildx and a builder instance

On modern Docker Desktop, buildx is installed and a default builder is configured. On a stock Linux installation, verify the plugin is present and create a container-backed builder yourself:

docker buildx version
docker buildx create --name multiarch --driver docker-container --use
docker buildx inspect --bootstrap

The docker-container driver runs BuildKit inside a container, which is what unlocks multi-platform builds and richer caching. The default docker driver cannot do multi-arch.
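
To confirm the builder took effect, list your builders. The output here is abridged, and the platform list depends on your host and any registered emulators; the asterisk marks the builder in use:

docker buildx ls

NAME/NODE     DRIVER/ENDPOINT             STATUS   PLATFORMS
multiarch *   docker-container
  multiarch0  unix:///var/run/docker.sock running  linux/amd64, linux/arm64, linux/arm/v7, ...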

Native versus emulated builds

QEMU emulation
  How it works: binfmt_misc plus QEMU run non-native instructions on the host CPU.
  Pros: works on a single runner; trivial to set up.
  Cons: significantly slower for compile-heavy steps; flaky for some toolchains.

Native multi-runner
  How it works: one amd64 runner and one arm64 runner; buildx joins them under one builder.
  Pros: fast and predictable; builds at native speed.
  Cons: two CI runners; more setup; more cost.

Cross-compilation
  How it works: build on amd64 targeting arm64 with the language's cross toolchain.
  Pros: fastest of all for languages that support it (Go, Rust, Zig).
  Cons: doesn't help for languages that need to run their build (Node native modules, Python C extensions).

QEMU is the obvious starting point. Switch to native runners or cross-compilation when emulated builds become a bottleneck — typically once your build crosses the 5-minute mark on the non-native architecture.
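
When you make that switch, the native multi-runner setup is a pair of buildx create calls that register one node per architecture under a single builder name. A sketch, assuming a local amd64 daemon and a remote arm64 host reachable over SSH (the endpoint is a placeholder for your own runner):

# Node 1: the local amd64 daemon
docker buildx create --name ci --node ci-amd64 --platform linux/amd64

# Node 2: a remote arm64 host (hypothetical endpoint)
docker buildx create --append --name ci --node ci-arm64 \
  --platform linux/arm64 ssh://builder@arm64-runner.internal

docker buildx use ci
docker buildx inspect --bootstrap

When you then pass --platform linux/amd64,linux/arm64, buildx routes each half of the build to the matching node.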

Enable QEMU emulation

docker run --privileged --rm tonistiigi/binfmt --install all

This registers handlers for non-native architectures. After it runs, docker run --rm --platform=linux/arm64 alpine uname -m should print aarch64 on an x86 host.
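
On a Linux host you can also inspect the registered handlers directly (on Docker Desktop these live inside the VM rather than on macOS or Windows itself):

ls /proc/sys/fs/binfmt_misc/ | grep qemu
# qemu-aarch64, qemu-arm, qemu-riscv64, ... (the exact list depends on what was installed)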

A working Dockerfile

# syntax=docker/dockerfile:1.7
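# $BUILDPLATFORM pins this stage to the build host's own architecture,
# so the compiler never runs under emulation.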
FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ENV CGO_ENABLED=0
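# TARGETOS/TARGETARCH are filled in by buildx per platform (e.g. linux + arm64);
# with CGO disabled, Go cross-compiles a static binary for each target.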
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/server ./cmd/server

FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /out/server /server
USER nonroot
EXPOSE 8080
ENTRYPOINT ["/server"]

Two details make this fast:

  1. The builder stage is pinned to $BUILDPLATFORM, so compilation always runs natively on the build host; Go cross-compiles to each target via GOOS/GOARCH instead of running under emulation.
  2. The runtime stage doesn't get --platform, so each platform's build of the runtime stage uses the matching distroless image.
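
To watch that resolution happen, you can add a throwaway stage that echoes the automatic platform args; this is a debugging sketch, not part of the shipped image:

FROM --platform=$BUILDPLATFORM alpine AS debug
ARG BUILDPLATFORM
ARG TARGETPLATFORM
RUN echo "building on $BUILDPLATFORM, targeting $TARGETPLATFORM"

Build it with --target debug --progress=plain to see one echo line per requested platform.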

Build and push the manifest list

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t ghcr.io/example/server:1.0.0 \
  --push \
  .

--push uploads each per-platform image and writes the manifest list. --load would only work for a single-platform build into the local Docker daemon.
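
For local testing, the usual workaround is to build and load one platform at a time (newer Docker versions with the containerd image store lift this restriction); the dev tag here is illustrative:

docker buildx build --platform linux/arm64 --load -t ghcr.io/example/server:dev .
docker run --rm --platform linux/arm64 ghcr.io/example/server:dev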

To verify:

docker buildx imagetools inspect ghcr.io/example/server:1.0.0

The output lists each manifest entry with its platform, OS, and digest.
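
Abridged output for a successful two-platform push looks like this (digests elided):

Name:      ghcr.io/example/server:1.0.0
MediaType: application/vnd.oci.image.index.v1+json
Digest:    sha256:...

Manifests:
  Name:      ghcr.io/example/server:1.0.0@sha256:...
  MediaType: application/vnd.oci.image.manifest.v1+json
  Platform:  linux/amd64

  Name:      ghcr.io/example/server:1.0.0@sha256:...
  MediaType: application/vnd.oci.image.manifest.v1+json
  Platform:  linux/arm64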

CI integration

Most CI providers have a buildx-friendly action or step. The pattern in GitHub Actions:

- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
- uses: docker/build-push-action@v6
  with:
    context: .
    platforms: linux/amd64,linux/arm64
    push: true
    tags: ghcr.io/example/server:${{ github.sha }}
    cache-from: type=gha
    cache-to: type=gha,mode=max

type=gha uses the GitHub Actions cache, which is per-architecture. For non-Actions CI, use type=registry with a dedicated cache reference.
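
A registry-backed cache is the same build command with cache-from/cache-to pointing at a tag you set aside for cache blobs; buildcache below is an arbitrary name of your choosing:

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t ghcr.io/example/server:1.0.0 \
  --cache-from type=registry,ref=ghcr.io/example/server:buildcache \
  --cache-to type=registry,ref=ghcr.io/example/server:buildcache,mode=max \
  --push \
  .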

Cache misses on multi-arch are expensive. Each platform has its own layer cache. If you neglect cache-from/cache-to, every platform rebuilds from scratch on every run, doubling or tripling build time. Always wire up a remote cache for multi-arch.

Common mistakes

Building multi-platform on the default docker driver. Only a docker-container builder can produce a manifest list.

Combining --platform linux/amd64,linux/arm64 with --load. The classic local image store holds a single platform; push to a registry instead.

Using a base image that isn't multi-arch. If the FROM image publishes only amd64, the arm64 half of the build fails at the FROM step.

Skipping the remote cache. Every platform then rebuilds from scratch on every run, as described above.

Pre-flight checklist

  1. Is buildx installed and using a docker-container driver?
  2. Are binfmt_misc handlers registered (or are you on native runners)?
  3. Does your Dockerfile use $TARGETOS and $TARGETARCH for any cross-compilation?
  4. Are your base images multi-arch?
  5. Is a remote cache configured for at least the platforms you build regularly?
  6. Has buildx imagetools inspect confirmed the manifest list contains both platforms?

Where to read next