Multi-architecture builds with docker buildx
Last reviewed on 2026-05-02
How to build a single image that runs on both linux/amd64 and linux/arm64 (and beyond), with practical guidance on emulation, native runners, and CI cost.
Why multi-arch matters now
Apple Silicon developer machines are arm64. AWS Graviton, Ampere-based GCE instances, and Azure ARM machines are arm64. Most CI runners are amd64. A single-architecture image either makes life painful for half your engineers or makes deployment painful in production. A manifest-list image with both architectures resolves the right variant automatically based on the platform pulling it.
The mental model
What's pushed to a registry under a single tag is not always a single image. A manifest list (also called a "multi-arch manifest" or, in OCI vocabulary, an "image index") points at one image per architecture. When you docker pull myimage:latest, the daemon picks the manifest entry matching the host's os/arch. The FROM line in your Dockerfile resolves the same way at build time.
The job of docker buildx build --platform linux/amd64,linux/arm64 ... is to (a) run your Dockerfile once per platform, (b) produce one image per platform, and (c) push them under a single tag with a manifest list on top.
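Conceptually, the artifact under the tag looks something like this abridged OCI image index (digests are placeholders, and registries may use the older Docker manifest-list media type instead):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:aaaa...",
      "platform": { "os": "linux", "architecture": "amd64" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:bbbb...",
      "platform": { "os": "linux", "architecture": "arm64" }
    }
  ]
}
```

The daemon walks the manifests array, matches its own os/architecture pair, and pulls only that entry's layers.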
Set up buildx and a builder instance
On modern Docker Desktop, buildx is installed and a default builder is configured. On a stock Linux installation, you may need to enable it:
docker buildx version
docker buildx create --name multiarch --driver docker-container --use
docker buildx inspect --bootstrap
The docker-container driver runs BuildKit inside a container, which is what unlocks multi-platform builds and richer caching. The default docker driver cannot do multi-arch.
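To confirm which driver each builder uses, list them. A sketch (requires a Docker daemon; the exact output layout varies by buildx version):

```shell
# List builders; the active one is marked with an asterisk.
# The multiarch builder should show the docker-container driver
# and the platforms it can target (including emulated ones, if
# QEMU handlers are registered).
docker buildx ls
```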
Native versus emulated builds
| Approach | How it works | Pros | Cons |
|---|---|---|---|
| QEMU emulation | binfmt_misc + QEMU run non-native instructions on the host CPU. | Works on a single runner; trivial to set up. | Significantly slower for compile-heavy steps; flaky for some toolchains. |
| Native multi-runner | One amd64 runner and one arm64 runner; buildx joins them under one builder. | Fast, predictable, builds at native speed. | Two CI runners; more setup; more cost. |
| Cross-compilation | Build on amd64 targeting arm64 with the language's cross toolchain. | Fastest of all for languages that support it (Go, Rust, Zig). | Doesn't help for languages that need to run their build (Node native modules, Python C extensions). |
QEMU is the obvious starting point. Switch to native runners or cross-compilation when emulated builds become a bottleneck — typically once your build crosses the 5-minute mark on the non-native architecture.
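If you do move to native runners, a single builder can span both machines. A sketch, assuming an arm64 host reachable over SSH (the hostname and user are placeholders):

```shell
# Create a builder backed by the local amd64 daemon.
docker buildx create --name native --driver docker-container --use

# Append a second node backed by a remote arm64 daemon over SSH.
docker buildx create --name native --append ssh://ci@arm64-runner.internal

# Bootstrap both nodes; buildx then schedules each requested
# platform onto the node that supports it natively.
docker buildx inspect native --bootstrap
```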
Enable QEMU emulation
docker run --privileged --rm tonistiigi/binfmt --install all
This registers handlers for non-native architectures. After it runs, docker run --rm --platform=linux/arm64 alpine uname -m should print aarch64 on an x86 host.
A working Dockerfile
# syntax=docker/dockerfile:1.7
FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ENV CGO_ENABLED=0
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/server ./cmd/server
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /out/server /server
USER nonroot
EXPOSE 8080
ENTRYPOINT ["/server"]
Two details make this fast:
- --platform=$BUILDPLATFORM on the builder stage runs the build natively on the host (amd64 on a typical CI runner), not under emulation. $TARGETOS/$TARGETARCH are populated by buildx for each requested platform, so a single Dockerfile cross-compiles for every target.
- The runtime stage doesn't get --platform, so each platform's build of the runtime stage pulls the matching distroless image.
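Before wiring this into CI, it can be worth sanity-checking one non-native platform locally. --load accepts only a single platform, so build one at a time; the image name here is a placeholder:

```shell
# Build just the arm64 variant and load it into the local daemon.
docker buildx build --platform linux/arm64 -t server:arm64-test --load .

# Confirm the loaded image really is arm64.
docker image inspect --format '{{.Architecture}}' server:arm64-test
```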
Build and push the manifest list
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t ghcr.io/example/server:1.0.0 \
--push \
.
--push uploads each per-platform image and writes the manifest list. --load would only work for a single-platform build into the local Docker daemon.
To verify:
docker buildx imagetools inspect ghcr.io/example/server:1.0.0
The output lists each manifest entry with its platform, OS, and digest.
CI integration
Most CI providers have a buildx-friendly action or step. The pattern in GitHub Actions:
- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ghcr.io/example/server:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
type=gha uses the GitHub Actions cache, which is per-architecture. For non-Actions CI, use type=registry with a dedicated cache reference.
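A registry-backed cache looks like this on the CLI. A sketch, using a hypothetical buildcache tag stored alongside the image (mode=max also exports intermediate-stage layers, which matters for the multi-stage Dockerfile above):

```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t ghcr.io/example/server:1.0.0 \
  --cache-from type=registry,ref=ghcr.io/example/server:buildcache \
  --cache-to type=registry,ref=ghcr.io/example/server:buildcache,mode=max \
  --push \
  .
```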
Without cache-from/cache-to, every platform rebuilds from scratch on every run, doubling or tripling build time. Always wire up a remote cache for multi-arch builds.
Common mistakes
- Forgetting to install QEMU. Without binfmt_misc handlers registered, the non-native build fails with exec format error.
- Pulling a package that has no arm64 build. Many APT and Alpine packages are multi-arch by default, but a few are not. Pin or substitute.
- Building Node or Python wheels under emulation. Native module compilation under QEMU is painfully slow. Switch to native runners or pre-build wheels.
- Copying a host-built binary. If your CI compiles the binary before the Docker build and copies it in, the binary matches whatever architecture CI ran on. Compile inside the build for each target.
- Mixed image references. If your FROM is a manifest-list image, buildx selects the right variant per platform. If it isn't, every platform pulls the same architecture and nothing works on the other one. Prefer base images that ship multi-arch.
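To check whether a base image ships multi-arch before depending on it, imagetools inspect works on any tag you can pull (here, the builder image from the Dockerfile above):

```shell
# Lists one manifest entry per platform. If only one architecture
# appears, the image is single-arch and will be emulated (or fail)
# everywhere else.
docker buildx imagetools inspect golang:1.22-alpine
```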
Pre-flight checklist
- Is buildx installed and using a docker-container driver?
- Are binfmt_misc handlers registered (or are you on native runners)?
- Does your Dockerfile use $TARGETOS and $TARGETARCH for any cross-compilation?
- Are your base images multi-arch?
- Is a remote cache configured for at least the platforms you build regularly?
- Has buildx imagetools inspect confirmed the manifest list contains both platforms?
Where to read next
- docker build reference — flags and behaviour the buildx CLI inherits.
- New Dockerfile features — heredocs, build secrets, and other BuildKit extras.
- Multi-stage builds — the foundation that makes cross-compilation tractable.
- Choosing a base image — which families ship multi-arch out of the box.
- Worked example: optimising microservices pipelines — caching strategies that interact with multi-arch.