Deploying with Docker

Building a Docker image is a common way to deploy all sorts of applications. However, doing so from a monorepo has several challenges.

The problem

TL;DR: In a monorepo, unrelated changes can make Docker do unnecessary work when deploying your app.

Let's imagine you have a monorepo that looks like this:

├── apps
│   ├── docs
│   │   ├── server.js
│   │   └── package.json
│   └── web
│       └── package.json
├── package.json
└── package-lock.json
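
For this layout, the root package.json declares the workspaces. A minimal sketch, assuming npm workspaces (the name field is illustrative):

package.json
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["apps/*"]
}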

You want to deploy apps/docs using Docker, so you create a Dockerfile:

Dockerfile
FROM node:16
 
WORKDIR /usr/src/app
 
# Copy root package.json and lockfile
COPY package.json ./
COPY package-lock.json ./
 
# Copy the docs package.json
COPY apps/docs/package.json ./apps/docs/package.json
 
RUN npm install
 
# Copy app source
COPY . .
 
EXPOSE 8080
 
CMD [ "node", "apps/docs/server.js" ]

This copies the root package.json and the root lockfile into the Docker image, installs dependencies, copies the app source, and starts the app.
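
To try it out, a typical build and run might look like this (the docs image tag is illustrative):

docker build -t docs .
docker run -p 8080:8080 docs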

You should also create a .dockerignore file to prevent node_modules from being copied in with the app's source.

.dockerignore
node_modules
npm-debug.log

The lockfile changes too often

Docker is pretty smart about how it deploys your apps. Just like Turbo, it tries to do as little work as possible.

In our Dockerfile's case, Docker will only rerun npm install if the files copied into the image differ from the previous build. Otherwise, it restores the cached layer containing the node_modules directory.

This means that whenever package.json, apps/docs/package.json, or package-lock.json changes, the Docker build will rerun npm install.

This sounds great - until we realise the catch. The package-lock.json is global to the monorepo, so installing a new package inside apps/web will cause apps/docs to redeploy.

In a large monorepo, this can result in a huge amount of lost time, as any change to a monorepo's lockfile cascades into tens or hundreds of deploys.
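
To see this in action, imagine adding a dependency to apps/web. A sketch, assuming npm workspaces and that the web app's package is named web (lodash is just an example):

# Install a package into apps/web only...
npm install lodash --workspace=web

# ...yet the shared root lockfile changes:
git status --short
#  M apps/web/package.json
#  M package-lock.json

# Docker's cache for the `COPY package-lock.json ./` layer is now
# invalidated, so the next build of apps/docs reruns `npm install`.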

The solution

The solution is to prune the inputs to the Dockerfile to only what is strictly necessary. Turborepo provides a simple solution - turbo prune.

turbo prune docs --docker

Running this command creates a pruned version of your monorepo inside an ./out directory. It only includes workspaces which docs depends on.

Crucially, it also prunes the lockfile so that only the relevant node_modules will be downloaded.

The --docker flag

By default, turbo prune puts all relevant files inside ./out. But to optimize caching with Docker, we ideally want to copy the files over in two stages.

First, we want to copy over only what we need to install the packages. When running --docker, you'll find this inside ./out/json.

out
├── json
│   ├── apps
│   │   └── docs
│   │       └── package.json
│   └── package.json
├── full
│   ├── apps
│   │   └── docs
│   │       ├── server.js
│   │       └── package.json
│   ├── package.json
│   └── turbo.json
└── package-lock.json

Afterwards, you can copy the files in ./out/full to add the source files.

Splitting up dependencies and source files in this way lets us only run npm install when dependencies change - giving us much faster Docker builds.
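
Here's a minimal sketch of that two-stage copy, using npm to match the earlier example. It assumes turbo prune docs --docker has already been run on the host:

Dockerfile
FROM node:16

WORKDIR /usr/src/app

# First copy only what's needed to install: the pruned manifests
# and the pruned lockfile. This layer - and the npm install below -
# stays cached until dependencies actually change.
COPY out/json/ .
COPY out/package-lock.json ./package-lock.json
RUN npm install

# Then copy the source files, which change far more often.
COPY out/full/ .

EXPOSE 8080

CMD [ "node", "apps/docs/server.js" ]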

Without --docker, all pruned files are placed inside ./out.
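
For reference, the pruned output without --docker looks roughly like this, with no json/full split (sketched from the structure above):

out
├── apps
│   └── docs
│       ├── server.js
│       └── package.json
├── package.json
├── turbo.json
└── package-lock.json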

Example

Our detailed with-docker example goes into depth on how to utilise prune to its full potential. Here's the Dockerfile, copied over for convenience.

This Dockerfile is written for a Next.js app that is using the standalone output mode.

FROM node:18-alpine AS base
 
FROM base AS builder
RUN apk add --no-cache libc6-compat
RUN apk update
# Set working directory
WORKDIR /app
RUN yarn global add turbo
COPY . .
RUN turbo prune web --docker
 
# Add lockfile and package.json files of the isolated subworkspace
FROM base AS installer
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /app
 
# First install the dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/yarn.lock ./yarn.lock
RUN yarn install
 
# Build the project
COPY --from=builder /app/out/full/ .
RUN yarn turbo run build --filter=web...
 
FROM base AS runner
WORKDIR /app
 
# Don't run production as root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs
 
COPY --from=installer /app/apps/web/next.config.js .
COPY --from=installer /app/apps/web/package.json .
 
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public
 
CMD [ "node", "apps/web/server.js" ]

Remote caching

To take advantage of remote caches during Docker builds, you will need to make sure your build container has credentials to access your Remote Cache.

There are many ways to handle secrets in a Docker image. We'll use a simple strategy here: pass the secrets as build arguments in a multi-stage build, so that they don't end up in the final image.

Assuming you are using a Dockerfile similar to the one above, we will bring in some environment variables from build arguments right before turbo build:

ARG TURBO_TEAM
ENV TURBO_TEAM=$TURBO_TEAM
 
ARG TURBO_TOKEN
ENV TURBO_TOKEN=$TURBO_TOKEN
 
RUN yarn turbo run build --filter=web...

turbo will now be able to hit your Remote Cache. To see a Turborepo cache hit during an uncached Docker build, run a command like this one from your project root:

docker build -f apps/web/Dockerfile . --build-arg TURBO_TEAM="your-team-name" --build-arg TURBO_TOKEN="your-token" --no-cache
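
If you'd rather not pass the token as a build argument at all, BuildKit secret mounts are one alternative: the secret is only visible during its RUN instruction and is never stored in a layer. A sketch (the secret id turbo_token and the file ./turbo-token.txt are illustrative):

Dockerfile
# syntax=docker/dockerfile:1
# ...
ARG TURBO_TEAM
ENV TURBO_TEAM=$TURBO_TEAM

# The secret is mounted at /run/secrets/turbo_token for this
# instruction only, then discarded.
RUN --mount=type=secret,id=turbo_token \
    TURBO_TOKEN=$(cat /run/secrets/turbo_token) \
    yarn turbo run build --filter=web...

Then build with:

docker build -f apps/web/Dockerfile . --build-arg TURBO_TEAM="your-team-name" --secret id=turbo_token,src=./turbo-token.txt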