As a developer diving into Docker, I’ve learned that efficiency and clarity are everything. Recently, I tackled a project that required running a Node.js application with Puppeteer (for browser automation) inside a Docker container. The challenge? Ensuring Puppeteer could reliably find Chrome while keeping the image lean. Let me walk you through my solution using multi-stage builds, and the lessons I learned along the way.
Why Multi-Stage Builds?
Multi-stage builds in Docker are like having separate workspaces for different tasks: one for setup, another for dependencies, and a final one for the polished application. They help reduce image size and improve security by excluding unnecessary build tools from the final image. Here’s how I structured mine:
Setting the Foundation
I started with a base image (`node:18`) and defined environment variables to configure the Node.js runtime and Puppeteer:
```dockerfile
FROM node:18 AS env

ENV NODE_ENV=development
ENV NODE_OPTIONS="--max-old-space-size=2048"
ENV PUPPETEER_CACHE_DIR="/root/.cache/puppeteer"
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD="true"

# Tell Puppeteer exactly where to find Chrome later
ENV PUPPETEER_EXECUTABLE_PATH="/usr/bin/google-chrome-stable"
```
Key Decisions:
- `PUPPETEER_SKIP_CHROMIUM_DOWNLOAD`: Since I planned to use a system-installed Chrome, skipping Puppeteer’s bundled Chromium saved bandwidth and avoided version conflicts. (A quick way to verify these variables is sketched right after this list.)
- Memory Limits: Increasing Node’s memory ceiling with `NODE_OPTIONS` prevented crashes during resource-heavy builds.
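Because each named stage can be built on its own, you can sanity-check these variables before going any further. Here’s a minimal sketch, where the image tag `env-check` is just a throwaway placeholder:

```bash
# Build only the "env" stage and confirm the Puppeteer-related variables are set
docker build --target env -t env-check .
docker run --rm env-check printenv PUPPETEER_EXECUTABLE_PATH PUPPETEER_SKIP_CHROMIUM_DOWNLOAD
```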
Installing Dependencies & Chrome
Next, I inherited the environment from Stage 0 and focused on installing system and Node.js dependencies:
```dockerfile
FROM env AS deps

WORKDIR /app

# Install Google Chrome
RUN apt-get update && apt-get install -y wget gnupg curl \
    && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list' \
    && apt-get update && apt-get install -y google-chrome-stable \
    && rm -rf /var/lib/apt/lists/*

# Debugging: Confirm Chrome's location
RUN which google-chrome-stable && google-chrome-stable --version

# Install Node modules with Yarn
COPY package.json yarn.lock* .yarnrc.yml ./
COPY .yarn/releases ./.yarn/releases
RUN yarn install

# Link Chrome to Puppeteer
RUN npx puppeteer browsers install chrome
```
Aha Moment:
Initially, Puppeteer couldn’t find Chrome even though it was installed. Adding `RUN which google-chrome-stable` revealed the binary was correctly placed at `/usr/bin/google-chrome-stable`, which led me to set `PUPPETEER_EXECUTABLE_PATH` in Stage 0. This environment variable became the missing puzzle piece!
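If you hit the same wall, it helps to poke at the deps stage directly instead of rebuilding the whole image each time. A minimal sketch, with `deps-check` as a throwaway tag:

```bash
# Build only the "deps" stage and confirm where Chrome landed
docker build --target deps -t deps-check .
docker run --rm deps-check which google-chrome-stable
docker run --rm deps-check google-chrome-stable --version
```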
Building the Application
In the final stage, I copied only what was needed (like `node_modules` and the source code) to keep the image slim:
```dockerfile
FROM env AS builder

WORKDIR /app

COPY --from=deps /app/node_modules ./node_modules
COPY . .

RUN yarn build

EXPOSE 3000
ENV PORT=3000

# Healthcheck for robustness
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s \
    CMD curl -f http://localhost:3000 || exit 1

CMD ["yarn", "start"]
```
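To check whether the slimming effort paid off, you can compare the final image against the dependency stage. A sketch, assuming a placeholder tag `puppeteer-app` for the final image and the `deps-check` tag from the earlier sketch:

```bash
# Building without --target builds the last stage ("builder") by default
docker build -t puppeteer-app .

# Compare the size of the final image with that of the dependency stage
docker images puppeteer-app
docker images deps-check
```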
Why the Healthcheck?
I added a `HEALTHCHECK` to ensure the app was responsive after deployment. It’s a simple but powerful way to catch runtime issues early.
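Docker records the healthcheck result on the container itself, so you can query it from the host once the app is running. A sketch, reusing the placeholder `puppeteer-app` name:

```bash
# Start the container, then read the health status after the 10s start period
docker run -d --name puppeteer-app -p 3000:3000 puppeteer-app
docker inspect --format '{{.State.Health.Status}}' puppeteer-app
```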
Testing Puppeteer in the Container
To verify everything worked, I included a test script (saved as `screenshot.js`):
```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    executablePath: process.env.PUPPETEER_EXECUTABLE_PATH,
    headless: true,
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });
  await browser.close();
  console.log('Screenshot saved as example.png!');
})();
```
Running this inside the container (`node screenshot.js`) confirmed Puppeteer could launch Chrome and take screenshots, with no more “Chrome not found” errors!
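For reference, this is roughly how the script can be invoked, assuming `screenshot.js` sits in the project root (so `COPY . .` picks it up) and the placeholder `puppeteer-app` names from above:

```bash
# Run the test inside the already-running container...
docker exec puppeteer-app node screenshot.js

# ...or as a one-off container from the image
docker run --rm puppeteer-app node screenshot.js
```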
Final Thoughts
This project taught me three big lessons:
- Explicit Paths Are Lifesavers: Puppeteer relies on precise Chrome paths. Defining `PUPPETEER_EXECUTABLE_PATH` eliminated guesswork.
- Debugging Builds Pays Off: Logging Chrome’s location during the build (`which google-chrome-stable`) saved hours of frustration.
- Multi-Stage Builds Are Worth It: Separating dependency installation from the final build kept the image clean and secure.
If you’re working with Puppeteer in Docker, start with multi-stage builds. They’re not just a best practice—they’re a game-changer.