
Most startup engineers first encounter Docker when they're told the app needs to be containerised for deployment. They write a Dockerfile that works, ship it, and move on. Three months later, images are 2GB, builds take 12 minutes, the production container crashes on the first traffic spike, and nobody can figure out why the environment variable isn't loading. Docker done badly creates exactly the problems it's supposed to solve. Done well, it gives you consistent environments from development to production, predictable deployments, and the ability to scale horizontally without infrastructure surprises. Here's the setup that holds up in production, not just in the tutorial.
TL;DR
The most important Docker decisions for a startup app: use multi-stage builds to keep production images small (target under 200MB), don't run as root in production containers, use docker-compose for local development with named volumes for database persistence, and never bake secrets into your Dockerfile. A well-built production image runs faster, costs less in registry storage, and presents a smaller attack surface than a single-stage build. Most startup Dockerfiles get at least three of these wrong.
Stop Making These Dockerfile Mistakes First
Before covering the right setup, let's name the most common problems in startup Dockerfiles. These show up constantly and each one has a real cost in production.
| Mistake | Real cost | Fix |
|---|---|---|
| Using `latest` tags | Unpredictable builds when `latest` changes | Pin to a specific version: `node:20-alpine` |
| Copying everything before `npm install` | Cache invalidation on every code change, so slow rebuilds | Copy `package.json` first, install, then copy source |
| Single-stage build with dev dependencies | 2–4GB images, slow pulls, high registry costs | Multi-stage build: build stage includes dev deps, prod stage doesn't |
| Running as root | Any container escape becomes root access | Add a non-root user: `USER node` |
| Baking secrets into the image | Secrets visible in image layers and registry | Use environment variables at runtime, not build time |
Multi-Stage Builds: The One Dockerfile Pattern Worth Knowing
This is the most impactful Docker pattern for startup applications. A multi-stage build uses multiple FROM instructions in one Dockerfile. The build stage installs everything needed to compile or build your app. The production stage copies only the compiled output: no dev dependencies, no build tools, no source files.
Node.js multi-stage build structure
Stage 1 (builder): use node:20-alpine, copy package files, run npm ci, copy source, run the build command. Stage 2 (production): use node:20-alpine again, copy the production dependencies and compiled output from the builder stage, set NODE_ENV=production, add a non-root user, expose the port, and set the start command. The production image gets only what it needs to run: no TypeScript compiler, no test runners, no build tooling.
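As a concrete sketch of that structure (the `build` script and the `dist/` output directory are assumptions about your project layout, not requirements):

```dockerfile
# --- Stage 1: builder (dev deps + build tools, discarded later) ---
FROM node:20-alpine AS builder
WORKDIR /app
# Copy manifests first so the npm ci layer stays cached across source changes
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# Assumes a "build" script (e.g. tsc) that emits compiled output to dist/
RUN npm run build

# --- Stage 2: production (compiled output + prod deps only) ---
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
# Run as the unprivileged "node" user shipped with the official image
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

The second `npm ci --omit=dev` installs only production dependencies directly in the final stage, so nothing from the builder's toolchain survives into the shipped image.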
Target image sizes
A well-built Node.js production image should be under 200MB. React frontend images served by nginx can be under 30MB. If your production image is over 500MB, something is wrong: usually dev dependencies or build tools in the final stage. Run docker image ls to check and docker history [image] to see which layers are large.
Layer caching: order matters
Docker caches each layer and only rebuilds from the first changed layer. Copy files in order of how often they change: package.json and package-lock.json first (they change rarely), then source code (it changes every commit). This way, npm ci only re-runs when dependencies change, not on every source file change. A well-ordered Dockerfile drops rebuild time from 3–4 minutes to under 30 seconds when dependencies are unchanged.
Docker Compose for Local Development: The Setup That Actually Works
Docker Compose is where most startup developers spend more time than with production Docker. The local development setup needs to be fast to rebuild, easy to reset, and consistent with production without being identical: you want hot reload in development, not a production build cycle.
Compose file structure for a Node.js + PostgreSQL app
Define services: api (your Node.js app with a development Dockerfile target), db (postgres:15-alpine), and optionally redis if you use it. Mount your source code as a volume into the api container so file changes trigger hot reload without rebuilding. Use a named volume for the db data directory so the database persists across container restarts.
Hot reload in the container
Mount your source directory into the container with a bind mount: ./src:/app/src. Run your Node.js app with nodemon or ts-node-dev as the start command in development. When you save a file, the watcher in the container picks up the change and restarts the process without rebuilding the image. This gives you the same hot reload experience as running locally without Docker.
Database persistence with named volumes
Use a named volume (not a bind mount) for your database data directory. Named volumes persist across docker-compose down and restart. If you want to reset the database to a clean state, run docker-compose down -v, which removes volumes. If you don't specify volumes at all, your database data disappears every time you bring the stack down, which is painful once you rely on seed data or migrations.
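A sketch of the compose file described above. Service names, credentials, and the `development` build target are assumptions about your project, not fixed conventions:

```yaml
services:
  api:
    build:
      context: .
      target: development      # assumes a dev stage in your Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src         # bind mount: file changes trigger hot reload
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      db:
        condition: service_healthy   # wait until Postgres is actually ready

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app"]
      interval: 5s
      retries: 5
    volumes:
      - db_data:/var/lib/postgresql/data   # named volume: survives `down`

volumes:
  db_data:
```

Running `docker-compose down -v` removes `db_data` and gives you a clean database; plain `down` keeps it.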
Production Deployment: From Image to Running Container
Building the image correctly is half the job. Running it in production correctly is the other half. Here are the decisions that matter most once you're past the local development setup.
Health checks in your Dockerfile
Add a HEALTHCHECK instruction that hits your API's health endpoint. This tells your container orchestrator (ECS, Kubernetes, Cloud Run) whether the container is actually ready to receive traffic, not just that the process started. A container without a health check is assumed healthy as soon as it starts, even if your app takes 10 seconds to initialise.
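A minimal sketch, assuming your API exposes a `/health` endpoint on port 3000 (both the path and the port are assumptions). It uses `wget`, which ships with Alpine's busybox:

```dockerfile
# Mark the container unhealthy if /health stops responding.
# --start-period gives the app time to initialise before failures count.
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
```

Note that some orchestrators (Kubernetes in particular) ignore Dockerfile HEALTHCHECKs in favour of their own probes; on those platforms, configure the equivalent liveness/readiness check in the deployment spec.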
Logging to stdout, not to files
Containers should log to stdout and stderr, not to log files inside the container. Your orchestrator captures stdout logs and ships them to your log aggregation system (CloudWatch, Datadog, GCP Logging). Writing logs to files inside a container means they disappear when the container is replaced and require a log agent sidecar to collect them. Change your logging configuration to use console.log/console.error or a logger configured to write to stdout.
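As an illustration (not a prescribed library), here is a minimal structured logger that writes one JSON line per event to stdout/stderr; `formatLog` is a hypothetical helper name:

```javascript
// Minimal structured logging to stdout: one JSON object per line,
// which log collectors (CloudWatch, Datadog, GCP Logging) parse directly.
function formatLog(level, msg, extra = {}) {
  return JSON.stringify({ level, msg, time: new Date().toISOString(), ...extra });
}

// info goes to stdout, errors to stderr; the orchestrator captures both
const log = {
  info: (msg, extra) => console.log(formatLog("info", msg, extra)),
  error: (msg, extra) => console.error(formatLog("error", msg, extra)),
};

log.info("server started", { port: 3000 });
```

In practice a library like pino gives you the same stdout-first behaviour with levels and serialisers; the point is that nothing is written to files inside the container.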
Environment variables at runtime, never at build time
Secrets injected at build time using ARG or ENV in your Dockerfile are stored in image layers and visible to anyone with access to the registry. Pass all secrets as environment variables at container runtime: via ECS task definitions, Kubernetes secrets, or Cloud Run environment configuration. Use ENV in your Dockerfile only for non-sensitive defaults like NODE_ENV=production and PORT=3000.
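In the Dockerfile itself, keep ENV to non-sensitive defaults; the variable names below are illustrative:

```dockerfile
# Safe: non-sensitive defaults baked into the image
ENV NODE_ENV=production \
    PORT=3000

# Never do this: the value is stored in an image layer forever
# ENV DATABASE_URL=postgres://user:password@host/db
```

The secret is then supplied per environment when the container starts, e.g. `docker run -e DATABASE_URL="$DATABASE_URL" my-api:1.2.0` locally, or via the task definition / secret reference on ECS, Kubernetes, or Cloud Run.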
[INTERNAL LINK: CI/CD pipeline setup guide – devshire.ai/blog/cicd-pipeline-react-nodejs-setup-2026]
The Bottom Line
- Use multi-stage builds. Your production image should include only compiled output and production dependencies: no build tools, no dev dependencies. Target under 200MB for Node.js apps.
- Order your Dockerfile layers by change frequency: copy package files first, run npm install, then copy source code. This keeps the dependency cache valid across source-only changes.
- Never run production containers as root. Add a `USER node` instruction before your start command.
- Pass secrets at container runtime via environment variables; never bake them into the image with ARG or ENV at build time.
- In Docker Compose for local development, use bind mounts for source code (enables hot reload) and named volumes for database data (enables persistence).
- Add a HEALTHCHECK to your production Dockerfile. Without it, your container orchestrator assumes the container is healthy as soon as the process starts, even if your app isn't ready.
- Log to stdout, not to files inside the container. Your orchestrator collects stdout logs; container-internal log files disappear when containers are replaced.
Frequently Asked Questions
What is Docker and why should startups use it?
Docker packages your application and its dependencies into a container: a portable, isolated environment that runs consistently regardless of where it's deployed. For startups, the main benefits are: eliminating "works on my machine" environment problems, predictable deployments because the exact same image runs in staging and production, and easy horizontal scaling by running multiple container instances behind a load balancer.
What is a multi-stage Docker build and why does it matter?
A multi-stage build uses multiple FROM instructions in one Dockerfile. The first stage (build stage) includes everything needed to compile or build your app: TypeScript compiler, dev dependencies, build tools. The second stage (production stage) copies only the compiled output and production dependencies. The result is a small, clean production image without development tooling. A typical Node.js app goes from a 1.5–2GB single-stage image to a 150–200MB multi-stage image.
How do I keep Docker builds fast?
Layer caching is the key. Copy your package.json and lockfile, and run npm install, before copying source code. This way, Docker only re-runs npm install when dependencies change, not on every code change. Use specific base image versions (not latest) so the cache isn't invalidated by base image updates. On CI, use the GitHub Actions cache for Docker layers or registry-based caching with --cache-from.
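For GitHub Actions specifically, the official `docker/build-push-action` supports the GitHub Actions cache backend. A minimal sketch for your workflow's steps (the tag name is illustrative):

```yaml
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v5
  with:
    context: .
    tags: my-api:latest
    push: false
    cache-from: type=gha
    cache-to: type=gha,mode=max
```

`mode=max` caches intermediate stages of a multi-stage build too, so the builder stage's npm ci layer is reused across CI runs.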
How do I manage environment variables and secrets in Docker?
Never bake secrets into your Docker image using ARG or ENV instructions; they're stored in image layers. Pass secrets at container runtime: via --env flags, .env files (for local development only, never in production), ECS task definition environment configuration, Kubernetes secrets, or Cloud Run environment variable configuration. For sensitive production secrets, use a secrets manager (AWS Secrets Manager, Google Secret Manager) and inject values at startup.
How do I set up Docker Compose for local development?
Create a docker-compose.yml that defines your app service (using a development Dockerfile target), your database service (postgres:15-alpine or mysql:8), and any other dependencies. Mount your source directory as a bind volume into the app container to enable hot reload. Use a named volume for database data so it persists between restarts. Add a depends_on with health check condition so your app waits for the database to be ready before starting.
What base image should I use for a Node.js Docker container?
Use an Alpine-based image for smaller size: node:20-alpine or node:22-alpine. Pin to a specific version (not node:latest) so your builds are reproducible. Alpine images are 5–10x smaller than Debian-based images for the same Node.js version. The trade-off is that Alpine uses musl libc instead of glibc, which can cause compatibility issues with some native modules. If you hit compatibility issues with a specific npm package, fall back to node:20-slim.
Should I use Docker or Kubernetes for a startup app?
Start with Docker on a managed container service (AWS ECS, Google Cloud Run, Railway), not Kubernetes. Kubernetes adds significant operational complexity and is overkill for teams under 10 engineers or applications without complex orchestration needs. Cloud Run and ECS handle container deployment, auto-scaling, health checks, and rolling deploys without requiring Kubernetes knowledge. Move to Kubernetes when you have specific requirements (custom networking, multi-cluster, advanced scheduling) that managed services don't cover. [INTERNAL LINK: Kubernetes vs Docker Compose comparison – devshire.ai/blog/kubernetes-vs-docker-compose-startup]
Need a Developer Who Knows Docker and Cloud Deployment?
devshire.ai matches product teams with developers experienced in containerisation, CI/CD pipelines, and cloud infrastructure, pre-screened for real production experience. Get a shortlist in 48–72 hours.
Start Your Search at devshire.ai →
No upfront cost · Shortlist in 48–72 hrs · Freelance & full-time · Stack-matched candidates
Related reading: Best Tech Stack for Startups in 2026 · How to Automate Your Startup Backend With AI · API-First SaaS Development · How to Scale Your MVP to 10k Users · SaaS Security Best Practices
Devshire Team
San Francisco · Responds in <2 hours

