CI/CD, Docker, and Zero-Downtime Deployments
Shipping code shouldn't be a gamble. Here's how to build pipelines that make production deploys boring — in the best way.
Every developer has broken production at least once. The first time, it's a rite of passage. The second time, it's a process problem. Automated deployment pipelines exist to make the gap between your local machine and production as thin and auditable as possible.
Why Containerize First
Before you can automate deployments, your app needs to run identically everywhere. Docker solves the "works on my machine" problem by packaging the application, its runtime, and its configuration into a single reproducible image.
A production-grade Dockerfile:

```dockerfile
# Stage 1: install production dependencies only
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2: minimal runtime image
FROM node:20-alpine
WORKDIR /app
COPY . .
# copy dependencies last so a stray local node_modules can't shadow them
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "server.js"]
```
Multi-stage builds keep the final image lean — only production dependencies, no build tooling. A typical Express API image goes from ~900MB to under 150MB with this approach.
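Because `COPY . .` pulls in the entire build context, a `.dockerignore` next to the Dockerfile keeps local artifacts out of the image and the build context small. A minimal sketch; the exact entries depend on your project:

```
node_modules
.git
.env
*.log
dist
```

Excluding `node_modules` also guarantees the image only ever contains the dependencies installed inside the builder stage.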
GitHub Actions Pipeline
A minimal, production-ready CI/CD pipeline does four things on every push to main: run tests, build the Docker image, push it to a registry, and trigger a rolling deploy on the server.
```yaml
# .github/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches: [main]

permissions:
  contents: read
  packages: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run tests
        run: |
          npm ci
          npm test

      - name: Log in to GHCR
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin

      - name: Build & push image
        run: |
          docker build -t ghcr.io/${{ github.actor }}/myapp:${{ github.sha }} .
          docker push ghcr.io/${{ github.actor }}/myapp:${{ github.sha }}

      - name: SSH & rolling deploy
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            docker pull ghcr.io/${{ github.actor }}/myapp:${{ github.sha }}
            # clean up any leftover container from a failed previous deploy
            docker rm -f myapp-new || true
            # start the new container on a secondary port, alongside the old one
            docker run -d --name myapp-new -p 3001:3000 \
              ghcr.io/${{ github.actor }}/myapp:${{ github.sha }}
            # health-check the new instance before cutting traffic over
            sleep 10 && curl -f http://localhost:3001/health
            # retire the old container and promote the new one
            # (a full script would alternate between two fixed ports)
            docker rm -f myapp || true
            docker rename myapp-new myapp
```
Zero-Downtime with Blue-Green
The pattern above is a simplified blue-green deploy: the new container starts alongside the old one, a health check validates the new instance, and traffic is cut over only on success. Pointing an nginx upstream at the new container and reloading (`nginx -s reload` is signal-based and drains existing connections before old workers exit) makes the switch seamless for HTTP traffic.
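The nginx side of the cutover can be sketched as a small script on the server. The config path and ports here are illustrative, assuming an upstream file that points at the live container's host port:

```shell
#!/bin/sh
# flip_upstream CONF OLD_PORT NEW_PORT: point the nginx upstream at the new container
flip_upstream() {
  conf=$1
  old=$2
  new=$3
  # rewrite "server 127.0.0.1:OLD;" to the new port in place
  sed -i "s/127\.0\.0\.1:$old/127.0.0.1:$new/" "$conf"
}

# Example cutover, after the new container passes its health check on port 3001:
# flip_upstream /etc/nginx/conf.d/myapp.conf 3000 3001
# nginx -t && nginx -s reload   # validate the config, then reload gracefully
```

Running `nginx -t` before the reload is the cheap insurance here: a reload with a broken config is refused, so the old upstream keeps serving.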
"Your deploy strategy should make you less afraid to ship, not more."
If you're batching unreleased changes out of fear, something is wrong with the process — not the code.
Lessons Learned
- Secrets management first. Never commit credentials. Use GitHub Secrets + environment-specific vaults from day one.
- Health check endpoints are non-negotiable. `GET /health` should check DB connectivity and cache readiness, and return `200` only when the service is truly ready.
- Pin your Docker base image tags. `node:20-alpine` is better than `node:alpine`: floating tags break reproducibility.
- Test in CI what you run in prod. If your tests run against SQLite but prod runs Postgres, your tests aren't testing what matters.
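On the deploy side, the health-check rule translates into a small gate before cutover. A minimal sketch in shell; the URL, port, and retry count are illustrative:

```shell
#!/bin/sh
# wait_healthy URL TRIES: poll a health endpoint until it answers 200, or give up
wait_healthy() {
  url=$1
  tries=${2:-10}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f makes curl exit non-zero on HTTP errors (4xx/5xx)
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage in a deploy script:
# wait_healthy http://localhost:3001/health 10 || { echo "new instance unhealthy, aborting"; exit 1; }
```

Polling with a retry budget beats a fixed `sleep 10`: slow cold starts still pass, and a genuinely broken build fails the deploy instead of receiving traffic.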