
CI/CD, Docker, and Zero-Downtime Deployments

Shipping code shouldn't be a gamble. Here's how to build pipelines that make production deploys boring — in the best way.

February 2026 · 9 min read · Saptarshi Sadhu

Every developer has broken production at least once. The first time, it's a rite of passage. The second time, it's a process problem. Automated deployment pipelines exist to make the gap between your local machine and production as thin and auditable as possible.

Stack covered: GitHub Actions · Docker · Docker Compose · Nginx · AWS EC2 / Lightsail · Health-check blue-green deploys

Why Containerize First

Before you can automate deployments, your app needs to run identically everywhere. Docker solves the "works on my machine" problem by packaging the application, its runtime, and its configuration into a single reproducible image.

A production-grade Dockerfile

```dockerfile
# Stage 1: install production dependencies only
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2: minimal runtime image
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
USER node
CMD ["node", "server.js"]
```

Multi-stage builds keep the final image lean: only production dependencies ship, while dev dependencies and build tooling stay behind in the builder stage. A typical Express API image goes from ~900MB to under 150MB with this approach.
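Because the runtime stage copies the whole build context with `COPY . .`, a `.dockerignore` file is what actually keeps local artifacts out of the image. A minimal sketch (these entries are typical; adjust for your repo):

```
node_modules
npm-debug.log
.git
.env
Dockerfile
.dockerignore
```

Excluding `node_modules` here is what makes the `COPY --from=builder` step meaningful: the only dependencies in the final image are the ones `npm ci` resolved inside the container.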

GitHub Actions Pipeline

A minimal, production-ready CI/CD pipeline does four things on every push to main: run tests, build the Docker image, push it to a registry, and trigger a rolling deploy on the server.

```yaml
# .github/workflows/deploy.yml
name: Deploy to Production
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # needed to push to GHCR with GITHUB_TOKEN
    steps:
      - uses: actions/checkout@v4

      - name: Run tests
        run: npm ci && npm test

      - name: Log in to GHCR
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin

      - name: Build & push image
        run: |
          docker build -t ghcr.io/${{ github.actor }}/myapp:${{ github.sha }} .
          docker push ghcr.io/${{ github.actor }}/myapp:${{ github.sha }}

      - name: SSH & rolling deploy
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            docker pull ghcr.io/${{ github.actor }}/myapp:${{ github.sha }}
            # Start the new container on a spare host port alongside the old one
            docker run -d --name myapp-new -p 3001:3000 \
              ghcr.io/${{ github.actor }}/myapp:${{ github.sha }}
            # Cut over only if the new instance passes its health check
            sleep 10 && curl -f http://localhost:3001/health && \
              docker rm -f myapp && docker rename myapp-new myapp
```

Zero-Downtime with Blue-Green

The pattern above is a simplified blue-green deploy: the new container starts alongside the old one, a health check validates the new instance, and traffic is cut over only on success. Nginx upstream reload (which is signal-based and connection-draining) makes this seamless for HTTP traffic.
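Concretely, the cut-over can be a one-line change in the Nginx config. A sketch, assuming the blue container is published on host port 3000 and the green one on 3001 (ports and file path are illustrative):

```nginx
# /etc/nginx/conf.d/myapp.conf
upstream myapp {
    server 127.0.0.1:3001;   # green (new); previously 127.0.0.1:3000 (blue)
}

server {
    listen 80;
    location / {
        proxy_pass http://myapp;
    }
}
```

After editing, `nginx -s reload` (or `systemctl reload nginx`) signals the master process: new workers pick up the config while old workers finish in-flight connections, so no requests are dropped.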

"Your deploy strategy should make you less afraid to ship, not more."
If you're batching unreleased changes out of fear, something is wrong with the process — not the code.
0
Downtime seconds
~2 min
Pipeline to live
100%
Rollback coverage

Lessons Learned

The goal: a deploy pipeline so reliable that shipping to production feels as routine as saving a file.