# How We Cut Our Deploy Time from 60 Minutes to 15 Minutes
A practical walkthrough of how we optimized our CI/CD pipeline for a 12-service monorepo on DigitalOcean App Platform — from pre-built Docker images to parallel CI jobs.
## The Problem: An Hour Per Deploy
Our platform is a TypeScript monorepo with 12 backend microservices, a React SPA, shared packages, and a PostgreSQL database. We deploy to DigitalOcean App Platform. Until last week, every push to main took roughly 60 minutes to reach production.
For a small team shipping fast, that is unacceptable. A one-hour deploy loop means you either batch changes (risky) or spend your afternoon watching progress bars. We decided to fix it.
## Where the Time Was Going
Before optimizing, our pipeline had three sequential phases:
| Phase | Duration | What It Did |
|---|---|---|
| CI checks | ~20 min | Install deps, build monorepo, lint, typecheck, test, security audit |
| Security | ~15 min | Duplicate install + build, then RLS coverage check and dep audit |
| DO deploy | ~25-35 min | DigitalOcean builds 12 Docker images from scratch, deploys everything |
The biggest offender was the DigitalOcean deploy phase. App Platform clones the repo and builds every Dockerfile independently — with no Docker layer cache between deployments. Twelve services, each running `pnpm install` and `tsc` from scratch, every single time.
## Fix 1: Pre-Build Docker Images in GitHub Actions
This was the single biggest win. Instead of letting DigitalOcean build 12 Dockerfiles from scratch, we now:
- Build all 12 service images in parallel using a GitHub Actions matrix
- Push them to DigitalOcean Container Registry (DOCR)
- Update the live app spec to point at the pre-built images
- Tell DO to deploy — it just pulls the images, no building
The key enabler is `docker/build-push-action` with the GitHub Actions cache backend:
```yaml
build-images:
  runs-on: ubuntu-latest
  strategy:
    fail-fast: false
    matrix:
      include:
        - name: auth-service
          dockerfile: services/auth/Dockerfile
        - name: module-config-service
          dockerfile: services/module-config/Dockerfile
        # ... 10 more services
  steps:
    - uses: actions/checkout@v4
    - uses: docker/setup-buildx-action@v3
    - uses: docker/login-action@v3
      with:
        registry: registry.digitalocean.com
        # DOCR accepts the API token as both username and password
        username: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
        password: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
    - uses: docker/build-push-action@v6
      with:
        context: .
        file: ${{ matrix.dockerfile }}
        push: true
        tags: registry.digitalocean.com/coherence/${{ matrix.name }}:${{ github.sha }}
        cache-from: type=gha,scope=${{ matrix.name }}
        cache-to: type=gha,mode=max,scope=${{ matrix.name }}
```
The `type=gha` cache stores Docker layers in the GitHub Actions cache. The first build is cold (~8 min per service), but subsequent builds with unchanged layers take 2-3 minutes. Since all 12 run in parallel, the wall-clock time is just the slowest individual build.

**Result:** DO deploy dropped from ~30 min to ~7 min. It just pulls images and runs health checks — no building.
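Those cache hits also depend on layer ordering inside each Dockerfile: if dependency manifests are copied and installed before the source tree, a code-only change reuses the expensive install layer. A simplified sketch of the pattern — the paths, base image, and filter name here are illustrative, not our exact Dockerfile:

```dockerfile
# Illustrative layer ordering for cache-friendly builds (not our exact file).
FROM node:20-alpine AS build
WORKDIR /app
RUN corepack enable

# Copy only lockfile and manifests first: this layer and the install
# below stay cached as long as dependencies are unchanged.
COPY pnpm-lock.yaml pnpm-workspace.yaml package.json ./
COPY services/auth/package.json services/auth/
RUN pnpm install --frozen-lockfile

# Source changes only invalidate layers from here down.
COPY . .
RUN pnpm --filter auth-service build
```

With this ordering, the `cache-from: type=gha` hit rate stays high because the top layers hash identically between commits that only touch application code.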
### The Spec Update Trick
DigitalOcean App Platform specs can reference either a GitHub repo (source build) or a DOCR image (pre-built). We use `yq` to dynamically update the live spec:
```bash
# Fetch the live spec (preserves all encrypted secrets)
doctl apps spec get $APP_ID --format yaml > /tmp/spec.yaml

# Swap each service from GitHub source to DOCR image
for svc in auth-service module-config-service ...; do
  yq -i "del(.services[] | select(.name == \"$svc\").github)" /tmp/spec.yaml
  yq -i "(.services[] | select(.name == \"$svc\")).image.registry_type = \"DOCR\"" /tmp/spec.yaml
  # repository is required the first time a service is migrated to an image source
  yq -i "(.services[] | select(.name == \"$svc\")).image.repository = \"$svc\"" /tmp/spec.yaml
  yq -i "(.services[] | select(.name == \"$svc\")).image.tag = \"$COMMIT_SHA\"" /tmp/spec.yaml
done

# Apply and deploy
doctl apps update $APP_ID --spec /tmp/spec.yaml
doctl apps create-deployment $APP_ID --wait
```
This is idempotent — on the first run it migrates from source to image; on subsequent runs it just updates the tag. `doctl apps spec get` preserves all secrets as encrypted values, so the update is safe.
## Fix 2: Parallelize CI Checks
Our CI was running lint → typecheck → test → security sequentially in one job. We split them into parallel jobs that share a cached workspace:
```
ci-setup (install + build packages, ~3 min)
├── ci-lint (~3 min)
├── ci-typecheck (~3 min)
├── ci-test (~6 min)
└── ci-security (~2 min)
```
The ci-setup job installs dependencies, builds shared packages, and saves the workspace to GitHub Actions cache. Each parallel job restores that cache and runs only its specific check.
**Key insight:** We only build shared packages (`packages/*`) in CI setup — not apps or services. Lint and typecheck work on source files. Tests use Vitest, which transpiles on the fly. The actual production builds happen in the Docker image matrix. This cut our setup step from 11 minutes to under 3.
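The fan-out can be wired with `actions/cache`: the setup job saves `node_modules` and built package output under a commit-derived key, and each check job restores it. A sketch of the pattern — job names, paths, and scripts here are illustrative, not our exact workflow:

```yaml
ci-setup:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # (pnpm setup via corepack omitted for brevity)
    - run: pnpm install --frozen-lockfile && pnpm --filter "./packages/*" build
    - uses: actions/cache/save@v4
      with:
        path: |
          node_modules
          packages/*/dist
        key: workspace-${{ github.sha }}

ci-lint:
  needs: ci-setup
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/cache/restore@v4
      with:
        path: |
          node_modules
          packages/*/dist
        key: workspace-${{ github.sha }}
    - run: pnpm lint
```

Using the split `actions/cache/save` and `actions/cache/restore` actions (rather than the combined action) avoids a pointless save attempt at the end of every check job.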
## Fix 3: Remove Type-Checked ESLint Rules
Our ESLint config used `parserOptions.project` to enable type-checked rules like `no-floating-promises` and `no-misused-promises`. These rules require ESLint to spin up a full TypeScript type-checker — essentially running `tsc` inside ESLint.

Since we already run `tsc --noEmit` as a separate typecheck step, these rules were making us pay for type-checking twice. Removing `parserOptions.project` and the type-checked rules dropped lint from 14 minutes to 3.5 minutes.

We also added `--cache` to all ESLint scripts, which helps on local dev even though CI starts fresh each time.
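In flat-config terms, the change amounts to dropping the type-aware preset and `parserOptions.project`. A minimal sketch, assuming a typescript-eslint flat config (not our exact file):

```js
// eslint.config.js — sketch: keep syntactic rules, drop the type-checker
import tseslint from 'typescript-eslint';

export default tseslint.config(
  // Syntactic rules only; no TypeScript program is created.
  ...tseslint.configs.recommended,
  // Previously: ...tseslint.configs.recommendedTypeChecked plus
  // languageOptions.parserOptions.project — removed, since tsc --noEmit
  // already covers type errors in a separate job.
);
```

The type-aware rules are genuinely useful; the point is only that paying for them in both `tsc` and ESLint is redundant when they run in the same pipeline.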
## Fix 4: Run CI and Image Builds Concurrently
CI checks and Docker image builds have no dependency on each other. We run them in parallel:
```
ci-setup ──> parallel CI checks ──┐
                                  ├──> deploy-staging
build-images (12 parallel) ───────┘
```
Image builds typically finish before CI (cached builds take ~2 min), so by the time tests pass, images are already waiting in DOCR.
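In workflow terms this is just a matter of which jobs declare `needs`: the two tracks have no edge between them, and only the deploy job waits on both. A sketch (job names are illustrative):

```yaml
# build-images declares no `needs`, so it starts immediately
# alongside ci-setup. Only the deploy gates on both tracks.
deploy-staging:
  needs: [ci-lint, ci-typecheck, ci-test, ci-security, build-images]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # fetch live spec, point services at the new image tags, deploy
```

If any job in `needs` fails, the deploy is skipped, so images pushed to DOCR by a failed run are simply never referenced.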
## The Other Small Wins
- **Merged CI and Security jobs** — security was a separate job that duplicated the entire install + build. Folding it into the CI flow saved 10-15 min.
- **Concurrency groups with `cancel-in-progress: true`** — stale runs get cancelled when new commits are pushed.
- **Sentry shallow clone** — `fetch-depth: 100` instead of full git history.
- **Vite `reportCompressedSize: false` in CI** — small but free.
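The concurrency setup is a few lines at the top of the workflow file; the group name here is illustrative:

```yaml
# One active pipeline per branch: a new push to the same ref
# cancels the run it supersedes.
concurrency:
  group: pipeline-${{ github.ref }}
  cancel-in-progress: true
```

This matters most on busy days: without it, three quick pushes to `main` queue three full pipelines, and the first two burn runner minutes producing results nobody will ship.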
## Final Results
| Metric | Before | After | Improvement |
|---|---|---|---|
| CI wall clock | ~35 min | ~9 min | 74% faster |
| DO deploy | ~30 min | ~7 min | 77% faster |
| Total pipeline | ~60 min | ~16 min | 73% faster |
| Docker builds (first run) | N/A (DO built) | ~8 min | Cached after first |
| Docker builds (cached) | N/A | ~2 min | GHA layer cache |
The pipeline now looks like this in GitHub Actions:
```
┌─ ci-setup (3 min) ─┬─ lint (3 min)
│                    ├─ typecheck (3 min)
│                    ├─ test (6 min) ──────┐
│                    └─ security (2 min) ──┼── deploy (7 min)
└─ build-images x12 (2 min, parallel) ─────┘
```
## What We Would Do Next
If we needed to go even faster:
- **Vitest sharding** — split the test suite across 2-3 parallel runners
- **TypeScript incremental** — `tsc --noEmit --incremental` writes a `.tsbuildinfo` file that speeds up subsequent type checks
- **Selective image builds** — only rebuild services whose files actually changed, using path filters
- **Convert web to a service** — the React SPA still builds from source on DO (static sites do not support pre-built images); converting it to an nginx service would let us pre-build it too
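Of these, sharding is the most mechanical: Vitest's `--shard` flag splits test files across runners, so the job becomes a small matrix. A sketch of what that could look like (names are illustrative):

```yaml
# Hypothetical sharded test job: each runner executes one third
# of the test files, cutting wall-clock time roughly in proportion.
ci-test:
  needs: ci-setup
  runs-on: ubuntu-latest
  strategy:
    matrix:
      shard: [1, 2, 3]
  steps:
    - uses: actions/checkout@v4
    # restore cached workspace, then run one shard of the suite
    - run: pnpm vitest run --shard=${{ matrix.shard }}/3
```

The trade-off is per-job setup overhead: with a ~1 minute workspace restore per runner, sharding a 6-minute suite three ways lands closer to 3 minutes than 2.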
But at 16 minutes from push to production, we are happy to ship.
## Key Takeaways
- **If your platform builds Docker images, build them yourself.** Managed platforms rarely cache Docker layers between deployments. Building in CI with `docker/build-push-action` and GHA cache is dramatically faster.
- **Do not run type-checking twice.** If you have `tsc --noEmit` in your pipeline, you do not also need ESLint's type-checked rules. Pick one.
- **Build once, share via cache.** A monorepo CI that installs and builds in every parallel job wastes most of its time on redundant setup.
- **Measure before optimizing.** Our initial assumption was that tests were the bottleneck. The real bottleneck was DigitalOcean rebuilding 12 Docker images from scratch on every deploy.
Keith Fawcett
Founder of Coherence. Building the intelligence layer for business.