A Simple Modal Took 7 Days to Reach Production
A simple informational modal took 7 days to reach production.
Is that too long? I’m honestly not sure anymore. A year ago I wouldn’t have questioned it. But now?
Here’s the thing — it wasn’t complex. Nobody messed up. The process worked exactly as it should.
- Day 1 — Development. Honestly, done in minutes.
- Day 2 — Code review.
- Day 3 — Back in progress after minor CR feedback.
- Day 4 — Waiting for re-review.
- Day 5 — Approved, QA done.
- Day 6 — Business validation.
- Day 7 — Merge and deploy.
That’s it. Seven days. For a simple modal. And every single step made sense on paper.
The bottleneck moved
We got really good at making the coding part faster. AI turns hours into minutes, sometimes seconds. But everything around the code? Still the same. Same review rounds, same QA cycles, same board with 7+ columns.
We’re basically shipping at 10x speed through a pipeline designed for 1x.
Think about it like this. Imagine you upgrade your car engine to go 300 km/h, but you’re still driving on a road with a speed bump every 100 meters. The engine isn’t the bottleneck anymore. The road is.
In software teams, the “road” is:
- Waiting for code review — the reviewer has their own sprint work
- QA cycles — manual testing queues, environment availability
- Re-review loops — even tiny feedback means another round trip
- Business validation — stakeholders have meetings, other priorities
- Deployment windows — some teams only deploy on certain days
Each of these adds latency. Not because anyone is slow or lazy, but because the process assumes that coding takes the majority of the time. It doesn’t anymore.
Not everything needs the same ceremony
And look… I’m not saying let’s yolo everything straight to production. Security-sensitive stuff, complex logic, big architectural changes — sure, review that properly. No shortcuts there.
But a modal? A tooltip? Adding a column to a table? Updating a validation message? Does that really need the same ceremony as rewriting a payment flow?
What if we categorized changes by risk?
Low risk — copy changes, UI tweaks, simple additions with no logic changes:
- Automated checks (linting, types, tests pass)
- One approval, same-day merge
- No separate QA cycle — the PR author verifies
Medium risk — new features, refactors, dependency updates:
- Standard code review
- QA on the feature
- Normal deploy cycle
High risk — auth changes, payment flows, data migrations, infrastructure:
- Thorough review by senior engineer
- Dedicated QA with test plans
- Staged rollout, monitoring
This isn’t a radical idea. It’s how open source has worked forever. A typo fix in docs gets merged in minutes. A core API change gets weeks of review. The ceremony matches the risk.
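To make the tiers concrete, here's a minimal sketch of how a changeset could be routed by risk. The path prefixes and the `has_logic_changes` flag are hypothetical, not from any real repo — in practice you'd derive them from the diff and your own directory layout:

```python
# Toy risk classifier for a changeset. Path prefixes below are
# assumptions for illustration, not a real policy.
HIGH_RISK_PREFIXES = ("auth/", "payments/", "migrations/", "infra/")

def classify_change(paths: list[str], has_logic_changes: bool) -> str:
    """Return 'low', 'medium', or 'high' for a set of touched files."""
    # Anything touching a sensitive area gets full ceremony.
    if any(p.startswith(HIGH_RISK_PREFIXES) for p in paths):
        return "high"
    # Logic changes outside sensitive areas: standard review + QA.
    if has_logic_changes:
        return "medium"
    # Copy changes, UI tweaks, no logic: automated checks + one approval.
    return "low"
```

A CI job could run this on the PR's file list and apply a label that gates which checks are required.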
We already do this in our backend architecture
The funny thing is, our backend systems already think in risk tiers. We have different error handling strategies based on severity:
- “Skip it” — the task is done, don’t retry. Order was cancelled? Customer already refunded? Cool, mark it as complete and move on. Returns HTTP 200.
- “Try again” — transient failure. API throttled, service temporarily down, network hiccup. Returns HTTP 503, the task queue retries with exponential backoff.
- “Alert a human” — failed too many times. After N retries, fire a notification to Slack. Something needs manual attention.
Three tiers of risk, three different responses. We didn’t design this because of a process document — we designed it because treating every failure the same way doesn’t work.
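As a sketch, the three tiers map naturally onto exception types in a task handler. The names here (`SkipTask`, `TransientError`, the retry budget) are illustrative assumptions, not our actual code; the exponential backoff itself lives in the task queue, which retries anything that returns 503:

```python
class SkipTask(Exception):
    """Task is already done (order cancelled, customer refunded) — don't retry."""

class TransientError(Exception):
    """Temporary failure (throttling, network hiccup) — safe to retry."""

def handle_task(process, task, attempt, alert, max_retries=5):
    """Run one task and return the HTTP status the queue interprets.

    200 = done (or permanently skipped), 503 = retry with backoff.
    `max_retries` is an assumed budget before escalating to a human.
    """
    try:
        process(task)
        return 200
    except SkipTask:
        return 200  # "skip it": mark complete and move on
    except TransientError:
        if attempt >= max_retries:
            # "alert a human": stop retrying, page someone instead
            alert(f"task {task['id']} failed {attempt} times")
            return 200
        return 503  # "try again": queue retries with exponential backoff
```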
Why don’t we apply the same thinking to our delivery process?
The team that figures this out wins
I think the first teams to figure out how to match process speed to actual risk will have a real edge. Just look at AI-native tools shipping new versions every other day with 50+ lines in the changelog. That’s the pace now.
It’s not about removing quality gates. It’s about right-sizing them. A one-size-fits-all process made sense when coding was the expensive part. Now that AI compressed the coding time by 10x, the process overhead is the dominant cost.
The uncomfortable truth is that most engineering processes were designed to protect against developers making mistakes in code. But if AI handles the straightforward code and a senior reviews the complex stuff — what exactly is the 7-column board protecting against?
What I’m trying on my team
I don’t have all the answers yet, but here’s what we’re experimenting with:
- Trust tiers — senior devs can self-merge low-risk changes after automated checks pass
- AI-first review — CodeRabbit catches the mechanical stuff, humans focus on architecture and business logic
- Async QA — for low-risk changes, QA happens post-merge with a quick rollback path
- Deployment frequency — we deploy multiple times per day instead of batching
It’s early. Some of it works, some of it doesn’t. But at least we’re asking the question: does this process still make sense for the world we’re building in now?
I’d genuinely love to hear how other teams are handling this. Has AI actually changed how your team delivers? Or just how it writes code?