The Future of Coding Is Specification Writing

I’ve spent the past week refining my AI-assisted development workflow, and I keep arriving at the same conclusion: the future of app development isn’t asking AI to write code. It’s learning to write specifications that AI can execute.

Bear with me.

The Old Way

The legacy development workflow looks like this: an IDE with a dozen tabs open, a few terminals, thirty browser tabs for Stack Overflow and documentation. Coders spend roughly 80% of their time researching and debugging, not writing code. Context switches are brutal. Then there’s Git, then the DevOps pipeline, then deployment configuration. We’ve worked this way for decades, and it’s wildly inefficient.

The New Stack

After six months of experimentation (born partly from unemployment, partly from boredom), I’ve landed on an architecture that changes how I think about building software:

The core: Warp Terminal acts as the orchestration engine. Claude provides the reasoning. GitHub handles version control and CI/CD. Docker provides the runtime.

The spokes: Linear manages the Agile workflow—bugs, features, sprints, PRs. Notion serves as the single source of truth for specifications. Slack acts as the notification layer and command interface.

The monthly cost runs about $154: Warp ($20), Claude API ($100, and yes, it’s expensive), Linear ($12), Notion ($12), GitHub Copilot ($10). VSCode is optional and free.
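The tally above can be sanity-checked in a couple of lines (prices exactly as quoted; tool names are just dictionary keys here):

```python
# Monthly cost of the stack, using the per-tool prices quoted above.
monthly_costs = {
    "Warp": 20,
    "Claude API": 100,
    "Linear": 12,
    "Notion": 12,
    "GitHub Copilot": 10,
}

total = sum(monthly_costs.values())
print(f"Total: ${total}/month")  # → Total: $154/month
```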

How It Actually Works

The hardest part is the initial wiring—getting every app talking to every other app through their respective APIs and integrations. It took me half a day of tedious configuration. But once connected, the workflow becomes surprisingly elegant.

You write your specification in Notion. You break it into features and iterations in Linear. You tag @Warp in Slack. And then you watch.
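The "tag @Warp in Slack" step can be sketched as a plain webhook call. This is a minimal illustration, not my actual integration code: the webhook URL, `kick_off` helper, and issue/spec identifiers are all hypothetical, and the only real API used is Slack's standard incoming-webhook endpoint (a JSON POST).

```python
import json
from urllib import request

# Hypothetical incoming-webhook URL; replace with your own Slack webhook.
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"

def kick_off(spec_url: str, issue_id: str) -> bytes:
    """Build the Slack message body that tags the Warp bot with a
    Linear issue ID and its Notion spec URL (names illustrative)."""
    payload = {"text": f"<@warp> build {issue_id} per spec {spec_url}"}
    return json.dumps(payload).encode()

def post(body: bytes) -> None:
    """Fire the webhook. Not called here to avoid a live network request."""
    req = request.Request(
        SLACK_WEBHOOK, data=body,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)
```

Once the message lands in the channel, the connected tools pick it up from there.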

Slack becomes your command center. When a build fails, you tag @Linear to create a bug report, then @Claude to investigate and fix. Once resolved, Claude documents the fix in Notion—what caused it, how it was resolved, who handled it. The AI has context from every connected tool, which is what makes the whole system work.
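The failure-handling flow above amounts to a small event router. Here is a toy sketch of that decision logic, purely illustrative: the event types, bot handles, and `route` function are my own stand-ins, not part of any of these tools' APIs.

```python
# Toy router mirroring the Slack command-center flow described above.
def route(event: dict) -> str:
    """Map a pipeline event to the bots that should handle it
    (event types and handles are illustrative, not real API values)."""
    if event["type"] == "build_failed":
        # File the bug first, then hand the investigation to the AI.
        return "@Linear file bug; @Claude investigate and fix"
    if event["type"] == "bug_fixed":
        # Close the loop: cause, resolution, and owner go into Notion.
        return "@Claude document fix in Notion"
    return "no-op"
```

In practice the "routing" is just me typing tags into Slack, but the logic is this mechanical.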

Need to add designers? Connect Figma. Support team? Drop in Intercom or ServiceNow. As long as new tools can talk to your core stack, they inherit the entire pipeline.

A Real Test

I wrote a formal specification for a web-based task management app with strict security requirements. I broke it into iterations in Linear, tagged @Warp, and monitored Slack. The cycle was repetitive: an issue surfaces, I route it to the right agent, the agent resolves and documents it, repeat.

After a few dozen iterations, the system delivered a functional containerized app ready for AKS deployment. Claude handled code reviews and PRs in GitHub, managed merges, and generated documentation throughout. Logging, exception handling, and observability were baked in from the spec.

The code was solid, though Claude still struggles with sophisticated design patterns and polymorphism. Pointing it to reference material helps, but this remains an area where human architectural judgment matters.

What This Doesn’t Solve

I want to be honest about the limitations.

This workflow shines for greenfield projects with clear requirements. I haven’t stress-tested it against a fifteen-year-old monolith with compliance constraints and legacy integrations—the kind of system I’ve spent my career navigating in enterprise Azure environments.

AI-generated tests are genuinely useful, but the claim that AI “always writes proper unit tests with 100% coverage” would be an overstatement. Coverage metrics don’t equal test quality. AI tests can be superficial, tautological, or miss edge cases entirely. You still need human review.
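To make "tautological" concrete, here is an invented example (not from any AI output I received): both tests exercise the function and count toward coverage, but only one would catch a wrong formula.

```python
def apply_discount(price: float, pct: float) -> float:
    """Apply a percentage discount to a price."""
    return price * (1 - pct / 100)

# Tautological: re-derives the expected value with the same formula,
# so it passes even if the formula itself is wrong.
def test_discount_tautological():
    assert apply_discount(80, 25) == 80 * (1 - 25 / 100)

# Meaningful: pins an independently computed value and an edge case.
def test_discount_meaningful():
    assert apply_discount(80, 25) == 60.0
    assert apply_discount(80, 0) == 80.0
```

Both tests report 100% coverage of `apply_discount`; only the second one tests anything.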

And when AI makes an architectural decision that seems reasonable but creates compounding technical debt? That’s hard to catch in the moment. Specifications are famously incomplete and contradictory. The experienced architect’s value has always been navigating that ambiguity—I’m not yet convinced this workflow fully preserves that.

The Real Insight

Here’s what I keep coming back to: specifications are becoming the new source code.

If AI handles implementation, testing, documentation, and deployment, the leverage point shifts upstream. The people who can write precise, comprehensive specifications become the bottleneck. That’s historically been product managers, not engineers.

Software engineers, this is worth paying attention to. Our competition isn’t AI. It’s anyone who can articulate what software should do clearly enough that AI can build it.

The future is going to be strange. I’m choosing to find that exciting.


