How I Write Software With AI
My software development workflow has gotten messier since I started using AI tools heavily. That sounds like a criticism, but it’s not. The mess is the point.
The fundamental shift is simple: writing the wrong code used to be expensive. Now it’s cheap. When spinning up a throwaway script takes minutes instead of hours, you can afford to explore more dead ends. You can write code you know you’ll delete just to understand the problem better. The economics of exploration have changed.
I mostly build data and AI backend services, the kind of systems where you’re wrangling datasets, training models, and wrapping them in APIs. Here’s how that process looks now.
Phase 1: Exploratory Chaos
I start with EDA, scripts, and temporary code. Lots of brainstorming sessions with an AI assistant, sketching out ideas, trying things that might not work. This phase used to be labor-intensive enough that I’d try to minimize it. Now I lean into it.
The goal isn’t to write good code. It’s to get a feel for the data and understand what the service should actually do. I’m asking questions like: What’s in this dataset? What patterns matter? What would a user actually want from this? What’s going to be hard?
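For flavor, a script from this phase might look like the sketch below. The file path and column names are invented; the point is the shape of the questions, not the specifics:

```python
# Disposable EDA script. The path and columns are hypothetical;
# reading Parquet needs pyarrow or fastparquet installed.
import pandas as pd

df = pd.read_parquet("data/events.parquet")

print(df.shape)
print(df.dtypes)
print(df.isna().mean().sort_values(ascending=False).head(10))  # messiest columns first
print(df["event_type"].value_counts().head(20))                # what's actually in here?
```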
Everything from this phase gets thrown away. That used to feel wasteful. Now it feels like the fastest path to understanding.
Phase 2: The Simplest Thing That Works
Once I have a decent mental model of the service, I start building for real. The setup is almost always the same: a FastAPI server running in Docker Compose with LocalStack for AWS services.
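The compose file for that stack stays small. A minimal sketch, assuming the API builds from a local Dockerfile and only S3 is needed from LocalStack (service names, ports, and env vars are illustrative):

```yaml
# docker-compose.yml sketch; names and ports are illustrative.
services:
  api:
    build: .                 # assumes a Dockerfile for the FastAPI app
    ports:
      - "8000:8000"
    environment:
      AWS_ENDPOINT_URL: http://localstack:4566  # recent AWS SDKs pick this up
      AWS_ACCESS_KEY_ID: test
      AWS_SECRET_ACCESS_KEY: test
      AWS_DEFAULT_REGION: us-east-1
    depends_on:
      - localstack

  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"          # single edge port for all emulated AWS services
    environment:
      SERVICES: s3           # only emulate what the prototype needs
```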
For the data layer, if I already know what I need, I’ll set up the right database from the start. Usually PostgreSQL, often with pgvector if embeddings are involved. If I’m still figuring out the data model, I’ll just dump everything into Parquet files on S3 and deal with proper storage later.
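The Parquet fallback is only a few lines against LocalStack. A sketch, with a made-up bucket, key, and schema:

```python
# Sketch of the "dump everything to Parquet on S3" fallback. The bucket,
# key, and columns are made up; the endpoint points at LocalStack.
import io

import boto3
import pandas as pd

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # LocalStack edge port
    aws_access_key_id="test",              # LocalStack accepts any credentials
    aws_secret_access_key="test",
    region_name="us-east-1",
)
s3.create_bucket(Bucket="scratch-data")

df = pd.DataFrame({"user_id": [1, 2, 3], "score": [0.87, 0.42, 0.65]})
buf = io.BytesIO()
df.to_parquet(buf, index=False)  # needs pyarrow or fastparquet installed
s3.put_object(Bucket="scratch-data", Key="events/batch-000.parquet", Body=buf.getvalue())
```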
The goal here is to map the data and model context into an API that I can actually evaluate. It’s going to be slow. The endpoints won’t be well-designed. I might have multiple duplicate endpoints with slight variations because I’m still figuring out what the interface should look like.
This is intentionally rough. I’m not trying to build the right thing yet. I’m trying to build something I can poke at.
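Concretely, that duplication might look like this sketch, where the routes, parameters, and the recommend() stub are all invented:

```python
# Sketch of the duplicate-endpoint phase. Route shapes and the recommend()
# helper are placeholders, not the real service.
from fastapi import FastAPI

app = FastAPI()

def recommend(user_id: int) -> list[str]:
    # Stand-in for whatever the model side actually does at this point.
    return [f"item-{user_id}-{i}" for i in range(100)]

@app.get("/recommendations/{user_id}")
def recommendations_v1(user_id: int, limit: int = 10):
    # First guess at the interface: a plain ranked list.
    return {"items": recommend(user_id)[:limit]}

@app.get("/recommendations2/{user_id}")
def recommendations_v2(user_id: int, limit: int = 10, explain: bool = False):
    # Near-duplicate with a slightly different shape, kept around until
    # dogfooding shows which interface is actually right.
    items = recommend(user_id)[:limit]
    return {"items": items, "why": "placeholder explanation" if explain else None}
```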
Phase 3: Dogfooding With a Throwaway Interface
Then things get fun. I build a quick interface that roughly matches what an actual user would do. Sometimes it’s a TUI, sometimes a little web app, whatever gets me closest to the real usage pattern fastest.
This interface exists purely for dogfooding. By actually using the service, even through a janky prototype, I can feel where the bottlenecks are. I discover what works and what doesn’t. The scope of the service narrows naturally as I bump into reality.
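The first cut of that interface is often barely more than a REPL loop over the API. A sketch, reusing the hypothetical endpoint from the previous phase:

```python
# Throwaway dogfooding loop: barely a TUI, but close enough to real usage
# to feel the bottlenecks. Assumes the hypothetical API above is running
# locally on port 8000; this whole file gets deleted in phase 4.
import requests

while True:
    user_id = input("user id (blank to quit) > ").strip()
    if not user_id:
        break
    resp = requests.get(f"http://localhost:8000/recommendations/{user_id}", params={"limit": 5})
    for item in resp.json()["items"]:
        print(" ", item)
```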
At this stage I’ll often have a single AI agent with all the repos in scope: heavy refactors, lots of editing, tightening things up. The code is finally starting to look like something real.
Phase 4: Delete the Scaffolding
At the end, the EDA code and the prototype app usually just get deleted. They were never meant to survive. They’re the two exploratory ends, one pointing at the data and one pointing at the user, that helped me develop the actual service faster.
What remains is the core service, informed by all that exploration but not burdened by it.
The Meta Point
The old workflow was more linear because exploration was expensive. You’d try to think through everything upfront, design carefully, then build once. The new workflow is messier because exploration is cheap. You can afford to learn by doing, even when “doing” means writing code you’ll throw away.
AI is making certain kinds of code so cheap to produce that the economics of the whole process shift. When throwaway code is nearly free, you throw away more code. And somehow, you end up with better software faster.