1000 Prompts
Vibe Coded

Part 4

1000 prompts in Cline in 96 days. The milestone where AI coding stopped being an experiment and became my default workflow.

10 posts

June 2025

Sometime in early June, the counter in Cline ticked past 1,000 prompts. I'd started on February 4th. In 96 days, I'd averaged over 10 coding prompts per day, every single day.

One thousand conversations with an AI about code. What does that teach you?

The Milestone Nobody Celebrates

There's no achievement badge for "1000 AI coding prompts." No community recognition. No certificate. In a world that was still debating whether AI coding was a gimmick, admitting you'd done it a thousand times felt less like a flex and more like a confession.

But every prompt was logged. Every response saved. I could search back through months of work and see exactly what I'd built, which prompts had been expensive duds, which had been surgically efficient, and how my approach had evolved over time.

The data told a story that my memory couldn't: the prompts from February were dramatically different from the ones in June. Early prompts were long, detailed, over-specified — like writing a contract with someone you don't trust. By June, they were short, contextual, conversational — like talking to a colleague who already understands the project.
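To make the contrast concrete, here is a rough illustration of that evolution. These are reconstructions in the spirit of the logs, not prompts quoted from them; the file names and libraries are invented for the example:

```text
# February: long, over-specified, contract-style
Create a REST endpoint at /api/inventory. Use Express with TypeScript.
Validate the request body with zod. Return 400 on validation failure and
201 on success. Follow the error-handling pattern in src/middleware/errors.ts.
Do not modify any other files.

# June: short, contextual, conversational
Add an inventory endpoint. Same patterns as the orders one.
```

The June version works only because the project rules and prior decisions supply the detail the February version had to spell out.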

The MobilePay Moment

I described the milestone to a friend by mentioning that I'd coded a MobilePay integration with AI. His response wasn't "how does it work?" or "is it secure?" — it was "what was the prompt?"

That question marked a cultural shift. When developers start asking "what was the prompt?" instead of "what framework did you use?", something fundamental has changed about how we think about building software. The prompt is the architecture. The prompt is the design decision. The prompt is where the craft lives now — or at least, an increasingly important part of it.

Hitting the Plateau

Around this time, something unexpected happened: I stopped being excited about model improvements.

Claude 4 had just launched. Everyone on LinkedIn and X was posting about it. And I... didn't care that much. Not because it wasn't good — it was — but because I'd run an experiment that surprised me.

In my largest vibe coding project to date, I had so many rules and context files that when I swapped in Gemini Flash — a model 23 times cheaper than the premium models — the results were barely distinguishable. The model was following my rules, my ADRs, my conventions. The "intelligence" wasn't coming from the model anymore. It was coming from the system I'd built around the model.

This was the insight that stopped me in my tracks for a few days. If a cheap model with good instructions could match an expensive model with vague ones, then all the money and hype around bigger, smarter models was somewhat missing the point. The point was the context. The rules. The accumulated decisions. The human architecture around the AI.

I went deep with Gemini Flash over a weekend to test this theory. The results held.
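What "rules and context files" means in practice: Cline reads project-level instructions from a `.clinerules` file in the repository root. The rules below are an illustrative sketch of that kind of file, not the author's actual rules:

```text
# .clinerules (illustrative example)
- Read docs/adr/ before proposing any architectural change.
- TypeScript strict mode; no `any` without a justifying comment.
- New endpoints follow the existing controller/service/repository split.
- Prefer editing existing files over creating new ones.
- Run the tests before declaring a task done.
```

With enough of this accumulated, the model's job shifts from inventing an approach to following one, which is why a cheaper model can keep up.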

"The Technology Is Ready"

By mid-June, I wrote a post that captured where my head was at: "For me, the technology is ready and it's just about getting started. I have no expectation that models will get better or need for them to get better to deliver more value."

This was a contrarian take. The AI world was obsessed with the next model, the next benchmark, the next capability. I was saying: stop waiting. What we have today is already enough to fundamentally change how software gets built. The bottleneck isn't the AI — it's the humans learning to use it.

The quiet period that followed wasn't because I lost interest. It was because the daily reality of vibe coding had stopped being novel and started being work. Good work. Productive work. But work nonetheless. I wasn't discovering new capabilities anymore — I was applying proven ones to real problems.

GitHub Copilot Enters the Mix

Around this time, I started using GitHub Copilot more seriously alongside Cline. The two tools served different purposes:

Cline was for deep, contextual, multi-file work — "understand this codebase and build a new feature across these 5 files." Copilot was for in-flow autocomplete — "finish this line of code I've started typing."

The frustration was that they didn't share data. Cline's logged history of 1000+ prompts existed in one silo. Copilot's suggestions existed in another. I started thinking about whether there was a way to bring everything together — one place to see all my AI-assisted development, regardless of tool.

That thread would eventually lead me to Claude Code, but I wasn't there yet.

The List Grows

One habit from this period that I can't defend rationally: I kept starting new projects. A freezer inventory system (because I have two freezers and can never find anything). OpenAI's new image generation API experiments. Hackathon preparations. Each one adding to the pile.

But I was also getting faster at the boring parts. The ADRs were paying off. Each new project started at a higher baseline because the AI already knew my preferences. The gap between "idea" and "working prototype" kept shrinking. If it had been 10 minutes in December, it was 5 minutes by June.

The gap between "prototype" and "shipped product" hadn't shrunk at all. That was still the hard part.
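The ADRs mentioned above follow the common lightweight format of context, decision, and consequences. This is a generic sketch of one, not a record from the author's projects:

```text
# ADR 007: Use SQLite for single-user tools
Status: Accepted

Context: Side projects like the freezer inventory have one user and
no concurrency requirements.

Decision: Default to SQLite unless a project needs multi-user access.

Consequences: Zero-config local development; migrating to a server
database is deferred until actually needed.
```

Because the AI reads these records at the start of each session, every new project inherits the decisions of the previous ones.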

What the Numbers Actually Mean

1000 prompts in 96 days. Here's what that breaks down to in practice:

  • Wake up, open IDE, prompt the AI: that's 2-3 prompts before morning coffee.
  • Working on the main project of the day: 4-5 substantive prompts with iterative refinement.
  • Exploring a side idea: 2-3 prompts to see if something works.
  • Evening session: 2-3 more prompts on whatever caught my attention.

It wasn't a grind. It was a rhythm. The AI was always there, always ready, always fast. The human bottleneck was deciding what to ask for, not waiting for answers.

And mixed in with those 1,000 Cline prompts were all the Bolt.new projects, the Anthropic Console sessions, the experiments in other tools. The real number was probably 3-4x higher. I only had receipts for Cline.

Looking Forward

The post I wrote about the milestone ended on a forward-looking note: I'd started hearing about multi-agent coding workflows, the idea that you could have multiple AI instances working on different parts of a project simultaneously. It sounded like science fiction.

It wasn't. And it was coming faster than I thought.

Posts in this part

Part 3: The May Challenge
Part 5: The Hackathon Era