
Part 2

300 Sessions and Counting

From curious experimenter to full-time AI coder. 300 sessions across Bolt, Lovable, v0, and Cline — and I'm just getting started.

10 posts

February – April 2025

My first Cline prompt was on February 4th, 2025. I know the exact date because Cline logs everything — every prompt, every response, every token spent. That kind of record turns out to matter when you're trying to tell people what happened.

What happened was this: I went deep.

Finding My Tool

January had been a blur of Bolt.new projects, Lovable experiments, and Anthropic Console sessions. I was generating websites, prototypes, and ideas at a pace that felt reckless. But something was missing. The web-based tools were great for standalone projects — create something from scratch, iterate, maybe ship it. But they didn't connect to my actual development workflow. They were islands.

Cline changed that. Here was a tool that sat inside my IDE, understood my existing codebase, and could make changes in the context of a real project. The jump from "generate a new thing from scratch" to "work with me on the thing I'm already building" was enormous.

On February 6th, 2025 — two days after my first Cline prompt — Andrej Karpathy tweeted about "vibe coding." Suddenly there was a word for what I'd been doing since Christmas. I wasn't just using AI to code. I was vibing with it. Describing what I wanted, watching it appear, steering by feel rather than specification.

The name stuck. Not because it was technically accurate, but because it captured the experience perfectly.

The Numbers Tell a Story

Here's what the dashboard showed by late spring:

  • 1,000 prompts in Cline in 96 days. That's over 10 coding prompts per day, every day, including weekends.
  • 250+ projects in Bolt.new — each one a separate experiment, prototype, or idea.
  • 300+ chat sessions across Bolt, Lovable, v0, Cline, and console.anthropic.com.

I described it to a friend: "I coded a MobilePay integration with AI." The first question wasn't "how does it work?" — it was "what was the prompt?" That shift in curiosity told me something was changing in how people thought about development.

The beautiful thing about Cline was that I could search back through everything. Which prompts worked. Which were expensive. Which burned tokens uselessly. It was like having a development diary that wrote itself.

Learning to Vibe

During these months, I figured out some things that nobody was teaching yet, because the practice was too new:

Context windows matter more than model quality. This was the big one. You could have the smartest model in the world, but if you stuffed its context window with irrelevant information, it would produce garbage. Learning to manage context — what to include, what to leave out, what to reference — was the actual skill underneath "vibe coding."
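The skill of deciding what goes into the context can be sketched in code. This is a minimal, illustrative sketch, not any tool's actual implementation: the names (estimate_tokens, build_context, MAX_CONTEXT_TOKENS) and the 4-characters-per-token heuristic are assumptions for the example.

```python
# Sketch: keeping a prompt's context under a token budget.
# All names and the budget here are illustrative assumptions,
# not Cline's (or any tool's) real API.

MAX_CONTEXT_TOKENS = 8000

def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English and code."""
    return len(text) // 4

def build_context(task: str, files: list[tuple[str, str]]) -> str:
    """Include the task plus as many relevant files as fit the budget.

    `files` should be pre-sorted by relevance, most relevant first,
    so what gets dropped is the least relevant material.
    """
    parts = [task]
    used = estimate_tokens(task)
    for path, content in files:
        cost = estimate_tokens(content)
        if used + cost > MAX_CONTEXT_TOKENS:
            continue  # leave out what doesn't fit
        parts.append(f"--- {path} ---\n{content}")
        used += cost
    return "\n\n".join(parts)
```

The design choice that matters is the sort order: a fixed budget forces you to rank relevance explicitly, which is exactly the "what to include, what to leave out" judgment described above.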

Prompt engineering is architecture. When I described something to the AI, I was effectively writing a requirements document. The better I specified the constraints, the style, the edge cases, the better the output. Twenty years of software architecture experience turned out to be directly transferable — I just wasn't typing code anymore. I was typing intent.
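What "typing intent" looks like in practice can be made concrete with a small sketch. The helper and its fields below are hypothetical, purely to show a prompt structured like a requirements document rather than a one-liner.

```python
# Illustrative sketch: a prompt assembled as a requirements document.
# build_prompt and its field names are hypothetical, not any tool's API.

def build_prompt(goal: str, constraints: list[str],
                 style: str, edge_cases: list[str]) -> str:
    """Turn intent into an explicit specification for the model."""
    sections = [
        f"Goal: {goal}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Style: {style}",
        "Edge cases to handle:\n" + "\n".join(f"- {e}" for e in edge_cases),
    ]
    return "\n\n".join(sections)

print(build_prompt(
    "Add a MobilePay payment button to the checkout page",
    ["Use the existing payment service module", "No new dependencies"],
    "Match the project's current conventions",
    ["Payment declined", "Network timeout mid-transaction"],
))
```

The point isn't the code; it's that every field maps to something an architect already does: scoping, constraining, and enumerating failure modes before any implementation happens.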

The feedback loop is everything. The reason I was doing 10+ prompts a day wasn't that each one was a masterpiece. Most weren't. But the cycle of prompt → result → evaluate → adjust was so fast that I could iterate my way to quality faster than I could plan my way there.

Using ChatGPT versus an IDE tool is a completely different experience. If you were using ChatGPT to code by copying and pasting between the browser and your editor, you were doing it the hard way. Tools like Cline that operated directly on your files, understanding your project structure, were a quantum leap.

The Skeptics and the Believers

Through February and March, the developer community was splitting into camps. Studies were appearing — some scientific, some not — trying to prove or disprove whether AI actually made developers faster. I could see truth in both sides.

The skeptics were right that AI didn't magically make bad developers into good ones. It didn't understand business requirements. It couldn't sit in a planning meeting. And when it went wrong, it went wrong in spectacular and hard-to-debug ways.

But the believers were right too. For the specific act of translating a well-understood requirement into working code, AI was absurdly fast. The catch was that software development is so much more than that specific act. Requirements gathering, architecture decisions, testing, deployment, maintenance, communication with stakeholders — AI touched only a fraction of the job.

The real insight, which I wouldn't fully articulate until later, was this: AI didn't make the developer 10x faster. It made the coding part of development 10x faster. Whether that mattered depended entirely on how much of your job was actually coding.

For me, in my hobby projects and prototypes, it was nearly 100%. So the speedup felt miraculous.

Spending Money to Learn

I should mention the economics. In these early months, I was paying for tokens directly — not through a subscription, but per-use through the API. This was deliberate. I wanted to see exactly what things cost. When people asked "is this expensive?" I wanted real numbers.

The answer was: not really, for what you got. $2 for 10 website variations. A few dollars a day for intensive coding sessions. Less than a nice coffee. The cost was so low that the main constraint wasn't money — it was time and imagination.

But I was also learning that not all prompts were created equal. Some burned through tokens like a bonfire and produced little. Others were surgically efficient. The economics of AI coding weren't just about price per token — they were about learning which prompts gave you the most value.
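The per-use arithmetic is simple enough to write down. The prices below are illustrative assumptions (actual API rates vary by model and date); the point is only how the math works.

```python
# Worked example of pay-per-use API economics.
# Prices are assumed for illustration, not actual Anthropic rates.

PRICE_IN_PER_MTOK = 3.00    # USD per million input tokens (assumed)
PRICE_OUT_PER_MTOK = 15.00  # USD per million output tokens (assumed)

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one session: tokens in each direction times its rate."""
    return (input_tokens / 1e6) * PRICE_IN_PER_MTOK \
         + (output_tokens / 1e6) * PRICE_OUT_PER_MTOK

# A heavy day: 500k tokens of context sent, 100k tokens generated.
print(round(session_cost(500_000, 100_000), 2))  # → 3.0
```

Notice the asymmetry: at these assumed rates, output tokens cost five times input tokens, which is why a verbose, rambling response burns money faster than a large context does.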

The Seeds of What Came Next

By April 2025, I'd established a rhythm. Wake up, coffee, open Cline, start building. The hobby projects were multiplying faster than I could finish them. I had ideas for apps, tools, integrations — all partially built, all promising, none shipped.

That accumulation of unfinished promise was about to become a problem. But it was also building something I couldn't see yet: a deep, intuitive understanding of how AI coding worked that would serve me when the tools evolved.

Because they were about to evolve dramatically.

Posts in this part

Part 1: It Started at Christmas
Part 3: The May Challenge