Why My Simple Feature Took 3 Hours to Code: A Lesson in Vibe Coding

Thursday, August 28, 2025

Ever tried vibe coding? Dive into a story where a simple coding task spiraled into a three-hour debate with an LLM. Discover the insights and lessons learned along the way!

Today, I dove headfirst into an experiment known as vibe coding (yup, that’s a thing!). The rule? I’d rely solely on an AI model for the coding, with no manual tweaks allowed. Sounds simple enough, right?

🚀 The Setup

The mission was straightforward: add Seed and TopP parameters to my Go model-provider abstraction. It should’ve been a routine plumbing job, with one catch: all existing tests had to pass. Here’s the initial interface I started with:


type ChatArgument interface {
    setTemperature(float64)
    setMaxTokens(int)
    setTopP(float64) // to be implemented
    setSeed(int)     // to be implemented
}

And indeed, my usual entry points for the chat functionality looked something like this:

func (c *VLLMChatClient) Chat(ctx context.Context, messages []Message, options ...ChatArgument) (Message, error) {
    // function implementation here
}
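
For context, a call site under this interface looks something like the sketch below. WithTemperature and WithMaxTokens are hypothetical option constructors; the post doesn’t show the real implementations:

// Hypothetical call site; the option constructors are illustrative,
// not the codebase's actual API.
reply, err := client.Chat(ctx, messages,
    WithTemperature(0.2),
    WithMaxTokens(512),
)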

You know what I expected? A 20-60 minute task, max. Let’s just say things didn’t go according to plan!

❗ The Unexpected Detour

Instead of a smooth update, I watched the model confidently rewrite my implementations. The ChatArgument interface morphed into something completely different! That might fly in a greenfield project, but my existing codebase was full of call sites that depended on the original API.

🤔 Curious Minds

So, naturally, I asked why the model was so eager to change something I intended to keep stable. And guess what? Three hours later, I found myself in a full-on design debate with my AI assistant. It was like arguing with a junior dev convinced that they were right.

💬 The Debate

The AI started suggesting alternative patterns with a confidence that felt a tad unearned. The first suggestion? The classic Go functional options pattern! Let’s give it a shot:


type ChatOption func(*chatOptions)

type chatOptions struct {
    Temperature float64
    MaxTokens   int
    TopP        float64
    Seed        int
}

On paper, it all looked fine. But in practice? Completely useless. I couldn’t tell whether TopP had been explicitly set or was just defaulting to its zero value of 0.0, and that distinction mattered because API defaults differ widely among vendors.
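
To make the ambiguity concrete, here’s a sketch under that pattern, assuming Chat were switched over to accept ...ChatOption. WithTopP is a hypothetical constructor I’m using for illustration:

// Hypothetical constructor under the functional options pattern.
func WithTopP(v float64) ChatOption {
    return func(o *chatOptions) { o.TopP = v }
}

// These two calls produce identical chatOptions values (TopP == 0.0),
// so the provider can't distinguish "use the vendor default" from
// "explicitly set top_p to 0.0":
//
//   client.Chat(ctx, messages)                 // TopP never set
//   client.Chat(ctx, messages, WithTopP(0.0))  // TopP explicitly zero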

Despite my concerns, the model pushed for the Builder pattern and map-based options. Each round, it only grew more critical of my approach.

🚫 Breaking the Rule

By 3 PM, staring at an untouched to-do list, I realized things weren't going as planned. I caved and broke my own rule: I handed the model my blueprint:


type ChatConfig struct {
    Temperature *float64 `json:"temperature,omitempty"`
    MaxTokens   *int     `json:"max_tokens,omitempty"`
    TopP        *float64 `json:"top_p,omitempty"`
    Seed        *int     `json:"seed,omitempty"`
}

type ChatArgument interface {
    Apply(config *ChatConfig)
}

This interface wasn’t as flexible as my original, but it was easy enough for the AI to grasp. More importantly, it preserved the pointer semantics:

  • nil → unset, i.e., use vendor defaults.
  • &0.0 → explicitly set to zero.

This was essential when bridging the gap between multiple LLM APIs with varying defaults.
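
To show what implementations look like under this scheme, here’s a minimal sketch; topPOption and WithTopP are placeholder names, not my actual option types:

// Illustrative option type built on the final interface; the names
// here are placeholders for the real option types.
type topPOption struct{ v float64 }

// Apply sets an explicit value: the non-nil pointer signals "caller chose this".
func (o topPOption) Apply(config *ChatConfig) { config.TopP = &o.v }

func WithTopP(v float64) ChatArgument { return topPOption{v: v} }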

And lo and behold, once I put this pattern in front of it, the model fell in line! Five minutes later, I had the snippets I required.

🎓 The Takeaway

In retrospect, the issue wasn’t merely bad AI output. My variable names weren’t as descriptive as they could have been, the interface served more as a type-safety check than as documentation, and some comments had grown stale. This context pollution, typical of any evolving codebase, nudged the model toward inappropriate patterns.

What should’ve been a one-hour manual task turned into a three-hour argument with an overly confident assistant. The silver lining? It validated why my abstraction was designed the way it was.

The pointer-based config wasn’t some exercise in over-engineering; it was a necessary design choice crafted to handle unset vs. explicit states across inconsistent vendor APIs.
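
Concretely, the bridging code on the provider side reduces to nil checks along these lines; vendorRequest stands in for any concrete vendor’s request struct:

// Sketch of provider-side bridging; vendorRequest is a placeholder
// for a concrete vendor's request type.
func applyConfig(cfg *ChatConfig, req *vendorRequest) {
    if cfg.TopP != nil {
        req.TopP = *cfg.TopP // caller explicitly chose a value
    }
    // Leaving req.TopP untouched preserves the vendor's own default.
    if cfg.Seed != nil {
        req.Seed = *cfg.Seed
    }
}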

🤝 Lessons Learned

The main takeaway? AI can be a stellar executor when given a precise blueprint, but as an architect? That’s a different story.

This is exactly why I took the plunge and created contenox/runtime. If we want agents to truly deliver serious work, solid abstractions and guardrails aren’t just useful—they're essential.

I invite you to join me in reclaiming control from the LLMs!