The Art of Vibe-Coding: Crafting a Personal Writing Assistant with Google AI Studio 🎨
This post is my submission for the DEV Education Track: Build Apps with Google AI Studio. Ready for a creative journey? Let's explore how I developed a personal writing assistant to help fiction writers, evolving from a mere concept to a functional MVP.
What I Built 🛠️
Not too long ago, I had this cool idea for a startup: a personal writing assistant for fiction writers. You know, before generative AI was a buzzword! When generative AI took off, I thought, whoa, time to jump on this train! 🚀✨
So I kicked things off with my concept, already documented on a product page I had whipped up ages ago, and it was finally time to see what AI could conjure.
My Experience 🧠
Initially, I tested the waters by feeding Gemini the webpage with my product features to see how it would respond. But spoiler alert: it flopped. The output was completely off-base. Talk about a letdown, huh? So, I decided to simplify everything.
Having previously built a character generator with Google AI Studio in just half an hour (not bad, right?), I knew I could make this work as well.
Rough Outline of Steps
Let's break it down into digestible chunks (no one likes a massive info dump, right?):
- Create an Editor with Gemini: This is where I aimed to generate follow-up suggestions in a cozy sidebar. Fun fact: I specifically had to ask for a rich-text editor; otherwise, it defaulted to a plain textarea, which is so passé. (There's a minimal editor sketch after this list.)
- Add Sidebar Tabs: The sidebar now features Characters, Events, Locations, Timeline, Notes, and Organizations, all auto-created by Gemini for quick access.
- Characters Tab: Gemini nailed it here! It churned out forms for characters with handy auto-creation features. Magical! 🪄
- Organizations Tab: Similar success with auto-generation; super easy peasy!
- Locations Tab: Yep, you guessed it! Flawless auto-generation once again.
- Remove Events Tab: Instead, I integrated Events within the Timeline Tab; easy enough!
- Drag & Drop for Events: This was trickier due to dependency hiccups. Who knew vibe-coding could be such a bumpy ride? š
- Focus Mode: Quick to implement but mysteriously vanished after another feature was added. Just classic software drama!
- Dark Mode: Done in a snap, but I felt like I had just discovered the black-and-white filter on Instagram!
- Timer for Writing Sessions: Because who doesn't love a good productivity boost? ⏱️
- Migrate to Next.js: My goal for future improvements.
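To give you an idea of what I meant by that editor request, here's a minimal sketch of a contentEditable-based rich-text editor in React + TypeScript (the kind of stack AI Studio scaffolds). This is my own illustrative version, not the code Gemini generated, and the component and prop names are made up:

```tsx
// Minimal rich-text editor sketch. Names are hypothetical, not from my app.
import React, { useRef } from "react";

export function ManuscriptEditor({ onChange }: { onChange: (html: string) => void }) {
  const editorRef = useRef<HTMLDivElement>(null);

  // document.execCommand is deprecated, but it is still the quickest
  // zero-dependency way to toggle bold/italic inside a contentEditable area.
  const format = (command: "bold" | "italic") => {
    document.execCommand(command);
    if (editorRef.current) onChange(editorRef.current.innerHTML);
  };

  return (
    <div>
      <button onClick={() => format("bold")}>B</button>
      <button onClick={() => format("italic")}>I</button>
      {/* contentEditable keeps formatting; a plain <textarea> only holds flat text */}
      <div
        ref={editorRef}
        contentEditable
        style={{ minHeight: "60vh", border: "1px solid #ccc", padding: "1rem" }}
        onInput={() => editorRef.current && onChange(editorRef.current.innerHTML)}
      />
    </div>
  );
}
```

The point of the ask was exactly this contrast: a contentEditable surface can hold bold, italics, and headings, while the default textarea Gemini kept giving me cannot.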
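And for the drag-and-drop step, here's roughly how timeline events can be reordered with the browser's native HTML5 drag-and-drop API and zero extra dependencies, which is the direction I'd lean after the version conflicts. Again, a sketch with made-up names, not the exact code from my app:

```tsx
// Dependency-free event reordering via native HTML5 drag and drop.
// TimelineEvent shape and component name are illustrative only.
import React, { useRef, useState } from "react";

interface TimelineEvent {
  id: string;
  title: string;
}

export function Timeline({ initial }: { initial: TimelineEvent[] }) {
  const [events, setEvents] = useState(initial);
  const dragIndex = useRef<number | null>(null); // index of the item being dragged

  const handleDrop = (target: number) => {
    const source = dragIndex.current;
    dragIndex.current = null;
    if (source === null || source === target) return;
    setEvents((prev) => {
      const next = [...prev];
      const [moved] = next.splice(source, 1); // remove from old position
      next.splice(target, 0, moved);          // reinsert at the drop position
      return next;
    });
  };

  return (
    <ol>
      {events.map((event, index) => (
        <li
          key={event.id}
          draggable
          onDragStart={() => (dragIndex.current = index)}
          onDragOver={(e) => e.preventDefault()} // required so onDrop fires
          onDrop={() => handleDrop(index)}
        >
          {event.title}
        </li>
      ))}
    </ol>
  );
}
```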
So, is it production-ready? Not yet. But it's a stellar prototype. Would I deploy this version? Nah! It needs more tweaks and customization love. 🛠️
After pivoting to Next.js, I did hit a code wall. So, back to an older checkpoint it was, just to scoop up some juicy demo video content. Note to self: Next.js migration is on the list for later!
The takeaway? Adding features was a real struggle once the codebase expanded. It got messy, fast. But hey, isnāt that part of the coding adventure?
Challenges with Vibe-Coding 🚧
Even in AI-driven development, challenges echo those of human coders:
- Bugs, of course! They often came down to dependency issues, with conflicting library versions being loaded in the browser.
- As the code grew, introducing new features became increasingly tricky. I literally had to re-ask for features I had previously built; talk about frustrating!
- At some junctures, Gemini described what it "planned" to implement while making no actual changes. And yes, I had to tell it to implement things multiple times before anything happened.
- Eventually, the speed fizzles out and the dependency errors pile up. At that point, I decided to scale back and continue development offline.
Takeaways
Google AI Studio is a gem for whipping up a minimalist MVP, but it's less ideal for production-level deployments. While deploying via Cloud Run is possible, the app still needs more plumbing behind the scenes. This was a solid kickoff for hooking up core features, but the real work lies ahead; classic AI, not quite there yet.
Demo
Here's a link to my demo video.
Note: I didn't showcase image generation because it didn't run smoothly on my local machine. To use the Imagen model, you really need to tap into the Vertex AI service, which means dealing with a different auth method and a Node.js backend. So, while everything worked within Google AI Studio, local execution was a whole different story! (There's a rough sketch of that flow below.)
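For the curious, here's a rough sketch of what that local path might look like: a Node.js backend calling Imagen through the Vertex AI predict endpoint using Google Cloud credentials instead of an AI Studio API key. The model ID, environment variable names, and request shape are assumptions for illustration only, so check the current Vertex AI docs before relying on them:

```ts
// Hedged sketch: calling Imagen via Vertex AI from Node.js (18+, global fetch).
// Auth comes from Application Default Credentials, not a Gemini API key.
import { GoogleAuth } from "google-auth-library";

const PROJECT = process.env.GCP_PROJECT ?? "my-project";    // hypothetical env var
const LOCATION = process.env.GCP_LOCATION ?? "us-central1"; // hypothetical env var
const MODEL = "imagen-3.0-generate-001";                     // verify current model IDs

export async function generateImage(prompt: string) {
  // Fetch an OAuth access token from Application Default Credentials.
  const auth = new GoogleAuth({ scopes: "https://www.googleapis.com/auth/cloud-platform" });
  const client = await auth.getClient();
  const { token } = await client.getAccessToken();

  const url =
    `https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT}` +
    `/locations/${LOCATION}/publishers/google/models/${MODEL}:predict`;

  const response = await fetch(url, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      instances: [{ prompt }],          // assumed Imagen request shape
      parameters: { sampleCount: 1 },
    }),
  });
  return response.json(); // predictions typically carry base64-encoded image bytes
}
```

That extra hop (service account or gcloud login, project and region config, a backend to hold the credentials) is exactly what made local execution a different story from the in-Studio experience.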
Thanks for stopping by! If you're eager to see the deployed version, leave a comment below. I promise I won't bite! 💬