Zero Customers, Infinite Lessons
It’s been about two weeks since my app launched, and I have… drumroll… 0 paying customers. But that’s okay!
As of today, I’ve had over 1,000 impressions, 50+ downloads with a healthy 8.7% conversion rate, and a solid 5-star rating (thank you, friends—I see you). But here’s the kicker: after a very long time, I was able to move beyond “just a prototype” and actually launch it. All thanks to the v-word—Vibe Coding. After a lot of struggle, trial and error, and probably way too many conversations with AI models, I learned how to nudge them, or, if I may say, bend them to my will. So let’s talk about what got me here, what I learned along the way, and how you can vibe with AI to ship your own projects.
The AI coding landscape: Know your tools
Before we dive into the tactics, let’s see what’s out there. The AI coding tool space is massive, with a variety of interfaces, and each tool serves a different purpose:
The “Do It All” Platforms: Think Lovable, Bolt, V0, Replit Agent, and a million more popping up every week. These are great for going from idea to deployed prototype in minutes. Perfect for validating concepts or creating that initial base that you can iterate on.
Coding Assistants in Your IDE: GitHub Copilot, Cursor, Windsurf, and similar tools available to you right where you code. They autocomplete, suggest, and help you build iteratively.
Code Assistants in the Terminal: Claude Code, Aider, and others let you work from the command line. These are powerful for developers who heavily leverage the terminal for tasks beyond coding.
Coding Agents in the IDE or on the Web: Tools like Cursor’s Composer (in the IDE or as an agent on the web), Windsurf’s Cascade, and Claude Code on the web work more autonomously: planning, executing multiple steps, and handling larger refactors. The web versions are great for connecting to your GitHub projects and working on tasks like bug reviews and pull request analysis (the case where multiple people work on a project and you need help analyzing code to check whether it can be merged into the main codebase).
My 10 Takeaways from Vibing with AI Code Assistants
1. Start with a wireframe and add it to your project context
Before you write a single line of code, sketch out your app. You need at least a hand-drawn diagram, or better yet, a full-fledged Figma design. Since AI models are multi-modal, adding this to your project documentation helps them understand the big picture. They can make smarter decisions about components, routing, and architecture when they know what you intend to build.
2. Identify your external systems early
Think about everything you would need to make this app a reality:
- What would the backend look like, and what APIs would you need? You might use a backend provider like Firebase/Supabase with their edge functions, or a full-fledged backend app deployed on AWS.
- What would the database look like? It is worthwhile to spend time with the AI model chalking out all the functionality and the data needed to power it, and turning that into a data model (see the sketch after this list). This helps with documentation, reference, and consistency, since the AI model can look up these details when building out functionality.
- What third-party services will you need? An authentication provider? A payment processor? Getting this clear upfront helps the AI set up the right structure from the beginning rather than retrofitting integrations later.
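For illustration, here’s a minimal sketch of what such a data model might look like, assuming a hypothetical habit-tracking app (the entities and fields are placeholders, not from my actual project):

```typescript
// data-model.ts — a hypothetical data model agreed with the AI up front
// and kept in the repo so the assistant can reference it while building.

interface User {
  id: string;                      // usually supplied by the auth provider
  email: string;
  subscriptionTier: "free" | "pro";
  createdAt: Date;
}

interface Habit {
  id: string;
  userId: string;                  // references User.id
  name: string;
  schedule: "daily" | "weekly";
  createdAt: Date;
}

interface HabitEntry {
  id: string;
  habitId: string;                 // references Habit.id
  completedAt: Date;
  note?: string;
}

export type { User, Habit, HabitEntry };
```

Even a rough version like this gives the AI something concrete to stay consistent with when it later generates queries, API routes, and screens.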
3. The big ugly draft vs. iterative builder approach
Here’s a critical decision point: Do you want the AI to generate everything in one massive go, or build piece by piece? I learned the hard way that iterative building wins almost every time. Yes, the “spray and paint” method (asking for everything at once) feels faster, but you end up with code you don’t understand and bugs that take ages to fix. Build incrementally, understand each piece, and you’ll ship something that won’t make you want to give up midway.
4. Local infrastructure helps you test
Set up your local development environment properly—Docker, local servers, test databases, whatever you are comfortable with. Spending time understanding and learning this complexity pays off for a long time to come. Being able to run and test changes locally means you can iterate 10x faster. The AI can interact with the backend, automatically test changes, and recommend fixes. This tight feedback loop is everything. Another option is to leverage MCPs and integrations (for example, the Supabase CLI to make changes in the cloud), but getting the AI to use them reliably is harder than asking it to make local calls.
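As a small illustration, here’s the kind of smoke-test script I mean; it assumes a local backend on port 3000 with a health endpoint, both of which are placeholders for whatever your stack exposes:

```typescript
// smoke-test.ts — a minimal local check the AI can run after each change.
// Assumes a backend running locally on port 3000 (e.g. `npm run dev`).
// Run with: npx tsx smoke-test.ts

async function main(): Promise<void> {
  // Hypothetical health endpoint; swap in whatever your local server exposes.
  const res = await fetch("http://localhost:3000/api/health");
  if (!res.ok) {
    throw new Error(`Health check failed with status ${res.status}`);
  }
  console.log("Local backend is up:", await res.json());
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Because everything runs locally, the AI can execute the script itself, read the output, and react to failures immediately instead of guessing.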
5. Always rely on web search
The AI models have training cutoffs, and best practices evolve. Make it a habit to search for:
- Current best practices for the framework you’re using
- Verification of methodology when trying something new
- Latest package versions and compatibility issues
I’ve saved myself countless hours by having the AI search before implementing something that might be outdated or deprecated.
6. Be descriptive to the best of your abilities
Vague prompts get vague results. Instead of “add authentication,” try “implement email/password authentication using NextAuth.js with a PostgreSQL database, including login, signup, password reset, and protected routes.” The more context you provide, the better the output. Include:
- What you want to achieve
- What framework/libraries to use
- Any specific patterns or approaches you prefer
- What success looks like
If you are unable to be descriptive, spend some time chatting with the AI to understand the technology, what options exist, and what it recommends you do. Challenge it, question its assumptions, and more often than not you will get a better output than going in blind.
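To make the “protected routes” part of that NextAuth.js example concrete, here is roughly what one small piece of the output could look like using NextAuth.js v4’s middleware helper (a sketch that assumes the rest of the NextAuth setup already exists; the matcher path is a placeholder):

```typescript
// middleware.ts — protecting routes with NextAuth.js v4's built-in middleware.
// Assumes NextAuth is already configured elsewhere in the app.
export { default } from "next-auth/middleware";

export const config = {
  // Placeholder path: only authenticated users can reach /dashboard and below.
  matcher: ["/dashboard/:path*"],
};
```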
7. Maintain project structure and documentation
Create a PROJECT_STRUCTURE.md or similar document that explains your architecture. Reference atomic design principles or the best practices for the technology you are using (for example, MVVM for iOS). Update this as you build. The AI can reference it to maintain consistency, rebuild context when you revisit the codebase, and make better decisions about where new code should live.
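As a rough sketch, such a document might look like this (the folders and stack here are placeholders; describe whatever your project actually uses):

```markdown
# PROJECT_STRUCTURE.md

## Architecture
Next.js app following atomic design: atoms -> molecules -> organisms -> pages.

## Folders
- `src/components/atoms/`: buttons, inputs, icons
- `src/components/molecules/`: form fields, cards
- `src/components/organisms/`: headers, dashboards
- `src/lib/`: API clients and helpers
- `src/types/`: shared TypeScript types

## Conventions
- New UI goes into the smallest layer that makes sense.
- Update this file whenever a new top-level folder is added.
```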
8. Use system prompts for repeat instructions
Both Claude and Cursor support custom instructions or system prompts (like .cursorrules or CLAUDE.md files). Put your repeat instructions here:
- Use TypeScript for type safety
- Follow atomic design patterns
- Write tests for new features
- Use Tailwind for styling
- Prefer functional components
- Reference the internet for best practices
Here’s what most of my system prompts contain: Use current best practices for the framework we’re using. Produce clean, bug-free code. Ask clarifying questions whenever you’re unsure about requirements. Test the solutions wherever possible. Research any methodology or approach you’re uncertain about. Today’s date is [current date] (since training cutoffs can otherwise lead models to use outdated information). Leverage relevant tools and MCP (Model Context Protocol) servers for custom capabilities when needed. Here are the capabilities available to you: CLI, memory, Excel, Docker, etc.
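Put together, a trimmed-down version of one of those files might look like this (a sketch; adjust the stack-specific lines to your own project):

```text
# .cursorrules (or CLAUDE.md)
- Use TypeScript for type safety; prefer functional components.
- Follow atomic design patterns; use Tailwind for styling.
- Use current best practices for the framework we're using; search the web when unsure.
- Ask clarifying questions whenever requirements are ambiguous.
- Write tests for new features and run them before calling a task done.
- Today's date is [current date].
- Available capabilities: CLI, memory, Docker.
```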
Now you don’t have to repeat yourself every single conversation. The internet also has a ton of examples for each type of project; reference them and adapt them to your preferences.
9. Maintain an action list or todo list
Keep a living TODO.md or TASKS.md file in your project. Have the AI update it as you complete tasks and identify new ones. This creates a shared context of what’s done, what’s next, and what’s blocked. It’s like a project manager that persists across chat sessions, context windows, and different tools.
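A sketch of what that file can look like (the entries are placeholders):

```markdown
# TODO.md

## Done
- [x] Email/password auth with login and signup screens
- [x] Basic habit list screen

## In progress
- [ ] Streak calculation (timezone edge cases)

## Blocked
- [ ] Payments (waiting on provider approval)
```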
10. Design systems are worth the time
Early on, define your design system and document it:
- Color palette (primary, secondary, accent, neutrals)
- Typography scale
- Component library approach
- Spacing system
Yes, it feels like busywork when you just want to ship, but it pays massive dividends. Your AI assistant will use these consistently, and your app will look complete and consistent from the get-go.
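One lightweight way to document it is a tokens file that both you and the AI can reference from every component (the values below are placeholders):

```typescript
// design-tokens.ts — a sketch of a documented design system the AI can reuse.
// All values are placeholders; swap in your actual palette and scale.

export const colors = {
  primary: "#2563eb",
  secondary: "#7c3aed",
  accent: "#f59e0b",
  neutral: { 100: "#f5f5f5", 500: "#737373", 900: "#171717" },
} as const;

export const typography = {
  fontFamily: "Inter, system-ui, sans-serif",
  // A simple modular scale, in rem.
  sizes: { sm: 0.875, base: 1, lg: 1.25, xl: 1.563 },
} as const;

export const spacing = {
  // 4px base unit.
  xs: 4,
  sm: 8,
  md: 16,
  lg: 24,
  xl: 40,
} as const;
```

Pointing the system prompt at this file keeps new screens on the same palette and spacing without repeating yourself.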
A few more pointers
Import a foundation, then iterate: Starting from an empty directory can be overwhelming. Feel free to use those “do it all” platforms (Lovable, Bolt, V0) to generate a solid base, then move to your IDE for iterative development. Or you can leverage a solid GitHub starter template.
You have to invest time in learning how to deploy: The AI tools are amazing at writing code but often terrible at deployment. Spend considerable time building up your deployment capability separately. Learn Docker, CI/CD, and your hosting platform’s deployment process. This will take you a long way when making iterative changes and deployments.
Take stock after every major turn and turn it into a learning opportunity: When you complete a feature or hit a milestone, pause. If you have time, use this as an opportunity to really understand the code that was generated. Ask the AI to explain sections. This is how you learn both coding AND troubleshooting. Future you will thank present you. I have learnt a considerable amount of iOS and Go this way.
The tool comparison
After building with all three extensively and interchangeably, here’s my take:
Windsurf has the most generous limits. Even with Claude’s recent usage limit changes and the expensive 2x credit requirement, it’s still the better deal for extended coding sessions. The Cascade feature and the model availability are quite good.
Cursor (with Claude running under the hood) seems to have the highest “hit rate” across most models. What I mean is: when I ask it to do something, it just works more often than not. The tradeoff? It’s the priciest option as the product matures.
Claude Pro (via claude.ai) gives generous limits for iterative development. The pesky daily limits do get annoying, but it’s quite helpful when you are looking for iterative fixes rather than building 10 things at a time.
My strategy: Use Windsurf for heavy building days, Cursor for days when I need the highest-quality output or for complex tasks, and Claude Pro for architectural discussions and planning.
The bottom line
Learn to vibe with these tools. Give them context, ask them to search when needed, iterate thoughtfully, and maintain clear documentation. The AI handles the syntax; you handle the vision and architecture.
If you found this article helpful or have your own experiences with AI coding assistants, feel free to reach out. Always open to learning more and hearing new perspectives on building with AI.