Mistakes I Made While Vibecoding My SaaS, YTMetrix, and Why I'm Open Sourcing It
A founder postmortem on building a YouTube analytics platform nobody asked for, the pivots that consumed months of my life, and why giving it away is the best decision I've made.
TL;DR
- 5 months, 50,000 lines of code, 127 database migrations, 5 beta users, 0 paying customers
- Vibecoding makes building dangerously easy — I built 25 features, only 5 survived
- The OAuth trust problem killed adoption — creators won't connect channels to indie tools
- The CSV-to-Claude workflow commoditized my entire value proposition overnight
- I'm open sourcing YTMetrix as a case study for indie hackers considering vibecoding a SaaS
I spent five months building a YouTube analytics SaaS called YTMetrix. I wrote roughly 50,000 lines of code. I deleted about 15,000 of them. I ran 127 database migrations. I built 25+ features. Maybe 5 of them are useful.
I had 5 beta users. Zero paying customers. And one very expensive education in why technical excellence does not equal product-market fit.
This is the full story of what I built, what went wrong, what I learned, and why I'm open sourcing the entire thing today.
If you're an indie hacker thinking about vibecoding your way to a SaaS product, this article is for you. Not because I'll tell you how to succeed. Because I'll tell you exactly how I failed, so you don't have to.
The Seed: Where the Idea Came From
The idea for YTMetrix didn't come from a whiteboard session or a market analysis spreadsheet. It came from real work.
A few years ago, I was working with Ankur Warikoo, helping grow his YouTube channel from 10,000 subscribers to 100,000 and eventually beyond a million. During that time, I had a front-row seat to something that fundamentally changed how I thought about YouTube growth: watching Ankur read and interpret YouTube analytics data.
This wasn't surface-level stuff. It wasn't "check your views and call it a day." Ankur had a systematic way of looking at impressions, click-through rates, average view duration, audience retention curves, and traffic sources to make content decisions. He'd look at a video's performance data and know exactly what worked, what didn't, and what to make next. It was data-driven content creation at a level I hadn't seen before.
I was hooked. I started applying the same methodology to other channels I worked with. And everywhere I went, I saw the same problem: YouTube Studio gives you the data, but it buries it under layers of interface clutter, spread across multiple tabs and screens, and it makes cross-video comparison genuinely painful.
That's when the idea first entered my brain: What if there was a dedicated analytics platform that pulled YouTube data and presented it the way a growth strategist actually needs to see it?
I sat on this idea for three and a half to four years. I knew what the product should look like. I knew who it was for. But I didn't build it because the technical barrier to entry was too high for a solo builder.
And then vibecoding happened.
Why Now? The Vibecoding Era
The reason I was able to build YTMetrix in 2025 and not in 2021 is simple: the tools didn't exist back then.
When I say "vibecoding," I mean something very specific. I'm not talking about writing code traditionally with AI autocomplete. I'm talking about a fundamentally different development workflow where you describe what you want to build, iterate through conversation with an AI coding assistant, and arrive at production-quality code without being a traditional software engineer.
I come from a content, marketing, and brand background. I can read code. I understand systems. I've built no-code automations and worked with APIs. But I'm not a computer science graduate.
With vibecoding, though, I could build a full-stack application with:
- Google OAuth integration for YouTube channel connection
- Real-time data sync with the YouTube Analytics API
- A PostgreSQL database with 30+ tables
- AI-powered video analysis using Google Gemini
- A clean, responsive UI built with shadcn/ui
- Docker containerization for easy deployment
The trap is less obvious but far more important: vibecoding makes it dangerously easy to build the wrong thing. When adding a new feature takes hours instead of weeks, you lose the natural friction that forces you to think about whether you should build it at all.
I fell into this trap repeatedly.
Week 1: The MVP That Wasn't
I started building YTMetrix in late October 2025, working nights and weekends around my full-time marketing role.
The initial scope was deliberately small:
- Connect a YouTube channel via Google OAuth
- Pull key metrics: views, watch time, subscribers, engagement
- Display everything in a clean, simple dashboard
- Sort videos by engagement rate to surface top performers
Estimated timeline: two weeks.
The first few days were magical. I had the OAuth flow working. I was pulling data from the YouTube Analytics API. I had a basic Next.js application rendering video thumbnails with metrics next to them.
And then I made my first mistake. Instead of deploying this ugly, working thing and putting it in front of even one other person, I started making it "better."
"Better" meant adding a goal system. "Better" meant building a data pipeline. "Better" meant designing for scale I didn't have.
My two-week MVP estimate would eventually stretch to five months.
Pivot 1: The Goal Taxonomy Rabbit Hole
What I Built: Goal Taxonomy
Three days into development, I had what I thought was a brilliant insight: not all YouTube videos should be measured the same way.
So I decided to build a Goal Taxonomy System:
- 12 goal types: Education, Entertainment, Tutorial, Review, Vlog, News, Commentary, Shorts Growth, Community Building, Product Launch, Collaboration, and Evergreen
- Custom scoring weights per goal type
- Performance grading from A+ to F
- Mandatory goal selection before viewing analytics
I built 15+ database tables. I wrote 8 RPC functions in Supabase. I designed 4 separate UI screens.
Why Goal Taxonomy Failed
The first beta user said: "I just want to see my numbers."
Creators don't think in goals. They publish a video. They want to know how it's doing. They don't want to classify their content before seeing a single number.
Mandatory steps kill activation. The activation funnel had a 100% drop-off at the goal selection step.
The Goal Taxonomy Fix
I ripped the entire Goal Taxonomy System out. Deleted roughly 3,000 lines of code. Replaced it with engagement-based ranking: videos sorted by engagement rate. No configuration needed.
Three weeks of work, gone. But the product was immediately better for it.
The lesson: When features are cheap to build, the real discipline is in what you choose to delete.
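For context, the ranking that replaced all those tables fits in a few lines. This is a minimal sketch, not YTMetrix's exact code — the `VideoStats` shape and the engagement formula (interactions per view) are my illustrative assumptions:

```typescript
// Minimal sketch of engagement-based ranking. Field names and the
// engagement formula are assumptions, not YTMetrix's exact schema.
interface VideoStats {
  title: string;
  views: number;
  likes: number;
  comments: number;
}

function engagementRate(v: VideoStats): number {
  // Guard against division by zero for brand-new videos.
  if (v.views === 0) return 0;
  return (v.likes + v.comments) / v.views;
}

// Sort best-first. No goal selection, no configuration -- just a ranked list.
function rankByEngagement(videos: VideoStats[]): VideoStats[] {
  return [...videos].sort((a, b) => engagementRate(b) - engagementRate(a));
}
```

That is roughly the entire "feature" that replaced 15+ tables, 8 RPC functions, and 4 UI screens.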
Pivot 2: The dbt Data Pipeline Over-Engineering
What I Built: dbt Pipeline
I set up a proper data pipeline with dbt:
- Three database schemas: Bronze (raw), Silver (cleaned), Gold (dashboard-ready)
- 15+ dbt models handling transformations
- GitHub Actions workflows for scheduled dbt runs
- Weekly aggregation pipelines
Why the dbt Pipeline Failed
I had zero users. I was building enterprise data infrastructure for an audience of one.
Supabase and dbt do not play nicely together. Connection pooling issues, IPv6 configuration problems, credential management headaches.
Debugging became a nightmare. Each layer added a new potential failure point.
The dbt Pipeline Fix
I disabled dbt entirely. Wrote a single RPC function:
```sql
SELECT public.run_channel_aggregation_pipeline('channel_id');
```
One function call. No pipeline. No scheduling. Works instantly.
The lesson: You don't need enterprise architecture for zero users.
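From the app side, that one call is just a supabase-js `.rpc()` invocation. Here's a hedged TypeScript sketch — the RPC name is the real one from the post, but the argument name, the narrow client interface, and the error handling are my illustrative assumptions:

```typescript
// The small slice of the supabase-js client surface this sketch needs.
interface RpcResult {
  error: { message: string } | null;
}

interface RpcClient {
  rpc(fn: string, args: Record<string, unknown>): Promise<RpcResult>;
}

// The entire replacement for the dbt pipeline: one RPC call into Postgres.
// Argument name (channel_id) is an assumption about the function signature.
async function runAggregation(supabase: RpcClient, channelId: string): Promise<void> {
  const { error } = await supabase.rpc("run_channel_aggregation_pipeline", {
    channel_id: channelId,
  });
  if (error) throw new Error(`aggregation failed: ${error.message}`);
}
```

In the real app you'd pass the client returned by `createClient(...)`; typing against the narrow interface also makes the function testable without a live database.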
The 127 Migration Cautionary Tale
Here's a number that tells the story: 127 database migrations in five months, for a product that never had more than five beta users.
| Category | Approximate Migrations | Net Result |
|---|---|---|
| Core YouTube data models | 15 | Kept |
| Goal Taxonomy System | 20 | All deleted |
| dbt schema restructuring | 18 | All reversed |
| Multi-platform preparation | 12 | All deleted |
| AI features | 15 | Partially kept |
| Browser extension sync | 8 | Deleted |
| Channel watchlists | 10 | Deleted |
| Auth and user management | 8 | Kept |
| CSV upload | 6 | Kept |
| Miscellaneous | 15 | Mixed |
More than half of my migrations were for features that no longer exist.
The migrations reveal a fundamental issue: I was using the database as a sketch pad. The AI assistant will happily generate a perfect migration file for your half-baked idea. It won't ask whether the feature is worth building.
Pivot 3: The Browser Extension Saga
The Problem
The YouTube Analytics API doesn't provide impressions and CTR data for most creators. These are arguably the two most important metrics.
What I Built: Browser Extension
A Chrome browser extension that scraped YouTube Studio's DOM when creators visited it, sent the data back to YTMetrix, and merged it with API data.
Why the Extension Failed
YouTube Studio changes its DOM constantly. Every update broke the extension.
The user experience was terrible. Six steps before seeing complete data.
Chrome Web Store review is slow. Fixes took days to reach users.
The Simpler Solution
CSV upload. Export from YouTube Studio, upload to YTMetrix. Two steps. No extension.
I spent two weeks building the extension. The CSV upload took two days and is more reliable.
The lesson: The "clever" solution is rarely the right solution.
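The CSV path barely needs code at all. Here's a minimal sketch of the parsing side — the column names in the test are assumptions based on typical YouTube Studio exports, so check the header row of your own export before relying on them:

```typescript
// Parse a YouTube Studio CSV export into per-video rows.
interface CsvRow {
  [column: string]: string;
}

// Minimal CSV line splitter that handles double-quoted fields
// (video titles often contain commas) and escaped quotes ("").
function splitCsvLine(line: string): string[] {
  const fields: string[] = [];
  let current = "";
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (ch === '"') {
      if (inQuotes && line[i + 1] === '"') {
        current += '"'; // escaped quote inside a quoted field
        i++;
      } else {
        inQuotes = !inQuotes;
      }
    } else if (ch === "," && !inQuotes) {
      fields.push(current);
      current = "";
    } else {
      current += ch;
    }
  }
  fields.push(current);
  return fields;
}

// First line is the header row; every following line becomes one row object.
function parseCsv(text: string): CsvRow[] {
  const lines = text.trim().split(/\r?\n/);
  const headers = splitCsvLine(lines[0]);
  return lines.slice(1).map((line) => {
    const values = splitCsvLine(line);
    const row: CsvRow = {};
    headers.forEach((h, i) => {
      row[h] = values[i] ?? "";
    });
    return row;
  });
}
```

Forty lines, no DOM scraping, no Chrome Web Store review queue.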
The OAuth Trust Problem Nobody Warned Me About
This is arguably the most important lesson from the entire journey.
In 2024 and 2025, there was a wave of YouTube channel hijackings through compromised OAuth tokens and phishing attacks. The creator community was justifiably paranoid about granting third-party applications access to their Google accounts.
When I put YTMetrix in front of prospective users, the reaction was consistent: "I'm not connecting my YouTube account to that."
It didn't matter that YTMetrix only requested read-only access. Creators had been burned, and they weren't going to trust an indie builder's new analytics platform.
The lesson: Technical security is necessary but not sufficient. User trust is a product feature, and it takes time to build.
The CSV-to-Claude Workflow That Killed My Moat
Here's what finally convinced me to open source YTMetrix.
A creator friend showed me their workflow:
- Go to YouTube Studio
- Export analytics as CSV
- Drop the CSV into Claude
- Ask: "What are my best and worst performing videos? What patterns do you see?"
No signup. No OAuth. No monthly subscription. Just a CSV and a conversation.
The things I thought were YTMetrix's moat turned out to be commoditized overnight:
- Multi-channel management? Drop multiple CSVs into one conversation.
- Custom metrics? Ask Claude to calculate whatever you want.
- Content recommendations? "Based on my data, what should I make next?" is just a prompt.
The lesson: Before building a platform, check whether a workflow already exists that's good enough.
What Actually Worked
Not everything was a mistake:
- Simple OAuth Connect — Click, authorize, data appears
- Clean Video List — Thumbnails, metrics, sorted by engagement
- Refresh Button — Manual trigger felt responsive
- CSV Upload — Simple, reliable, works every time
- Video AI Insights — Gemini analysis produced useful recommendations
The pattern: Every surviving feature is simple, requires minimal configuration, and delivers immediate value.
The Tech Stack
| Technology | Why |
|---|---|
| Next.js 16 | App Router, Server Components, API Routes |
| TypeScript | Caught hundreds of bugs |
| TailwindCSS | Fast iteration |
| shadcn/ui | Beautiful, accessible components |
| Supabase | Managed Postgres with Auth |
| Google Gemini | Video content analysis |
| Docker | Containerized deployment |
Total cost to build: ~$50. Domain, hosting, API credits.
Monthly running cost: $20-25. Supabase + hosting + minimal AI API usage.
The Numbers: A Brutally Honest Accounting
| Metric | Value |
|---|---|
| Total calendar time | 5 months |
| Active development time | 6-8 weeks |
| Lines of code written | ~50,000 |
| Lines of code deleted | ~15,000 |
| Database migrations | 127 |
| Features built | 25+ |
| Features that survived | ~5 |
| Beta users | 5 |
| Paying customers | 0 |
An 80% waste rate. Not because the features were poorly built — they were just the wrong features.
Why I'm Open Sourcing YTMetrix
- Releasing the sunk cost — The code has value even if the business doesn't
- Proving the journey — I can take an idea and ship it, end to end
- A case study for vibecoding — Both the possibilities and pitfalls
- Contributing to open source — Every dependency I used was open source
- Closure — Transforming an abandoned SaaS into a community resource
What I'd Do Differently
- Ship in Week 1, Not Month 3 — Deploy the ugly working thing immediately
- Talk to 20 Creators First — Before writing a line of code
- Validate Willingness to Pay — Put a price tag on the landing page from day one
- One Feature at a Time — Sequential validation instead of parallel speculation
- Resist the Vibecoding Temptation — No new feature until the current one has 3 users
How to Use the Repository
Quick Start:
```bash
git clone https://github.com/metashwat/ytmetrix.git
cd ytmetrix
cp .env.example .env
# Edit .env with your credentials
docker compose up
```
What You'll Need:
| Service | What For |
|---|---|
| Supabase | Database, Auth |
| Google Cloud Console | YouTube API, OAuth |
| Google Gemini API | Video AI Insights |
The repository includes complete source code, database migrations, Docker configuration, and setup documentation.
License: MIT — do whatever you want with it.
Final Thoughts
YTMetrix is the most educational failure of my career so far.
I can ship. From idea to production with OAuth, AI, real-time sync, and containerized deployment.
Vibecoding is real. A non-traditional developer built a full-stack SaaS working nights and weekends. The tools exist and they work.
The market always wins. It doesn't matter how good your code is. If the market doesn't want your product, the market doesn't want your product.
Open source is not failure. It's recognizing that code has value beyond commercial vision.
If you're an indie hacker reading this: build fast, but validate faster. Vibecoding gives you the speed to build anything. Use that speed to test whether anyone wants it before you build all of it.
The code is free now. The lessons cost me five months. Take both.
Repository: github.com/metashwat/ytmetrix
License: MIT
Status: Open Source. PRs welcome.
If this postmortem was useful, share it with another indie hacker who's about to build something without talking to users first. They'll thank you later.


