Why I Didn't Fine-Tune Models for My MVP
When I started building Likeyou AI, the obvious path seemed to be fine-tuning a model for personality matching. Everyone talks about fine-tuning. It's what "real" AI companies do, right?
The Fine-Tuning Trap
Here's what nobody tells you: fine-tuning is expensive, slow, and often unnecessary for an MVP. The data requirements alone can kill your timeline. You need thousands of examples of good matches, which I didn't have because... the product didn't exist yet.
The chicken-and-egg problem is real: you need data to train a model, but you need a model to get the product working, but you need a product to collect data.
Prompt Engineering First
Instead of fine-tuning, I focused on prompt engineering against an off-the-shelf base model. The results were surprisingly good. By carefully structuring the prompts - defining what compatibility means, providing scoring rubrics, including negative examples - I got 80% of the way there without any training.
// Instead of training on "these users matched well"
// I told the model explicitly what to look for:
"Score compatibility based on:
- Shared interests (weight: 0.3)
- Communication style similarity (weight: 0.25)
- Life goals alignment (weight: 0.25)
- Complementary traits (weight: 0.2)"
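One nice property of a rubric like this is that the weighting can live outside the model entirely. A minimal sketch, assuming the model returns a 0-1 subscore per dimension (the function and key names here are hypothetical, not from the actual Likeyou AI codebase):

```python
# Hypothetical post-processing: the model scores each dimension 0-1,
# and the final compatibility score is a deterministic weighted sum.
WEIGHTS = {
    "shared_interests": 0.30,
    "communication_style": 0.25,
    "life_goals": 0.25,
    "complementary_traits": 0.20,
}

def compatibility_score(subscores: dict[str, float]) -> float:
    """Combine per-dimension subscores using the rubric weights."""
    return round(sum(WEIGHTS[k] * subscores.get(k, 0.0) for k in WEIGHTS), 3)

print(compatibility_score({
    "shared_interests": 0.8,
    "communication_style": 0.6,
    "life_goals": 0.9,
    "complementary_traits": 0.5,
}))  # 0.3*0.8 + 0.25*0.6 + 0.25*0.9 + 0.2*0.5 = 0.715
```

Keeping the arithmetic in code rather than asking the model to do it means you can tune the weights without touching the prompt, and the model only has to judge, not calculate.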
When to Fine-Tune
Fine-tuning makes sense when:
- You have thousands of labeled examples
- Prompt engineering hits a ceiling you can't break through
- The task is highly specialized and domain-specific
- You need consistent formatting that prompts can't guarantee
For an MVP? Prompt engineering is almost always the right choice. Ship fast, validate the idea, then optimize with training data you've collected from real users.
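That last step - collecting training data from real users - is worth setting up from day one. A minimal sketch of what that could look like, appending each scored match plus the user's reaction to a JSONL file (the record fields and feedback labels here are illustrative, not the app's actual schema):

```python
# Hypothetical logging sketch: record every model-scored match together
# with the user's outcome, so real usage gradually builds the labeled
# dataset that fine-tuning would need later.
import json
from datetime import datetime, timezone

def log_match(path: str, pair_id: str, model_score: float, user_feedback: str) -> None:
    record = {
        "pair_id": pair_id,
        "model_score": model_score,          # what the prompt-based scorer said
        "user_feedback": user_feedback,      # e.g. "matched" or "passed"
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_match("matches.jsonl", "u42-u87", 0.715, "matched")
```

When (if) fine-tuning ever becomes worthwhile, pairs where the model's score disagreed with the user's feedback are exactly the examples worth training on.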
The Lesson
Don't let "best practices" slow you down. The goal of an MVP is learning, not perfection. An off-the-shelf base model plus good prompts got me to a shippable product in weeks instead of months. Now I have real user data to inform whether fine-tuning is even necessary.
Spoiler: it probably isn't. The prompts keep getting better.