Most ideas don't deserve six months of engineering. A small minority do. The whole job of validation is figuring out which one yours is, fast and cheap, before you commit a quarter of your life to the wrong one.
This is the framework I use, the one I've watched work for first-time founders and ex-FAANG engineers alike, and the one that AI builders like Marcus collapse from "two months of building" into "two weeks of asking".
The mistake every smart founder makes
You won't be hurt by ideas that are obviously stupid. You'll be hurt by ideas that sound smart, that pass dinner-table review, and that require a real product to be tested. Because the moment a real product is required, the cost of testing balloons — engineering time, design rounds, back-end glue, support — and you stop being able to run cheap experiments.
The whole point of pre-build validation is to keep the experiment cost in three-digit euros, not five. If you can't get a clear yes/no signal under €500, your test design is wrong, not your idea.
What you're actually trying to learn
One question, before anything else: will a specific person, with a specific job to be done, give you specific money for this specific outcome?
Notice what's not in there. Not "is the market big". Not "would people use it if it were free". Not "is there a category". Those are research questions for the second hour. The first-hour question is whether anyone, anywhere, will pay you something for the thing you're describing — and what they say when you describe it badly the first time.
The five tests, in priority order
1. The thirty-second pitch test (cost: €0, time: 1 day)
Write the offer in three lines: who, what, why. Send it as a one-paragraph DM to twenty people who you believe have the problem. Not friends — strangers in the right LinkedIn or Discord, with five minutes of personalisation each.
The bar: do at least three reply with a "tell me more"? If two or fewer show that kind of interest, your pitch is unreadable, your audience is wrong, or your channel is wrong. Diagnose which, then iterate. Don't move on.
2. The landing-page test (cost: €0–€100, time: 2 days)
Build a single landing page that promises the outcome and asks for an email — or, better, a credit-card pre-order with a refund policy. The page should look like the company exists. Hero, three benefits, social proof if you can fake it honestly (a testimonial from a friend who is a real customer counts), pricing, and a sign-up form.
If you build this as a hand-coded HTML page, it takes a week. If you build it with an AI builder, it takes an afternoon, and the polish doesn't matter; the experiment is the same. The bar: from 100 visitors driven by either a small ad or your own outreach, do at least 5 leave their email? That's 5%; anything below 3% means the offer isn't sharp enough yet.
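If you prefer the bar as something you can paste numbers into, here's a minimal sketch; the function name and the "borderline" band between 3% and 5% are my framing, not part of the framework.

```python
# A back-of-the-envelope check of the landing-page bar described above:
# 5 signups per 100 visitors passes, under 3% means the offer isn't sharp yet.

def landing_page_verdict(visitors: int, signups: int) -> str:
    rate = signups / visitors
    if rate >= 0.05:   # the stated bar: at least 5 emails from 100 visitors
        return f"pass ({rate:.0%}): move on to the concierge test"
    if rate < 0.03:    # below 3%: sharpen the offer before buying more traffic
        return f"fail ({rate:.0%}): rewrite the offer, not the page"
    return f"borderline ({rate:.0%}): iterate the copy and re-run"

print(landing_page_verdict(visitors=100, signups=5))  # pass (5%)
print(landing_page_verdict(visitors=120, signups=2))  # fail (2%)
```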
3. The concierge test (cost: €0, time: 3–5 days)
Pick five people from the leads you collected. Offer to do the thing manually, by hand, for them, this week. Not "here's the prototype". "Hi, I'll personally do X for you. Free this round, but I'll ask you for honest feedback after."
You'll discover whether the problem is real in twelve minutes. If three of the five say yes and follow through, the demand is real. If four say "looks cool, get back to me when it's automated", you've found a thing people want to read about, not a thing they want to buy. That's a different business — content, not software — and you should make that decision now.
4. The price-elasticity test (cost: €0, time: 1 week)
Of the people who said yes to the concierge offer, ask three to pay you. Quote a price that hurts a little. Do not discount. The bar: at least one pays, without negotiating the price down by more than 30%.
This is the test that filters out 80% of "cool ideas". If nobody will pay €60 for a thing you've already given them once, free, by hand, you don't have a business — you have a hobby that solves nobody's emergency. That's fine, but it's not what you're doing.
5. The falsifiable forecast (cost: €0, time: 2 days)
Write down, before you build anything, what you think week one of revenue will look like. Number of customers. Total revenue. Average transaction. Conversion rate from page visit to paid. Then run a real campaign — small ad budget, your own outreach, a Reddit post in a relevant subreddit, a cold-email sequence — and compare reality to your forecast.
If reality is within 50% of your forecast, you understand your market. Build. If reality comes in at a fifth of what you predicted or less, you don't, and your business plan is built on confidence the world doesn't share. Stop and figure out why before you commit engineering.
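One way to keep yourself honest is to write the forecast down as data before the campaign and diff it against reality afterwards. A rough sketch of that comparison, with made-up numbers; the field names and the 20% cut-off for "a fifth of forecast" are mine:

```python
# Forecast written down BEFORE the campaign; actuals filled in after.
# "Within 50%" and "a fifth or less" are the bars from the text; every number
# here is illustrative, not real campaign data.

forecast = {"customers": 10, "revenue_eur": 600, "visit_to_paid_rate": 0.020}
actual   = {"customers": 7,  "revenue_eur": 420, "visit_to_paid_rate": 0.014}

for metric, predicted in forecast.items():
    observed = actual[metric]
    ratio = observed / predicted
    if ratio >= 0.5:
        verdict = "within 50% of forecast: you understand this part of the market"
    elif ratio <= 0.2:
        verdict = "a fifth of forecast or less: stop and find out why"
    else:
        verdict = "well below forecast: dig in before committing engineering"
    print(f"{metric}: forecast {predicted}, actual {observed} -> {verdict}")
```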
The total budget
If you spend more than €500 across all five tests, you're using the wrong tools. The classic stack — a domain (€10), a Marcus-generated landing page (free, first project), a small Google or Reddit ad budget for traffic (€100–€200), a Notion or Airtable lead tracker (free), and an hour of Stripe Checkout (free if you don't take payments yet) — covers it.
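For what it's worth, that stack prices out well under the ceiling even at the top of each range; a throwaway tally using the figures from this section:

```python
# The validation stack priced at the top of each stated range. All figures
# come from the paragraph above; the EUR 500 ceiling is the rule of the framework.

stack_eur = {
    "domain": 10,
    "Marcus landing page (first project)": 0,
    "Google/Reddit ad budget": 200,
    "Notion/Airtable lead tracker": 0,
    "Stripe Checkout setup": 0,
}

total = sum(stack_eur.values())
print(f"total: EUR {total}, ceiling: EUR 500, headroom: EUR {500 - total}")
```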
The point of cheap is that you can run the same five tests on three competing ideas in the same month. By the end of that month, one of those ideas has clearly outperformed the others, and you have evidence — not opinion — that it's the one to commit to.
What changes when you have an AI builder
The reason this framework was hard to run in 2018 is that "build a landing page that looks legit" took two days of fighting with WordPress, and "wire Stripe to a checkout flow" took another day of fighting with documentation. Each test was cheap in money and expensive in attention.
An AI builder collapses both into the same hour you'd use to write the pitch. You describe the offer, the audience, and the price; the page exists, hosted, with the form wired and the analytics on. The variant of test 4 — "what if I price it at €120 instead of €60?" — is one prompt, not one engineering ticket.
The implication is that if you're not running this five-test framework on every promising idea before committing engineering, you're choosing to spend more money than you have to. The cost of running the framework approached zero in 2025; not running it is now an active decision.
What this framework is not
This is for product-market-fit validation of a B2B or B2C software idea where someone will give you money in week one. It's not for:
- Two-sided marketplaces, which need cold-start tactics that don't fit in five tests. (See: every Uber-clone post-mortem ever.)
- Hardware or capital-intensive deep tech, which need investor and supply-chain validation before anything else.
- Pure consumer apps with ad-revenue models, which depend on retention curves you can't measure in two weeks.
- Free products with a "we'll figure out monetisation later" model, which is a polite phrase for "we don't have a business yet".
For everything else — most B2B SaaS, most local-business software, most founder-first products — these five tests, in this order, in this budget, are the right amount of cheap before the engineering bet.
The ruthlessness rule
The single hardest part of this framework is honouring the result. Most founders, having run the tests, find a reason to ignore the bar they set themselves and build the thing anyway. They convince themselves that a 1% conversion rate "would scale with the right channel", that nobody pre-paying "is just because the page is rough", that the concierge offer "didn't really test the product".
That conversation is your real validation: not whether the metrics moved, but whether you can say "the data didn't support it" out loud and walk away. Founders who can do that go on to build the thing the data did support, twelve weeks later, and ship it. Founders who can't end up spending their savings on a thing the world had already told them it didn't want.