Why most validation advice wastes your time
The typical advice—"talk to users", "validate your idea"—sounds right but fails in practice because it sets no falsifiable standard. You end up interviewing ten people who say "yeah, I'd use that", then spending six months building something nobody buys. The problem isn't that you skipped validation. The problem is you ran validation theatre: activities that feel productive but generate no information you can act on.
Real validation has three properties. First, it tests behaviour, not opinion. Second, it costs you something material—money, time, or reputation—so the signal isn't free. Third, it has a clear bar: if X happens, build it; if Y happens, kill it. The five experiments below meet all three criteria. Run them in order. If any test fails its bar, stop. You just saved yourself from building something the market doesn't want.
Total budget: under €500. Total calendar time: two weeks if you move fast. The cheapest education you'll ever get.
Test one: the 30-second pitch test
Find twenty people who match your target customer profile. Not friends. Not your mum. People who have the problem you think you're solving and currently pay for a solution or suffer without one. Each conversation follows the same script: "I'm testing an idea. Can I describe it in thirty seconds and you tell me whether you'd pay for it?"
Then describe the outcome, not the features. "It cuts your month-end close from six days to two days" beats "It's an AI-powered reconciliation dashboard." Watch their face. If they lean forward and ask "how?", you have something. If they nod politely and wait for you to finish, you don't.
The bar
At least twelve out of twenty must do one of these within sixty seconds of hearing your pitch:
- Ask for early access or a way to stay updated
- Offer to introduce you to someone who needs it more
- Start describing their current painful workaround unprompted
Polite interest doesn't count. "That sounds useful" doesn't count. If fewer than twelve people show urgent interest, your framing is wrong or the pain isn't severe enough. Reframe or kill the idea here. Cost: €0 and forty hours of your time finding and talking to twenty qualified strangers.
Test two: the landing page test
You passed test one. Now build a one-page site that describes the product as if it exists. Headline that names the outcome. Three bullets explaining how it works. Pricing table with two tiers. A "Start your trial" button that leads to an email capture form saying "We're launching in four weeks—join the waitlist."
Use Marcus to build this page in under an hour. Domain, hosting, and a week of Google Ads traffic will run you €150. Drive 500 clicks from a tightly targeted search or social campaign. Track two numbers: email capture rate and time-on-page for people who scrolled past the fold.
The bar
At least 8% of visitors must give you their email. If time-on-page is under twenty seconds, your value prop isn't landing. If it's over ninety seconds but conversion is low, your call-to-action is broken. At 8% conversion from 500 clicks, you have forty qualified leads who think the thing you described is worth trying. Between 5% and 8%? Rewrite the framing and rerun the campaign. Below 5%? The gap between what you're promising and what they believe is too wide. Stop.
Cost: €150 for ads, domain, and hosting. Time: six hours to build the page and write ad copy. Calendar time: one week to collect 500 clicks.
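The funnel arithmetic above is worth making explicit, because cost per lead is what you carry into later tests. A minimal sketch using the article's figures (500 clicks, €150 ad spend):

```python
# Landing-page funnel arithmetic. Clicks and spend are the
# article's assumed figures, not measurements.
clicks = 500
ad_spend_eur = 150

def funnel(conversion_rate):
    """Return (leads, cost per lead in EUR) at a given capture rate."""
    leads = clicks * conversion_rate
    cost_per_lead = ad_spend_eur / leads
    return leads, cost_per_lead

for rate in (0.05, 0.08, 0.12):
    leads, cpl = funnel(rate)
    print(f"{rate:.0%} conversion -> {leads:.0f} leads at {cpl:.2f} EUR each")
```

At the 8% bar you get forty leads at €3.75 each; at the 5% kill line you get twenty-five leads at €6.00 each, which is the first hint of what your customer acquisition cost might look like later.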
Test three: the concierge MVP
You have forty waitlist emails. Pick the ten warmest—people who replied to your confirmation email or asked follow-up questions. Offer them the product outcome manually. If you're building invoicing automation, you invoice for them by hand using a spreadsheet. If you're building reporting dashboards, you generate the reports yourself in Google Sheets and email PDFs weekly.
Charge them. Not a token €1. Charge 40% of what you plan to charge for the real product. If your target price is €50/month, charge €20/month for the concierge version. Set a four-week commitment. Tell them it's manual now, automated soon, and you need to learn what they actually need.
The bar
At least six out of ten must pay for the full four weeks and ask to continue into week five. If they churn after week one, the outcome isn't valuable enough. If they churn after week two, your delivery cadence is wrong. If they make it to week four but don't ask about next steps, they got the outcome once and don't need it ongoing. A SaaS needs recurring need, not one-time relief.
This test also shows you what's hard to deliver. If you spend twelve hours a week per customer doing the manual work, your automation has to collapse that time by 95% or your unit economics will never work. Cost: time only—perhaps eighty hours across four weeks. Revenue: €120 if six customers pay €20/month for the four-week period. This test can actually make money.
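To see why the concierge phase is for learning rather than earnings, run the numbers from this test (six customers, €20/month, roughly eighty hours of manual delivery):

```python
# Concierge-MVP economics using the article's figures.
customers = 6
price_eur_month = 20   # concierge price: 40% of the €50 target
total_hours = 80       # estimated manual effort across four weeks

revenue = customers * price_eur_month  # four weeks ~ one billing month
effective_rate = revenue / total_hours

print(f"Revenue over four weeks: {revenue} EUR")
print(f"Effective rate: {effective_rate:.2f} EUR/hour")
```

An effective rate of €1.50/hour is terrible pay and excellent tuition: the point of the test is the churn signal and the map of what automation must replace, not the revenue.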
Test four: the price-elasticity test
You now have six customers paying €20/month for a manual version of your product. They've asked to continue. Before you build anything, test whether they'll pay full price. Split your six paying customers into two groups of three. Email group A: "Starting next month, the price increases to €50/month as we move to the automated version." Email group B: "Starting next month, the price increases to €80/month as we move to the automated version."
You're testing whether your assumed price is anchored correctly. If all six accept their new price without negotiation, you're underpricing. If fewer than four of the six accept their new price, your outcome isn't worth what you thought. The goal is to find the ceiling where one person in three pushes back.
The bar
At least four out of six must accept one of the two price increases without asking for a discount. Ideally, at least one person in group B (the €80 group) accepts that price. If everyone in group B tries to negotiate down, €50 is probably your ceiling. If fewer than four accept any increase, the manual version created dependency but not enough value to justify SaaS pricing. You might have a services business, not a product business.
Cost: €0. Time: two hours to draft the emails and field responses. This test saves you from building a product you have to sell for €29/month when your CAC will be €180.
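The €29-versus-€180 warning is a payback-period problem. A quick sketch, assuming the article's €180 CAC and simplifying to 100% gross margin (real SaaS margins are high but not perfect):

```python
# CAC payback: months of revenue needed to recover acquisition cost.
# The 180 EUR CAC is the article's figure; 100% gross margin is a
# simplifying assumption for illustration.
def payback_months(cac_eur, price_eur_month):
    return cac_eur / price_eur_month

for price in (29, 50, 80):
    months = payback_months(180, price)
    print(f"{price} EUR/month -> {months:.1f} months to recover 180 EUR CAC")
```

At €29/month you wait over six months to recoup each customer's acquisition cost; at €80 you recoup in just over two. The price test exists to find out which world you're in before you build.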
Test five: the falsifiable forecast
You have four to six customers paying €50–€80/month for a manual version of your product. Now write down the build plan and the growth forecast required to make building worthwhile. Be specific. "I will spend three months building the MVP. I will spend €2,000 on my own salary opportunity cost, €1,500 on contractors for design and copywriting, and €500 on tools. By month six, I need twenty paying customers at an average of €60/month to break even on a twelve-month horizon."
Then define the leading indicator you'll track in month one after launch. Not revenue—it's too lagging. Pick something you control that predicts revenue. "I will send fifty cold emails per week to my ICP. I need a 15% reply rate and a 30% reply-to-demo conversion rate to hit twenty customers in six months." Write this down. Date it. If you miss the month-one leading indicator by more than 30%, you stop building and go back to test three with a different customer segment.
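The example forecast hides one number worth surfacing: the demo-to-close rate it implies. A sketch using the article's figures (50 emails/week, 15% reply rate, 30% reply-to-demo, twenty customers in roughly twenty-six weeks):

```python
# Falsifiable-forecast arithmetic from the article's example.
build_cost_eur = 2000 + 1500 + 500  # opportunity cost + contractors + tools

emails_per_week = 50
reply_rate = 0.15
reply_to_demo = 0.30
weeks = 26                          # roughly six months
target_customers = 20

replies = emails_per_week * weeks * reply_rate
demos = replies * reply_to_demo
# The demo-to-close rate needed is implied by the forecast, not stated:
needed_close_rate = target_customers / demos

print(f"Replies: {replies:.0f}, demos: {demos:.0f}")
print(f"Implied demo-to-close rate: {needed_close_rate:.0%}")
print(f"Build cost to recover: {build_cost_eur} EUR")
```

The forecast only holds if you close roughly one demo in three. If your realistic close rate is lower, the leading indicators have to rise to compensate, and that belongs in the written forecast before you build.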
The bar
Your forecast must include a kill metric: the specific signal that tells you to stop in month one, month three, and month six. "If I don't have five paying customers by month three, I shut it down." "If CAC exceeds €300 in month six, I shut it down." Most founders skip this step because writing down the kill condition makes it real. Write it anyway. The exercise of defining failure conditions often reveals that the unit economics don't work at any reasonable scale, and you just saved three months.
Cost: €0. Time: four hours to model the economics, define leading indicators, and write the falsifiable forecast. This is the last checkpoint before you write code.
What to build when you pass all five tests
If you made it through all five experiments, you have something rare: evidence that strangers will pay you recurring money for a specific outcome, at a price that covers your costs, delivered through a method you can systematize. Now you can build. But build the smallest automated version that delivers the outcome your concierge customers paid for. Not the full vision. Not the ten-feature roadmap. The single workflow that creates the result they paid €60/month to get.
This is where Marcus becomes useful. You need to go from working concierge MVP to working software MVP in under thirty days, or your paying customers will churn. Marcus lets you describe the workflow in plain language, generate the app structure, iterate on design and logic without hiring a dev team, and deploy to a real domain with authentication and payments in the same day. You stay in control of what gets built because you've already run the experiments that tell you what matters.
The Builder plan at €29/month/project is enough for a single-product MVP. If you validated a SaaS with multiple customer segments or adjacent products, the Studio plan at €290/month lets you manage up to ten projects with shared design systems and user auth. But start with one. Your five experiments told you exactly one thing people will pay for. Build that, charge for it, and run the falsifiable forecast for six months before expanding.
Where most people fail these tests
The most common failure mode is passing test one and skipping directly to building. You talk to twenty people, twelve of them sound interested, so you spend four months coding. Then you launch and nobody buys, because interest and purchase intent are different species. Test two and test three force you to cross the interest-to-money gap before you invest build time.
The second most common failure is passing test three but ignoring the unit economics it revealed. You delivered the concierge MVP and it took you fifteen hours per customer per month. You plan to automate it, but you haven't actually mapped which parts of those fifteen hours are automatable and which require human judgment. If eight of those fifteen hours are edge-case handling that software can't do, your automation only saves you 47% of the cost. The math doesn't work. Test three is supposed to show you this before you build.
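The automation-coverage check above is worth doing explicitly for your own numbers. A sketch using the article's example (fifteen hours per customer per month, eight of them requiring human judgment):

```python
# Automation coverage: what fraction of manual delivery can software remove?
hours_per_customer_month = 15
non_automatable_hours = 8  # edge-case handling that needs human judgment

automatable_hours = hours_per_customer_month - non_automatable_hours
savings = automatable_hours / hours_per_customer_month

print(f"Automation removes {savings:.0%} of delivery cost")
print(f"Residual manual work: {non_automatable_hours} hours/customer/month")
```

Seven of fifteen hours automated is the roughly 47% figure in the text, and the residual eight manual hours per customer per month is the number that actually caps how many customers you can serve.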
The third failure mode is running all five tests, passing them, then building the wrong product anyway. You validated a lightweight workflow tool, but you build an enterprise platform because it sounds more impressive. Your concierge customers paid for three features. You launch with eighteen. Scope discipline is the hardest part. Your experiments told you the minimum; build the minimum.
What happens after validation
Validation doesn't guarantee success. It guarantees you won't fail for the dumbest reason: building something nobody wants. You can still fail at distribution, positioning, customer success, or competition. But you've eliminated the failure mode that kills 70% of first-time SaaS founders. You know there's a customer, a price, and a repeatable outcome. Everything else is execution risk, and execution risk you can manage.
The six customers you closed in test three become your design partners for the automated product. They've already paid you for the outcome. When you launch the software version, you give them the first sixty days free as thanks for being concierge customers, then move them to the validated price point. Half will churn because they preferred the white-glove manual version. The half who stay are your loudest advocates, because they watched you build the product around their exact workflow.
Run these five tests in order. Don't skip steps. Don't soften the bars because you're emotionally attached to the idea. The bar is the point. You're not trying to confirm your idea is good. You're trying to discover whether the market agrees with you before you spend six months building in the dark. If the market says no at test one, you just saved yourself €20,000 and half a year. If it says yes at test five, you have the clearest possible mandate to build. Either outcome is a win.