The Three Scenarios Are Dead

Every investment memo and annual plan runs three scenarios: base, upside, downside. Three futures that feel rigorous but aren't. Stochastic simulation fixes this in principle. Now LLMs can parameterize it in practice. The full distribution of outcomes changes what you see and where you look.


Every private equity investment memo and every corporate annual plan contains the same artifact: three scenarios. Base, upside, downside. Three neat lines on a chart, diverging politely from each other like well-behaved children.

If you've built these models, whether for a deal committee or a board deck, you know how the process actually works: a team picks five or six variables (revenue growth, churn, gross margin, CAPEX, maybe working capital), assigns each one a "conservative" and "aggressive" estimate, then combines them into three scenarios that feel rigorous but aren't.

The base case is what the deal team believes. The downside is the base case minus 20%, adjusted so it still clears the return hurdle. The upside is there to make the deck look balanced.

Nobody actually thinks in three scenarios. The world doesn't either.

The Problem Isn't Laziness. It's Architecture.

Three-scenario modeling persists because it was the best we could do within the constraints of spreadsheet-based planning.

Consider what a proper uncertainty model requires. A SaaS company's revenue depends on new logo acquisition, expansion revenue, churn, pricing changes, competitive dynamics, and macroeconomic conditions. Each of those variables has its own probability distribution. Some are correlated. Churn increases when the economy contracts. New logo acquisition slows when competitors cut prices. Expansion revenue depends on product adoption curves that shift based on customer segment mix.

A three-scenario model picks one state for each variable and locks them together. "In the downside, growth slows AND churn increases AND margins compress." That's a specific future, not a probability distribution. It tells you what happens if everything goes wrong simultaneously. It doesn't tell you what's likely.

Monte Carlo simulation has existed for decades and solves this in principle. Assign each variable a distribution, define correlations, run thousands of iterations, and see the full range of outcomes. Financial engineers have used this since the 1990s.
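The mechanics described above fit in a few lines of Python with NumPy. Everything here is an illustrative assumption, not a real deal parameter: three toy variables, an assumed correlation matrix, and a deliberately simplified outcome formula.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # iterations

# Illustrative annual assumptions for a SaaS model (means, std devs)
mu = np.array([0.20, 0.12, 0.70])   # growth, churn, gross margin
sd = np.array([0.06, 0.03, 0.04])

# Assumed correlation structure: churn rises when growth slows
corr = np.array([
    [ 1.0, -0.5,  0.2],
    [-0.5,  1.0, -0.1],
    [ 0.2, -0.1,  1.0],
])

# Draw correlated samples via the Cholesky factor of the correlation matrix
chol = np.linalg.cholesky(corr)
draws = mu + (rng.standard_normal((n, 3)) @ chol.T) * sd

# Toy outcome: five-year revenue multiple under compounded net growth
net_growth = draws[:, 0] - draws[:, 1]
revenue_multiple = (1 + net_growth) ** 5

print(np.percentile(revenue_multiple, [10, 50, 90]))
```

The simulation itself is trivial. The hard part, as the next paragraph argues, is filling in the means, spreads, and that correlation matrix for a realistic number of variables.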

But in practice, the approach was limited by parameterization. Someone still had to specify every distribution, every correlation coefficient, every conditional relationship. For a simple five-variable model, that's manageable. For a realistic business model with fifty interacting variables across multiple time periods, you need a quantitative analyst spending weeks just to set up the simulation before you run it.

That's why PE firms, corporate strategy teams, and even sophisticated operators defaulted back to three scenarios. Not because they believed in it. Because the alternative was too expensive and too slow.

What Changed

Two things happened.

First, compute got cheap enough that running 10,000 Monte Carlo iterations on a complex model takes seconds, not hours. This has been true for a few years, and by itself it wasn't enough. The bottleneck was never compute. It was specification.

Second, large language models got good at the part humans are bad at: inferring the structural relationships between variables in a business model.

Here's what that means in practice. Ask an experienced SaaS operator what happens to net revenue retention when the economy contracts. They'll give you an answer based on intuition and pattern matching. "It drops, probably 5 to 15 points, depends on the customer base." That's useful but imprecise, and it covers one relationship out of hundreds.

An LLM trained on thousands of SaaS companies' financial data can infer those relationships systematically. It can generate correlation matrices. It can identify conditional dependencies ("churn increases when NRR drops below 105%, but the relationship is nonlinear"). It can propose distributional shapes for variables based on the specific business context rather than a generic normal distribution.

The LLM doesn't replace the stochastic simulation. It parameterizes it. It does the work that previously required a quant team spending weeks, and it does it in minutes.
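Concretely, "parameterize" can mean the LLM emits a structured spec — distributions plus conditional rules — that a simulator then consumes. A minimal sketch, with every name and number a hypothetical stand-in for what a model might propose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spec an LLM might emit: distributions plus a conditional
# rule, expressed as data the simulator consumes (all numbers assumed)
spec = {
    "nrr":   {"dist": "normal", "mean": 1.10, "sd": 0.05},
    "churn": {"dist": "lognormal", "mean_log": -2.5, "sd_log": 0.3},
    # "churn increases nonlinearly when NRR drops below 1.05"
    "penalty": {"threshold": 1.05, "scale": 25.0},
}

def sample(n: int):
    nrr = rng.normal(spec["nrr"]["mean"], spec["nrr"]["sd"], n)
    churn = rng.lognormal(spec["churn"]["mean_log"],
                          spec["churn"]["sd_log"], n)
    # Conditional dependency: quadratic churn penalty below the threshold
    gap = np.maximum(spec["penalty"]["threshold"] - nrr, 0.0)
    return nrr, churn * (1.0 + spec["penalty"]["scale"] * gap ** 2)

nrr, churn = sample(10_000)
```

The point of the data-driven spec is that the human review step becomes tractable: you argue about a threshold and a scale, not about thousands of sampled paths.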

What This Looks Like in Practice

Take a PE firm evaluating a $200M revenue vertical SaaS company. The traditional approach: build a five-year DCF, stress-test three scenarios, sanity-check the downside against the return threshold.

The new approach: build the same financial model, but instead of assigning three point estimates, feed the company's historical data and sector context into an LLM to generate probabilistic assumptions for every material variable. Revenue growth by customer cohort. Gross margin evolution as the company scales. Sales efficiency by channel. Churn behavior under different macro conditions. R&D intensity required to maintain competitive position.

Each variable gets a distribution. The distributions have inferred correlations. You run 10,000 iterations.

What comes back isn't three lines on a chart. It's a probability-weighted outcome distribution. You can see that there's a 72% chance the investment returns above 2x. An 18% chance it returns above 3x. A 6% chance you lose money. And you can see which variables drive the variance. Maybe 40% of outcome variance comes from net revenue retention. Maybe 25% comes from sales efficiency. The base case revenue growth assumption that the deal team spent three weeks debating? It might account for 10% of outcome variance, meaning the entire negotiation about whether growth is 18% or 22% was time spent on the wrong question.
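Numbers like these come straight out of the simulated outcome array. A sketch with invented drivers and an illustrative affine mapping to MOIC (the coefficients and the variance-share formula assume independent inputs; a real model would use the correlated draws):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Stand-in sampled drivers (in practice these come from the full model)
nrr       = rng.normal(1.10, 0.08, n)   # net revenue retention
sales_eff = rng.normal(0.80, 0.20, n)   # sales efficiency proxy
growth    = rng.normal(0.20, 0.04, n)   # new-logo growth

# Illustrative affine map from drivers to multiple on invested capital
coefs = {"NRR": (6.0, nrr - 1.0), "sales eff.": (1.5, sales_eff),
         "growth": (5.0, growth)}
moic = -0.6 + sum(c * x for c, x in coefs.values())

print(f"P(MOIC > 2x) = {(moic > 2).mean():.0%}")
print(f"P(MOIC > 3x) = {(moic > 3).mean():.0%}")
print(f"P(loss)      = {(moic < 1).mean():.0%}")

# With independent inputs, a driver's variance share is c^2 * var(x) / var(moic)
for name, (c, x) in coefs.items():
    print(f"{name}: {c**2 * x.var() / moic.var():.0%} of outcome variance")
```

The variance-attribution loop is the part three-scenario analysis cannot give you: it ranks the assumptions by how much they actually move the outcome.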

That's the real insight. Three-scenario analysis doesn't just oversimplify outcomes. It systematically misdirects attention. Teams debate the variables they can see (headline growth, blended margins) while ignoring the hidden correlations and conditional relationships that actually determine returns.

The Objections

We hear three.

"Garbage in, garbage out." True. If the distributional assumptions are wrong, the output is wrong. But this objection applies equally to three-scenario analysis, where the assumptions are also wrong and you don't even get a probability distribution to show how wrong. At least with a stochastic model, you can sensitivity-test every assumption independently and see which ones matter.

"LLMs hallucinate." Also true, and a real concern. The answer is calibration. You don't hand an LLM your financial model and ask it to predict the future. You use it to generate plausible distributional parameters, then validate those parameters against historical data, comparable companies, and expert judgment. The LLM accelerates the parameterization process. It doesn't replace diligence.
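One way that validation step can look in code: a cheap gate that checks proposed parameters against history before they enter the model. The distribution, the history, and the 90% threshold below are all invented for illustration.

```python
import numpy as np

# Hypothetical LLM-proposed churn distribution and (invented) history
proposed = {"mean": 0.08, "sd": 0.02}
historical = np.array([0.06, 0.07, 0.09, 0.08, 0.11, 0.07, 0.10, 0.09])

# Simple sanity check: does history fall inside the proposed 2-sigma band?
lo = proposed["mean"] - 2 * proposed["sd"]
hi = proposed["mean"] + 2 * proposed["sd"]
coverage = np.mean((historical >= lo) & (historical <= hi))

# Flag for human review instead of auto-accepting the parameters
needs_review = coverage < 0.9
print(f"history inside proposed band: {coverage:.0%}, review: {needs_review}")
```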

"Our LPs won't understand it." This one is actually the biggest barrier. Investment committees are trained to evaluate three-scenario presentations. A probability distribution requires a different conversation. But the information content is so much higher that the adjustment is worth it. And frankly, any LP who prefers three fictional scenarios to a genuine probability-weighted analysis is optimizing for comfort over accuracy.

The Transition

This isn't theoretical. We've been building and deploying probabilistic financial models that use this approach. The reaction is always the same: once you see the full distribution, you can't go back to three scenarios. The information loss is too obvious.

The transition will happen in stages. Early adopters in PE and growth equity are already moving. Corporate strategy teams at larger companies will follow within a year or two. The three-scenario model will persist in board presentations and investor decks for longer, because formats have inertia. But the actual analytical work behind those presentations will increasingly be probabilistic.

Five years from now, presenting a three-scenario financial model will feel like presenting a hand-drawn chart. Not wrong, exactly. Just obviously incomplete.

The Shift in Questions

The deepest change isn't about the math. It's about the questions you ask.

Three-scenario analysis answers the question: "What happens if things go well, badly, or as expected?"

Probabilistic analysis answers a different question: "What is the probability-weighted distribution of outcomes, and what drives the variance?"

The first question is about storytelling. The second is about truth. They lead to different decisions, different risk management strategies, and different allocations of attention. The teams and firms that make this shift will have a genuine analytical edge. The ones that don't will keep debating whether the base case is 18% or 22% growth.

The answer, of course, is that it's both. And neither. And a hundred other numbers, each with its own probability.

That's not a weakness of the model. That's reality.