Two mid-market CEOs sit across from us in the same month. Both run $18M services companies. Both are sharp, ambitious, and working harder than anyone on their team. Both have capital to invest in growth. And over the next twelve months, one of them will grow 45% while the other grows 6%.
The difference won't be effort. They both work 60-hour weeks. It won't be intelligence. They're both smart enough to run companies that most people couldn't build. It won't be capital. They both have budget for meaningful investment.
The difference will be decisions. Specifically, the infrastructure through which they make decisions.
The first CEO evaluates opportunities by gut feel, filtered through whatever's top of mind. A compelling sales pitch from a vendor. An article about AI that sounded relevant. A competitor's new marketing campaign that sparked anxiety. Each stimulus produces a reaction: a new initiative, a new vendor, a new project. By quarter two, the company is running eleven initiatives simultaneously, none of them connected, most of them under-resourced, and the team is exhausted from context-switching between priorities that seem to change monthly.
The second CEO evaluates every opportunity through a structured framework — a set of five questions that filter for compound impact. Most ideas that sound exciting fail at least one of the five tests. The ones that pass all five are resourced fully, connected to existing infrastructure, and measured against 90-day outcomes. By quarter two, the company has executed three initiatives, all of them building on each other, and the team has energy because priorities are clear and stable.
Same market. Same starting revenue. Same intelligence. Different decision-making infrastructure. Radically different outcomes.
This is the playbook we use with every Boost Consulting client. It isn't proprietary genius. It's structured thinking applied consistently. And it's the single highest-leverage intervention we deliver — because every other improvement (sales infrastructure, marketing, automation, integration) depends on the quality of the decisions that precede it.
Why Decision-Making Infrastructure Matters More Than Any Single Decision
Most business advice focuses on specific decisions: which CRM to buy, which marketing channel to invest in, how to structure your sales comp plan. These decisions matter. But they're second-order effects of a first-order question: how does your leadership team decide what to do?
In the mid-market, the answer is usually informal. The CEO makes most strategic decisions. Input comes from whoever is loudest, most persistent, or most recently in the CEO's field of vision. Decisions are made in hallway conversations, over lunch, or during the fifteen minutes between meetings when someone says "we should really look into that." Priorities are set annually and revised constantly — not through structured adaptation but through the gravitational pull of whatever feels most urgent that week.
This isn't a character flaw. It's a stage-of-growth reality. At $3M, informal decision-making works because the CEO has direct visibility into every aspect of the business. By the time the company reaches $10M–$50M, the CEO's visibility has narrowed while the complexity of decisions has multiplied. They're making higher-stakes decisions with lower-quality information, more competing demands, and less time to think.
The result is decision fatigue masquerading as strategic agility. The company appears dynamic — always launching something new, always responding to the market, always busy. But when you look at the actual outcomes, the busyness isn't compounding. Initiatives are started and abandoned. Investments produce isolated results that don't connect to each other. The team is working hard but the growth curve is flat, because the decisions driving the work aren't designed to compound.
Decision-making infrastructure changes this. Not by making decisions for the CEO, but by providing a consistent framework that filters out the noise, focuses resources on high-leverage opportunities, and ensures that every major initiative strengthens the overall system rather than adding a new standalone project to manage.
Here are the five tests.
Test 1: The Leverage Test
The question: "How many infrastructure layers does this decision impact?"
Before committing resources to any initiative, score it on a simple matrix: effort required versus the number of layers in your growth infrastructure it affects. The 5-Layer Growth Infrastructure Model (as we detailed in our framework piece) provides the reference: Strategy Architecture, Revenue Engine, Growth Amplification, Operational Intelligence, and Compound Infrastructure.
A decision that requires significant effort but only impacts one layer is low-leverage. Example: spending $40,000 on a website redesign that improves brand aesthetics but doesn't connect to lead capture, CRM, or sales process. The website looks better. Nothing else changes.
A decision that requires comparable effort but impacts three or four layers simultaneously is high-leverage. Example: deploying AI lead response that captures leads (Layer 3), qualifies and routes them to sales (Layer 2), automates the initial engagement (Layer 4), and feeds data back into strategic targeting (Layer 1). Same investment of time and capital. Dramatically broader impact.
The Leverage Test doesn't mean you never do single-layer improvements. Sometimes a targeted fix is exactly what's needed. But the test forces you to see the full picture before committing. When two opportunities compete for the same resources, the one that touches more layers wins — because its impact will compound through the interactions between those layers.
In practice, we use a simple scoring grid with clients. Every proposed initiative gets a 1–5 score for each layer it impacts (1 = no impact, 5 = transformative impact). The total score creates a leverage rating. Initiatives that score below 10 across five layers aren't rejected automatically, but they require a stronger justification than initiatives that score above 15. The framework doesn't make the decision. It makes the tradeoff visible.
A real-world example: a $22M construction services firm was debating between two investments — new fleet management software ($35,000) and a sales infrastructure overhaul ($40,000). The fleet software scored well on Layer 4 (Operational Intelligence) but had zero impact on Layers 1–3 and minimal impact on Layer 5. Total leverage score: 8. The sales overhaul scored across Layers 1 through 5: it required strategy work (ICP definition), rebuilt the revenue engine, improved marketing targeting through sales data, automated lead response and follow-up, and connected sales data to the broader operational dashboard. Total leverage score: 19. The decision became obvious — not through subjective debate but through structured evaluation.
The fleet software was a fine investment. The sales infrastructure was a compound investment. The Leverage Test made the distinction clear.
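For teams that want to run the grid in something more durable than a whiteboard photo, a minimal sketch of the tally in Python might look like the following. The layer names and the 10/15 thresholds come from the framework above; the per-layer scores for the two construction-firm initiatives are illustrative assumptions that simply reproduce the totals of 8 and 19, not the firm's actual scorecard.

# Minimal sketch of the leverage scoring grid described above.
# Per-layer scores below are illustrative, not the client's real data.

LAYERS = [
    "Strategy Architecture",
    "Revenue Engine",
    "Growth Amplification",
    "Operational Intelligence",
    "Compound Infrastructure",
]

def leverage_rating(scores):
    """Sum per-layer impact scores (1 = no impact, 5 = transformative)."""
    assert len(scores) == len(LAYERS) and all(1 <= s <= 5 for s in scores)
    return sum(scores)

def reading(total):
    if total > 15:
        return "high leverage"
    if total < 10:
        return "low leverage - needs stronger justification"
    return "middle band - weigh against competing initiatives"

initiatives = {
    "Fleet management software": [1, 1, 1, 4, 1],      # strong only on Layer 4
    "Sales infrastructure overhaul": [4, 5, 4, 3, 3],  # touches every layer
}

for name, scores in initiatives.items():
    total = leverage_rating(scores)
    print(f"{name}: {total}/25 ({reading(total)})")

The value isn't the code itself; it's that the tradeoff becomes a number the whole leadership team can see and argue about.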
Test 2: The 90-Day Horizon
The question: "Can we measure the impact of this decision within 90 days?"
This test addresses one of the most common failure modes in mid-market strategy: scope creep disguised as ambition.
When an operator says "we need to transform our sales culture" or "we need to become an AI-first company" or "we need to completely rethink our go-to-market," they're expressing a legitimate strategic aspiration. But aspirations that can't be measured in 90 days can't be managed in 90 days. And initiatives that can't be managed tend to drift, bloat, and eventually stall.
The 90-Day Horizon test isn't about lowering ambition. It's about breaking ambitious goals into measurable increments. "Transform our sales culture" isn't measurable in 90 days. "Increase CRM adoption from 35% to 80% and reduce average lead response time from 26 hours to under 5 minutes" is measurable in 90 days — and it's a concrete step toward the bigger cultural transformation.
"Become an AI-first company" isn't measurable in 90 days. "Deploy AI lead response, automate the five highest-volume manual workflows, and achieve 2,000 automated actions per month" is measurable in 90 days — and it's a concrete step toward the AI-first aspiration.
The discipline is in the decomposition. Every strategic ambition can be broken into 90-day sprints with specific OKRs. If it can't — if the smallest meaningful increment still takes longer than 90 days — the scope is too broad and needs to be restructured.
This test has a secondary benefit: it kills zombie initiatives. The projects that have been "in progress" for six months with no clear milestone, no defined endpoint, and no measurable outcome. Every leadership team has them. They consume resources, occupy mental bandwidth, and produce a chronic sense of unfinished business that erodes team energy. The 90-Day Horizon forces each initiative to either demonstrate measurable progress within a quarter or be stopped, restructured, or deprioritized.
Marta Novak, our Head of Client Strategy, applies this test ruthlessly. "Most mid-market companies have ten things in progress and three things finished," she observes. "The 90-Day Horizon inverts that. Three things in progress, all with clear endpoints this quarter. Everything else goes on a backlog, sequenced by leverage score. The team stops feeling overwhelmed because the list of current priorities is short, specific, and achievable. And the velocity of actual completion goes up dramatically — because resources are concentrated instead of scattered."
Test 3: The Compound Question
The question: "Will this decision produce an asset that keeps working after the effort stops?"
This is the most important test in the playbook, and the one that most fundamentally changes how operators allocate resources.
Every business activity falls into one of two categories: maintenance or growth. Maintenance activities keep the current operation running. Growth activities create assets that produce value beyond the initial effort. The Compound Question separates the two.
Hiring a salesperson is maintenance. Building a sales system that makes every salesperson more effective is growth. Running a marketing campaign is maintenance (it stops producing when you stop spending). Building a content library that generates organic traffic for years is growth. Sending manual follow-up emails is maintenance. Building an automated nurture sequence that runs continuously is growth.
Maintenance isn't bad. Every business requires it. But companies that allocate 80% of their discretionary resources to maintenance and 20% to growth produce linear results at best. Companies that invert that ratio — or even reach 50/50 — produce compound results, because each growth investment creates an asset that continues generating value while the team moves on to build the next one.
The Compound Question forces this distinction into every resource allocation decision. When a proposal comes before the leadership team, the first question isn't "is this a good idea?" It's "will this produce a lasting asset, or will the impact end when the effort ends?"
A $20,000 trade show booth is a maintenance expense. When the event is over, the impact is over. A $20,000 investment in a sales playbook and onboarding system is a compound asset. It produces value with every new hire, every new deal, and every rep who uses it — for years, potentially, with minimal ongoing cost.
This doesn't mean you never spend on maintenance. Sometimes you need the trade show for relationship reasons or market presence. But the Compound Question ensures you know what you're buying. You're buying presence, not infrastructure. And you're making that choice consciously, not by default.
The compound math over time is striking. A company that creates four lasting assets per quarter — an automated workflow, a documented process, a content library, a reporting dashboard — accumulates sixteen compound assets in a year. Each one continues producing value. By year two, the company has thirty-two assets working simultaneously, most of which required zero ongoing effort after the initial build. The company that spent the same budget on sixteen maintenance activities per year has exactly what it had before: the same operation, maintained at the same level, with no accumulated infrastructure.
Same budget. Radically different trajectory. The only difference is the question that preceded the allocation.
Test 4: The Integration Filter
The question: "Does this decision connect to or strengthen existing infrastructure, or does it create a new standalone initiative?"
This test addresses the sprawl problem — the tendency for growing companies to accumulate disconnected projects, tools, and initiatives that each solve one problem but collectively create a coordination burden that outweighs their individual value.
Connected decisions strengthen the existing system. They plug into the infrastructure that's already in place, feed data to and from other components, and create interactions that amplify the value of what already exists.
Standalone decisions create new islands. They solve one problem in isolation, require their own management overhead, and don't interact with anything else.
Example: a company with an existing CRM and sales infrastructure decides it needs better marketing attribution. A standalone approach would be to purchase a marketing analytics tool, configure it separately, and have the marketing team run it alongside the CRM. The tool works on its own. But the data doesn't flow between systems. Marketing sees marketing metrics. Sales sees sales metrics. Nobody sees the connected picture.
An integrated approach would be to build attribution tracking into the existing CRM, so that lead source data flows through the entire pipeline from first touch to closed revenue. No new tool to manage. No data reconciliation needed. Marketing and sales see the same numbers because the numbers come from the same system.
The integrated approach is almost always harder to implement in the short term. It requires understanding the existing infrastructure, designing the connection points, and ensuring data flows correctly. The standalone approach is faster — buy the tool, set it up, start using it.
But over 12 months, the integrated approach produces dramatically more value. The data connections enable optimization that the standalone approach can never achieve. Marketing can see which campaigns produce revenue, not just leads. Sales can see which lead sources produce the best close rates. Strategy can see the full funnel from spend to revenue. Every decision made on the basis of this connected data is better than the same decision made on fragmented data.
The Integration Filter doesn't say "never do standalone." Some initiatives genuinely need to be standalone — a one-time research project, a market test, a temporary campaign. But the filter forces the question: is this standalone by necessity or by convenience? If it could be connected, and the connection would amplify value, the default should always be integration.
Test 5: The Succession Test
The question: "Can this decision survive the departure of the person championing it?"
This test is uncomfortable because it asks operators to confront a reality they prefer not to think about: everyone is temporary. The CEO. The VP of Sales. The operations manager who holds everything together. The engineer who built the system everyone depends on.
If a strategic initiative depends on one person's presence, attention, or enthusiasm to survive, it's not infrastructure. It's a dependency. And dependencies are liabilities that masquerade as assets until the person leaves, burns out, or simply shifts their attention to something else.
The Succession Test evaluates every major decision against this question. If we build this, and the person who championed it leaves in six months, does it keep working? Does the system continue, or does it collapse?
Infrastructure that passes the Succession Test has specific characteristics: it's documented, so anyone can understand what it does and how it works. It's systematic, so it runs on processes and tools rather than individual expertise. It's measurable, so its performance can be evaluated by anyone with access to the dashboard. And it's transferable, so responsibility can shift from one person to another without a loss of institutional knowledge.
This connects directly to the principles we discussed in our piece on building sales infrastructure that survives your best rep quitting. The Succession Test applies the same logic to every strategic initiative in the company, not just sales.
A CEO who champions a new strategic planning process but runs it personally, stores the documents on their laptop, and doesn't train anyone else to facilitate it has created a dependency, not infrastructure. When they're pulled into a crisis for three weeks, the strategic cadence stops. When they eventually hand off the CEO role, the process evaporates.
The same CEO who champions the process but builds it in a shared system, documents the facilitation steps, trains two other leaders to run it, and establishes a cadence that doesn't depend on any single person has created infrastructure. It survives their absence because it was designed to.
The Succession Test is the hardest of the five for most leaders to internalize, because it asks them to design for their own replaceability. The natural instinct is to be indispensable. The operator's discipline is to be dispensable — to build systems that are more durable than any individual, including themselves.
Using the Playbook in Your Next Leadership Meeting
You don't need a consulting engagement to start applying these five tests. Here's how to implement the framework in your next leadership meeting.
Start by listing every active initiative and every proposed initiative on a whiteboard. Include everything: the CRM project that's been running for four months, the marketing campaign under consideration, the new hire that's been discussed, the automation idea someone mentioned last week.
Then score each one against the five tests. The Leverage Test: how many layers does it impact? (1–5 score per layer, total out of 25.) The 90-Day Horizon: can we measure impact within a quarter? (Yes or no.) The Compound Question: does it create a lasting asset? (Yes or no.) The Integration Filter: does it connect to existing infrastructure? (Connected, partially connected, or standalone.) The Succession Test: would it survive the departure of its champion? (Yes, partially, or no.)
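If it helps to capture the exercise digitally, here is a minimal sketch of the same scoring in Python. The initiative names, field names, and pass criteria are illustrative assumptions for this sketch, not a prescribed Boost tool.

from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    leverage: int          # Leverage Test: total per-layer score out of 25
    measurable_90d: bool   # 90-Day Horizon: measurable within a quarter?
    lasting_asset: bool    # Compound Question: creates a lasting asset?
    integration: str       # Integration Filter: "connected", "partial", or "standalone"
    succession: str        # Succession Test: "yes", "partial", or "no"

def tests_passed(i: Initiative) -> int:
    """Count how many of the five tests the initiative clears outright."""
    return sum([
        i.leverage > 15,
        i.measurable_90d,
        i.lasting_asset,
        i.integration == "connected",
        i.succession == "yes",
    ])

# Hypothetical entries from the whiteboard list.
whiteboard = [
    Initiative("CRM project (month four)", 17, True, True, "connected", "partial"),
    Initiative("Trade show booth", 6, True, False, "standalone", "no"),
    Initiative("Automated nurture sequence", 14, True, True, "connected", "yes"),
]

for i in sorted(whiteboard, key=tests_passed, reverse=True):
    print(f"{i.name}: {tests_passed(i)}/5 tests passed")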
The scoring will produce a natural ranking. Some initiatives will score high across all five tests — these are your compound decisions, the ones that deserve full resourcing. Some will score high on one or two tests but low on others — these are good ideas that need restructuring before they deserve resources. Some will score low across the board — these are the initiatives that feel productive but aren't building anything lasting. They're candidates for deprioritization or elimination.
The first time you run this exercise, the most valuable outcome won't be the scores themselves. It will be the conversation they produce. When the leadership team sees that seven of their ten active initiatives fail the Succession Test, or that only two of eight proposed projects pass the Integration Filter, the strategic discussion shifts from "should we do this?" to "how should we think about what we do?"
That shift — from decision-by-decision evaluation to systematic decision-making — is the transition from good operator to great operator. It's the infrastructure behind the infrastructure.
Decisions That Compound
The five tests in this playbook aren't magic. They're structured thinking, applied consistently, to the most consequential activity any operator engages in: deciding where to invest time, capital, and attention.
Marta Novak's reflection on how the playbook evolved captures the core idea: "At McKinsey, the frameworks were designed to produce elegant analysis. They were diagnostic tools. At Boost, we needed something different — frameworks designed to produce better action. Not more analysis, better decisions. The five tests evolved from hundreds of client engagements where we watched the same pattern: smart operators making smart individual decisions that didn't add up to compound progress. The missing ingredient was never intelligence. It was a consistent filter for asking 'will this decision make everything else we've already built more valuable?'"
That's the compound question behind all five tests. Every decision either strengthens the system or fragments it. Every resource allocation either builds on existing infrastructure or creates a new island to manage. Every initiative either creates a lasting asset or produces a temporary result that evaporates when the effort stops.
The operators who build the most durable, fastest-growing companies aren't making more decisions. They're making fewer decisions, better — filtered through a framework that selects for compound impact over isolated improvement.
The playbook is on the table. The five tests are simple enough to memorize and powerful enough to change how your leadership team allocates every dollar and every hour. The only question is whether you'll apply them consistently — starting with the next decision on your desk.
About Boost
Boost is the growth infrastructure company for ambitious mid-market businesses. We integrate AI-powered sales, marketing, automation, and strategic consulting into one compounding ecosystem. Founded by operators. Powered by AI.
For more information, visit useboost.net.