WhyUser · Annual Business Case

The ROI Workbook

Primer

What this calculator models

WhyUser pays back on three streams that don't depend on growth — operational efficiency, budget protected, and GTM velocity. Performance lift is upside.

Cost reduction
Operational efficiency
Hours and agency fees replaced by simulation runs — review cycles, content QA, research labor.
CFO read · Hard cost reduction.
Waste prevention
Budget protected
Ad spend at risk on bad creative or landing pages, email list damage, content QA waste.
CFO read · Protects existing spend. Insurance against known failure modes.
Throughput
GTM velocity gain
Marketing FTE throughput improvement — 15–22% based on number of active features.
CFO read · Productivity multiplier on existing headcount.
Growth upside
Performance lift
New ARR from improved CVR, reply rates, click-through, MQL volume.
CFO read · From your pilot, not our forecast.

i. Scope of the business case

Which WhyUser features are in pilot scope, and at what plan tier. Active features determine which sections appear below. All plans support all features — plans differ by monthly credit allowance.

Active features in pilot
Page & Content Committee Simulation
Simulates the buying committee reading a landing page or whitepaper. Surfaces which stakeholder vetoes which element.
Labor origin · Content team / PMM
Ad Campaign Simulation
Simulates the ad-to-landing-page funnel. Quantifies $ of ad spend at risk from diagnosable failures.
Labor origin · Campaign / demand gen team
Email Campaign Simulation
Simulates inbox psychology across subject + body variants. Predicts open, click, reply-to-meeting rates.
Labor origin · Campaign / SDR team
Audience Discovery
Surfaces Hidden Champion titles and adjacent sub-verticals for TAM expansion, reverse-engineered from content.
Labor origin · PMM / GTM research
Candidate plan tier

ii. Shared context

Fields used across multiple streams. All values are yours.

Median annual contract value in your core market. Not list price — actual sold.
Count of content marketing / demand gen / SDR / ad ops / research FTEs. Used for the GTM Velocity calculation only — other streams use per-feature inputs.
B2B SaaS median ≈ 3%. Prefilled — override if your CRM shows different. Only affects Performance Lift.
Fully loaded marketing FTE cost. Industry typical $150/hr. Confirm with finance.

iii. Attribution controls

The variables a CFO objects to most. Exposed so you set them.

Share of Performance Lift attributable to WhyUser (rest to sales, product, market). Default 40%.
Share of budget waste diagnosable / preventable by WhyUser. Default 45%.

iv. Feature inputs

Only the feature sections you enabled above appear here. Each feature captures its own streams; the aggregation panel below handles dedup.

Committee Simulation

Produces: Operational Efficiency · Budget Protected · Performance Lift
Landing pages, whitepapers, solution briefs per quarter.
If outsourced. $0 if all in-house.
Total annual content spend (agency + internal if tracked).
Total monthly sessions across organic, direct, referral, social, and paid traffic — all benefit from committee-tested messaging. From Google Analytics.
Session-to-conversion rate. "Conversion" = whatever your analytics tracks via conversion pixel: form fills, demo requests, trial signups, content downloads. Not MQLs. B2B SaaS median ≈ 2–4%.
Relative lift on baseline CVR from committee-tested messaging. 15% = B2B SaaS messaging-driven CVR benchmark (low end).
Upper-end relative lift. Override to match your pilot expectations.
Baseline CVR · Conservative → · Optimistic →

Ad Campaign Simulation

Produces: Operational Efficiency · Budget Protected · Performance Lift
LinkedIn, Google, Reddit — protected channels.
Campaign average CPC. LinkedIn B2B SaaS median $6–8.
Click-through rate on your ad creative — clicks ÷ impressions. LinkedIn B2B 0.4–0.6%, Google Search 3–5%, Display ~0.5–1%.
What your ads dashboard reports as "conversion rate" — form fills, demo requests, trial signups, content downloads tracked by your conversion pixel.
Distinct campaigns or ad variants launched per year. Drives Operational Efficiency calc (review-cycle hours saved).
Relative lift on creative click-through rate from pre-validated headlines and creative. More clicks per ad dollar.
Upper-end CTR lift. Override to match your pilot expectations.
Relative lift on landing-page conversion rate from pre-validated landing pages. More conversions per click.
Upper-end CVR lift. Override to match your pilot expectations. Total conversion lift compounds CTR × CVR.
Baseline CTR · Conservative → · Optimistic →
Baseline CVR · Conservative → · Optimistic →

Email Campaign Simulation

Produces: Operational Efficiency · Budget Protected · Performance Lift
Emails sent per month across campaigns.
Outreach/Salesloft/Apollo report CTR (clicks ÷ emails delivered). HubSpot/Marketo default to CTOR (clicks ÷ opens). The math differs by roughly 5× at typical open rates.
≈ — opens/mo
From your ESP (HubSpot, Marketo, Outreach, Apollo). Subject-line-driven.
≈ — clicks/mo
Value of selected click-metric type above. Body-copy-driven.
What you optimize for. Outbound = reply (Outreach, Salesloft, Apollo). Nurture = landing conversion (HubSpot, Marketo, Mailchimp).
≈ — replies/mo
Reply rate from your outbound dashboard. Directly reflects message resonance.
Unique campaigns run per year. Drives Operational Efficiency calc.
If outsourced. $0 if all in-house.
Lift from pre-validated subject lines.
Upper-end open-rate lift. Override to match your pilot.
Lift from pre-validated body copy and CTAs.
Upper-end click-rate lift.
Lift on reply rate from pre-validated CTAs and personalization.
Upper-end success-metric lift. Total compounds open × click × success.
Baseline open rate · Conservative → · Optimistic →
Baseline click rate · Conservative → · Optimistic →
Baseline reply rate · Conservative → · Optimistic →

Audience Discovery

Produces: TAM expansion · CPC reduction · Operational Efficiency
Team hours on ICP / vertical / persona research per quarter.
If outsourced. $0 if all in-house.
Industry data suggests 1–3 is typical in Year 1.
Year 1 deals per activated vertical. Typical 2–8.
Median, not average. Often 20–30% smaller than core ACV until PMF proven in new vertical.
Share of new-vertical deals attributable to Audience Discovery (rest to sales/product). Customer-set.

v. Aggregation & dedup

Per-feature value streams summed across active features. GTM Velocity Gain computed on your Marketing FTE, with diminishing-returns gain across active features. Dedup rules applied automatically — no double-counting of the same reclaimed hour or the same dollar of spend.

Feature · Type · Conservative · Optimistic
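If you want to audit the roll-up outside the calculator, the following is a minimal Python sketch of the aggregation logic described above, under stated assumptions: each feature's stream values for one scenario are already computed, and the names and data shapes are illustrative rather than WhyUser's implementation. The dedup rule it encodes is that GTM Velocity is computed once on total FTE cost, never summed per feature.

```python
GTM_GAIN_BY_FEATURE_COUNT = {1: 0.15, 2: 0.20}  # three or more active features -> 0.22


def gtm_velocity_gain(active_features: int) -> float:
    """Diminishing-returns throughput gain per the tiers described above."""
    if active_features <= 0:
        return 0.0
    return GTM_GAIN_BY_FEATURE_COUNT.get(active_features, 0.22)


def aggregate(feature_streams: dict[str, dict[str, float]],
              marketing_fte: float, loaded_hourly_rate: float) -> dict[str, float]:
    """Sum per-feature streams for one scenario, then add GTM Velocity once.

    feature_streams maps feature name -> {"operational_efficiency": ...,
    "budget_protected": ..., "performance_lift": ...}. GTM Velocity is not
    summed per feature: it is computed once on total FTE cost, so the same
    reclaimed hour is never counted twice.
    """
    totals = {"operational_efficiency": 0.0,
              "budget_protected": 0.0,
              "performance_lift": 0.0}
    for streams in feature_streams.values():
        for stream, value in streams.items():
            totals[stream] += value

    annual_fte_cost = marketing_fte * loaded_hourly_rate * 2080
    totals["gtm_velocity"] = annual_fte_cost * gtm_velocity_gain(len(feature_streams))
    totals["total"] = sum(totals.values())
    return totals
```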

Sensitivity

Scenario · Total annual value · ROI multiple · Payback (mo)

vi. Methodology & sources

Every benchmark cited. Every URL auditable.

Content operations
B2B marketing approval cycle 15–19 days baseline; reduction 50–70% with process optimization — Jam7 analysis of Series A/B B2B tech
Revision rounds with structured templates: 2.3 → 0.8 per asset (65% reduction) — Jam7 production data
Content review cycle time: webpage assembly 4 hrs → 10 min with agentic AI — AWS + Gradial case study
AI use in B2B content: 87% productivity improvement, 80% Operational Efficiency improvement — Content Marketing Institute 2026 B2B Report
LinkedIn & paid ads
Senior title CPC $6.40; junior $4.40; global avg $5.58; SaaS/tech median ~$7.85 — Autelo B2B benchmarks 2025
CPC varies 2–3× by audience specificity — Stackmatix LinkedIn Ads 2026
LinkedIn conversion rate: 2–5% good, 5–15% possible — Powered by Search
B2B customer journey: 211 days, 76 touches, 6.8 stakeholders — Dreamdata 2024 via Swydo
B2B SaaS expansion / adjacency
Expansion ARR as share of new ARR: 28.8% (2020) → 32.3% (2023); 40%+ at $50M+ ARR — ChartMogul via HubiFi
NRR benchmark: 85–135%, top performers 110%+ — ChartMogul via Velaris
B2B SaaS median growth rate 2025: 26% median, 50% top quartile — Pavilion 2025 Benchmarks
TAM attribution
TAM attribution is treated as a customer-set variable — WhyUser surfaces the opportunity; your sales and marketing teams activate it. You own the attribution number, set on the slider above.

vii. How each value stream is calculated

One paragraph per stream. Designed so you can point at any number and answer "where did this come from?" in plain English.

Operational Efficiency

What it measures: labor hours reclaimed when WhyUser pre-validates content, ad variants, email copy, and audience research — replacing expensive human review cycles with simulated-committee feedback.

Formula: For each active feature, we multiply the cost of one review cycle (stakeholders × hours × loaded hourly rate, plus agency fees) by the number of cycles per year. Then we subtract the simulation cost and apply a capture factor (50% conservative, 80% optimistic) to account for cycles that still require some human review.

Example — Committee Simulation: 5 stakeholders × 1.5 hrs × $150 × 3 rounds × 8 assets × 4 quarters = $108K of review labor per year. Conservative capture (50%) = $54K reclaimed; optimistic (80%) = $86K. Same shape applies to the other features with their feature-specific cycle counts.
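For readers who want to reproduce this in a script, here is a minimal Python sketch of the same arithmetic. Parameter names are illustrative; agency fees and simulation cost default to zero because the worked example above does not state figures for them.

```python
def operational_efficiency(stakeholders: float, hours_per_review: float,
                           loaded_rate: float, rounds_per_asset: float,
                           assets_per_quarter: float,
                           agency_fees_per_cycle: float = 0.0,
                           simulation_cost: float = 0.0,
                           capture: float = 0.5) -> float:
    """Annual review labor reclaimed for one feature: (cost of one review
    cycle x cycles per year, minus simulation cost) x capture factor
    (0.5 conservative, 0.8 optimistic)."""
    cost_per_cycle = stakeholders * hours_per_review * loaded_rate + agency_fees_per_cycle
    cycles_per_year = rounds_per_asset * assets_per_quarter * 4
    review_labor = cost_per_cycle * cycles_per_year
    return (review_labor - simulation_cost) * capture


# Committee Simulation example: 5 stakeholders x 1.5 hrs x $150 x 3 rounds
# x 8 assets x 4 quarters = $108,000 of review labor per year.
print(round(operational_efficiency(5, 1.5, 150, 3, 8)))               # 54000 conservative
print(round(operational_efficiency(5, 1.5, 150, 3, 8, capture=0.8)))  # 86400 optimistic
```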

Budget Protected

What it measures: budget dollars currently at risk on campaigns, content, and email sends that fail silently. WhyUser diagnoses the failure before launch, so you don't spend behind broken creative.

Formula: budget at risk × attributable share × capture rate. "Budget at risk" is the portion of spend visibly failing (for ads: spend on clicks that didn't convert, discounted by 0.7 to acknowledge that some non-converting traffic still drives brand and retargeting value; for committee: annual content production budget; for email: pipeline impact from list-health erosion). Attributable share is the slice WhyUser can realistically diagnose (default 45%, editable). Capture rate is how much of that you actually recover by acting on the diagnosis (50% conservative, 75% optimistic).

Example — Ad Campaign: $15K/mo ad spend × 97% non-converting (1 − 3% landing CVR) × 0.7 waste coefficient × 45% attributable × 50% capture × 12 = $27K/yr prevented waste. Committee waste uses your annual content production budget on the same multiplicative structure.
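The same chain as a Python sketch, with the waste coefficient, attributable share, and capture rate exposed as parameters. Names are illustrative; defaults mirror the values quoted above.

```python
def budget_protected_ads(monthly_ad_spend: float, landing_cvr: float,
                         waste_coefficient: float = 0.7,
                         attributable_share: float = 0.45,
                         capture: float = 0.5) -> float:
    """Annual prevented ad waste: spend behind non-converting clicks,
    discounted by the waste coefficient, x attributable share x capture."""
    monthly_at_risk = monthly_ad_spend * (1 - landing_cvr) * waste_coefficient
    return monthly_at_risk * attributable_share * capture * 12


# Ad Campaign example from the text:
print(round(budget_protected_ads(15_000, 0.03)))                # ≈ 27500 conservative
print(round(budget_protected_ads(15_000, 0.03, capture=0.75)))  # ≈ 41249 optimistic
```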

GTM Velocity Gain

What it measures: additional team throughput — the compounding effect of a team doing more campaigns / launches / experiments per quarter because fewer cycles are burned on failed iterations.

Formula: annual fully-loaded Marketing FTE cost × throughput gain. Throughput gain is 15% for one active WhyUser feature, 20% for two, 22% for three or more (diminishing returns, per v1.3 specification).

Example: 2 Marketing FTE × $150 × 2,080 hrs = $624K annual cost × 15% gain = ~$94K/yr. With more FTE or more active features, this scales.
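As a Python sketch (the same tier logic also appears in the aggregation sketch earlier; names are illustrative):

```python
def gtm_velocity_value(marketing_fte: float, loaded_hourly_rate: float,
                       active_features: int) -> float:
    """Annual throughput value: fully loaded FTE cost x tiered gain
    (15% for one active feature, 20% for two, 22% for three or more)."""
    if active_features <= 0:
        return 0.0
    annual_cost = marketing_fte * loaded_hourly_rate * 2080
    gain = {1: 0.15, 2: 0.20}.get(active_features, 0.22)
    return annual_cost * gain


# Example from the text: 2 FTE x $150/hr x 2,080 hrs = $624,000 annual cost;
# x 15% (one active feature) ≈ $93,600.
print(round(gtm_velocity_value(2, 150, active_features=1)))  # 93600
```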

Performance Lift

What it measures: additional revenue from messaging that converts better — more MQLs from your landing pages, more replies from outbound, more clicks on nurture, more conversions on paid ads.

Formula: baseline funnel throughput × relative lift × ACV × close rate × attribution. Baseline throughput is what your current dashboards report today (sessions × CVR for content; ad spend / CPC × landing CVR for ads; volume × open × click × reply-or-CVR for email). For ads, total relative lift compounds CTR × CVR end-to-end. For email, total lift compounds open × click × success-metric (reply or landing CVR). For other features, lift is your single editable conservative / optimistic input. Attribution (default 40%) credits WhyUser's share of the improvement — the rest goes to sales execution, product, and market conditions.

Example — Committee Simulation: 10K sessions/mo × 12 × 3% baseline CVR × 15% lift × $30K ACV × 3% close × 40% attribution = ~$194K conservative. Same chain applies to Ad and Email with their respective baseline funnels.
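A Python sketch of the Committee Simulation chain, plus a small helper showing how per-stage lifts compound for the Ad and Email funnels. Names are illustrative, and the 10% / 15% stage lifts in the compounding example are placeholders, not benchmarks.

```python
def performance_lift_content(monthly_sessions: float, baseline_cvr: float,
                             relative_lift: float, acv: float,
                             close_rate: float,
                             attribution: float = 0.40) -> float:
    """Annual new ARR from committee-tested content: incremental
    conversions x close rate x ACV, credited at the attribution share."""
    baseline_conversions = monthly_sessions * 12 * baseline_cvr
    incremental_conversions = baseline_conversions * relative_lift
    return incremental_conversions * close_rate * acv * attribution


# Committee Simulation example from the text:
print(round(performance_lift_content(10_000, 0.03, 0.15, 30_000, 0.03)))  # 194400


def compounded_lift(*stage_lifts: float) -> float:
    """Total relative lift when each funnel stage improves independently
    (ads: CTR x CVR; email: open x click x success metric)."""
    total = 1.0
    for lift in stage_lifts:
        total *= 1 + lift
    return total - 1.0


# e.g. a 10% CTR lift compounded with a 15% CVR lift (illustrative numbers):
print(round(compounded_lift(0.10, 0.15), 3))  # 0.265 -> 26.5% total lift
```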