Scope of the business case
Which WhyUser features are in pilot scope, and at what plan tier. Active features determine which sections appear below. All plans support all features — plans differ by monthly credit allowance.
Shared context
Fields used across multiple value streams. Every value is yours to edit.
Attribution controls
The variables a CFO will scrutinize most. They are exposed here so you set them yourself.
Feature inputs
Only the feature sections you enabled above appear here. Each feature captures its own streams; the aggregation panel below handles dedup.
Committee Simulation
Produces: Operational Efficiency · Budget Protected · Performance Lift
Ad Campaign Simulation
Produces: Operational Efficiency · Budget Protected · Performance Lift
Email Campaign Simulation
Produces: Operational Efficiency · Budget Protected · Performance Lift
Audience Discovery
Produces: TAM expansion · CPC reduction · Operational Efficiency
Aggregation & dedup
Per-feature value streams summed across active features. GTM Velocity Gain computed on your Marketing FTE, with diminishing-returns gain across active features. Dedup rules applied automatically — no double-counting of the same reclaimed hour or the same dollar of spend.
| Feature | Type | Conservative | Optimistic |
|---|---|---|---|
Sensitivity
| Scenario | Total annual value | ROI multiple | Payback (mo) |
|---|---|---|---|
Methodology & sources
Every benchmark cited. Every URL auditable.
How each value stream is calculated
One paragraph per stream. Designed so you can point at any number and answer "where did this come from?" in plain English.
Operational Efficiency
What it measures: labor hours reclaimed when WhyUser pre-validates content, ad variants, email copy, and audience research — replacing expensive human review cycles with simulated-committee feedback.
Formula: For each active feature, we multiply the cost of one review cycle (stakeholders × hours × loaded hourly rate, plus agency fees) by the number of cycles per year. Then we subtract the simulation cost and apply a capture factor (50% conservative, 80% optimistic) to account for cycles that still require some human review.
Example — Committee Simulation: 5 stakeholders × 1.5 hrs × $150/hr × 3 rounds × 8 assets × 4 quarters = $108K of review labor per year. Conservative capture (50%) = $54K reclaimed; optimistic (80%) = $86K. The same shape applies to the other features with their feature-specific cycle counts.
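The Operational Efficiency arithmetic can be sketched in a few lines of Python. Function and parameter names are illustrative assumptions, not WhyUser's actual API, and the simulation-cost input is a placeholder:

```python
def operational_efficiency(stakeholders, hours_per_round, loaded_rate,
                           rounds, assets, periods, agency_fees=0.0,
                           simulation_cost=0.0, capture=0.5):
    """Annual review labor reclaimed for one feature:
    (stakeholders x hours x rate + agency fees) per cycle,
    times cycles per year, minus simulation cost, times capture factor.
    """
    cycle_cost = stakeholders * hours_per_round * loaded_rate + agency_fees
    cycles_per_year = rounds * assets * periods
    gross = cycle_cost * cycles_per_year
    return (gross - simulation_cost) * capture

# Committee Simulation inputs: 5 stakeholders, 1.5 hrs, $150/hr,
# 3 rounds x 8 assets x 4 quarters of review cycles.
conservative = operational_efficiency(5, 1.5, 150, 3, 8, 4, capture=0.5)
optimistic = operational_efficiency(5, 1.5, 150, 3, 8, 4, capture=0.8)
```

Multiplying the stated inputs gives $108K of gross review labor per year, so the 50% and 80% captures land at $54K and $86.4K respectively.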
Budget Protected
What it measures: budget dollars currently at risk on campaigns, content, and email sends that fail silently. WhyUser diagnoses the failure before launch, so you don't spend behind broken creative.
Formula: budget at risk × attributable share × capture rate. "Budget at risk" is the portion of spend visibly failing (for ads: spend on clicks that didn't convert, discounted by 0.7 to acknowledge that some non-converting traffic still drives brand and retargeting value; for committee: annual content production budget; for email: pipeline impact from list-health erosion). Attributable share is the slice WhyUser can realistically diagnose (default 45%, editable). Capture rate is how much of that you actually recover by acting on the diagnosis (50% conservative, 75% optimistic).
Example — Ad Campaign: $15K/mo ad spend × 97% non-converting (1 − 3% landing CVR) × 0.7 waste coefficient × 45% attributable × 50% capture × 12 = $27K/yr prevented waste. Committee waste uses your annual content production budget on the same multiplicative structure.
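The Budget Protected chain can be sketched as a straight multiplication. This is a minimal sketch with illustrative names, not WhyUser's API; the waste coefficient and default shares come from the text above:

```python
def budget_protected(budget_at_risk, attributable=0.45, capture=0.5):
    """Annual prevented waste: budget at risk x attributable share x capture rate."""
    return budget_at_risk * attributable * capture

# Ad Campaign example: $15K/mo spend, 3% landing CVR, 0.7 waste coefficient.
monthly_at_risk = 15_000 * (1 - 0.03) * 0.7  # spend on non-converting clicks, discounted
annual_prevented = budget_protected(monthly_at_risk * 12)
```

With the defaults this comes out just under $27.5K/yr, reported as ~$27K in the worked example.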
GTM Velocity Gain
What it measures: additional team throughput — the compounding effect of a team doing more campaigns / launches / experiments per quarter because fewer cycles are burned on failed iterations.
Formula: annual fully-loaded Marketing FTE cost × throughput gain. Throughput gain is 15% for one active WhyUser feature, 20% for two, 22% for three or more (diminishing returns, per v1.3 specification).
Example: 2 Marketing FTE × $150/hr × 2,080 hrs = $624K annual cost × 15% gain ≈ $94K/yr. With more FTE or more active features, this scales accordingly.
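The tiered throughput gain can be sketched as a small lookup. Names are illustrative, not WhyUser's API; the tier percentages are the v1.3 values stated above:

```python
def gtm_velocity_gain(marketing_fte, loaded_hourly_rate, active_features,
                      hours_per_year=2080):
    """Annual fully-loaded FTE cost x throughput gain.

    Gain tiers (diminishing returns): 15% for one active feature,
    20% for two, 22% for three or more.
    """
    if active_features <= 0:
        return 0.0
    gain = {1: 0.15, 2: 0.20}.get(active_features, 0.22)
    return marketing_fte * loaded_hourly_rate * hours_per_year * gain

# 2 Marketing FTE at $150/hr with one active feature -> ~ $94K/yr
value = gtm_velocity_gain(2, 150, active_features=1)
```

Note the gain is a flat lookup, so the fourth and later features add no incremental velocity beyond the 22% cap.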
Performance Lift
What it measures: additional revenue from messaging that converts better — more MQLs from your landing pages, more replies from outbound, more clicks on nurture, more conversions on paid ads.
Formula: baseline funnel throughput × relative lift × ACV × close rate × attribution. Baseline throughput is what your current dashboards report today (sessions × CVR for content; ad spend / CPC × landing CVR for ads; volume × open × click × reply-or-CVR for email). For ads, total relative lift compounds CTR × CVR end-to-end. For email, total lift compounds open × click × success-metric (reply or landing CVR). For other features, lift is your single editable conservative / optimistic input. Attribution (default 40%) credits WhyUser's share of the improvement — the rest goes to sales execution, product, and market conditions.
Example — Committee Simulation: 10K sessions/mo × 12 × 3% baseline CVR × 15% lift × $30K ACV × 3% close × 40% attribution = ~$194K conservative. Same chain applies to Ad and Email with their respective baseline funnels.
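The Performance Lift chain, sketched under the same caveat (illustrative names, not WhyUser's API; per-feature lift compounding is collapsed into a single `lift` input here):

```python
def performance_lift(annual_throughput, baseline_rate, lift, acv,
                     close_rate, attribution=0.40):
    """Baseline funnel throughput x relative lift x ACV x close rate x attribution."""
    incremental_conversions = annual_throughput * baseline_rate * lift
    return incremental_conversions * acv * close_rate * attribution

# Committee example: 10K sessions/mo x 12 months, 3% baseline CVR,
# 15% lift, $30K ACV, 3% close, 40% attribution -> ~ $194K
value = performance_lift(10_000 * 12, 0.03, 0.15, 30_000, 0.03)
```

For Ads or Email, you would first multiply the stage-level lifts (CTR × CVR, or open × click × success metric) into the single `lift` argument before calling it.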