Benchmark Database
Reference ranges derived from independently sourced B2B research data. These figures reflect what observable project outcomes show across methodology types and markets, not what vendors report.
Methodology Note
All benchmark ranges below are derived from independently verified B2B research project data collected 2023–2025. Sample sizes per metric are noted. These figures represent observed distributions, not prescriptive targets. A project performing within range is not validated; one outside range warrants investigation, not automatic rejection.
Module 01: Panel Quality
Metrics describing the quality and representativeness of B2B respondent panels. Ranges reflect variation across online, telephone, and hybrid methodologies.
Completion rate by audience segment:

| Audience Segment | P25 | Median | P75 | Note |
|---|---|---|---|---|
| C-Suite / VP-level | 28% | 38% | 51% | High abandonment on long surveys (>12 min) |
| Mid-level Manager | 44% | 56% | 67% | Most consistent cross-industry |
| Technical / Specialist | 41% | 53% | 64% | Higher with domain-relevant screeners |
| SMB Owner / Decision-maker | 36% | 48% | 60% | Wide variance by industry vertical |
Definition: Completes ÷ Survey starts (post-screener). Excludes quota-closed terminations. Vendor-reported figures are not included in this dataset.
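The completion-rate definition above can be expressed as a small helper. This is an illustrative sketch, not part of any HYMBS tooling; it assumes "excludes quota-closed terminations" means quota-closed starts are removed from the denominator.

```python
def completion_rate(completes: int, survey_starts: int, quota_closed: int = 0) -> float:
    """Completes / post-screener survey starts, excluding quota-closed terminations.

    Assumption: quota-closed starts are subtracted from the denominator.
    """
    effective_starts = survey_starts - quota_closed
    if effective_starts <= 0:
        raise ValueError("no effective survey starts after excluding quota closes")
    return completes / effective_starts

# Example: 280 completes from 520 post-screener starts, 20 quota-closed
rate = completion_rate(280, 520, quota_closed=20)  # 280 / 500 = 0.56
```

A result of 0.56 would sit at the median for mid-level managers in the table above.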
Incidence rate (IR) by screener complexity:

| Screener Complexity | P25 | Median | P75 | Note |
|---|---|---|---|---|
| Single qualifier (industry only) | 18% | 28% | 41% | Panel-dependent |
| Two qualifiers (industry + role) | 9% | 16% | 24% | Vendor IR claims frequently overstated by 30–60% |
| Three+ qualifiers (niche audience) | 3% | 7% | 13% | High CPI risk zone; always request IR guarantee |
Definition: Qualified starts ÷ Total panel invitations. Pre-screened panels are excluded from this dataset because their IR is artificially inflated.
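The IR definition and the vendor-overstatement note above can be checked with two small helpers; the function names and figures are illustrative, not drawn from the dataset.

```python
def incidence_rate(qualified_starts: int, invitations: int) -> float:
    """Qualified starts / total panel invitations."""
    return qualified_starts / invitations

def vendor_claim_gap(claimed_ir: float, observed_ir: float) -> float:
    """Fractional overstatement of a vendor's claimed IR versus observed IR."""
    return (claimed_ir - observed_ir) / observed_ir

# A claimed IR of 24% against an observed 16% is a 50% overstatement,
# inside the 30-60% range noted for two-qualifier screeners.
gap = vendor_claim_gap(0.24, 0.16)  # 0.5
```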
Module 02: Response Quality
Rates of low-quality, fraudulent, or otherwise unusable responses detected through independent quality-check protocols.
QC removal rate by detection method:

| QC Method | P25 | Median | P75 | Note |
|---|---|---|---|---|
| Speeder detection only | 3% | 6% | 11% | Baseline method; misses sophisticated fraud |
| Speeder + attention checks | 5% | 9% | 16% | Standard HYMBS minimum |
| Full QC battery (6+ methods) | 8% | 14% | 22% | Higher removal reflects more rigorous detection, not worse panels |
A high QC removal rate is not inherently negative — it indicates the detection protocol is functioning. A consistently low rate (<3%) with basic QC methods warrants scrutiny.
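The scrutiny rule above can be sketched as a simple flag. The 3% threshold comes from the text; the method names and set-based representation are assumptions for illustration.

```python
# Baseline QC methods (per the table, speeder detection alone misses fraud)
BASIC_QC = {"speeder"}

def flag_removal_rate(removal_rate: float, methods: set[str]) -> str:
    """Flag a suspiciously low removal rate achieved with only basic QC.

    A low rate with a full QC battery is fine; a low rate with basic
    methods suggests the detection protocol is not catching much.
    """
    if removal_rate < 0.03 and methods <= BASIC_QC:
        return "scrutinise: low removal rate with basic QC only"
    return "ok"

# 2% removal using only speeder detection warrants a closer look
verdict = flag_removal_rate(0.02, {"speeder"})
```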
Designed vs. actual length of interview (LOI):

| Designed LOI | Actual Median | Variance | Note |
|---|---|---|---|
| 10 min | 8.4 min | −16% | Typical speeders compress by 20–35% |
| 15 min | 13.1 min | −13% | Most representative range for B2B |
| 20 min | 16.8 min | −16% | Abandonment increases sharply above 18 min for online |
| 25+ min | 18.2 min | −27% | Strong speeder signal; high removal rate expected |
An actual median LOI consistently below the designed LOI is expected. Concern arises when the median falls below 50% of the designed LOI, or when the distribution is bimodal (indicating two distinct response-behaviour groups).
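Both concern conditions can be screened for programmatically. A minimal sketch follows; the split-at-the-median gap check is a crude illustrative heuristic for bimodality, not a formal statistical test, and the 50%-of-designed gap threshold is an assumption.

```python
from statistics import median

def loi_flags(designed_min: float, actual_minutes: list[float]) -> list[str]:
    """Flag the two LOI concern conditions described above."""
    flags = []
    med = median(actual_minutes)
    # Condition 1: median below 50% of designed LOI
    if med < 0.5 * designed_min:
        flags.append("median_below_half_designed")
    # Condition 2 (rough heuristic): a large gap between the lower and
    # upper halves of the distribution may indicate two response groups
    lower = [t for t in actual_minutes if t <= med]
    upper = [t for t in actual_minutes if t > med]
    if lower and upper and (median(upper) - median(lower)) > 0.5 * designed_min:
        flags.append("possible_bimodal_distribution")
    return flags

# A 15-minute survey with a cluster of speeders and a cluster of
# engaged respondents trips both flags
flags = loi_flags(15, [4, 5, 5, 6, 6, 13, 14, 14, 15])
```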
Module 03: Cost Per Interview (CPI)
CPI ranges observed in independently sourced B2B fieldwork engagements. Excludes agency mark-up and study design fees.
| Audience Difficulty | Low (P25) | Typical (P50) | High (P75) |
|---|---|---|---|
| General business (broad) | $12 | $22 | $38 |
| Industry-specific (single sector) | $28 | $48 | $75 |
| Role-specific (Director+) | $55 | $90 | $140 |
| Niche technology buyer | $80 | $135 | $220 |
| C-Suite, enterprise only | $120 | $200 | $380+ |
CPI at P25 does not imply lower quality — it may reflect efficient panel access or lower incidence cost in specific geographies. CPI at P75+ does not guarantee quality; verify QC protocols independently.
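A quoted CPI can be located against the table's observed ranges as a first-pass sanity check. The dictionary below transcribes the table (treating the open-ended "$380+" as its lower bound); the segment keys and verdict strings are illustrative.

```python
# (P25, P50, P75) CPI in USD per complete, from the table above
CPI_BENCHMARKS = {
    "general_business": (12, 22, 38),
    "industry_specific": (28, 48, 75),
    "role_specific_director_plus": (55, 90, 140),
    "niche_technology_buyer": (80, 135, 220),
    "c_suite_enterprise": (120, 200, 380),
}

def cpi_position(audience: str, quoted_cpi: float) -> str:
    """Locate a quoted CPI against the observed P25-P75 range.

    Per the guidance above: a figure outside the range warrants
    investigation, not automatic acceptance or rejection.
    """
    p25, _p50, p75 = CPI_BENCHMARKS[audience]
    if quoted_cpi < p25:
        return "below_p25: may reflect efficient panel access; verify QC independently"
    if quoted_cpi > p75:
        return "above_p75: investigate; high price does not guarantee quality"
    return "within_range"

verdict = cpi_position("role_specific_director_plus", 90)  # "within_range"
```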
Data Submission
HYMBS accepts anonymised project-level data contributions to improve benchmark coverage. All submissions are processed under confidentiality protocol and attributed only in aggregate.