Assess experimentation and A/B testing program maturity across teams. Audit culture, tools, and governance to benchmark, uncover gaps, and drive improvements.
What's Included
AI-Powered Questions
Intelligent follow-up questions based on responses
Automated Analysis
Real-time sentiment and insight detection
Smart Distribution
Target the right audience automatically
Detailed Reports
Comprehensive insights and recommendations
Sample Survey Items
Q1
Multiple Choice
Which function best describes your primary role?
Product management
Growth/performance marketing
Lifecycle/CRM
Brand/creative marketing
Data/analytics
Engineering
Design/UX
Q2
Dropdown
Approximately how many people on your team are directly involved in experimentation?
1
2–5
6–10
11–20
21–50
51+
Q3
Multiple Choice
In the last 90 days, about how many experiments did your team launch?
0
1–2
3–5
6–10
11–20
21+
Q4
Multiple Choice
What are the primary objectives your experiments target? Select up to 5.
Conversion rate
Retention/churn
Engagement
Monetization/revenue
Activation/onboarding
Acquisition/traffic
Feature adoption
Pricing/packaging
Brand/creative effectiveness
Learning about user behavior
Q5
Rating
Over the past 6 months, how would you rate the rigor of your team’s hypotheses?
Scale: 10 (star)
Min: Low rigor
Max: High rigor
Q6
Matrix
How much do you agree with each statement about your team’s experimentation process?
Scale (columns): Strongly disagree · Disagree · Neutral · Agree · Strongly agree
Rows:
We have a documented experimentation process.
We pre-define primary metrics and MDE before launch.
We conduct power/sample size calculations.
We pre-register or log hypotheses and analysis plans.
We run QA and guardrail checks before launch.
Q7
Multiple Choice
Which test or study types do you run regularly? Select all that apply.
A/B or split tests
Multivariate tests (MVT)
Holdout/control tests
Quasi-experiments/observational
Multi-armed bandits
Sequential tests
UX/usability studies
Surveys/concept tests
Feature-flag rollouts/experiments
Q8
Dropdown
Typical runtime for a single experiment (from start to decision).
Same day
1–3 days
4–7 days
1–2 weeks
3–4 weeks
Over 4 weeks
Varies widely
Q9
Constant Sum
Allocate 100 points across the stages where your team spends effort in a typical experiment.
Total must equal 100
Ideation/prioritization
Design/UX and copy
Instrumentation and data quality
Implementation/engineering
QA and launch
Monitoring during run
Analysis and interpretation
Documentation and sharing
Rollout and follow-up
Min per option: 0
Whole numbers only
Q10
Multiple Choice
Which experimentation tools or platforms are currently in use? Select all that apply.
Optimizely
VWO
AB Tasty
Statsig
Eppo
Amplitude Experiment
LaunchDarkly or Flagsmith
Google Optimize (legacy)
In-house/custom platform
None currently
Q11
Multiple Choice
How are experiment datasets integrated with your analytics and data warehouse?
Fully integrated with analytics and warehouse
Partial integration; manual pulls
Isolated in tool only
Not sure
Q12
Multiple Choice
Do you have a defined and versioned metrics catalog for experiments?
Yes, centrally defined and versioned
Yes, team-specific only
In progress
No
Q13
Multiple Choice
How do you determine sample size and test duration?
Fixed-horizon power analysis
Sequential testing/alpha spending
Heuristics/benchmarks
Vendor tool auto-calculates
We usually don’t
Not sure
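For context on Q13's first option, here is a minimal sketch of a fixed-horizon power calculation for a conversion-rate test, assuming Python with statsmodels; the baseline rate, MDE, alpha, power, and traffic figures below are illustrative, not prescriptive.

```python
# Minimal sketch: fixed-horizon sample size for a two-proportion A/B test.
# Assumes statsmodels is installed; all input values are illustrative.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10   # current conversion rate
mde = 0.01        # minimum detectable effect (absolute lift)
alpha = 0.05      # two-sided significance level
power = 0.80      # 1 - beta

# Cohen's h effect size for the two proportions
effect_size = proportion_effectsize(baseline + mde, baseline)

# Required sample size per variant
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
print(f"~{n_per_variant:,.0f} users per variant")

# Translate sample size into a run duration, given daily eligible traffic
daily_traffic_per_variant = 2_000
days = n_per_variant / daily_traffic_per_variant
print(f"~{days:.0f} days at {daily_traffic_per_variant:,} users/variant/day")
```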
Q14
Multiple Choice
Attention check: please select “I am paying attention.”
I am paying attention
I am not paying attention
Prefer not to say
Q15
Ranking
Rank the top factors you weigh when deciding whether to ship a variant.
Drag to order (top = most important)
Effect size vs. baseline
Statistical significance or credible interval
Impact on guardrail metrics
Estimated business value
Implementation cost/complexity
Qualitative feedback/UX signals
Q16
Multiple Choice
Which risk controls are typically used for experiments? Select all that apply.
Guardrail metrics monitored
Kill switches/instant rollback
Ethics/privacy review when needed
Traffic allocation caps
Country/segment exclusions
QA and instrumentation checklist
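To make the first two controls in Q16 concrete, here is a minimal sketch of a guardrail check wired to a kill switch; `read_metric` and `disable_flag` are hypothetical stand-ins for whatever metric store and feature-flag client (LaunchDarkly, Statsig, or an in-house platform) your stack actually exposes.

```python
# Minimal sketch: guardrail monitoring with an automatic kill switch.
# The metric names, thresholds, and callbacks are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Guardrail:
    metric: str
    floor: float  # kill the experiment if the metric drops below this

GUARDRAILS = [
    Guardrail(metric="checkout_success_rate", floor=0.97),
    Guardrail(metric="p95_latency_ok_rate", floor=0.99),
]

def check_guardrails(experiment_id: str, read_metric, disable_flag) -> bool:
    """Return True if the experiment stays live, False if it was killed.

    read_metric(experiment_id, metric_name) -> float   (hypothetical)
    disable_flag(experiment_id) -> None                (hypothetical rollback)
    """
    for g in GUARDRAILS:
        value = read_metric(experiment_id, g.metric)
        if value < g.floor:
            disable_flag(experiment_id)  # instant rollback / kill switch
            print(f"Killed {experiment_id}: {g.metric}={value:.4f} < {g.floor}")
            return False
    return True
```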
Q17
Multiple Choice
Is there an experimentation council or governance body?
Yes, org-wide
Yes, within my business unit
No, but being considered
No
Q18
Multiple Choice
Where are experiment plans and results documented?
Central system of record
Team wiki or docs
Within the testing tool
Spreadsheets
Not consistently documented
Q19
Opinion Scale
Overall, how mature is experimentation in your organization today?
Range: 1–10
Min: Ad-hoc
Mid: Defined
Max: Best-in-class
Q20
Numeric
Typically, how many business days pass between test end and decision?
Accepts a numeric value
Whole numbers only
Q21
Multiple Choice
In the last 6 months, about what share of experiments led to a roll-out?
0–10%
11–25%
26–40%
41–60%
61–80%
81–100%
We don’t track
Q22
Long Text
What are the biggest blockers to effective experimentation right now?
Max 600 chars
Q23
Dropdown
What is your seniority level?
Individual contributor
Manager
Director
VP
C-level
Other
Q24
Dropdown
Approximately how many employees are in your company?
1–10
11–50
51–200
201–1,000
1,001–5,000
5,001–10,000
10,001+
Q25
Dropdown
Which industry best describes your organization?
Consumer software
B2B/SaaS
E-commerce/retail
Financial services/fintech
Media/entertainment
Healthcare/life sciences
Gaming
Telecom
Travel/hospitality
Other
Q26
Dropdown
Where are you primarily based?
North America
Latin America
Europe
Middle East
Africa
Asia
Oceania
Q27
Dropdown
How many years have you worked with experimentation or A/B testing?
0–1
2–3
4–6
7–10
11+
Q28
Long Text
Anything else we should know about your experimentation practice?
Max 600 chars
Q29
AI Interview
AI Interview: 2 follow-up questions on your experimentation operations
Length: 2
Personality: Expert Interviewer
Mode: Fast
Q30
Chat Message
Thank you for completing the survey! Your input will help improve experimentation operations.
Ready to Get Started?
Launch your survey in minutes with this pre-built template