Ship variants, measure outcomes, keep what works.
A/B testing provides evidence for product decisions. Two versions of a headline, a checkout flow, or an onboarding step — measure which converts better. Implementing A/B testing requires variant assignment, consistent user experience, and statistically rigorous measurement.
The starting point: product decisions made without experimentation. You want to test multiple variants of UI, copy, or flows and measure which performs better.
A/B testing has three requirements that are frequently implemented poorly:
Consistent assignment: A user should see the same variant on every visit. If the assignment is random on each page load, the user sees different variants — corrupting the data. Assignment should be based on user ID or a stable anonymous cookie, hashed to a variant.
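A minimal sketch of hash-based assignment in TypeScript (the function names, the FNV-1a hash, and the 50/50 split are illustrative choices, not any library's API):

```typescript
// Deterministically map userId + experiment key to a bucket in [0, 100).
// Same inputs always yield the same bucket, so assignment is stable across
// visits with no storage required.
function hashToBucket(userId: string, experimentKey: string, buckets: number = 100): number {
  // FNV-1a 32-bit hash of the combined key
  let hash = 0x811c9dc5;
  const input = experimentKey + ":" + userId;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) % buckets;
}

function assignVariant(userId: string, experimentKey: string): "control" | "treatment" {
  // 50/50 split; adjust the threshold for other traffic allocations.
  return hashToBucket(userId, experimentKey) < 50 ? "control" : "treatment";
}
```

Including the experiment key in the hash input matters: it decorrelates bucketing across experiments, so a user who lands in "treatment" for one test isn't systematically biased toward "treatment" in the next.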
Mutual exclusivity: Users in experiment A shouldn't also be in experiment B if the experiments affect overlapping behavior. Experiment interaction effects corrupt results.
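One common way to enforce exclusivity is layered bucketing: hash on a shared layer key and give each experiment a disjoint traffic slice. A sketch, with assumed layer and experiment names:

```typescript
// Hash userId + a shared layer key to a bucket in [0, 100).
function layerBucket(userId: string, layerKey: string): number {
  let hash = 0x811c9dc5; // FNV-1a 32-bit
  const input = layerKey + ":" + userId;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) % 100;
}

// Each experiment in the layer owns a disjoint bucket range, so a user can be
// in at most one of them. "layer-1" and the experiment names are assumptions.
function experimentFor(userId: string): "checkout-exp" | "pricing-exp" | null {
  const b = layerBucket(userId, "layer-1");
  if (b < 40) return "checkout-exp";
  if (b < 80) return "pricing-exp";
  return null; // remaining 20%: in neither experiment
}
```

Experiments that can't interact (say, an email subject line test and a checkout test) can live in separate layers and overlap freely; only experiments touching overlapping behavior need to share a layer.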
Statistical significance: An experiment with 20 users per variant can't produce reliable conclusions. The required sample size depends on the expected effect size and the baseline conversion rate. Running an experiment for 2 days and calling it "significant" at p < 0.1 is invalid.
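For planning, the standard two-proportion sample-size approximation can be sketched as follows (the constants fix two-sided alpha = 0.05 and 80% power for brevity; a real calculator would let both vary):

```typescript
// Rough per-variant sample size needed to detect an absolute lift `delta`
// over a baseline conversion rate. Standard two-proportion approximation.
function requiredSampleSize(baseline: number, delta: number): number {
  const zAlpha = 1.96; // two-sided alpha = 0.05
  const zBeta = 0.84;  // power = 0.80
  const p1 = baseline;
  const p2 = baseline + delta;
  const pBar = (p1 + p2) / 2;
  const term = zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
               zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(Math.pow(term, 2) / (delta * delta));
}
```

For example, detecting a 1-point absolute lift on a 5% baseline requires on the order of eight thousand users per variant, which is why a 2-day "win" on modest traffic is usually noise.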
Implementation options:
PostHog experiments: built on top of PostHog feature flags. Assigns users to variants, tracks conversion events, and reports statistical significance. If you already use PostHog for analytics, experiments come nearly for free.
Vercel Edge Middleware (for Next.js): Assign variants at the edge before the page renders. No client-side flicker. Store assignment in a cookie; report events with the variant attribute.
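The middleware's cookie logic reduces to a small framework-agnostic function (the type and function names here are assumed; in actual Vercel Edge Middleware you would read the cookie from the request and set it on the `NextResponse`):

```typescript
// Read the variant cookie if present; otherwise assign and flag it for
// Set-Cookie so the choice persists across visits.
type Assignment = { variant: string; setCookie: boolean };

function resolveVariant(cookieValue: string | undefined, variants: string[]): Assignment {
  if (cookieValue !== undefined && variants.indexOf(cookieValue) !== -1) {
    // Returning visitor: keep the stored assignment for consistency.
    return { variant: cookieValue, setCookie: false };
  }
  // First touch: random assignment is acceptable here only because the
  // cookie immediately pins it; logged-in users should use hash-based
  // assignment on user ID instead.
  const variant = variants[Math.floor(Math.random() * variants.length)];
  return { variant, setCookie: true };
}
```

Because this runs at the edge before the page renders, the user never sees the wrong variant flash before the assigned one loads.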
GrowthBook: Open-source experimentation platform. Connects to your existing analytics to pull metrics. Self-hosted or cloud.
What can be A/B tested:
- Onboarding flow (step order, required vs optional steps)
- Pricing page (price points, feature emphasis, CTA copy)
- Trial conversion prompts (timing, messaging)
- Email subject lines and send timing
- Feature discovery UX
A/B testing infrastructure with consistent variant assignment, conversion tracking, and result reporting
- PostHog experiments or Vercel Edge Middleware setup
- Variant assignment with consistent user-level assignment
- Conversion event tracking per variant
- Experiment results reporting with significance indicators
- Guardrail metrics to detect negative side effects
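For a two-variant conversion test, significance reporting typically comes down to a two-proportion z-test. A sketch using the standard pooled formula (not any platform's API):

```typescript
// z statistic for treatment vs control conversion rates, using the pooled
// proportion for the standard error. |z| > 1.96 corresponds to p < 0.05
// (two-sided).
function zScore(convControl: number, nControl: number,
                convTreatment: number, nTreatment: number): number {
  const pC = convControl / nControl;
  const pT = convTreatment / nTreatment;
  const pPool = (convControl + convTreatment) / (nControl + nTreatment);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nControl + 1 / nTreatment));
  return (pT - pC) / se;
}
```

Guardrail metrics run through the same test in the opposite direction: the question is not "did the treatment win?" but "did it significantly hurt a metric we care about, like page load time or refund rate?"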
One honest number to start.
Fixed-scope, fixed-price. The number below is the starting point — final scope is built from your brief.
A/B testing infrastructure with consistent variant assignment, conversion tracking, and result reporting
Three steps, every time.
The same repeatable engagement on every project. No surprises, no mystery, no billable ambiguity.
Brief & discovery.
We send you questions, then get on a call. Output: a written scope with every step, feature, and integration listed.
Build & ship.
Fixed schedule, weekly reviews. No scope creep unless you change the scope — and if you do, we reprice it transparently.
Warranty & retainer.
30-day warranty on every launch. Most clients stay on a monthly retainer for ongoing features and maintenance.
Why Fixed-Price Matters Here
A/B testing infrastructure is a well-bounded build: the assignment mechanism, event tracking, and reporting setup. That makes it a natural fit for a fixed price.
Questions, answered.
Until it reaches statistical significance or hits a predetermined minimum sample size, decided before the test starts. A test that reaches significance in 2 days either has an enormous effect size or was underpowered. Plan for 2-4 weeks minimum for most conversion metrics.
Yes — feature flags with event tracking provide the basics. A dedicated experimentation platform adds sample size calculators, significance testing, and interaction detection. For early-stage products: feature flags + PostHog. For growth-stage: a dedicated platform.
Tell Ryel about your project.
Describe what you’re building and what outcome you need. You’ll have a written, fixed-price scope within the week.