cro-pimp

Active router for ALL conversion rate optimization requests — classifies by use case (full audit, experimentation, landing page optimization, behavioral analytics, server-side tracking) and routes to the correct CRO skill. Use when improving conversion rates, setting up experiments, optimizing funnels, or tracking user behavior.

Model: sonnet
Source: pack: cro
Full Reference

If the request involves conversion rate optimization in ANY way — CRO audits, A/B testing, split testing, landing page optimization, heatmaps, session recordings, Microsoft Clarity, server-side tracking, CAPI fan-out, conversion events, or anything else CRO-related — you MUST route through this skill FIRST.

This is not optional. This is not negotiable. You cannot skip this.

The orchestration layer for all conversion rate optimization expertise. Not documentation — an active router. Every CRO request flows through this routing table before any response.

Mandatory Announcement — FIRST OUTPUT before anything else:

┏━ 📊 cro-pimp ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ [one-line description of the request/routing]  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

No exceptions. Box frame first, then route.

The CRO pack covers the full optimization spectrum — from site-wide conversion audits and controlled experiments to landing page tuning, behavioral analytics with Clarity, and server-side event tracking via CAPI.

Classify the request. Invoke the matching skill. No response before invocation.

| Request Pattern | Skill |
| --- | --- |
| Full site audit, conversion funnel review, CRO strategy | cro-audit |
| A/B test, split test, experiment design, variant rollout | ab-testing |
| Landing page, CTA, form, hero, above-the-fold optimization | landing-page-cro |
| Heatmaps, session recordings, rage clicks, scroll depth, Clarity setup | microsoft-clarity |
| CAPI, server-side events, event deduplication, pixel + server parity | server-side-tracking |
| "How do I improve conversions?" / "Where do I start?" | Decision matrix → route |

When the user hasn’t specified a tool or approach, classify their use case:

| Signal | Route To |
| --- | --- |
| "Audit my site", "why aren't people converting", full funnel review | cro-audit |
| "Run an experiment", "test two variants", "split traffic" | ab-testing |
| "Optimize my landing page", "improve CTA", "reduce form drop-off" | landing-page-cro |
| "Set up heatmaps", "watch session recordings", "find rage clicks" | microsoft-clarity |
| "Server-side events", "CAPI", "deduplicate pixel + server", "/api/track" | server-side-tracking |
| "Which CRO tool should I use?" | Ask one question: is this about tracking, testing, or content? |

Shortcut rules:

  • Full audit with no specific hypothesis → cro-audit, no discussion
  • Has a specific variant to test → ab-testing, no discussion
  • Targeting a single page/CTA → landing-page-cro, no discussion
  • Wants to understand user behavior visually → microsoft-clarity, no discussion
  • Server-side event reliability or dedup issue → server-side-tracking, no discussion

Before routing, check project context:

  • Check for Microsoft Clarity script (clarityProjectId, clarity("init", ...) in source)
  • Check for A/B test configs (experiments.json, feature flag files, Optimizely/VWO/GrowthBook setup)
  • Check for /api/track endpoint (server-side tracking already wired)
  • package.json → detect @microsoft/clarity, @growthbook/growthbook, or analytics libs already installed
  • .env.example → CAPI tokens, Clarity project IDs, experiment keys hint at existing setup

| State | Action |
| --- | --- |
| Clarity script detected | Route to microsoft-clarity — already initialized |
| A/B config or flag file detected | Route to ab-testing — existing experiment infrastructure |
| /api/track endpoint present | Route to server-side-tracking — extend existing pipeline |
| Nothing detected | Apply decision matrix |
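The context checks and the state → action mapping above amount to a small pre-routing classifier. A minimal sketch, assuming the checks are fed file contents rather than run against disk — the `ProjectContext` shape and `routeFromContext` helper are illustrative, not part of the skill pack:

```typescript
// Illustrative pre-routing probe mirroring the State → Action table.
// All type and function names here are hypothetical.
type ProjectContext = {
  packageJson?: string;          // contents of package.json, if present
  envExample?: string;           // contents of .env.example, if present
  hasTrackEndpoint?: boolean;    // an /api/track route exists
  hasExperimentConfig?: boolean; // experiments.json or feature-flag files found
};

function routeFromContext(ctx: ProjectContext): string {
  const pkg = ctx.packageJson ?? "";
  const env = ctx.envExample ?? "";
  // Clarity already initialized → extend it rather than re-audit
  if (pkg.includes("@microsoft/clarity") || /clarity/i.test(env)) {
    return "microsoft-clarity";
  }
  // Existing experiment infrastructure → ab-testing
  if (ctx.hasExperimentConfig || pkg.includes("@growthbook/growthbook")) {
    return "ab-testing";
  }
  // Server-side pipeline already wired → extend it
  if (ctx.hasTrackEndpoint || /CAPI/i.test(env)) {
    return "server-side-tracking";
  }
  // Nothing detected → fall back to the decision matrix
  return "decision-matrix";
}
```

Keeping the classifier pure (no filesystem access) keeps detection order explicit: strongest existing-infrastructure signal wins, and only an empty context falls through to the decision matrix.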

| User Says | Chain |
| --- | --- |
| "My conversion rate is low, where do I start?" | cro-audit → surfaces gaps → chain to specific skill |
| "Set up heatmaps then run a test on my hero" | microsoft-clarity → ab-testing |
| "Optimize my checkout page" | landing-page-cro (page-specific) + microsoft-clarity (behavioral data) |
| "Make sure my Meta CAPI matches browser pixel" | server-side-tracking |
| "Run an experiment on CTA copy" | landing-page-cro (identify hypothesis) → ab-testing (run test) |
| "Full CRO setup from scratch" | cro-audit → microsoft-clarity → ab-testing → server-side-tracking |

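The CAPI parity chain above hinges on event deduplication: Meta treats a browser pixel event and a server-side event as duplicates when their event name and event ID match. A minimal sketch of that matching logic — the `TrackedEvent` type and `dedupeEvents` helper are illustrative, not the actual CAPI API:

```typescript
// Illustrative dedup: keep one copy of each (eventName, eventId) pair so a
// browser pixel event and its server-side CAPI twin are not double-counted.
type TrackedEvent = {
  eventName: string;             // e.g. "Purchase"
  eventId: string;               // shared ID sent from both browser and server
  source: "browser" | "server";
};

function dedupeEvents(events: TrackedEvent[]): TrackedEvent[] {
  const seen = new Set<string>();
  const unique: TrackedEvent[] = [];
  for (const e of events) {
    const key = `${e.eventName}:${e.eventId}`;
    if (!seen.has(key)) {
      seen.add(key);
      unique.push(e);
    }
  }
  return unique;
}
```

The key design point is that the same event ID must be generated once (typically client-side) and forwarded to the server, not generated independently on each side.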
  • Never respond about CRO, experiments, or tracking before invoking the target skill
  • No summarizing, planning to invoke, or explaining what you’re about to do
  • If unclear, ask ONE clarifying question, then route
  • The skill’s content has the verified facts — always defer to it
  • “How do I improve conversions?” is decision matrix territory — NEVER jump to implementation