topic-scorer
Algorithmic topic scoring engine — computes composite scores from freshness, search volume, social buzz, polarization heat, brand fit, and competition gap. Use when ranking topics for content priority or recalibrating scoring weights.
| Model | Source |
|---|---|
| sonnet | pack: content-pumper |
Full Reference
┏━ 🎯 topic-scorer ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Algorithmic engine for topic priority ranking  ┃
┃ — scores, thresholds, and weight recalibration ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
topic-scorer
Computes composite scores for topics in content-topics.json. Reads raw signals, normalizes them to 0-100, applies weights, and writes scores back via topic-memory. Triggers status transitions when scores cross config.autoThreshold.
Scoring Formula
`composite = Σ(normalized_signal × weight)`

Weights come from `content-topics.json` → `config.scoringWeights`. Default weights:
| Signal | Default Weight |
|---|---|
| freshness | 0.20 |
| searchVolume | 0.20 |
| socialBuzz | 0.15 |
| polarizationHeat | 0.10 |
| brandFit | 0.20 |
| competitionGap | 0.15 |
Total weight must sum to 1.0. Validate before scoring — abort with error if sum != 1.0.
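The weight check and composite formula can be sketched as follows — a minimal sketch, assuming the signal names from the table above; the function names are illustrative, not part of the skill:

```python
import math

# Default weights, mirroring config.scoringWeights in content-topics.json.
DEFAULT_WEIGHTS = {
    "freshness": 0.20,
    "searchVolume": 0.20,
    "socialBuzz": 0.15,
    "polarizationHeat": 0.10,
    "brandFit": 0.20,
    "competitionGap": 0.15,
}

def validate_weights(weights: dict[str, float]) -> None:
    """Abort with an error if weights do not sum to 1.0 (within float tolerance)."""
    total = sum(weights.values())
    if not math.isclose(total, 1.0, rel_tol=0, abs_tol=1e-9):
        raise ValueError(f"scoring weights sum to {total}, expected 1.0")

def composite(normalized: dict[str, float], weights: dict[str, float]) -> float:
    """composite = Σ(normalized_signal × weight), rounded to 2 decimal places."""
    validate_weights(weights)
    return round(sum(normalized[k] * w for k, w in weights.items()), 2)
```

The tolerance-based comparison matters: the default weights sum to 1.0 on paper but may not compare exactly equal in floating point.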
Signal Normalization
Every signal is normalized to 0-100 before weighting.
freshness
`normalized = max(0, 100 - (daysSinceDiscovered × 5))`

100 = discovered today; decays 5 pts/day since `firstSeen`. Source: `history.firstSeen` in the topic record.
searchVolume
`normalized = min(100, log10(volume) × 20)`

Volume < 10 → normalized = 0. Volume 100k+ → normalized capped at 100.
Source: content-research skill or Google Trends via WebSearch.
socialBuzz
`normalized = socialBuzz × 100`

Raw value is already 0-1. Source: trend-scanner signal output.
polarizationHeat
`normalized = polarizationHeat × 100`

Raw value is already 0-1. Source: sentiment-mapper signal output.
brandFit
`normalized = LLM assessment (0-100)`

Assess the topic against `brand.json` fields: voice, values, verticals. High fit = topic aligns with brand verticals AND voice. Low fit = off-brand subject matter or tone mismatch.
competitionGap
`normalized = max(0, 100 - (competitors × 10))`

competitors = number of ranking articles found via a `"topic" site:competitor.com` WebSearch. 10+ competitors → normalized = 0.
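The four formula-based normalizers above can be sketched directly (brandFit is omitted because it is an LLM assessment, not a formula; function names are illustrative):

```python
import math

def normalize_freshness(days_since_discovered: int) -> float:
    # 100 at discovery, decaying 5 points per day since firstSeen.
    return max(0.0, 100.0 - days_since_discovered * 5)

def normalize_search_volume(volume: int) -> float:
    # Explicit floor: anything under 10 searches scores 0.
    if volume < 10:
        return 0.0
    return min(100.0, math.log10(volume) * 20)

def normalize_ratio(raw: float) -> float:
    # socialBuzz and polarizationHeat both arrive as 0-1 values.
    return raw * 100.0

def normalize_competition_gap(competitors: int) -> float:
    # 10+ ranking competitor articles means no gap left to exploit.
    return max(0.0, 100.0 - competitors * 10)
```

Note the `volume < 10` branch is needed because `log10` alone would still give small positive scores for volumes between 1 and 9.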
Data Sources
| Signal | Source |
|---|---|
| freshness | history.firstSeen in content-topics.json |
| searchVolume | content-research skill → WebSearch Google Trends |
| socialBuzz | trend-scanner output → topic signals |
| polarizationHeat | sentiment-mapper output → topic signals |
| brandFit | LLM assessment vs brand.json |
| competitionGap | WebSearch competitor count |
Batch Scoring Process
- Read `content-topics.json`
- Filter: topics where `status != "archived"`
- For each topic:
  - Read raw signals from `topic.signals`
  - Normalize each signal (rules above)
  - Compute composite = Σ(normalized × weight)
  - Round to 2 decimal places
  - Call `topic-memory update-score` with the result
- After all scores updated, check threshold (see below)
- Output leaderboard: top 10 topics sorted by score descending
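The batch loop above can be sketched as follows — the `normalize` and `update_score` callables stand in for the normalization rules and the topic-memory `update-score` call, and the record shape is assumed from content-topics.json:

```python
def batch_score(topics, weights, normalize, update_score):
    """Score every non-archived topic and return a top-10 leaderboard.

    topics: list of dicts with "id", "status", and "signals" keys.
    normalize: maps a raw topic.signals dict to 0-100 values per signal.
    update_score: stand-in for the topic-memory update-score operation.
    """
    scored = []
    for topic in topics:
        if topic["status"] == "archived":
            continue  # archived topics are skipped, not scored
        normalized = normalize(topic["signals"])
        score = round(sum(normalized[k] * w for k, w in weights.items()), 2)
        update_score(topic["id"], score)
        scored.append((score, topic["id"]))
    # Leaderboard: top 10 by score, descending.
    return sorted(scored, reverse=True)[:10]
```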
Threshold Actions
After batch scoring, for each topic where status === "discovered":
if composite >= config.autoThreshold AND queued_count < config.maxQueueSize:
→ set status = "queued"
→ log: "topic queued — score {composite} crossed threshold {autoThreshold}"

queued_count = current count of topics with status === "queued". Never exceed maxQueueSize.
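A sketch of the threshold pass, assuming topic records carry `status` and `score` fields (the log line follows the format quoted above):

```python
def apply_threshold(topics, auto_threshold, max_queue_size):
    """Promote discovered topics whose score crossed the threshold,
    without ever letting the queue exceed max_queue_size."""
    # Count currently queued topics before promoting anything.
    queued_count = sum(1 for t in topics if t["status"] == "queued")
    for t in topics:
        if t["status"] != "discovered":
            continue
        if t["score"] >= auto_threshold and queued_count < max_queue_size:
            t["status"] = "queued"
            queued_count += 1  # keep the cap accurate as we promote
            print(f'topic queued — score {t["score"]} crossed threshold {auto_threshold}')
```

Incrementing `queued_count` inside the loop is what prevents a single batch from overshooting `maxQueueSize`.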
Weight Recalibration
Weekly process — correlates historical topic scores with article performance. Inputs:
- Topics with `status` in `["published", "monitoring"]`
- `history.performance[]` entries from GA4 or GSC
Performance Signal
`performance_score = normalize(sessions × 0.4 + impressions × 0.3 + clicks × 0.3)`

Normalize across all published topics to 0-100.
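The doc does not specify which normalization to use across topics; a min-max sketch is one reasonable reading:

```python
def performance_scores(entries):
    """Blend GA4/GSC metrics per topic, then min-max normalize
    across all published topics to 0-100. entries: list of dicts
    with "sessions", "impressions", "clicks" keys (assumed shape)."""
    raw = [e["sessions"] * 0.4 + e["impressions"] * 0.3 + e["clicks"] * 0.3
           for e in entries]
    lo, hi = min(raw), max(raw)
    if hi == lo:
        return [50.0] * len(raw)  # no spread: nothing to normalize
    return [round((r - lo) / (hi - lo) * 100, 2) for r in raw]
```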
Correlation Step
For each signal, compute the Pearson correlation between:
- `normalized_signal_at_queue_time` (from score history)
- `performance_score` (from actual results)
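Pearson correlation needs no external dependency; a minimal sketch, with a 0.0 fallback for constant series (an assumption — the doc does not say how to handle zero variance):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between queue-time signal values and
    performance scores for the same topics, paired by index."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    # A constant series has zero variance; return 0 rather than divide by it.
    return cov / (sx * sy) if sx and sy else 0.0
```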
Weight Update Rule
`new_weight = current_weight × (1 + (correlation - avg_correlation) × 0.1)`

Clamp each new weight to [0.05, 0.40]. Normalize all weights to sum to 1.0 after adjustment.
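The update rule, clamp, and renormalization can be combined in one pass (a sketch; the function name is illustrative):

```python
def recalibrate(weights, correlations):
    """Apply new_weight = w × (1 + (r - avg_r) × 0.1) per signal,
    clamp each result to [0.05, 0.40], then renormalize to sum 1.0."""
    avg = sum(correlations.values()) / len(correlations)
    updated = {
        k: min(0.40, max(0.05, w * (1 + (correlations[k] - avg) * 0.1)))
        for k, w in weights.items()
    }
    # Clamping breaks the sum-to-1.0 invariant, so renormalize last.
    total = sum(updated.values())
    return {k: w / total for k, w in updated.items()}
```

Note the order matters: clamping after the multiplicative update can push the sum away from 1.0, which is why renormalization is the final step.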
Output
Write updated config.scoringWeights to content-topics.json. Log before/after weights for audit.
Recalibration requires minimum 10 published topics with performance data. Skip if insufficient data — log reason.
Integration
| Skill | Relationship |
|---|---|
| topic-memory | Writes scores and status via the update-score operation |
| trend-scanner | Provides socialBuzz signal |
| sentiment-mapper | Provides polarizationHeat signal |
| content-research | Provides searchVolume data |
| content-pumper-pimp | Triggers batch scoring on schedule |
Output Format
topic-scorer batch complete — 2026-03-01T14:00:00Z
Scored ✓ 42 topics
Queued ✓ 3 new (total: 7)
Skipped ○ 2 archived

Top 5:
88.4 AI regulation debate queued
82.1 Medicare negotiation impact queued
79.3 EV tax credit changes discovered
71.0 Student loan forgiveness monitoring
65.5 Housing affordability index discovered