classify-and-plan-edit

Use when scoring raw takes against storyboard requirements and building an edit timeline. Classifies takes as good/mess-up/partial/silence, picks the best take per beat, and generates an edit decision list (EDL).
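The four classification categories above can be sketched with simple fuzzy text matching. This is an illustrative sketch only (the real logic is documented in reference/classification.md); the ratio thresholds and the use of difflib are assumptions, not the skill's actual implementation:

```python
from difflib import SequenceMatcher

def classify_take(transcript_text: str, beat_script: str,
                  match_threshold: float = 0.4) -> str:
    """Classify one take by fuzzy-matching its transcript to the beat script.

    Categories mirror the skill's good/mess-up/partial/silence scheme;
    the 0.8 "good" cutoff is an assumed value for illustration.
    """
    if not transcript_text.strip():
        return "silence"          # nothing was said on this take
    ratio = SequenceMatcher(None, transcript_text.lower(),
                            beat_script.lower()).ratio()
    if ratio > 0.8:
        return "good"             # transcript closely matches the script
    if ratio > match_threshold:
        return "partial"          # some of the beat was covered
    return "mess-up"              # take doesn't match this beat
```

For example, an empty transcript classifies as silence regardless of the beat script, while a word-for-word read classifies as good.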

Model: sonnet
Source pack: video-pipeline

Full Reference

Reads storyboard.json, clip-manifest.json, and transcripts/ to score every take against storyboard beat requirements using fuzzy text matching and four-dimensional scoring. Selects the best take per beat by usability score, detects precise in/out points, handles coverage gaps, and produces edl.json consumed by execute-edit.
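Best-take selection and gap handling can be pictured as a per-beat argmax over usability scores. A minimal sketch, assuming hypothetical field names (beat_id, usability, take_id) rather than the skill's actual schema:

```python
def pick_best_takes(beats: list, scored_takes: list) -> list:
    """Select the highest-usability take for each storyboard beat.

    Beats with no matching take are recorded as coverage gaps so the
    downstream edit stage can surface them instead of silently skipping.
    Field names here are illustrative assumptions.
    """
    edl = []
    for beat in beats:
        candidates = [t for t in scored_takes if t["beat_id"] == beat["id"]]
        if not candidates:
            edl.append({"beat_id": beat["id"], "gap": True})
            continue
        best = max(candidates, key=lambda t: t["usability"])
        edl.append({"beat_id": beat["id"], "take": best["take_id"]})
    return edl
```

Recording gaps explicitly (rather than dropping the beat) keeps the EDL aligned one-to-one with the storyboard, which simplifies review before the edit is executed.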


Inputs: storyboard.json, clip-manifest.json, transcripts/*.json
Output: edl.json in current directory
Next stage: execute-edit
Match threshold: content_match > 0.4 (configurable)
In-point buffer: 0.3s before first word
Out-point buffer: 0.5s after last word
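The in/out-point buffers above pad the cut so speech isn't clipped at either end. A minimal sketch of how they might be applied to word-level timestamps (the word dict shape is an assumption based on typical transcript formats):

```python
def detect_in_out(words: list, in_buffer: float = 0.3,
                  out_buffer: float = 0.5) -> tuple:
    """Compute clip in/out points from word-level timestamps.

    `words` is assumed to be a non-empty list of {"start": s, "end": e}
    dicts in seconds, ordered by time. The in point is clamped at 0 so
    the buffer can't run past the start of the clip.
    """
    in_point = max(0.0, words[0]["start"] - in_buffer)   # 0.3s lead-in
    out_point = words[-1]["end"] + out_buffer            # 0.5s tail
    return in_point, out_point
```

So a line spoken from 1.0s to 2.0s yields a cut from roughly 0.7s to 2.5s.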

If you want to…
- Understand classification categories, scoring dimensions, beat matching, and gap handling: reference/classification.md
- See the edl.json schema and classification summary output: reference/output.md

Usage: Read the reference file matching your current task from the index above. Each file is self-contained with code examples and inline gotchas.


┏━ ⚡ classify-and-plan-edit ━━━━━━━━━━━━━━━━━━━━━┓
┃ Classifying [count] takes → building EDL       ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛