Batch LLM processing with rate limiting
Prevents the classic failure at job 847 of 10,000: the throttle paces requests, and the token check rejects oversized documents before a request is wasted on the API call.
Process a large document set without hitting API rate limits. Safe to rerun: results are keyed by filename, so a second pass overwrites rather than duplicates.
Pipeline
for f in docs/*.md; do
  vrk tok --check 8000 < "$f" \
    | vrk throttle --rate 60/m \
    | vrk prompt --json \
    | vrk kv set "result:$(basename "$f")"
done
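The same gate-then-pace pattern can be sketched in plain shell for environments without vrk. This is a minimal sketch, not the vrk implementation: it assumes a crude ~4 bytes-per-token heuristic for the gate, a fixed one-second sleep for the 60/m pace, and a placeholder write in place of the real API call. It also skips files whose result already exists, which is one way to make reruns cheap.

```shell
#!/bin/sh
# Sketch only: token gate + pacing, with a placeholder instead of a real API call.
limit=8000      # max tokens per document, mirroring --check 8000
interval=1      # seconds between calls: one per second = 60/m

mkdir -p docs results
# Sample inputs so the sketch is self-contained (hypothetical data).
printf 'hello world\n' > docs/a.md
head -c 40000 /dev/zero | tr '\0' 'x' > docs/big.md   # ~10000 "tokens", over the limit

for f in docs/*.md; do
  [ -e "$f" ] || continue

  # Gate: estimate tokens as bytes/4 and skip oversized documents.
  bytes=$(wc -c < "$f")
  tokens=$(( bytes / 4 ))
  if [ "$tokens" -gt "$limit" ]; then
    echo "skip $f (~$tokens tokens)" >&2
    continue
  fi

  # Rerun safety: skip documents that already have a result.
  out="results/$(basename "$f")"
  [ -e "$out" ] && continue

  # Placeholder for the real API call.
  echo "processed $f" > "$out"

  # Pace: wait between calls so the batch stays under the rate limit.
  sleep "$interval"
done
```

The ordering matters: the gate runs before the sleep, so rejected documents cost neither a request nor a pacing delay.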