Token-checked LLM call
Count tokens before sending a prompt to an LLM and abort if it exceeds the limit. This prevents silent truncation: the model never sees a prompt it can only half-fit.
Pipeline
cat prompt.txt | vrk tok --check 4000 | vrk prompt --system "Summarise this."
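The check-then-abort step can be sketched in Python. This is a minimal illustration, not vrk's implementation: the `check_tokens` name is ours, and whitespace splitting stands in for a real model tokenizer (actual token counts will differ).

```python
import sys

def check_tokens(text: str, limit: int) -> str:
    """Pass text through unchanged only if its token count is within limit.

    Counting here is a rough whitespace split; a real tool would use the
    target model's own tokenizer, so counts will not match exactly.
    """
    count = len(text.split())
    if count > limit:
        # Abort before the LLM ever sees a prompt it cannot fit.
        raise SystemExit(f"prompt is {count} tokens, limit is {limit}")
    return text

if __name__ == "__main__":
    # Mirrors the pipeline: read stdin, check, forward to stdout.
    sys.stdout.write(check_tokens(sys.stdin.read(), 4000))
```

Because it reads stdin and writes stdout, the script drops into the same pipeline position as `vrk tok --check 4000`.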