Scrub secrets from LLM output
Catches secrets the model echoes back before they reach storage: a single API key leaked into the kv store is a breach. The mask stage scrubs any accidentally leaked secrets from the output before the result is persisted.
Pipeline
vrk prompt --system "summarise this" < doc.txt | vrk mask | vrk kv set summary
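
For intuition, here is a minimal sketch of the kind of filter a mask stage implements: read stdin, replace anything shaped like a credential, write stdout. The regex patterns and the [MASKED] token are illustrative assumptions, not vrk's actual detection rules.

#!/usr/bin/env python3
# Sketch of a secret-masking stdin-to-stdout filter.
# Illustrative only: these patterns are assumptions, not vrk's ruleset.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                # GitHub personal access token
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                # generic "sk-" API key shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"), # PEM private key header
]

def mask(text: str) -> str:
    # Replace each match with a fixed token so the stored result keeps
    # its shape but carries no usable credential.
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[MASKED]", text)
    return text

if __name__ == "__main__":
    sys.stdout.write(mask(sys.stdin.read()))

A filter like this sits exactly where vrk mask does in the pipeline above: by the time the result reaches vrk kv set, any credential-shaped substring has already been replaced.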