26 Unix tools. One binary. Zero dependencies.
The missing coreutils for the agent era.
vrk mcp - expose all 26 tools to any AI agent.
curl vrk.sh/install.sh | sh - ready in 5 seconds.

#prompt

8 posts tagged #prompt

recipe

Token-checked LLM call

· 2 min read

Prevents silent truncation - the model never sees a prompt it can only half-fit. Count tokens before sending to an LLM, and abort if the prompt is too large.
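The check this recipe describes can be sketched generically in Python. The token estimate and budget below are illustrative assumptions, not vrk's actual counter:

```python
MAX_TOKENS = 8000  # hypothetical context budget for the target model


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real pipeline would use the model's own tokenizer.
    return len(text) // 4


def checked_call(prompt: str, send):
    # Count (estimated) tokens first; only call the model if the prompt fits.
    n = estimate_tokens(prompt)
    if n > MAX_TOKENS:
        raise ValueError(f"prompt is ~{n} tokens, over the {MAX_TOKENS} budget")
    return send(prompt)
```

Aborting before the request means the model never receives a prompt the context window would silently clip.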

recipe

Retry flaky API call

· 2 min read

Transient 500s don't kill the pipeline - coax retries with backoff so one bad request doesn't stop the run. Wrap an LLM prompt in coax for ...
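The backoff pattern behind this recipe can be sketched generically in Python. The attempt count and delays are illustrative, not coax's actual defaults:

```python
import random
import time


def retry(fn, attempts=4, base_delay=0.5):
    # Retry transient failures (e.g. HTTP 500s) with exponential backoff
    # plus a little jitter, so one bad request doesn't stop the run.
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** i) + random.uniform(0, 0.1))
```

Wrapping the LLM call in `retry` gives each transient error a few chances to clear before the pipeline gives up.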

recipe

Fetch and summarise a page

· 1 min read

Catches oversized pages before the API call - no wasted request on a doc that won't fit in context. Grab a URL, check token count, then summarise ...

recipe

Cache LLM response

· 1 min read

Avoids duplicate API calls for identical prompts - the hash keys the cache so reruns are free. Send a prompt, get the request hash, and store the ...
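The hash-keyed cache can be sketched in Python. SHA-256 is an assumption here; vrk's actual request hash may differ:

```python
import hashlib

_cache: dict[str, str] = {}


def cached_call(prompt: str, send):
    # Key the cache on a hash of the prompt, so an identical prompt
    # on a rerun is answered from the cache instead of a new API call.
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = send(prompt)
    return _cache[key]
```

Because identical prompts hash to the same key, reruns hit the cache and cost nothing.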