
vrk prompt

vrk prompt pipes text to Claude or GPT and prints the response to stdout with zero boilerplate.

The problem

Calling an LLM from a shell script means a curl command with JSON escaping, content-type headers, API key management, and response extraction. A backtick in the input breaks the JSON. A 429 response makes the jq pipeline print null. You spend more time on HTTP plumbing than on the actual task.
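Done by hand, the escaping step alone usually means reaching for jq -Rs, which slurps raw stdin into a single JSON-escaped string. A sketch of the plumbing involved (this is the generic curl-era workaround, not anything vrk-specific):

```shell
# Build a JSON request body from arbitrary text without breaking on
# backticks or quotes: jq -Rs reads raw stdin as one JSON-escaped string.
body=$(printf 'Summarize `this` and "that"' | jq -Rs \
  '{model: "claude-sonnet-4-6", max_tokens: 1024,
    messages: [{role: "user", content: .}]}')
echo "$body"
```

This is the boilerplate that `vrk prompt` collapses into a single pipe.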

The solution

vrk prompt pipes text to Claude or GPT and prints the response to stdout. Set ANTHROPIC_API_KEY or OPENAI_API_KEY, pipe in content, get back plain text. No curl, no JSON escaping, no response parsing. Temperature defaults to 0 for deterministic output. --schema validates the response against a JSON schema. --retry retries failed validations with escalating temperature.

Before and after

Before

curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "content-type: application/json" \
  -d '{"model":"claude-sonnet-4-6","max_tokens":1024,"messages":[{"role":"user","content":"Summarize this"}]}'

After

cat article.md | vrk prompt --system 'Summarize this' --model claude-sonnet-4-6

Example

cat article.md | vrk prompt --system 'Summarize the key findings in 3 bullet points'

Exit codes

Code  Meaning
0     Success
1     API failure, budget exceeded, schema mismatch, invalid JSONL, field not found
2     Usage error - no input, missing flags, --field with --explain

Flags

Flag        Short   Type     Description
--model     -m      string   LLM model (default from VRK_DEFAULT_MODEL or claude-sonnet-4-6)
--system            string   System prompt text, or @file.txt to read from file
--field             string   Dot-path field in each JSONL line to use as prompt text
--budget            int      Exit 1 if prompt exceeds N tokens
--fail      -f      bool     Fail on non-2xx API response or schema mismatch
--json      -j      bool     Emit response as JSON envelope with metadata
--quiet     -q      bool     Suppress stderr output
--schema    -s      string   JSON schema for response validation
--explain           bool     Print equivalent curl command, no API call
--retry             int      Retry N times on schema mismatch (escalates temperature)
--endpoint          string   OpenAI-compatible API base URL

Usage

Simple question

$ echo 'What is the capital of France?' | vrk prompt
Paris.

System prompt from a file

For prompts longer than a line, store them in a file and reference with @:

cat user-feedback.csv | vrk prompt --system @prompts/analyze-feedback.txt

Schema-validated structured output

Force the LLM to return JSON matching a specific shape:

$ cat bug-report.txt | vrk prompt \
    --system 'Extract structured fields from this bug report' \
    --schema '{"severity":"string","component":"string","summary":"string","reproducible":"boolean"}'
{"severity":"high","component":"auth","summary":"Login fails after password reset","reproducible":true}

If the LLM returns JSON that doesn’t match the schema, prompt exits 1. Combine with --retry to automatically re-prompt:

cat bug-report.txt | vrk prompt \
  --system 'Extract structured fields' \
  --schema '{"severity":"string","component":"string","summary":"string"}' \
  --retry 3

On each retry, temperature escalates slightly to encourage the model to try a different approach.
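Conceptually the retry behaves like the loop below (an illustrative sketch, not vrk's implementation; the 0.2 temperature step and the call_llm stub are made up for the example):

```shell
# Illustrative retry-with-escalating-temperature loop.
# call_llm stands in for one schema-validated API call; this stub always fails.
call_llm() { echo "attempt at temperature $1"; return 1; }

temp=0
for attempt in 1 2 3; do
  call_llm "$temp" && break
  temp=$(awk "BEGIN { print $temp + 0.2 }")   # escalate before retrying
done
```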

See what would be sent without calling the API

$ echo 'test input' | vrk prompt --explain

The --explain flag prints the equivalent curl command and exits without making an API call. Use it to debug what prompt is actually being sent.

Flag details

--model / -m

Selects the LLM. Defaults to claude-sonnet-4-6 or the value of VRK_DEFAULT_MODEL.

# Use GPT-4o via OpenAI
cat article.md | vrk prompt -m gpt-4o --system 'Summarize this'

# Use a local model via Ollama
cat article.md | vrk prompt -m llama3 --endpoint http://localhost:11434/v1

--budget

Pre-flight token check. If the prompt exceeds N tokens, prompt exits 1 without calling the API. Saves money on obviously-too-large inputs.

$ cat huge-document.txt | vrk prompt --budget 4000 --system 'Summarize'
error: prompt: 23847 tokens exceeds budget 4000
$ echo $?
1
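The same kind of pre-flight check can be approximated in plain shell with the common rule of thumb of roughly 4 characters per token for English prose (a coarse heuristic, not vrk's tokenizer):

```shell
# Rough pre-flight token estimate: ~4 chars per token is a common
# heuristic for English text (an approximation, not vrk's tokenizer).
input="some input text to check"
chars=$(printf '%s' "$input" | wc -c)
est_tokens=$(( (chars + 3) / 4 ))
budget=4000
if [ "$est_tokens" -gt "$budget" ]; then
  echo "error: ~$est_tokens tokens exceeds budget $budget" >&2
fi
```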

--schema / -s

Validates the LLM response against a JSON schema. Can be an inline JSON string or a path to a .json file:

# Inline schema
echo 'Is this positive or negative: I love this product' | \
  vrk prompt --schema '{"sentiment":"string","confidence":"number"}'

# Schema from file
echo 'Extract entities' | vrk prompt --schema entities-schema.json
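The schema shape above maps field names to type names. That kind of check can be sketched in a few lines of jq (illustrative only; vrk's actual validator may behave differently):

```shell
# Sketch of a field:type schema check (illustrative, not vrk's validator).
schema='{"sentiment":"string","confidence":"number"}'
response='{"sentiment":"positive","confidence":0.92}'
# For each schema key, the response value's jq type must equal the declared type.
ok=$(jq -n --argjson s "$schema" --argjson r "$response" \
  '[$s | to_entries[] | ($r[.key] | type) == .value] | all')
echo "$ok"
```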

--retry

Only meaningful with --schema. Retries N times when the response doesn’t match the schema, escalating temperature on each attempt:

cat input.txt | vrk prompt \
  --schema '{"answer":"string","confidence":"number"}' \
  --retry 3 --fail

--fail / -f

Exit 1 on non-2xx API response or schema mismatch instead of printing partial output. Use in CI or pipelines where partial results are worse than no results.

--json / -j

Wraps the response in a JSON envelope with metadata (model used, token counts, timing). This is for wrapping the LLM’s response - it does NOT instruct the model to respond in JSON. Use --schema for that.
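The envelope fields aren't specified here, but the idea is that the response text travels alongside metadata you can pull apart with jq (the field names below are hypothetical, not vrk's documented schema):

```shell
# Hypothetical --json envelope; field names are illustrative only.
envelope='{"response":"Paris.","model":"claude-sonnet-4-6","tokens":{"input":12,"output":3}}'
text=$(echo "$envelope" | jq -r '.response')
used=$(echo "$envelope" | jq -r '.tokens.output')
echo "$text ($used output tokens)"
```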

--endpoint

Point prompt at any OpenAI-compatible API. Works with Ollama, vLLM, LiteLLM, or any provider that speaks the OpenAI chat completions format:

cat notes.txt | vrk prompt \
  --endpoint http://localhost:11434/v1 \
  --model llama3 \
  --system 'Summarize these meeting notes'

Pipeline integration

Fetch, measure, and summarize

# Grab a web page, check it fits in context, summarize it
vrk grab https://blog.example.com/post | \
  vrk tok --check 12000 | \
  vrk prompt --system 'Summarize the key points in 3 bullets'

Redact secrets before sending to an LLM

# Mask credentials from log output, then analyze
cat deploy.log | vrk mask | \
  vrk prompt --system 'What errors occurred in this deployment?'

Structured extraction with validation

# Extract entities from each chunk, validate the schema, log results
cat long-document.md | vrk chunk --size 4000 | \
  vrk prompt --field text \
    --schema '{"entities":"array","summary":"string"}' \
    --retry 2 \
    --system 'Extract named entities and a one-line summary' \
    --json | \
  vrk validate --schema '{"entities":"array","summary":"string"}' --strict

Nightly batch with retry and state tracking

# Process new articles, track progress in kv
for url in $(cat urls.txt); do
  CONTENT=$(vrk grab "$url" | vrk mask)
  SUMMARY=$(echo "$CONTENT" | vrk prompt --system @prompts/summarize.txt --retry 2)
  if [ $? -eq 0 ]; then
    vrk kv set --ns summaries "$(echo "$url" | vrk slug)" "$SUMMARY" --ttl 168h
    vrk kv incr --ns summaries processed
  fi
done

When it fails

API key missing:

$ echo 'hello' | ANTHROPIC_API_KEY= vrk prompt
error: prompt: ANTHROPIC_API_KEY or OPENAI_API_KEY must be set
$ echo $?
1

Budget exceeded:

$ cat huge-file.txt | vrk prompt --budget 100 --system 'Summarize'
error: prompt: 23847 tokens exceeds budget 100
$ echo $?
1

Schema mismatch without --retry:

$ echo 'Tell me a joke' | vrk prompt --schema '{"setup":"string","punchline":"string"}' --fail
error: prompt: response does not match schema
$ echo $?
1