A CLI that structures every prompt for maximum signal, minimum waste. Supports Claude, GPT-4, Groq, DeepSeek. Save ~67% on API costs.
brew install oleg-koval/tap/promptctl
Unstructured prompts waste tokens on rambling, require follow-ups, and produce output you can't use. That's real money.
Vague prompts produce unfocused responses. You end up sending 2-3 follow-ups to get what you actually need. Each call costs tokens. Each rework wastes your time.
avg. $0.048 vs $0.016 per call
Structured prompts with personas, constraints, and output formats get a focused response in one shot. Fewer output tokens (no rambling), fewer calls (no rework), better results.
verified across 10 models
Switch between Claude Sonnet, GPT-4o, Llama 70B, DeepSeek, and more. See a cost comparison across all models before you send. Pick the cheapest model that fits your task.
promptctl cost --compare
Single binary. No Node.js, no Python, no Docker. Install via Homebrew, run immediately. Works on macOS (Intel + Apple Silicon), Linux, and Windows.
go build = done
No LLM call needed. The engine is deterministic and rule-based: it applies prompting best practices at the speed of your disk.
Natural language, messy, incomplete - exactly how you'd explain it to a colleague. "analyze my idea about X, be critical"
Detects task type from 11 categories. Assigns expert persona, output structure, and constraints automatically.
XML tags, decomposed tasks, implied needs, tone-matched constraints. Ready to send or save as a reusable template.
Send directly to any LLM with cost tracking, pipe to Claude CLI, or copy to clipboard. You choose the workflow.
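Strung together, that workflow is one pipe. A minimal sketch using only commands shown elsewhere on this page (pbcopy is the stock macOS clipboard tool, not part of promptctl):
$ promptctl create "analyze my idea about X, be critical"            # print the structured prompt to stdout
$ promptctl create "analyze my idea about X, be critical" | claude   # or send it straight to the Claude CLI
$ promptctl create "analyze my idea about X, be critical" | pbcopy   # or copy it to the clipboard (macOS)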
Built for developers who use LLMs daily and care about what they spend.
Raw intent to structured prompt. Auto-detects business analysis, code review, debugging, architecture, and 7 more task types.
See exactly what a prompt will cost before sending. Compare across all 10 models. Every call shows savings vs unstructured.
YAML templates with variables, conditionals, auto file reading. Global library + project-level overrides. 5 starters included.
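For instance, the commit starter that appears in the pipeline examples further down can be filled from the command line (treating commit as one of the five starters, and --changes as its variable, is an assumption based on that example):
$ promptctl cp commit --changes="$(git diff --staged)"   # render the commit starter with the staged diff as its variable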
4-step wizard: pick provider, pick model, paste API key (browser opens for you), done. Switch models anytime with one command.
stdout output. Pipe to Claude CLI, OpenAI CLI, clipboard, files, or other tools. No lock-in. Fits your existing workflow.
API keys stored locally with 0600 permissions. Supports env vars for CI/CD. Nothing leaves your machine without your consent.
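In CI, that usually means exporting the provider's own key variable before calling promptctl. The variable name below is OpenAI's convention, not something promptctl defines; treat it as an assumption and check the setup docs for the exact names it reads:
$ export OPENAI_API_KEY="${CI_OPENAI_KEY}"   # assumed: provider's standard env var, injected from a CI secret
$ promptctl send review --file=main.go --model=gpt-4o | tee review.md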
promptctl outputs to stdout. Combine it with any CLI tool, agent framework, or automation pipeline.
$ promptctl review --file=auth.ts | claude
$ promptctl create "plan k8s migration" | claude
Pipe structured prompts directly into Anthropic's Claude CLI for immediate execution.
$ promptctl create "optimize SQL queries" | openai chat $ promptctl debug --file=api.ts --error="timeout" | openai chat
Works with any OpenAI-compatible CLI. Prompt stays structured regardless of model.
$ promptctl send review --file=main.go --model=gpt-4o
# Sending to GPT-4o...
# Est. cost: $0.0125 (saves ~$0.025 vs unstructured)
# ... response ...
# Cost: $0.0118 | Tokens: 420 in / 1,240 out | 2.3s
Built-in execution with real-time cost reporting after every call.
$ promptctl review --file=src/*.ts | tee review.md
$ promptctl cp commit --changes="$(git diff --staged)"
$ cat report.txt | promptctl send --create "summarize this"
Compose with standard Unix tools. Automate code reviews in CI, generate commit messages from diffs, batch-process files.
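A batch-processing sketch in that spirit: a plain shell loop around the review command shown above, with nothing promptctl-specific beyond that one call:
$ for f in src/*.ts; do promptctl review --file="$f" | claude >> reviews.md; done   # one structured review per file, appended to a single report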
Run promptctl cost --compare to see this for your actual prompts.
| Model | Structured cost | Without promptctl | You save |
|---|---|---|---|
| Claude Sonnet 4 | $0.0183 | $0.0549 | 67% ($0.037) |
| GPT-4o | $0.0125 | $0.0375 | 67% ($0.025) |
| Claude Haiku 4.5 | $0.0049 | $0.0147 | 67% ($0.010) |
| GPT-4o Mini | $0.0007 | $0.0022 | 67% ($0.002) |
| Llama 3.3 70B | $0.0012 | $0.0036 | 67% ($0.002) |
| DeepSeek V3 | $0.0014 | $0.0041 | 67% ($0.003) |
Based on a ~550-token structured prompt. The unstructured estimate includes an average 3x rework factor.
Single binary. macOS, Linux, Windows. No runtime dependencies.