v0.2.0 - now with multi-model support

Stop wasting money
on bad prompts

A CLI that structures every prompt for maximum signal, minimum waste. Supports Claude, GPT-4o, Groq, DeepSeek, and more. Save ~67% on API costs.

$ brew install oleg-koval/tap/promptctl
promptctl create
$ promptctl create "analyze my SaaS idea for recruitment in NL. be critical"
<context>
You are a seasoned venture analyst with experience evaluating
500+ business ideas. Your job is honest assessment - not encouragement.
</context>
<task>
Evaluate this recruitment SaaS idea for the Netherlands market across:
1. Market size and growth trajectory
2. Competitive landscape with named competitors
3. Unit economics and path to profitability
4. Kill criteria - signals to stop pursuing this
</task>
<constraints>
- Be ruthlessly honest. If the idea is bad, say so directly.
- Challenge every assumption. Default to skepticism.
</constraints>
The problem

Bad prompts are expensive

Unstructured prompts waste tokens on rambling, require follow-ups, and produce output you can't use. That's real money.

3x

More expensive without structure

Vague prompts produce unfocused responses. You end up sending 2-3 follow-ups to get what you actually need. Each call costs tokens. Each rework wastes your time.

avg. $0.048 vs $0.016 per call
67%

Savings on every prompt

Structured prompts with personas, constraints, and output formats get a focused response in one shot. Fewer output tokens (no rambling), fewer calls (no rework), better results.

verified across 10 models
10

Models, one command

Switch between Claude Sonnet, GPT-4o, Llama 70B, DeepSeek, and more. See cost comparison across all models before you send. Pick the cheapest model that fits your task.

promptctl cost --compare
0

Dependencies

Single binary. No Node.js, no Python, no Docker. Install via Homebrew, run immediately. Works on macOS (Intel + Apple Silicon), Linux, and Windows.

go build = done
How it works

From rough idea to optimized prompt in milliseconds

No LLM call needed. The engine is deterministic and rule-based - it applies prompting best practices at the speed of your disk.

1

You type intent

Natural language, messy, incomplete - exactly how you'd explain it to a colleague. "analyze my idea about X, be critical"

2

Engine classifies

Detects task type from 11 categories. Assigns expert persona, output structure, and constraints automatically.

3

Prompt structured

XML tags, decomposed subtasks, surfaced implicit requirements, tone-matched constraints. Ready to send or save as a reusable template.

4

Send or pipe

Send directly to any LLM with cost tracking, pipe to Claude CLI, or copy to clipboard. You choose the workflow.
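The four steps above reduce to one small deterministic transform. A minimal sketch in Python (promptctl itself is a Go binary; the keyword rules, personas, and constraint text here are illustrative assumptions, not its real classification tables):

```python
# Illustrative rule-based prompt structurer: classify intent by
# keyword, then wrap it in persona / task / constraint XML tags.
# All rules below are made up for this sketch.

RULES = {
    "business_analysis": ("analyze", "idea", "market"),
    "code_review": ("review", "refactor"),
    "debugging": ("debug", "error", "stack trace"),
}

PERSONAS = {
    "business_analysis": "You are a seasoned venture analyst. Be honest, not encouraging.",
    "code_review": "You are a principal engineer reviewing a pull request.",
    "debugging": "You are a systems engineer who isolates root causes.",
}

def classify(intent: str) -> str:
    intent = intent.lower()
    for task_type, keywords in RULES.items():
        if any(k in intent for k in keywords):
            return task_type
    return "general"

def structure(intent: str) -> str:
    task_type = classify(intent)
    persona = PERSONAS.get(task_type, "You are a precise, skeptical expert.")
    constraints = "- Be direct. No filler.\n- State assumptions explicitly."
    return (f"<context>\n{persona}\n</context>\n"
            f"<task>\n{intent}\n</task>\n"
            f"<constraints>\n{constraints}\n</constraints>")

print(structure("analyze my SaaS idea for recruitment in NL. be critical"))
```

Because there is no LLM in the loop, the same input always yields the same structured prompt, and the whole transform runs in microseconds.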

Features

Everything you need, nothing you don't

Built for developers who use LLMs daily and care about what they spend.

promptctl create

Raw intent to structured prompt. Auto-detects business analysis, code review, debugging, architecture, and 7 more task types.

💰

Cost estimation

See exactly what a prompt will cost before sending. Compare across all 10 models. Every call shows savings vs unstructured.

📂

Template library

YAML templates with variables, conditionals, auto file reading. Global library + project-level overrides. 5 starters included.
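A template might look like this (a hypothetical schema for illustration; the field names are assumptions, so check promptctl's template docs for the real format):

```yaml
# review.yaml - illustrative template, field names are assumptions
name: review
variables:
  - file                 # path; contents read automatically
  - focus: correctness   # with a default value
prompt: |
  <context>You are a principal engineer.</context>
  <task>Review {{file}} with a focus on {{focus}}.</task>
```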

🎯

Interactive setup

4-step wizard: pick provider, pick model, paste API key (browser opens for you), done. Switch models anytime with one command.

🔁

Pipe anywhere

Outputs to stdout. Pipe to Claude CLI, OpenAI CLI, the clipboard, files, or other tools. No lock-in. Fits your existing workflow.

🔒

Secure by default

API keys stored locally with 0600 permissions. Supports env vars for CI/CD. Nothing leaves your machine without your consent.
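The storage guarantee is easy to verify. A sketch of the pattern in Python for brevity (promptctl itself is Go): create the key file with owner-only permissions at open time, rather than chmod-ing after the fact.

```python
import os
import stat
import tempfile

def save_api_key(path: str, key: str) -> None:
    # Create with mode 0600 at open time, so the key is never
    # group- or world-readable, even briefly.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(key)

path = os.path.join(tempfile.mkdtemp(), "credentials")
save_api_key(path, "sk-example-not-a-real-key")
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600 on POSIX
```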

Integrations

Pipe to any LLM tool or agent

promptctl outputs to stdout. Combine it with any CLI tool, agent framework, or automation pipeline.

Claude CLI

$ promptctl review --file=auth.ts | claude
$ promptctl create "plan k8s migration" | claude

Pipe structured prompts directly into Anthropic's Claude CLI for immediate execution.

OpenAI CLI

$ promptctl create "optimize SQL queries" | openai chat
$ promptctl debug --file=api.ts --error="timeout" | openai chat

Works with any OpenAI-compatible CLI. Prompt stays structured regardless of model.

Direct send with cost tracking

$ promptctl send review --file=main.go --model=gpt-4o
# Sending to GPT-4o...
# Est. cost: $0.0125 (saves ~$0.025 vs unstructured)
# ... response ...
# Cost: $0.0118 | Tokens: 420 in / 1,240 out | 2.3s

Built-in execution with real-time cost reporting after every call.

Shell automation & CI/CD

$ promptctl review --file=src/*.ts | tee review.md
$ promptctl cp commit --changes="$(git diff --staged)"
$ cat report.txt | promptctl send --create "summarize this"

Compose with standard Unix tools. Automate code reviews in CI, generate commit messages from diffs, batch-process files.
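In CI this composes the same way. A hypothetical GitHub Actions job reusing the review pipeline above (the workflow layout and the API-key env var name are assumptions, not from promptctl's docs):

```yaml
# .github/workflows/review.yml - illustrative, adjust names and paths
name: ai-review
on: [pull_request]
jobs:
  review:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - run: brew install oleg-koval/tap/promptctl
      - run: promptctl review --file=src/*.ts | tee review.md
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```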

Save money

Know exactly what you spend

Run promptctl cost --compare to see this for your actual prompts.

Model              Structured cost   Without promptctl   You save
Claude Sonnet 4    $0.0183           $0.0549             67% ($0.037)
GPT-4o             $0.0125           $0.0375             67% ($0.025)
Claude Haiku 4.5   $0.0049           $0.0147             67% ($0.010)
GPT-4o Mini        $0.0007           $0.0022             67% ($0.002)
Llama 3.3 70B      $0.0012           $0.0036             67% ($0.002)
DeepSeek V3        $0.0014           $0.0041             67% ($0.003)

Based on a ~550 token structured prompt. Unstructured estimate includes avg. 3x rework factor.
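The table's numbers follow from that single assumption: an unstructured prompt costs roughly 3x the structured one once rework is counted. A quick check of the Claude Sonnet 4 row:

```python
structured = 0.0183        # Claude Sonnet 4, structured call (from the table)
rework_factor = 3          # avg. follow-ups without structure
unstructured = structured * rework_factor
saved = unstructured - structured
pct = 100 * saved / unstructured
print(f"${unstructured:.4f} unstructured, save ${saved:.4f} ({pct:.0f}%)")
# -> $0.0549 unstructured, save $0.0366 (67%)
```

The percentage is the same for every row because it only depends on the rework factor: 1 - 1/3 ≈ 67%.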

Start saving in 10 seconds

Single binary. macOS, Linux, Windows. No runtime dependencies.

$ brew tap oleg-koval/tap && brew install promptctl

or download binaries from GitHub Releases