Show HN: Runprompt – run .prompt files from the command line

github.com

122 points by chr15m 21 hours ago

I built a single-file Python script that lets you run LLM prompts from the command line with templating, structured outputs, and the ability to chain prompts together.

When I discovered Google's Dotprompt format (frontmatter + Handlebars templates), I realized it was perfect for something I'd been wanting: treating prompts as first-class programs you can pipe together Unix-style. Google uses Dotprompt in Firebase Genkit and I wanted something simpler - just run a .prompt file directly on the command line.

Here's what it looks like:

  ---
  model: anthropic/claude-sonnet-4-20250514
  output:
    format: json
    schema:
      sentiment: string, positive/negative/neutral
      confidence: number, 0-1 score
  ---
  Analyze the sentiment of: {{STDIN}}

Running it:

cat reviews.txt | ./runprompt sentiment.prompt | jq '.sentiment'

The things I think are interesting:

* Structured output schemas: Define JSON schemas in the frontmatter using a simple `field: type, description` syntax. The LLM reliably returns valid JSON you can pipe to other tools.

* Prompt chaining: Pipe JSON output from one prompt as template variables into the next. This makes it easy to build multi-step agentic workflows as simple shell pipelines (see the sketch after this list).

* Zero dependencies: It's a single Python file that uses only stdlib. Just curl it down and run it.

* Provider agnostic: Works with Anthropic, OpenAI, Google AI, and OpenRouter (which gives you access to dozens of models through one API key).
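
For example, a two-step chain might look like this (the prompt names and JSON fields below are invented for illustration; each key in the first prompt's JSON output becomes a template variable in the second):

  # extract-issue.prompt returns JSON like {"product": "...", "severity": "..."};
  # draft-reply.prompt references {{product}} and {{severity}} in its template.
  cat support-email.txt | ./runprompt extract-issue.prompt | ./runprompt draft-reply.prompt > reply.txt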

You can use it to automate things like extracting structured data from unstructured text, generating reports from logs, and building small agentic workflows without spinning up a whole framework.

Would love your feedback, and PRs are most welcome!

Barathkanna 14 hours ago

This is really clever. Dotprompt as a thin, pipe-friendly layer around LLMs feels way more ergonomic than spinning up a whole agent stack. The single-file + stdlib approach is a nice touch too. How robust is the JSON schema enforcement when chaining multiple steps?

  • chr15m 11 hours ago

    If the LLM returns output that doesn't match the schema, the next link in the chain just passes it through, since input falls back to a plain string when the JSON doesn't parse. Maybe I should make it error out in that situation.

    • anonym29 8 hours ago

      Would including a JSON schema validator and running the output through it, so you can detect when the output doesn't match the schema and optionally retry until it does (with a max number of attempts before it throws an error), be too complex for the scope you were envisioning?

      It certainly doesn't intuitively sound like it matches the "Do one thing" part of the Unix philosophy, but it does seem to match the "and do it well" part.

      That said, I can totally understand a counterargument which proposes that schema validation and processing logic should be something else that someone desiring reliability pipes the output into.
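
      For that pipe-it-into-something-else version, a rough sketch (the retry loop stays in the shell and jq does the checking; the field names follow the sentiment example from the post):

        # Retry up to 3 times until the output parses as JSON and has the expected keys.
        for attempt in 1 2 3; do
          out=$(cat reviews.txt | ./runprompt sentiment.prompt)
          echo "$out" | jq -e 'has("sentiment") and has("confidence")' > /dev/null && break
        done
        echo "$out"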

cootsnuck 19 hours ago

This is pretty cool. I like using snippets to run little scripts I have in the terminal (I use Alfred a lot on macOS). And right now I just manually do LLM requests in the scripts if needed, but I'd actually rather have a small library of prompts and then be able to pipe inputs and outputs between different scripts. This seems pretty perfect for that.

I wasn't aware of the whole ".prompt" format, but it makes a lot of sense.

Very neat. These are the kinds of tools I love to see. Functional and useful, not trying to be "the next big thing".

dymk 19 hours ago

Can the base URL be overridden so I can point it at eg Ollama or any other OpenAI compatible endpoint? I’d love to use this with local LLMs, for the speed and privacy boost.

  • chr15m 18 hours ago

    Good idea. Will figure out a way to do this.

    • khimaros 16 hours ago

      simple solution: honor OPENAI_API_BASE env var
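
      A sketch of how that could look against Ollama's OpenAI-compatible endpoint, assuming runprompt learned to read that variable (it doesn't yet, and the prompt file name here is a placeholder):

        export OPENAI_API_BASE="http://localhost:11434/v1"   # Ollama's OpenAI-compatible API
        export OPENAI_API_KEY="ollama"                        # ignored by Ollama, but clients usually want one
        cat notes.txt | ./runprompt summarize.prompt          # summarize.prompt would name a local model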

    • benatkin 17 hours ago

      Perhaps instead of writing an llm abstraction layer, you could use a lightweight one, such as @simonw's llm.

      • chr15m 11 hours ago

        I don't want to introduce a dependency. Simon's tool is great but I don't like the way it stores template state. I want my state in a file in my working folder.

tomComb 18 hours ago

Everything seems to be about agents. Glad to see a post about enabling simple workflows!

meander_water 11 hours ago

This is really cool and interesting timing, as I created something similar recently - https://github.com/julio-mcdulio/pmp

I've been using MLflow to store my prompts, but wanted something lightweight on the CLI to version and manage prompts. I set up pmp so you can have different storage backends (file, SQLite, MLflow, etc.).

I wasn't aware of dotprompt, I might build that in too.

gessha 15 hours ago

Just like Linus being content with other people working on solutions to common problems, I’m so happy that you made this! I’ve had this idea for a long time but haven’t had the time to work on it.

__MatrixMan__ 18 hours ago

It would be cool if there were some cache (invalidated by hand, potentially distributed across many users) so we could get consistent results while iterating on the later stages of the pipeline.

  • stephenlf 17 hours ago

    That’s a great idea. Store inputs/outputs in $XDG_CACHE_HOME/runprompt.sqlite

  • chr15m 18 hours ago

    Do you mean you want responses cached to e.g. a file based on the inputs?

    • __MatrixMan__ 8 hours ago

      Yeah, if it's a novel prompt, by all means send it to the model, but if it's the same prompt as 30s ago, just immediately give me the same response I got 30s ago.

      That's typically how we expect bash pipelines to work, right?
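
      Done outside the tool, that could be as little as hashing the prompt file plus stdin and keeping responses under the XDG cache dir (a rough sketch, not anything runprompt does today):

        cache="${XDG_CACHE_HOME:-$HOME/.cache}/runprompt"; mkdir -p "$cache"
        input=$(cat)
        key=$( { cat sentiment.prompt; printf '%s' "$input"; } | sha256sum | cut -d' ' -f1 )
        if [ -f "$cache/$key" ]; then
          cat "$cache/$key"      # cache hit: replay the earlier response
        else
          printf '%s' "$input" | ./runprompt sentiment.prompt | tee "$cache/$key"
        fi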

  • dymk 11 hours ago

    “tee” where you want to intercept and cat that file into later stages?
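
    i.e. snapshot an intermediate result once, then only re-run the later stages while iterating (the second prompt name is made up):

      cat reviews.txt | ./runprompt sentiment.prompt | tee stage1.json
      cat stage1.json | ./runprompt summarize.prompt   # tweak and re-run just this part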

    • __MatrixMan__ 8 hours ago

      Yeah sure but it breaks the flow that makes bash pipelines so fun:

      - arrow up

      - append a stage to the pipeline

      - repeat until output is as desired

      If you're gonna write to some named location and later read from it you're drifting towards a different mode of usage where you might as well write a python script.

cedws 19 hours ago

Can it be made to be directly executable with a shebang line?

  • leobuskin 13 hours ago

    /usr/local/bin/promptrun

      #!/bin/bash
      file="$1"
      model=$(sed -n '2p' "$file" | sed 's/^# *//')
      prompt=$(tail -n +3 "$file")
      curl -s https://api.anthropic.com/v1/messages \
        -H "x-api-key: $ANTHROPIC_API_KEY" \
        -H "content-type: application/json" \
        -H "anthropic-version: 2023-06-01" \
        -d "{
          \"model\": \"$model\",
          \"max_tokens\": 1024,
          \"messages\": [{\"role\": \"user\", \"content\": $(echo "$prompt" | jq -Rs .)}]
        }" | jq -r '.content[0].text'
    
    
    hello.prompt

      #!/usr/local/bin/promptrun
      # claude-sonnet-4-20250514
    
      Write a haiku about terminal commands.
  • _joel 19 hours ago

    it already has one - https://github.com/chr15m/runprompt/blob/main/runprompt#L1

    If you curl/wget a script, you still need to chmod +x it. Git doesn't have this issue as it retains the file metadata.

    • vidarh 18 hours ago

      I'm assuming the intent was to ask if the *.prompt files could have a shebang line.

         #!/usr/bin/env runprompt
         ---
         .frontmatter...
         ---
         
         The prompt.
      
      Would be a lot nicer, as then you can just +x the prompt file itself.
  • chr15m 18 hours ago

    That's on my TODO list for tomorrow, thanks!

stephenlf 17 hours ago

Seeing lots of good ideas in this thread. I am taking the liberty of adding them as GH issues

stephenlf 18 hours ago

Fun! I love the idea of throwing LLM calls in a bash pipe

journal 15 hours ago

i literally vibe coded a tool like this. it supports image in, audio out, and archiving.

  • chr15m 11 hours ago

    Cool, I'm going to add file modalities too. Thanks for the validation!

ltbarcly3 17 hours ago

Ooof, I guess vibecoding is only as good as the vibecoder.

orliesaurus 19 hours ago

Why this over the md files I already make, which can be read by any agent CLI (Claude, Gemini, Codex, etc.)?

  • chr15m 11 hours ago

    Less typing. More control over chaining prompts together. Reproducibility. Running different prompts on different providers and models. Easy to install and runs everywhere. Inserts into scripting workflows simply. 12 factor env config.

  • jsdwarf 17 hours ago

    Claude.md is an input to Claude Code, which requires a monthly plan subscription north of €15/month. The same applies to Gemini.md, unless you're OK with them using your prompts to train Gemini. The Python script works with a pay-per-use API key.

  • garfij 18 hours ago

    Do your markdown files have frontmatter configuration?

swah 18 hours ago

That's pretty good, now let's see simonw's one...