hero_claude

Hero service that manages Claude Code-style autonomous agents. Submit a prompt + working directory + model + thinking effort via a web UI; the service spawns the claude CLI as a subprocess, parses its stream-json output, and tracks agents through running → awaiting_review → completed.

Relationship to claude-agent-sdk-python

This is, in spirit, a Rust port of the slice of claude-agent-sdk-python that we actually use. Both spawn the claude CLI as a subprocess in --print --output-format stream-json mode, drain it line-by-line, and turn the events into something application code can consume. The Python SDK gives you a library you embed; hero_claude wraps the same idea behind a service (RPC + SQLite + web UI) so multiple agents can run side-by-side and be observed.
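The line-by-line draining step can be sketched as follows. This is a minimal, dependency-free illustration of classifying each stream-json line by its top-level "type" field — the real driver would use serde_json and the exact event shapes the CLI emits, so the naive string matching here is an assumption for brevity:

```rust
/// Naive extraction of the top-level "type" field from one stream-json
/// line. A real implementation would parse the JSON properly (e.g. with
/// serde_json); this sketch only matches the compact `"type":"..."` form.
fn classify(line: &str) -> &str {
    if let Some(idx) = line.find("\"type\":\"") {
        let rest = &line[idx + 8..]; // skip past `"type":"`
        rest.split('"').next().unwrap_or("other")
    } else {
        "other" // not JSON, or no type field: store verbatim, classify as other
    }
}

fn main() {
    assert_eq!(classify(r#"{"type":"assistant","message":{}}"#), "assistant");
    assert_eq!(classify(r#"{"type":"result","is_error":false}"#), "result");
    assert_eq!(classify("not json"), "other");
    println!("ok");
}
```

Each line is persisted verbatim alongside its classification, so the UI can filter by event type without re-parsing.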

Implemented

  • One-shot prompts via claude -p with stream-json output
  • Per-agent subprocess supervisor (parallel, no concurrency cap in v1)
  • Model selection (claude-haiku-4-5, claude-sonnet-4-6, claude-opus-4-7)
  • Effort levels (none, low, medium, high) → --effort flag
  • Permission mode (hard-wired to bypassPermissions for v1)
  • Cancellation (SIGTERM via Child::kill_on_drop + oneshot)
  • Token / cost accumulation from the terminal result line
  • Stream message persistence (every line stored verbatim, classified by type)
  • Crash recovery — running agents on disk are reconciled to failed on server restart so they don't appear stuck
  • Working-directory filter and three-bucket dashboard (running / awaiting_review / completed)
  • Plan mode — iterative planning sessions; the assistant produces a refreshed markdown plan each turn (with optional embedded HTML forms), the user refines it, then hands the plan off to a fresh executor agent.
  • Ralph loop — autonomous, time-bounded loops on a single goal. Specify a duration (10m, 1h30m, 30 minutes, 300) and an instruction; the service spawns sequential executor agents, replays prior iteration summaries to each, and stops at the deadline.
  • Forgejo issue workflow — paste a forge.ourworld.tf issue or repo URL on the "New agent" page. The repo is auto-cloned under $CODE_ROOT (default ~/code); the issue body + comments are turned into a prompt classified as bug-fix or plan-mode-feature; on agent finish, the final message is posted back to the issue as a comment via the Forgejo API. Auth: FORGEJO_TOKEN env var or ~/.config/forgejo/token (single line). All work stays local — nothing is pushed.
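The Ralph-loop duration formats above (10m, 1h30m, 30 minutes, 300) can be parsed with a small hand-rolled function. This is a sketch, not the service's actual parser; in particular, treating a bare number as seconds is an assumption — the README does not say which unit a plain 300 means:

```rust
/// Parse durations like "10m", "1h30m", "30 minutes", "300" into seconds.
/// Bare numbers are treated as seconds here (an assumption).
fn parse_duration_secs(s: &str) -> Option<u64> {
    let s = s.trim().to_lowercase();
    // "30 minutes" / "1 minute" style
    if let Some(num) = s.strip_suffix("minutes").or_else(|| s.strip_suffix("minute")) {
        return num.trim().parse::<u64>().ok().map(|m| m * 60);
    }
    // bare number: interpret as seconds
    if !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()) {
        return s.parse().ok();
    }
    // compact "1h30m" style: accumulate digit runs terminated by a unit
    let mut total = 0u64;
    let mut num = String::new();
    for c in s.chars() {
        if c.is_ascii_digit() {
            num.push(c);
        } else {
            let n: u64 = num.parse().ok()?;
            num.clear();
            total += match c {
                'h' => n * 3600,
                'm' => n * 60,
                's' => n,
                _ => return None,
            };
        }
    }
    if !num.is_empty() || total == 0 {
        return None; // trailing digits without a unit, or nothing parsed
    }
    Some(total)
}

fn main() {
    assert_eq!(parse_duration_secs("10m"), Some(600));
    assert_eq!(parse_duration_secs("1h30m"), Some(5400));
    assert_eq!(parse_duration_secs("30 minutes"), Some(1800));
    assert_eq!(parse_duration_secs("300"), Some(300));
    println!("ok");
}
```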

Not implemented (yet)

  • Bidirectional control protocol. The Python SDK speaks a JSON-RPC dialect to a long-lived claude process for streaming user input, mid-session tool approval, and interrupt/set_permission_mode calls. We do one-shot only.
  • Hooks. No PreToolUse / PostToolUse / Stop / UserPromptSubmit callbacks.
  • In-process MCP servers / @tool decorator. No way for the host process to expose Rust functions as tools to the agent.
  • Permission callbacks. No can_use_tool style prompts; we run with bypassPermissions.
  • Custom system prompts, --add-dir, --allowed-tools, --mcp-config, etc. — only the flags we use.
  • Resumable sessions (--resume, --continue) and partial-message streaming (--include-partial-messages).

If you need any of these, the claude CLI flags exist — claude_cli.rs is the ~30-line file to extend.
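Extending it mostly means appending flags to the argument list. A hedged sketch of that argument-building step, using only the flags this README names (-p, --output-format stream-json, --model, --effort) — the function name and signature are illustrative, not the actual claude_cli.rs API:

```rust
/// Assemble the claude CLI argument vector for a one-shot prompt.
/// Flag names are the ones this README documents; adding a new
/// capability (e.g. --add-dir) means appending another pair here.
fn build_args(prompt: &str, model: &str, effort: Option<&str>) -> Vec<String> {
    let mut args: Vec<String> = ["-p", prompt, "--output-format", "stream-json", "--model", model]
        .iter()
        .map(|s| s.to_string())
        .collect();
    if let Some(e) = effort {
        args.push("--effort".to_string());
        args.push(e.to_string());
    }
    args
}

fn main() {
    let args = build_args("fix the failing test", "claude-sonnet-4-6", Some("high"));
    assert_eq!(args[0], "-p");
    assert!(args.contains(&"--effort".to_string()));
    let no_effort = build_args("hello", "claude-haiku-4-5", None);
    assert!(!no_effort.contains(&"--effort".to_string()));
    println!("ok");
}
```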

Requirements

  • Linux x86_64 (v1)
  • claude CLI on PATH — install with npm i -g @anthropic-ai/claude-code or follow the official install instructions, then claude login once.

Quick start

# Build & install both binaries to ~/hero/bin
make install

# Register with hero_proc and start (recommended)
make run                     # = hero_claude --start

# Or run directly in foreground (debug mode, no hero_proc)
make rundev                  # server  (rpc.sock)
make rundev-ui               # UI      (ui.sock)

The dashboard is reachable through hero_router once started; the UI binds a Unix socket at ~/hero/var/sockets/hero_claude/ui.sock.

Database: ~/hero/var/hero_claude/db.sqlite (created on first start).

Crates

Crate                  Role
hero_claude            Server binary (RPC over rpc.sock) + lifecycle CLI
hero_claude_lib        Models, sqlx store, supervisor, claude CLI driver
hero_claude_sdk        Generated OpenRPC client from openrpc.json
hero_claude_ui         Askama + Bootstrap + Unpoly dashboard (ui.sock)
hero_claude_examples   Integration / smoke tests

CLI subcommands

hero_claude --start         # register with hero_proc and start
hero_claude --stop          # stop via hero_proc
hero_claude --status        # show hero_proc status
hero_claude serve           # run the RPC server in the foreground
hero_claude login           # shell out to `claude login`
hero_claude doctor          # print claude binary path/version, login state, db path