Migrate all repos to directory socket convention #116

Open
opened 2026-04-13 13:34:00 +00:00 by mik-tf · 4 comments
Owner

Background

The Hero stack uses two socket conventions:

  • Old (flat): $HERO_SOCKET_DIR/hero_<name>_server.sock
  • New (directory): $HERO_SOCKET_DIR/hero_<name>/rpc.sock + hero_<name>/ui.sock

hero_agent and hero_aibroker were migrated by devops. The skills docs were updated in PR #73 on hero_skills. But ~50% of repos still reference the old flat convention.

Repos to migrate

  • [x] hero_auth (cli, sdk, ui — 5+ references) -> archived
  • [x] hero_db (sdk, ui — 4 references)
  • [ ] hero_zero (cli — 2 references)
  • [x] hero_browser_mcp (sdk, openrpc — 2 references)
  • [x] hero_foundry_ui (sdk — 2 references)
  • [x] hero_books (comment only)
  • [ ] hero_router (scanner display comments) — likely fine as-is
  • [ ] Service TOMLs in hero_zero (hero_agent, hero_aibroker, hero_shrimp, hero_compute_manager, forgejo_mcp)

Convention

$HERO_SOCKET_DIR/
├── hero_<name>/
│   ├── rpc.sock      # server
│   └── ui.sock       # admin UI

Env var cascade: $HERO_<NAME>_SOCKET → $HERO_SOCKET_DIR/<name>/rpc.sock → $HOME/hero/var/sockets/<name>/rpc.sock
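The resolution order can be sketched in Rust (the helper name and argument shape are hypothetical illustrations; the actual repos read the environment directly, and env values are passed in explicitly here only to keep the sketch testable):

```rust
use std::path::PathBuf;

/// Resolve a service's RPC socket path following the documented cascade:
/// $HERO_<NAME>_SOCKET, then $HERO_SOCKET_DIR/<name>/rpc.sock,
/// then $HOME/hero/var/sockets/<name>/rpc.sock.
fn resolve_rpc_socket(
    name: &str,
    explicit: Option<&str>,   // value of $HERO_<NAME>_SOCKET, if set
    socket_dir: Option<&str>, // value of $HERO_SOCKET_DIR, if set
    home: &str,               // value of $HOME
) -> PathBuf {
    match (explicit, socket_dir) {
        // Explicit override wins unconditionally.
        (Some(path), _) => PathBuf::from(path),
        // Otherwise build the directory-convention path under $HERO_SOCKET_DIR.
        (None, Some(dir)) => PathBuf::from(dir).join(name).join("rpc.sock"),
        // Final fallback under $HOME.
        (None, None) => PathBuf::from(home)
            .join("hero/var/sockets")
            .join(name)
            .join("rpc.sock"),
    }
}
```

Note that the explicit per-service variable short-circuits the cascade even when $HERO_SOCKET_DIR is also set.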

Reference: hero_agent commit 6e4b9e4, hero_skills PR #73

Owner

Compliance Audit: hero_browser_mcp

Audited the hero_browser_mcp repository against two Hero standards: Hero Sockets and CLI-managed Service Selfstart.


Pattern 1 — Hero Sockets

  • Service socket directory correctly named hero_browser/ (not hero_browser_server/ or hero_browser_mcp/)
  • Both rpc.sock and ui.sock present under $HERO_SOCKET_DIR/hero_browser/
  • All required endpoints implemented on rpc.sock: POST /rpc, GET /openrpc.json, GET /health, GET /.well-known/heroservice.json
  • ui.sock has /health and /.well-known/heroservice.json
  • TCP port 8884 is the documented exception for MCP HTTP transport (required for Claude Code MCP integration)

Pattern 2 — CLI-managed Service Selfstart

  • Single CLI binary hero_browser owns --start and --stop
  • --start correctly calls restart_service() (idempotent — safe whether service is running or stopped)
  • --stop calls stop_service()
  • Server binary (hero_browser_server) and UI binary (hero_browser_ui) have no lifecycle flags, no clap, no direct process management
  • Both server and UI actions have .is_process(), kill_other (socket/port cleanup), and health checks configured
  • Makefile run target correctly calls hero_browser --start
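The flag-dispatch half of this pattern can be sketched as follows (a minimal illustration only; the real CLI delegates to hero_proc_sdk's restart_service()/stop_service(), whose signatures are not shown here):

```rust
#[derive(Debug, PartialEq)]
enum Lifecycle {
    Start, // maps to restart_service(); idempotent, safe whether running or stopped
    Stop,  // maps to stop_service()
    None,  // no lifecycle flag given
}

/// Decide the lifecycle action from CLI args. Per the pattern, only the
/// CLI binary handles these flags; server and UI binaries accept none.
fn lifecycle_action(args: &[String]) -> Lifecycle {
    if args.iter().any(|a| a == "--start") {
        Lifecycle::Start
    } else if args.iter().any(|a| a == "--stop") {
        Lifecycle::Stop
    } else {
        Lifecycle::None
    }
}
```

Keeping this dispatch (and the hero_proc_sdk dependency) in the CLI binary alone is what lets the server and UI binaries stay free of clap and process management.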

Result

100% compliant — no issues found. No changes required.

Owner

https://forge.ourworld.tf/lhumina_code/hero_auth_archive
is now archived and is part of the new proxy.
Owner

hero_books compliance fixes for hero_proc_service_selfstart pattern

Checked the hero_books repo against the CLI-managed service registration pattern and applied two fixes:

Fix 1: Removed unused hero_proc_sdk from hero_books_server/Cargo.toml

The server binary had hero_proc_sdk listed as a dependency but never imported or used. Per the pattern, only the CLI binary (hero_books) should depend on hero_proc_sdk. Removed from crates/hero_books_server/Cargo.toml.

Fix 2: Replaced empty Some(vec![]) / Some(String::new()) with None in kill_other

All three actions (hero_books_server, hero_books_ui, hero_books_admin) had their kill_other fields set with empty placeholders instead of None:

// Before
kill_other = Some(KillOther {
    action: Some(String::new()),
    process_filters: Some(vec![]),
    port: Some(vec![]),
    socket: Some(vec![...]),
});

// After
kill_other = Some(KillOther {
    action: None,
    process_filters: None,
    port: None,
    socket: Some(vec![...]),
});

Note: UI and admin health checks correctly keep all check-type fields as None — hero_proc's Http health check only supports TCP host:port connections and has no HTTP-over-Unix-socket support, so there is no valid health check type for ui.sock and web_admin.sock. Those fall back to process liveness monitoring.
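The distinction between None and an empty placeholder matters because downstream code typically gates behavior on presence of the field, so Some(vec![]) still triggers configuration-dependent paths. A minimal illustration (the consumer logic here is hypothetical, not hero_proc's actual implementation):

```rust
/// A consumer that gates cleanup on presence of the field treats an empty
/// placeholder Some(vec![]) as "configured", while None is correctly skipped.
fn has_port_cleanup(ports: &Option<Vec<u16>>) -> bool {
    ports.is_some()
}
```

Setting genuinely unused fields to None keeps such presence checks honest.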

Overall compliance: the repo was already well-aligned with the pattern — --start calls restart_service(), all actions have .is_process() and kill_other, socket paths follow $HERO_SOCKET_DIR/hero_books/<type>.sock, and the Makefile correctly delegates to the CLI binary only.

Owner

hero_foundry_ui — migration complete ✓

Repo: https://forge.ourworld.tf/lhumina_code/hero_foundry_ui
Branch: development

Changes applied

Socket path correction (hero_foundry_ui_sdk, hero_foundry_ui_server)

  • hero_foundry_ui/rpc.sock → hero_foundry/rpc.sock
  • hero_foundry_ui/ui.sock → hero_foundry/ui.sock
  • Both now respect $HERO_SOCKET_DIR (fallback ~/hero/var/sockets)

Server binary implemented (hero_foundry_ui_server/src/main.rs)

  • Was a TODO stub; now a full Axum-over-Hyper Unix socket server
  • POST /rpc (JSON-RPC 2.0), GET /openrpc.json, GET /health, GET /.well-known/heroservice.json

UI binary — TCP → Unix socket (hero_foundry_ui/src/main.rs)

  • Replaced .bind("127.0.0.1:8654") with .bind_uds(hero_foundry/ui.sock)
  • Added GET /health + GET /.well-known/heroservice.json routes

New CLI crate (crates/hero_foundry/)

  • New binary hero_foundry with --start (calls hp.restart_service()) and --stop
  • Registers both server and UI actions with kill_other + is_process() + health checks

Build files

  • Cargo.toml — added crates/hero_foundry workspace member
  • buildenv.sh — added hero_foundry to BINARIES, fixed socket path comments
  • Makefile — run → hero_foundry --start, new stop → hero_foundry --stop (removed zinit calls)

cargo check --workspace passes cleanly.
