Hero OS — Master Roadmap #38

Open
opened 2026-03-18 17:24:39 +00:00 by mik-tf · 9 comments
Owner

Vision

Hero OS is a unified platform where every service is self-managing, discoverable, and observable. A single process manager (hero_proc) launches and monitors all services. Every service exposes its API via OpenRPC, and hero_inspector automatically bridges those specs into MCP — no manual MCP implementation anywhere. AI routes user intent through the right model at the right cost. All logs are centralized. Everything ships on development.


Architecture

User (Hero OS UI)
  │
  ├─ Simple intent ("show contacts")
  │   → hero_aibroker → small model → generated client → service
  │
  └─ Complex intent ("compare Q1 vs Q2 and draft summary")
      → hero_shrimp → big model → MCP (via inspector) → services
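The routing split above can be sketched as a broker decision. This is an illustrative assumption about how hero_aibroker might classify intents — the `Route` enum and the keyword heuristic below are made up for the sketch, not the actual hero_aibroker API.

```rust
// Hypothetical sketch of hero_aibroker's routing decision.
// Types and heuristic are illustrative, not the real API.

#[derive(Debug, PartialEq)]
enum Route {
    SmallModel, // cheap: direct call through a generated client
    BigModel,   // expensive: multi-step reasoning via hero_shrimp + MCP
}

/// Rough stand-in for intent classification: single-step CRUD-style
/// requests go to the small model; anything implying comparison,
/// multi-step work, or drafting goes to the big model.
fn route_intent(intent: &str) -> Route {
    let complex_markers = ["compare", "draft", "summarize", "analyze", " and "];
    let lower = intent.to_lowercase();
    if complex_markers.iter().any(|m| lower.contains(*m)) {
        Route::BigModel
    } else {
        Route::SmallModel
    }
}

fn main() {
    assert_eq!(route_intent("show contacts"), Route::SmallModel);
    assert_eq!(
        route_intent("compare Q1 vs Q2 and draft summary"),
        Route::BigModel
    );
    println!("routing ok");
}
```

In the real broker the classification itself would presumably be model-driven or cost-aware; the point here is only the shape of the decision: one cheap path, one expensive path.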

Core Components

Component Role
hero_proc Central process manager (replaces zinit). Launches all services, collects logs into SQLite, provides tree view of all prefixes
hero_inspector The bridge: reads OpenRPC specs → auto-generates MCP interfaces. MCP is only accessed via inspector, never implemented manually
hero_aibroker Intelligent LLM routing broker. Routes cheap vs expensive models based on intent. Generates Python clients from OpenRPC
hero_shrimp AI agent (Bun/TypeScript). Complex multi-step reasoning, MCP tool calls via inspector, conversation state. Stays TypeScript — no Rust rewrite
hero_auth OAuth-based SSO. SDK provides login module for all applications
hero_collab Group communication (Slack alternative). SQLite, OSIS, OpenRPC. Data export via WebDAV
hero_osis Schema-first object storage
hero_rpc Server framework: HeroRpcServer, HeroUiServer, lifecycle
hero_os Dioxus desktop shell + islands
hero_archipelagos All island components (AI, contacts, calendar, etc.)
hero_services Deployment orchestration, Docker builds

hero_proc — Process Manager

hero_proc replaces zinit as the unified process manager.

Self-starting services

  • Every service binary supports a --start flag
  • --start uses the Hero SDK to register the process with hero_proc
  • The SDK can be embedded in any binary — one pattern for all services
  • No more custom service management code
  • Makefile targets call binaries directly: make run → hero_embedder --start
  • hero_proc launches all binaries (except itself)
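The one-pattern-for-all entry point might look like the sketch below. The `register_with_hero_proc` function is a hypothetical stand-in for the Hero SDK registration step — only the dispatch shape is the point.

```rust
// Sketch of the shared CLI entry point every service binary could use.
// `register_with_hero_proc` is a hypothetical stand-in for the SDK call.

use std::env;

fn register_with_hero_proc(service: &str) -> Result<(), String> {
    // The real SDK would talk to hero_proc here (e.g. over a socket).
    println!("registered {service} with hero_proc");
    Ok(())
}

fn run_service(service: &str) {
    println!("{service} serving requests");
}

fn main() {
    let service = "hero_embedder"; // each binary knows its own name
    match env::args().nth(1).as_deref() {
        // --start: register with hero_proc, then serve (the managed path)
        Some("--start") => {
            register_with_hero_proc(service).expect("registration failed");
            run_service(service);
        }
        // anything else: run unmanaged (useful for local debugging)
        _ => run_service(service),
    }
}
```

Because the pattern lives in the SDK, every binary gets the same behavior for free — no per-service management code.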

Startup sequence

  1. Service configures itself and registers a health check with hero_proc
  2. hero_proc checks that the service is up and running
  3. If not → error with clear, actionable prompt
  4. If healthy → continue
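The sequence above amounts to a retry loop with an actionable failure message. A minimal sketch, assuming a generic `probe` closure in place of the real health check registered with hero_proc:

```rust
// Sketch of the startup check: retry a health probe, then fail with a
// clear, actionable error. `probe` stands in for the real health check.

use std::{thread, time::Duration};

fn wait_until_healthy(
    service: &str,
    mut probe: impl FnMut() -> bool,
    attempts: u32,
) -> Result<(), String> {
    for _ in 0..attempts {
        if probe() {
            return Ok(()); // healthy → continue startup
        }
        thread::sleep(Duration::from_millis(100));
    }
    // Actionable error, not just "failed": tell the operator where to look.
    Err(format!(
        "{service} failed its health check after {attempts} attempts; \
         check hero_proc logs under the `{service}.*` prefix and retry with --start"
    ))
}

fn main() {
    // Simulated service that becomes healthy on the third probe.
    let mut calls = 0;
    let ok = wait_until_healthy("hero_embedder", || { calls += 1; calls >= 3 }, 5);
    assert!(ok.is_ok());

    let err = wait_until_healthy("hero_voice", || false, 2);
    assert!(err.unwrap_err().contains("hero_voice"));
    println!("startup-check ok");
}
```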

Logging

  • All service logs sent to hero_proc → stored in SQLite
  • Logs buffered in SDK (line by line, flushed every second)
  • Structured prefixes: embedder.workspace.id.job.id.source
  • Multiple log levels, centralized for observability
  • External work = always a job; internal work = a log
  • Tree view of all prefixes in hero_proc UI
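The SDK-side buffering described above can be sketched as a small batch buffer. Names are illustrative assumptions, not the actual Hero SDK; the real SDK flushes on a one-second timer, while here the flush is explicit so the behavior is easy to see.

```rust
// Sketch of the SDK log buffer: lines are tagged with a structured
// prefix and shipped in batches. Illustrative only, not the real SDK.

struct LogBuffer {
    prefix: String, // e.g. "embedder.workspace.id.job.id.source"
    lines: Vec<String>,
}

impl LogBuffer {
    fn new(prefix: &str) -> Self {
        Self { prefix: prefix.to_string(), lines: Vec::new() }
    }

    /// Buffer one log line, tagged with the structured prefix.
    fn log(&mut self, msg: &str) {
        self.lines.push(format!("{} | {}", self.prefix, msg));
    }

    /// Drain the buffer; in the real SDK this batch would be shipped to
    /// hero_proc (which stores it in SQLite) roughly once per second.
    fn flush(&mut self) -> Vec<String> {
        std::mem::take(&mut self.lines)
    }
}

fn main() {
    let mut buf = LogBuffer::new("embedder.ws1.job42.stdout");
    buf.log("model loaded");
    buf.log("embedding batch 1");
    let shipped = buf.flush();
    assert_eq!(shipped.len(), 2);
    assert!(shipped[0].starts_with("embedder.ws1.job42.stdout | "));
    assert!(buf.flush().is_empty()); // buffer is empty after a flush
    println!("log-buffer ok");
}
```

The structured prefix is what makes the hero_proc tree view possible: every dot-separated segment becomes a level in the tree.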

SDK

  • SDK knows how to register a process with hero_proc
  • SDK provides login via hero_auth
  • SDK handles log buffering and shipping
  • Can be embedded/imported inside any binary that needs to be a managed service
  • Command line: --start uses SDK to launch and register

Service Standards

Every Hero service must expose:

  1. OpenRPC spec — single source of truth for the API
  2. Health endpoint — used by hero_proc for monitoring
  3. MCP — always via hero_inspector, never manually implemented

hero_inspector reads OpenRPC specs and auto-generates MCP interfaces. The inspector is being merged into the proxy and service layer.
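The bridging step can be sketched as a pure mapping from OpenRPC methods to MCP tool definitions. The minimal structs below are illustrative — real OpenRPC and MCP documents are JSON and carry full parameter schemas — but the mapping shape is the reason no service ever hand-writes MCP.

```rust
// Sketch of the inspector's OpenRPC → MCP bridge. Minimal illustrative
// types; real specs are JSON with schemas.

struct OpenRpcMethod {
    name: String,    // e.g. "contacts_list"
    summary: String, // human-readable description from the spec
}

struct McpTool {
    name: String,
    description: String,
}

/// Derive MCP tool definitions from an OpenRPC spec's method list.
fn bridge(service: &str, methods: &[OpenRpcMethod]) -> Vec<McpTool> {
    methods
        .iter()
        .map(|m| McpTool {
            // Namespace tools by service so dozens of services don't collide.
            name: format!("{service}.{}", m.name),
            description: m.summary.clone(),
        })
        .collect()
}

fn main() {
    let spec = vec![OpenRpcMethod {
        name: "contacts_list".into(),
        summary: "List all contacts".into(),
    }];
    let tools = bridge("hero_osis", &spec);
    assert_eq!(tools[0].name, "hero_osis.contacts_list");
    println!("bridge ok");
}
```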

Services architecture

  • Services are a combination of actions (the "run" concept is removed)
  • Actions can connect to each other; jobs execute within actions
  • Internal job logs captured automatically via the SDK
  • More modular, fewer mistakes
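A minimal data-model sketch of the actions/jobs design, under the stated assumptions (actions connect to other actions; jobs execute within actions). The types are illustrative guesses at the shape, not a real Hero API:

```rust
// Illustrative sketch of the services-as-actions model.

struct Job {
    id: u32, // external work is always tracked as a job
}

struct Action {
    name: String,
    next: Vec<String>, // actions can connect to downstream actions
    jobs: Vec<Job>,    // jobs execute within this action
}

fn main() {
    let embed = Action {
        name: "embed".into(),
        next: vec!["index".into()], // embed feeds into index
        jobs: vec![Job { id: 1 }, Job { id: 2 }],
    };
    let index = Action { name: "index".into(), next: vec![], jobs: vec![] };

    assert_eq!(embed.jobs.len(), 2);
    assert!(embed.next.contains(&index.name));
    println!("actions ok");
}
```

With the "run" concept removed, job logs attach to the action they executed in, which is what lets the SDK capture them automatically.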

Git & Branches

  • development_kristof has been merged into development
  • All work continues on development
  • Some fixes needed to stabilize after the merge (#49)
  • Goal: everything clean on development, then ship

Testing Workflow

  1. Verify manually that things work
  2. If something is broken, capture a screenshot and file an issue
  3. Ask the AI to write a test that targets the issue — the test must fail, proving it catches the problem
  4. Fix the underlying issue
  5. Run the test again and confirm it passes — the fix is validated, and the test stays as a guard against regression

Roadmap Phases

P0: Unblock — Fix Docker deployment

Immediate blocker from the development_kristof merge.

Issue Title Status
#49 Docker deployment broken after service config restructuring open — blocker

Fix the flat config layout (services/*.toml vs services/user/*.toml), broken profile system, and binary name mismatches (_openrpc/_http vs _server/_ui).

P1: hero_proc — Process Manager

Replace zinit with hero_proc. Every service self-starts with --start.

Issue Title Status
#27 Unify server lifecycle: all Hero services should use one pattern open
#30 Deprecate OServer — migrate hero_os to HeroRpcServer open
#6 Fix 4 core services as needed by hero_rpc open
#50 Hero OS development plan: hero_proc, services, auth, collab, shipping open

Additional work (from #50):

  • Build hero_proc, stress test it
  • Add --start flag to every service binary
  • Update all Makefiles to use hero_<service> --start
  • Centralized logging to SQLite
  • Demo: terminal + UI

P2: Inspector + OpenRPC

All services expose OpenRPC. Inspector auto-generates MCP. No manual MCP anywhere.

Issue Title Status
#18 AI Broker: OpenRPC-driven Python code generation open (blocked on #27)
#29 hero_service: standard tests, E2E test harness, CLI tool open

Additional work:

  • hero_inspector --start via hero_proc
  • Merge inspector into proxy and services
  • Confirm MCP always via inspector (check with Timur)

P3: Stabilization — AI & UX

Fix AI experience, auth, conversations.

Issue Title Status
#32 AI SSE streaming (word-by-word response rendering) open
#45 AI conversations stored in OSIS open
#48 AI Assistant: double loading indicator open
#37 Fix preflight errors in hero_rpc open
#46 Build: auto-invalidate WASM cache open

P4: New Services — Auth, Collab, Compute

Issue Title Status
#33 Compute island: connect to TFGrid node open

New work:

  • hero_auth: OAuth SSO, SDK login module, used by all applications
  • hero_collab: Slack alternative — SQLite, OSIS, OpenRPC, WebDAV export. Group management (admin creates groups, assigns users). Ship fast.
  • UI theme: revert to original (current one too plain)

P5: Ship

Issue Title Status
#42 Comprehensive Hero ecosystem docs update open
#15 Cross-compilation & getting started docs open
#28 Dioxus Bootstrap migration open
#51 README & docs: setup flow for developers and users closed

Bulk of code → integration → fix bugs → ship. Services go live one by one. Compute goes live, people can play.


Other Items

  • Nushell integration on Hero
  • Git worktrees for deployment workflows
  • Hero browser MCP in Rust

Closed Issues

Issue Title
#34 4-pillar standard (OpenRPC+MCP+Health+Socket)
#35 Context creation fix
#36 Clean MCP architecture
#39 Browser favicons
#40 hero_auth users_delete bug
#41 hero_osis MCP data format issues
#43 Auth first-user setup
#44 AI chat markdown rendering (dark mode)
#47 Light mode bold text invisible in AI chat

Key Decisions

  • hero_proc replaces zinit — every service self-starts with --start, one pattern for all
  • MCP only via inspector — no service implements MCP directly, inspector reads OpenRPC and generates MCP
  • Shrimp stays TypeScript — fast iteration, LLM ecosystem
  • OpenRPC = single source of truth — MCP, Python clients, SDKs, docs, discovery all derived from it
  • All work on development — no more feature branches for now, stabilize and ship
  • Logging centralized — all logs → hero_proc → SQLite, observable, queryable
  • Right model for the job — small models for simple ops, big models for complex reasoning

Key Repos

Repo Role
hero_proc Process manager — launches, monitors, logs all services
hero_rpc (hero_service crate) Server framework: HeroRpcServer, HeroUiServer, lifecycle
hero_inspector Service discovery, docs, OpenRPC → MCP gateway
hero_aibroker Intelligent LLM routing + Python code generation
hero_shrimp AI agent + MCP tool management (Bun/TypeScript)
hero_auth OAuth SSO, login SDK
hero_collab Group communication, Slack alternative
hero_os Dioxus desktop shell + islands
hero_archipelagos All island components
hero_osis Schema-first object storage
hero_services Deployment orchestration, Docker builds

Build & Deploy

Build flow

cd lhumina_code/hero_services
source ~/hero/cfg/env/*
make dist
  • Rust service binaries built from local repos via Docker volume mount
  • Dioxus shell (hero_os_app) built by dx build with local [patch] overrides
  • cargo-local-patches.toml maps hero_archipelagos + hero_osis crates to local paths
  • Changes picked up immediately — no push needed before building
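The local-override step rides on Cargo's standard `[patch]` mechanism. A sketch of what cargo-local-patches.toml plausibly contains — the git URL and relative paths below are assumptions, not the actual file:

```toml
# Illustrative sketch only — the real cargo-local-patches.toml lives in
# hero_services; the git URL and paths are assumptions.
[patch."https://forge.ourworld.tf/lhumina_code/hero_archipelagos.git"]
hero_archipelagos = { path = "../hero_archipelagos" }

[patch."https://forge.ourworld.tf/lhumina_code/hero_osis.git"]
hero_osis = { path = "../hero_osis" }
```

Cargo resolves every git dependency on the patched URL to the local checkout instead, which is why local changes are picked up without pushing first.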

Deploy

cd lhumina_code
docker build --no-cache -f hero_services/Dockerfile.pack -t forge.ourworld.tf/lhumina_code/hero_zero:0.1.0-dev .
docker push forge.ourworld.tf/lhumina_code/hero_zero:0.1.0-dev
cd hero_services/deploy/single-vm && make update ENV=herodev

Image Tagging (semver 2.0)

Tag Purpose
hero_zero:0.1.0-dev Development builds, what herodev runs
hero_zero:0.1.0-rc1 Release candidate, ready for testing
hero_zero:0.1.0 Stable release, what users pull
hero_zero:latest Always points to latest stable

Rules

  • Build inside Docker only (never bare metal)
  • Dev builds use 0.1.0-dev tag. Never push a stable tag without confirmation
  • Issues tracked on lhumina_code/home
  • Source env vars before running anything: source ~/hero/cfg/env/*

Dev Environment

  • Working dir: lhumina_code/
  • Build: lhumina_code/hero_services/
  • Dev deploy: herodev.gent04.grid.tf
  • Issues: lhumina_code/home

#50 is the detailed development plan for hero_proc, services, auth, collab, and shipping — tracked as work item in P1. Updated 2026-03-19.

Author
Owner

Session 20 — Status Update

Completed: Issue #43 — Auth First-User Setup

Commits pushed to development:

  • hero_osis 56a7ab4 — user_create and user_count RPC methods
  • hero_os 96c33ae — Setup wizard + login UX improvement

What was built:

  • Backend: Two new UserService RPC methods (user_create, user_count) with input validation, argon2 password hashing, and automatic Owner role for first user
  • Frontend: Setup wizard component shown on fresh instances (no users). Once the first user registers, the wizard disappears and normal login takes over
  • Login screen label changed from "Name" to "Name or Email" (backend already accepted both)

Tested locally in Docker:

  • Fresh instance → setup wizard appears → create account → auto-login → desktop loads
  • Existing users → login screen appears → login with name or email works
  • Validation: empty name, short password, duplicate email all return proper errors
  • First user gets Owner role, subsequent users get Member

Remaining from roadmap

  • #42 — Comprehensive docs update
  • #37 — Fix preflight CORS errors in hero_rpc
  • #45 / #32 — Deferred (Shrimp design decision needed)

Blockers found (not from our work)

  • hero_services: Kristof's profile restructure (development_kristof merge) broke build pipeline — service TOMLs use wrong binary names (_openrpc/_http instead of _server/_ui), Dockerfile.pack missing profiles copy, build-local.sh references deleted services/user/ directory
  • hero_redis: Kristof's merge introduced compile errors (missing imports, removed modules)
  • hero_voice, hero_aibroker: Also fail to compile on current development

These are separate from #43 and need their own fix session.

mik-tf changed title from Hero OS AI & Service Integration — Master Roadmap to Hero OS — Master Roadmap 2026-03-19 19:25:01 +00:00
Author
Owner

Update 2026-03-19: New development plan

Issue #50 outlines the new Hero OS development plan:

  • hero_proc replaces zinit as process manager
  • Every service self-registers with --start flag
  • OpenRPC/MCP through hero_inspector
  • Binary naming: _openrpc / _http (replacing _server / _ui)
  • development_kristof merged into development

Immediate blocker: #49 — Docker deployment broken after the merge. Fix in progress.

Author
Owner

#51 (README & docs: setup flow for developers and users) is now closed.

  • README rewritten for two audiences: users (docker pull :hero) and developers (bootstrap → make run)
  • make run now does dist → pack → docker run from source — same image users pull
  • Env vars documented as 3 ways (UI, source, CLI), UI prompts on first use
  • Semver tagging convention in place

Signed-off-by: mik-tf

Author
Owner

P1 hero_proc: Phase 1 complete

hero_proc replaces zinit in Docker deployment (commit bd14ea1 on hero_services). 21/21 services running.

Remaining P1 work:

  • #52 — Migrate 5 services to new CLI pattern
  • Phase 2 — Self-registering services via --start

Signed-off-by: mik-tf

Author
Owner

P1 Progress Update

P0 (Docker deployment): Done (#49 closed)
P1 (hero_proc): Phase 1 (hero_proc replaces zinit) and Phase 2 (all services on HeroServer CLI) complete. #52 closed. Remaining: --start self-registration, centralized logging.

All 21 services running with serve subcommand pattern. Next: Phase 3 of #50.

Author
Owner

Roadmap progress update

Completed

Priority Item Status
P0 Docker deployment (#49) Done
P1.1 hero_proc replaces zinit Done — 21/21 services
P1.2 HeroServer CLI pattern (#52) Done — all binaries
P1.3 Self-registration via HeroLifecycle Done — unified lifecycle, self-registering binaries
P2 Inspector/MCP cleanup Done — hero_auth manual /mcp removed, inspector is sole MCP gateway

In progress / next

Priority Item Status
P1.4 Centralized logging SQLite storage done, SDK shipping next
P3 AI/UX bugs (#32, #45, #48, #37, #46) Not started
P4 hero_collab Not started
P5 Ship (docs, testing, deploy) Not started

Key architectural changes

  • Single lifecycle pattern: HeroLifecycle from hero_service crate. Every binary supports start, stop, status, logs, run, serve
  • Self-registration: Each binary registers itself with hero_proc via start command. hero_services_server is thin orchestrator
  • MCP via inspector only: hero_inspector reads OpenRPC from all services and generates MCP tools. No service implements MCP directly
  • Local build patches: cargo-server-patches.toml redirects git deps to local repos during Docker builds

Signed-off-by: mik-tf

Author
Owner

Master roadmap — session progress

Completed this session

Priority Item Status
P1.3 Self-registration via HeroLifecycle Done — 28/29 binaries
P1.4 Centralized logging (SDK) Done — HeroLogger with buffered shipping
P2 Inspector/MCP cleanup Done — hero_auth /mcp removed
— Lifecycle migration (proxy, foundry_admin, biz) Done
— Build system (local patches) Done
— Architecture docs Done

Remaining

Priority Item Status
P1.4 Log UI tree view Open (E5)
P3 AI/UX bugs (#32, #45, #48, #37, #46) Open (Stream C)
P4 hero_collab Open (Stream D)
P5 Ship (docs, testing, deploy) Open (Stream F)

Docker image status

  • Tag: hero_zero:dev
  • Services: 20/21 running (hero_foundry_ui = separate binary, expected)
  • All smoke tests pass
  • Build: 39 binaries, validation passed

Signed-off-by: mik-tf

Author
Owner

Progress Update — 2026-03-20

Closed 4 issues in this session: #61, #62, #63, #64. All changes pushed to development across 3 repos.

What was done

#64 — Smoke test harness (new)
Created hero_services/tests/smoke.sh — 57 tests across 10 categories (health, proxy routing, auth flow, WASM content, admin dashboards, seed data, CORS, RPC discovery, service endpoints, socat bridges). Makefile targets: make smoke, make smoke-remote, make smoke-docker. Final result: 56 passed, 0 failed, 1 skipped.

#62 — Pre-download embedder models
BGE embedding models (1.9G) are now downloaded at build time and baked into the Docker image. Eliminates 2-3 min HuggingFace download on first boot. Entrypoint waits for OSIS socket before admin seeding.
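The "wait for the OSIS socket before seeding" ordering can be sketched as a small entrypoint helper. The socket path, timeout, and function name below are illustrative assumptions; the actual entrypoint may differ.

```shell
#!/usr/bin/env sh
# Sketch of an entrypoint guard: block until a unix socket appears, then seed.
# OSIS_SOCK path and the 60s default timeout are assumptions for illustration.
OSIS_SOCK="${OSIS_SOCK:-/var/run/hero/hero_osis.sock}"

# wait_for_socket SOCK [TRIES] — poll once per second until SOCK exists
# as a unix socket (-S), or give up after TRIES attempts.
wait_for_socket() {
  sock="$1"; tries="${2:-60}"
  while [ "$tries" -gt 0 ]; do
    [ -S "$sock" ] && return 0
    tries=$((tries - 1))
    sleep 1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}

# In the entrypoint, seeding would only run after the socket is up:
# wait_for_socket "$OSIS_SOCK" && run_admin_seed   # (seed step elided)
```

Polling for the socket file keeps the guard dependency-free, at the cost of one-second granularity.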

#63 — WASM socket routing 502s
Fixed 6 broken services:

  • hero_aibroker: added to profile (server+ui), fixed binary name and serve subcommand, socket symlink
  • hero_auth_ui: symlink to hero_auth_server.sock
  • hero_foundry_ui: replaced dead socat bridge with symlink to hero_foundry_admin.sock
  • hero_shrimp: fixed TOML to use compiled binary, added to profile, socat bridge
  • hero_biz: added to profile
  • hero_voice: added missing hero_service dependency — now builds successfully
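Several of the fixes above are the same shape: a UI looks for a socket under its own name, so it gets a symlink to the server socket that actually exists. A minimal sketch, using the hero_auth case with assumed paths (and a plain file standing in for the real unix socket):

```shell
#!/usr/bin/env sh
# Sketch of the symlink-style fix: point the UI's expected socket name at
# the live server socket. Directory and filenames are illustrative.
SOCK_DIR="$(mktemp -d)"

# Stand-in for the socket hero_auth_server actually creates
# (a regular file here; in production this is a unix socket):
touch "$SOCK_DIR/hero_auth_server.sock"

# hero_auth_ui expects its own socket name; link it to the real one:
ln -sf "hero_auth_server.sock" "$SOCK_DIR/hero_auth_ui.sock"

ls -l "$SOCK_DIR/hero_auth_ui.sock"
```

The relative link target keeps the mapping valid wherever the socket directory is mounted; the socat bridges mentioned above are used instead when a protocol hop (e.g. TCP to unix socket) is needed, not just a rename.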

#61 — Zinit→hero_proc islands
Rewrote services and service archipelago islands from zinit REST API (port 9800) to iframe embeds of hero_proc_ui. Added zinit_ui.sock → hero_proc_ui.sock symlink. No more zinit dependency.

Roadmap impact

| Phase | Progress |
|-------|----------|
| P0 | Docker deployment is stable. 39 services running, all proxy routes working. |
| P1 | hero_proc fully operational as process manager. zinit completely replaced in the UI layer. All services self-register via hero_proc. |
| P2 | Inspector discovers 39 services; MCP gateway working (tested via smoke). |

Current state

  • Docker image: hero_zero:dev (5.6GB with baked models)
  • Services: 39 running via hero_proc
  • Smoke tests: 56/57 pass (1 skip = timing)
  • Local test: docker run -d -p 8080:6666 hero_zero:dev → http://localhost:8080 works
  • Login: admin / hero123

Signed-off-by: mik-tf


Future items absorbed from #56

  • Nushell integration on Hero
  • Git worktrees for deployment workflows
  • Hero browser MCP — full Rust-native rewrite (hero_browser_server exists but uses Chrome DevTools protocol)

Signed-off-by: mik-tf
