Hero v1 Roadmap: how the pieces compose toward a single binary release #120

Open
opened 2026-04-15 00:06:28 +00:00 by mik-tf · 1 comment
TL;DR. Hero is converging on a single binary that runs the whole stack (hero_compute#45). Project 13 has the pieces but not the order. This issue proposes six phases for composing them into v1, plus one process rule: stage cross-repo refactors the way #118 did, so we stop hitting breaks like hero_osis#23.


Hi guys!

Hero is converging on one shape: a single statically linked binary that runs the whole stack (osis, agent, embedder, aibroker, books, collab, the lot), installable with one command, logged in once via hero_proxy, rendered in a Dioxus web desktop. That's where hero_compute#45 points, and it's what Timur's rename work (hero_rpc#13), Mahmoud's osis consolidation, the auth move to hero_proxy (#118), and the 100% Dioxus SPA goal (#104) are all driving toward.

Right now the Hero OS demo runs online via hero_zero (the repo previously known as hero_services), which packages every service binary, service TOML, seed file, and runtime asset into one Docker image and runs it on a TFGrid VM. That's been useful for shipping demos fast, but it's a transitional layer. Once hero_compute#45 lands (one binary, all services inside it), hero_zero's assembly role mostly goes away, and deploying Hero becomes a one-line install anywhere.
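To make the assembly role concrete, here's a minimal sketch of the kind of packaging step hero_zero performs today. The base image, paths, and service names are illustrative assumptions, not hero_zero's actual layout:

```dockerfile
# Illustrative sketch only: hero_zero's real image layout may differ.
FROM debian:bookworm-slim
# One binary and one service TOML per service, plus seed data and assets.
COPY bin/hero_osis bin/hero_agent bin/hero_embedder /usr/local/bin/
COPY conf/*.toml /etc/hero/
COPY seed/ /var/lib/hero/seed/
COPY assets/ /var/lib/hero/assets/
# In practice a supervisor starts every service; one is shown for brevity.
CMD ["/usr/local/bin/hero_osis"]
```

Once hero_compute#45 lands, this whole file collapses to copying one binary, which is why the assembly layer stops paying for itself.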

Project 13 tracks the individual work. What it doesn't show is the order. A lot of these pieces depend on each other, and shipping them out of sequence is what causes problems like last week's hero_osis#23, where the embedder SDK got removed before hero_archipelagos_intelligence was migrated, and downstream builds broke until we pinned.
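For readers who weren't in that thread, "until we pinned" means freezing the downstream dependency on a known-good revision. A hypothetical Cargo.toml fragment showing the general technique (the crate name, repo URL, and revision here are placeholders, not the actual fix used in hero_osis#23):

```toml
# Stopgap pin to the last revision that still shipped the embedder SDK.
# Names and rev are illustrative only.
[dependencies]
hero_sdk = { git = "https://forge.ourworld.tf/lhumina_code/hero_rpc", rev = "abc1234" }
```

Pins like this unblock builds, but they accumulate as debt, which is exactly why ordering the work matters.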

So here's a suggested order, six phases, all grounded in tickets already on the board:

  1. Foundations first. hero_rpc#13 (hero_sdk / hero_core rename), #116 (socket convention), bin_companions. Nothing downstream stabilizes until these do.
  2. Single auth surface. Finish #118. Phase 2 shipped in hero_proxy#25, phase 3 pending. Retire hero_auth once done.
  3. Canonical service pattern everywhere. HeroRpcServer trait in every service, each repo owns its own hero_service.toml. hero_zero shrinks to pack-only.
  4. UI unification. #104 (pure Dioxus, no iframes), hero_os#35 (one chrome model), hero_os#36 (dead-code cleanup). This is also what unlocks mobile and PWA from the same codebase.
  5. AI layer coherent. hero_agent replaces hero_shrimp (already in flight), hero_embedder stands on its own as a peer service (not an osis domain), a workflow coordinator ties agent + embedder + aibroker + browser_mcp + compute together into one execution story.
  6. Ship the binary. hero_compute#45, hero_compute#63 (install script), #15 (cross-compile), hero_builder#1. This is the release. At that point hero_zero can retire too, since there's no assembly layer left to maintain.
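To make phase 3 concrete, here's a minimal sketch of what "HeroRpcServer trait in every service" could look like. This is an assumed shape, not the actual trait definition: the method names, the string-typed request handling, and the socket path format are all illustrative (the real #116 socket convention may differ):

```rust
// Sketch of the canonical service pattern (phase 3). Assumed shape only:
// the real HeroRpcServer trait and #116 socket convention may differ.
trait HeroRpcServer {
    /// Service name, matching the name in the repo's hero_service.toml.
    fn name(&self) -> &str;

    /// Socket path derived from the service name (illustrative convention).
    fn socket_path(&self) -> String {
        format!("/var/run/hero/{}.sock", self.name())
    }

    /// Handle one request (simplified to strings for the sketch).
    fn handle(&self, request: &str) -> String;
}

struct Embedder;

impl HeroRpcServer for Embedder {
    fn name(&self) -> &str {
        "hero_embedder"
    }

    fn handle(&self, request: &str) -> String {
        // Echo back a tiny JSON-ish payload to show the round trip.
        format!("{{\"service\":\"{}\",\"echo\":\"{}\"}}", self.name(), request)
    }
}

fn main() {
    let svc = Embedder;
    println!("{}", svc.socket_path()); // /var/run/hero/hero_embedder.sock
    println!("{}", svc.handle("ping"));
}
```

The point of the pattern is that hero_zero (and later the single binary) only needs the trait: given any `impl HeroRpcServer`, it knows the name, the socket, and how to dispatch, so packing becomes mechanical.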

One suggestion for how to move through this without repeating the hero_osis#23 kind of break: make the #118 pattern the default for any cross-repo refactor. That rollout staged cleanly: consumers were prepared first, the producer followed, and nothing was orphaned. Adopting it as the standard would prevent the flip-flops we've been running into.

This is meant to sit alongside project 13, not replace it. The board stays the source of truth for what's actively moving. This just tries to show how the moving pieces assemble into v1.


Hi @despiegk, I've been trying to figure out how project 13 composes into v1, and this is my best attempt at mapping it. If the ordering makes sense, happy to go deeper on any phase. If not, close it. The project ACTIVE is the actual source of truth and this is just me sketching. Can loop in @timur and @mahmoud if it's worth pursuing.
