Nu-shell services Hero OS Demo
mik-tf 96947d38fe fix(deploy/cloud_vm): make bootstrap_droplet_source.sh actually run end-to-end
The script was added at s83 close (778d726) but had never been run on a
fresh droplet before s84 attempted the gold P5 deploy. Three independent
bugs blocked Phase 5 install. All three reproduced on a clean DO droplet
in s84; this commit lands the in-flight workarounds as canonical fixes.

1. wait_cloud_init blocked indefinitely on unattended-upgrade-shutdown.
   The shutdown-hook helper (/usr/share/unattended-upgrades/...) idles
   forever waiting for SIGTERM at shutdown. Its `comm` is exactly
   "unattended-upgr" (15-char truncation), matching `pgrep -x` until
   the 10-min timeout fires. Pre-emptively kill it; safe on a running
   box (systemd respawns at next boot).
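
   A minimal sketch of the comm-truncation behaviour behind bug 1 (plain
   string slicing; the real check is `pgrep -x unattended-upgr`):

```shell
# Kernel comm names are capped at 15 characters, so the idle shutdown
# helper "unattended-upgrade-shutdown" reports the same comm as the
# upgrader proper -- which is why `pgrep -x unattended-upgr` never drains.
name="unattended-upgrade-shutdown"
echo "${name:0:15}"   # prints: unattended-upgr
```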

2. crates/hero_builder was renamed to crates/lab upstream in hero_code
   (lhumina_code/hero_code commit a67f3528a + merge d907902, both AFTER
   s83 close). The `crates/hero_builder` directory still exists but
   contains only a vestigial src/ — no Cargo.toml — so
   `cargo install --path crates/hero_builder` fails to find a manifest.
   Fix: install from `crates/lab`. Add a backward-compat symlink
   `~/hero/bin/hero_builder -> lab` for DEPLOYMENT.md / CLAUDE.md /
   the script's own build_repo() callers that reference the old name.

3. Phase 5 install failed even after the rename because lab's Cargo.toml
   has path-deps on hero_lib:
       herolib_os = { path = "../../../hero_lib/crates/os" }
       herolib_ai = { path = "...", optional = true }
   Cargo's workspace resolver must read both manifests regardless of
   --features. hero_lib was only cloned in Phase 6, after Phase 5 ran.
   Fix: pre-clone hero_lib alongside hero_code in Phase 5.
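
The directory shape behind bug 3 can be reproduced with bare mkdir calls
(a hypothetical temp layout mirroring the checkouts above):

```shell
# lab's path deps point three levels up into a sibling hero_lib checkout,
# and cargo reads those manifests at resolve time even for optional,
# feature-gated deps -- so Phase 5 needs hero_lib on disk first.
root=$(mktemp -d)
mkdir -p "$root/hero_code/crates/lab"
cd "$root/hero_code/crates/lab"

dep="../../../hero_lib/crates/os"
[ -d "$dep" ] || echo "dep missing: pre-clone hero_lib"

mkdir -p "$root/hero_lib/crates/os"   # the Phase 5 pre-clone fix
[ -d "$dep" ] && echo "dep resolves"
```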

Also dropped --features agent (only fires with hero_aibroker running on
UDS, irrelevant during bootstrap) and bumped --depth 1 → --depth 50 for
consistency with the Phase 6 clone helper.

Validated empirically on droplet hero-s84 (164.90.192.93) — 5/35 GREEN
at time of commit, sweep ongoing. See home#230 P5 thread.

Signed-off-by: mik-tf
2026-05-09 14:10:42 -04:00

hero_demo

Deployment scaffolding for Hero OS — provisions a VM, bootstraps the operating environment, installs every Hero service from source, and brings the full ecosystem online under the nu-shell orchestration path.

Renamed from hero_zero (Apr 2026). Hero OS moved off docker-compose to a nu-shell-based orchestrator (hero_proc + hero_skills). The legacy docker pipeline lives under archive/legacy-docker-build/ for reference.

The active runbook is docs/ops/DEPLOYMENT.md.


What this repo does

  1. Provisions a VM. On any Ubuntu 24.04+ host: provide the VM yourself (cloud, on-prem, laptop). On TF Grid: use the Terraform modules under deploy/single-vm/.
  2. Bootstraps the OS: driver user, swap, ONNX runtime, Chrome, uv, nu-shell.
  3. Installs all Hero services from source via hero_skills install modules — clones each lhumina_code/hero_* repo, builds with cargo, registers a hero_proc action + service.
  4. Defines service groups via hero_proc (profiles).
  5. Seeds sample content (Office docs, hero_books libraries, OSIS schemas).
  6. Verifies the deployment via smoke scripts.

Reproducible end-to-end from a single runbook.


The Hero ecosystem

A Hero OS deployment is composed of 27 source repositories, 8 support libraries, and 24+ services running on the VM. Each service is one or more native binaries supervised by hero_proc over Unix sockets.

Source repositories

Repo Role
Deploy / Ops
hero_demo This repo — provisioning, runbook, service TOMLs, profiles
hero_skills Install modules + service_<x>.nu per service + Hero shell
Foundation
hero_lib Common Rust utilities (text, OS, network, git)
hero_lib_rhai Rhai scripting engine for Hero
hero_rpc OpenRPC framework — JSON-RPC over Unix sockets, claim auth
hero_proc Process supervisor (daemon + RPC + nu-shell modules)
hero_router TCP entry point — reverse-proxies all /hero_<x>/<sock> paths
hero_proxy Auth-mode aware reverse proxy (hero_router fronts this)
OS shell
hero_os Dioxus 0.7 WASM shell + dock + windows; serves the browser UI
hero_archipelagos Per-island UI crates (one per archipelago feature)
Storage / data
hero_osis Object storage — 17 per-domain servers (identity/business/calendar/code/communication/embedder/files/finance/flow/job/ledger/media/network/projects/settings/base/ai)
hero_db Encrypted Redis-backed DB (graph/vector/stream/ontology)
hero_foundry Fossil SCM + WebDAV file storage
hero_indexer Full-text search backend
hero_embedder Vector embeddings (ONNX, CrIS model)
Intelligence
hero_agent AI assistant (LLM + Hero tool-use)
hero_aibroker LLM router (OpenRouter, Groq, OpenAI, SambaNova)
hero_voice TTS (Kokoro) + STT (Whisper)
hero_livekit WebRTC SFU integration (rooms, conferences)
Productivity / domain
hero_books Documentation / ebook reader + Q&A
hero_office OnlyOffice integration (docx/xlsx/pptx editing)
hero_slides Presentation tooling
hero_whiteboard Collaborative whiteboard
hero_collab Collaboration session services
hero_code In-browser editor / code runner
hero_logic Rhai-based business-logic runner
hero_biz Business-domain helpers
hero_codescalers Codescalers integration
hero_matrixchat Matrix chat bridge
hero_browser Headless browser as a service (Chrome MCP)
Networking
mycelium_network Mycelium mesh networking
Documentation
docs_hero User-facing book (loaded into hero_books on deploy)
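
For ops scripting against the per-domain OSIS servers, the 17 domains listed
for hero_osis translate directly into a bash array (illustrative; the install
modules may enumerate them differently):

```shell
# The 17 hero_osis domains from the table above, one server per domain.
domains=(identity business calendar code communication embedder files
         finance flow job ledger media network projects settings base ai)
echo "${#domains[@]} domains"   # prints: 17 domains
```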

Services on the VM

A typical full deploy supervises ~24 services under hero_proc. Each is either user-facing (user class) or system (system class). Inspect with hero_proc service list.

Class Service Repo What it does
system hero_router hero_router Single TCP entry — routes URL prefixes to per-service sockets
system hero_proxy hero_proxy Auth-aware proxy fronted by hero_router
system hero_proc_ui hero_proc Web admin for the supervisor itself
system mycelium mycelium_network Overlay network daemon
system hero_os hero_os WASM shell server + Dioxus app
system hero_osis (×17 domains) hero_osis Per-domain object storage
system hero_foundry hero_foundry File storage + Fossil SCM
system hero_indexer hero_indexer Full-text search
system hero_embedder hero_embedder Vector embeddings
system hero_db hero_db Encrypted graph/vector/stream DB
system hero_aibroker hero_aibroker LLM provider router
user hero_agent hero_agent AI assistant front-end
user hero_voice hero_voice TTS / STT
user hero_books hero_books Book reader
user hero_office hero_office Office editing UI
system hero_onlyoffice hero_skills OnlyOffice Document Server (docker container, hero_proc-supervised)
system hero_slides hero_slides Slide tooling
system hero_whiteboard hero_whiteboard Whiteboard
system hero_collab hero_collab Collaboration session backend
system hero_livekit hero_livekit WebRTC SFU
system hero_code hero_code Code editor / runner
system hero_logic hero_logic Rhai logic runner
system hero_biz hero_biz Business helpers
system hero_codescalers hero_codescalers Codescalers integration
system hero_browser hero_browser Headless Chrome service

Architecture

┌────────────────────────────────────────────────────────────────────────┐
│ Browser  ─────────────────────────────────────────────────────────────┐│
│  └── HTTPS (443)                                                      ││
│         ▼                                                             ││
│      ┌──────────────────┐    on TF Grid: TLS terminates here          ││
│      │ Edge / Gateway   │    on plain Ubuntu: nginx / Caddy / your    ││
│      │  (TLS, optional) │    edge of choice                           ││
│      └────────┬─────────┘                                             ││
│               ▼                                                       ││
│      ┌──────────────────┐  TCP 9988 (or 9990 + 9988 split on TF Grid) ││
│      │   hero_router    │  reverse-proxies /hero_<x>/<rpc|ui>         ││
│      └────┬─────────────┘                                             ││
│           ▼                                                           ││
│ ┌─────────────────────────────────┐   $HERO_SOCKET_DIR/<svc>/{rpc,ui} ││
│ │ Unix sockets, one dir per svc   │   .sock                           ││
│ └────────┬────────────────────────┘                                   ││
│          ▼                                                            ││
│   per-service binaries (server + ui)        ┌─────────────────────┐   ││
│   spawned + supervised by hero_proc ────────┤  hero_proc daemon   │   ││
│                                              └─────────────────────┘   ││
└────────────────────────────────────────────────────────────────────────┘
  • Supervisor: hero_proc — Rust daemon, two-layer model (action → service group). Configured via nu-shell service_<x>.nu modules from hero_skills.
  • Routing: hero_router — single TCP listener, dispatches to per-service Unix sockets based on /hero_<name>/<sock_type> URL prefix.
  • Browser shell: hero_os_app — Dioxus 0.7 WASM, with per-archipelago native islands. Iframe fallback exists for admin _ui panels until they are migrated to Dioxus Bootstrap.
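
The routing rule in the first two bullets reduces to pure path algebra (a
sketch only; the service and socket names are hypothetical and the real
router is the Rust binary):

```shell
# hero_router: /hero_<name>/<sock_type> -> $HERO_SOCKET_DIR/<name>/<sock_type>.sock
HERO_SOCKET_DIR=/run/hero          # illustrative socket root
url_path="/hero_books/rpc"         # e.g. an incoming request path
svc=$(echo "$url_path" | cut -d/ -f2)
sock=$(echo "$url_path" | cut -d/ -f3)
echo "$HERO_SOCKET_DIR/$svc/$sock.sock"   # prints: /run/hero/hero_books/rpc.sock
```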

Quickstart — any Ubuntu 24.04+ VM

The full procedure lives in the runbook docs/ops/DEPLOYMENT.md. Summary of the happy path on a fresh Ubuntu 24.04+ host:

# 0. Workstation: have your secrets ready
source ~/hero/cfg/env/env.sh   # FORGEJO_TOKEN, OPENROUTER_API_KEY, GROQ_API_KEY, ...

# 1. Provision the VM yourself (any Ubuntu 24.04+; spec below)
#    On TF Grid: use deploy/single-vm/ Terraform — see "TF Grid path"
#                section in the runbook.

# 2. Bootstrap the host (driver user, /data layout, swap, packages,
#    ONNX, Chrome, uv, nu-shell). Runbook §2.

# 3. Install all services with one nu-shell command. Runbook §4.
su - driver -c '
  source ~/hero/cfg/init.sh
  cd ~/code/hero_skills/install
  nu -c "use service_install_all.nu *; service_install_all"
'
# ~45 min on a 16-CPU host

# 4. Patch action env, restore data from backup if any. Runbook §4.3-§5.
# 5. Build the WASM browser shell + apply theme overlay. Runbook §6.
# 6. Optional: install OnlyOffice for Office editing. Runbook §11.
# 7. Verify, snapshot. Runbook §8 / §9.

Specs — minimum for a working install

Resource | Minimum | Comfortable | Why
CPU | 8 cores | 16 cores | Cargo builds; concurrent embedder + LLM + WASM
Memory | 16 GB | 32 GB | hero_embedder peaks at 6-8 GB during indexing; OOM is hard to recover from
Disk | 100 GB | 200 GB | Source checkouts + cargo cache + corpora + backups
Disk type | SSD | NVMe SSD | btrfs/ext4 metadata-heavy workload
OS | Ubuntu 24.04+ | — | Required for ONNX 1.23.2 + recent Chrome
Network | Public IPv4 + IPv6 (optional Mycelium) | — | Outbound to LLM APIs + Forgejo

The embedder is the memory bottleneck. It loads CrIS embedding models that hold ~2 GB resident and can balloon to 6-8 GB peak when indexing larger corpora. An 8 GB VM will OOM mid-index. 16 GB is the floor; 32 GB is comfortable for the full library set.

The WASM build is the CPU/disk bottleneck. dx build for hero_os_app recompiles ~286 crates and writes ~10 GB to the cargo target dir. A 4-core / 50 GB VM works but takes ~40 min for an incremental build.
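
Those two bottlenecks suggest a quick pre-flight check before starting the
install (a sketch using the thresholds from the table above; not part of
the runbook):

```shell
# Warn if the host is below the documented minimums (8 cores / 16 GB RAM).
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {print int($2 / 1024 / 1024 + 0.5)}' /proc/meminfo)
echo "host: ${cores} cores, ${mem_gb} GB RAM"
[ "$cores" -ge 8 ]   || echo "WARN: under the 8-core minimum; cargo builds will crawl"
[ "$mem_gb" -ge 16 ] || echo "WARN: under the 16 GB floor; hero_embedder may OOM"
```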

TF Grid path (sidebar)

If your VM is on the ThreeFold Grid, three things change vs a plain Ubuntu VM:

Plain Ubuntu | TF Grid VM
Provision yourself | cd deploy/single-vm/envs/<NAME>/tf && terraform apply
ext4 swap on /swapfile | btrfs swap on /data (requires chattr +C first)
systemctl enable docker | TF Grid VMs have no systemd — start dockerd via nohup (handled by install_docker_btrfs)
nginx/Caddy + Let's Encrypt | TF Grid gateway terminates TLS for you
OO_PUBLIC_PROTO defaults are fine | Same defaults are fine (gateway can forward X-Forwarded-Proto: http; runbook §11.5 explains the override)
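
The chattr +C row matters because btrfs refuses to swapon a copy-on-write
file; the standard recipe looks roughly like this (shown as comments since
it needs root on the VM; the path and size are illustrative, and the actual
steps are handled by the runbook / install_docker_btrfs):

```shell
# Illustrative btrfs swapfile recipe -- chattr +C must come BEFORE any write:
#   mkdir -p /data/swap
#   chattr +C /data/swap                  # new files inherit No_COW
#   fallocate -l 8G /data/swap/swapfile
#   chmod 600 /data/swap/swapfile
#   mkswap /data/swap/swapfile
#   swapon /data/swap/swapfile
```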

Everything else — the install flow, every hero_proc service, every binary, the browser shell — is identical.


Repository layout (active)

hero_demo/
├── README.md                                ← this file
├── deploy/
│   └── single-vm/
│       ├── tf/                              ← Terraform modules (TF Grid)
│       ├── envs/<NAME>/                     ← per-deploy overlay
│       │   ├── tf/credentials.auto.tfvars
│       │   └── app.env
│       └── Makefile                         ← convenience wrappers
├── services/*.toml                          ← canonical per-service action TOMLs
├── profiles/*.toml                          ← service-group profiles (user, core, demo, ...)
├── data/                                    ← seed corpora (books, media, root)
├── docs/
│   ├── README.md                            ← docs entry point
│   ├── ops/
│   │   ├── DEPLOYMENT.md                    ← THE RUNBOOK
│   │   ├── FIX_TRIAGE.md                    ← bug-fix triage levels (L1-L4)
│   │   ├── README.md
│   │   └── secrets.md
│   ├── dev/
│   │   ├── architecture.md
│   │   ├── repos.md                         ← detailed repo / binary map
│   │   ├── release.md
│   │   ├── testing.md
│   │   └── e2e_checklist.md
│   └── service.md, profile.md, TOML_FORMAT_REFERENCE.md
└── archive/                                 ← legacy docker-era pipeline + stale docs

Everything not listed above (legacy Makefile, docker/, crates/, old ops docs) lives under archive/ and is preserved for reference but unused by the active flow.


Resource | URL
User-facing docs (loaded into hero_books) | docs_hero
Issue tracker (all repos route here) | lhumina_code/home
Active demo VM | https://herodemo.gent01.grid.tf
Forge index | https://forge.ourworld.tf/lhumina_code

License

Apache-2.0