# hero_demo

Deployment scaffolding for Hero OS — provisions a VM, bootstraps the operating environment, installs every Hero service from source, and brings the full ecosystem online under the nu-shell orchestration path.

Renamed from `hero_zero` (Apr 2026). Hero OS moved off docker-compose to a nu-shell-based orchestrator (`hero_proc` + `hero_skills`). The legacy docker pipeline lives under `archive/legacy-docker-build/` for reference.

The active runbook is `docs/ops/DEPLOYMENT.md`.
## What this repo does

- **Provisions a VM.** On any Ubuntu 24.04+ host: provide the VM yourself (cloud, on-prem, laptop). On TF Grid: use the Terraform modules under `deploy/single-vm/`.
- **Bootstraps the OS:** driver user, swap, ONNX runtime, Chrome, uv, nu-shell.
- **Installs all Hero services from source** via `hero_skills` install modules — clones each `lhumina_code/hero_*` repo, builds with cargo, registers a hero_proc action + service.
- **Defines service groups** via hero_proc (profiles).
- **Seeds sample content** (Office docs, hero_books libraries, OSIS schemas).
- **Verifies the deployment** via smoke scripts.
Reproducible end-to-end from a single runbook.
## The Hero ecosystem
A Hero OS deployment is composed of 27 source repositories, 8
support libraries, and 24+ services running on the VM. Each service
is one or more native binaries supervised by hero_proc over Unix
sockets.
### Source repositories
| Repo | Role |
|---|---|
| **Deploy / Ops** | |
| `hero_demo` | This repo — provisioning, runbook, service TOMLs, profiles |
| `hero_skills` | Install modules + `service_<x>.nu` per service + Hero shell |
| **Foundation** | |
| `hero_lib` | Common Rust utilities (text, OS, network, git) |
| `hero_lib_rhai` | Rhai scripting engine for Hero |
| `hero_rpc` | OpenRPC framework — JSON-RPC over Unix sockets, claim auth |
| `hero_proc` | Process supervisor (daemon + RPC + nu-shell modules) |
| `hero_router` | TCP entry point — reverse-proxies all `/hero_<x>/<sock>` paths |
| `hero_proxy` | Auth-mode aware reverse proxy (hero_router fronts this) |
| **OS shell** | |
| `hero_os` | Dioxus 0.7 WASM shell + dock + windows; serves the browser UI |
| `hero_archipelagos` | Per-island UI crates (one per archipelago feature) |
| **Storage / data** | |
| `hero_osis` | Object storage — 17 per-domain servers (identity/business/calendar/code/communication/embedder/files/finance/flow/job/ledger/media/network/projects/settings/base/ai) |
| `hero_db` | Encrypted Redis-backed DB (graph/vector/stream/ontology) |
| `hero_foundry` | Fossil SCM + WebDAV file storage |
| `hero_indexer` | Full-text search backend |
| `hero_embedder` | Vector embeddings (ONNX, CrIS model) |
| **Intelligence** | |
| `hero_agent` | AI assistant (LLM + Hero tool-use) |
| `hero_aibroker` | LLM router (OpenRouter, Groq, OpenAI, SambaNova) |
| `hero_voice` | TTS (Kokoro) + STT (Whisper) |
| `hero_livekit` | WebRTC SFU integration (rooms, conferences) |
| **Productivity / domain** | |
| `hero_books` | Documentation / ebook reader + Q&A |
| `hero_office` | OnlyOffice integration (docx/xlsx/pptx editing) |
| `hero_slides` | Presentation tooling |
| `hero_whiteboard` | Collaborative whiteboard |
| `hero_collab` | Collaboration session services |
| `hero_code` | In-browser editor / code runner |
| `hero_logic` | Rhai-based business-logic runner |
| `hero_biz` | Business-domain helpers |
| `hero_codescalers` | Codescalers integration |
| `hero_matrixchat` | Matrix chat bridge |
| `hero_browser` | Headless browser as a service (Chrome MCP) |
| **Networking** | |
| `mycelium_network` | Mycelium mesh networking |
| **Documentation** | |
| `docs_hero` | User-facing book (loaded into hero_books on deploy) |
### Services on the VM

A typical full deploy supervises ~24 services under hero_proc. Each is either user-facing (`user` class) or system (`system` class). Inspect with `hero_proc service list`.
| Class | Service | Repo | What it does |
|---|---|---|---|
| system | `hero_router` | hero_router | Single TCP entry — routes URL prefixes to per-service sockets |
| system | `hero_proxy` | hero_proxy | Auth-aware proxy fronted by hero_router |
| system | `hero_proc_ui` | hero_proc | Web admin for the supervisor itself |
| system | `mycelium` | mycelium_network | Overlay network daemon |
| system | `hero_os` | hero_os | WASM shell server + Dioxus app |
| system | `hero_osis` (×17 domains) | hero_osis | Per-domain object storage |
| system | `hero_foundry` | hero_foundry | File storage + Fossil SCM |
| system | `hero_indexer` | hero_indexer | Full-text search |
| system | `hero_embedder` | hero_embedder | Vector embeddings |
| system | `hero_db` | hero_db | Encrypted graph/vector/stream DB |
| system | `hero_aibroker` | hero_aibroker | LLM provider router |
| user | `hero_agent` | hero_agent | AI assistant front-end |
| user | `hero_voice` | hero_voice | TTS / STT |
| user | `hero_books` | hero_books | Book reader |
| user | `hero_office` | hero_office | Office editing UI |
| system | `hero_onlyoffice` | hero_skills | OnlyOffice Document Server (docker container, hero_proc-supervised) |
| system | `hero_slides` | hero_slides | Slide tooling |
| system | `hero_whiteboard` | hero_whiteboard | Whiteboard |
| system | `hero_collab` | hero_collab | Collaboration session backend |
| system | `hero_livekit` | hero_livekit | WebRTC SFU |
| system | `hero_code` | hero_code | Code editor / runner |
| system | `hero_logic` | hero_logic | Rhai logic runner |
| system | `hero_biz` | hero_biz | Business helpers |
| system | `hero_codescalers` | hero_codescalers | Codescalers integration |
| system | `hero_browser` | hero_browser | Headless Chrome service |
## Architecture

```
┌────────────────────────────────────────────────────────────────────────┐
│  Browser ─────────────────────────────────────────────────────────────┐│
│    └── HTTPS (443)                                                    ││
│         ▼                                                             ││
│  ┌──────────────────┐   on TF Grid: TLS terminates here               ││
│  │  Edge / Gateway  │   on plain Ubuntu: nginx / Caddy / your         ││
│  │  (TLS, optional) │   edge of choice                                ││
│  └────────┬─────────┘                                                 ││
│           ▼                                                           ││
│  ┌──────────────────┐   TCP 9988 (or 9990 + 9988 split on TF Grid)    ││
│  │   hero_router    │   reverse-proxies /hero_<x>/<rpc|ui>            ││
│  └────┬─────────────┘                                                 ││
│       ▼                                                               ││
│  ┌─────────────────────────────────┐  $HERO_SOCKET_DIR/<svc>/{rpc,ui} ││
│  │  Unix sockets, one dir per svc  │  .sock                           ││
│  └────────┬────────────────────────┘                                  ││
│           ▼                                                           ││
│  per-service binaries (server + ui)        ┌─────────────────────┐    ││
│  spawned + supervised by hero_proc ────────┤  hero_proc daemon   │    ││
│                                            └─────────────────────┘    ││
└────────────────────────────────────────────────────────────────────────┘
```
- **Supervisor:** `hero_proc` — Rust daemon, two-layer model (action → service group). Configured via nu-shell `service_<x>.nu` modules from `hero_skills`.
- **Routing:** `hero_router` — single TCP listener, dispatches to per-service Unix sockets based on the `/hero_<name>/<sock_type>` URL prefix.
- **Browser shell:** `hero_os_app` — Dioxus 0.7 WASM, with per-archipelago native islands. An iframe fallback exists for `admin_ui` panels until they are migrated to Dioxus Bootstrap.
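The routing rule above can be sketched as a tiny shell function. This is an illustration of the dispatch scheme only, not the actual Rust implementation; the `HERO_SOCKET_DIR` value is an assumed placeholder, while the `<svc>/<sock>.sock` layout comes from the diagram:

```shell
#!/bin/sh
# Illustrative only: map a /hero_<name>/<sock_type> URL prefix to the
# per-service Unix socket path that hero_router would dispatch to.
HERO_SOCKET_DIR=/run/hero    # assumed value for this example

route() {
  path=$1
  svc=${path#/};    svc=${svc%%/*}     # first path segment: service name
  sock=${path#/*/}; sock=${sock%%/*}   # second path segment: rpc | ui
  echo "$HERO_SOCKET_DIR/$svc/$sock.sock"
}

route /hero_books/rpc   # -> /run/hero/hero_books/rpc.sock
route /hero_os/ui       # -> /run/hero/hero_os/ui.sock
```

The real router presumably does this dispatch per-connection in Rust; the point is only that the URL prefix alone determines the backend socket.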
## Quickstart — any Ubuntu 24.04+ VM

The full procedure lives in the runbook `docs/ops/DEPLOYMENT.md`. Summary of the happy path on a fresh Ubuntu 24.04+ host:
```bash
# 0. Workstation: have your secrets ready
source ~/hero/cfg/env/env.sh   # FORGEJO_TOKEN, OPENROUTER_API_KEY, GROQ_API_KEY, ...

# 1. Provision the VM yourself (any Ubuntu 24.04+; spec below)
#    On TF Grid: use deploy/single-vm/ Terraform — see "TF Grid path"
#    section in the runbook.

# 2. Bootstrap the host (driver user, /data layout, swap, packages,
#    ONNX, Chrome, uv, nu-shell). Runbook §2.

# 3. Install all services with one nu-shell command. Runbook §4.
su - driver -c '
  source ~/hero/cfg/init.sh
  cd ~/code/hero_skills/install
  nu -c "use service_install_all.nu *; service_install_all"
'
# ~45 min on a 16-CPU host

# 4. Patch action env, restore data from backup if any. Runbook §4.3-§5.
# 5. Build the WASM browser shell + apply theme overlay. Runbook §6.
# 6. Optional: install OnlyOffice for Office editing. Runbook §11.
# 7. Verify, snapshot. Runbook §8 / §9.
```
## Specs — minimum for a working install
| Resource | Minimum | Comfortable | Why |
|---|---|---|---|
| CPU | 8 cores | 16 cores | Cargo builds; concurrent embedder + LLM + WASM |
| Memory | 16 GB | 32 GB | hero_embedder peaks 6-8 GB during indexing; OOM is hard to recover from |
| Disk | 100 GB | 200 GB | Source checkouts + cargo cache + corpora + backups |
| Disk type | SSD | NVMe SSD | btrfs/ext4 metadata-heavy workload |
| OS | Ubuntu 24.04+ | — | Required for ONNX 1.23.2 + recent Chrome |
| Network | Public IPv4 | + IPv6 (optional Mycelium) | Outbound to LLM APIs + Forgejo |
The embedder is the memory bottleneck. It loads CrIS embedding models that hold ~2 GB resident and can balloon to 6-8 GB peak when indexing larger corpora. An 8 GB VM will OOM mid-index. 16 GB is the floor; 32 GB is comfortable for the full library set.
The WASM build is the CPU/disk bottleneck. `dx build` for `hero_os_app` recompiles ~286 crates and writes ~10 GB to the cargo target dir. A 4-core / 50 GB VM works but takes ~40 min for an incremental build.
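Given those numbers, a quick preflight on the target host can catch an undersized VM before the long install starts. This is a convenience sketch checking the minimums from the table above, not a script shipped by this repo:

```shell
#!/bin/sh
# Hypothetical preflight: warn if the host is below the documented minimums.
min_mem_gb=16    # embedder floor from the spec table
min_disk_gb=100  # source checkouts + cargo cache + corpora + backups

mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
disk_gb=$(( $(df -k --output=avail / | tail -1) / 1024 / 1024 ))

[ "$mem_gb" -ge "$min_mem_gb" ]   || echo "WARN: ${mem_gb} GB RAM < ${min_mem_gb} GB minimum"
[ "$disk_gb" -ge "$min_disk_gb" ] || echo "WARN: ${disk_gb} GB free < ${min_disk_gb} GB minimum"
echo "preflight done: ${mem_gb} GB RAM, ${disk_gb} GB free on /"
```

It checks free space on `/`; if `/data` is a separate filesystem, point `df` at that mount instead.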
## TF Grid path (sidebar)
If your VM is on the ThreeFold Grid, three things change vs a plain Ubuntu VM:
| Plain Ubuntu | TF Grid VM |
|---|---|
| Provision yourself | `cd deploy/single-vm/envs/<NAME>/tf && terraform apply` |
| ext4 swap on `/swapfile` | btrfs swap on `/data` — requires `chattr +C` first |
| `systemctl enable docker` | TF Grid VMs have no systemd — start dockerd via nohup (handled by `install_docker_btrfs`) |
| nginx/Caddy + Let's Encrypt | TF Grid gateway terminates TLS for you |
| `OO_PUBLIC_PROTO` defaults are fine | Same defaults are fine (gateway can forward `X-Forwarded-Proto: http`; runbook §11.5 explains the override) |
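The swap difference matters because btrfs refuses swapfiles on copy-on-write files: `chattr +C` must be applied before any data is written. A sketch of the required order of operations, as our own illustration (the real logic lives in the bootstrap step of the runbook):

```shell
#!/bin/sh
# Illustrative: creating a swapfile on btrfs. CoW must be disabled on the
# (still-empty) file before dd writes to it, or swapon will fail. Run as root.
make_btrfs_swap() {
  f=$1; size_mb=$2
  touch "$f"
  chattr +C "$f"                                   # disable copy-on-write first
  dd if=/dev/zero of="$f" bs=1M count="$size_mb" status=none
  chmod 600 "$f"
  mkswap "$f" && swapon "$f"
}
# usage (as root): make_btrfs_swap /data/swapfile 8192
```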
Everything else — the install flow, every hero_proc service, every
binary, the browser shell — is identical.
## Repository layout (active)
```
hero_demo/
├── README.md                 ← this file
├── deploy/
│   └── single-vm/
│       ├── tf/               ← Terraform modules (TF Grid)
│       ├── envs/<NAME>/      ← per-deploy overlay
│       │   ├── tf/credentials.auto.tfvars
│       │   └── app.env
│       └── Makefile          ← convenience wrappers
├── services/*.toml           ← canonical per-service action TOMLs
├── profiles/*.toml           ← service-group profiles (user, core, demo, ...)
├── data/                     ← seed corpora (books, media, root)
├── docs/
│   ├── README.md             ← docs entry point
│   ├── ops/
│   │   ├── DEPLOYMENT.md     ← THE RUNBOOK
│   │   ├── FIX_TRIAGE.md     ← bug-fix triage levels (L1-L4)
│   │   ├── README.md
│   │   └── secrets.md
│   ├── dev/
│   │   ├── architecture.md
│   │   ├── repos.md          ← detailed repo / binary map
│   │   ├── release.md
│   │   ├── testing.md
│   │   └── e2e_checklist.md
│   └── service.md, profile.md, TOML_FORMAT_REFERENCE.md
└── archive/                  ← legacy docker-era pipeline + stale docs
```
Everything not listed above (legacy Makefile, `docker/`, `crates/`, old ops docs) lives under `archive/` and is preserved for reference but unused by the active flow.
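For orientation only, a per-service action TOML under `services/` might look roughly like the fragment below. Every key here is a guess for illustration; the canonical schema lives in `docs/TOML_FORMAT_REFERENCE.md` and should be treated as authoritative:

```toml
# HYPOTHETICAL example — field names are not taken from the repo;
# see docs/TOML_FORMAT_REFERENCE.md for the real schema.
name = "hero_books"
repo = "lhumina_code/hero_books"

[env]
RUST_LOG = "info"
```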
## Related
| Resource | URL |
|---|---|
| User-facing docs (loaded into hero_books) | docs_hero |
| Issue tracker (all repos route here) | lhumina_code/home |
| Active demo VM | https://herodemo.gent01.grid.tf |
| Forge index | https://forge.ourworld.tf/lhumina_code |
## License
Apache-2.0