single binary release | hero_compute runs all services (server, explorer, ui) #45
## Problem

hero_compute currently ships as three separate binaries:

- `hero_compute_server`
- `hero_compute_explorer`
- `hero_compute_ui`

This creates operational overhead.
## Solution: Unified CLI Orchestrator

A single `hero_compute` CLI binary that orchestrates all three services via hero_proc, following the `hero_proc_service_selfstart` pattern. The three service binaries remain separate crates — the CLI registers and manages them as actions under one `hero_compute` service. Ports are configurable (`--port` for the UI, `--explorer-port`, `--rpc-port`).
## Architecture

Multi-binary orchestrator — NOT a single embedded binary. The CLI manages the lifecycle; each service runs as its own process.
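A minimal sketch of that process model, assuming direct `std::process` spawning (illustrative only — the real CLI delegates lifecycle management to hero_proc rather than spawning the binaries itself):

```rust
use std::process::{Child, Command};

// Sketch only: each service runs as its own OS process; the CLI
// just starts and tracks them. The real implementation registers
// these binaries as hero_proc actions instead of spawning directly.
fn spawn_all(bins: &[&str]) -> std::io::Result<Vec<Child>> {
    bins.iter().map(|b| Command::new(b).spawn()).collect()
}

// Intended use (binaries assumed to be on PATH):
//   spawn_all(&["hero_compute_server", "hero_compute_explorer", "hero_compute_ui"])
```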
## HeroProc Integration

The CLI registers one `hero_compute` service with 3 actions, one per service binary.
- `hero_compute --start` registers and starts all three actions.
- `hero_compute --stop` stops all three.
- `hero_compute --status` queries hero_proc for service state and displays it.

## Deployment Modes

- `local` (default)
- `master`
- `worker`

## Environment Management
The CLI auto-generates `.env` with mode-specific variables:

- `HERO_PROC_SOCKET` — auto-detected
- `EXPLORER_ADDRESSES` — set per mode
- `MASTER_IP` — worker mode only
- `HERO_COMPUTE_ADVERTISE_ADDRESS` — worker auto-detects outbound IP
- `CHVM_MYCELIUM_PEERS` — 10 default public peers

User-set variables are preserved across regeneration.
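For illustration, a generated worker-mode `.env` might look roughly like this (the variable names come from the list above, but every value shown is a placeholder, not a real default):

```shell
# Illustrative sketch of a worker-mode .env; actual generated values differ.
HERO_PROC_SOCKET=/run/hero_proc.sock        # auto-detected
EXPLORER_ADDRESSES=10.0.0.5:8081            # set per mode
MASTER_IP=10.0.0.5                          # worker mode only
HERO_COMPUTE_ADVERTISE_ADDRESS=192.0.2.10   # auto-detected outbound IP
# CHVM_MYCELIUM_PEERS is also generated (10 default public peers, omitted here)
```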
## Current State (`development_cli` branch)

Already implemented:

- `hero_compute` CLI crate with `--start` and `--stop`
- `--mode local|master|worker` with `--master-ip`
- `--port`, `--explorer-port`, `--rpc-port`
- `.env` generation with mode-specific vars
- Makefile targets (`make start/stop/status`)
- `scripts/configure.sh` overhauled
- `docs/setup.md` updated

## Remaining Work

- `--status` flag — query the hero_proc SDK for `hero_compute` service state, display formatted output (actions, PIDs, health, uptime)
- `build-linux.yaml` — upload all 4 binaries per architecture
- `docs/setup.md` and README reflect the final CLI interface

## Definition of Done
- `hero_compute` CLI crate built and working
- `hero_compute --start` registers and starts all 3 services via hero_proc
- `hero_compute --stop` stops all services
- `hero_compute --status` displays service state from hero_proc
- `--mode local|master|worker` with `--master-ip` support
- `make start` calls `hero_compute --start`
- `docs/setup.md` updated with CLI usage

## Gap Analysis: `development_cli` branch vs Issue Requirements

### What's Already Done (5 commits on `development_cli`)

The `development_cli` branch has a working `hero_compute` CLI crate (`crates/hero_compute/`) that:

- registers a `hero_compute` service
- supports `--start` and `--stop` lifecycle flags
- supports `--mode local|master|worker` with `--master-ip` for worker mode
- exposes `--port` (UI), `--explorer-port`, `--rpc-port`
- generates `.env` with mode-specific variables, plus symlinks to `$HOME/.env`
- adds `make start [MODE=...] [MASTER_IP=...]`, `make stop`, `make status`
- has `scripts/configure.sh` overhauled (412 lines — downloads cloud-hypervisor, mycelium, hero_proc, my_hypervisor)
- has `docs/setup.md` updated with the new CLI usage

### Gaps: What's Missing or Diverges from the Issue
#### 1. CLI Interface: Flags vs Subcommands

The issue proposes a subcommand-style interface (e.g. `hero_compute status`, `hero_compute server logs`); the current implementation is flag-based (`--start`, `--stop`).

Decision needed: the flag-based approach (`--start`/`--stop`) works and follows the `hero_proc_service_selfstart` pattern used by other Hero services. The subcommand approach from the issue would be more user-friendly but diverges from the ecosystem pattern. Recommendation: keep the flags — consistent with hero_proc conventions — and add a `--status` flag.

#### 2. Still 4 Binaries, Not 1
The issue says "single binary release," but the current implementation is an orchestrator binary that launches 3 separate binaries (`hero_compute_server`, `hero_compute_explorer`, `hero_compute_ui`) via hero_proc.

Why this matters:
- all service binaries must be installed together (e.g. in `~/hero/bin/`)
- `resolve_bin()` in the CLI looks for sibling binaries on disk

A true single-binary approach would require:
- each service exposing `pub async fn run(args)` in `lib.rs`
- hero_proc actions invoking `hero_compute server serve` (same binary, different subcommand)

Assessment: this is the biggest gap. The current approach is a multi-binary orchestrator, not a true single binary. Making it truly single-binary requires:
- adding subcommands `hero_compute server serve`, `hero_compute ui serve`, `hero_compute explorer serve`
- refactoring each service into a `lib.rs` with a `pub async fn run()` entry point (server and explorer already have `lib.rs` stubs)
- pointing the hero_proc actions at `hero_compute server serve` instead of `hero_compute_server`

#### 3. No `hero_compute status` Command

The issue lists `hero_compute status`, but it is not implemented. The Makefile has a `status` target that shells out to `hero_proc status hero_compute`, which works but is not part of the binary itself.

Recommendation: add a `--status` flag that calls `hero_proc_sdk` to query and display the service state.

#### 4. No Per-Service `logs` Subcommand

The issue proposes `hero_compute server logs` etc.; currently the CLI offers no log viewing at all.

Recommendation: if going the subcommand route for the single binary, add a `logs` subcommand per service using the `hero_proc_sdk` log query API.

#### 5. CI Release Not Updated
The `build-linux.yaml` workflow calls `build_binaries` from `build_lib.sh` (which builds all workspace binaries), but the publish step is commented out.

Needed: uncomment and update the publish step to upload the unified binary (or all 4 if staying multi-binary).
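For illustration, a re-enabled publish step might take a shape like the following (the step name, paths, and the upload script are placeholders — only the tag gating and the per-architecture artifact set come from this issue):

```yaml
# Hypothetical sketch - not the actual build-linux.yaml contents.
- name: Publish release binaries
  if: startsWith(github.ref, 'refs/tags/')   # gated on tag push
  run: |
    # one artifact per architecture
    for arch in x86_64 aarch64; do
      ./scripts/upload_release.sh "dist/${arch}/hero_compute"  # placeholder upload script
    done
```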
#### 6. No Deprecation Notices

The issue says "Old three-binary setup deprecated but not removed until stable." The old shell scripts (`scripts/start.sh`, `scripts/stop.sh`, etc.) were removed entirely rather than deprecated.

Assessment: this is fine — the old scripts were internal. The 3 individual binaries still build and have their own `main.rs`, so they can still be run standalone. No action needed.

## Recommended Implementation Plan
### Phase 1: True Single Binary (High Priority)

Add clap subcommands to the `hero_compute` CLI.

Wire up the service entry points:

- `hero_compute_server/src/lib.rs` → expose `pub async fn serve(tcp_port: Option<u16>)`
- `hero_compute_explorer/src/lib.rs` → expose `pub async fn serve(tcp_port: Option<u16>)`
- `hero_compute_ui/src/lib.rs` → expose `pub async fn serve(port: u16, server_socket: Option<String>, explorer_socket: Option<String>)`
- move the `main.rs` logic into these `serve()` functions

Add library dependencies to `crates/hero_compute/Cargo.toml`, and update the hero_proc action commands to use self-dispatch.
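The self-dispatch idea can be sketched as follows (std-only; the real CLI would use clap subcommands, and the returned strings here are stand-ins for calls into each service's async `serve()`):

```rust
// Hypothetical sketch of self-dispatch: one binary routes
// `hero_compute <service> serve` to the matching library entry point.
// The string results stand in for the real async serve() calls.
fn dispatch(args: &[&str]) -> Result<&'static str, String> {
    match args {
        ["server", "serve"] => Ok("hero_compute_server::serve"),
        ["explorer", "serve"] => Ok("hero_compute_explorer::serve"),
        ["ui", "serve"] => Ok("hero_compute_ui::serve"),
        other => Err(format!("unknown subcommand: {other:?}")),
    }
}
```

With this shape, the hero_proc action commands become `hero_compute server serve` and friends, all hitting the same binary.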
### Phase 2: Status & Logs

- `--status` — query hero_proc for `hero_compute` service state, display a formatted table
- `{service} logs` — query the hero_proc log API for each action's logs

### Phase 3: CI & Release

- `build-linux.yaml`: publish the `hero_compute` binary (not all 4)
- `build` target: copy only the `hero_compute` binary

### Phase 4: Cleanup

- remove the `[[bin]]` sections from the service Cargo.tomls (keep lib only)
- update `docs/setup.md` to reflect the single binary
- update `scripts/configure.sh` if needed

### Updated Definition of Done
- `hero_compute` binary is the only binary needed (embeds server/explorer/ui)
- `hero_compute --start/--stop/--status` manages all services
- `hero_compute {service} serve` runs an individual service in the foreground (for hero_proc)
- `hero_compute {service} logs` shows per-service logs
- hero_proc actions call `hero_compute {service} serve` (self-dispatch)
- `build` target produces one binary
- `make start` calls `hero_compute --start`
- CI publishes `hero_compute` as a single artifact (x86_64 + aarch64)
- `docs/setup.md` documents the single-binary install

## Decision: Keep Multi-Binary Orchestrator

After review, the approach is:

- Keep the multi-binary orchestrator: the `hero_compute` CLI manages 3 separate service binaries via hero_proc; there is no need to embed all services into a single binary.
- Keep the `--start`/`--stop` flags — this follows the `hero_proc_service_selfstart` pattern and is consistent with the Hero ecosystem.
- Add a `--status` flag — the only remaining feature gap.

The `development_cli` branch already has most of the work done. Remaining:

- `--status` flag (query the hero_proc SDK, display the service state)

Implementation committed: `d797d18` on branch `development_cli`.

Changes:

- `crates/hero_compute/src/main.rs` — added the `--status` flag with `self_status()` (queries hero_proc for service state plus per-action details)
- `buildenv.sh` — added `hero_compute` to the BINARIES list
- `.forgejo/workflows/build-linux.yaml` — uncommented the publish step, gated on tag push
- `Makefile` — `make status` now uses `hero_compute --status`
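For illustration, the formatted output of `self_status()` might be assembled along these lines (`ActionStatus` and its fields are invented stand-ins; the actual hero_proc SDK types and the query itself are not shown):

```rust
// Hypothetical stand-in for per-action state returned by the hero_proc SDK.
struct ActionStatus {
    name: &'static str,
    pid: u32,
    healthy: bool,
    uptime_secs: u64,
}

// Render a simple fixed-width table of the service's actions.
fn format_status(actions: &[ActionStatus]) -> String {
    let mut out = format!("{:<10} {:>7} {:<6} {:>8}\n", "ACTION", "PID", "HEALTH", "UPTIME");
    for a in actions {
        out.push_str(&format!(
            "{:<10} {:>7} {:<6} {:>7}s\n",
            a.name,
            a.pid,
            if a.healthy { "ok" } else { "down" },
            a.uptime_secs,
        ));
    }
    out
}
```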