make first version #1
hero_bundle — Single Binary, Multi-Service Runtime

We want a new command called: `hero_bundle`

The purpose of `hero_bundle` is to build and run the core Hero services as one compiled binary, while still preserving the existing Hero service architecture: separate services, separate endpoints, separate sockets, separate admin/server interfaces, and independent runtime loops.

This is not a replacement of the internal service model. It is a packaging and runtime optimization.
1. Goal
Today, several Hero services are built and launched as separate binaries.
For the bundled runtime, we want one binary that imports the core functionality from the six core service crates (`hero_proc_server`, `hero_router`, `hero_db_server`, `hero_code_server`, `hero_indexer_server`, `mycelium_server_msg`).

The result is a single `hero_bundle` executable.
This binary contains all core functionality inside one executable.
However, each service still behaves as if it were running independently.
2. Important Principle
`hero_bundle` must preserve the current Hero service boundaries.

Even though the code is compiled into one binary, each service keeps its own sockets, endpoints, admin interfaces, and runtime loop.
So from the outside, the system should still look like the normal Hero stack.
For example:
The sockets, endpoints, service names, routing behavior, and admin interfaces should remain aligned with the existing architecture.
3. Included Services
`hero_bundle` should import the core runtime/server/admin functionality from each service crate.

Included: `hero_proc`, `hero_router`, `hero_db`, `hero_code`, `hero_indexer`, and `hero_mycelium` (message bus only).
Each included service should expose a reusable library entrypoint, for example:
The exact module names can differ, but the principle is that every service must expose a clean programmatic runtime function.
4. Excluded Functionality
The command-line interfaces of the individual services are not needed inside `hero_bundle`.

So we do not import or expose:
The bundle is not meant to become a giant CLI wrapper.
It is only a runtime/server bundle.
The individual service crates may still keep their own binaries if needed for development, testing, or standalone usage, but `hero_bundle` should use their library/server interfaces directly.

5. Thread Model
Inside `hero_bundle`, each service runs in its own independent thread or async runtime task.

Conceptually:
Each service keeps its own lifecycle.
Each service binds its own sockets.
Each service exposes its own RPC/Admin/UI interfaces as before.
The bundle is responsible for starting them, monitoring them, and shutting them down cleanly.
6. hero_mycelium Is Message Bus Only

This is very important.

In `hero_bundle`, `hero_mycelium` should be included only as the internal message bus layer. It should not be treated as a full external Mycelium service unless explicitly required.
So the bundled usage is: the messages-only bus layer (what `mycelium_server_msg` provides, with no TUN device).

Not: a full Mycelium network node.
The message bus should allow internal services to communicate through the same abstraction that will later also support distributed communication.
This means local bundled services should be able to use the message bus without needing the full Mycelium network stack to be active.
7. External Behavior Must Stay the Same
Even though the runtime is bundled, external clients should not need to know that.
The following should remain unchanged:
For example, a client calling `hero_proc` should still call the same proc endpoint. A UI using the router should still use the same router endpoint. A tool using `hero_db` should still use the same DB endpoint.

The bundle is an internal packaging optimization, not an API redesign.
8. Why We Do This
The benefits are: one artifact to build and deploy, and one process to start and supervise.

But we still keep: separate services, separate sockets and endpoints, and independent runtime loops.
So `hero_bundle` gives us the simplicity of one binary without destroying the modular Hero service architecture.

9. Required Refactor in Existing Services
Each service should separate three layers: CLI parsing, config construction, and the server runtime itself.
Suggested structure: a `server` module in the lib exposing `ServerConfig` and `run(cfg, cancel)`, with the binary reduced to CLI parsing plus a call into that module.

The standalone binary can keep using the service as it does today: parse args, build the config, call the lib.

But `hero_bundle` should use the library entrypoint directly. The CLI parsing should not be required by the bundle.
10. Suggested hero_bundle Runtime Flow

At startup, the bundle loads its config, creates one cancellation token, installs the signal watcher, and then starts the services in order.

Example order: mycelium bus → hero_db → hero_indexer → hero_code → hero_proc → hero_router.
Router should start last because it maps to the other service endpoints.
11. Conceptual Rust Shape
This is only the conceptual shape. The exact implementation can use either OS threads, Tokio tasks, or a mix, depending on each service.
12. Key Requirement for the Coding Agent
The coding agent should not merge all service logic into one large monolithic codebase.
Instead, it should make each service reusable as a library runtime and then create `hero_bundle` as an orchestration binary.

Correct approach: refactor each service to expose a library `run` entrypoint and have the bundle spawn those entrypoints.

Wrong approach: copying service code into the bundle or merging the services into one codebase.
13. Final Definition
`hero_bundle` is a single executable that contains the core Hero services and starts them together.

It does not collapse the architecture.
It bundles the deployment, not the interfaces.
The outside world still sees the normal Hero service layout.
The inside of the binary runs each service independently, with `hero_mycelium` acting as the local message bus only.

Implementation Spec for Issue #1 — hero_bundle first version
Objective
Replace the current SDK-based remote-launcher scaffolding in `hero_bundle` with an in-process supervisor that compiles the six core Hero services into a single binary and runs each as an independent tokio task. Each service must keep binding the same Unix sockets, expose the same OpenRPC/REST endpoints, and observe the same env-driven configuration as when run standalone — so external clients (hero_router scanner, SDKs, CLIs) cannot tell the difference. `hero_mycelium` is included only as the messages-only bus (no TUN, no full peer/network surface beyond what `mycelium_server_msg` already does).

Requirements
- A new `hero_bundle` that depends on the six services as Rust libraries.
- Each service exposes `run(config, shutdown_token) -> anyhow::Result<()>`, free of clap parsing and free of `#[tokio::main]`.
- No service CLI surface is imported (`hero_proc::cli`, the `hero_router` binary's clap subcommands, etc.).
- The standalone binaries (`hero_proc_server`, `hero_router`, `hero_db_server`, `hero_code_server`, `hero_indexer_server`, `mycelium_server_msg`) keep building and behaving exactly as today; they are refactored to be thin shells around the new `server::run` library functions.
- A `tokio_util::sync::CancellationToken` wires SIGINT / SIGTERM to every service. The bundle uses `tokio::select!` to cancel on either signal.
- Bundle CLI: `--config <path>` (default `~/hero/cfg/hero_bundle.toml`), and per-service startup banner aggregation.
- All sockets bind under `$HERO_SOCKET_DIR/<service>/...`, identical to current standalone behavior.

Files to Modify/Create
Repo: hero_bundle (this repo)
- `Cargo.toml` — workspace `[workspace.dependencies]`: add the six service-server crates as git deps on `development`; drop `hero_proc_sdk` (not needed for in-process).
- `src/Cargo.toml` — depend on the six libs, add `tokio-util` for `CancellationToken`, `toml` for config.
- `src/src/main.rs` — replace SDK launcher with in-process supervisor (sequential service start, signal handling, shutdown join).
- `src/src/config.rs` — new: `HeroBundleConfig` with sub-configs per service, TOML load with sensible defaults.
- `src/src/services.rs` — new: per-service `start_*` helpers that translate bundle config into each service's config struct and call `server::run(...)`.
- `src/src/banner.rs` — new: aggregated startup banner, hides per-service banner spam.
- `README.md` — rewrite around the in-process model.
- `docs/{architecture,concepts,configuration,api,setup,testing}.md` — rewrite to drop SDK/launcher narrative.
- `scripts/` — review and simplify (no longer needed to pre-register with hero_proc).

Repo: hero_proc
- `crates/hero_proc_server/src/lib.rs` — add `pub mod server;` and re-export `pub use server::{ServerConfig, run};`.
- `crates/hero_proc_server/src/server.rs` — new file. Move the body of `main.rs` (everything after clap parsing) into `pub async fn run(cfg: ServerConfig, cancel: CancellationToken) -> anyhow::Result<()>`. Define `ServerConfig { db_path, socket_path, log_level }`.
- `crates/hero_proc_server/src/main.rs` — slim to: parse clap, init logging, build `ServerConfig`, call `server::run`.

Repo: hero_router
- `crates/hero_router/src/lib.rs` — add `pub mod server_run;` and re-exports.
- `crates/hero_router/src/server_run.rs` — new. Extract the server path of `main.rs` into `pub async fn run(cfg, cancel)`. CLI subcommands stay in `main.rs`.
- `crates/hero_router/src/main.rs` — server branch becomes a thin call into `server_run::run`.

Repo: hero_db
- `crates/hero_db_server/src/lib.rs` — add `pub mod server;`. Move `run_server(...)` into `pub async fn run(cfg, cancel)`.
- `crates/hero_db_server/src/main.rs` — slim to env parsing + `server::run`.

Repo: hero_code
- `crates/hero_code_server/src/lib.rs` — add `pub mod server;`.
- `crates/hero_code_server/src/server.rs` — new. Wrap the `async_main` body as `pub async fn run(cfg, cancel)`.
- `crates/hero_code_server/src/main.rs` — slim to banner, tracing init, build config, call `server::run`.

Repo: hero_indexer
- `crates/hero_indexer_server/Cargo.toml` — add `[lib]` line (currently bin-only).
- `crates/hero_indexer_server/src/lib.rs` — new. Move `run_server`, `AppState`, `RpcHandlerState`, all HTTP handlers, and helpers into the lib. Expose `pub async fn run(cfg, cancel)`.
- `crates/hero_indexer_server/src/main.rs` — slim to clap + banner + `run`.

Repo: mycelium_network — messages-only bus
- `crates/mycelium_daemon/src/runner.rs` — add `pub async fn run_node_with_cancel(state, private_network_config, cancel) -> Result<(), Box<dyn Error>>` that races the existing run loop against `cancel.cancelled()`.
- `crates/mycelium_server_msg/src/main.rs` — switch to the cancellable runner with a signal-driven token.

Implementation Plan
Step 1: Refactor hero_proc_server into a library entrypoint
Repo: hero_proc
Files:
`crates/hero_proc_server/src/{server.rs (new), lib.rs, main.rs, Cargo.toml}`

Changes:
- Add `pub struct ServerConfig` with `db_path`, `socket_path`, `log_level`; provide `from_env_defaults()`.
- Add `pub async fn run(cfg, cancel) -> Result<()>` containing supervisor + scheduler + scanner + cleanup + stats + web tasks. Replace the SIGINT/SIGTERM/RPC shutdown `tokio::select!` with one that also accepts `cancel.cancelled()`.
- Add the `tokio-util` dep with `["sync"]`.

Dependencies: none
Step 2: Refactor hero_router server branch into a library entrypoint
Repo: hero_router
Files:
`crates/hero_router/src/{server_run.rs (new), lib.rs, main.rs, Cargo.toml}`

Changes:
- Define `ServerConfig { port, bind, address, ui_port, router_config }`.
- `run(cfg, cancel)` extracts the server-mode body of `main.rs` (build_and_start, RPC/UI socket bind, optional extra TCP listener, final select replaced by `cancel.cancelled()`).
- Keep the panic hook in `main.rs` so the bundle does not double-install it.
- Add the `tokio-util` dep.

Dependencies: none
Step 3: Refactor hero_db_server into a library entrypoint
Repo: hero_db
Files:
`crates/hero_db_server/src/{lib.rs, main.rs, Cargo.toml}`

Changes:
- Move `run_server` from `main.rs` into the lib as `server::run(cfg, cancel)`. Internally keep the existing `broadcast::channel::<()>` shutdown; spawn a tiny task that maps `cancel.cancelled()` into `shutdown_tx.send(())`.
- Keep `expand_path`, `socket_dir`, `default_socket_path`, `resp_socket_path` in the lib.
- Add a `ServerConfig` mirroring the env vars currently parsed by `main.rs`.
- Add the `tokio-util` dep.

Dependencies: none
Step 4: Refactor hero_code_server into a library entrypoint
Repo: hero_code
Files:
`crates/hero_code_server/src/{server.rs (new), lib.rs, main.rs, Cargo.toml}`

Changes:
- Move `async_main` and `build_editor` into the lib. Expose `server::run(cfg, cancel)`.
- Extend the `tokio::sync::watch::<bool>` shutdown wiring to also fire on `cancel.cancelled()`. Web server contract (`shutdown_tx`) preserved.
- Add the `tokio-util` dep.

Dependencies: none
Step 5: Refactor hero_indexer_server into a library + add [lib]

Repo: hero_indexer
Files:
`crates/hero_indexer_server/src/{lib.rs (new), main.rs, Cargo.toml}`

Changes:
- Create `lib.rs`; move everything except clap parsing and `--info` printing into it: socket-path helpers, `AppState`, `RpcHandlerState`, log_* helpers, all HTTP handlers, `create_demo_db`, and `run_server`.
- Rename `run_server` to `pub async fn run(cfg, cancel)`. Replace the spawned signal-watcher with one that also cancels on `cancel.cancelled()`.
- Add a `[lib]` entry in Cargo.toml. Add the `tokio-util` dep.

Dependencies: none
Step 6: Add a cancellable run path to mycelium_daemon
Repo: mycelium_network
Files:
`crates/mycelium_daemon/src/{runner.rs, lib.rs}`, `crates/mycelium_server_msg/src/main.rs`

Changes:
- Add `pub async fn run_node_with_cancel(state, private_network_config, cancel)`. Internally race the existing run loop against `cancel.cancelled()` (e.g. add the token to `run_one_iteration`'s inner `tokio::select!`, returning `Ok(None)` on cancel).
- Re-export from `lib.rs`. Update the standalone binary to install a signal listener that cancels the token. Behavior unchanged.
- Add the `tokio-util` dep.

Dependencies: none (parallelizable with steps 1–5)
Step 7: Bundle workspace dep wiring
Repo: hero_bundle
Files:
`Cargo.toml`, `src/Cargo.toml`

Changes:
- In `[workspace.dependencies]`: remove `hero_proc_sdk`; add `hero_proc_server`, `hero_db_server`, `hero_code_server`, `hero_indexer_server`, `herolib_router` (the lib name of the hero_router crate), each pinned to the matching `forge.ourworld.tf/lhumina_code/<repo>.git` `branch = "development"` git source. Add `mycelium_daemon` (forge URL TBD with user during implementation).
- Add `tokio-util = { version = "0.7", features = ["rt"] }` and `toml = "0.8"`.
- Make `src/Cargo.toml` reference all of the above.

Dependencies: Steps 1–6
Step 8: Bundle config module
Repo: hero_bundle
Files:
`src/src/config.rs` (new)

Changes:
- Define `HeroBundleConfig` with one sub-config per service plus `socket_dir: Option<PathBuf>` and `log_level`.
- Sub-configs use `#[serde(default)]` and translate one-to-one to each service's `ServerConfig`.
- `pub fn load(path: Option<PathBuf>) -> anyhow::Result<HeroBundleConfig>` reads the file if present, otherwise returns defaults; resolves `~`; honors `HERO_SOCKET_DIR`.
- Default path: `~/hero/cfg/hero_bundle.toml`.

Dependencies: Step 7
Step 9: Bundle service launcher
Repo: hero_bundle
Files:
`src/src/services.rs` (new)

Changes:
- One `start_<svc>(cfg, cancel) -> JoinHandle<Result<()>>` per service. Each builds the service's `ServerConfig` from the bundle config and spawns `<svc>::server::run(cfg, cancel.clone())`.
- `start_mycelium_bus(cfg, cancel)` mirrors `mycelium_server_msg::main`: load `DaemonState`, force `no_tun = true`, persist if dirty, call `mycelium_daemon::run_node_with_cancel`.
- A `ServiceHandle { name, join }` and a `Vec<ServiceHandle>` for join-on-shutdown.

Dependencies: Step 8
Step 10: Bundle main.rs supervisor
Repo: hero_bundle
Files:
`src/src/main.rs`, `src/src/banner.rs` (new)

Changes:
- Clap CLI with `--config <PATH>`, `--info`, `--help`. No subcommands.
- Init tracing from `RUST_LOG`.
- Create one `CancellationToken`. Spawn a signal watcher that calls `cancel.cancel()` on SIGINT or SIGTERM.
- Start order: `mycelium_bus` -> `hero_db` -> `hero_indexer` -> `hero_code` -> `hero_proc` -> `hero_router`.
- On `cancel.cancelled()`: await all `JoinHandle`s with a 30s timeout, then exit 0.

Dependencies: Step 9
Step 11: Replace docs and README
Repo: hero_bundle
Files:
`README.md`, `docs/{architecture,concepts,configuration,api,setup,testing}.md`

Changes:
- Rewrite around the in-process model; drop all references to `hero_proc_sdk`, `service_bundle`, and "registers each service with hero_proc".

Dependencies: Step 10 (parallelizable with Step 12)
Step 12: Smoke test script
Repo: hero_bundle
Files:
`scripts/smoke.sh` (new or rewrite existing)

Changes:
- Build and start `hero_bundle` in the background, poll for each socket under `$HERO_SOCKET_DIR/<service>/rpc.sock`, hit a health endpoint via `curl --unix-socket`, then SIGTERM and confirm clean exit.

Dependencies: Step 10
Acceptance Criteria
- `cargo build --release --workspace` succeeds in `hero_bundle`.
- The `hero_bundle` binary starts all six services in-process from a single `tokio::main`.
- Sockets appear under `$HERO_SOCKET_DIR/{hero_proc,hero_router,hero_db,hero_code,hero_indexer,mycelium}/rpc.sock` (plus the auxiliary sockets — `hero_db/resp.sock`, `hero_router/admin.sock`, `hero_code/editor.sock`).
- Mycelium runs messages-only (`no_tun = true` forced; no TUN device created).
- The standalone binaries (`hero_proc_server`, `hero_router`, `hero_db_server`, `hero_code_server`, `hero_indexer_server`, `mycelium_server_msg`) still build and run with no behavior change.
- `hero_router scan` from outside the bundle finds all five other services through their UDS sockets.
- No CLI surface beyond `hero_bundle --help` (only `--config`, `--info`, `--help`).
- `cargo build` succeeds in each touched repo (`hero_proc`, `hero_router`, `hero_db`, `hero_code`, `hero_indexer`, `mycelium_network`).

Notes
- The repo on disk is `hero_indexer` (no `hero_index`). The plan uses that name.
- There is no `hero_mycelium` repo on disk; the mycelium codebase lives at `mycelium_network` (workspace with crates `mycelium_daemon`, `mycelium_engine`, `mycelium_api`, `mycelium_server_msg`, `mycelium_server`, etc.). The forge URL for `mycelium_network` will need to be confirmed before pinning the git dep.
- The `hero_router` lib crate name is `herolib_router` (set via `[lib] name = "herolib_router"` in its Cargo.toml). The bundle imports it as `herolib_router::server_run::run`.
- `hero_proc_server` graceful shutdown has a 30s hard timeout and a force-kill RPC path. The bundle should always trigger the graceful path on SIGINT/SIGTERM. Force-shutdown remains an internal RPC affordance unchanged.
- `hero_code_server` builds its own multi-thread runtime and calls `hero_tracing::init` in main. The bundle owns the runtime and the global tracing subscriber, so the lib `run` must be a plain `async fn` that does NOT install tracing or build a runtime; gate any per-service tracing init behind a `cfg.init_tracing: bool` flag.
- Calls to `tracing_subscriber::registry().init()` in main would panic on a second install when bundled. Each service's `run(...)` must be tracing-init free; the binary's `main.rs` keeps tracing init.
- `hero_router` installs a process-wide panic hook in `main.rs`. The bundle should not install it (or installs its own bundle-named version). Step 2 keeps the installer out of the lib path.
- `hero_db` and others read env vars at startup. With six services in one process, env vars become shared. Bundle config should set per-service fields explicitly rather than rely on env reads inside `server::run`.
- See also `docs/architecture.md`.

Test Results
- `cargo test -p hero_bundle`
- `cargo check --workspace`

Notes
- `scripts/smoke.sh` was NOT executed (requires a clean machine state); included in the repo for manual verification.
- `cargo build --release` was NOT run; a known unrelated issue exists in `mycelium_cli` release-mode (non-exhaustive match on `cli::Command::HttpGateway`).

Implementation Summary
The first version of `hero_bundle` is complete. The crate now hosts the six core Hero services in-process as independent tokio tasks, with each service binding the same Unix sockets and exposing the same OpenRPC, REST, and admin/UI interfaces it would when launched standalone.

Changes in hero_bundle

- `Cargo.toml` — workspace deps swapped from `hero_proc_sdk` + `hero_service` to path deps on the six service crates (`hero_proc_server`, `hero_db_server`, `hero_code_server`, `hero_indexer_server`, `herolib_router`, `mycelium_daemon`); added `tokio-util` and `toml`.
- `src/Cargo.toml` — depends on the six libs via `{ workspace = true }`.
- `src/src/main.rs` — clap CLI (`--config`, `--info`); tracing init; signal-driven `CancellationToken` watcher; supervised in-process startup of all six services in order (mycelium → db → indexer → code → proc → router) with a 200 ms stagger; graceful shutdown with a 30 s per-service join timeout.
- `src/src/config.rs` (new) — `HeroBundleConfig` with one sub-config per service. TOML loader; default path `~/hero/cfg/hero_bundle.toml`; `~` expansion; silent defaults when the file is absent.
- `src/src/services.rs` (new) — one launcher per service that builds the service's `ServerConfig` from bundle config and spawns `<svc>::server::run(cfg, cancel)`. `start_mycelium_bus` mirrors `mycelium_server_msg`'s state-load + `no_tun = true` logic.
- `src/src/banner.rs` (new) — `print_startup` and `print_info_json`.
- `README.md` and `docs/{architecture,concepts,configuration,api,setup,testing}.md` — rewritten to describe the in-process model.
- `scripts/smoke.sh` (new) — builds the bundle, runs it against an isolated `HERO_SOCKET_DIR`, polls for service sockets, sends SIGTERM, asserts clean exit. Lenient on socket filenames so it tolerates whatever exact paths each service uses.

Refactors in sibling repos
Each service crate gained a reusable async library entrypoint of the shape `run(cfg, cancel) -> anyhow::Result<()>` (or equivalent). The standalone binary in each repo was reduced to a thin shell that builds the config, installs a signal-driven `CancellationToken`, and calls into the lib. CLI surface and tracing init stay in each binary; the bundle owns those for itself.

- `hero_proc/crates/hero_proc_server` — added `server.rs` with `ServerConfig` + `run`. Cancel branch uses the existing 30 s graceful path (force-shutdown remains an internal RPC affordance).
- `hero_router/crates/hero_router` — added `server_run.rs`. Server-mode subcommand body extracted; CLI subcommands and panic-hook stay in `main.rs`.
- `hero_db/crates/hero_db_server` — `run_server` moved to lib `run`; cancel forwarded into the existing `broadcast::channel` shutdown.
- `hero_code/crates/hero_code_server` — `async_main` moved to lib `run`; tracing init gated behind `cfg.init_tracing` (bundle sets `false`).
- `hero_indexer/crates/hero_indexer_server` — added `[lib]` (was bin-only); moved `run_server`, `AppState`, `RpcHandlerState`, all axum handlers and helpers into `lib.rs`.
- `mycelium_network/crates/mycelium_daemon` — added `run_node_with_cancel`; threaded a `CancellationToken` into `run_one_iteration`'s inner `tokio::select!` so a cancel triggers graceful shutdown. Public `run_node` signature unchanged. `run_node_inner` and `run_one_iteration` had their error types upgraded to `Box<dyn Error + Send + Sync>` so the future is `Send`-bound and the bundle can drive mycelium with a normal `tokio::spawn`.
- `mycelium_network/crates/mycelium_server_msg` — switched to `run_node_with_cancel` with a signal-driven token. Behaviour unchanged.

Architecture properties preserved
- Each service binds the same sockets under `$HERO_SOCKET_DIR/<service>/...` that it would standalone.
- `hero_router scan` from outside the bundle continues to discover the bundled services through their UDS sockets.
- `hero_mycelium` runs as messages-only (TUN forced off); not a full network node inside the bundle.
- Services are no longer pre-registered with `hero_proc`. `hero_proc` runs as a bundled sibling and continues to supervise external child processes via fork/exec.

Test results
- `cargo test -p hero_bundle`: pass (0 tests; orchestration binary, no unit tests).
- `cargo check --workspace`: pass.
- `cargo check` after each refactor: pass for all six external crates.
- `cargo build --release` was not run for `hero_bundle`; a known unrelated issue exists in transitive `mycelium_cli` release-mode (non-exhaustive match on `cli::Command::HttpGateway`).
- `scripts/smoke.sh` was not executed in this session.

Notes / follow-ups
- The workspace currently uses `path` deps to the six sibling repos (`../hero_proc`, etc.). Switching to git deps on `branch = "development"` is a follow-up once each repo's refactor lands and is pushed.
- Three `Cargo.toml` files (`hero_indexer/Cargo.toml`, `hero_indexer/crates/hero_indexer_sdk/Cargo.toml`, `mycelium_network/crates/mycelium_daemon/Cargo.toml`) had stale `version = "0.5.0"` pins on `herolib_core` and `hero_rpc_*` while git HEAD on `development` is `0.6.0`. They were bumped to `0.6.0` to allow resolution. No code changes.
- Default data paths remain `~/hero/var/<service>/...` and `~/code` (for the `hero_code` `.coderoot`). Centralising paths under a `[paths]` block is a possible follow-up.
- All changes in `hero_bundle` are LOCAL only. Nothing has been committed yet — awaiting user approval.