
Hero Compute

Slice-based virtual machine manager for the Hero Ecosystem. Divides a physical host into 4 GB RAM slices, each backing exactly one VM. Built on the hero_rpc OSIS framework with JSON-RPC 2.0 over Unix sockets.

New here? Read the Hero Compute Explainer for a visual guide to how slices, VMs, secrets, and the explorer work together.

How It Works

On bootstrap the server reads /proc/meminfo for RAM and queries df for disk, reserves 1 GB for the OS, and carves the rest into 4 GB slices, splitting disk evenly among them:

Example: 64 GB RAM, 2 TB SSD
  usable     = 64 - 1 = 63 GB
  slices     = floor(63 / 4) = 15
  disk/slice = floor(2000 / 15) = 133 GB

Deploy a VM into any free slice; start/stop/restart return immediately while the hypervisor works in the background.
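
The carve-up above can be sketched in a few lines of Rust. This is an illustrative sketch of the arithmetic only; the function and constant names are not the server's actual internals:

```rust
/// Illustrative sketch of the bootstrap slice math (not the real server code).
/// Returns (number of slices, disk GB per slice).
fn carve_slices(total_ram_gb: u64, total_disk_gb: u64) -> (u64, u64) {
    const OS_RESERVE_GB: u64 = 1; // reserved for the host OS
    const SLICE_RAM_GB: u64 = 4;  // each slice backs exactly one VM

    let usable = total_ram_gb - OS_RESERVE_GB;
    let slices = usable / SLICE_RAM_GB;          // integer division = floor
    let disk_per_slice = total_disk_gb / slices; // disk split evenly

    (slices, disk_per_slice)
}

fn main() {
    // The README's example: 64 GB RAM, 2 TB (2000 GB) SSD
    let (slices, disk) = carve_slices(64, 2000);
    println!("{slices} slices, {disk} GB disk each"); // 15 slices, 133 GB disk each
}
```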

Requirements

  • Linux (x86_64) bare-metal server with hardware virtualization (KVM)
  • Rust toolchain (1.92+)
  • System packages: libssl-dev, pkg-config, iproute2, busybox-static
  • hero_proc process supervisor (must be running)
  • my_hypervisor (VM hypervisor)
  • cloud-hypervisor (VMM backend)

Quick Start

# First time -- install all dependencies and build:
make configure

# Start in local mode (single node):
make start

# Open the dashboard:
# http://<server-ip>:9001

Register the node, then deploy a VM. The UI guides you through image selection (images come from the hero_compute_registry, all with SSH key auth pre-configured). Add your SSH key in Settings, then SSH in via Mycelium IPv6: ssh root@<ip>.

Multi-Node Setup

# Master node (explorer hub -- other nodes connect here):
make start MODE=master

# Worker node (connects to a master):
make start MODE=worker MASTER_IP=<master-ip>

See Setup Guide for full installation and multi-node instructions.

Service Architecture

Hero Compute uses the hero_proc_service_selfstart pattern:

  • hero_compute -- CLI binary that registers all components with hero_proc (--start / --stop)
  • hero_compute_server -- JSON-RPC daemon (foreground, managed by hero_proc)
  • hero_compute_ui -- Admin dashboard (foreground, binds TCP port 9001 directly)
  • hero_compute_explorer -- Multi-node registry (foreground, managed by hero_proc)

hero_compute --start                                    # Local mode (default)
hero_compute --start --mode master                      # Explorer hub
hero_compute --start --mode worker --master-ip X.X.X.X  # Worker node
hero_compute --stop                                     # Stop everything

Make Targets

Target                                     Description
make configure                             Install all dependencies and build
make start                                 Build + start in local mode (single node)
make start MODE=master                     Start as master (explorer hub for workers)
make start MODE=worker MASTER_IP=x.x.x.x   Start as worker connected to a master
make stop                                  Stop all services
make status                                Show service status via hero_proc
make build                                 Build all binaries
make clean                                 Remove build artifacts
make test                                  Run unit tests
make lint                                  Run clippy linter
make fmt                                   Format code

Security -- VM Secrets

VMs are protected by a secret -- a capability token you set at deploy time. All VM operations (start, stop, delete, list) require the matching secret.

Important: The secret is your identity, not a password. Anyone who knows your secret can see and manage your VMs. This is by design for simplicity.

  • Always use generated secrets. The UI auto-generates a 16-character random secret on first visit. Use it.
  • Never use common words or short strings. If two users pick the same secret, they share VM access.
  • Treat it like a private key. Store it securely. Don't share it.
  • Empty secret = no protection. All operations work without a secret (backward compatible, for single-tenant setups).
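
Since the server speaks JSON-RPC 2.0, the secret travels as a request parameter. The sketch below shows a plausible request shape only; the method name vm_list and the params layout are assumptions, so consult the API Reference for the real schema:

```rust
/// Hedged sketch: build a JSON-RPC 2.0 request carrying a VM secret.
/// The method name and params layout are illustrative assumptions,
/// not the documented Hero Compute API.
fn rpc_request(method: &str, secret: &str, id: u64) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","method":"{method}","params":{{"secret":"{secret}"}},"id":{id}}}"#
    )
}

fn main() {
    // "vm_list" is a hypothetical method name; <your-generated-secret> is
    // the 16-character secret the UI generates for you.
    println!("{}", rpc_request("vm_list", "<your-generated-secret>", 1));
}
```

Whatever the exact schema, the point stands: every request that touches a VM must carry the same secret the VM was deployed with.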

See API Reference -- Security Model for full details.

Documentation

Crates

Crate                    Description
hero_compute             CLI -- registers and manages all service components via hero_proc
hero_compute_server      JSON-RPC daemon -- VM lifecycle, slice management
hero_compute_explorer    Multi-node registry -- aggregates nodes via heartbeats
hero_compute_sdk         Generated OpenRPC client library
hero_compute_ui          Admin dashboard (Bootstrap + Askama + Axum)
hero_compute_examples    SDK usage examples

License

Apache-2.0