Rethinking hero RPC, OSIS and backend architecture — hero_sdk + hero_core #13

Open
opened 2026-03-27 10:45:57 +00:00 by timur · 20 comments
Owner

Rethinking hero RPC, OSIS and backend architecture

Summary

Rename and restructure hero_rpc → hero_sdk and hero_osis → hero_core to simplify the service architecture, reduce boilerplate, and provide a true SDK for building hero services.


Agreed Architecture

hero_sdk (formerly hero_rpc)

A single workspace containing everything needed to build hero services:

hero_sdk/
  crates/
    oschema/              # Schema parser
    derive/               # Proc macros: OsisObject, openrpc_client!, hero_client!
    generator/            # Code generation engine
    hero_sdk_osis/        # Storage layer: DBTyped, SmartID, OTOML, indexing
    openrpc/              # OpenRPC 1.3 spec tooling
    client/               # Cross-platform RPC client (native + WASM, target-gated)
    server/               # Unified HeroServer (merged server + service crates)
    models/               # Shared domain models (feature-gated per domain)
      identity/           # User, Profile, Contact, Group, Session, Device
      communication/      # Chat, Message, Room, Call
      calendar/           # Calendar, Event, Planning
      ...                 # All 16 domains from hero_osis
  schemas/                # OSchema definitions (moved from hero_osis)

Key decisions:

  • Feature gating: per-domain features, all-domains composite, default = no domains
  • Models include basic generic business logic (send_message, etc.) that services extend (not override)
  • WASM support via target gating (#[cfg(target_arch = "wasm32")]), not feature gating
  • hero_sdk_osis keeps the OSIS name (no confusion since hero_osis becomes hero_core)
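The feature-gating decision above might look like this in hero_sdk's Cargo.toml — a sketch only; the feature names mirror the domain list from the proposal, and the omitted domains are elided:

```toml
# hero_sdk/Cargo.toml (sketch) — per-domain features, one composite, empty default
[features]
default = []                       # no domains compiled unless asked for
identity = []
communication = []
calendar = []
# ... one feature per remaining domain ...
all-domains = ["identity", "communication", "calendar"]  # plus the other domains

```

A consuming service would then pull only what it needs, e.g. `hero_sdk = { path = "...", features = ["identity", "communication"] }`.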

HeroServer — unified builder API

Services are single binaries. The server auto-creates 3 socket types:

~/hero/var/sockets/{service}.sock                         # Service info, context management
~/hero/var/sockets/{service}_ui.sock                      # UI
~/hero/var/sockets/{service}_server/{context}/{domain}.sock  # Per-context, per-domain OpenRPC
HeroServer::new("hero_food")
    .with_sdk_domain::<hero_sdk::models::identity::IdentityDomain>()
    .with_sdk_domain::<hero_sdk::models::communication::CommunicationDomain>()
    .with_domain::<models::delivery::DeliveryDomain>()
    .with_ui(ui::router())
    .run()
    .await

Per-domain sockets enable:

  • Context namespacing (multi-tenant isolation)
  • Agent-friendly access (give AI agent only the domain socket it needs)
  • Independent health checks per domain
  • Clean, focused OpenRPC specs per domain

The service socket ({service}.sock) handles context management — adding/removing contexts to the service (not automatic, user-controlled). Root context in hero_core manages global context lifecycle.
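The three-socket path convention above can be sketched as a small helper. This is hypothetical — the real HeroServer would presumably derive these paths internally, and the `HOME` expansion here is a stand-in for proper home-directory resolution:

```rust
use std::path::PathBuf;

/// Base directory for hero sockets, per the convention above.
fn socket_base() -> PathBuf {
    // Stand-in for `~` expansion; real code would resolve the home dir properly.
    PathBuf::from(std::env::var("HOME").unwrap_or_default()).join("hero/var/sockets")
}

/// {service}.sock — service info, context management.
fn service_socket(service: &str) -> PathBuf {
    socket_base().join(format!("{service}.sock"))
}

/// {service}_ui.sock — UI endpoint.
fn ui_socket(service: &str) -> PathBuf {
    socket_base().join(format!("{service}_ui.sock"))
}

/// {service}_server/{context}/{domain}.sock — per-context, per-domain OpenRPC.
fn domain_socket(service: &str, context: &str, domain: &str) -> PathBuf {
    socket_base()
        .join(format!("{service}_server"))
        .join(context)
        .join(format!("{domain}.sock"))
}

fn main() {
    // e.g. .../hero/var/sockets/hero_food_server/default/identity.sock
    println!("{}", domain_socket("hero_food", "default", "identity").display());
}
```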

hero_client! macro

Generates a unified typed client composing domain clients:

hero_client! {
    service: "hero_food",
    sdk_domains: [identity, communication],
    custom_domains: [delivery, restaurant, menu],
}
// Generates HeroFoodClient with .identity(), .delivery(), etc.
// Works on native (Unix sockets) and WASM (HTTP) via target gating

hero_core (formerly hero_osis)

Thin service embedding all hero_sdk models:

HeroServer::new("hero_core")
    .with_all_sdk_domains()
    .with_ui(core_ui::router())
    .run()
    .await

The root context manages context lifecycle (create/delete contexts). Other services add contexts manually via their service socket.

Service structure (single binary)

hero_food/
  Cargo.toml
  build.rs                    # OschemaBuilder for custom schemas
  schemas/                    # Custom .oschema files
  src/
    main.rs                   # ~15 lines: HeroServer builder
    lib.rs                    # Re-exports
    models/                   # Custom domain modules
    client/                   # hero_client! invocation
    ui/                       # Optional Axum router

Services extend hero_sdk's base logic without overriding. E.g., pay_for_order calls hero_sdk's make_transaction and adds app logic on top.
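The extend-not-override pattern might look like the following. All names here are stand-ins — `make_transaction` and `pay_for_order` are mentioned in the text, but their signatures and the module layout are assumptions:

```rust
// Stand-in for hero_sdk's generic finance logic (names assumed for illustration).
mod hero_sdk_finance {
    pub struct Transaction {
        pub from: String,
        pub to: String,
        pub amount: u64,
    }

    /// Generic SDK-level logic: validate and record a transfer.
    pub fn make_transaction(from: &str, to: &str, amount: u64) -> Result<Transaction, String> {
        if amount == 0 {
            return Err("amount must be non-zero".into());
        }
        Ok(Transaction { from: from.into(), to: to.into(), amount })
    }
}

/// Service-level logic in hero_food: call the SDK base, then add app logic on top.
fn pay_for_order(customer: &str, order_total: u64) -> Result<String, String> {
    // Extend, don't override: reuse the SDK's transaction primitive...
    let tx = hero_sdk_finance::make_transaction(customer, "hero_food", order_total)?;
    // ...then layer food-delivery-specific behaviour on top of it.
    Ok(format!("order paid: {} -> {} ({})", tx.from, tx.to, tx.amount))
}

fn main() {
    println!("{:?}", pay_for_order("alice", 1200));
}
```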

Cross-domain communication: via domain sockets (RPC clients), not in-process wiring. Unix socket IPC is ~10-50μs — negligible. Keeps domains independent and splittable.

External services (hero_embedder, hero_indexer)

hero_core uses OpenRPC clients to communicate with hero_embedder and hero_indexer — they remain separate services. Heavy deps (ONNX Runtime) stay out of hero_sdk.


Implementation Plan

All on development_rethinking branch. No repo renaming yet (backward compat).

  1. Restructure workspace — rename crates, merge server+service, create models scaffold, add hero_sdk prelude
  2. Build HeroServer API — builder pattern, 3-socket convention, auto health/openrpc/discovery, lifecycle CLI, HeroDomain trait
  3. Move models — schemas + generated types from hero_osis into hero_sdk/models, feature-gated
  4. Build hero_client! macro — compose domain clients, auto-socket paths, target-gated WASM
  5. Create hero_food example — proof-of-concept service validating the full DX
  6. Slim hero_osis → hero_core — thin wrapper, migrate remaining services one by one

Author
Owner

Deep Analysis & Architectural Proposal

I've done a thorough exploration of the entire Hero ecosystem — hero_rpc (7 crates), hero_osis (6 crates, 16 domains), and 23 hero_* service repos — to understand the current state, pain points, and what the ideal architecture looks like.


The Core Problem

The current split creates three friction layers:

  1. hero_rpc is misnamed and overloaded — it's not just an RPC library. It contains: schema parser (oschema), code generators, storage layer (DBTyped/OTOML), RPC server runtime (3,400-line server.rs), multi-socket orchestrator (OServer), service lifecycle (HeroRpcServer/HeroUiServer), proc macros, and OpenRPC tooling. It's really a full SDK.

  2. hero_osis is trapped — it has 16 carefully designed domain models (identity, communication, calendar, AI, finance, network, etc.) with ~50 schemas, but they're locked inside a single service binary. Any other service that wants User, Contact, or Chat types must either depend on hero_osis (heavy) or redefine them (fragile).

  3. Every new service reinvents the wheel — across 23 services, I found ~3,000+ lines of duplicated Makefile/build boilerplate, repeated server startup patterns, copy-paste lifecycle wrappers, and two competing implementation approaches (schema-driven vs manual Axum) with no middle ground.


What Works Well Today

Before proposing changes, it's worth acknowledging what's solid:

  • OSchema codegen pipeline — the schema-first approach with auto-generated types, handlers, SDKs, and OpenRPC specs is powerful
  • Unix socket + JSON-RPC 2.0 — clean IPC that all 23 services converge on
  • Feature-gated domains — compile only what you need
  • The 5-crate pattern (core + server + sdk + ui + examples) is well-understood
  • hero_proc lifecycle integration — standardized start/stop/status
  • DBTyped + SmartID + OTOML storage — simple, file-based, works

Proposed Architecture: hero_sdk + hero_core

1. hero_rpc becomes hero_sdk

The rename reflects reality: this IS a software development kit. But beyond renaming, the key structural change is absorbing the shared domain models and simplifying the server API.

hero_sdk/
  crates/
    oschema/              # Schema parser (unchanged)
    derive/               # Proc macros: OsisObject, openrpc_client! (unchanged)
    generator/            # Code generation engine (unchanged)
    storage/              # Renamed from "osis" — DBTyped, SmartID, OTOML, indexing
    openrpc/              # OpenRPC 1.3 spec tooling (unchanged)
    client/               # Cross-platform RPC client (unchanged)
    server/               # UNIFIED server — the big simplification
                          # Merges: hero_rpc_server (OServer) + hero_service (HeroRpcServer/HeroUiServer)
                          # Single entry point: HeroServer
                          # Auto-creates sockets per domain
                          # Auto-serves UI alongside RPC
                          # Built-in: health, openrpc, discovery, lifecycle CLI
    models/               # NEW — shared domain models (moved from hero_osis)
      identity/           # User, Profile, Contact, Group, Session, Device
      communication/      # Chat, Message, Room, Call
      calendar/           # Calendar, Event, Planning
      projects/           # Project, Story, Requirement
      business/           # Company, Deal, Contract
      finance/            # Account, Transaction
      network/            # Node, Farm, Grid
      ai/                 # Agent, Bot, AgentData
      flow/               # Workflow, WorkflowStep
      media/              # Photo, Song, Video
      ...                 # Each domain is feature-gated
                          # Each domain ships: types + generated CRUD handlers + client SDK
  schemas/                # OSchema definitions (moved from hero_osis)
  examples/
    recipe_server/        # Minimal example

Why models belong in hero_sdk (not in a separate repo):

  • They ARE the SDK — the whole point is "import hero_sdk with the domains you want"
  • Feature gates keep compilation fast (only compile selected domains)
  • Single version to track — no cross-repo version drift
  • The schemas, generated types, and generated handlers are all part of the same codegen pipeline

2. hero_osis becomes hero_core

hero_core becomes a thin service that embeds ALL hero_sdk models as the canonical hero backend:

hero_core/
  Cargo.toml              # depends on hero_sdk with features = ["all-domains"]
  crates/
    hero_core/            # Any custom business logic beyond CRUD
    hero_core_server/     # ~50 lines: create HeroServer, register all domains, run
    hero_core_ui/         # Admin dashboard

3. The Dream DX: Creating a New Service

This is the real payoff. Today, creating a hero service means setting up 5-7 crates, 150+ lines of Makefile, copy-pasting buildenv.sh, lifecycle wrapper, server startup, socket handling, and manually registering routes, health checks, and OpenRPC.

With hero_sdk, a new service looks like:

use hero_sdk::prelude::*;
use hero_sdk::models::{identity, communication};

// Define custom domain
mod tasks;

#[tokio::main]
async fn main() -> Result<()> {
    HeroServer::new("my_service")
        // Pick existing domains from hero_sdk
        .with_domain::<identity::IdentityDomain>()
        .with_domain::<communication::CommunicationDomain>()
        // Add custom domains
        .with_domain::<tasks::TasksDomain>()
        // Optional: custom UI
        .with_ui(my_ui_router())
        .run()
        .await
}

This single call automatically:

  • Creates socket: ~/hero/var/sockets/my_service.sock
  • Registers all CRUD handlers for selected domains
  • Exposes /health, /openrpc.json, /.well-known/heroservice.json
  • Handles CLI lifecycle (--start, --stop, --status)
  • Serves UI on the same socket (or separate socket via convention)
  • Enables hero_proc integration

Custom domains follow the same pattern, simplified:

my_service/
  Cargo.toml
  schemas/
    tasks.oschema          # Custom schema
  src/
    main.rs                # ~20 lines (see above)
    tasks/
      mod.rs               # Custom business logic
      types_generated.rs   # Auto-generated from schema
  ui/                      # Optional custom UI

Key Design Decisions to Discuss

A. Socket strategy: one socket per service vs one per domain?

Currently OServer creates hero_db_{context}_{domain}.sock — one socket per domain. This is fine for hero_core which runs all domains, but for individual services it adds complexity.

Proposal: Default to one socket per service (simpler), with an opt-in to split into per-domain sockets for services that need it. The server internally routes domain.Type.method calls to the right handler regardless. Most services only have 1-3 domains anyway.
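The internal routing of `domain.Type.method` calls on a single socket could be as simple as splitting the method string — a sketch assuming the three-part naming implied above:

```rust
/// Split a JSON-RPC method name of the form "domain.Type.method" into its
/// three parts, as a single-socket server would to pick the domain handler.
fn route(method: &str) -> Option<(&str, &str, &str)> {
    let mut parts = method.splitn(3, '.');
    match (parts.next(), parts.next(), parts.next()) {
        (Some(d), Some(t), Some(m)) if !d.is_empty() && !t.is_empty() && !m.is_empty() => {
            Some((d, t, m))
        }
        _ => None,
    }
}

fn main() {
    assert_eq!(route("identity.User.get"), Some(("identity", "User", "get")));
    assert_eq!(route("health"), None); // not a domain-scoped call
}
```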

B. Where does osis (storage/indexing) live?

The osis crate name is confusing because "OSIS" means different things at different levels — as a concept (Object Storage with Indexing & SmartID), as a crate (the storage engine), and as a service (hero_osis the backend).

Proposal: Rename the crate to hero_sdk_storage (or just storage within the workspace). It contains: DBTyped, SmartID, OTOML persistence, Tantivy indexing, and the CRUD dispatch layer. The name "OSIS" survives as the overarching concept but not as a crate name.

C. Should hero_sdk models include business logic or just types?

Two options:

  1. Types + CRUD only — hero_sdk models provide structs and auto-generated get/set/delete/list/find. Custom logic lives in each service.
  2. Types + CRUD + standard services — hero_sdk models also include standard service methods (e.g., ChatService.send_message(), UserService.authenticate()).

Proposal: Option 1 for now. Types + CRUD is the 80/20 — it covers most use cases and keeps hero_sdk lean. Services that need custom logic implement it in their own crate. We can revisit adding standard service methods later.

D. What about the monolith risk?

Moving 16 domains + schemas + generators + server into one repo makes hero_sdk large. Mitigations:

  • Feature gates (already proven to work in hero_osis)
  • Clear crate boundaries within the workspace
  • Each domain is self-contained (can be compiled independently)
  • The alternative (cross-repo version drift) is worse

E. Cross-domain wiring (AI + Flow, etc.)

Currently hero_osis_server has explicit wiring: ai.wire_flow_domain(flow). With the new architecture, this becomes:

HeroServer::new("my_service")
    .with_domain::<AiDomain>()
    .with_domain::<FlowDomain>()
    .with_wiring(|domains| {
        if let (Some(ai), Some(flow)) = (domains.get::<AiDomain>(), domains.get::<FlowDomain>()) {
            ai.wire_flow(flow);
        }
    })
    .run()
    .await

Or better: domains declare their optional dependencies, and the server auto-wires when both are present.
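That auto-wiring idea could be sketched with a trait where each domain declares its optional dependencies by name and the server wires the pairs that are both registered. Everything here is hypothetical — the trait, its methods, and the registration shape are assumptions, not the real hero_sdk API:

```rust
use std::collections::HashSet;

/// Hypothetical trait: a domain names itself and the domains it can
/// optionally integrate with; the server wires pairs that are both present.
trait HeroDomain {
    fn name(&self) -> &'static str;
    fn optional_deps(&self) -> &'static [&'static str] {
        &[]
    }
    fn wire(&mut self, dep: &'static str) {
        println!("{} wired to {}", self.name(), dep);
    }
}

struct AiDomain;
impl HeroDomain for AiDomain {
    fn name(&self) -> &'static str { "ai" }
    fn optional_deps(&self) -> &'static [&'static str] { &["flow"] }
}

struct FlowDomain;
impl HeroDomain for FlowDomain {
    fn name(&self) -> &'static str { "flow" }
}

/// Server-side auto-wiring: for each registered domain, wire every
/// optional dependency that is also registered. Returns the wired pairs.
fn auto_wire(domains: &mut [Box<dyn HeroDomain>]) -> Vec<(String, String)> {
    let present: HashSet<&'static str> = domains.iter().map(|d| d.name()).collect();
    let mut wired = Vec::new();
    for d in domains.iter_mut() {
        for &dep in d.optional_deps() {
            if present.contains(dep) {
                d.wire(dep);
                wired.push((d.name().to_string(), dep.to_string()));
            }
        }
    }
    wired
}

fn main() {
    let mut ds: Vec<Box<dyn HeroDomain>> = vec![Box::new(AiDomain), Box::new(FlowDomain)];
    let wired = auto_wire(&mut ds);
    assert_eq!(wired, vec![("ai".to_string(), "flow".to_string())]);
}
```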


Migration Path

This doesn't need to happen all at once. Suggested phases:

Phase 1: Rename + restructure hero_rpc to hero_sdk

  • Rename repo
  • Rename osis crate to storage
  • Merge server + service into unified server crate
  • Update all downstream dependencies

Phase 2: Move models from hero_osis to hero_sdk/models

  • Move schemas/ directory
  • Move generated types into hero_sdk
  • hero_osis becomes thin wrapper (hero_core preview)
  • Verify all 16 domains compile and pass tests

Phase 3: Simplify HeroServer API

  • Implement the builder pattern (.with_domain(), .with_ui(), .run())
  • Add auto-socket creation, auto-discovery
  • Create migration guide for existing services

Phase 4: Rename hero_osis to hero_core

  • Slim down to minimal service
  • Update hero_services orchestrator
  • Update hero_proxy routing

Phase 5: Migrate existing services

  • Update hero_books, hero_fossil, hero_auth etc. to use new hero_sdk API
  • Eliminate duplicated build boilerplate
  • Standardize on single service pattern

Questions for Discussion

  1. Do we want hero_sdk to be the repo name, or keep hero_rpc as the repo and use hero_sdk as the crate name? (I lean toward full rename for clarity)

  2. Should the unified HeroServer default to one socket per service or keep the per-domain socket pattern?

  3. For hero_core: should it expose ALL models by default, or should users compose their own hero_core with selected features?

  4. How do we handle the hero_embedder dependency? It requires ONNX Runtime which is heavy. Keep it as a separate opt-in or integrate into hero_sdk with a feature gate?

  5. Timeline preference: big-bang migration or incremental phases?

Author
Owner

Few comments:

  • 1: likewise, there should be a $service_client crate that also embeds the client for the hero server with the domains imported, and can also have more custom domains added, just like the server. this could even be hero_client. would be great if generation of custom schemas and integration into client and server happens in parallel. maybe, for the client, we can even use some macro or derive to pass it the domains we want from the hero_sdk and those generated from our custom implemented schemas and just be like okay i want a client that supports all this because my server has all this. opinions?

  • 2: about sockets, we want three sockets (sort of, see below about server) by default to begin with:
    ~/hero/var/sockets/my_service.sock
    ~/hero/var/sockets/my_service_ui.sock
    ~/hero/var/sockets/my_service_server/*/*.sock

the *s in the _server socket are: we want ...service_server/$context/$domain, where the domain is the models domain and the context is the context. there are two purposes here: 1. context namespacing which has many benefits. 2. modular domain openrpc server, which helps a lot especially with context limited agents using them.

the _ui is simply the server's ui endpoint, and service.sock can be like a generic service screen showing sockets health domains service description etc.

  • 3: this all means that services are now a single binary. this is great for simplicity and reduced memory use as we have shared runtime. this also means we only need the lifecycle of the server. the ui and models etc crates just become lib. so in fact, perhaps a better way to structure is to have a single lib crate with main.rs, and /client /ui /models etc modules. lets explore this with an example service for food delivery

As for B (Where does osis (storage/indexing) live?):

Let's just have it in hero_sdk as hero_sdk_osis (instead of storage), as you proposed. It's ok to keep the osis name, because this will now be the only osis once hero_osis becomes hero_core.

C: option 2.
D: agreed. Just make sure not to make a mess out of feature gating: we only want per-domain feature gating, perhaps one feature for all domains, and a default of no domains.

E: I don't exactly get E (cross-domain wiring, AI + Flow, etc.); please elaborate.

As for the migration path, I suggest doing 1, 2, 3, 4 all at once, everything except repo renaming (to keep backward compatibility), working on a branch called development_rethinking and importing that branch as a dep. Then I suggest we start with an example service, like hero_ride or hero_food or something. We can use that example to make sure things are as we want them, and then migrate the remaining services one by one.

As for the questions:

1, 2, and 3 should already be answered. 4: hero_embedder is another hero service, and hero_osis should use an openrpc client for hero_embedder to work against that service, for the reasons you mention, just as it should against hero_indexer. 5 should also be answered.

Author
Owner

Great feedback, this crystallizes things. Let me address each point, elaborate on cross-domain wiring, and then lay out a concrete implementation plan with the food delivery example.


1. Client Composition: hero_client! macro

Agreed — the client should mirror the server. Since each domain already generates its own typed client (from oschema codegen), the service client is just a compositor that holds all domain clients and knows the socket paths by convention.

Proposed macro:

// hero_food/src/client/mod.rs
use hero_sdk::client::prelude::*;

hero_client! {
    service: "hero_food",
    sdk_domains: [identity, communication],
    custom_domains: [delivery, restaurant, menu],
}

This generates:

pub struct HeroFoodClient {
    identity: hero_sdk::models::identity::IdentityClient,
    communication: hero_sdk::models::communication::CommunicationClient,
    delivery: crate::models::delivery::DeliveryClient,
    restaurant: crate::models::restaurant::RestaurantClient,
    menu: crate::models::menu::MenuClient,
}

impl HeroFoodClient {
    /// Connect to all domain sockets for the given context
    pub async fn connect(context: &str) -> Result<Self> {
        let base = hero_sdk::socket_path("hero_food_server", context);
        Ok(Self {
            identity: IdentityClient::connect_socket(
                &format!("{}/identity.sock", base)).await?,
            communication: CommunicationClient::connect_socket(
                &format!("{}/communication.sock", base)).await?,
            delivery: DeliveryClient::connect_socket(
                &format!("{}/delivery.sock", base)).await?,
            restaurant: RestaurantClient::connect_socket(
                &format!("{}/restaurant.sock", base)).await?,
            menu: MenuClient::connect_socket(
                &format!("{}/menu.sock", base)).await?,
        })
    }

    // Accessor methods
    pub fn identity(&self) -> &IdentityClient { &self.identity }
    pub fn communication(&self) -> &CommunicationClient { &self.communication }
    pub fn delivery(&self) -> &DeliveryClient { &self.delivery }
    pub fn restaurant(&self) -> &RestaurantClient { &self.restaurant }
    pub fn menu(&self) -> &MenuClient { &self.menu }
}

Usage:

let client = HeroFoodClient::connect("root").await?;
let user = client.identity().user_get("abc1").await?;
let order = client.delivery().order_create(new_order).await?;

Parallelism: Since client generation comes from oschema (same pipeline that generates server handlers), adding a new domain to both server and client is just one line each — add the domain to HeroServer::new().with_domain() and to hero_client!{ custom_domains: [...] }. The oschema build.rs generates both server handlers AND client code from the same schema.


2. Socket Strategy — confirmed

~/hero/var/sockets/hero_food.sock                              # Service info/health
~/hero/var/sockets/hero_food_ui.sock                           # UI
~/hero/var/sockets/hero_food_server/root/identity.sock         # SDK domain
~/hero/var/sockets/hero_food_server/root/communication.sock    # SDK domain
~/hero/var/sockets/hero_food_server/root/delivery.sock         # Custom domain
~/hero/var/sockets/hero_food_server/root/restaurant.sock       # Custom domain
~/hero/var/sockets/hero_food_server/root/menu.sock             # Custom domain
~/hero/var/sockets/hero_food_server/org_abc/identity.sock      # Another context
~/hero/var/sockets/hero_food_server/org_abc/delivery.sock      # Another context

Benefits:

  • Context namespacing: Multi-tenant isolation at socket level
  • Agent-friendly: An AI agent working with orders only needs access to delivery.sock, not the whole service. Context-limited scoping.
  • Independent lifecycle: Can health-check individual domains
  • OpenRPC per domain: Each socket serves its own /openrpc.json — clean, focused specs

The hero_food.sock top-level socket serves as a service registry — listing all contexts, domains, health status, and socket paths. Lightweight introspection endpoint.
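Both the per-domain path convention and the registry listing fall out of the directory layout; here is a plain-std sketch of the idea (the helper names `domain_socket_path` and `discover_domains` are hypothetical, only the layout comes from this proposal, and regular files stand in for sockets):

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Hypothetical helper: builds {home}/hero/var/sockets/{service}_server/{context}/{domain}.sock
/// following the proposed socket convention.
pub fn domain_socket_path(home: &str, service: &str, context: &str, domain: &str) -> PathBuf {
    let mut p = PathBuf::from(home);
    p.push("hero/var/sockets");
    p.push(format!("{service}_server"));
    p.push(context);
    p.push(format!("{domain}.sock"));
    p
}

/// Hypothetical registry builder: returns (context, domain) pairs found under
/// the {service}_server/ directory, one entry per {context}/{domain}.sock.
pub fn discover_domains(root: &Path) -> Vec<(String, String)> {
    let mut out = Vec::new();
    let Ok(contexts) = fs::read_dir(root) else { return out };
    for ctx in contexts.flatten() {
        if !ctx.path().is_dir() {
            continue;
        }
        let ctx_name = ctx.file_name().to_string_lossy().into_owned();
        let Ok(socks) = fs::read_dir(ctx.path()) else { continue };
        for sock in socks.flatten() {
            let name = sock.file_name().to_string_lossy().into_owned();
            if let Some(domain) = name.strip_suffix(".sock") {
                out.push((ctx_name.clone(), domain.to_string()));
            }
        }
    }
    out.sort();
    out
}
```

With this shape, the info socket never needs per-domain registration code: the filesystem layout is the registry, so a context or domain added by the server shows up in discovery automatically.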


3. Single Binary / Module Architecture — the hero_food example

Agreed on single binary. Here's the concrete structure:

hero_food/
  Cargo.toml
  build.rs                    # OschemaBuilder for custom schemas
  Makefile                    # Minimal — inherits from hero_sdk build tooling
  schemas/
    delivery.oschema          # Order, Driver, DeliveryZone
    restaurant.oschema        # Restaurant, Branch, Review
    menu.oschema              # Menu, MenuItem, Category
  src/
    main.rs                   # ~15 lines: HeroServer builder
    lib.rs                    # Re-exports for external use
    models/
      mod.rs                  # Domain registration
      delivery/
        mod.rs                # DeliveryDomain + custom business logic
        types_generated.rs    # Generated from delivery.oschema
        osis_server_generated.rs  # Generated CRUD handlers
      restaurant/
        mod.rs
        types_generated.rs
        osis_server_generated.rs
      menu/
        mod.rs
        types_generated.rs
        osis_server_generated.rs
    client/
      mod.rs                  # hero_client! macro invocation
    ui/
      mod.rs                  # Axum router
      templates/
        dashboard.html        # Service-specific UI
      static/
        app.js

main.rs (~15 lines):

use hero_sdk::prelude::*;

mod models;
mod client;
mod ui;

#[tokio::main]
async fn main() -> Result<()> {
    HeroServer::new("hero_food")
        // Standard domains from hero_sdk
        .with_sdk_domain::<hero_sdk::models::identity::IdentityDomain>()
        .with_sdk_domain::<hero_sdk::models::communication::CommunicationDomain>()
        // Custom domains (defined in this service)
        .with_domain::<models::delivery::DeliveryDomain>()
        .with_domain::<models::restaurant::RestaurantDomain>()
        .with_domain::<models::menu::MenuDomain>()
        // UI
        .with_ui(ui::router())
        .run()
        .await
}

build.rs:

use hero_sdk::generator::OschemaBuildConfig;

fn main() {
    OschemaBuildConfig::new()
        .schemas_dir("schemas")
        .domain("delivery", "Orders, drivers, delivery zones")
        .domain("restaurant", "Restaurants, branches, reviews")
        .domain("menu", "Menus, items, categories")
        .generate_server()
        .generate_client()
        .build();
}

Example schema — schemas/delivery.oschema:

// Order management for food delivery

object Order {
    sid: SmartId           // Auto-generated
    customer_id: String    // Reference to identity.User
    restaurant_id: String  // Reference to restaurant.Restaurant
    items: Vec<OrderItem>
    status: OrderStatus
    total_price: f64
    delivery_address: String
    driver_id: Option<String>
    created_at: String
    updated_at: String
}

enum OrderStatus {
    Pending
    Confirmed
    Preparing
    ReadyForPickup
    InTransit
    Delivered
    Cancelled
}

object OrderItem {
    menu_item_id: String
    quantity: u32
    price: f64
    notes: Option<String>
}

object Driver {
    sid: SmartId
    user_id: String        // Reference to identity.User
    vehicle_type: String
    status: DriverStatus
    current_location: Option<Location>
    rating: f64
}

enum DriverStatus {
    Available
    OnDelivery
    Offline
}

object DeliveryZone {
    sid: SmartId
    name: String
    polygon: Vec<Location>
    base_fee: f64
    active: bool
}

object Location {
    lat: f64
    lng: f64
}

// Custom service methods beyond CRUD
service DeliveryService {
    assign_driver(order_id: String, driver_id: String) -> Order
    estimate_delivery(restaurant_id: String, address: String) -> DeliveryEstimate
    get_active_orders(driver_id: String) -> Vec<Order>
    update_driver_location(driver_id: String, location: Location) -> Driver
}

object DeliveryEstimate {
    estimated_minutes: u32
    fee: f64
    zone: String
}

Custom business logic — src/models/delivery/mod.rs:

// types_generated.rs and osis_server_generated.rs are auto-included
include!(concat!(env!("OUT_DIR"), "/delivery/types_generated.rs"));
include!(concat!(env!("OUT_DIR"), "/delivery/osis_server_generated.rs"));

use hero_sdk::prelude::*;

impl DeliveryDomain {
    /// Custom: assign nearest available driver to an order
    pub fn assign_driver(&self, order_id: &str, driver_id: &str) -> Result<Order> {
        let mut order = self.order_db.get(&SmartId::from(order_id))?;
        let driver = self.driver_db.get(&SmartId::from(driver_id))?;

        if driver.status != DriverStatus::Available {
            return Err(RpcError::InvalidParams("Driver not available".into()));
        }

        order.driver_id = Some(driver_id.to_string());
        order.status = OrderStatus::Confirmed;
        self.order_db.set(&order)?;
        Ok(order)
    }

    /// Custom: estimate delivery time and fee
    pub fn estimate_delivery(&self, restaurant_id: &str, address: &str) -> Result<DeliveryEstimate> {
        // Business logic: calculate based on zone, distance, current load
        let _zones = self.delivery_zone_db.list()?;
        // ... zone matching, fee calculation
        Ok(DeliveryEstimate {
            estimated_minutes: 35,
            fee: 5.99,
            zone: "downtown".into(),
        })
    }
}

client/mod.rs:

use hero_sdk::client::prelude::*;

hero_client! {
    service: "hero_food",
    sdk_domains: [identity, communication],
    custom_domains: [delivery, restaurant, menu],
}

That's it. The entire service is:

  • 3 schema files defining the data model
  • 1 main.rs (~15 lines)
  • 1 build.rs (~12 lines)
  • 1 client/mod.rs (~6 lines)
  • Custom business logic only where needed (delivery/mod.rs)
  • Optional UI

Compare to today: 5-7 crates, a 150+ line Makefile, a manual lifecycle wrapper, manual socket handling, manual route registration.


Elaboration on E: Cross-Domain Wiring

Today in hero_osis_server/main.rs there's this:

#[cfg(all(feature = "ai", feature = "flow"))]
if let (Some(ai), Some(flow)) = (ai_domain.clone(), flow_domain.clone()) {
    ai.wire_flow_domain(flow);
}

What this does: the AI domain handler receives an Arc<FlowDomain> reference so it can call self.flow_domain.execute_workflow(workflow_id, params) directly in-process — no network hop, no serialization. When an AI agent decides to run a workflow, it bypasses the Flow domain's socket and calls the handler directly via the Arc pointer.

The question was: in the new architecture with per-domain sockets, how should domains that need each other communicate?

Answer: Since we're standardizing on per-domain sockets and you've confirmed that hero_embedder/hero_indexer should use OpenRPC clients (service-to-service over sockets), the consistent approach is: domains within the same service also communicate via their domain sockets.

Example: AI domain wants to execute a Flow workflow:

// Inside AI domain handler
impl AiDomain {
    async fn run_agent_workflow(&self, agent: &Agent, input: Value) -> Result<Value> {
        // Connect to the flow domain's socket (same service, same machine)
        let flow_client = FlowClient::connect_socket(
            &self.service_socket_path("flow")  // ~/hero/var/sockets/hero_food_server/root/flow.sock
        ).await?;

        flow_client.workflow_execute(WorkflowExecuteInput {
            workflow_id: agent.workflow_id.clone(),
            params: input,
        }).await
    }
}

Why this is fine:

  • Unix socket IPC is ~10-50 microseconds per call — negligible
  • Consistent mental model: every domain interaction is an RPC call
  • No special wiring code needed
  • Domains stay truly independent — can be split into separate services later without code changes
  • Each domain's OpenRPC spec is the complete contract

When in-process might still make sense: Hot paths where even microseconds matter (e.g., a loop processing thousands of items). For those rare cases, we can keep an optional .with_wiring() escape hatch. But the default should be socket-based.
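The latency claim is easy to sanity-check with a throwaway echo server over a Unix domain socket, plain std and no hero_sdk involved (a sketch: it assumes the 4-byte message arrives in one read, which holds in practice for a stream this small):

```rust
use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::time::{Duration, Instant};

/// Measures the average round-trip time of a tiny message over a Unix
/// domain socket, using an in-process echo thread. Unix-only sketch.
pub fn unix_socket_round_trip(iters: u32) -> std::io::Result<Duration> {
    let path = std::env::temp_dir().join("hero_ipc_bench.sock");
    let _ = std::fs::remove_file(&path);
    let listener = UnixListener::bind(&path)?;

    // Echo thread: read until EOF, write everything back.
    let server = std::thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        let mut buf = [0u8; 64];
        loop {
            let n = conn.read(&mut buf).unwrap();
            if n == 0 {
                break;
            }
            conn.write_all(&buf[..n]).unwrap();
        }
    });

    let mut client = UnixStream::connect(&path)?;
    let mut buf = [0u8; 64];
    let start = Instant::now();
    for _ in 0..iters {
        client.write_all(b"ping")?;
        let n = client.read(&mut buf)?;
        assert_eq!(&buf[..n], &b"ping"[..]);
    }
    let per_call = start.elapsed() / iters;
    drop(client); // EOF lets the echo thread exit
    server.join().unwrap();
    let _ = std::fs::remove_file(&path);
    Ok(per_call)
}
```

On a typical dev machine this lands in the tens-of-microseconds range per round trip, consistent with the estimate above; a real domain call adds JSON serialization on top, but stays well under a millisecond.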


Implementation Plan

All on development_rethinking branch, importing as git dep. No repo renaming yet (backward compat).

Step 1: Restructure hero_rpc workspace into hero_sdk layout

What changes:

  • Rename crates/osis/ → crates/hero_sdk_osis/ (the storage/db/index layer)
  • Merge crates/server/ + crates/service/ → crates/server/ (unified HeroServer)
  • Create crates/models/ (empty, scaffold for domain models)
  • Update all internal Cargo.toml dependencies
  • Add hero_sdk top-level crate that re-exports everything via prelude

Files touched: Cargo.toml (workspace), each crate's Cargo.toml, internal use paths.
No behavioral change — just reorganization.

Step 2: Build the unified HeroServer API

What changes:

  • Implement HeroServer builder:
    • .new(service_name) — initializes config
    • .with_sdk_domain::<D>() — registers a domain from hero_sdk models
    • .with_domain::<D>() — registers a custom domain
    • .with_ui(router) — adds UI router
    • .run() — creates all sockets, starts listening
  • Auto socket creation following the 3-socket convention:
    • {service}.sock — service info
    • {service}_ui.sock — UI
    • {service}_server/{context}/{domain}.sock — per-domain
  • Auto-inject /health, /openrpc.json, /.well-known/heroservice.json per domain socket
  • CLI lifecycle integration (--start, --stop, --status)
  • Trait: HeroDomain — what a domain must implement to be registered
pub trait HeroDomain: Send + Sync + 'static {
    fn domain_name() -> &'static str;
    fn type_names() -> &'static [&'static str];
    fn create(db_path: &str, user_id: u16) -> Result<Arc<Self>>;
    fn handle_rpc(&self, type_name: &str, method: &str, data: &str) -> Result<String>;
    fn handle_service(&self, method: &str, params: &Value) -> Result<String>;
    fn openrpc_spec() -> &'static str;
}
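One implementation note: as written, HeroDomain consists of associated functions (no &self receiver), so it is not object-safe and the builder cannot hold a `Vec<Box<dyn HeroDomain>>` directly; it will likely need a type-erased dispatch layer. A simplified sketch of that pattern, with a made-up minimal trait rather than the proposed API:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Simplified, dyn-safe stand-in for HeroDomain, to show how a builder can
// erase per-domain types behind one dispatch map.
trait Domain: Send + Sync {
    fn name(&self) -> &'static str;
    fn handle(&self, method: &str, data: &str) -> String;
}

struct HeroServer {
    domains: HashMap<&'static str, Arc<dyn Domain>>,
}

impl HeroServer {
    fn new() -> Self {
        Self { domains: HashMap::new() }
    }

    // The real `.with_domain::<D>()` would construct D itself; here we
    // accept an already-built domain to keep the sketch small.
    fn with_domain(mut self, d: Arc<dyn Domain>) -> Self {
        self.domains.insert(d.name(), d);
        self
    }

    /// Route an incoming call to the registered domain, if any.
    fn dispatch(&self, domain: &str, method: &str, data: &str) -> Option<String> {
        self.domains.get(domain).map(|d| d.handle(method, data))
    }
}

struct EchoDomain;
impl Domain for EchoDomain {
    fn name(&self) -> &'static str { "echo" }
    fn handle(&self, method: &str, data: &str) -> String {
        format!("{method}:{data}")
    }
}
```

The per-socket listener then only needs the domain name (taken from its path) to route into `dispatch`, which keeps the socket layer generic over all registered domains.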

Step 3: Move models from hero_osis → hero_sdk/models

What changes:

  • Move hero_osis/schemas/ → hero_sdk/schemas/
  • Move generated types + handlers into hero_sdk/crates/models/
  • Feature-gate each domain: identity, communication, calendar, etc.
  • Default feature: no domains. all-domains feature enables all.
  • Include standard service methods (option 2) in each domain
  • Update hero_osis to depend on hero_sdk/models instead of having its own
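The feature layout described above could look like this Cargo.toml fragment for the models crate (a sketch; the exact feature and domain names beyond those listed in this issue are assumptions):

```toml
[features]
default = []          # no domains compiled in by default
identity = []
communication = []
calendar = []
# ... one feature per remaining domain (16 total)
all-domains = ["identity", "communication", "calendar"]  # plus the rest
```

A consuming service then opts in per domain on its hero_sdk dependency, e.g. `features = ["identity", "communication"]`, while hero_core alone pulls `all-domains`.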

Step 4: Build hero_client! macro

What changes:

  • New proc macro in hero_sdk/crates/derive/
  • Generates service client struct composing domain clients
  • Auto-connects to socket paths by convention
  • Supports both sdk_domains and custom_domains
  • WASM support: falls back to HTTP transport when not on native
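Since the thread settled on target gating rather than feature gating for WASM, the generated client's transport selection can be a compile-time `#[cfg]` switch; a minimal sketch (the function and transport names are placeholders, not the generated API):

```rust
/// Hypothetical transport selector: native builds talk over Unix sockets,
/// wasm32 builds fall back to HTTP. The choice is made at compile time,
/// so no cargo feature is needed and both targets build from one source.
pub fn transport_kind() -> &'static str {
    #[cfg(target_arch = "wasm32")]
    {
        "http"
    }
    #[cfg(not(target_arch = "wasm32"))]
    {
        "unix-socket"
    }
}
```

Exactly one of the two blocks survives cfg expansion, so each target compiles a single tail expression and dead transport code never reaches the binary.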

Step 5: Create hero_food example service

What changes:

  • New repo: hero_food (or a directory within hero_sdk/examples/)
  • Define 3 custom schemas: delivery, restaurant, menu
  • Import 2 SDK domains: identity, communication
  • Implement custom business logic for delivery
  • Build minimal UI
  • Generate client with hero_client!
  • Write integration tests
  • This is the proof-of-concept — iterate on DX until it feels right

Step 6: Slim hero_osis → hero_core

What changes:

  • hero_osis becomes a thin service depending on hero_sdk with all-domains
  • main.rs shrinks to ~20 lines (HeroServer builder)
  • Custom business logic (AI wiring, embedder integration) stays as service-level code
  • hero_osis_ui becomes hero_core_ui
  • Update hero_services orchestrator, hero_proxy

Order of execution:

Steps 1-4 happen in hero_rpc repo on development_rethinking branch.
Step 5 can start as soon as steps 1-2 are done (doesn't need models moved yet — just needs HeroServer working).
Step 6 happens after step 5 validates the DX.


Questions (if any remain)

  1. For hero_client! — should it also generate a WASM-compatible client automatically (for use in hero_archipelagos/Dioxus)? The current OsisClient already has #[cfg(target_arch = "wasm32")] paths. We could make the generated client work in both native and WASM out of the box.

  2. For the service info socket (hero_food.sock) — what should it serve beyond health/discovery? Should it support administrative commands like listing active contexts, showing domain stats, or triggering context imports/exports? (Currently OServer's core socket does some of this.)

  3. For the standard service methods (option 2 — business logic in hero_sdk models): how much logic should live in hero_sdk vs in individual services? For example, ChatService.send_message() requires knowing about message routing, notifications, etc. Should hero_sdk provide a "default" implementation that services can override? Or just the interface/trait that services must implement?

Great feedback, this crystallizes things. Let me address each point, elaborate on cross-domain wiring, and then lay out a concrete implementation plan with the food delivery example. --- ## 1. Client Composition: `hero_client!` macro Agreed — the client should mirror the server. Since each domain already generates its own typed client (from oschema codegen), the service client is just a **compositor** that holds all domain clients and knows the socket paths by convention. Proposed macro: ```rust // hero_food/src/client/mod.rs use hero_sdk::client::prelude::*; hero_client! { service: "hero_food", sdk_domains: [identity, communication], custom_domains: [delivery, restaurant, menu], } ``` This generates: ```rust pub struct HeroFoodClient { identity: hero_sdk::models::identity::IdentityClient, communication: hero_sdk::models::communication::CommunicationClient, delivery: crate::models::delivery::DeliveryClient, restaurant: crate::models::restaurant::RestaurantClient, menu: crate::models::menu::MenuClient, } impl HeroFoodClient { /// Connect to all domain sockets for the given context pub async fn connect(context: &str) -> Result<Self> { let base = hero_sdk::socket_path("hero_food_server", context); Ok(Self { identity: IdentityClient::connect_socket( &format!("{}/identity.sock", base)).await?, communication: CommunicationClient::connect_socket( &format!("{}/communication.sock", base)).await?, delivery: DeliveryClient::connect_socket( &format!("{}/delivery.sock", base)).await?, restaurant: RestaurantClient::connect_socket( &format!("{}/restaurant.sock", base)).await?, menu: MenuClient::connect_socket( &format!("{}/menu.sock", base)).await?, }) } // Accessor methods pub fn identity(&self) -> &IdentityClient { &self.identity } pub fn communication(&self) -> &CommunicationClient { &self.communication } pub fn delivery(&self) -> &DeliveryClient { &self.delivery } pub fn restaurant(&self) -> &RestaurantClient { &self.restaurant } pub fn menu(&self) -> &MenuClient { &self.menu 
} } ``` Usage: ```rust let client = HeroFoodClient::connect("root").await?; let user = client.identity().user_get("abc1").await?; let order = client.delivery().order_create(new_order).await?; ``` **Parallelism**: Since client generation comes from oschema (same pipeline that generates server handlers), adding a new domain to both server and client is just one line each — add the domain to `HeroServer::new().with_domain()` and to `hero_client!{ custom_domains: [...] }`. The oschema build.rs generates both server handlers AND client code from the same schema. --- ## 2. Socket Strategy — confirmed ``` ~/hero/var/sockets/hero_food.sock # Service info/health ~/hero/var/sockets/hero_food_ui.sock # UI ~/hero/var/sockets/hero_food_server/root/identity.sock # SDK domain ~/hero/var/sockets/hero_food_server/root/communication.sock # SDK domain ~/hero/var/sockets/hero_food_server/root/delivery.sock # Custom domain ~/hero/var/sockets/hero_food_server/root/restaurant.sock # Custom domain ~/hero/var/sockets/hero_food_server/root/menu.sock # Custom domain ~/hero/var/sockets/hero_food_server/org_abc/identity.sock # Another context ~/hero/var/sockets/hero_food_server/org_abc/delivery.sock # Another context ``` Benefits: - **Context namespacing**: Multi-tenant isolation at socket level - **Agent-friendly**: An AI agent working with orders only needs access to `delivery.sock`, not the whole service. Context-limited scoping. - **Independent lifecycle**: Can health-check individual domains - **OpenRPC per domain**: Each socket serves its own `/openrpc.json` — clean, focused specs The `hero_food.sock` top-level socket serves as a service registry — listing all contexts, domains, health status, and socket paths. Lightweight introspection endpoint. --- ## 3. Single Binary / Module Architecture — the hero_food example Agreed on single binary. 
Here's the concrete structure: ``` hero_food/ Cargo.toml build.rs # OschemaBuilder for custom schemas Makefile # Minimal — inherits from hero_sdk build tooling schemas/ delivery.oschema # Order, Driver, DeliveryZone restaurant.oschema # Restaurant, Branch, Review menu.oschema # Menu, MenuItem, Category src/ main.rs # ~15 lines: HeroServer builder lib.rs # Re-exports for external use models/ mod.rs # Domain registration delivery/ mod.rs # DeliveryDomain + custom business logic types_generated.rs # Generated from delivery.oschema osis_server_generated.rs # Generated CRUD handlers restaurant/ mod.rs types_generated.rs osis_server_generated.rs menu/ mod.rs types_generated.rs osis_server_generated.rs client/ mod.rs # hero_client! macro invocation ui/ mod.rs # Axum router templates/ dashboard.html # Service-specific UI static/ app.js ``` **main.rs** (~15 lines): ```rust use hero_sdk::prelude::*; mod models; mod client; mod ui; #[tokio::main] async fn main() -> Result<()> { HeroServer::new("hero_food") // Standard domains from hero_sdk .with_sdk_domain::<hero_sdk::models::identity::IdentityDomain>() .with_sdk_domain::<hero_sdk::models::communication::CommunicationDomain>() // Custom domains (defined in this service) .with_domain::<models::delivery::DeliveryDomain>() .with_domain::<models::restaurant::RestaurantDomain>() .with_domain::<models::menu::MenuDomain>() // UI .with_ui(ui::router()) .run() .await } ``` **build.rs**: ```rust use hero_sdk::generator::OschemaBuildConfig; fn main() { OschemaBuildConfig::new() .schemas_dir("schemas") .domain("delivery", "Orders, drivers, delivery zones") .domain("restaurant", "Restaurants, branches, reviews") .domain("menu", "Menus, items, categories") .generate_server() .generate_client() .build(); } ``` **Example schema** — `schemas/delivery.oschema`: ``` // Order management for food delivery object Order { sid: SmartId // Auto-generated customer_id: String // Reference to identity.User restaurant_id: String // Reference to 
restaurant.Restaurant items: Vec<OrderItem> status: OrderStatus total_price: f64 delivery_address: String driver_id: Option<String> created_at: String updated_at: String } enum OrderStatus { Pending Confirmed Preparing ReadyForPickup InTransit Delivered Cancelled } object OrderItem { menu_item_id: String quantity: u32 price: f64 notes: Option<String> } object Driver { sid: SmartId user_id: String // Reference to identity.User vehicle_type: String status: DriverStatus current_location: Option<Location> rating: f64 } enum DriverStatus { Available OnDelivery Offline } object DeliveryZone { sid: SmartId name: String polygon: Vec<Location> base_fee: f64 active: bool } object Location { lat: f64 lng: f64 } // Custom service methods beyond CRUD service DeliveryService { assign_driver(order_id: String, driver_id: String) -> Order estimate_delivery(restaurant_id: String, address: String) -> DeliveryEstimate get_active_orders(driver_id: String) -> Vec<Order> update_driver_location(driver_id: String, location: Location) -> Driver } object DeliveryEstimate { estimated_minutes: u32 fee: f64 zone: String } ``` **Custom business logic** — `src/models/delivery/mod.rs`: ```rust // types_generated.rs and osis_server_generated.rs are auto-included include!(concat!(env!("OUT_DIR"), "/delivery/types_generated.rs")); include!(concat!(env!("OUT_DIR"), "/delivery/osis_server_generated.rs")); use hero_sdk::prelude::*; impl DeliveryDomain { /// Custom: assign nearest available driver to an order pub fn assign_driver(&self, order_id: &str, driver_id: &str) -> Result<Order> { let mut order = self.order_db.get(&SmartId::from(order_id))?; let driver = self.driver_db.get(&SmartId::from(driver_id))?; if driver.status != DriverStatus::Available { return Err(RpcError::InvalidParams("Driver not available".into())); } order.driver_id = Some(driver_id.to_string()); order.status = OrderStatus::Confirmed; self.order_db.set(&order)?; Ok(order) } /// Custom: estimate delivery time and fee pub fn 
estimate_delivery(&self, restaurant_id: &str, address: &str) -> Result<DeliveryEstimate> { // Business logic: calculate based on zone, distance, current load let zones = self.delivery_zone_db.list()?; // ... zone matching, fee calculation Ok(DeliveryEstimate { estimated_minutes: 35, fee: 5.99, zone: "downtown".into(), }) } } ``` **client/mod.rs**: ```rust use hero_sdk::client::prelude::*; hero_client! { service: "hero_food", sdk_domains: [identity, communication], custom_domains: [delivery, restaurant, menu], } ``` That's it. The entire service is: - 3 schema files defining the data model - 1 main.rs (~15 lines) - 1 build.rs (~12 lines) - 1 client/mod.rs (~6 lines) - Custom business logic only where needed (delivery/mod.rs) - Optional UI Compare to today: 5-7 crates, 150+ lines Makefile, manual lifecycle wrapper, manual socket handling, manual route registration. --- ## Elaboration on E: Cross-Domain Wiring Today in hero_osis_server/main.rs there's this: ```rust #[cfg(all(feature = "ai", feature = "flow"))] if let (Some(ai), Some(flow)) = (ai_domain.clone(), flow_domain.clone()) { ai.wire_flow_domain(flow); } ``` What this does: the AI domain handler receives an `Arc<FlowDomain>` reference so it can call `self.flow_domain.execute_workflow(workflow_id, params)` **directly in-process** — no network hop, no serialization. When an AI agent decides to run a workflow, it bypasses the Flow domain's socket and calls the handler directly via the Arc pointer. **The question was**: in the new architecture with per-domain sockets, how should domains that need each other communicate? **Answer**: Since we're standardizing on per-domain sockets and you've confirmed that hero_embedder/hero_indexer should use OpenRPC clients (service-to-service over sockets), the consistent approach is: **domains within the same service also communicate via their domain sockets**. 
Example: AI domain wants to execute a Flow workflow: ```rust // Inside AI domain handler impl AiDomain { async fn run_agent_workflow(&self, agent: &Agent, input: Value) -> Result<Value> { // Connect to the flow domain's socket (same service, same machine) let flow_client = FlowClient::connect_socket( &self.service_socket_path("flow") // ~/hero/var/sockets/hero_food_server/root/flow.sock ).await?; flow_client.workflow_execute(WorkflowExecuteInput { workflow_id: agent.workflow_id.clone(), params: input, }).await } } ``` **Why this is fine:** - Unix socket IPC is ~10-50 microseconds per call — negligible - Consistent mental model: every domain interaction is an RPC call - No special wiring code needed - Domains stay truly independent — can be split into separate services later without code changes - Each domain's OpenRPC spec is the complete contract **When in-process might still make sense**: Hot paths where even microseconds matter (e.g., a loop processing thousands of items). For those rare cases, we can keep an optional `.with_wiring()` escape hatch. But the default should be socket-based. --- ## Implementation Plan All on `development_rethinking` branch, importing as git dep. No repo renaming yet (backward compat). ### Step 1: Restructure hero_rpc workspace into hero_sdk layout **What changes:** - Rename `crates/osis/` → `crates/hero_sdk_osis/` (the storage/db/index layer) - Merge `crates/server/` + `crates/service/` → `crates/server/` (unified HeroServer) - Create `crates/models/` (empty, scaffold for domain models) - Update all internal Cargo.toml dependencies - Add `hero_sdk` top-level crate that re-exports everything via `prelude` **Files touched:** Cargo.toml (workspace), each crate's Cargo.toml, internal `use` paths. **No behavioral change** — just reorganization. 
### Step 2: Build the unified HeroServer API

**What changes:**

- Implement `HeroServer` builder:
  - `.new(service_name)` — initializes config
  - `.with_sdk_domain::<D>()` — registers a domain from hero_sdk models
  - `.with_domain::<D>()` — registers a custom domain
  - `.with_ui(router)` — adds UI router
  - `.run()` — creates all sockets, starts listening
- Auto socket creation following the 3-socket convention:
  - `{service}.sock` — service info
  - `{service}_ui.sock` — UI
  - `{service}_server/{context}/{domain}.sock` — per-domain
- Auto-inject `/health`, `/openrpc.json`, `/.well-known/heroservice.json` per domain socket
- CLI lifecycle integration (`--start`, `--stop`, `--status`)
- Trait: `HeroDomain` — what a domain must implement to be registered

```rust
pub trait HeroDomain: Send + Sync + 'static {
    fn domain_name() -> &'static str;
    fn type_names() -> &'static [&'static str];
    fn create(db_path: &str, user_id: u16) -> Result<Arc<Self>>;
    fn handle_rpc(&self, type_name: &str, method: &str, data: &str) -> Result<String>;
    fn handle_service(&self, method: &str, params: &Value) -> Result<String>;
    fn openrpc_spec() -> &'static str;
}
```

### Step 3: Move models from hero_osis → hero_sdk/models

**What changes:**

- Move `hero_osis/schemas/` → `hero_sdk/schemas/`
- Move generated types + handlers into `hero_sdk/crates/models/`
- Feature-gate each domain: `identity`, `communication`, `calendar`, etc.
- Default feature: no domains. `all-domains` feature enables all.
- Include standard service methods (option 2) in each domain
- Update hero_osis to depend on `hero_sdk/models` instead of having its own

### Step 4: Build `hero_client!` macro

**What changes:**

- New proc macro in `hero_sdk/crates/derive/`
- Generates service client struct composing domain clients
- Auto-connects to socket paths by convention
- Supports both `sdk_domains` and `custom_domains`
- WASM support: falls back to HTTP transport when not on native

### Step 5: Create hero_food example service

**What changes:**

- New repo: `hero_food` (or a directory within hero_sdk/examples/)
- Define 3 custom schemas: delivery, restaurant, menu
- Import 2 SDK domains: identity, communication
- Implement custom business logic for delivery
- Build minimal UI
- Generate client with `hero_client!`
- Write integration tests
- **This is the proof-of-concept** — iterate on DX until it feels right

### Step 6: Slim hero_osis → hero_core

**What changes:**

- hero_osis becomes a thin service depending on `hero_sdk` with `all-domains`
- main.rs shrinks to ~20 lines (HeroServer builder)
- Custom business logic (AI wiring, embedder integration) stays as service-level code
- hero_osis_ui becomes hero_core_ui
- Update hero_services orchestrator, hero_proxy

### Order of execution

Steps 1-4 happen in the hero_rpc repo on the `development_rethinking` branch. Step 5 can start as soon as steps 1-2 are done (it doesn't need the models moved yet — just a working HeroServer). Step 6 happens after step 5 validates the DX.

---

## Questions (if any remain)

1. For `hero_client!` — should it also generate a **WASM-compatible** client automatically (for use in hero_archipelagos/Dioxus)? The current `OsisClient` already has `#[cfg(target_arch = "wasm32")]` paths. We could make the generated client work in both native and WASM out of the box.

2. For the service info socket (`hero_food.sock`) — what should it serve beyond health/discovery?
   Should it support administrative commands like listing active contexts, showing domain stats, or triggering context imports/exports? (Currently `OServer`'s core socket does some of this.)

3. For the standard service methods (option 2 — business logic in hero_sdk models): how much logic should live in hero_sdk vs in individual services? For example, `ChatService.send_message()` requires knowing about message routing, notifications, etc. Should hero_sdk provide a "default" implementation that services can override? Or just the interface/trait that services must implement?
Author
Owner
  1. Yes, not as a separate client but as one unified client. There used to be code for this (a unified client generator) which can be used for inspiration. Target or feature gating (whichever is more appropriate; for WASM I'd say target) should prevent bloat from this additional infra.

  2. For now, it can remain. But something I forgot to mention, which this reminds me of: contexts will be managed in the hero_core service's root context. The root context is where contexts are added, removed, etc. This means that when a context is created, there should be a way to make the service aware of the new context, so it can dynamically listen to it. But perhaps instead of this being an automatic trigger, users can manually add a context to the service, since not all services need to be aware of all contexts. As such, perhaps that service sock can offer the functionality of adding/removing contexts.

  3. hero_sdk is meant to offer generic logic and models for your entire digital life, but only basic stuff like send_message. Any application-specific logic should be defined in the app. Instead of "hero_sdk provides a default implementation that services can override, or just the interface/trait that services must implement", I would put it as: hero_sdk provides a basic implementation that services can extend. We don't want them to override, because we want the base functionality to be generic enough to cater to all apps. For example, pay_for_order should use a common make_transaction (or whatever it is) service and add more to it, not override it.
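The extend-not-override idea can be sketched in plain Rust. All names here (`ChatService`, `FoodSupportChat`) are hypothetical illustrations, not actual hero_sdk types: the SDK ships a concrete base method, and a service wraps it and adds behavior rather than replacing it.

```rust
// Hypothetical sketch: SDK-provided base logic that a service extends.
// None of these types exist in hero_sdk today; names are illustrative.

struct ChatService;

impl ChatService {
    // Generic base implementation shipped by the SDK.
    fn send_message(&self, room: &str, body: &str) -> String {
        format!("stored message in {room}: {body}")
    }
}

// A service extends the base behavior instead of overriding it.
struct FoodSupportChat {
    base: ChatService,
}

impl FoodSupportChat {
    fn send_message(&self, room: &str, body: &str) -> String {
        // Reuse the generic logic...
        let receipt = self.base.send_message(room, body);
        // ...then add app-specific behavior on top (e.g., notify a courier).
        format!("{receipt}; notified courier channel")
    }
}

fn main() {
    let svc = FoodSupportChat { base: ChatService };
    let out = svc.send_message("order-42", "where is my pizza?");
    assert!(out.contains("stored message in order-42"));
    assert!(out.contains("notified courier"));
}
```

The base method stays generic enough for every app; the wrapper is where app-specific routing, notifications, etc. live.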

timur changed title from Rethinking hero RPC, OSIS and backend architecture to Rethinking hero RPC, OSIS and backend architecture — hero_sdk + hero_core 2026-03-27 12:04:11 +00:00
Author
Owner

## Implementation Complete — `development_rethinking` branch

The initial restructuring is pushed to development_rethinking. Full workspace compiles clean (zero warnings).

What was done

1. Crate renaming (`hero_rpc_*` → `hero_sdk_*`)

  • All 8 crates renamed: hero_sdk_oschema, hero_sdk_derive, hero_sdk_generator, hero_sdk_osis, hero_sdk_openrpc, hero_sdk_client, hero_sdk_server, hero_sdk_models
  • All internal imports updated across the workspace

2. Structural changes

  • `crates/openrpc_http_client_lib/` → `crates/client/` (hero_sdk_client)
  • crates/service/ merged into crates/server/ (hero_sdk_server now contains both OServer and HeroServer APIs)
  • New crates/models/ (hero_sdk_models) — scaffold ready for domain model migration
  • New crates/hero_sdk/ — top-level re-export crate with prelude

3. HeroServer builder API (crates/server/src/builder.rs)

```rust
pub trait HeroDomain: OsisAppRpcHandler + OsisDomainInit + 'static {
    fn domain_name() -> &'static str;
}

// Usage:
HeroServer::new("hero_food")
    .description("Food delivery service")
    .with_domain::<OsisDelivery>()
    .with_domain::<OsisRestaurant>()
    .with_ui(ui_router())
    .run()
    .await
```

Socket convention: {service}.sock (info), {service}_ui.sock (UI), domain sockets via DomainServer.
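The naming convention can be computed by a small helper like this. This is a sketch only; `socket_paths` is a hypothetical function, not part of hero_sdk_server, and paths are shown relative to `~/hero/var/sockets/`:

```rust
// Hypothetical helper illustrating the 3-socket naming convention.
// Returned paths are relative to ~/hero/var/sockets/.
fn socket_paths(service: &str, context: &str, domain: &str) -> (String, String, String) {
    (
        format!("{service}.sock"),                           // service info
        format!("{service}_ui.sock"),                        // UI
        format!("{service}_server/{context}/{domain}.sock"), // per-domain
    )
}

fn main() {
    let (info, ui, dom) = socket_paths("hero_food", "root", "delivery");
    assert_eq!(info, "hero_food.sock");
    assert_eq!(ui, "hero_food_ui.sock");
    assert_eq!(dom, "hero_food_server/root/delivery.sock");
}
```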

4. hero_food example (example/hero_food/)

  • Two domains: delivery (Order, Driver, DeliveryZone) and restaurant (Restaurant, MenuItem)
  • OSchema definitions → build.rs code generation → working server binary
  • Demonstrates the full dream DX from schemas to running service

5. Backward compatibility

  • OServer API preserved alongside HeroServer builder
  • Existing recipe_server example still compiles and works

What's next (follow-up tasks)

  • Domain models migration: Move schemas/types from hero_osis into hero_sdk_models with feature gates
  • hero_client! macro: Proc macro composing domain clients into typed service client (WASM + native targets)
  • Socket path convention: Update DomainServer to use {service}_server/{context}/{domain}.sock instead of legacy hero_db_{ctx}_{domain}.sock
  • Generator integration: Have OschemaBuilder auto-generate HeroDomain impls so manual bridge code is unnecessary
  • Multi-context support: The builder currently only registers domains for the first context (FnOnce limitation) — needs FnMut or clone-based registration

All core infrastructure is in place. The remaining tasks are incremental improvements on this foundation.

Author
Owner

## Follow-up: Additional improvements pushed

Second commit on development_rethinking addresses all remaining items:

1. hero_client! macro (crates/client/src/lib.rs)

Compose domain clients into a typed service client — works on both WASM and native:

```rust
hero_sdk_client::hero_client! {
    pub HeroFoodClient {
        delivery: "delivery",
        restaurant: "restaurant",
    }
}

// Native:
let client = HeroFoodClient::new("http://localhost:8080", "root")?;
let orders = client.delivery.rpc_call::<Vec<String>>("order.list", json!({})).await?;

// WASM:
let client = HeroFoodClient::new("http://localhost:8080", "root");
```

Shared token management: client.set_token(token) propagates to all domain clients.

2. Multi-context domain registration fixed

Changed DomainFactory from FnOnce to Fn so each domain is registered across all requested contexts (was previously limited to the first context only).
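The `FnOnce` → `Fn` distinction can be shown in isolation. The types below are simplified stand-ins, not the real `DomainFactory`: a boxed `Fn` can be invoked once per context, whereas a boxed `FnOnce` would be consumed by the first call.

```rust
// Simplified stand-in for DomainFactory: a Box<dyn Fn> can be called once
// per context; a Box<dyn FnOnce> would be consumed on the first context.
type DomainFactory = Box<dyn Fn(&str) -> String>;

fn register_all(contexts: &[&str], factory: DomainFactory) -> Vec<String> {
    // The same factory runs for every requested context.
    contexts.iter().map(|ctx| factory(ctx)).collect()
}

fn main() {
    let factory: DomainFactory = Box::new(|ctx| format!("delivery@{ctx}"));
    let registered = register_all(&["root", "work"], factory);
    assert_eq!(registered, vec!["delivery@root", "delivery@work"]);
}
```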

3. New socket path convention

HeroServer builder now uses:

```
~/hero/var/sockets/{service}_server/{context}/{domain}.sock
```

Legacy OServer API still uses hero_db_{ctx}_{domain}.sock for backward compatibility.

Added DomainServer::spawn_at(socket_path, ...) for custom socket paths.

4. Auto-generated HeroDomain impls

The code generator now emits:

```rust
impl hero_sdk_server::HeroDomain for OsisDelivery {
    fn domain_name() -> &'static str { "delivery" }
}
```

No more manual bridge code needed — the hero_food example's main.rs is now clean:

```rust
use hero_sdk_server::HeroServer;

HeroServer::new("hero_food")
    .description("Food delivery service")
    .with_domain::<OsisDelivery>()      // HeroDomain auto-generated
    .with_domain::<OsisRestaurant>()    // HeroDomain auto-generated
    .run()
    .await
```

Summary — all agreed items implemented

| Item | Status |
|------|--------|
| Rename hero_rpc_* → hero_sdk_* | Done |
| Merge service into server | Done |
| HeroServer builder API | Done |
| 3-socket convention | Done |
| hero_client! macro | Done |
| Multi-context registration | Done |
| HeroDomain auto-codegen | Done |
| hero_food example | Done |
| hero_sdk top-level crate | Done |
| hero_sdk_models scaffold | Done |

Remaining for future PRs:

  • Domain model migration from hero_osis → hero_sdk_models (cross-repo, 38+ dependents)
  • Generator fix for use super::core::*; path when files are in server/ subdirectory
Author
Owner

Currently there is a lot of mess in generated code, and also in repo.

  1. we need to get rid of the deprecated old crates in this repo and update the README so it actually becomes the new hero_sdk repo. So far we've been appending without modifying existing stuff. Let's rename example to examples, get rid of the old examples, only keep hero_food, and clean up crates as necessary. For instance, why do we still have all these crates? Are they all necessary for the rethought hero_rpc?

  2. for the hero_food example in examples: why do we have schemas in both docs/schemas and schemas, and why do we have generated code in both the /core and /server folders but the same files also in the root? We should have a clean, unified way of keeping generated code; I'd suggest flat, per domain, to keep it simple. Also, types_generated_wasm and types_generated can be merged into one and target-gated.

  3. Code in generated files such as:

```rust
// ═══════════════════════════════════════════════════════════════
// AUTO-GENERATED CRUD SERVICES FOR ROOT OBJECTS
// These provide standard create/read/update/delete/list/find ops
// ═══════════════════════════════════════════════════════════════

/// Error type for CRUD operations.
#[derive(Debug, Clone)]
pub enum CrudError {
    /// Object not found.
    NotFound(String),
    /// Storage error.
    StorageError(String),
    /// Validation error.
    ValidationError(String),
}

impl std::fmt::Display for CrudError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            CrudError::NotFound(msg) => write!(f, "Not found: {}", msg),
            CrudError::StorageError(msg) => write!(f, "Storage error: {}", msg),
            CrudError::ValidationError(msg) => write!(f, "Validation error: {}", msg),
        }
    }
}

impl std::error::Error for CrudError {}
```

should be imported from hero_sdk; there is no need for each domain to define the same thing.

Author
Owner

## Third commit on `development_rethinking`: generator fix + hero_sdk_models scaffold

1. Fixed generator import paths for server/ subdirectory layout

**Problem**: Generated `osis_server_generated.rs` used `use super::core::*;`, which breaks when the file is inside a `server/` subdirectory (since `super` points to `server`, not the domain module).

**Fix**: Generator now uses absolute `use crate::{domain}::core::*;` imports — matching the pattern already used by `rpc_generated.rs`. Works in both:

  • Default path: src/{domain}/server/osis_server_generated.rs
  • Separate crate path: server_crate/src/{domain}/osis_server_generated.rs

2. Generator now writes proper subdirectory layout

The Generator previously wrote all files flat to src/{domain}/ but the build system's mod.rs expected core/ and server/ subdirectories. Now:

  • Core files go to src/{domain}/core/ (types, openrpc, mod.rs)
  • Server files go to src/{domain}/server/ (osis_server_generated, rpc_generated, rpc, mod.rs)
  • Separate server crate path unchanged (flat layout with core alias)

3. Target-aware domain module generation

generate_domain_mod() now checks the generation target:

  • Models target: Only generates pub mod core; (no server module)
  • Server/RPC target: Generates both pub mod core; and pub mod server;

4. hero_sdk_models scaffolded with all 18 domains

Copied all schemas from hero_osis (57 .oschema files across 18 domains):
ai, base, business, calendar, code, communication, embedder, files, finance, flow, identity, job, ledger, media, money, network, projects, settings

  • build.rs generates models-only (types + WASM types)
  • Feature-gated per domain + all-domains meta-feature
  • Compiles clean with all domains enabled
  • Ready for downstream migration from hero_osis → hero_sdk_models

Updated status

| Item | Status |
|------|--------|
| Rename hero_rpc_* → hero_sdk_* | Done |
| Merge service into server | Done |
| HeroServer builder API | Done |
| 3-socket convention | Done |
| hero_client! macro | Done |
| Multi-context registration | Done |
| HeroDomain auto-codegen | Done |
| hero_food example | Done |
| hero_sdk top-level crate | Done |
| hero_sdk_models scaffold | Done |
| Generator import path fix | Done |
| Generator subdirectory layout | Done |
| Target-aware mod generation | Done |

Remaining for separate PRs:

  • Downstream migration: hero_osis consumers → hero_sdk_models (38+ dependents)
  • Slim hero_osis → hero_core (separate repo)
Author
Owner

Addressed the cleanup items from comment #16495 in commit c5fff2b:

1. Repo cleanup

  • Deleted deprecated crates/service/, old examples (petstore_client, petstore_server, recipe_server)
  • Removed old shell scripts (build.sh, run.sh, install.sh, buildenv.sh)
  • Renamed `example/` → `examples/`, updated workspace members
  • Cleaned up generated build artifacts (docs/, sdk/)

2. Flat generated code layout

  • Removed core/ and server/ subdirectories — all generated files now live directly in src/{domain}/
  • Updated generator (generate.rs, build.rs, rust_osis.rs, rust_rpc.rs) to produce flat layout by default
  • Domain mod.rs uses target-gated type imports (native types.rs vs WASM types_wasm_generated.rs)
  • Flattened all 18 models domains and both hero_food domains
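The target-gated type import in a generated domain `mod.rs` can be illustrated like this. This is a sketch using inline modules so it compiles standalone; the real generated code presumably declares the file modules `types` and `types_wasm_generated` instead.

```rust
// Sketch of target-gating a type module. Inline modules stand in for the
// generated files types.rs and types_wasm_generated.rs.
#[cfg(not(target_arch = "wasm32"))]
mod types {
    pub fn backend() -> &'static str { "native" }
}

#[cfg(target_arch = "wasm32")]
mod types {
    pub fn backend() -> &'static str { "wasm" }
}

// Callers see one `types` module regardless of target.
pub use types::backend;

fn main() {
    // On a non-WASM build target this resolves to the native module.
    #[cfg(not(target_arch = "wasm32"))]
    assert_eq!(backend(), "native");
}
```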

3. Shared CrudError

  • Added CrudError enum to hero_sdk_osis::rpc
  • Generator now emits pub use hero_sdk_osis::rpc::CrudError; instead of inline definition
  • Exported through hero_sdk prelude

4. README

  • Rewritten for hero_sdk architecture with current crate names and flat layout

All changes verified: cargo check --workspace passes cleanly, cargo test -p hero_sdk_generator --lib (103/105 pass, 1 pre-existing scaffold failure, 1 ignored), cargo test -p hero_sdk_osis --lib (64/64 pass).

Author
Owner

Few other improvements:

  • we still have both docs/schemas and schemas
  • the generator should no longer generate docs, except for the openrpc.json specs it already emits
  • we don't really need src/services for custom service implementations; the hero_food restaurant and delivery domains are the custom services
Author
Owner

Addressed comment #16502 in commit 2dda443:

  • Removed docs/schemas/ generation — generator no longer produces docs/schemas/ directories with README/schema.md/html files. The openrpc.json in src/{domain}/ is the only generated spec artifact.
  • Removed src/services/ placeholder — the delivery and restaurant domains ARE the custom services, no separate services module needed. Removed from generator (generate_services_placeholder, pub mod services; in lib.rs), hero_food, and models.
  • Added .gitignore entries for docs/schemas/ and sdk/ as build artifacts.

schemas/ directories (source .oschema files) are kept — only the generated docs/schemas/ output was removed.

Author
Owner

Part 1 complete: hero_food imports hero_sdk_models (f89001d)

  • Added hero_sdk_models dep with identity feature
  • Both domains import and use hero_sdk_models::identity::Address for structured address parsing
  • All 4 custom service methods implemented with real business logic (no more `todo!()`)
  • Verified: cargo check --workspace clean, all RPC methods tested end-to-end

Moving to Part 2: hero_osis development_rethinking branch migration.

Author
Owner

## Updated Socket Strategy — Context via Headers, Not Socket Directories

Based on recent architectural changes in hero_skills (hero_sockets, hero_context, hero_proc_sdk skills), the socket-per-context model proposed in this issue needs to be revised.

What Changed

The updated hero_sockets skill (in hero_skills repo) establishes:

  1. Per-service directory, not per-context: $HERO_SOCKET_DIR/<service_name>/
  2. Socket types by function: rpc.sock, ui.sock, rest.sock, resp.sock, web_<name>.sock
  3. Context via HTTP header: X-Hero-Context: <integer> — not via separate sockets
  4. Claims via header: X-Hero-Claims: admin,users.read — authorization capabilities
  5. hero_router (not hero_proxy) is the sole TCP entry point

The hero_context skill defines a 3-dimension security model:

  • Prefix — which service handles the request
  • Context — where it runs (isolation boundary, integer ID)
  • Claims — what is allowed (capability-based auth)

Trust model: missing claims header = FULL TRUST (internal call). Claims present = restricted.
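The trust rule above can be sketched in a few lines. This is a minimal illustration (the function name is hypothetical, not part of the codebase): a missing claims header means full trust, while a present header restricts the call to the listed capabilities.

```rust
// Hedged sketch of the trust model: None = internal call = full trust;
// Some(claims) = only the listed capabilities are allowed.
fn allowed(claims: Option<&[&str]>, required: &str) -> bool {
    match claims {
        None => true,                       // missing header: full trust
        Some(cs) => cs.contains(&required), // restricted to listed claims
    }
}

fn main() {
    assert!(allowed(None, "users.read")); // internal call, anything goes
    assert!(allowed(Some(&["admin", "users.read"]), "admin"));
    assert!(!allowed(Some(&["users.read"]), "admin"));
}
```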

Impact on hero_sdk Rethinking

Before (proposed in this issue):

~/hero/var/sockets/{service}_server/{context}/{domain}.sock  # per-context, per-domain
~/hero/var/sockets/{service}.sock                            # service management
~/hero/var/sockets/{service}_ui.sock                         # UI

After (aligned with updated skills):

~/hero/var/sockets/{service_name}/
  rpc.sock          # Single OpenRPC endpoint — context passed via X-Hero-Context header
  ui.sock           # Admin dashboard
  rest.sock         # Optional REST API

Why This Is Better

  1. Simpler service startup — no need to know contexts at boot time, no dynamic socket creation
  2. Simpler discovery — hero_inspector scans $HERO_SOCKET_DIR/*/rpc.sock, no nested directories
  3. Context lifecycle decoupled — adding/removing contexts doesn't require restarting services or creating sockets
  4. Consistent with hero_proc — hero_proc_factory auto-detects a single socket path, not a tree
  5. Per-domain separation still possible — via JSON-RPC method namespacing (identity.get, delivery.create) on a single rpc.sock, or optionally separate domain sockets in the same service directory
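The namespacing in point 5 can be sketched as a simple split on the first dot. This is an illustration only (the helper name is invented): the domain prefix selects the handler, and methods without a prefix fall through to core management.

```rust
/// Split a namespaced JSON-RPC method like "identity.get" or
/// "identity.User.get" into its domain prefix and the remainder.
/// Methods without a dot have no domain prefix.
fn split_method(method: &str) -> (Option<&str>, &str) {
    match method.split_once('.') {
        Some((domain, rest)) => (Some(domain), rest),
        None => (None, method),
    }
}

fn main() {
    assert_eq!(split_method("identity.get"), (Some("identity"), "get"));
    assert_eq!(split_method("delivery.create"), (Some("delivery"), "create"));
    assert_eq!(split_method("health"), (None, "health"));
}
```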

Revised HeroServer API

HeroServer::new("hero_food")
    .with_sdk_domain::<IdentityDomain>()
    .with_domain::<DeliveryDomain>()
    .with_ui(ui::router())
    .run()  // Binds: ~/hero/var/sockets/hero_food/rpc.sock + ui.sock
    .await

Context is extracted from the X-Hero-Context header in each request, not from which socket was connected to. The server routes to the correct context-scoped storage internally.
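A minimal sketch of that extraction, assuming the semantics described here (the function name is hypothetical): a missing or malformed `X-Hero-Context` value falls back to context 0.

```rust
/// Parse an X-Hero-Context header value into an integer context ID.
/// Missing or malformed values default to 0 (the admin/root context).
fn parse_hero_context(header: Option<&str>) -> u32 {
    header
        .and_then(|v| v.trim().parse::<u32>().ok())
        .unwrap_or(0)
}

fn main() {
    assert_eq!(parse_hero_context(Some("7")), 7);
    assert_eq!(parse_hero_context(Some("not-a-number")), 0);
    assert_eq!(parse_hero_context(None), 0);
}
```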

What Stays the Same

  • hero_rpc → hero_sdk rename
  • hero_osis → hero_core rename
  • Feature-gated domain models in hero_sdk_models
  • hero_client! macro for typed clients
  • Single-binary service structure
  • OpenRPC spec as source of truth

Action Items

  • Update HeroServer builder to bind rpc.sock + ui.sock in service directory
  • Add context extraction from X-Hero-Context header in RPC dispatch
  • Update hero_food example to use header-based context
  • Align OServer::run_cli() with hero_proc (replace zinit lifecycle)
  • Update getting-started guide for new conventions
  • Sync local skills (hero_sockets, hero_service) with hero_skills repo versions
Author
Owner

Implementation Plan — Socket Strategy Alignment (development_13)

Based on the updated hero_sockets and hero_context skills in hero_skills, here's the concrete plan to align hero_rpc with the new architecture. This builds on the existing development branch (which already has HERO_SOCKET_DIR support in crates/service/) and addresses the action items from comment #17993.

Current State Analysis

Already aligned (in crates/service/):

  • HeroRpcServer → binds $HERO_SOCKET_DIR/<service_name>/rpc.sock
  • HeroUiServer → binds $HERO_SOCKET_DIR/<service_name>/ui.sock
  • Socket base dir resolution via HERO_SOCKET_DIR env var
  • Mandatory endpoints: /health, /openrpc.json, /.well-known/heroservice.json

Needs alignment (in crates/server/ — OServer):

  • Core socket: hero_db_core.sock (old flat naming, should be in service dir)
  • Domain sockets: hero_db_{context}_{domain}.sock (should be single rpc.sock)
  • No X-Hero-Context header extraction (context is in socket path, not header)
  • No X-Hero-Claims header extraction (no claims-based auth)
  • No trust model (missing claims ≠ full trust)

Needs alignment (in crates/osis/ — RequestContext):

  • Missing hero_context: u32 field
  • Missing hero_claims: Option<Vec<String>> field
  • No X-Hero-Context / X-Hero-Claims parsing

Changes

1. Extend RequestContext with Hero Context & Claims

File: crates/osis/src/rpc/request_context.rs

Add fields per the hero_context skill spec:

pub struct RequestContext {
    // existing fields...
    pub hero_context: u32,                    // X-Hero-Context (default: 0 = admin)
    pub hero_claims: Option<Vec<String>>,     // X-Hero-Claims (None = full trust)
    pub forwarded_prefix: Option<String>,     // X-Forwarded-Prefix
}

Update from_headers() to parse:

  • X-Hero-Context → integer (default 0)
  • X-Hero-Claims → Some(vec![...]) if present, None if missing (= full trust)
  • X-Forwarded-Prefix → optional string
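The claims half of that parsing can be sketched like this (standalone illustration, not the actual `from_headers()` code): the key point is that a missing header maps to `None`, which the trust model reads as full trust.

```rust
/// Parse an X-Hero-Claims header value into the claims list.
/// None (header absent) = full trust; Some(...) = restricted.
fn parse_hero_claims(header: Option<&str>) -> Option<Vec<String>> {
    header.map(|v| {
        v.split(',')
            .map(|c| c.trim().to_string())
            .filter(|c| !c.is_empty())
            .collect()
    })
}

fn main() {
    assert_eq!(parse_hero_claims(None), None);
    assert_eq!(
        parse_hero_claims(Some("admin, users.read")),
        Some(vec!["admin".to_string(), "users.read".to_string()])
    );
}
```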

Add authorization helper:

impl RequestContext {
    pub fn is_trusted(&self) -> bool {
        self.hero_claims.is_none()  // Missing claims = internal call = full trust
    }
    
    pub fn has_claim(&self, claim: &str) -> bool {
        self.is_trusted() || self.hero_claims.as_ref().map_or(false, |c| c.iter().any(|x| x == claim))
    }
}

2. Unify OServer to Single rpc.sock Per Service

Files: crates/server/src/server/config.rs, server.rs, core_server.rs, domain_server.rs

Change OServer from spawning N+1 sockets (1 core + N domain) to a single rpc.sock:

  • OServerConfig: Replace core_socket() and domain_socket() with service_rpc_socket(name) that returns $HERO_SOCKET_DIR/<name>/rpc.sock
  • OServer::run(): Merge core management methods and all domain dispatch into one Axum router, bind to single rpc.sock
  • Domain dispatch: Route by method prefix — identity.User.get goes to identity domain, context.list goes to core management
  • Context extraction: Read X-Hero-Context header → determines which context's storage to use
  • DomainServer: Keep the dispatch logic but remove socket-per-domain spawning; instead, register domains into the unified router
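The registration-plus-dispatch shape described above can be sketched as follows. The handler signature is invented for illustration; the real dispatch works on JSON-RPC requests, but the routing rule is the same: a known domain prefix routes to that domain, everything else goes to core management.

```rust
use std::collections::HashMap;

// Hypothetical handler shape for the sketch.
type Handler = fn(&str) -> String;

fn identity_handler(rest: &str) -> String {
    format!("identity domain handles {rest}")
}

/// Route by method prefix: a registered domain handles its own methods;
/// unprefixed or unknown-prefix methods fall through to core management.
fn dispatch(registry: &HashMap<&str, Handler>, method: &str) -> String {
    if let Some((prefix, rest)) = method.split_once('.') {
        if let Some(handler) = registry.get(prefix) {
            return handler(rest);
        }
    }
    format!("core management handles {method}")
}

fn main() {
    let mut registry: HashMap<&str, Handler> = HashMap::new();
    registry.insert("identity", identity_handler);

    assert_eq!(
        dispatch(&registry, "identity.User.get"),
        "identity domain handles User.get"
    );
    assert_eq!(
        dispatch(&registry, "context.list"),
        "core management handles context.list"
    );
}
```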

Before:

~/hero/var/sockets/hero_db_core.sock
~/hero/var/sockets/hero_db_root_recipes.sock
~/hero/var/sockets/hero_db_root_identity.sock

After:

~/hero/var/sockets/hero_osis/rpc.sock    ← single socket, all domains

3. Multi-Context Storage via Headers

Instead of separate sockets per context, the server determines context from X-Hero-Context:

  • Context 0 (default) = admin/root context
  • Context ≥1 = user contexts
  • Storage path still uses context-based directories: ~/hero/var/osisdb/{context_id}/{domain}/
  • Context registry maps integer IDs to context configs
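Deriving the context-scoped storage directory is then just path composition. A sketch, assuming the layout stated above (the base path and helper name are illustrative):

```rust
use std::path::PathBuf;

/// Build the context-scoped storage directory,
/// e.g. {base}/{context_id}/{domain}/.
fn storage_dir(base: &str, context_id: u32, domain: &str) -> PathBuf {
    PathBuf::from(base)
        .join(context_id.to_string())
        .join(domain)
}

fn main() {
    let dir = storage_dir("/home/user/hero/var/osisdb", 0, "identity");
    assert_eq!(
        dir.to_str().unwrap(),
        "/home/user/hero/var/osisdb/0/identity"
    );
}
```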

4. Update recipe_server Example

Update the recipe_server example to demonstrate:

  • Single rpc.sock binding
  • Context header extraction
  • New server API

5. Update HeroLifecycle for hero_proc Self-Start

Align start() with the hero_proc_service_singlebin skill:

  • Use restart_service() (idempotent) instead of separate service_set + service_start
  • Add kill_other for socket cleanup
  • Add health_checks with openrpc_socket
  • Use is_process() on actions

What's NOT Changing

  • crates/service/ (HeroRpcServer/HeroUiServer) — already aligned
  • Schema codegen pipeline — unchanged
  • OSIS storage layer (DBTyped, SmartID, OTOML) — unchanged
  • OpenRPC spec generation — unchanged
  • The hero_sdk rename from development_rethinking — separate concern, will cherry-pick later

Order of Implementation

  1. RequestContext changes (foundation)
  2. OServerConfig socket path updates
  3. Unified OServer router (merge core + domain dispatch)
  4. Context header routing in domain dispatch
  5. Update recipe_server example
  6. HeroLifecycle self-start alignment
  7. Tests and verification

Implementing on development_13 branch. Will push after each significant change.

Author
Owner

First commit pushed to development_13 — Socket Strategy Alignment

Branch: development_13 — commit a0f4a08

What was implemented

1. RequestContext extended (crates/osis/src/rpc/request_context.rs)

  • Added hero_context: u32 — from X-Hero-Context header (default 0 = admin)
  • Added hero_claims: Option<Vec<String>> — from X-Hero-Claims header (None = FULL TRUST)
  • Added forwarded_prefix: Option<String> — from X-Forwarded-Prefix header
  • New helpers: is_trusted(), has_claim(claim), context_name()
  • All existing + 5 new tests pass

2. Unified single-socket OServer (crates/server/src/server/unified_server.rs)

  • New UnifiedServerBuilder — accumulates domain registrations, serves through single rpc.sock
  • All domains + core management methods (context.list, domain.list, etc.) on ONE socket
  • Method routing by type name prefix: recipe.list → recipes domain, context.list → management
  • Context extracted from X-Hero-Context header, NOT from socket path
  • Combined OpenRPC spec merging all domain + management methods
  • Health, discovery, and inspector endpoints on same socket

3. OServerConfig updated (crates/server/src/server/config.rs)

  • New: rpc_socket(name) → $HERO_SOCKET_DIR/<name>/rpc.sock
  • New: ui_socket(name) → $HERO_SOCKET_DIR/<name>/ui.sock
  • New: service_socket_dir(name) → $HERO_SOCKET_DIR/<name>/
  • Old core_socket() and domain_socket() deprecated but kept

4. HeroLifecycle aligned with hero_proc self-start (crates/service/src/lifecycle.rs)

  • start_with_overrides() now uses restart_service() (idempotent)
  • Adds kill_other with socket cleanup (rpc.sock)
  • Adds health_checks with openrpc_socket health check
  • Uses is_process() on actions
  • stop() uses stop_service() with timeout

5. Examples updated

  • Both recipe_server examples updated with new socket convention comments

Socket layout change

Before:

~/hero/var/sockets/hero_db_core.sock
~/hero/var/sockets/hero_db_root_recipes.sock

After:

$HERO_SOCKET_DIR/recipe-server/rpc.sock   ← single socket, all domains

Backward compatibility

  • Old core_server.rs and domain_server.rs kept but deprecated
  • OServer::register() API unchanged — works the same but routes to unified socket
  • Legacy socket path helpers marked #[deprecated]

Test results

  • hero_rpc_osis: 68/68 pass
  • hero_rpc_generator: 103/103 pass
  • cargo check --workspace: clean (only 2 expected deprecation warnings)
Author
Owner

hero_skills Compliance Fix (commit 17478db)

Cross-checked implementation against all 5 hero_skills SKILL.md files. Fixed:

Issues Found & Fixed

  1. Discovery manifest field name (unified_server.rs)

    • socket_type → socket per hero_sockets spec
    • The /.well-known/heroservice.json endpoint now returns {"socket": "rpc"} (not socket_type)
  2. Missing .interpreter("exec") on action builder (lifecycle.rs)

    • hero_proc_service_singlebin and hero_proc_service_selfstart skills require .interpreter("exec") on all daemon actions
    • Without it, hero_proc may not properly exec the binary
  3. Recipe server example updated for hero_proc (example/recipe_server/src/main.rs)

    • Now uses OServer::run_cli() with HeroLifecycle instead of bare OServer::new().run()
    • Supports full hero_proc lifecycle: start, stop, status, logs, serve subcommands
    • Follows hero_proc_service_singlebin pattern

Compliance Checklist

  • HERO_SOCKET_DIR respected, defaults to ~/hero/var/sockets
  • Binds to $HERO_SOCKET_DIR/<service>/rpc.sock
  • Creates service directory + removes stale socket before binding
  • Socket permissions 0o660
  • POST /rpc, GET /health, GET /openrpc.json, GET /.well-known/heroservice.json
  • X-Hero-Context (integer, default 0), X-Hero-Claims (None=trust), X-Forwarded-Prefix
  • restart_service() idempotent, kill_other with socket cleanup, .is_process(), .interpreter("exec")
  • Health checks via openrpc_socket
  • All 68 osis tests + 13 server tests pass
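The directory-creation and stale-socket items on the checklist amount to a small bind-time routine. A sketch under the conventions above (helper name invented; the real server would also chmod the bound socket to 0o660, which is omitted here since no socket is actually bound):

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Prepare $HERO_SOCKET_DIR/<service>/rpc.sock for binding:
/// create the service directory, then remove any stale socket
/// left over from a previous run.
fn prepare_socket_path(base: &Path, service: &str) -> std::io::Result<PathBuf> {
    let dir = base.join(service);
    fs::create_dir_all(&dir)?;
    let sock = dir.join("rpc.sock");
    if sock.exists() {
        fs::remove_file(&sock)?; // stale socket from a previous run
    }
    Ok(sock)
}

fn main() -> std::io::Result<()> {
    let base = std::env::temp_dir().join("hero_sockets_demo");
    let sock = prepare_socket_path(&base, "recipe-server")?;
    assert!(sock.ends_with("recipe-server/rpc.sock"));
    assert!(sock.parent().unwrap().is_dir());
    Ok(())
}
```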
Author
Owner

Docs & Example Alignment (commit b896338)

Full alignment of docs and recipe_server example with hero_skills conventions:

Recipe Server Example

  • main.rs: Fixed misleading comment — accurately describes CLI subcommands (start/stop/serve) not flags
  • Makefile: Now uses hero_proc integration (make run → recipe_server start, make stop → recipe_server stop), removed manual PID/kill-server logic, uses correct socket path $HERO_SOCKET_DIR/recipe-server/rpc.sock
  • README.md: Updated socket paths, curl examples, added CLI subcommand reference table, documented X-Hero-Context header
  • curl test script: Uses $HERO_SOCKET_DIR with proper default fallback
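The fallback mentioned for the curl test script is the standard shell default-expansion idiom. A sketch (the health probe at the end assumes curl's `--unix-socket` option and the `/health` endpoint listed in this thread):

```shell
#!/bin/sh
# Honor HERO_SOCKET_DIR when set, else default to ~/hero/var/sockets.
SOCKET_DIR="${HERO_SOCKET_DIR:-$HOME/hero/var/sockets}"
SOCK="$SOCKET_DIR/recipe-server/rpc.sock"
echo "socket: $SOCK"
# A health probe over the Unix socket would then be:
#   curl --silent --unix-socket "$SOCK" http://localhost/health
```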

Root Docs

  • GETTING_STARTED.md: Complete rewrite — unified socket model, HeroLifecycle/OServer::run_cli() pattern, correct main.rs example with clap dep, hero_proc integration section, context headers, management methods, Rust version corrected to 1.93+
  • README.md: Fixed socket paths (was ~/hero/var/sockets/{context}/hero_recipes.sock, now $HERO_SOCKET_DIR/hero-recipes/rpc.sock), updated architecture & runtime flow for unified socket, added hero_skills and hero_proc references

Remaining Note

hero_rpc uses CLI subcommands (start/stop/serve) rather than the flags (--start/--stop) pattern from hero_skills. This is intentional — the subcommand pattern is richer (supports status, logs, run, install, seed args, env overrides). Both patterns use restart_service() under the hood.

Author
Owner

Progress: CLI standardized to hero_skills singlebin pattern

Replaced the subcommand-based CLI (start/stop/serve) with the hero_skills singlebin pattern (--start/--stop/bare) across the entire codebase:

Code changes

  • crates/server/src/server/cli.rs — ServerCli now uses --start/--stop bool flags instead of subcommands
  • crates/server/src/server/server.rs — OServer::run_cli() dispatches on flags, not subcommands
  • crates/service/src/hero_server.rs — HeroServer, HeroRpcServer, HeroUiServer all use --start/--stop flags. The internal HeroCli<A> struct was converted from subcommand-based to flag-based.
  • crates/service/src/lifecycle.rs — exec_command() no longer appends the serve subcommand (bare binary = foreground mode)
  • crates/server/src/lib.rs — Removed ServerCommand/LifecycleCommand re-exports
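The flag semantics of the singlebin pattern can be sketched without clap (the real CLI uses clap; this std-only version just shows the dispatch rule): `--stop` wins, then `--start`, and a bare invocation runs in the foreground. Names here are illustrative.

```rust
#[derive(Debug, PartialEq)]
enum Mode {
    Start,      // daemonize via the lifecycle's restart_service()
    Stop,       // stop the managed service
    Foreground, // bare binary: serve in the foreground
}

fn parse_mode(args: &[String]) -> Mode {
    if args.iter().any(|a| a.as_str() == "--stop") {
        Mode::Stop
    } else if args.iter().any(|a| a.as_str() == "--start") {
        Mode::Start
    } else {
        Mode::Foreground
    }
}

fn main() {
    let to_vec = |xs: &[&str]| xs.iter().map(|s| s.to_string()).collect::<Vec<_>>();
    assert_eq!(parse_mode(&to_vec(&["--start"])), Mode::Start);
    assert_eq!(parse_mode(&to_vec(&["--stop"])), Mode::Stop);
    assert_eq!(parse_mode(&to_vec(&[])), Mode::Foreground);
}
```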

Example & docs

  • example/recipe_server/ — main.rs, Makefile, README all updated for --start/--stop
  • GETTING_STARTED.md — All CLI examples use --start/--stop/bare pattern
  • README.md — Updated for foreground-only cargo run

Tests

  • 68 osis + 13 server tests pass
  • recipe_server compiles clean

Commit: eb53a1d

Author
Owner

Scaffolder aligned with hero_skills conventions (17cc961)

The hero_rpc_generator workspace scaffolder now generates projects that follow hero_skills patterns:

  • Crate naming: _openrpc → _server, _http → _ui (matches naming convention skill)
  • Server main.rs: Uses --start/--stop singlebin pattern (matches selfstart skill)
  • UI main.rs: Uses HeroUiServer from hero_service — Unix socket only, no raw TCP (matches hero_sockets skill)
  • Makefile: Three-layer build_lib.sh pattern with standard targets (matches build_lib skill)
  • buildenv.sh: Generated with correct PROJECT_NAME and BINARIES
  • rust-version: Updated to 1.93.0

All 186 tests pass (105 generator + 13 server + 68 OSIS).

Author
Owner

All work merged to development (branch development_13 merged)

Summary of changes:

  • Unified socket strategy (single rpc.sock per service)
  • X-Hero-Context header for context isolation
  • --start/--stop singlebin CLI pattern (HeroServer, HeroRpcServer, HeroUiServer, OServer)
  • Scaffolder aligned: _server/_ui crate naming, Makefile + buildenv.sh generation, HeroUiServer template
  • Service method routing fix: custom methods (e.g. recipeservice.get_by_category) now properly dispatched
  • Recipe server example fully functional with implemented custom methods
  • All 186 tests passing

Remaining items for separate issues:

  • Generator layout inconsistency (flat files vs subdirectory mod.rs)
  • Deprecation warnings for core_socket()/domain_socket() (code exists but unused)
Reference
lhumina_code/hero_rpc#13