# Integrate zinit SDK: ZinitLifecycle for all binaries, logging via zinit, health checks #24
### Context

- Parent issue: lhumina_code/hero_os#24
- Related: lhumina_code/hero_rpc#7, lhumina_code/home#6
`hero_aibroker_openrpc` already has a full `ZinitLifecycle` with `run`/`start`/`stop`/`status`/`logs`/`serve` subcommands (Pattern B).

**Important:** In-process operations (streaming chat completions, MCP tool calls) stay as in-process `tokio::spawn` tasks. They work with live streaming sessions and in-memory provider state that cannot be externalized to zinit subprocess jobs. However, they should log through zinit for centralized visibility.

### 1. Add `ZinitLifecycle` to hero_aibroker_http

**File:** `crates/hero_aibroker_http/src/main.rs`

The binary currently binds directly to TCP (`127.0.0.1:3385`) or a Unix socket, with no CLI subcommands, no zinit integration, and no graceful-shutdown coordination.

**Improvement:** Add `ZinitLifecycle` (non-OpenRPC binary pattern):

- Add `zinit_sdk` to `hero_aibroker_http/Cargo.toml`.
- Update the Makefile.
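The lifecycle wrapper should expose the same subcommand set as the OpenRPC binary. A minimal sketch of that CLI surface (the enum and parser below are ours; the real dispatch lives in the internal `zinit_sdk` and is not shown here):

```rust
// Hypothetical sketch of the subcommand surface ZinitLifecycle adds to a
// binary. Only the subcommand names are taken from the issue; everything
// else is an illustrative assumption.
#[derive(Debug, PartialEq)]
enum LifecycleCmd {
    Run,
    Start,
    Stop,
    Status,
    Logs,
    Serve,
}

fn parse_cmd(arg: &str) -> Option<LifecycleCmd> {
    match arg {
        "run" => Some(LifecycleCmd::Run),
        "start" => Some(LifecycleCmd::Start),
        "stop" => Some(LifecycleCmd::Stop),
        "status" => Some(LifecycleCmd::Status),
        "logs" => Some(LifecycleCmd::Logs),
        "serve" => Some(LifecycleCmd::Serve),
        _ => None,
    }
}
```

In the real binary the parsed command would be handed to the lifecycle wrapper rather than matched by hand.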
### 2. Replace custom OperationLogger with zinit logs

**File:** `crates/hero_aibroker/src/logging.rs`

A custom circular-buffer `OperationLogger` (500 entries) tracks Chat, Embedding, TTS, STT, and ModelsList operations, exposed via the `logs.get`, `logs.clear`, and `logs.stream` RPC methods.

**Improvement:** Forward operation logs to zinit via `logs.insert()` with structured source names.

Log source naming convention:

- `hero_aibroker.chat.{provider}`
- `hero_aibroker.chat.stream.{provider}`
- `hero_aibroker.embed.{provider}`
- `hero_aibroker.tts.{provider}`
- `hero_aibroker.stt.{provider}`
- `hero_aibroker.models`
- `hero_aibroker.mcp.{server}.{tool}`
- `hero_aibroker.startup`
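Centralizing the source strings in small helpers avoids drift between call sites; a sketch (the helper names are ours, and the actual `logs.insert()` call site in the internal SDK is not shown):

```rust
// Hypothetical helpers for the zinit log-source naming convention above.
// The convention itself is from the issue; these functions are illustrative.
fn chat_source(provider: &str) -> String {
    format!("hero_aibroker.chat.{provider}")
}

fn mcp_source(server: &str, tool: &str) -> String {
    format!("hero_aibroker.mcp.{server}.{tool}")
}
```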
### 3. Clean up println!/eprintln! → tracing::

There are 40+ `println!`/`eprintln!` calls scattered across the codebase (startup messages, API key manager initialization, error reporting). These don't appear in zinit logs when running as a managed service.

**Improvement:** Replace them all with `tracing::info!`/`tracing::error!` so they flow through the tracing subscriber and are captured by zinit.

### 4. Health check improvements
**Current:** `/health` returns the plain text `"OK"` with no structured data.

**Improvement:**

- Return structured JSON: `{"status": "ok", "providers": N, "mcp_servers": N}`
- `ZinitLifecycle` service registration for both the OpenRPC and HTTP services
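The structured payload can be produced without pulling in a JSON dependency. A hedged sketch (the function name and handler wiring are assumptions; only the field set comes from the issue):

```rust
// Hypothetical /health payload builder. The real handler would call this
// with live counts from the provider registry and McpManager.
fn health_json(providers: usize, mcp_servers: usize) -> String {
    format!(
        r#"{{"status": "ok", "providers": {providers}, "mcp_servers": {mcp_servers}}}"#
    )
}
```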
### 5. Graceful shutdown for HTTP service

**Current:** the HTTP binary exits on panic/error without cleaning up the socket file or draining active connections.

**Improvement:** `ZinitLifecycle` handles signals. The `serve` subcommand should:

- call `McpManager::stop_servers()` to shut down MCP processes
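One way to guarantee the Unix socket file is removed even when the server task panics is an RAII guard dropped when `serve` unwinds. A sketch, assuming the socket path is known at bind time (the type is ours, not from the codebase):

```rust
use std::path::PathBuf;

/// RAII guard: removes the Unix socket file when the serving scope exits,
/// whether it returns normally or unwinds, so a crashed run does not leave
/// a stale socket behind for the next start.
struct SocketGuard(PathBuf);

impl Drop for SocketGuard {
    fn drop(&mut self) {
        // Ignore errors: the file may already be gone.
        let _ = std::fs::remove_file(&self.0);
    }
}
```

Constructing the guard immediately after binding ties cleanup to scope exit instead of to explicit shutdown code paths.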
### Summary

- `OperationLogger` → zinit `logs.insert()`
- `println!`/`eprintln!` → `tracing::`
- `/health` plain `"OK"` → structured JSON
- Streaming and MCP stay in-process `tokio::spawn`

### Acceptance Criteria

- `run`/`start`/`stop`/`status`/`logs`/`serve` subcommands work
- `println!`/`eprintln!` replaced with `tracing::`

### Correction: scope of zinit jobs vs in-process operations
After further discussion, the recommendation to convert in-process operations to zinit jobs was incorrect. Zinit jobs are subprocess-based — they spawn external commands. hero_aibroker operations work with live streaming sessions and in-memory provider state that cannot be externalized to subprocesses.
#### What should NOT become zinit jobs (stays in-process)

- Streaming chat completions: a `tokio::spawn` task reads SSE chunks and forwards them via an `mpsc::channel`. This is inherently in-process.
- MCP servers, managed by `McpManager`: they could theoretically be zinit services, but the tight coupling with the HTTP service's state (tool registry, client sessions) makes this impractical for now.
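The forwarding shape that makes streaming inherently in-process can be illustrated with a std-thread analogue of the real `tokio::spawn` + async `mpsc` pipeline (the function, the synchronous channel, and the in-memory chunks are our simplification):

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: a spawned task reads chunks from a stream and forwards them over
// a channel to the response writer. The live SSE session exists only inside
// this process, which is why it cannot become a zinit subprocess job.
fn forward_chunks(chunks: Vec<String>) -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    let reader = thread::spawn(move || {
        for chunk in chunks {
            // In the service this would be an SSE chunk from the provider.
            tx.send(chunk).expect("receiver alive");
        }
        // Dropping tx closes the channel, ending the stream for the reader.
    });
    let out: Vec<String> = rx.iter().collect();
    reader.join().unwrap();
    out
}
```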
#### What SHOULD use zinit

- `ZinitLifecycle` for hero_aibroker_http
- `logs.insert()` with structured source names
- `tracing::` so zinit captures the output
- Graceful shutdown via `ZinitLifecycle` for the HTTP service

#### Revised summary
The core improvements are:

- `ZinitLifecycle` for hero_aibroker_http (the OpenRPC server is already done)
- Replace `OperationLogger` with zinit `logs.insert()` using structured source names
- `println!` → `tracing::` so all output is captured by zinit

In-process operations (streaming chat, MCP tool calls) stay as `tokio::spawn` tasks but should log through zinit for centralized visibility.

*Changed title from "Integrate zinit SDK: lifecycle for HTTP binary, jobs API for streaming/MCP, logging via zinit" to "Integrate zinit SDK: ZinitLifecycle for HTTP binary, logging via zinit, health checks".*

### Implementation audit — OpenRPC lifecycle done, HTTP still needed
Audited the current state (clean working tree on `development_timur`):

- `hero_aibroker_openrpc/src/lifecycle.rs` — ✅ fully implemented with `ServiceBuilder`/`ActionBuilder`/`RetryPolicyBuilder`, all subcommands working

Still outstanding:

- hero_aibroker_http `ZinitLifecycle` (item 1)
- `OperationLogger` → zinit `logs.insert()` (item 2)
- `println!` → `tracing::` cleanup (item 3)

*Changed title from "Integrate zinit SDK: ZinitLifecycle for HTTP binary, logging via zinit, health checks" to "Integrate zinit SDK: ZinitLifecycle for all binaries, logging via zinit, health checks".*

### Complete — all lifecycle work done
Crates have been renamed to follow the new conventions:

- `hero_aibroker_openrpc` → `hero_aibroker_server`
- `hero_aibroker_http` → `hero_aibroker_ui`
- `hero_aibroker_client` → `hero_aibroker_sdk`

Both service binaries now have a full `ZinitLifecycle`:

- `hero_aibroker_server` — lifecycle.rs with `run`/`start`/`stop`/`serve`/`status`/`logs`
- `hero_aibroker_ui` — lifecycle.rs with `run`/`start`/`stop`/`serve`/`status`/`logs` (commit aaebac1)

Both are on `development_timur`. Remaining items (logging, println cleanup, health checks) can be tracked separately. Core lifecycle integration is complete.
Closing.