Merge branch development to main #46
Reference: lhumina_code/hero_proc!46
- Add #logs/<service> hash routing: cold load, hashchange, and navigateToLogs() all now preselect the named service and call loadLogs() automatically
- Preserve the #logs/<service> hash when clicking the Logs tab button
- Add updateLogsHash() to keep the URL in sync with source-selection changes
- Fix the DOMContentLoaded /logs/{name} pathname handler, which never called loadLogs()
- Also bundle pre-existing dashboard form updates (retry-policy fields, health-check row)
- Fix stale pty.rs test: the ActionSpec.args field was removed in SDK refactor #17

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

- Add the missing wildcard (*) to the logs.filter src parameter for prefix matching
- Add /logs and /service routes (without {name}) for SPA fallback routing
- Align frontend log filtering with job log filtering behavior

The logs.filter RPC expects src patterns to include * for prefix matching, but the main logs view wasn't appending it. The job detail panel was already doing this correctly; now both are consistent.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>

- Change log_src from the job.action format to job_{id} in the executor, job RPC, and log_archive so log lookups are correctly keyed per job instance
- Refactor script.run to use hero_do directly without requiring action registration; it builds an inline ActionSpec from the rel_path
- Remove the HERO_PROC_UI_BASE_PATH env var and the base_path AppState field; replace them with per-request X-Forwarded-Prefix header middleware (BasePath)
- Add CodeMirror editor assets (JS/CSS) for syntax-highlighted script editing in the dashboard UI
- Expand heroscript.js with richer editor integration and dashboard.js with improved UX interactions

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Replace manual patterns with hero_lib_rhai functions:

- hero_init() instead of env_get("HOME") plus path concatenation
- forge_client() + pull() instead of git_tree_new() + clone_repo_full()
- ensure_installed("dx") instead of a manual execute("which dx") check
- proc_process_action() + action_set() + service_register() instead of quick_service_set_full() per binary

All scripts now use the proper service model: multiple actions grouped under one service (e.g. the hero_proxy service contains the hero_proxy_server and hero_proxy_ui actions).

Refs: #29
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Each service repo now owns its own scripts/rhai/run.rhai. install_and_run_all.rhai clones/updates the repos and delegates via chdir() + run("scripts/rhai/run.rhai") instead of duplicating build/register logic.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

## hero_proc_sdk: new API layer

Adds `ProcoFactory` as the single connection entry point, returning `HeroProcClient`, an object-oriented handle to the server:

    let proc = ProcoFactory::default_socket().await?;
    let proc = ProcoFactory::local_uds("/path/to.sock").await?;
    let proc = ProcoFactory::http("http://host:port").await?;

### ServiceHandle (from service_builder().save())

    let svc = proc.service_builder("myapp")
        .description("…")
        .action("start", |a| { a.script("…").restart_always() })
        .save().await?;

    svc.start().await?;
    svc.stop().await?;
    svc.restart().await?;
    svc.status().await?;  // → "running" / "exited" / …
    svc.logs(50).await?;  // → Vec<LogLine>
    svc.delete().await?;

### RunHandle (from run_builder().start())

    let run = proc.run_builder("pipeline")
        .action("build", |a| { a.script("cargo build") })
        .action("test", |a| { a.script("cargo test").depends_on("build") })
        .start().await?;

    run.wait().await?;
    run.status().await?;  // → "ok" / "error" / …
    run.logs().await?;    // → Vec<JobLogLine> across all jobs

### Domain clients

- proc.services(): ServicesClient with start/stop/restart/list/put
- proc.runs(): RunsClient with wait/status
- proc.service(name) → ServiceHandle for an existing service
- proc.run(id) → RunHandle for an existing run

### ActionBuilder additions

New ergonomic aliases matching the closure-based API:

- .script(s): set/replace the script
- .command(s): alias for .script()
- .cwd(path): alias for .dir()
- .timeout_secs(n): converts to timeout_ms
- .restart_always(): sets a 999 999-attempt retry policy
- .retry_count(n): set the retry count directly
- ActionBuilder::with_name(name): constructor without a script

## hero_proc_examples: proper runnable examples

Restructured the crate with an examples/ directory and src/lib.rs:

- examples/01_connect.rs: connect, ping, stats, list services
- examples/02_service.rs: full service lifecycle via ServiceHandle
- examples/03_run.rs: multi-step pipeline via RunHandle + logs
- examples/04_secrets.rs: store, retrieve, list, delete secrets

Makefile targets: make connect / service / run / secrets / all / check

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

- Move all socket paths to a hero_sockets subdirectory layout:
  - $HERO_SOCKET_DIR/hero_proc_server/rpc.sock (was .../hero_proc_server.sock)
  - $HERO_SOCKET_DIR/hero_proc_ui/ui.sock (was .../hero_proc_ui.sock)
- Add HERO_SOCKET_DIR env var support to all socket resolution helpers (server types/socket.rs, sdk/socket.rs, sdk/logger.rs, ui/main.rs, cli/main.rs)
- Add hero_context_middleware to the hero_proc_server web layer: it reads X-Hero-Context, X-Hero-Claims, and X-Forwarded-Prefix on every request
- Update the Makefile and crates/hero_proc_ui/Makefile to use the new socket paths
- Update the openrpc_proxy! macro socket hint to hero_proc_server/rpc.sock
- Carry forward pre-existing changes: the job_ids filter in log queries, log.rs job_id tracking improvements, and dashboard.js UI fixes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

The action form's AI-interpreter handler was fetching `${window.BASE_PATH}/api/aibroker/models`, but window.BASE_PATH is never set on the page, so whenever the dashboard was served under a router prefix (e.g. `/hero_proc/ui` via hero_router) the request missed the prefix and 404'd. The catch branch then painted "aibroker unreachable, enter model manually in env" even though aibroker was fine.

- Switch to the local `BASE` constant that the rest of this file reads from `<meta name="base-path">`, matching every other fetch here.
- Drop the "enter manually" fallback: if aibroker is unreachable or the response is empty, disable the model select and toast an error. AI actions should fail loudly instead of pretending to work with a user-typed model string.
- Differentiate the connect-failure, HTTP-error, and empty-catalog cases so the toast points at the actual cause.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Users reaching hero_proc_ui via hero_logic's "Configure ↗" deep link had no way to discover how caller inputs flow into the script / prompt / env values. Added two short inline help blocks on the action form:

- Under the Script / User-prompt textarea: explains that `{{name}}` and `{{nested.path}}` interpolate caller inputs at job-create time, shows the escape hatch for shell interpreters (HERO_INPUT_*), and notes the "list of strings" serialisation (JSON), with a pointer to the new `text` type for prose-per-item layouts.
- Under the Environment Variables section: explains that env values accept both `${VAR}` parent-shell expansion and `{{input}}` input templating, and that each caller input is auto-exported as `HERO_INPUT_<UPPERCASE_NAME>` for shell scripts that don't use `{{}}` syntax.
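The input flow those help blocks describe can be sketched roughly as follows. This is an illustrative JavaScript sketch, not the actual hero_proc implementation; the helper names (`lookupInput`, `interpolate`, `toHeroInputEnv`) are invented here for clarity:

```javascript
// Resolve "nested.path"-style lookups against the caller's inputs object.
function lookupInput(inputs, path) {
  return path.split(".").reduce(
    (obj, key) => (obj == null ? undefined : obj[key]),
    inputs
  );
}

// Substitute {{name}} / {{nested.path}} placeholders in a template string.
// Lists serialise as JSON, matching the "list of strings" behaviour above.
function interpolate(template, inputs) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path) => {
    const value = lookupInput(inputs, path);
    if (value === undefined) return match; // leave unknown placeholders as-is
    if (Array.isArray(value)) return JSON.stringify(value);
    return String(value);
  });
}

// Export each top-level input as HERO_INPUT_<UPPERCASE_NAME>, the escape
// hatch for shell scripts that don't use {{}} syntax.
function toHeroInputEnv(inputs) {
  const env = {};
  for (const [name, value] of Object.entries(inputs)) {
    env["HERO_INPUT_" + name.toUpperCase()] =
      typeof value === "string" ? value : JSON.stringify(value);
  }
  return env;
}
```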
Both blocks use a new `.hl-help-*` style (a blue-tinted panel with mono `code` backgrounds) so they read as non-intrusive hint cards.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

The `input_schema` field on ActionSpec has existed in the openrpc schema and the Rust model since hero_proc#47 landed, but it was never exposed in the action form; hero_logic was the only place a user could declare what inputs an action accepts. That left actions opaque: anyone running one directly had to grep the script for `{{vars}}` and guess the types. Surface it as a dedicated Inputs section on the action form, right above Environment Variables.

- Parse the existing `input_schema` string on load; render one row per declared property with [name][type][?item-type][×].
- Types: string / text / integer / number / boolean / array.
- When type == array, a second select appears for `items.type` (the same list minus array; no nested arrays, which keeps the emitted JSON Schema predictable).
- `addActionInputRow()` appends a blank row; a delegated `change` handler toggles the item-type select on type changes for existing rows too.
- `collectActionInputSchema()` serialises the rows back to a JSON Schema string on save; it returns null (meaning "no declared inputs") when the user leaves the section empty, so back-compat callers don't see an `{"type":"object","properties":{}}` blob on every action.

Accompanying help text explains: inputs are the typed contract at `job.create` time, values are substituted via `{{name}}` / `{{nested.path}}` across script / prompt / env / ai_config, and shell-interpreter actions additionally receive each input as `$HERO_INPUT_NAME`.

Verified end-to-end with action.set + action.get: the schema round-trips correctly over the unix socket.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Mirrors the existing AI interpreter pattern.
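The row-to-schema serialisation described for the Inputs section could look like this. A hypothetical sketch: the row shape (`{name, type, itemType}`) is assumed here and need not match the actual dashboard.js internals, but it shows the null-when-empty back-compat behaviour:

```javascript
// Serialise Inputs-section rows back to a JSON Schema string, or null when
// the user declared nothing (so callers don't see an empty-object schema).
function collectActionInputSchema(rows) {
  const properties = {};
  for (const row of rows) {
    if (!row.name) continue; // skip blank rows
    if (row.type === "array") {
      // No nested arrays: items.type comes from the second select.
      properties[row.name] = {
        type: "array",
        items: { type: row.itemType || "string" },
      };
    } else {
      properties[row.name] = { type: row.type };
    }
  }
  if (Object.keys(properties).length === 0) return null;
  return JSON.stringify({ type: "object", properties });
}
```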
When an action uses interpreter=mcp, the executor does NOT spawn a child process; instead it parses the script as JSON `{service, tool, arguments}` and POSTs a JSON-RPC tools/call to hero_router's /mcp/{service} endpoint over UDS, writing the MCP response content to the job logs.

Changes:

- Interpreter::Mcp variant added; is_in_process() now returns true for both Ai and Mcp.
- run_job_mcp mirrors run_job_ai: parse the script, POST to hero_router, handle isError and the content array, write result lines to the logs, mark Succeeded/Failed.
- aibroker_call refactored to delegate to http_post_uds(socket, path, body), which is now shared between Ai (/rpc) and Mcp (/mcp/{service}).
- mcp_content_text flattens the MCP text-part array into a single string; non-text parts (images, resources) are ignored for now and can be surfaced via a future raw_output / typed-decoder extension.

End-to-end verified: a flow action with interpreter=mcp successfully invoked hero_browser's page_screenshot_save and produced a 1920x1080 PNG on disk. Full chain: hero_proc mcp job -> hero_router /mcp/hero_browser_server -> hero_browser RPC -> response back through the logs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

When interpreter=mcp is selected, the action form now swaps the raw Script textarea for a specialized picker:

- Service dropdown populated from hero_router's /api/services (tools-exposing entries only)
- Tool dropdown populated from that service's tools/list
- Arguments form rendered from the tool's inputSchema: one row per property with a type-appropriate widget (string/number/integer/boolean select/object/array textarea), a required marker, and the description

On save, spec.script is serialized to {"service":"...","tool":"...","arguments":{...}}, which is exactly what the Mcp interpreter in hero_proc parses at run time. No new fields on ActionSpec: the existing `script` field carries the MCP config for MCP actions, mirroring how `ai_config` carries AI config for AI actions.
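The call shape can be sketched as follows. The real executor code is Rust; this JavaScript sketch is illustrative only, and the helper names (`buildToolsCall`, `mcpContentText`) are assumptions. It shows the script-JSON parse, the JSON-RPC tools/call body, and the text-part flattening described above:

```javascript
// Parse the action's script as {service, tool, arguments} and build the
// JSON-RPC tools/call body POSTed to hero_router's /mcp/{service}.
function buildToolsCall(script, id) {
  const { service, tool, arguments: args } = JSON.parse(script);
  return {
    service, // selects the /mcp/{service} endpoint
    body: {
      jsonrpc: "2.0",
      id,
      method: "tools/call",
      params: { name: tool, arguments: args || {} },
    },
  };
}

// Flatten the MCP content array into one string, keeping only text parts;
// images and resources are dropped, matching the commit's current behaviour.
function mcpContentText(content) {
  return content
    .filter((part) => part.type === "text")
    .map((part) => part.text)
    .join("\n");
}
```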
Edit flow: on opening an existing MCP action, parse the saved script JSON and re-populate service/tool/args so users can tweak and re-save.

Adds two same-origin proxy routes on hero_proc_ui so the browser never talks to hero_router directly (no CORS gymnastics):

- GET /api/router/services -> hero_router's catalog
- POST /api/router/mcp/{service} -> hero_router's /mcp/{service}

End-to-end verified via hero_browser: selecting Hero Browser Server → page_screenshot_save renders a 4-field args form (browser_id, full_page, page_id, path) with correct types and hints.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

preselectSource('job_N') was silently failing because sources are service names, not job identifiers, leaving _activeJobIds null and loadLogs() returning all logs. Fixed all three code paths:

- DOMContentLoaded: set _activeJobIds = [targetJobId] directly
- hashchange handler: same
- navigateToJobLogs(): same; this was why a reload was required

Also wires the job_id prop through hero_proc_app (the Dioxus island) for the same URL pattern.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Pull request closed
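The distinction behind that last fix can be sketched as a small hash parser. This is a hypothetical illustration, not the dashboard.js code: the point is that a `job_…` target must drive the job-id filter directly, while anything else is a service source for preselection:

```javascript
// Classify a #logs/<target> hash: job targets set the job-id filter
// directly; service targets go through source preselection.
function parseLogsHash(hash) {
  const match = /^#logs\/(.+)$/.exec(hash);
  if (!match) return null;
  const target = match[1];
  if (/^job_.+$/.test(target)) {
    return { kind: "job", jobId: target }; // e.g. _activeJobIds = [target]
  }
  return { kind: "service", source: target }; // e.g. preselectSource(target)
}
```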