# Typed inputs on ActionSpec + templating at job-create time #47
## Motivation

`hero_logic` is refactoring nodes to be thin pointers to `hero_proc` actions. The plan on the `hero_logic` side:

- Slim down `hero_logic` node cards (no more Model / System prompt / Script / Temperature / Timeout / Retry on the workflow card).
- A node becomes `{ node_id, input_schema, output_schema, action_name, action_context }` — just I/O typing and a reference to the action that runs it.
- The user goes to `hero_proc_ui`, defines the action there, comes back, and binds the node to it.
- `+ Step` — kind/interpreter is a property of the referenced action, not the node.

## The gap
Workflow-level inputs (typed fields the user fills in when running a step) need to reach the action. Today `hero_logic` works around this by: `action.get` → render `{{var}}` placeholders in `script` + `ai_config.system_prompt` → send the pre-rendered `ActionSpec` to `job.create`. Awkward: the action in storage has raw `{{foo}}` placeholders that hero_proc itself can't render, and every step is fetch+mutate+repush.

## Proposal
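For concreteness, here is a hypothetical `input_schema` (stored serialized on the action, since the field is a `String`) alongside a matching `inputs` payload sent at `job.create` — field names and values are invented for illustration:

```json
{
  "action": {
    "input_schema": "{\"type\":\"object\",\"required\":[\"city\"],\"properties\":{\"city\":{\"type\":\"string\"},\"days\":{\"type\":\"integer\"}}}"
  },
  "job_create": {
    "inputs": { "city": "Oslo", "days": 3 }
  }
}
```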
1. `input_schema: Option<String>` on `ActionSpec` — a JSON Schema (serialized) declaring the inputs an action expects. Optional. Validated on `action.set`.
2. `inputs: Option<Value>` on `JobCreateInput` — caller-supplied values for this invocation. Validated against the action's `input_schema`.
3. Templating at dispatch time — hero_proc renders `{{var}}` / `{{nested.path}}` inside the executor across:
   - `spec.script` (user prompt / script body)
   - `spec.ai_config.system_prompt`
   - `spec.ai_config.model` / `temperature` / `max_tokens`
   - `spec.env` values

   Unresolved placeholders stay literal. Port `interpolate_template` from `hero_logic/src/engine/node_executors.rs`.
4. Shell interpreters: also expose inputs as env vars — merge `inputs` into the child process env as `HERO_INPUT_*` (uppercase) so shell scripts can read them without `{{}}` syntax.

## Implementation checklist
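The templating (point 3) and env-var merge (point 4) can be sketched as follows. This is a std-only illustration over a flattened key map rather than a `serde_json::Value` tree, so the names and signatures are assumptions, not hero_proc's actual code:

```rust
use std::collections::HashMap;

/// Sketch of dispatch-time templating: replaces {{key}} with the matching
/// input value; unresolved placeholders stay literal, as proposed.
fn render(template: &str, inputs: &HashMap<String, String>) -> String {
    let mut out = String::new();
    let mut rest = template;
    while let Some(start) = rest.find("{{") {
        out.push_str(&rest[..start]);
        let after = &rest[start + 2..];
        match after.find("}}") {
            Some(end) => {
                let key = after[..end].trim();
                match inputs.get(key) {
                    Some(v) => out.push_str(v),
                    None => {
                        // Unknown placeholder: keep it literal.
                        out.push_str("{{");
                        out.push_str(&after[..end]);
                        out.push_str("}}");
                    }
                }
                rest = &after[end + 2..];
            }
            None => {
                // Unclosed "{{": emit the remainder verbatim.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

/// Sketch of point 4: expose inputs to shell interpreters as HERO_INPUT_*
/// env vars. A real implementation would also sanitize characters (e.g. '.')
/// that are not valid in env var names.
fn input_env(inputs: &HashMap<String, String>) -> Vec<(String, String)> {
    inputs
        .iter()
        .map(|(k, v)| (format!("HERO_INPUT_{}", k.to_uppercase()), v.clone()))
        .collect()
}

fn main() {
    let mut inputs = HashMap::new();
    inputs.insert("city".to_string(), "Oslo".to_string());

    assert_eq!(render("Weather in {{city}}", &inputs), "Weather in Oslo");
    // Unresolved placeholders stay literal:
    assert_eq!(render("{{missing}}", &inputs), "{{missing}}");

    let env = input_env(&inputs);
    assert!(env.contains(&("HERO_INPUT_CITY".to_string(), "Oslo".to_string())));
}
```

Leaving unresolved placeholders literal (rather than erroring) means an action can still be dispatched without inputs, matching the current `hero_logic` behavior being ported.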
### hero_proc / hero_proc_lib

- `input_schema: Option<String>` on `ActionSpec` (openrpc + model + db).
- `inputs: Option<Value>` on `JobCreateInput` (openrpc + server RPC).
- `hero_proc_lib::template` module with `interpolate_template`.
- In `supervisor/executor.rs`, before dispatching any spec, render `script` + `ai_config.*` + `env` values against `inputs`.
- Merge `inputs` into the child process env with a `HERO_INPUT_*` prefix.
- Validate `inputs` against `input_schema` at `job.create`; surface the error via RPC.
- Reject `action.set` when `input_schema` is set but doesn't parse as JSON Schema.

### hero_proc_ui

- An editor for `input_schema` (reuse hero_logic's editor design).
- Show `input_schema` read-only + a test run form that calls `job.create` with typed inputs.

### hero_logic (follow-up)

- Remove `ai_config` / `script` / interpreter fields from the workflow editor's node body. Add a "Configure in hero_proc ↗" link.
- Switch `execute_action` to `job.create(action_name, context, inputs)` — drop fetch+mutate+repush.
- `+ Step` button.
- Remove `interpolate_template` from `hero_logic` (now in hero_proc).

## Non-goals
- No `output_format` / output parsing here — that's a workflow-execution concern.
- `jsonschema` crate.

## References

- `hero_logic/src/engine/node_executors.rs::execute_action`.
- `hero_proc_server/src/rpc/job.rs:54-84` — no `inputs` on `JobCreateInput`.
- `hero_proc_lib/src/db/actions/model.rs:159-218` — no `input_schema` on `ActionSpec`.
- `hero_proc_server/src/supervisor/executor.rs:54-67, 89-241` — dispatch is literal; no templating.

## hero_aibroker — verified working
An earlier RFC flagged a `system_prompt` drop in aibroker. Re-verified end-to-end: `hero_proc` correctly builds the `[{role: system, ...}, {role: user, ...}]` messages array (`supervisor/executor.rs:123-133`), `hero_aibroker::handle_ai_chat` parses system-role messages correctly (`api/mod.rs:330-351`), and the model respects the system prompt. No aibroker change is needed. My earlier finding was based on the OpenAI-compat HTTP endpoint, not the JSON-RPC path hero_proc uses.
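The messages-array shape verified above can be illustrated with a minimal sketch — struct and function names are assumed for illustration only; hero_proc serializes the equivalent JSON over JSON-RPC:

```rust
// Simplified sketch of the [{role: system, ...}, {role: user, ...}] array
// that supervisor/executor.rs builds: system prompt first (when present),
// then the rendered user prompt.
#[derive(Debug)]
struct Message {
    role: &'static str,
    content: String,
}

fn build_messages(system_prompt: Option<&str>, user_prompt: &str) -> Vec<Message> {
    let mut msgs = Vec::new();
    if let Some(sp) = system_prompt {
        msgs.push(Message { role: "system", content: sp.to_string() });
    }
    msgs.push(Message { role: "user", content: user_prompt.to_string() });
    msgs
}

fn main() {
    let msgs = build_messages(Some("You are terse."), "Summarize X");
    assert_eq!(msgs.len(), 2);
    assert_eq!(msgs[0].role, "system");
    assert_eq!(msgs[1].role, "user");
    // Without a system prompt, only the user message is sent.
    assert_eq!(build_messages(None, "x").len(), 1);
}
```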