AI Assistant does not execute tools or follow instructions properly #109

Closed
opened 2026-04-12 13:55:40 +00:00 by mik-tf · 1 comment
Owner

Problem

Hero AI Assistant (via hero_agent) does not execute commands when asked. Instead of running tools (Python scripts, shell commands, MCP calls), it lists its capabilities and asks what to do.

Example: Asked "write a python uv script to generate 50 random numbers and tell them to me" — it listed capabilities (File operations, Shell commands, Code generation...) but did not execute anything.

Expected

The AI Assistant should use its MCP tools to execute the request, as it did in earlier versions (v0.8.7-dev and before).

Context

  • Version: v0.9.0-dev on herozero.gent04.grid.tf
  • Models tested: mini (default), claude-haiku-4.1, claude-haiku-4.5
  • uv and python3: confirmed installed in container (uv 0.11.3, Python 3.13.5)
  • Worked before: AI assistant could run Python scripts and use tools in v0.8.7-dev
  • The agent correctly identifies context (geomind) and lists MCP tools, but does not invoke them

Investigation

  1. Check hero_agent_server — did the OSIS_URL or AIBROKER_API_ENDPOINT changes in hero_agent.toml affect tool routing?
  2. Compare hero_agent.toml env vars between v0.8.7-dev and v0.9.0-dev (AIBROKER_API_ENDPOINT port changed 9997→6666)
  3. Check if MCP bridge scripts are correctly routing through the new URL pattern (/<service>/<socket_type>/<path>)
  4. Check the hero_agent system prompt — is it instructing the model to list capabilities instead of executing?
  5. Test with previous commit to confirm regression
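Step 3 above can be sketched as a tiny URL builder. This is an illustrative assumption, not the actual bridge code: the helper name `bridge_url` and the `service`/`socket_type` values are hypothetical, chosen only to show the shape of the new pattern and the 9997→6666 port change.

```rust
// Hypothetical sketch of the new bridge URL pattern from investigation step 3.
// `bridge_url` and the example service/socket_type values are assumptions,
// not names from the hero_agent codebase.
fn bridge_url(base: &str, service: &str, socket_type: &str, path: &str) -> String {
    format!(
        "{}/{}/{}/{}",
        base.trim_end_matches('/'),
        service,
        socket_type,
        path.trim_start_matches('/')
    )
}

fn main() {
    // e.g. the AIBROKER endpoint after the 9997 -> 6666 port change
    let url = bridge_url("http://localhost:6666", "aibroker", "http", "v1/chat");
    assert_eq!(url, "http://localhost:6666/aibroker/http/v1/chat");
}
```

Comparing URLs built this way against what the v0.8.7-dev bridge scripts actually request would confirm or rule out a routing mismatch.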

Repos to check

  • hero_agent (OSIS_URL default change)
  • hero_zero/services/hero_agent.toml (env var changes: OSIS_URL, OSIS_CONTEXT, AIBROKER_API_ENDPOINT, HERO_AGENT_UI_BASE_PATH)
  • hero_aibroker (model routing, MCP proxy)
Author
Owner

Fixed in v0.9.1-dev.

Root cause: The agent saved user messages to OSIS, then loaded them back from history to build the LLM request. OSIS was unreachable, so the user messages were lost — the LLM received only the system prompt with 62 tools and no user question.

Fix: Added a safety check in hero_agent/crates/hero_agent/src/agent.rs (in both handle_message and quick_response) ensuring the current user message is always present in the messages vector, regardless of OSIS availability.
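A minimal sketch of that safety check, assuming a simple message type — `Msg`, `Role`, and `ensure_current_user_message` are illustrative names, not the actual agent.rs code:

```rust
// Illustrative sketch of the safety check described above; the types and
// function names here are assumptions, not the real hero_agent internals.
#[derive(Clone, Debug, PartialEq)]
enum Role { System, User }

#[derive(Clone, Debug, PartialEq)]
struct Msg { role: Role, content: String }

/// If loading history (e.g. from OSIS) dropped the current user message,
/// append it so the LLM request never carries only the system prompt.
fn ensure_current_user_message(messages: &mut Vec<Msg>, current: &Msg) {
    let present = messages
        .iter()
        .any(|m| m.role == Role::User && m.content == current.content);
    if !present {
        messages.push(current.clone());
    }
}

fn main() {
    // Failure mode: OSIS unreachable, history returned only the system prompt.
    let mut messages = vec![Msg {
        role: Role::System,
        content: "You have 62 tools...".into(),
    }];
    let current = Msg {
        role: Role::User,
        content: "write a python uv script to generate 50 random numbers".into(),
    };

    ensure_current_user_message(&mut messages, &current);
    assert!(messages.iter().any(|m| m.role == Role::User));
}
```

The check is idempotent: if the history load succeeded and the user message is already present, nothing is appended.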

Commit: lhumina_code/hero_agent@26b5909

Verified: Shell commands, Python/uv scripts, MCP tools, file operations all working.

Signed-off-by: mik-tf
