story: [AI Assistant] Testing feedback - Assistant non-functional despite valid API keys #215

Open
opened 2026-05-06 09:33:43 +00:00 by marionrvrn · 0 comments
Member

Overview

The AI Assistant does not respond to any query, even with valid API keys configured.
Tested across two sessions (May 5 and May 6). On both days, OpenRouter and Anthropic
keys were valid and confirmed, yet the assistant returned errors on every message sent.
The root cause is that Auto (default) is hardcoded to use the model
llama-3.3-70b-versatile via Groq. When the Groq key is missing, there is no
fallback to other configured providers.

Current state

  • Every message fails. The full error returned on May 5:

    "Error: All streaming LLM targets exhausted: LLM streaming API error 500
    Internal Server Error: Model not found: Model 'llama-3.3-70b-versatile' requires
    a connected provider (groq). Add your API key in the Providers tab to use
    this model."

  • On May 6, error messaging changed to a generic "I wasn't able to complete
    that action. Could you try rephrasing your request?" and "(No response from AI —
    the provider returned an empty reply. Please try again or switch models.)"
  • Tested with multiple queries: "What is Hero OS and what can it do?",
    "Can you help me create slides?", "What is an AI assistant?" -> All failed
  • OpenRouter (Primary) was valid on both test days but never used
  • Groq (Primary) has been invalid across both sessions; this is the blocker
  • Anthropic (Optional) has been valid across both sessions but is never
    used as a fallback
  • No indication in the UI of which provider or model is actually being used
  • Error message "try rephrasing your request" is misleading: the issue is not
    the query but a hardcoded model dependency on a missing provider

API key status across two test sessions

Provider               May 5   May 6
OpenRouter (Primary)   ✅      ✅
Groq (Primary)         ❌      ❌
Anthropic (Optional)   ✅      ✅

Updated Suggestions

Must fix now

  • Remove the hardcoded dependency on llama-3.3-70b-versatile via Groq in
    Auto (default) mode, or ensure Groq is pre-configured so users don't need
    to supply their own key
  • Fix fallback logic -> Auto mode must route to any working provider when
    the default model/provider is unavailable (a minimal sketch follows this list)
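
A minimal sketch of the suggested fallback behavior, written in Python. The names here (Provider, PROVIDERS, pick_auto_target) and the key/model values are illustrative placeholders, not the actual Hero OS code or configuration; the point is only that Auto mode should skip providers without a configured key instead of failing on the hardcoded Groq default.

```python
# Illustrative sketch only: Provider, PROVIDERS and pick_auto_target are
# hypothetical names, not taken from the actual codebase.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Provider:
    name: str
    default_model: str
    api_key: Optional[str]  # None when the user has not added a key

# Preference order: primary providers first, then the optional fallback.
PROVIDERS = [
    Provider("groq", "llama-3.3-70b-versatile", api_key=None),              # missing key
    Provider("openrouter", "<openrouter-default-model>", api_key="sk-or-..."),
    Provider("anthropic", "<anthropic-default-model>", api_key="sk-ant-..."),
]

def pick_auto_target(providers: list[Provider]) -> Provider:
    """Return the first provider with a configured key instead of failing
    hard on the hardcoded Groq default."""
    for p in providers:
        if p.api_key:
            return p
    raise RuntimeError(
        "No LLM provider is configured. Add an API key in the Providers tab."
    )

target = pick_auto_target(PROVIDERS)
print(f"Auto mode routes to {target.name} / {target.default_model}")
```

With the key status from the table above, this would route Auto mode to OpenRouter instead of erroring out on Groq.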

Should do soon

  • Show the user which provider and model is active in the current session
  • Surface the real error to the user instead of the generic "try rephrasing"
    message; the May 5 detailed error was actually more useful than the May 6
    generic one (a sketch of this is included after this list)
  • Clarify whether the Groq key is user-supplied or platform-supplied and add
    onboarding guidance accordingly
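
A hedged sketch of the error-surfacing suggestion. The ProviderError class and the message format below are assumptions made for illustration, not the project's real exception types or UI code; the idea is simply to carry the upstream provider detail through to the chat UI.

```python
# Illustrative only: ProviderError is a hypothetical exception carrying the raw
# upstream detail, not a class that exists in the project.
class ProviderError(Exception):
    def __init__(self, provider: str, detail: str):
        super().__init__(detail)
        self.provider = provider
        self.detail = detail

def chat_failure_message(err: Exception) -> str:
    """Build the text shown in the chat UI when a completion fails."""
    if isinstance(err, ProviderError):
        # Keep the actionable detail (e.g. "Model 'llama-3.3-70b-versatile'
        # requires a connected provider (groq)") instead of hiding it behind
        # "try rephrasing your request".
        return (f"The assistant could not use {err.provider}: {err.detail} "
                "Check the Providers tab or switch models.")
    # Generic wording only when no provider detail is available.
    return "I wasn't able to complete that action. Please try again."

print(chat_failure_message(ProviderError(
    "groq",
    "Model 'llama-3.3-70b-versatile' requires a connected provider (groq).")))
```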
mik-tf added this to the ACTIVE project 2026-05-06 17:31:57 +00:00
