Add connection status indicator to all service UIs + extend smoke tests #70
Problem
hero_proc_ui has a connection status indicator (a green dot in the navbar) that shows UI server and backend health.

Other service UIs (hero_redis_ui, hero_embedder_ui, hero_books_ui, hero_inspector_ui, hero_collab_ui, hero_browser_ui, hero_whiteboard_ui, etc.) don't have this. Users can't tell if the backend is healthy without checking container logs.
Part 1 — Connection status component for all UIs
Pattern (from hero_proc_ui)
- `static/js/connection-status.js` — standalone JS module
- `/health` endpoint + backend RPC `rpc.health` via `/rpc/proxy`

Services to add it to
Implementation
Turn `connection-status.js` from hero_proc_ui into a reusable pattern.

Part 2 — Extend smoke tests for new services
Add to `hero_services/tests/smoke.sh`:

- Health checks (`/health`)
- RPC connectivity: `rpc.health` or `rpc.discover` on each server via its UI proxy, checking for `"result"` in the response

Target: 57+ smoke tests covering all services
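Each RPC connectivity test boils down to one assertion: post a JSON-RPC call through the UI's `/rpc/proxy` and check the reply for a `"result"` member. A minimal JavaScript sketch of that check — the real tests are shell in `smoke.sh`, and `hasResult`/`checkRpc` are invented names, not project code:

```javascript
// Sketch of the per-service RPC connectivity check (hypothetical helpers;
// the actual tests live in hero_services/tests/smoke.sh as bash).

// A JSON-RPC reply passes when it parses and carries a top-level "result" key.
function hasResult(body) {
  try {
    return Object.prototype.hasOwnProperty.call(JSON.parse(body), "result");
  } catch {
    return false; // non-JSON reply (e.g. a proxy error page) counts as a failure
  }
}

// Probe one backend through its UI's /rpc/proxy endpoint.
async function checkRpc(baseUrl, method = "rpc.health") {
  const res = await fetch(`${baseUrl}/rpc/proxy`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params: [] }),
  });
  return res.ok && hasResult(await res.text());
}
```

A healthy reply like `{"jsonrpc":"2.0","id":1,"result":{...}}` passes; an `"error"` reply or an HTML error page fails.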
Part 3 — Foundry seed data
hero_foundry_admin and hero_foundry_ui both show "No repositories" when empty. Need demo repositories seeded on startup.
Signed-off-by: mik-tf
Implementation Plan
Step 0: Extend `connection-status.js` module (hero_proc)

The current module's popover is hardcoded — no way to inject service-specific data (model counts, key counts, etc.). Add an optional
`extraInfoFn` callback that returns key/value pairs rendered as extra rows in the popover. Backwards-compatible — services that don't use it see no change.

Step 1a: Add indicator — hero_books_ui
Copy the module, add the HTML snippet (`status-dot` + `status-label`) to the navbar in `base.html`, add the init call. Has a `/health` endpoint already.

Step 1b: Add indicator — hero_foundry_admin
Same pattern as books. Has a `/health` endpoint and a `base.html` with a navbar.

Step 2a: Standardize — hero_redis_ui
Replace custom inline polling with the standard module. Preserve keys count + memory usage via `extraInfoFn` calling `redis.info`.

Step 2b: Standardize — hero_embedder_ui
Replace custom inline polling. Preserve model count + quality levels (Q1-Q4) via `extraInfoFn` calling the `info` RPC.

Step 2c: Standardize — hero_aibroker_ui
Replace custom inline polling. Preserve model count + provider count via `extraInfoFn` calling the `info` RPC.

Step 2d: Standardize — hero_inspector_ui
Replace SSE-based status dot with standard module. Keep SSE for service discovery events (separate concern).
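Steps 2a–2d all hinge on the `extraInfoFn` contract from Step 0. A hypothetical sketch of both sides of that contract — `extraRows` and the init-call name are assumptions, not the module's real API, and the Redis figures are made up:

```javascript
// Hypothetical sketch of the extraInfoFn contract — the real
// connection-status.js API may differ.

// Module side (assumed): normalize the callback's key/value pairs into
// popover row data. Services that pass no callback get an empty list,
// which keeps the change backwards-compatible.
function extraRows(extraInfoFn) {
  if (typeof extraInfoFn !== "function") return [];
  return Object.entries(extraInfoFn()).map(([label, value]) => ({
    label,
    value: String(value),
  }));
}

// Service side: what hero_redis_ui's callback might look like, with values
// cached from a redis.info RPC call elsewhere on the page (numbers invented).
const cachedInfo = { keys: 1342, memoryHuman: "18.4M" };
const redisExtraInfo = () => ({
  Keys: cachedInfo.keys,
  Memory: cachedInfo.memoryHuman,
});
```

The init call would then be something like `initConnectionStatus({ extraInfoFn: redisExtraInfo })` (name assumed); hero_embedder_ui and hero_aibroker_ui would supply their own callbacks backed by the `info` RPC.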
Step 3: Rebuild — hero_browser_ui
Full dashboard rebuild following hero_proc_ui pattern:
- Templates (`base.html` + `index.html`)
- `connection-status.js` status dot
- `dashboard.css` with compact sizing

Step 4: Foundry seed data
Seed demo repositories on startup so neither foundry service shows empty UI:
Step 5: Extend smoke tests
Add health + RPC connectivity tests for new/updated services to `hero_services/tests/smoke.sh`, targeting 57+ total tests.

Step 6: Sync module copies
Copy the updated `connection-status.js` to hero_collab_ui and hero_whiteboard_ui so all copies match.

Repos touched (all on `development_mik`)

hero_proc, hero_books, hero_foundry, hero_foundry_ui, hero_redis, hero_embedder, hero_aibroker, hero_inspector, hero_browser_mcp, hero_services
Signed-off-by: mik-tf
Implementation Complete
All steps implemented on `development_mik` branches across 12 repos. Summary:

Step 0: Extended connection-status.js (hero_proc)
Added the `extraInfoFn` callback for service-specific popover data (model counts, keys, etc.)

Step 1a: hero_books_ui — Added indicator
Step 1b: hero_foundry_admin — Added indicator
Step 2a: hero_redis_ui — Standardized
`updateServerStatus()` replaced with the module + `extraInfoFn`

Step 2b: hero_embedder_ui — Standardized
`loadServerInfo()` replaced with the module + `extraInfoFn`

Step 2c: hero_aibroker_ui — Standardized
`pollStatus()` replaced with the module + `extraInfoFn`

Step 2d: hero_inspector — Standardized
Step 3: hero_browser_ui — Full rebuild
Step 4: Foundry seed data
- `seed_data` example creates 2 demo repos (hero-app.forge, hero-docs.forge)
- `seed-foundry-repos.sh` script for container init

Step 5: Smoke tests extended
Step 6: Module synced
Repos touched
hero_proc, hero_books, hero_foundry, hero_redis, hero_embedder, hero_aibroker, hero_inspector, hero_browser_mcp, hero_services, hero_collab, hero_whiteboard
All on `development_mik` — ready for verification and squash merge.

Signed-off-by: mik-tf
Post-#70 Follow-up Items
These are pre-existing issues discovered during testing, not regressions from #70:
1. Compute UI blank page
`hero_compute_ui` loads but shows blank content

2. Foundry seed data not wired into entrypoint
- `seed_data` example binary and `seed-foundry-repos.sh` script were created
- Still to do: build the `seed_data` binary, include it in dist, call it from the Docker entrypoint

3. Redis SSO token invalid in iframe
Redis UI returns `{"error":"invalid token"}` when loaded via the Hero OS iframe

4. hero_agent integration (issue #72)
Signed-off-by: mik-tf
Squash Merged to Development
All 11 repos squash merged and pushed. Branches cleaned up.
Working (visible dot + clickable popover):
Code deployed but needs per-service fixes:
Infrastructure delivered:
Next: Fix remaining 8 services
Each needs specific per-service attention to work correctly in the Hero OS iframe context. Will continue in follow-up.
Signed-off-by: mik-tf
Connection status indicator — follow-up fixes (round 2)
What was fixed
All 11 services now have visible + clickable status dot
Remaining: backend connectivity issues (not status indicator)
Pushed to development on all 11 repos.
Round 3 — connection status for remaining services
Pushed to development
Needs follow-up
Total: 17 services now have connection status indicator
Round 4 — voice dot position + indexer popover + proxy root redirect
Fixed
hero_voice_ui — Moved the status dot from the bottom status bar to the navbar header, next to the "HeroVoice" brand text. Popover placement changed from `top` to `bottom`. Matches the hero_proxy_ui pattern.

hero_indexer_ui — The actual served UI is from `lhumina_code/hero_indexer_ui/` (Askama templates in `templates/base.html`, 2944 lines rendered), NOT from `hero_indexer/crates/hero_indexer_ui/static/index.html` (167 lines, a separate rust-embed app). Replaced the non-clickable CSS-only status dot with the full clickable popover (health + RPC ping + recheck button).

hero_proxy_server — The root handler (`/`) was showing a service discovery dashboard instead of redirecting to Hero OS. Added a `HERO_PROXY_DEFAULT_SERVICE` check: when set, redirects `/` to the configured service (e.g. `/hero_os_ui/`). The dashboard is still accessible when the env var is unset.

hero_services — Two fixes:
- `entrypoint.sh`: the socat bridge was routing port 6666 → `hero_proxy_ui.sock` (admin dashboard only). Fixed to route to `hero_proxy_server` TCP:9997 (the actual reverse proxy).
- `hero_proxy.toml`: Added `HERO_PROXY_DEFAULT_SERVICE = "hero_os_ui"` so root redirects to Hero OS.

UI patterns for connection status indicator
Three different UI architectures encountered across services:
1. External module (`connection-status.js`) loaded via `<script src>`, configurable per service
2. Inline `<script>` block before `</body>`, Bootstrap Popover with health+RPC polling
3. Askama `base.html` template, compiled at build time

All patterns produce the same UX: green pulsing dot → click → popover showing UI Server + Backend status + recheck button.
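Whichever pattern a service uses, the logic reduces two probes (UI server `/health` and a backend RPC ping) to one dot state. A minimal sketch, with assumed CSS class names and caller-supplied probe callbacks:

```javascript
// Shared status logic behind all three patterns (class names assumed).
function dotState(uiOk, backendOk) {
  if (uiOk && backendOk) return "status-ok"; // green pulsing dot
  if (uiOk) return "status-degraded";        // UI up, backend unreachable
  return "status-down";                      // UI server unreachable
}

// Polling wrapper: re-evaluate both probes and restyle the dot element.
// probeUi/probeBackend are async callbacks resolving to true/false;
// the backend is only probed when the UI server itself answered.
async function refreshDot(dotEl, probeUi, probeBackend) {
  const uiOk = await probeUi().catch(() => false);
  const backendOk = uiOk && (await probeBackend().catch(() => false));
  dotEl.className = `status-dot ${dotState(uiOk, backendOk)}`;
}
```

A `setInterval(() => refreshDot(el, ui, rpc), 10000)` loop plus a popover click handler gives the common UX described above.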
Smoke tests
111 passed, 0 failed, 2 skipped.
Signed-off-by: mik-tf