Jobs appeared in the sidebar as jobs count but not in the jobs tab #22

Open
opened 2026-04-30 11:35:20 +00:00 by mahmoud · 5 comments
Owner

We need to investigate this issue. We may need to call the appropriate hero_proc endpoint to fetch the jobs and list them, similar to how it’s done in the sidebar.

We need to investigate this issue. We may need to call the appropriate hero_proc endpoint to fetch the jobs and list them, similar to how it’s done in the sidebar.
Author
Owner

might be related to #20

might be related to https://forge.ourworld.tf/lhumina_code/hero_codescalers/issues/20
Author
Owner

Implementation Spec for Issue #22

Objective

The sidebar's "Jobs Total" badge displays a non-zero count while the Jobs tab shows "No jobs". The two views read from different fields of the same hero_proc_sdk job_list response: the sidebar uses JobListResult.total, which hero_proc computes from a SQL COUNT(*) that ignores the tag filter, while the Jobs tab consumes JobListResult.jobs, which IS post-filtered by tag in Rust. Make both views render the same codescaler-scoped count by sourcing the count from the post-filtered list returned by jobs::list, not from total.

Root cause

  1. crates/hero_codescalers_server/src/main.rs:676-689get_hero_proc_job_count() returns r.total.unwrap_or(0). That total field is populated by hero_proc's list_jobs at hero_proc/crates/hero_proc_lib/src/db/jobs/model.rs:670-678, which builds the SQL WHERE clause from context_name, phase, service_id, action_id, run_id, hero_proc_service_name only. Tag filtering is applied in Rust after the SELECT (model.rs:697-702), so total counts every job in the hero_proc DB regardless of tag. Sending tag: Some("codescaler") makes the jobs array tag-correct but leaves total tag-unaware.
  2. crates/hero_codescalers_server/src/jobs.rs:226-269jobs::list returns the correct, post-filtered job set. Its only weakness is the limit.or(Some(500)) default: the SQL page is taken first (most-recent 500 across ALL jobs in hero_proc), then narrowed to codescaler in Rust. On a busy shared hero_proc (foreign jobs from other services), this can drop legitimate codescaler jobs off the visible page even though the sidebar would still happily report them in total.
  3. UI side, crates/hero_codescalers_ui/static/js/dashboard.js:850-856pollSidebar writes stats.job_count into sb-jobs-total (and incorrectly into sb-jobs-running, with sb-jobs-failed hard-coded to 0). The sidebar therefore re-displays the wrong number every poll cycle even when loadJobs() overwrites it briefly with the correct count from the table fetch.
  4. UI minor: dashboard.js:618 reads j.created_at; jobs::list emits j.created_at_ms (jobs.rs:537). Every row's "Created" column renders as "—" — easy to confuse with "no data" while debugging this issue.

The Jobs tab is the correct view (it iterates the tag-filtered jobs array). The sidebar is the wrong view (it trusts total).

Requirements

  • The sidebar "Jobs Total" must equal the number of rows the Jobs tab would render with no filters applied.
  • The sidebar "Running" and "Failed" sub-counts must reflect codescaler jobs only, not duplicate the total.
  • The Jobs tab must display every codescaler job that exists in hero_proc, even when foreign jobs dominate the most-recent N.
  • The sidebar count must remain authoritative without requiring the user to open the Jobs tab first.
  • "Created" timestamps in the Jobs tab must render as a human-readable date.
  • No change to hero_proc; this is a hero_codescalers-side fix only.

Files to Modify

  • crates/hero_codescalers_server/src/main.rs — replace get_hero_proc_job_count() so it returns counts derived from the post-filtered jobs array, and surface phase breakdown so the sidebar can render Running / Failed correctly without a separate jobs.list round-trip on every poll.
  • crates/hero_codescalers_server/src/jobs.rs — add a small internal helper (e.g. count_by_phase) shared between the new stats path and the existing list, so there is one definition of "codescaler jobs". Optionally raise the default page size used by jobs::list to a higher cap (e.g. 2_000) and document why.
  • crates/hero_codescalers_ui/static/js/dashboard.js — update pollSidebar to consume the new structured job_stats object (or a dedicated key) from the stats response instead of treating job_count as both total and running. Fix the j.created_atj.created_at_ms rendering bug in renderJobs.

No template, no openrpc.json, no SDK regeneration is strictly required if we only enrich the existing stats result with extra keys — additive JSON changes don't break older clients.

Implementation Plan

Step 1: Add a shared codescaler-scoped counting helper in jobs.rs

Files: crates/hero_codescalers_server/src/jobs.rs
Dependencies: none

  • Introduce pub async fn stats(state: &AppState) -> Result<JobStatsSummary> (or equivalent) that calls hp.job_list with JobFilter { tag: Some("codescaler"), limit: Some(2_000), ..Default::default() }, applies the same defense-in-depth tags.contains("codescaler") filter that list() already does, and returns { total: usize, running: usize, failed: usize, pending: usize, succeeded: usize, cancelled: usize }. Returning struct-with-Serialize is cleanest — keeps main.rs free of phase string logic.
  • Bump the default limit in list() from Some(500) to Some(2_000) so the Jobs tab cannot be starved by foreign jobs sharing the hero_proc DB. Add an inline comment that explains: hero_proc applies LIMIT before the Rust-side tag post-filter, so the SQL page must be wide enough to include all codescaler jobs in the working set.
  • Reason for not asking hero_proc to push the tag filter into SQL: that's a hero_proc-side change. We're scoping the bug fix to hero_codescalers per the issue body.

Step 2: Rewrite get_hero_proc_job_count() and enrich the stats RPC

Files: crates/hero_codescalers_server/src/main.rs
Dependencies: Step 1

  • Delete get_hero_proc_job_count() (line 676) and replace its single call site in the "stats" arm (line 331) with let job_stats = jobs::stats(state).await.unwrap_or_default();.
  • In the Ok(json!({ … })) block (lines 349-366), keep "job_count" for backwards compatibility but set it to job_stats.total. Add a sibling "job_stats": job_stats (which serializes to { total, running, failed, pending, succeeded, cancelled }) for the sidebar's per-phase rendering.
  • Implement Default on the new JobStatsSummary so unwrap_or_default() keeps the daemon serving stats even when hero_proc is unreachable (matching today's unwrap_or(0) behavior).

Step 3: Fix the sidebar to consume real codescaler counts

Files: crates/hero_codescalers_ui/static/js/dashboard.js
Dependencies: Step 2

  • In pollSidebar (lines 850-856), replace the three lines that pump stats.job_count into both sb-jobs-total and sb-jobs-running (and the hard-coded zero into sb-jobs-failed) with reads from the new structured object: setText('sb-jobs-running', stats.job_stats?.running ?? 0), setText('sb-jobs-failed', stats.job_stats?.failed ?? 0), setText('sb-jobs-total', stats.job_stats?.total ?? stats.job_count ?? 0). The ?? stats.job_count fallback covers a transient version skew where the server has not been restarted yet.
  • Add a brief comment that the sidebar must use stats.job_stats, not call jobs.list directly, so the sidebar costs one RPC per poll instead of two.

Step 4: Fix the created_at rendering bug in the Jobs tab

Files: crates/hero_codescalers_ui/static/js/dashboard.js
Dependencies: none (independent of Steps 1-3)

  • In renderJobs (line 618) change j.created_at to j.created_at_ms and convert via new Date(j.created_at_ms).toLocaleString() only when the field is a finite number. This is mechanically tiny but if left in place the Jobs tab will look "broken" to a human eye even after Steps 1-3 land.

Step 5: Smoke-test parity

Files: none (test harness)
Dependencies: Steps 1-4

  • make build && make install, restart the server (service_codescalers start --instance 0 --root --reset or whatever the local instance is), open the UI.
  • Trigger a couple of jobs from the Users / Services tabs to populate hero_proc.
  • Open the Jobs tab; record the row count and the per-phase counts.
  • Reload the page (do NOT visit the Jobs tab first); confirm the sidebar's Total / Running / Failed values match what the Jobs tab showed.
  • Bonus check: proc job submit … --tag foreign (or any non-codescaler job) and confirm the sidebar Total does NOT increase.

Acceptance Criteria

  • After a fresh page load (without opening the Jobs tab), the sidebar "Total" badge equals the Jobs tab row count.
  • The sidebar "Running" badge equals the count of codescaler jobs whose phase == "running"; "Failed" equals phase == "failed". Neither is hard-coded.
  • Submitting a non-codescaler job to the same hero_proc does NOT change the sidebar count.
  • The Jobs tab returns codescaler jobs even when >=500 newer non-codescaler jobs exist in hero_proc.
  • The Jobs tab "Created" column renders timestamps for every row that has created_at_ms, instead of "—".
  • cargo test --workspace --lib and make check pass.

Notes

  • No hero_proc change. The cleanest fix would be to push the tag filter into hero_proc's SQL WHERE so total and the page contents agree, but that is out of scope per the issue body and would force a coordinated hero_proc release.
  • No openrpc.json / SDK regeneration. We're only adding an extra key (job_stats) to the existing stats result; older callers still see job_count.
  • Why limit 2_000 and not "all". Hero_proc has no streaming list; bigger pages mean bigger allocations on every sidebar poll. 2_000 is enough headroom to cover any realistic codescaler workload while keeping the response under a few hundred KB. Document this trade-off inline.
  • Sidebar polling cost. stats is already cheap (in-memory counters except for the hero_proc round-trip we're keeping). The new structured job_stats adds zero extra RPCs because we're reusing the same job_list call that today populates job_count.
  • Linked issue #20 is an unrelated end-to-end verification ticket. It does not constrain or contradict this fix; landing this fix actually advances #20 by making the Jobs tab usable.
  • Deliberately NOT changed: jobs::cleanup, the enqueue tag scheme, the OpenRPC SDK, the askama template, or the auth gating. The bug is fully contained in the count derivation and one JS rendering line.
## Implementation Spec for Issue #22 ### Objective The sidebar's "Jobs Total" badge displays a non-zero count while the Jobs tab shows "No jobs". The two views read from different fields of the same `hero_proc_sdk` `job_list` response: the sidebar uses `JobListResult.total`, which hero_proc computes from a SQL `COUNT(*)` that **ignores the tag filter**, while the Jobs tab consumes `JobListResult.jobs`, which IS post-filtered by tag in Rust. Make both views render the same `codescaler`-scoped count by sourcing the count from the post-filtered list returned by `jobs::list`, not from `total`. ### Root cause 1. **`crates/hero_codescalers_server/src/main.rs:676-689` — `get_hero_proc_job_count()`** returns `r.total.unwrap_or(0)`. That `total` field is populated by hero_proc's `list_jobs` at `hero_proc/crates/hero_proc_lib/src/db/jobs/model.rs:670-678`, which builds the SQL `WHERE` clause from `context_name`, `phase`, `service_id`, `action_id`, `run_id`, `hero_proc_service_name` only. **Tag filtering is applied in Rust after the SELECT** (`model.rs:697-702`), so `total` counts every job in the hero_proc DB regardless of tag. Sending `tag: Some("codescaler")` makes the `jobs` array tag-correct but leaves `total` tag-unaware. 2. **`crates/hero_codescalers_server/src/jobs.rs:226-269` — `jobs::list`** returns the correct, post-filtered job set. Its only weakness is the `limit.or(Some(500))` default: the SQL page is taken first (most-recent 500 across ALL jobs in hero_proc), then narrowed to `codescaler` in Rust. On a busy shared hero_proc (foreign jobs from other services), this can drop legitimate codescaler jobs off the visible page even though the sidebar would still happily report them in `total`. 3. **UI side, `crates/hero_codescalers_ui/static/js/dashboard.js:850-856` — `pollSidebar`** writes `stats.job_count` into `sb-jobs-total` (and incorrectly into `sb-jobs-running`, with `sb-jobs-failed` hard-coded to 0). The sidebar therefore re-displays the wrong number every poll cycle even when `loadJobs()` overwrites it briefly with the correct count from the table fetch. 4. **UI minor: `dashboard.js:618`** reads `j.created_at`; `jobs::list` emits `j.created_at_ms` (jobs.rs:537). Every row's "Created" column renders as "—" — easy to confuse with "no data" while debugging this issue. The Jobs tab is the **correct** view (it iterates the tag-filtered `jobs` array). The sidebar is the **wrong** view (it trusts `total`). ### Requirements - The sidebar "Jobs Total" must equal the number of rows the Jobs tab would render with no filters applied. - The sidebar "Running" and "Failed" sub-counts must reflect codescaler jobs only, not duplicate the total. - The Jobs tab must display every codescaler job that exists in hero_proc, even when foreign jobs dominate the most-recent N. - The sidebar count must remain authoritative without requiring the user to open the Jobs tab first. - "Created" timestamps in the Jobs tab must render as a human-readable date. - No change to hero_proc; this is a hero_codescalers-side fix only. ### Files to Modify - **`crates/hero_codescalers_server/src/main.rs`** — replace `get_hero_proc_job_count()` so it returns counts derived from the post-filtered `jobs` array, and surface phase breakdown so the sidebar can render Running / Failed correctly without a separate `jobs.list` round-trip on every poll. - **`crates/hero_codescalers_server/src/jobs.rs`** — add a small internal helper (e.g. `count_by_phase`) shared between the new stats path and the existing `list`, so there is one definition of "codescaler jobs". Optionally raise the default page size used by `jobs::list` to a higher cap (e.g. 2_000) and document why. - **`crates/hero_codescalers_ui/static/js/dashboard.js`** — update `pollSidebar` to consume the new structured `job_stats` object (or a dedicated key) from the `stats` response instead of treating `job_count` as both total and running. Fix the `j.created_at` → `j.created_at_ms` rendering bug in `renderJobs`. No template, no openrpc.json, no SDK regeneration is strictly required if we only enrich the existing `stats` result with extra keys — additive JSON changes don't break older clients. ### Implementation Plan #### Step 1: Add a shared codescaler-scoped counting helper in `jobs.rs` Files: `crates/hero_codescalers_server/src/jobs.rs` Dependencies: none - Introduce `pub async fn stats(state: &AppState) -> Result<JobStatsSummary>` (or equivalent) that calls `hp.job_list` with `JobFilter { tag: Some("codescaler"), limit: Some(2_000), ..Default::default() }`, applies the same defense-in-depth `tags.contains("codescaler")` filter that `list()` already does, and returns `{ total: usize, running: usize, failed: usize, pending: usize, succeeded: usize, cancelled: usize }`. Returning struct-with-Serialize is cleanest — keeps `main.rs` free of phase string logic. - Bump the default limit in `list()` from `Some(500)` to `Some(2_000)` so the Jobs tab cannot be starved by foreign jobs sharing the hero_proc DB. Add an inline comment that explains: hero_proc applies LIMIT before the Rust-side tag post-filter, so the SQL page must be wide enough to include all codescaler jobs in the working set. - Reason for not asking hero_proc to push the tag filter into SQL: that's a hero_proc-side change. We're scoping the bug fix to hero_codescalers per the issue body. #### Step 2: Rewrite `get_hero_proc_job_count()` and enrich the `stats` RPC Files: `crates/hero_codescalers_server/src/main.rs` Dependencies: Step 1 - Delete `get_hero_proc_job_count()` (line 676) and replace its single call site in the `"stats"` arm (line 331) with `let job_stats = jobs::stats(state).await.unwrap_or_default();`. - In the `Ok(json!({ … }))` block (lines 349-366), keep `"job_count"` for backwards compatibility but set it to `job_stats.total`. Add a sibling `"job_stats": job_stats` (which serializes to `{ total, running, failed, pending, succeeded, cancelled }`) for the sidebar's per-phase rendering. - Implement `Default` on the new `JobStatsSummary` so `unwrap_or_default()` keeps the daemon serving `stats` even when hero_proc is unreachable (matching today's `unwrap_or(0)` behavior). #### Step 3: Fix the sidebar to consume real codescaler counts Files: `crates/hero_codescalers_ui/static/js/dashboard.js` Dependencies: Step 2 - In `pollSidebar` (lines 850-856), replace the three lines that pump `stats.job_count` into both `sb-jobs-total` and `sb-jobs-running` (and the hard-coded zero into `sb-jobs-failed`) with reads from the new structured object: `setText('sb-jobs-running', stats.job_stats?.running ?? 0)`, `setText('sb-jobs-failed', stats.job_stats?.failed ?? 0)`, `setText('sb-jobs-total', stats.job_stats?.total ?? stats.job_count ?? 0)`. The `?? stats.job_count` fallback covers a transient version skew where the server has not been restarted yet. - Add a brief comment that the sidebar must use `stats.job_stats`, not call `jobs.list` directly, so the sidebar costs one RPC per poll instead of two. #### Step 4: Fix the `created_at` rendering bug in the Jobs tab Files: `crates/hero_codescalers_ui/static/js/dashboard.js` Dependencies: none (independent of Steps 1-3) - In `renderJobs` (line 618) change `j.created_at` to `j.created_at_ms` and convert via `new Date(j.created_at_ms).toLocaleString()` only when the field is a finite number. This is mechanically tiny but if left in place the Jobs tab will look "broken" to a human eye even after Steps 1-3 land. #### Step 5: Smoke-test parity Files: none (test harness) Dependencies: Steps 1-4 - `make build && make install`, restart the server (`service_codescalers start --instance 0 --root --reset` or whatever the local instance is), open the UI. - Trigger a couple of jobs from the Users / Services tabs to populate hero_proc. - Open the Jobs tab; record the row count and the per-phase counts. - Reload the page (do NOT visit the Jobs tab first); confirm the sidebar's Total / Running / Failed values match what the Jobs tab showed. - Bonus check: `proc job submit … --tag foreign` (or any non-codescaler job) and confirm the sidebar Total does NOT increase. ### Acceptance Criteria - [ ] After a fresh page load (without opening the Jobs tab), the sidebar "Total" badge equals the Jobs tab row count. - [ ] The sidebar "Running" badge equals the count of codescaler jobs whose `phase == "running"`; "Failed" equals `phase == "failed"`. Neither is hard-coded. - [ ] Submitting a non-codescaler job to the same hero_proc does NOT change the sidebar count. - [ ] The Jobs tab returns codescaler jobs even when >=500 newer non-codescaler jobs exist in hero_proc. - [ ] The Jobs tab "Created" column renders timestamps for every row that has `created_at_ms`, instead of "—". - [ ] `cargo test --workspace --lib` and `make check` pass. ### Notes - **No hero_proc change.** The cleanest fix would be to push the tag filter into hero_proc's SQL `WHERE` so `total` and the page contents agree, but that is out of scope per the issue body and would force a coordinated hero_proc release. - **No openrpc.json / SDK regeneration.** We're only adding an extra key (`job_stats`) to the existing `stats` result; older callers still see `job_count`. - **Why limit 2_000 and not "all".** Hero_proc has no streaming list; bigger pages mean bigger allocations on every sidebar poll. 2_000 is enough headroom to cover any realistic codescaler workload while keeping the response under a few hundred KB. Document this trade-off inline. - **Sidebar polling cost.** `stats` is already cheap (in-memory counters except for the hero_proc round-trip we're keeping). The new structured `job_stats` adds zero extra RPCs because we're reusing the same `job_list` call that today populates `job_count`. - **Linked issue #20** is an unrelated end-to-end verification ticket. It does not constrain or contradict this fix; landing this fix actually advances #20 by making the Jobs tab usable. - **Deliberately NOT changed**: `jobs::cleanup`, the `enqueue` tag scheme, the OpenRPC SDK, the askama template, or the auth gating. The bug is fully contained in the count derivation and one JS rendering line.
Author
Owner

Test Results

cargo check --workspace

passed
Finished in 1.4s (full check from clean: 1.34s; cached: 0.12s)

cargo test --workspace --lib

passed

  • Total: 0
  • Passed: 0
  • Failed: 0

Notes

  • No library tests are currently defined in any workspace crate (hero_codescalers_sdk and nu_exec both report running 0 tests); the binary crates (hero_codescalers, hero_codescalers_server, hero_codescalers_ui) have no --lib target.
  • Pre-existing dead-code warnings remain in crates/hero_codescalers_ui/src/main.rs (1 warning) and crates/hero_codescalers_server/src/geoip.rs (11 warnings); these were present before this change and are not introduced by the patch.
## Test Results ### `cargo check --workspace` passed Finished in 1.4s (full check from clean: 1.34s; cached: 0.12s) ### `cargo test --workspace --lib` passed - Total: 0 - Passed: 0 - Failed: 0 ### Notes - No library tests are currently defined in any workspace crate (`hero_codescalers_sdk` and `nu_exec` both report `running 0 tests`); the binary crates (`hero_codescalers`, `hero_codescalers_server`, `hero_codescalers_ui`) have no `--lib` target. - Pre-existing dead-code warnings remain in `crates/hero_codescalers_ui/src/main.rs` (1 warning) and `crates/hero_codescalers_server/src/geoip.rs` (11 warnings); these were present before this change and are not introduced by the patch.
Author
Owner

Implementation Summary

Root cause confirmed and fixed: hero_proc_sdk::JobListResult.total is a SQL COUNT(*) that ignores the tag filter (tag matching happens in Rust after the SELECT). The sidebar was reading total and therefore showing every job in hero_proc, codescaler-tagged or not. The Jobs tab was iterating the post-filtered jobs array and showing the correct (smaller) set, so the two views disagreed.

Fix: derive both views from the same post-filtered list.

Changes

crates/hero_codescalers_server/src/jobs.rs

  • Added CODESCALER_TAG constant; replaced inline "codescaler" literals in list() with the constant.
  • Added JobStatsSummary { total, pending, running, succeeded, failed, cancelled } (Default, Serialize).
  • Added pub async fn stats(_state: &AppState) -> Result<JobStatsSummary> — queries the same tag: "codescaler" filter list() uses, applies the same defense-in-depth post-filter, and counts per phase.
  • Bumped the list() default page from Some(500) to Some(2_000). Inline comment explains why: hero_proc's LIMIT is applied before its Rust-side tag post-filter, so the SQL page must be wide enough to include every codescaler job in the working set.

crates/hero_codescalers_server/src/main.rs

  • Replaced the call to get_hero_proc_job_count() in the stats arm with jobs::stats(state).await.unwrap_or_default().
  • Enriched the stats JSON response: job_count is preserved (now sourced from job_stats.total) for backwards compatibility; new sibling job_stats carries the per-phase breakdown.
  • Removed get_hero_proc_job_count().

crates/hero_codescalers_ui/static/js/dashboard.js

  • pollSidebar: now reads stats.job_stats.{total,running,failed} instead of duplicating stats.job_count into Total and Running and hard-coding Failed to 0. ?? stats.job_count fallback covers a transient version skew where the server hasn't been restarted yet.
  • renderJobs (line 618) and showJobDetail (line 1659): fixed j.created_atNumber.isFinite(j.created_at_ms) ? new Date(j.created_at_ms).toLocaleString() : '—'. The server emits created_at_ms (verified in jobs.rs:602); the prior code rendered every row's "Created" cell as "—".

Deliberately not changed

  • j.updated_at in showJobDetail — the server has no updated_at_ms field (only created_at_ms, started_at_ms, finished_at_ms); leaving it as today's "—" is correct until a separate cleanup decides whether to drop the row or repurpose it for finished_at_ms.
  • loadJobs writing the same sidebar IDs from its locally computed allJobs array — already codescaler-scoped and correct, just redundant; out of scope.
  • hero_proc itself — pushing the tag filter into SQL WHERE would be the cleaner upstream fix, but it's a coordinated release and out of scope per the issue body.
  • openrpc.json / SDK regeneration — job_stats is an additive field, no breaking change.

Test results

cargo check --workspace: passed.
cargo test --workspace --lib: passed (0 tests; the workspace has no library test targets, so this only confirms compile health). Behavioral verification is the manual smoke test from Step 5 of the spec — restart the service, populate hero_proc with codescaler jobs and at least one foreign-tagged job, reload the page without opening the Jobs tab, and confirm sidebar Total / Running / Failed match the Jobs tab and that foreign jobs do NOT inflate the sidebar count.

Pre-existing dead-code warnings in geoip.rs and hero_codescalers_ui/src/main.rs remain, untouched by this patch.

## Implementation Summary Root cause confirmed and fixed: `hero_proc_sdk::JobListResult.total` is a SQL `COUNT(*)` that ignores the `tag` filter (tag matching happens in Rust *after* the SELECT). The sidebar was reading `total` and therefore showing every job in hero_proc, codescaler-tagged or not. The Jobs tab was iterating the post-filtered `jobs` array and showing the correct (smaller) set, so the two views disagreed. Fix: derive both views from the same post-filtered list. ### Changes **`crates/hero_codescalers_server/src/jobs.rs`** - Added `CODESCALER_TAG` constant; replaced inline `"codescaler"` literals in `list()` with the constant. - Added `JobStatsSummary { total, pending, running, succeeded, failed, cancelled }` (`Default`, `Serialize`). - Added `pub async fn stats(_state: &AppState) -> Result<JobStatsSummary>` — queries the same `tag: "codescaler"` filter `list()` uses, applies the same defense-in-depth post-filter, and counts per phase. - Bumped the `list()` default page from `Some(500)` to `Some(2_000)`. Inline comment explains why: hero_proc's `LIMIT` is applied before its Rust-side tag post-filter, so the SQL page must be wide enough to include every codescaler job in the working set. **`crates/hero_codescalers_server/src/main.rs`** - Replaced the call to `get_hero_proc_job_count()` in the `stats` arm with `jobs::stats(state).await.unwrap_or_default()`. - Enriched the `stats` JSON response: `job_count` is preserved (now sourced from `job_stats.total`) for backwards compatibility; new sibling `job_stats` carries the per-phase breakdown. - Removed `get_hero_proc_job_count()`. **`crates/hero_codescalers_ui/static/js/dashboard.js`** - `pollSidebar`: now reads `stats.job_stats.{total,running,failed}` instead of duplicating `stats.job_count` into Total and Running and hard-coding Failed to 0. `?? stats.job_count` fallback covers a transient version skew where the server hasn't been restarted yet. - `renderJobs` (line 618) and `showJobDetail` (line 1659): fixed `j.created_at` → `Number.isFinite(j.created_at_ms) ? new Date(j.created_at_ms).toLocaleString() : '—'`. The server emits `created_at_ms` (verified in `jobs.rs:602`); the prior code rendered every row's "Created" cell as "—". ### Deliberately not changed - `j.updated_at` in `showJobDetail` — the server has no `updated_at_ms` field (only `created_at_ms`, `started_at_ms`, `finished_at_ms`); leaving it as today's "—" is correct until a separate cleanup decides whether to drop the row or repurpose it for `finished_at_ms`. - `loadJobs` writing the same sidebar IDs from its locally computed `allJobs` array — already codescaler-scoped and correct, just redundant; out of scope. - `hero_proc` itself — pushing the tag filter into SQL `WHERE` would be the cleaner upstream fix, but it's a coordinated release and out of scope per the issue body. - `openrpc.json` / SDK regeneration — `job_stats` is an additive field, no breaking change. ### Test results `cargo check --workspace`: passed. `cargo test --workspace --lib`: passed (0 tests; the workspace has no library test targets, so this only confirms compile health). Behavioral verification is the manual smoke test from Step 5 of the spec — restart the service, populate hero_proc with codescaler jobs and at least one foreign-tagged job, reload the page without opening the Jobs tab, and confirm sidebar Total / Running / Failed match the Jobs tab and that foreign jobs do NOT inflate the sidebar count. Pre-existing dead-code warnings in `geoip.rs` and `hero_codescalers_ui/src/main.rs` remain, untouched by this patch.
Author
Owner

Correction — initial fix had the wrong polarity

The first round (c4126a0) made the sidebar agree with the (over-filtered) Jobs tab. On proc job list we have ~48 jobs visible, but none of them are tagged codescaler — they're hero_proc-supervised services, not jobs hero_codescalers itself enqueued. So the previous fix landed both views on 0, which is internally consistent but useless.

Re-reading the issue body — "fetch the jobs and list them, similar to how it's done in the sidebar" — the right reading is to widen the tab to match the sidebar's previously-broad view, not narrow the sidebar.

What changed in 1e0a829

  • jobs::list() and jobs::stats(): drop the implicit tag: codescaler filter. extra_tag still narrows when explicitly requested (per-user views via codescaler_<user> continue to work).
  • jobs::get(): drop the tag gate. The codescaler UI is admin-only (ADMIN_SECRETS whitelist) so any visible hero_proc job is fair game to inspect from this daemon. logs/cancel/delete follow automatically since they delegate to get().
  • jobs::cleanup() and build_tags() stay scoped to the codescaler tag — bulk-delete must never touch system services, and codescaler-launched jobs still carry the canonical tag so the per-user filter and the cleanup gate keep working.
  • Cleanup unit tests still pass (5/5), including the one that asserts cleanup refuses to touch a job lacking the canonical tag.

Smoke test

  • make build && make install
  • Restart codescaler instance: service_codescalers start --root --reset (or your local equivalent)
  • Open the dashboard, go to the Jobs tab — should now show the same jobs as proc job list
  • Sidebar Total/Running/Failed should match the per-phase counts of the same set
  • The Created column should render timestamps (the created_at_ms rendering fix from c4126a0)
## Correction — initial fix had the wrong polarity The first round (c4126a0) made the sidebar agree with the (over-filtered) Jobs tab. On `proc job list` we have ~48 jobs visible, but none of them are tagged `codescaler` — they're hero_proc-supervised services, not jobs hero_codescalers itself enqueued. So the previous fix landed both views on `0`, which is internally consistent but useless. Re-reading the issue body — "fetch the jobs and list them, similar to how it's done in the sidebar" — the right reading is to widen the tab to match the sidebar's previously-broad view, not narrow the sidebar. ### What changed in `1e0a829` - `jobs::list()` and `jobs::stats()`: drop the implicit `tag: codescaler` filter. `extra_tag` still narrows when explicitly requested (per-user views via `codescaler_<user>` continue to work). - `jobs::get()`: drop the tag gate. The codescaler UI is admin-only (`ADMIN_SECRETS` whitelist) so any visible hero_proc job is fair game to inspect from this daemon. `logs`/`cancel`/`delete` follow automatically since they delegate to `get()`. - `jobs::cleanup()` and `build_tags()` stay scoped to the codescaler tag — bulk-delete must never touch system services, and codescaler-launched jobs still carry the canonical tag so the per-user filter and the cleanup gate keep working. - Cleanup unit tests still pass (5/5), including the one that asserts cleanup refuses to touch a job lacking the canonical tag. ### Smoke test - `make build && make install` - Restart codescaler instance: `service_codescalers start --root --reset` (or your local equivalent) - Open the dashboard, go to the Jobs tab — should now show the same jobs as `proc job list` - Sidebar Total/Running/Failed should match the per-phase counts of the same set - The Created column should render timestamps (the `created_at_ms` rendering fix from c4126a0)
Sign in to join this conversation.
No labels
No milestone
No project
No assignees
1 participant
Notifications
Due date
The due date is invalid or out of range. Please use the format "yyyy-mm-dd".

No due date set.

Dependencies

No dependencies set.

Reference
lhumina_code/hero_codescalers#22
No description provided.