in stats bar #8

Closed
opened 2026-03-18 07:57:07 +00:00 by despiegk · 3 comments

![image](/attachments/a7644a84-41d6-4fe4-89be-7d816a22827f)

show system CPU/mem
but also the sum across all processes, so we see how much we take from the total

![image](/attachments/b2c50729-3328-4b43-8d8b-af5bf036417f)

should be Mbit/sec
so we know how much goes out


Implementation Spec for Issue #8 — Stats Bar Improvements

Objective

Improve the global stats sidebar to show:

  1. Both system-level CPU/memory and the summed CPU/memory of all hero_proc-managed processes, so operators can see how much of the system's resources their managed processes consume.
  2. Network stats displayed in Mbit/sec (rate since last sample), not cumulative bytes.

Requirements

  • CPU widget — show system-wide CPU% and, on a secondary line, the sum of CPU% of all running managed processes.
  • Memory widget — show system-wide used/total memory and, on a secondary line, the summed RSS memory of all running managed processes.
  • Network widget — display RX and TX as a rate in Mbit/sec (delta bytes between polls ÷ elapsed seconds × 8 ÷ 1,000,000). On first poll show "-".
  • Backend must query running job PIDs cheaply (single SQL) and call the existing processes_stats() batch function, then sum results.
  • Two new fields added to system.stats JSON: managed_cpu_percent (f32) and managed_memory_bytes (u64).
  • openrpc.json schema updated accordingly.
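For illustration, a system.stats response carrying the two new fields might look like this. Everything except managed_cpu_percent and managed_memory_bytes is hypothetical — the existing field names and all values shown are assumptions, not taken from the actual handler:

```json
{
  "cpu_percent": 12.7,
  "memory_used_bytes": 6442450944,
  "memory_total_bytes": 17179869184,
  "managed_cpu_percent": 3.4,
  "managed_memory_bytes": 134217728
}
```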

Files to Modify/Create

  • crates/hero_proc_lib/src/db/jobs/model.rs — Add get_running_pids() SQL function
  • crates/hero_proc_lib/src/db/factory.rs — Expose running_pids() on JobsApi
  • crates/hero_proc_server/src/rpc/system.rs — Collect PIDs, sum stats, add new fields to response
  • crates/hero_proc_server/openrpc.json — Add managed_cpu_percent and managed_memory_bytes to SystemStats schema
  • crates/hero_proc_ui/static/js/dashboard.js — Track poll timestamp, add formatMbps(), update updateAdminSidebar to show managed stats and Mbit/sec network
  • crates/hero_proc_ui/templates/base.html — Add secondary HTML elements for managed CPU and managed memory

Implementation Plan

Step 1 — Add get_running_pids to DB model layer

File: crates/hero_proc_lib/src/db/jobs/model.rs

  • Add SQL query: SELECT pid FROM jobs WHERE phase IN ('running','retrying') AND pid IS NOT NULL
  • Returns Vec<u32>
  • Dependencies: none

Step 2 — Expose running_pids() on JobsApi

File: crates/hero_proc_lib/src/db/factory.rs

  • Add running_pids() method to JobsApi that calls the model function
  • Dependencies: Step 1

Step 3 — Extend system.stats RPC handler

File: crates/hero_proc_server/src/rpc/system.rs

  • Call db.jobs.running_pids(), then sysmon::processes_stats(), sum cpu% and memory_bytes
  • Add managed_cpu_percent and managed_memory_bytes to JSON response
  • Dependencies: Step 2
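The summing step could be sketched as below. Note that ProcStat and processes_stats are stand-ins here — the real per-process sample type comes from the existing sysmon module, which the spec references but does not define:

```rust
// Stand-in for the per-process sample returned by the existing
// sysmon batch call; the actual field names are assumptions.
struct ProcStat {
    cpu_percent: f32,
    memory_bytes: u64,
}

/// Sum CPU% and RSS across all managed processes. With no running
/// processes this yields (0.0, 0), matching the acceptance criteria.
fn sum_managed(stats: &[ProcStat]) -> (f32, u64) {
    stats.iter().fold((0.0_f32, 0_u64), |(cpu, mem), s| {
        (cpu + s.cpu_percent, mem + s.memory_bytes)
    })
}

fn main() {
    let stats = vec![
        ProcStat { cpu_percent: 1.5, memory_bytes: 64 << 20 },
        ProcStat { cpu_percent: 1.9, memory_bytes: 64 << 20 },
    ];
    let (cpu, mem) = sum_managed(&stats);
    assert!((cpu - 3.4).abs() < 1e-4); // per-core summed CPU%
    assert_eq!(mem, 128 << 20);        // 128 MiB managed RSS
    assert_eq!(sum_managed(&[]), (0.0, 0));
}
```

The fold keeps the handler allocation-free; the resulting pair maps directly onto the two new JSON fields.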

Step 4 — Update OpenRPC schema

File: crates/hero_proc_server/openrpc.json

  • Add managed_cpu_percent and managed_memory_bytes to SystemStats schema
  • Dependencies: none (can run parallel to Steps 1-3)
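The schema additions might look roughly like this. This is a sketch: the surrounding shape of openrpc.json is assumed, and only the two property names and their types come from the spec:

```json
"SystemStats": {
  "type": "object",
  "properties": {
    "managed_cpu_percent": {
      "type": "number",
      "description": "Summed CPU% of all running managed processes"
    },
    "managed_memory_bytes": {
      "type": "integer",
      "description": "Summed RSS of all running managed processes, in bytes"
    }
  }
}
```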

Step 5 — Frontend: timestamp tracking + formatMbps helper

File: crates/hero_proc_ui/static/js/dashboard.js

  • Add lastStatsTimestamp variable
  • Track elapsed seconds between polls in loadSystemStats()
  • Add formatMbps(byteDelta, elapsedSec) helper
  • Dependencies: none (can run parallel to Steps 1-4)
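The arithmetic behind the formatMbps helper can be sketched as follows. The real helper lives in dashboard.js; this Rust version only illustrates the rate math and the first-poll "-" behavior described in the spec:

```rust
/// Convert a byte-counter delta over an elapsed interval to Mbit/s,
/// using SI megabits (1 Mbit = 1_000_000 bits) as the Notes require.
fn mbit_per_sec(byte_delta: u64, elapsed_sec: f64) -> Option<f64> {
    if elapsed_sec <= 0.0 {
        return None; // no previous sample yet: caller renders "-"
    }
    Some(byte_delta as f64 * 8.0 / elapsed_sec / 1_000_000.0)
}

fn format_mbps(byte_delta: u64, elapsed_sec: f64) -> String {
    match mbit_per_sec(byte_delta, elapsed_sec) {
        Some(rate) => format!("{:.2} Mbit/s", rate),
        None => "-".to_string(), // first poll
    }
}

fn main() {
    // 1_250_000 bytes over 1 s = 10_000_000 bits/s = 10 Mbit/s
    assert_eq!(format_mbps(1_250_000, 1.0), "10.00 Mbit/s");
    assert_eq!(format_mbps(0, 0.0), "-");
}
```

Returning an Option keeps the "no previous sample" case explicit instead of overloading a sentinel rate value.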

Step 6 — Frontend: update sidebar to show Mbit/sec and managed stats

File: crates/hero_proc_ui/static/js/dashboard.js

  • Update updateAdminSidebar(stats, elapsedSec) signature
  • Replace cumulative bytes display with rate-based Mbit/sec using formatMbps
  • Add managed CPU and memory display from new response fields
  • Dependencies: Step 5

Step 7 — HTML: add secondary elements to CPU and Memory widgets

File: crates/hero_proc_ui/templates/base.html

  • Add <div id="admin-managed-cpu"> in CPU widget
  • Add <div id="admin-managed-mem"> in Memory widget
  • Dependencies: none (can run parallel)

Acceptance Criteria

  • system.stats returns managed_cpu_percent and managed_memory_bytes
  • When no managed processes are running, both values are zero (0.0 and 0 respectively)
  • Memory widget shows secondary line: "128 MB managed"
  • CPU widget shows secondary line: "Managed: 3.4%"
  • Network widget shows "RX: 1.23 Mbit/s" / "TX: 0.45 Mbit/s"
  • On first poll, network shows "-"
  • Network sparkline still functions correctly
  • No regression in other sidebar widgets
  • openrpc.json schema reflects the two new fields

Notes

  • Use SI Mbit (1,000,000 bits), not MiB. Formula: bytes * 8 / elapsed_sec / 1_000_000
  • managed_cpu_percent is per-core summed (sysinfo semantics) — can exceed system-wide CPU%. Consider a tooltip.
  • The direct SQL query for PIDs avoids the N+1 pattern of existing helpers.

Test Results

  • Status: FAIL
  • Passed: 77 (across all test suites)
  • Failed: 3

Failed Tests

  1. commands::config_ops::tests::test_config_diff_help — panicked at config diff help should succeed (tests/integration/tests/commands/config_ops.rs:57)
  2. commands::config_ops::tests::test_config_import_help — panicked at config import help should succeed (tests/integration/tests/commands/config_ops.rs:37)
  3. commands::system_commands::tests::test_reload_help — panicked at reload help should succeed (tests/integration/tests/commands/system_commands.rs:31)

Test Suite Breakdown

| Suite | Passed | Failed |
|---|---|---|
| hero_proc_integration_tests (unit fixtures + harness) | 4 | 0 |
| bulk_operations | 7 | 0 |
| service_management | (ran, no failures reported) | 0 |
| cli_integration | 73 | 3 |

Warnings

  • unused import: crate::db::actions::model::ActionSpec in hero_proc_lib/src/db/integration_tests.rs:614
  • function shell_escape is never used in tests/integration/src/fixtures.rs:84
  • function add_loop_service is never used in tests/integration/tests/service_management.rs:23
  • function add_dependent_service is never used in tests/integration/tests/service_management.rs:37

Notes

The 3 failing tests are all CLI help-text assertions — they check that certain subcommands (config diff, config import, reload) exist and return a successful exit code when invoked with --help. These failures suggest those subcommands may be missing or renamed in the current build.

Run: 2026-03-18 | Branch: development_kristof_ttyd


Implementation committed: 8757c4c

Browse: https://forge.ourworld.tf/lhumina_code/hero_proc/commit/8757c4c

Commenting is not possible because the repository is archived.

Reference
lhumina_code/hero_proc_archive#8