Tools to install & use GPUs

hero_gpu

An OpenRPC gateway in front of an SGLang Python backend. It exposes OpenAI- and Anthropic-compatible HTTP endpoints and ships with an installer, an SDK, and an admin UI.
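For example, an OpenAI-style chat request can be framed as plain HTTP/1.1 over the gateway's Unix socket (~/hero/var/sockets/hero_gpu/openapi.sock, see Sockets below). This is a minimal std-only sketch, not the project's SDK: the /v1/chat/completions path follows the usual OpenAI convention, and the model name and payload are illustrative assumptions.

```rust
use std::io::Write;
use std::os::unix::net::UnixStream;

/// Build a raw HTTP/1.1 POST for the OpenAI-compatible adapter.
/// The /v1/chat/completions path is the OpenAI convention; adjust if the
/// adapter mounts its routes elsewhere.
fn chat_request(body: &str) -> String {
    format!(
        "POST /v1/chat/completions HTTP/1.1\r\nHost: localhost\r\n\
         Content-Type: application/json\r\nContent-Length: {}\r\n\
         Connection: close\r\n\r\n{}",
        body.len(),
        body
    )
}

fn main() {
    // Illustrative payload; the model name is hypothetical.
    let body = r#"{"model":"default","messages":[{"role":"user","content":"hello"}]}"#;
    let sock = format!(
        "{}/hero/var/sockets/hero_gpu/openapi.sock",
        std::env::var("HOME").unwrap_or_default()
    );
    match UnixStream::connect(&sock) {
        Ok(mut s) => {
            let _ = s.write_all(chat_request(body).as_bytes());
        }
        Err(e) => println!(
            "openapi.sock not available ({}); request would be:\n{}",
            e,
            chat_request(body)
        ),
    }
}
```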

Crates

  • hero_gpu — CLI binary (handles --start / --stop, registers with hero_proc)
  • hero_gpu_server — JSON-RPC server + OpenRPC spec + SGLang client
  • hero_gpu_api — OpenAI + Anthropic HTTP adapters (library)
  • hero_gpu_sdk — generated client from openrpc.json
  • hero_gpu_installer — uv-based sandboxed Python/SGLang installer
  • hero_gpu_ui — Askama + Bootstrap admin dashboard
  • hero_gpu_examples — examples + integration test skeleton

Sockets

All under ~/hero/var/sockets/hero_gpu/:

  • rpc.sock — JSON-RPC 2.0 (consumed by hero_gpu_sdk)
  • openapi.sock — OpenAI + Anthropic compatible HTTP (consumed by external clients)
  • ui.sock — admin dashboard
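To illustrate, a client can speak JSON-RPC 2.0 over rpc.sock with nothing but the standard library. This sketch assumes newline-delimited JSON framing (the actual transport may differ), and the system.listMethods method name is hypothetical; crates/hero_gpu_server/openrpc.json is the authoritative method list.

```rust
use std::io::Write;
use std::os::unix::net::UnixStream;

/// Build a JSON-RPC 2.0 request body. The method name passed in is
/// caller-chosen; nothing here is taken from the real OpenRPC spec.
fn jsonrpc_request(method: &str, id: u32) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","method":"{}","params":[],"id":{}}}"#,
        method, id
    )
}

fn main() {
    let sock = format!(
        "{}/hero/var/sockets/hero_gpu/rpc.sock",
        std::env::var("HOME").unwrap_or_default()
    );
    let req = jsonrpc_request("system.listMethods", 1);
    match UnixStream::connect(&sock) {
        Ok(mut stream) => {
            // Assumption: one request per line; check the server for real framing.
            let _ = writeln!(stream, "{}", req);
            println!("sent: {}", req);
        }
        Err(e) => println!("hero_gpu not running ({}); would send: {}", e, req),
    }
}
```

In practice hero_gpu_sdk wraps this for you; the point is only that the socket carries ordinary JSON-RPC 2.0 traffic.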

Quickstart

make install          # build + install binaries to ~/hero/bin/
make download-assets  # fetch Bootstrap, Icons, Unpoly into hero_gpu_ui/static/
make run              # start service via hero_proc
make logs             # tail server logs
make stop             # stop service

SGLang backend

The backend is launched as a separate hero_proc action via scripts/sglang_runner.sh and runs inside a uv-managed Python venv at ~/hero/var/hero_gpu/python/. The Rust server talks to it over 127.0.0.1:30000.

See crates/hero_gpu_server/openrpc.json for the full RPC interface.