zinit

A lightweight process supervisor with dependency management, similar to systemd but simpler.

Quick Install

Get started in one command:

curl -fsSL https://raw.githubusercontent.com/threefoldtech/zinit/main/scripts/install.sh | bash

Or download and run the installer script directly:

cd /tmp && curl -O https://raw.githubusercontent.com/threefoldtech/zinit/main/scripts/install.sh
chmod +x install.sh
./install.sh

This will:

  • Detect your OS and architecture
  • Download pre-built binaries (Linux amd64, macOS arm64)
  • Install to $HOME/hero/bin
  • Configure your shell PATH automatically
  • Start zinit_server in background (macOS/Windows only)

Then use: zinit list, zinit status, zinit start <service>, etc.

For more options, see the Quick Start section below.


Documentation

Detailed guides live in the docs/ directory: see docs/SERVICE_SPECS.md for service configuration and docs/PATHS.md for path configuration.

Features

  • Dependency Graph: Services declare dependencies (requires, after, wants, conflicts)
  • State Machine: 8 explicit states (Inactive, Blocked, Starting, Running, Stopping, Success, Exited, Failed)
  • Process Groups: Signals sent to process groups, handling sh -c child processes correctly
  • Health Checks: TCP, HTTP, and exec-based health checks with retries
  • Ordered Shutdown: Dependents stop before their dependencies
  • Hot Reload: Reload configuration without full restart
  • Multi-Environment: Works in containers, VMs, and bare-metal
  • Web Admin Dashboard: Real-time service management UI with charts, logs, events, and bulk operations
  • Fully Embedded UI: All assets (Bootstrap, Chart.js, icons) compiled into the binary — no CDN or network required
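
zinit's health checks are implemented inside the server; purely as an illustration of the "probe with retries" semantics described above, here is a hypothetical shell sketch (probe_with_retries is not a zinit command):

```shell
# Run a probe command up to $1 times, sleeping briefly between attempts.
# Succeeds as soon as one probe succeeds; fails after exhausting retries.
probe_with_retries() {
  retries=$1; shift
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@"; then
      echo healthy
      return 0
    fi
    i=$((i + 1))
    sleep 0.1
  done
  echo unhealthy
  return 1
}

probe_with_retries 3 true             # → healthy (first attempt succeeds)
probe_with_retries 2 false || true    # → unhealthy (all attempts fail)
```

A TCP or HTTP check follows the same loop shape, with the probe replaced by a connection attempt or an HTTP request.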

Deployment Modes

zinit adapts its behavior to the deployment environment:

Container Mode

Use zinit_pid1 as your container entrypoint:

ENTRYPOINT ["/usr/bin/zinit_pid1", "--container"]

Or set the environment variable:

ZINIT_CONTAINER=1 zinit_pid1

Behavior:

  • Loads services from /etc/zinit/services/
  • Clean exit on shutdown (no reboot syscall)
  • No system services directory
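
Putting those pieces together, a minimal Dockerfile sketch might look like this (the base image and file paths are assumptions for illustration):

```dockerfile
FROM debian:bookworm-slim

# Install the zinit_pid1 binary (path assumed for this sketch)
COPY zinit_pid1 /usr/bin/zinit_pid1

# Container mode loads service definitions from /etc/zinit/services/
COPY services/ /etc/zinit/services/

# Run zinit_pid1 as the container entrypoint in container mode
ENTRYPOINT ["/usr/bin/zinit_pid1", "--container"]
```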

VM / Bare-Metal Mode

Use zinit_pid1 as your init system (PID 1):

# In /sbin/init or kernel cmdline: init=/usr/bin/zinit_pid1

Behavior:

  • Loads system services from /etc/zinit/system/ first (auto-assigned class=system)
  • Loads user services from /etc/zinit/services/ second
  • Handles reboot/poweroff via syscalls (SIGINT=reboot, SIGTERM=poweroff)
  • Never exits (exiting as PID 1 would trigger a kernel panic)

Standalone Mode

Run zinit_server directly (not as PID 1):

zinit_server --config-dir /etc/zinit/services

Optionally enable system services directory:

zinit_server --config-dir /etc/zinit/services --pid1-mode

Quick Start

Download and install pre-built binaries:

./scripts/install.sh

This script:

  • Detects your OS and architecture
  • Downloads binaries from Forgejo registry
  • Installs to $HOME/hero/bin
  • Configures your PATH automatically
  • On macOS/Windows, automatically starts the server in the background

Building from Source

# Full build with Makefile
make build

# Or manual build
cargo build --release --workspace

# Run the server + admin UI
make run

# Use the CLI
zinit list
zinit status my-service
zinit start my-service
zinit stop my-service

See scripts/README.md for details on the installation scripts, and see the Makefile for the available build targets.

Architecture

zinit_pid1 (PID 1 shim)
    | spawns/monitors
    v
zinit_server (daemon)
    | unix socket (IPC + OpenRPC)
    v
zinit (CLI/TUI)        zinit_ui (web admin dashboard)

Crate Structure

zinit is organized as a Cargo workspace with separate crates:

crates/
  zinit_sdk/           # Library: shared types, protocol, client implementations
  zinit_server/        # Binary: process supervisor daemon (IPC + OpenRPC)
  zinit/               # Binary: CLI client and TUI
  zinit_ui/            # Binary: web admin dashboard
  zinit_rhai/          # Library: Rhai scripting bindings
  zinit_pid1/          # Binary: PID 1 init shim (Linux only)

Dependency Graph

zinit_sdk  (no internal deps)
     ^         ^          ^           ^          ^
     |         |          |           |          |
  server     CLI         UI        rhai       pid1

All crates depend only on zinit_sdk. No cross-dependencies between server, CLI, UI, rhai, or pid1.
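
In practice that means each binary crate's Cargo.toml declares a single internal dependency, roughly as follows (the path-based dependency form is an assumption for this sketch):

```toml
# crates/zinit/Cargo.toml (sketch)
[dependencies]
zinit_sdk = { path = "../zinit_sdk" }
```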

Ports and Sockets

| Component    | Binding                                         | Default                                |
|--------------|-------------------------------------------------|----------------------------------------|
| zinit_server | Unix socket (IPC)                               | ~/hero/var/sockets/zinit_server.sock   |
| zinit_server | TCP (API, optional; disable with --web-port 0)  | 3875                                   |
| zinit_ui     | TCP (HTTP dashboard)                            | 9880                                   |
| zinit_ui     | Unix socket (local tool access)                 | ~/hero/var/sockets/zinit_admin.sock    |

Configuration

Service configs are TOML files in the config directory (default: /etc/zinit/services/).

For detailed service configuration defaults and specifications, see docs/SERVICE_SPECS.md.

[service]
name = "my-app"
exec = "/usr/bin/my-app --daemon"
dir = "/var/lib/my-app"     # optional working directory
oneshot = false              # exit after success (default: false)
status = "start"             # start | stop | ignore (default: start)
class = "user"               # user | system (default: user)

[dependencies]
requires = ["database"]      # must be running
after = ["logger"]           # start order only
wants = ["metrics"]          # soft dependency
conflicts = ["legacy-app"]   # mutual exclusion

[lifecycle]
restart = "on-failure"       # always | on-failure | never
stop_signal = "SIGTERM"
start_timeout_ms = 30000
stop_timeout_ms = 10000
restart_delay_ms = 1000
restart_delay_max_ms = 60000
max_restarts = 0             # 0 = unlimited

[health]
type = "http"
endpoint = "http://localhost:8080/health"
interval_ms = 10000
retries = 3

[logging]
buffer_lines = 1000

Targets

Virtual services for grouping:

[target]
name = "multi-user"

[dependencies]
requires = ["network", "logger", "database"]

Service Status

The status field controls supervisor behavior:

  • start (default): Automatically start and keep running
  • stop: Keep stopped (won't auto-start)
  • ignore: Supervisor ignores this service
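
For example, a service that should be registered but never auto-started (as in the socket-activation example later in this README) sets status = "stop" (the service name and exec path here are illustrative):

```toml
[service]
name = "batch-job"           # hypothetical service name
exec = "/usr/bin/batch-job"
status = "stop"              # registered, but only runs on explicit `zinit start batch-job`
```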

Service Class

The class field protects critical services from bulk operations:

  • user (default): Normal service, affected by *_all commands
  • system: Protected service, skipped by bulk operations

System-class services are immune to start_all, stop_all, and delete_all commands.
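
A critical service can opt out of bulk operations by declaring the system class (the names here are illustrative):

```toml
[service]
name = "networkd"            # hypothetical critical service
exec = "/usr/bin/networkd"
class = "system"             # skipped by start_all / stop_all / delete_all
```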

CLI Commands

zinit list                    # List all services
zinit status <name>           # Show service status
zinit start <name>            # Start a service
zinit stop <name>             # Stop (cascades to dependents)
zinit restart <name>          # Restart a service
zinit kill <name> [signal]    # Send signal to service
zinit logs <name> [-n N]      # View service logs
zinit why <name>              # Show why service is blocked
zinit tree                    # Show dependency tree
zinit reload                  # Reload configuration
zinit add-service <toml>      # Add service at runtime
zinit remove-service <name>   # Remove a service
zinit start-all               # Start all user-class services
zinit stop-all                # Stop all user-class services
zinit delete-all              # Delete all user-class services
zinit shutdown                # Stop all services, exit daemon
zinit poweroff                # Power off system (signals init)
zinit reboot                  # Reboot system (signals init)

# Xinet socket activation proxy commands
zinit xinet set <name>        # Create or update xinet proxy (replaces existing)
  --listen <addr>             # Listen address: host:port or unix:/path (repeatable)
  --backend <addr>            # Backend address: host:port or unix:/path
  --service <name>            # Zinit service to start on connection
  [--connect-timeout <secs>]  # Timeout for backend connect (default: 30)
  [--idle-timeout <secs>]     # Stop service after idle seconds (default: 0=never)
  [--single-connection]       # Allow only one connection at a time
zinit xinet delete <name>     # Delete xinet proxy
zinit xinet list              # List all xinet proxies
zinit xinet status [name]     # Show proxy status (all if no name given)

# Debug commands
zinit debug-state             # Full graph state dump
zinit debug-procs <name>      # Process tree for a service

Web Admin Dashboard

The zinit_ui crate provides a real-time web admin dashboard at http://localhost:9880:

  • Services tab: Live service list with state badges, PID, memory usage, restart counts
  • Tasks tab: Oneshot service results with exit codes
  • Add Service: Full form for creating/editing services with all config options
  • Logs: Per-service log viewer with ANSI color support, stream filtering, auto-refresh
  • Events: Real-time event stream with filtering
  • Reset All: Bulk stop and delete all services with confirmation
  • API Docs: Interactive OpenRPC documentation
  • MCP: Model Context Protocol connection details for AI tool integration
  • Charts: Memory usage history graph, service state distribution

All UI assets (Bootstrap 5.3.3, Bootstrap Icons, Chart.js) are embedded in the binary via rust-embed — no internet connection needed.

The UI connects to zinit_server via the SDK (AsyncZinitClient) over Unix socket.

# Start server + UI
make run

# Or start separately
zinit_server --config-dir ~/hero/cfg/zinit &
zinit_ui --port 9880

Xinet Socket Activation Proxy

Xinet enables on-demand service startup through socket activation. When a client connects to the proxy's listening socket, the proxy starts the backend service and forwards traffic.

Use Cases

  • Databases: Start postgres on first query
  • Development Servers: Start on HTTP request
  • Backup Services: Start on trigger
  • Rarely-Used Services: Reduce memory footprint

Example: PostgreSQL with Socket Activation

Create the backend service:

# /etc/zinit/services/postgres.toml
[service]
name = "postgres"
exec = "/usr/bin/postgres -D /var/lib/postgres"
status = "stop"  # Don't autostart

[lifecycle]
start_timeout_ms = 5000
stop_timeout_ms = 10000

Register the proxy (starts postgres on first connection):

zinit xinet set postgres-proxy \
  --listen tcp:localhost:5432 \
  --backend unix:/tmp/postgres.sock \
  --service postgres \
  --connect-timeout 10 \
  --idle-timeout 300  # Stop after 5 minutes idle

Now clients connect to localhost:5432 and postgres starts automatically.

Example: Multiple Listen Addresses

zinit xinet set postgres-proxy \
  --listen tcp:0.0.0.0:5432 \
  --listen unix:/run/postgres.sock \
  --backend unix:/tmp/postgres.sock \
  --service postgres

Proxy Features

  • Bidirectional Forwarding: TCP ↔ TCP, Unix ↔ Unix, TCP ↔ Unix
  • Auto-Start Backend: Starts service on first connection
  • Idle Timeout: Automatically stops service after inactivity
  • Connection Limits: Optional single-connection mode
  • Replace Mode: xinet set replaces existing proxy (stops old one first)
  • Connection Stats: Track active connections and total traffic

Path Configuration

zinit uses platform-specific default paths:

Linux (System/PID1 mode)

  • Config directory: /etc/zinit/services
  • System services: /etc/zinit/system (PID1 mode only)
  • Socket: /run/zinit.sock

macOS / Windows (Standalone mode)

  • Config directory: $HOME/hero/cfg/zinit
  • Socket: $HOME/hero/var/sockets/zinit_server.sock

You can override these with environment variables (see below).

See docs/PATHS.md for detailed path configuration documentation.

Environment Variables

| Variable          | Default                        | Description                                 |
|-------------------|--------------------------------|---------------------------------------------|
| ZINIT_LOG_LEVEL   | info                           | Log level: trace, debug, info, warn, error  |
| ZINIT_CONFIG_DIR  | Platform-specific (see above)  | Service config directory                    |
| ZINIT_SOCKET      | Platform-specific (see above)  | Unix socket path                            |
| ZINIT_CONTAINER   | unset                          | If set, zinit_pid1 runs in container mode   |

Example: Custom Paths

# Use custom config and socket directories
export ZINIT_CONFIG_DIR=/opt/my-services
export ZINIT_SOCKET=/tmp/my-zinit.sock

# Start server
zinit_server

# Connect with CLI
zinit list

Library Usage

Use zinit_sdk as a library dependency:

use zinit_sdk::{AsyncZinitClient, ZinitClient};

// Blocking client (inside a fn returning Result, so `?` can propagate errors)
let socket = zinit_sdk::socket::default_path();
let mut client = ZinitClient::connect_unix(&socket)?;
let services = client.list()?;

// Async client (inside an async fn returning Result)
let mut client = AsyncZinitClient::connect_unix(&socket).await?;
let status = client.status("my-service").await?;

Docker Usage

# Build test image
docker build -t zinit-test -f docker/Dockerfile .

# Run (uses container mode automatically)
docker run -it --rm zinit-test

# With debug logging
docker run -it --rm -e ZINIT_LOG_LEVEL=debug zinit-test

# Explicit container mode
docker run -it --rm -e ZINIT_CONTAINER=1 zinit-test

Shutdown Ordering

Services are stopped in reverse dependency order:

Example: database <- app <- worker

Startup order:   database -> app -> worker
Shutdown order:  worker -> app -> database

When stopping a single service, dependents are stopped first:

  • zinit stop database stops worker, then app, then database
  • Dependencies are NOT auto-stopped (other services may need them)
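
The ordering above is a reverse topological sort over the dependency edges; the chain from the example can be reproduced with standard tools (tsort is POSIX; tac is from GNU coreutils):

```shell
# Each input line is "dependency dependent": database <- app <- worker
printf '%s\n' 'database app' 'app worker' | tsort        # startup order: database, app, worker
printf '%s\n' 'database app' 'app worker' | tsort | tac  # shutdown order: worker, app, database
```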

Development

make check       # Verify workspace builds
make test        # Run unit tests
make build       # Build all release binaries
make lint        # Run clippy linter
make test-all    # Run all tests (unit + bash + rhai)

# Run specific integration tests
make test-bash   # Legacy bash-based tests
make test-rhai   # New Rhai-based integration tests

# Playground
make play-tui    # Launch TUI with sample services for manual testing
make play-web    # Launch web UI with sample services

License

See LICENSE file.