This repository was archived on 2026-03-18; it is read-only (files can be viewed and cloned, but pushes, issues, pull requests, and comments are disabled).

zinit

A lightweight process supervisor with dependency management, similar to systemd but simpler.

Quick Install

Get started in one command:

curl -fsSL https://raw.githubusercontent.com/threefoldtech/zinit/main/scripts/install.sh | bash

Or download and run the installer script directly:

cd /tmp && curl -O https://raw.githubusercontent.com/threefoldtech/zinit/main/scripts/install.sh
chmod +x install.sh
./install.sh

This will:

  • Detect your OS and architecture
  • Download pre-built binaries (Linux amd64, macOS arm64)
  • Install to $HOME/hero/bin
  • Configure your shell PATH automatically
  • Start zinit_server in background (macOS/Windows only)

Then use: zinit list, zinit status, zinit start <service>, etc.

For more options, see the Quick Start section below.


Documentation

Guides and reference material are in the docs/ directory (for example docs/SERVICE_SPECS.md, docs/PATHS.md, and docs/SDK.md).

Features

  • Dependency Graph: Services declare dependencies (requires, after, wants, conflicts)
  • State Machine: 8 explicit states (Inactive, Blocked, Starting, Running, Stopping, Success, Exited, Failed)
  • Process Groups: Signals sent to process groups, handling sh -c child processes correctly
  • Health Checks: TCP, HTTP, and exec-based health checks with retries
  • Ordered Shutdown: Dependents stop before their dependencies
  • Hot Reload: Reload configuration without full restart
  • Multi-Environment: Works in containers, VMs, and bare-metal
  • Web Admin Dashboard: Real-time service management UI with charts, logs, events, and bulk operations
  • Fully Embedded UI: All assets (Bootstrap, Chart.js, icons) compiled into the binary — no CDN or network required

Deployment Modes

zinit adapts its behavior based on deployment environment:

Container Mode

Use zinit_pid1 as your container entrypoint:

ENTRYPOINT ["/usr/bin/zinit_pid1", "--container"]

Or set the environment variable:

ZINIT_CONTAINER=1 zinit_pid1

Behavior:

  • Loads services from /etc/zinit/services/
  • Clean exit on shutdown (no reboot syscall)
  • No system services directory

VM / Bare-Metal Mode

Use zinit_pid1 as your init system (PID 1):

# In /sbin/init or kernel cmdline: init=/usr/bin/zinit_pid1

Behavior:

  • Loads system services from /etc/zinit/system/ first (auto-assigned class=system)
  • Loads user services from /etc/zinit/services/ second
  • Handles reboot/poweroff via syscalls (SIGINT=reboot, SIGTERM=poweroff)
  • Never exits (kernel panic prevention)

Standalone Mode

Run zinit_server directly (not as PID 1):

zinit_server --config-dir ~/hero/cfg/zinit

Important: In standalone mode, TOML config files placed in the config directory are not automatically loaded at startup. You must run zinit reload after placing or modifying config files to import them into the service database:

# Place service configs
cp my-service.toml ~/hero/cfg/zinit/

# Load configs into zinit
zinit reload

# Now start
zinit start my-service

Optionally enable system services directory:

zinit_server --config-dir /etc/zinit/services --pid1-mode

Quick Start

Download and install pre-built binaries:

./scripts/install.sh

This script:

  • Detects your OS and architecture
  • Downloads binaries from Forgejo registry
  • Installs to $HOME/hero/bin
  • Configures your PATH automatically
  • On macOS/Windows, automatically starts the server in the background

Building from Source

# Full build with Makefile
make build

# Or manual build
cargo build --release --workspace

# Run the server + admin UI
make run

# Use the CLI
zinit list
zinit status my-service
zinit start my-service
zinit stop my-service

See scripts/README.md for details on the installation scripts, and see the Makefile for the available build targets.

Architecture

zinit_pid1 (PID 1 shim)
    | spawns/monitors
    v
zinit_server (daemon)
    | unix socket (IPC + OpenRPC)
    v
zinit (CLI/TUI)        zinit_ui (web admin dashboard)
                         | unix socket + TCP :9999

Crate Structure

zinit is organized as a Cargo workspace with 7 separate crates:

crates/
  zinit_sdk/               # Shared service SDK types and client library
  zinit_server/            # Process supervisor daemon (IPC + OpenRPC)
  zinit/                   # Command-line interface
  zinit_ui/                # Web dashboard UI
  zinit_lib/               # SQLite persistence layer with factory pattern
  zinit_pid1/              # Init shim (PID 1 mode)
  zinit_integration_test/  # Integration test suite

Dependency Graph

        zinit_sdk (no internal deps)
         ^      ^     ^     ^     ^
         |      |     |     |     |
      server   CLI   UI    lib  pid1

All crates depend on zinit_sdk. No cross-dependencies between server, CLI, UI, lib, or pid1. The lib crate provides the factory pattern for persistent storage (SQLite) and job tracking.

Ports and Sockets

Component     Binding                          Default
zinit_server  Unix socket (IPC)                ~/hero/var/sockets/zinit_server.sock
zinit_ui      Unix socket (local tool access)  ~/hero/var/sockets/zinit_ui.sock
zinit_ui      TCP (HTTP dashboard)             9999

Core Concepts

Service

A service is a named container for one or more executable tasks with metadata. Services are defined in TOML configuration files and represent applications, daemons, or system components you want to manage.

Example:

[service]
name = "my-app"
exec = "/usr/bin/my-app --daemon"

Job

A job (previously called an "action") is a single executable task within a service. Jobs have triggers that determine when they run:

  • start: Runs when the service starts
  • stop: Runs when the service stops
  • check: Health check that runs periodically
  • manual: Runs only on explicit command

Run

A run is a persistent execution record of a job. Each time a job executes, a run is created with:

  • Execution status (pending, running, success, failed)
  • Exit code and error messages
  • Execution timestamp and duration
  • Complete output logs (stdout/stderr)

Runs are stored in SQLite and enable historical tracking, auditing, and debugging of job executions.
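As a mental model, a run record can be pictured as a small state-carrying struct. The sketch below is illustrative only; the field and type names are assumptions for the example, not zinit_lib's actual schema:

```rust
// Illustrative model of a run record. Names are assumptions for this
// example, not zinit_lib's actual schema.
#[derive(Debug, PartialEq)]
enum RunStatus {
    Pending,
    Running,
    Success,
    Failed,
}

#[allow(dead_code)]
struct Run {
    service: String,
    job: String,
    status: RunStatus,
    exit_code: Option<i32>,
    started_at_ms: u64,        // execution timestamp
    duration_ms: Option<u64>,  // filled in on completion
    log: Vec<String>,          // captured stdout/stderr lines
}

impl Run {
    // Record completion: exit code 0 maps to Success, anything else to Failed.
    fn finish(&mut self, exit_code: i32, duration_ms: u64) {
        self.exit_code = Some(exit_code);
        self.duration_ms = Some(duration_ms);
        self.status = if exit_code == 0 {
            RunStatus::Success
        } else {
            RunStatus::Failed
        };
    }
}

fn main() {
    let mut run = Run {
        service: "my-app".into(),
        job: "start".into(),
        status: RunStatus::Running,
        exit_code: None,
        started_at_ms: 0,
        duration_ms: None,
        log: vec![],
    };
    run.finish(0, 125);
    println!("{:?}", run.status); // Success
}
```

Persisting one such record per execution is what enables the historical queries described above.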

Factory Pattern

The factory pattern (implemented in zinit_lib) provides a unified entry point (ZinitDb) for accessing all persistence and configuration APIs:

let zinitdb = ZinitDb::with_defaults()?;

// Access APIs via namespaced methods
zinitdb.jobs.list()?;           // Job CRUD operations
zinitdb.runs.insert(...)?;      // Create execution records
zinitdb.services.parse(...)?;   // Load service configs
zinitdb.logging.append(...)?;   // Store job logs

This pattern isolates all database complexity and provides clean, type-safe access to:

  • JobsApi: Job lifecycle management
  • RunsApi: Execution tracking
  • ServicesApi: Configuration file handling
  • LoggingApi: Persistent log storage

For detailed API reference, see crates/zinit_lib.

Configuration

Service configs are TOML files in the config directory (default: ~/hero/cfg/zinit/ on macOS/Linux standalone, /etc/zinit/services/ in container/PID1 mode).

Important: After placing or modifying TOML config files, run zinit reload to import them into zinit's database.

For detailed service configuration defaults and specifications, see docs/SERVICE_SPECS.md

Currently Parsed TOML Sections

The legacy TOML loader (zinit reload) parses these sections:

[service]
name = "my-app"
exec = "/usr/bin/my-app --daemon"
description = "My application"  # optional
oneshot = false                  # optional (default: false)
kill_others = false              # optional (default: false)

[service.env]
RUST_LOG = "info"
DATABASE_URL = "postgres://localhost/mydb"

[dependencies]
requires = ["database"]      # must be running
after = ["logger"]           # start order only

Planned (Not Yet Parsed)

The following sections are defined in the spec but are not yet implemented in the TOML config loader. They will be silently ignored if present:

# NOT YET PARSED — use SDK builders or CLI flags instead
[lifecycle]       # restart policy, signals, timeouts
[health]          # health check configuration
[logging]         # log buffer settings

These features are available through the SDK builder API and the zinit add CLI commands. See docs/SDK.md for programmatic configuration.

Environment Variables in TOML

Environment variables are set under [service.env]:

[service.env]
DATABASE_URL = "postgres://localhost/mydb"
DEBUG = "true"

Targets

Virtual services for grouping:

[target]
name = "multi-user"

[dependencies]
requires = ["network", "logger", "database"]

Service Status

The status field controls supervisor behavior:

  • start (default): Automatically start and keep running
  • stop: Keep stopped (won't auto-start)
  • ignore: Supervisor ignores this service

Service Class

The class field protects critical services from bulk operations:

  • user (default): Normal service, affected by *_all commands
  • system: Protected service, skipped by bulk operations

System-class services are immune to start_all, stop_all, and delete_all commands.
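For example, to keep a service defined but stopped, a config can set the status field (used the same way in the xinet example later in this README):

```toml
[service]
name = "backup"
exec = "/usr/bin/run-backup"
status = "stop"   # defined, but only started explicitly: zinit start backup
```

Since class is not listed among the currently parsed TOML sections, marking a service as system-class is best done via the CLI: zinit add service backup --class system.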

CLI Commands

zinit list                    # List all services
zinit status <name>           # Show service status
zinit start <name>            # Start a service
  [--tree]                    # Also start required dependencies
zinit stop <name>             # Stop (cascades to dependents)
zinit restart <name>          # Restart a service
zinit kill <name> [signal]    # Send signal to service
zinit logs <name> [-n N]      # View service logs
zinit why <name>              # Show why service is blocked
zinit tree                    # Show dependency tree
zinit reload                  # Reload configuration
zinit add service <name>      # Add service at runtime
  [--description <text>]      # Service description
  [--class user|system]       # Service class (default: user)
  [--after <svc>]             # Start order dependency
  [--requires <svc>]          # Hard dependency
  [--wants <svc>]             # Soft dependency
  [--conflicts <svc>]         # Mutual exclusion
  [--persist]                 # Save to config directory
zinit add job <service> <job> # Add job to service
  --exec <cmd>                # Command to execute
  [--trigger start|stop|check] # Job trigger (manual if omitted)
  [--restart on-failure]      # Restart policy
  [--interval-ms <ms>]        # Check interval
  [--timeout-ms <ms>]         # Job timeout
zinit delete <service> [job]  # Delete service or job
zinit reset                   # Stop all services, delete all configs (with confirmation)
  [--force]                   # Skip confirmation prompt
zinit shutdown                # Stop all services, exit daemon
zinit poweroff                # Power off system (signals init)
zinit reboot                  # Reboot system (signals init)

# Xinet socket activation proxy commands
zinit xinet set <name>        # Create or update xinet proxy (replaces existing)
  --listen <addr>             # Listen address: tcp:host:port or unix:/path (repeatable)
  --backend <addr>            # Backend address: tcp:host:port or unix:/path
  --service <name>            # Zinit service to start on connection
  [--connect-timeout <secs>]  # Timeout for backend connect (default: 30)
  [--idle-timeout <secs>]     # Stop service after idle seconds (default: 0=never)
  [--single-connection]       # Allow only one connection at a time
zinit xinet delete <name>     # Delete xinet proxy
zinit xinet list              # List all xinet proxies
zinit xinet status [name]     # Show proxy status (all if no name given)

# Debug commands
zinit debug-state             # Full graph state dump
zinit debug-procs <name>      # Process tree for a service

# Demo & Testing
zinit demo                    # Create demo service configs and reload

Web Admin Dashboard

The zinit_ui crate provides a real-time web admin dashboard at http://localhost:9999 with six main tabs:

  • Actions: Display registered actions with interpreter, timeout, and tags
  • Jobs: View job instances with phase, status, and logs; includes statistics bar
  • Runs: Track execution runs with status and job counts
  • Services: Manage services, dependencies, and action mappings
  • Secrets: Store and manage encrypted configuration values
  • Logs: Query and filter system logs by source, level, and timestamp

The dashboard also includes:

  • Theme toggle: Dark/Light mode in the navbar
  • Refresh button: Manual refresh of all data
  • Search and filtering: Each tab has search/filter controls
  • Bulk operations: Service demo loading and job purging

All UI assets (Bootstrap 5.3.3, Bootstrap Icons) are embedded in the binary via rust-embed — no internet connection needed.

The UI connects to zinit_server via the SDK (AsyncZinitClient) over Unix socket.

# Start server + UI
make run

# Or start separately
zinit_server --config-dir ~/hero/cfg/zinit &
zinit_ui --port 9999

Xinet Socket Activation Proxy

Xinet enables on-demand service startup through socket activation. When a client connects to the proxy's listening socket, the proxy starts the backend service and forwards traffic.

Use Cases

  • Databases: Start postgres on first query
  • Development Servers: Start on HTTP request
  • Backup Services: Start on trigger
  • Rarely-Used Services: Reduce memory footprint

Example: PostgreSQL with Socket Activation

Create the backend service:

# /etc/zinit/services/postgres.toml
[service]
name = "postgres"
exec = "/usr/bin/postgres -D /var/lib/postgres"
status = "stop"  # Don't autostart

[lifecycle]
start_timeout_ms = 5000
stop_timeout_ms = 10000

Register the proxy (starts postgres on first connection):

zinit xinet set postgres-proxy \
  --listen tcp:localhost:5432 \
  --backend unix:/tmp/postgres.sock \
  --service postgres \
  --connect-timeout 10 \
  --idle-timeout 300  # Stop after 5 minutes idle

Now clients connect to localhost:5432 and postgres starts automatically.

Example: Multiple Listen Addresses

zinit xinet set postgres-proxy \
  --listen tcp:0.0.0.0:5432 \
  --listen unix:/run/postgres.sock \
  --backend unix:/tmp/postgres.sock \
  --service postgres

Proxy Features

  • Bidirectional Forwarding: TCP ↔ TCP, Unix ↔ Unix, TCP ↔ Unix
  • Auto-Start Backend: Starts service on first connection
  • Idle Timeout: Automatically stops service after inactivity
  • Connection Limits: Optional single-connection mode
  • Replace Mode: xinet set replaces existing proxy (stops old one first)
  • Connection Stats: Track active connections and total traffic

Path Configuration

zinit uses platform-specific default paths:

Linux (System/PID1 mode)

  • Config directory: /etc/zinit/services
  • System services: /etc/zinit/system (PID1 mode only)
  • Socket: /run/zinit.sock

macOS / Windows (Standalone mode)

  • Config directory: $HOME/hero/cfg/zinit
  • Socket: $HOME/hero/var/sockets/zinit_server.sock

You can override these with environment variables (see below).

See docs/PATHS.md for detailed path configuration documentation.

Environment Variables

Variable          Default                        Description
ZINIT_LOG_LEVEL   info                           Log level: trace, debug, info, warn, error
ZINIT_CONFIG_DIR  Platform-specific (see above)  Service config directory
ZINIT_SOCKET      Platform-specific (see above)  Unix socket path
ZINIT_CONTAINER   unset                          If set, zinit_pid1 runs in container mode

Example: Custom Paths

# Use custom config and socket directories
export ZINIT_CONFIG_DIR=/opt/my-services
export ZINIT_SOCKET=/tmp/my-zinit.sock

# Start server
zinit_server

# Connect with CLI
zinit list

Library Usage

SDK Client

Use zinit_sdk for IPC communication with the running server. The client is auto-generated from the OpenRPC specification:

use zinit_sdk::ZinitRPCAPIClient;

// Connect via Unix socket (async)
let client = ZinitRPCAPIClient::connect_socket("/path/to/zinit_server.sock").await?;

// List services
let list = client.service_list(zinit_sdk::ServiceListInput {
    context: None,
}).await?;

// Get service status
let status = client.service_status(zinit_sdk::ServiceStatusInput {
    name: "my-service".into(),
    context: None,
}).await?;

For ergonomic service construction, use the builder API:

use zinit_sdk::{ServiceBuilder, ActionBuilder};

let service = ServiceBuilder::new("myapp")
    .description("My application")
    .exec("./myapp --server")
    .requires(&["database"])
    .build();

Persistence Layer (Factory Pattern)

For direct database access and offline service configuration management, use zinit_lib and its factory pattern:

use zinit_lib::ZinitDb;

let db = ZinitDb::with_defaults()?;

// Access jobs, runs, services, and logs via namespaced APIs
db.jobs.list()?;
db.runs.insert(service, job, trigger, command)?;
db.services.load_from_file(path)?;
db.logging.append(run_id, "stdout", message)?;

The factory pattern provides a unified entry point to SQLite persistence with clean, type-safe APIs. See crates/zinit_lib/src/db/README.md for complete API documentation.

Docker Usage

# Build test image
docker build -t zinit-test -f docker/Dockerfile .

# Run (uses container mode automatically)
docker run -it --rm zinit-test

# With debug logging
docker run -it --rm -e ZINIT_LOG_LEVEL=debug zinit-test

# Explicit container mode
docker run -it --rm -e ZINIT_CONTAINER=1 zinit-test

Shutdown Ordering

Services are stopped in reverse dependency order:

Example: database <- app <- worker

Startup order:   database -> app -> worker
Shutdown order:  worker -> app -> database

When stopping a single service, dependents are stopped first:

  • zinit stop database stops worker, then app, then database
  • Dependencies are NOT auto-stopped (other services may need them)
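The ordering above can be sketched as a dependency-first traversal whose reverse gives the shutdown order. This is an illustrative sketch, not zinit's actual implementation; all names here are invented for the example:

```rust
use std::collections::{HashMap, HashSet};

/// Compute a shutdown order by reversing a dependency-first start order.
/// `deps` maps each service to the services it requires.
fn shutdown_order(deps: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    fn visit(
        node: &str,
        deps: &HashMap<&str, Vec<&str>>,
        seen: &mut HashSet<String>,
        order: &mut Vec<String>,
    ) {
        if !seen.insert(node.to_string()) {
            return; // already scheduled
        }
        for dep in deps.get(node).into_iter().flatten() {
            visit(dep, deps, seen, order); // dependencies start first
        }
        order.push(node.to_string());
    }

    let mut seen = HashSet::new();
    let mut start_order = Vec::new();
    let mut names: Vec<_> = deps.keys().collect();
    names.sort(); // deterministic traversal for the example
    for name in names {
        visit(name, deps, &mut seen, &mut start_order);
    }
    start_order.reverse(); // dependents stop before their dependencies
    start_order
}

fn main() {
    // database <- app <- worker: worker requires app, app requires database.
    let deps = HashMap::from([
        ("worker", vec!["app"]),
        ("app", vec!["database"]),
        ("database", vec![]),
    ]);
    println!("{:?}", shutdown_order(&deps)); // ["worker", "app", "database"]
}
```

A real supervisor also has to handle cycles and can stop independent services in parallel; this sketch only illustrates the ordering rule.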

Development

make check       # Verify workspace builds
make test        # Run unit tests
make build       # Build all release binaries
make install     # Build release and install to ~/hero/bin/
make lint        # Run clippy linter
make test-all    # Run all tests (unit + bash + rhai)

# Run server + UI
make run         # Release build, install to ~/hero/bin/, start on port 9999
make rundev      # Debug build, install to ~/hero/bin/, start on port 9999
make stop        # Graceful shutdown

# Run specific integration tests
make test-bash   # Bash-based integration tests
make test-rhai   # Rhai-based integration tests

# Playground
make play-tui    # Launch TUI with sample services for manual testing
make play-web    # Launch web UI with sample services

License

See LICENSE file.