# mos_volmgr

MOS Volume Management — a Rust workspace for storage management on MOS (Mycelium Operating System) nodes.

This README covers: workspace layout, boot flow, all 28 CLI commands with examples, the JSON-RPC API method reference, socat examples for direct socket access, the partition layout, topology templates, error codes, and ADR links.
## Architecture

```
┌─────────────────────────────────────┐
│         mos_volmgr_common           │
│   shared storage primitives (lib)   │
│ device discovery, GPT partitioning, │
│     mkfs, mount, RAID, subvolumes   │
└──────────┬──────────┬───────────────┘
           │          │
┌──────────┘          └────────────┐
│                                  │
┌──────────▼──────────┐ ┌──────────────▼──────────────┐
│     mos_sysvol      │ │        mos_volmgrd          │
│  boot-time oneshot  │ │  volume management daemon   │
│ (binary: mos_sysvol)│ │  JSON-RPC over Unix socket  │
└─────────────────────┘ │   (binary: mos_volmgrd)     │
                        └──────────────▲──────────────┘
                                       │ Unix socket
                        ┌──────────────┴──────────────┐
                        │         mos_volmgr          │
                        │    CLI client for daemon    │
                        │    (binary: mos-volmgr)     │
                        └─────────────────────────────┘
```
| Crate | Binary | Role |
|---|---|---|
| `mos_volmgr_common` | (library) | Shared primitives: sysfs device discovery, GPT via `gptman`, mkfs wrappers, mount/unmount, mdadm RAID, btrfs subvolumes, constants |
| `mos_sysvol` | `mos_sysvol` | Boot-time oneshot run by `my_init`. Detects, partitions, formats, and mounts persistent storage. Fixed 5-partition layout, RAID1 dual-disk support |
| `mos_volmgrd` | `mos_volmgrd` | Long-running daemon. Listens on `/run/mos_volmgrd/mos_volmgrd.sock` for JSON-RPC 2.0 requests. Queries, manages, and monitors storage |
| `mos_volmgr` | `mos-volmgr` | CLI client. Connects to the daemon, sends RPC calls, prints results |
## Building

```sh
cargo build --release
```

Binaries are produced at:

- `target/release/mos_sysvol`
- `target/release/mos_volmgrd`
- `target/release/mos-volmgr`
## Boot Flow

1. **mos_sysvol** runs at boot (called by `my_init` as a oneshot service):
   - Loads the btrfs kernel module
   - Checks for existing MOS storage by label
   - If found: assembles RAID arrays, mounts everything
   - If not found: discovers empty disks, partitions (GPT), formats, creates subvolumes, mounts
   - Configuration via kernel cmdline: `mossize=N` (data GB, default 4), `mosswap=N` (swap GB, default 2)
2. **mos_volmgrd** starts after `mos_sysvol` completes:
   - Binds a Unix socket at `/run/mos_volmgrd/mos_volmgrd.sock`
   - Accepts JSON-RPC 2.0 connections
   - Manages storage for the lifetime of the node
3. **mos-volmgr** is used interactively or from scripts to query and manage storage
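For example, a node that should provision 8 GB of data and 4 GB of swap would boot with a kernel cmdline containing (hypothetical values):

```
mossize=8 mosswap=4
```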
## CLI Usage

### Querying Storage

```sh
# Full inventory (disks, partitions, filesystems, mounts, subvolumes)
mos-volmgr status

# Individual queries
mos-volmgr disks
mos-volmgr partitions
mos-volmgr filesystems
mos-volmgr mounts
mos-volmgr subvolumes

# Free space on a specific disk
mos-volmgr free-space /dev/sda

# Which storage topologies are possible with the current disks
mos-volmgr topologies
```
### Statistics and Health

```sh
# Filesystem usage (size, used, available, percent)
mos-volmgr usage

# Btrfs device error counters
mos-volmgr device-stats /var/cache/system

# Btrfs scrub status
mos-volmgr scrub-status /var/cache/system

# Detailed btrfs filesystem usage breakdown
mos-volmgr fs-usage /var/cache/system

# Btrfs data/metadata/system allocation
mos-volmgr fs-df /var/cache/system
```
### Managing Volumes

#### Creating a Partition

Create a new partition in unallocated space on an existing GPT disk:

```sh
# Use all remaining free space
mos-volmgr create-partition \
  --disk /dev/sda \
  --name mydata \
  --part-type linux

# Specify a size (in bytes)
mos-volmgr create-partition \
  --disk /dev/sda \
  --name extra \
  --part-type linux \
  --size 107374182400   # 100 GiB
```

Partition types: `linux`, `esp`, `swap`, `bios_boot`.
#### Creating a Filesystem

```sh
# btrfs
mos-volmgr create-filesystem \
  --device /dev/sda6 \
  --fstype btrfs \
  --label MYDATA

# ext4
mos-volmgr create-filesystem \
  --device /dev/sda6 \
  --fstype ext4 \
  --label MYEXT4

# Also supported: vfat, swap, bcachefs
```
#### Mounting and Unmounting

```sh
mos-volmgr mount \
  --source /dev/sda6 \
  --target /mnt/mydata \
  --fstype btrfs \
  --options "noatime,compress=zstd"

mos-volmgr unmount /mnt/mydata
```
#### Subvolume Management

```sh
# Create a new subvolume on the MOSDATA filesystem
mos-volmgr create-subvolume myapp-data

# Delete a subvolume (refuses to delete the core subvolumes: system, etc, modules, vm-meta)
mos-volmgr delete-subvolume myapp-data

# Create a read-only snapshot
mos-volmgr snapshot system system-snap-20260325

# Set a quota (bytes)
mos-volmgr set-quota myapp-data 10737418240   # 10 GiB
```
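The byte counts passed to `--size` and `set-quota` are plain integers; a small helper like this (a sketch, not part of the shipped tooling) keeps the arithmetic honest:

```rust
/// Convert GiB to the raw byte counts expected by `--size` and `set-quota`.
fn gib(n: u64) -> u64 {
    n * 1024 * 1024 * 1024
}

fn main() {
    // The 10 GiB quota from the set-quota example above.
    println!("{}", gib(10)); // 10737418240
}
```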
#### Filesystem Utilities

```sh
# Start a scrub (background integrity check)
mos-volmgr scrub-start /var/cache/system

# Check scrub progress
mos-volmgr scrub-status /var/cache/system

# Start a balance (rebalance data across devices)
mos-volmgr balance-start /var/cache/system

# Check balance progress
mos-volmgr balance-status /var/cache/system

# Defragment a path recursively
mos-volmgr defrag /var/cache/system

# Grow a filesystem to fill the available device space
mos-volmgr resize-max /var/cache/system
```
### JSON Output

All commands accept `--json` for machine-parseable output:

```sh
mos-volmgr disks --json
mos-volmgr usage --json
mos-volmgr create-subvolume mydata --json
```
### Service Discovery

```sh
# Show the OpenRPC service document (all available methods)
mos-volmgr discover
```
## JSON-RPC API

The daemon speaks JSON-RPC 2.0 over a Unix socket at `/run/mos_volmgrd/mos_volmgrd.sock`. Each request is a single-line JSON object terminated by `\n`.
### Direct access with socat

```sh
# List disks
echo '{"jsonrpc":"2.0","id":1,"method":"storage.list_disks","params":{}}' | \
  socat - UNIX-CONNECT:/run/mos_volmgrd/mos_volmgrd.sock

# List free space on a disk
echo '{"jsonrpc":"2.0","id":1,"method":"storage.list_free_space","params":{"disk":"/dev/sda"}}' | \
  socat - UNIX-CONNECT:/run/mos_volmgrd/mos_volmgrd.sock

# Create a subvolume
echo '{"jsonrpc":"2.0","id":1,"method":"storage.create_subvolume","params":{"name":"mydata"}}' | \
  socat - UNIX-CONNECT:/run/mos_volmgrd/mos_volmgrd.sock

# OpenRPC service discovery
echo '{"jsonrpc":"2.0","id":1,"method":"rpc.discover","params":{}}' | \
  socat - UNIX-CONNECT:/run/mos_volmgrd/mos_volmgrd.sock
```
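The same newline-delimited framing works from any language. Here is a minimal Rust sketch using only the standard library; it assumes the daemon is running and the socket path above exists:

```rust
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixStream;

/// Frame a JSON-RPC 2.0 request as the single newline-terminated line
/// the daemon expects. `params` must already be serialized JSON.
fn frame_request(id: u64, method: &str, params: &str) -> String {
    format!("{{\"jsonrpc\":\"2.0\",\"id\":{id},\"method\":\"{method}\",\"params\":{params}}}\n")
}

fn main() -> std::io::Result<()> {
    let mut stream = UnixStream::connect("/run/mos_volmgrd/mos_volmgrd.sock")?;
    stream.write_all(frame_request(1, "storage.list_disks", "{}").as_bytes())?;

    // The reply is likewise a single JSON line.
    let mut reply = String::new();
    BufReader::new(&stream).read_line(&mut reply)?;
    print!("{reply}");
    Ok(())
}
```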
### Method Reference

#### Query / Inventory

| Method | Params | Description |
|---|---|---|
| `storage.list_disks` | `{}` | All block devices with type, size, model, serial |
| `storage.list_partitions` | `{}` | All partitions with label, UUID, fstype |
| `storage.list_filesystems` | `{}` | MOS-labeled filesystems (MOSEFI, MOSDATA, etc.) |
| `storage.list_mounts` | `{}` | MOS-managed mount points |
| `storage.list_subvolumes` | `{}` | Btrfs subvolumes on MOSDATA |
| `storage.list_free_space` | `{"disk": "/dev/sda"}` | Unpartitioned gaps on a disk |
| `storage.inventory` | `{}` | All of the above in one call |
| `storage.feasible_topologies` | `{}` | Which topology templates work with the current disks |
#### Statistics / Health

| Method | Params | Description |
|---|---|---|
| `storage.usage` | `{}` | Per-filesystem usage (total/used/available/percent) |
| `storage.device_stats` | `{"mount_point": "..."}` | Btrfs device error counters |
| `storage.scrub_status` | `{"mount_point": "..."}` | Scrub running/completed status |
| `storage.fs_usage` | `{"mount_point": "..."}` | Detailed btrfs space breakdown |
| `storage.fs_df` | `{"mount_point": "..."}` | Btrfs data/metadata/system allocation |
#### Volume Management

| Method | Params | Description |
|---|---|---|
| `storage.create_partition` | `{"disk","name","part_type","size_bytes?"}` | Add a partition to an existing GPT |
| `storage.create_filesystem` | `{"device","fstype","label"}` | Format a partition |
| `storage.mount` | `{"source","target","fstype","options"}` | Mount a filesystem |
| `storage.unmount` | `{"path": "..."}` | Unmount a path |
#### Subvolume Management

| Method | Params | Description |
|---|---|---|
| `storage.create_subvolume` | `{"name": "..."}` | Create a btrfs subvolume |
| `storage.delete_subvolume` | `{"name": "..."}` | Delete a subvolume (core subvolumes are refused) |
| `storage.snapshot_subvolume` | `{"source","snapshot"}` | Create a read-only snapshot |
| `storage.set_quota` | `{"name","max_bytes"}` | Set a qgroup quota |
#### Filesystem Utilities

| Method | Params | Description |
|---|---|---|
| `storage.scrub_start` | `{"mount_point": "..."}` | Start an integrity scrub |
| `storage.balance_start` | `{"mount_point": "..."}` | Start a data rebalance |
| `storage.balance_status` | `{"mount_point": "..."}` | Query balance progress |
| `storage.defrag` | `{"path": "..."}` | Recursive defragmentation |
| `storage.resize_max` | `{"mount_point": "..."}` | Grow a filesystem to the device size |
#### Discovery

| Method | Params | Description |
|---|---|---|
| `rpc.discover` | `{}` | OpenRPC service document |
## Error Codes

| Code | Meaning |
|---|---|
| -32700 | Parse error (malformed JSON) |
| -32600 | Invalid request |
| -32601 | Method not found |
| -32602 | Invalid params |
| -32603 | Internal error |
| -32000 | Storage operation failed |
| -32001 | Validation error |
| -32002 | Device not found |
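A failed call comes back as a standard JSON-RPC 2.0 error object on a single line. The exact `message` text below is illustrative, not taken from the daemon:

```json
{"jsonrpc":"2.0","id":1,"error":{"code":-32002,"message":"device not found: /dev/sdz"}}
```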
## Partition Layout (mos_sysvol)

When `mos_sysvol` initializes a disk, it creates a fixed 5-partition GPT layout:

| # | GPT Name | Size | Filesystem | Label |
|---|---|---|---|---|
| 1 | mosbios | 1 MB | (raw) | — |
| 2 | mosefi | 100 MB | FAT32 | MOSEFI |
| 3 | mosboot | 1 GB | ext4 | MOSBOOT |
| 4 | mosswap | N GB | swap | MOSSWAP |
| 5 | mosdata | N GB | btrfs | MOSDATA |
On dual-disk systems, both disks get this layout. mdadm RAID1 mirrors are created for the EFI (metadata v0.90) and boot (metadata v1.2) partitions. The data partitions use btrfs RAID1.
Btrfs subvolumes on MOSDATA: `system`, `etc`, `modules`, `vm-meta`, mounted at `/var/cache/<name>`.
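As a back-of-the-envelope check, the minimum disk size follows from summing the five partitions. This sketch treats the table's MB/GB figures as MiB/GiB, which is an assumption about units; exact alignment and padding are up to `mos_sysvol`:

```rust
/// Approximate minimum disk size in MiB implied by the fixed layout,
/// given the mosswap/mossize cmdline values in GiB (units assumed).
fn min_disk_mib(mosswap_gib: u64, mossize_gib: u64) -> u64 {
    1            // mosbios
    + 100        // mosefi
    + 1024       // mosboot
    + mosswap_gib * 1024
    + mossize_gib * 1024
}

fn main() {
    // Defaults mosswap=2, mossize=4 → 7269 MiB, roughly 7.1 GiB.
    println!("{}", min_disk_mib(2, 4));
}
```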
## Topology Templates

The daemon knows about these storage layout patterns for feasibility analysis:

| Topology | Min Disks | Requirements | Description |
|---|---|---|---|
| BtrfsSingle | 1 | — | Single disk, btrfs data |
| BcachefsSingle | 1 | — | Single disk, bcachefs data |
| DualIndependent | 2 | — | Independent btrfs per disk |
| SsdHddBcachefs | 2 | SSD + HDD | SSD cache + HDD backing via bcachefs |
| Bcachefs2Copy | 2 | — | Multi-device bcachefs with 2 replicas |
| BtrfsRaid1 | 2 | — | Mirrored btrfs across 2 disks |

Query feasibility with `mos-volmgr topologies`.
## Design Decisions

- ADR-001 — Workspace consolidation from mos_sysvol + mos_storage
- ADR-002 — Daemon redesign as query/management service
- ADR-003 — JSON-RPC 2.0 over Unix socket
## License

Apache-2.0