implement iroh #1
Here's a practical spec for a fully replicated KVS over Iroh in Rust.
The right base is `iroh-docs` on top of `iroh`, `iroh-blobs`, and `iroh-gossip`. `iroh` gives you authenticated/encrypted peer-to-peer QUIC transport with relay fallback; `iroh-docs` gives you a mutable replicated document model; and `iroh-docs` already depends on `iroh-blobs` plus `iroh-gossip` for content transfer and live sync. In `iroh-docs`, each entry is keyed by key + author + namespace, and the entry value is a BLAKE3 hash + size + timestamp for the content, while the actual bytes are stored/transferred separately. (docs.iroh.computer)

What to build
Use one shared namespace secret as the initial “shared secret”.
That maps very naturally to Iroh's model: the namespace secret is the write capability, so every node that holds it can write into the same replicated document.

So the first version should be: one shared namespace (one doc), every node holding the namespace secret and writing with its own author key, with full replication and last-write-wins conflict resolution.
System goal
That is an important point: this is not Raft.
`iroh-docs` is a replicated sync substrate using reconciliation and live sync, not a linearizable consensus system. So this design gives you local-first replicated state, not strict serializable consensus. (Docs.rs)

Recommended data model
Use exactly one Iroh document namespace for one KVS.
Key layout
Store application keys as plain UTF-8 bytes, for example: `nodes/<ipv6>`, `admins/<ipv6>`, `groups/<name>`.
Value layout
Store the actual value bytes in blobs, and store the KVS mapping in the doc.
Conceptually: the doc maps each application key to a (blob hash, length, timestamp, author) record, and the blob store maps each hash to the actual value bytes.
This matches how `iroh-docs` is designed: the doc entry points to content by hash, and the content is handled separately. (GitHub)

Delete semantics
Represent delete as a tombstone.
`iroh-docs` already has prefix delete semantics at replica level, but for a KVS I would make deletes explicit in your app layer: write a tombstone value for the key instead of removing history, and have reads skip tombstoned keys. That is simpler than physically removing history at the start.
Replication model
Every node does all of this:
- `iroh::Endpoint`
- `iroh-blobs`
- `iroh-gossip`
- `iroh-docs`

This fits the documented stack exactly:
`Docs` is spawned with an `Endpoint`, blobs store, and gossip protocol, then attached to an Iroh router with the docs/blob/gossip ALPNs. (Docs.rs)

Membership model for v1
Keep it simple.
v1: a fixed, shared namespace secret plus a static list of seed peers.

later: proper invites via `DocTicket`s and per-peer capabilities.
Iroh already has a `DocTicket` type containing a document capability plus peer addresses; sharing can be read or write capability depending on how you construct/export it. (Docs.rs)

Rust spec
Crates
- `iroh = "0.97"`
- `iroh-blobs = "0.99"`
- `iroh-gossip = "0.97"`
- `iroh-docs = "0.97"`

Those versions reflect the docs currently published for the crates I checked. (Docs.rs)
Main components
1. Node runtime
2. Persistent local storage
Use file-backed storage for both the blob store and the docs store:
`iroh-docs` supports persistent file-based storage backed by `redb`, and persists all replicas to a single file. (Docs.rs)

3. Shared-secret bootstrap
Your config should contain:
- `namespace_secret`
- `known_peers[]`

Example config:
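A sketch of what that config could carry; the struct and field names below are illustrative, not an existing API:

```rust
/// Illustrative node configuration; names and types are assumptions, not a crate API.
pub struct NodeConfig {
    /// Shared namespace secret, distributed out of band (e.g. hex/base32 encoded).
    pub namespace_secret: String,
    /// Seed peers to dial on startup: node id plus direct or relay addresses.
    pub known_peers: Vec<String>,
    /// Where blobs, docs, and key material are persisted.
    pub data_dir: std::path::PathBuf,
}
```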
4. KVS API
Expose a small local API: `put`, `get`, `delete`, `list`, plus a way to subscribe to changes.
Conflict policy
Because `iroh-docs` entries are keyed by key + author + namespace, the same logical app key can have multiple authored entries. Your KVS layer should collapse those into one visible value by policy. (GitHub)

For v1, use:
Visible value for a logical key = newest timestamp wins
That gives deterministic convergence.
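A minimal sketch of that policy, assuming each candidate entry carries a millisecond timestamp and a 32-byte author id (the tuple later used as the tie-breaker in this issue):

```rust
/// One candidate entry for a logical key (field names are assumptions).
pub struct Candidate {
    pub ts_ms: u64,
    pub author_id: [u8; 32],
    pub blob_hash: [u8; 32],
}

/// Pick the visible value: newest timestamp wins, ties broken by author id.
pub fn pick_winner(candidates: impl IntoIterator<Item = Candidate>) -> Option<Candidate> {
    candidates
        .into_iter()
        .max_by(|a, b| (a.ts_ms, a.author_id).cmp(&(b.ts_ms, b.author_id)))
}
```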
Full replication guarantee
Your app-level rule should be: a key only counts as replicated on a node once both the doc entry and the blob bytes it references are present locally.

Implementation: subscribe to insert events and download any missing blob immediately, and additionally run a periodic scan that re-fetches anything still missing.
Because `iroh-docs` tracks the content hash/length but not the content itself, your KVS code must treat "entry present but blob missing" as an incomplete replica and repair it. (GitHub)

Suggested architecture
One namespace, many authors
Do not use one author for the whole cluster.
Better: give every node its own author keypair and have it sign its writes with that key.

Why: writes stay attributable to the node that made them, and the author id doubles as a deterministic tie-breaker when timestamps collide.
Seed peers
Each node should know a few seed peers (node id plus dialable address) to connect to on startup and to sync against.
No consensus in v1
Do not promise: linearizability or any form of strict serializability.

Promise instead: eventual convergence — every node ends up with the same LWW-resolved state once it has synced.
Example: node setup
This example follows the documented pattern for standing up Iroh + blobs + gossip + docs.
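A sketch of that wiring, modeled on the setup example published for earlier `iroh-docs` 0.x releases; constructor and method signatures have shifted between releases, so treat every call below as an assumption to verify against the versions you pin:

```rust
use anyhow::Result;
use iroh::{protocol::Router, Endpoint};
use iroh_blobs::{net_protocol::Blobs, ALPN as BLOBS_ALPN};
use iroh_docs::{protocol::Docs, ALPN as DOCS_ALPN};
use iroh_gossip::{net::Gossip, ALPN as GOSSIP_ALPN};

async fn spawn_node() -> Result<()> {
    // Endpoint with the default n0 discovery: QUIC with relay fallback.
    let endpoint = Endpoint::builder().discovery_n0().bind().await?;

    // Memory-backed stores for the sketch; production should use the
    // file-backed blob store and the redb-backed docs store instead.
    let blobs = Blobs::memory().build(&endpoint);
    let gossip = Gossip::builder().spawn(endpoint.clone()).await?;
    let docs = Docs::memory().spawn(&blobs, &gossip).await?;

    // One router accepting the blobs, gossip, and docs ALPNs.
    let _router = Router::builder(endpoint)
        .accept(BLOBS_ALPN, blobs)
        .accept(GOSSIP_ALPN, gossip)
        .accept(DOCS_ALPN, docs)
        .spawn()
        .await?;

    Ok(())
}
```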
That matches the published setup flow for `iroh-docs`. (Docs.rs)

For production, switch memory stores to persistent file-backed stores.
Example: logical KVS interface
Below is the shape I would implement, even if exact method names may need minor adjustment against the current crate API.
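For instance, a facade along these lines (field contents and method bodies elided; the method set mirrors the API settled on later in this issue):

```rust
use anyhow::Result;

/// Logical KVS facade over one iroh-docs namespace plus its blob store.
/// Field and method names are the shape proposed in this issue, not a crate API.
pub struct Kvs {
    // doc handle, blob store handle, author key, materialized index, seed peers, ...
}

impl Kvs {
    /// Write `value` under `key`, attributed to this node's author.
    pub async fn put(&self, key: &str, value: Vec<u8>) -> Result<()> { todo!() }
    /// Read the LWW-resolved value, or `None` if absent or tombstoned.
    pub async fn get(&self, key: &str) -> Result<Option<Vec<u8>>> { todo!() }
    /// Write a tombstone for `key`.
    pub async fn delete(&self, key: &str) -> Result<()> { todo!() }
    /// All live `(key, value)` pairs under a prefix.
    pub async fn list(&self, prefix: &str) -> Result<Vec<(String, Vec<u8>)>> { todo!() }
    /// Force one reconciliation pass against the configured seed peers.
    pub async fn sync_once(&self) -> Result<()> { todo!() }
}
```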
Materialized index
Maintain a local map from each logical key to its current winning entry (timestamp, author, blob hash, tombstone flag).
This index is rebuilt from the doc on startup and updated from subscriptions during runtime.
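A sketch of that index; the field names (`ts_ms`, `author_id`, `blob_hash`, `len`) are assumptions matching the LWW tuple and blob-pointer model described above:

```rust
use std::collections::HashMap;

/// Winning entry for one logical key, as resolved by the LWW policy.
pub struct IndexEntry {
    pub ts_ms: u64,
    pub author_id: [u8; 32],
    pub blob_hash: [u8; 32],
    /// Size of the value blob in bytes.
    pub len: u64,
    /// True if the winner is a tombstone; `get`/`list` skip these keys.
    pub tombstone: bool,
}

/// Logical key -> current winner. Rebuilt from the doc on startup,
/// updated from live-sync events at runtime.
pub type MaterializedIndex = HashMap<String, IndexEntry>;
```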
Example: using one shared secret
This is the important bootstrapping pattern.
Node 1
`iroh-docs` exposes namespace secrets as the write capability, and `DocTicket` can carry either read or write capability plus peer addresses. (GitHub)

Pseudo-flow: node 1 generates the namespace secret, creates the doc from it, then shares the secret and its own address with the other nodes out of band.
Node 2..N
The remaining nodes receive the namespace secret and the seed peers' `EndpointAddr`s out of band, import the namespace as a writable doc, and start syncing against those peers.

Example: app-level write path
Under the hood: serialise the value, add the bytes to the blob store to get a hash, then write the doc entry (author, key, hash, len) and emit a local event.
This is aligned with `Replica::insert` and `Replica::hash_and_insert`, which insert records referencing content by hash/len. (Docs.rs)

Example: conflict resolution
Suppose:

- one author writes `foo=1`
- another author writes `foo=2`

Your visible KVS result should be resolved deterministically: the entry with the newest timestamp wins, and the author id breaks ties.
That gives convergence without consensus.
Example project layout
- `config.rs`
- `runtime.rs`
- `kv.rs` — put/get/delete/list
- `sync.rs`
- `index.rs`
- `api.rs`

Operational spec
Durability
Recovery
On restart: rebuild the materialized index from the doc, check that every referenced blob is present locally, then resume sync with the seed peers.
Health check
Return healthy only if:
Anti-entropy
Even with live updates, also run periodic full reconciliation:
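A minimal sketch of such a loop, assuming a KVS-level `sync_once` operation like the one sketched earlier (names here are illustrative, not an iroh API):

```rust
use std::time::Duration;

/// Periodic anti-entropy pass: even with live gossip, reconcile with the seed
/// peers on a fixed interval as a safety belt. `sync_once` is the KVS-level
/// operation from the interface sketch above, not an iroh call.
async fn anti_entropy_loop<F, Fut>(every: Duration, sync_once: F)
where
    F: Fn() -> Fut,
    Fut: std::future::Future<Output = anyhow::Result<()>>,
{
    let mut tick = tokio::time::interval(every);
    loop {
        tick.tick().await;
        if let Err(err) = sync_once().await {
            tracing::warn!("anti-entropy pass failed: {err}");
        }
    }
}
```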
Security model for v1
What the shared secret means
Your shared secret should be the namespace secret.
That means: whoever holds the secret holds the write capability for the namespace — possession of the shared secret is the only access control in v1.
Good enough for first version
Not good enough long term
What not to do
Do not start with: Raft-style consensus or a hand-rolled sync protocol.
Those all fight the simplicity of your stated goal.
Your stated goal is: every node holds a full copy of the data, kept in sync by replicated events.
That is full-replication event-synced KVS, and Iroh docs/blobs/gossip are a better fit than Raft for the first version.
Minimal design decision summary

- one Iroh docs namespace per KVS, bootstrapped from a shared namespace secret
- one author keypair per node
- values stored as blobs; doc entries carry hash + length + timestamp + author
- last-write-wins by (timestamp, author id); deletes are tombstones
- live gossip sync plus periodic anti-entropy
Best first implementation
Why this fits Iroh
Because Iroh is explicitly built for direct authenticated/encrypted P2P connections with relay fallback, and `iroh-docs` is explicitly a replicated multi-dimensional key-value document protocol using set reconciliation, backed by blobs and gossip. (docs.iroh.computer)

License
`iroh-docs` is dual-licensed MIT OR Apache-2.0. (Docs.rs)

If you want, I can turn this into a compilable Rust starter project with `main.rs`, `kv.rs`, and a simple local HTTP API.

make the KVS as part of our codescalers
then make sure we store all info we need to remember into that KVS
make the KVS as a higher level library we can use in the ...codescalers_server
the manual sync we implemented can be removed
Implementation Spec for Issue #1 — Implement Iroh KVS
Objective
Build a fully-replicated, node-local, eventually-consistent key/value store backed by `iroh-docs` (on top of `iroh`, `iroh-blobs`, `iroh-gossip`) as a new workspace crate `hero_codescalers_kvs`. Migrate all persistent state currently held in `hero_codescalers_server`'s SQLite DB (nodes, admins, groups, group-members, per-node stats) into the KVS, and rip out the existing "manual sync" machinery (the `sync_queue` table, `sync_worker`, JSON-RPC `*.apply*` methods, and the `proxy` forwarder). The namespace secret is shared out of band and grants write capability; every node signs with its own author keypair; last-write-wins (LWW) by `(timestamp, author_id)`; deletes are tombstones; anti-entropy + live gossip sync.

Requirements
- A new crate `crates/hero_codescalers_kvs` that compiles independently and is reusable by other Hero services.
- `KvStore` high-level async API: `put(key, value) -> Result<()>`, `get(key) -> Result<Option<Vec<u8>>>`, `delete(key) -> Result<()>`, `list(prefix) -> Result<Vec<(String, Vec<u8>)>>`, `list_keys(prefix) -> Result<Vec<String>>`, `sync_once() -> Result<()>`, plus a `subscribe()` event stream and `shutdown()` method.
- Built on `iroh::Endpoint` + `iroh_blobs::store::fs::Store` + `iroh_gossip::net::Gossip` + `iroh_docs::protocol::Docs`; single namespace per store. Values are blobs; entries record `blob_hash + len + timestamp + author`.
- Persistent file-backed storage (under `data_dir`); in-memory variant (`iroh_blobs::store::mem`, `iroh_docs::store::memory`) for tests.
- Last-write-wins by `(timestamp_ms, author_id)` tuple.
- Deletes are tombstones (`KvValue::Tombstone { ts }` serialised via serde_json) so `get`/`list` skip them; a background reaper may prune tombstones older than a TTL.
- `hero_codescalers_server` is migrated to back its `nodes`, `admins`, `groups`, `group_members` data in the KVS instead of SQLite; sessions and users (read from `/etc/passwd`/`w`) stay as they are; jobs/logs (owned by hero_proc) stay as they are.

Files to Modify/Create
New crate `crates/hero_codescalers_kvs/`:

- `crates/hero_codescalers_kvs/Cargo.toml` — package manifest; deps `iroh = "0.97"`, `iroh-blobs = "0.99"`, `iroh-gossip = "0.97"`, `iroh-docs = "0.97"`, plus workspace `tokio`, `serde`, `serde_json`, `anyhow`, `thiserror`, `tracing`, `parking_lot`, `chrono`, `futures`, `hex`, `rand`, `data-encoding`, `tempfile` (dev-dep).
- `crates/hero_codescalers_kvs/src/lib.rs` — re-exports; crate docs.
- `crates/hero_codescalers_kvs/src/config.rs` — `KvConfig` (namespace secret, author secret, data_dir, seed_peers, anti_entropy_interval, tombstone_ttl, persistence mode), plus a `KvConfigBuilder` and helpers to parse/encode namespace/author secrets.
- `crates/hero_codescalers_kvs/src/value.rs` — `KvValue { Live { bytes }, Tombstone }` with serde_json envelope `{ v: 1, kind: "live"|"tombstone", ts_ms, data?: base64 }`.
- `crates/hero_codescalers_kvs/src/store.rs` — `KvStore` struct; holds `Endpoint`, `Docs`, `Doc`, `Author`, `NamespaceSecret`, blob store handle, seed-peers list, shutdown handle, events broadcaster. Implements `new_persistent`, `new_memory`, `put`, `get`, `delete`, `list`, `list_keys`, `sync_once`, `subscribe`, `shutdown`.
- `crates/hero_codescalers_kvs/src/lww.rs` — per-key entry reducer: given an iterator of doc entries for a logical key, pick the winner via `(ts_ms, author_id_bytes)`.
- `crates/hero_codescalers_kvs/src/anti_entropy.rs` — background task: iterate all entries, verify blob locally, fetch missing from seed peers; optional tombstone reaper.
- `crates/hero_codescalers_kvs/src/events.rs` — `KvEvent { Put { key }, Delete { key }, RemotePut { key, author }, ... }` over `tokio::sync::broadcast`.
- `crates/hero_codescalers_kvs/src/keys.rs` — helpers for keyspace (bytes <-> UTF-8 strings, prefix encoding).
- `crates/hero_codescalers_kvs/src/error.rs` — `KvError` enum with `thiserror`; `pub type Result<T> = std::result::Result<T, KvError>;`
- `crates/hero_codescalers_kvs/tests/roundtrip.rs` — single-node put/get/delete/list tests on the in-memory variant.
- `crates/hero_codescalers_kvs/tests/two_node_sync.rs` — two in-memory nodes, dial each other, assert LWW convergence on conflicting writes, assert tombstone wins over older put, assert `sync_once` fetches missing blobs.

Workspace root:
- `Cargo.toml` — add `crates/hero_codescalers_kvs` to `[workspace]` members; add `iroh`, `iroh-blobs`, `iroh-gossip`, `iroh-docs`, `thiserror`, `hex`, `data-encoding`, `rand`, `tempfile` under `[workspace.dependencies]`.
- `Cargo.lock` — regenerated.

Server migration (`crates/hero_codescalers_server/`):

- `Cargo.toml` — add `hero_codescalers_kvs = { path = "../hero_codescalers_kvs" }`; remove `rusqlite` (kept only if jobs/logs still need it — they don't, so remove).
- `src/main.rs` — replace `Db` with a new `KvState` facade; remove `sync_worker` module; remove `proxy` module; delete every `*.apply*`, `sync.status`, `sync.pending`, `remote.rpc` dispatch arm; remove `pending_syncs` from `stats`; drop the mycelium TCP listener (no longer needed since replication is over iroh); load KVS config from env/data_dir; call `KvStore::new_persistent` at startup and register self-node into KVS.
- `src/model/mod.rs` — rewrite exports; the `Db` type disappears, replaced by `KvState` (or similar) which wraps `KvStore` and exposes the same high-level Rust methods (`node_list`, `admin_add`, `group_create`, etc.) but reads/writes through the KVS.
- `src/model/db.rs` — delete and replace with `src/model/state.rs` (new) containing the `KvState` facade; migrations vanish; sync_* functions vanish.
- `src/model/node.rs` — rewrite: `Node` struct stays (serde_json in/out of KVS); all methods reimplemented against `KvStore` (`kvs.put("nodes/<ipv6>", json)` / `kvs.delete(...)` / `kvs.list("nodes/")`). Remove all `sync_enqueue_rpc` calls and all `*_apply_remote`/`*_apply_stats`/`*_apply_delete` methods — LWW + replication is handled inside KVS.
- `src/model/admin.rs` — same treatment: persist under `admins/<ipv6>`; drop apply_* methods.
- `src/model/group.rs` — persist groups under `groups/<name>` and members under `group_members/<name>/<ipv6>`; drop apply_* methods.
- `src/proxy.rs` — delete.
- `src/sync_worker.rs` — delete (the periodic stats task moves into `main.rs` or a tiny new `stats_worker.rs`; it no longer enqueues syncs — it just calls `state.node_update_stats(...)` which writes to KVS and lets replication propagate).
- `src/sessions.rs`, `src/users.rs` — unchanged.
- `openrpc.json` — drop `sync.status`, `sync.pending`, `node.apply`, `node.apply_delete`, `node.apply_stats`, `admin.apply`, `admin.apply_delete`, `group.apply`, `group.apply_delete`, `group.apply_member`, `group.apply_member_delete`, `remote.rpc`; drop `pending_syncs` from the `stats` result schema.
- `openrpc.client.generated.rs` — regenerated by the client macro; no manual edits.
- `heroservice.json` — unchanged.
UI (`crates/hero_codescalers_ui/`):

- `templates/index.html` — remove the "sync pending" stat tile (lines 231 and 445).
- `static/js/dashboard.js` — remove `stats-sync-pending`/`stat-sync-pending` wiring (lines 592, 886).

Docs:

- `README.md` — short note that replication is now via iroh-docs.

Implementation Plan
Step 1: Add workspace members and dependencies
Files:
`Cargo.toml` (root)

- Add `crates/hero_codescalers_kvs` to `[workspace]` members.
- Add `iroh = "0.97"`, `iroh-blobs = "0.99"`, `iroh-gossip = "0.97"`, `iroh-docs = "0.97"` to `[workspace.dependencies]`.
- Add `thiserror`, `hex`, `data-encoding`, `rand`, `tempfile` to `[workspace.dependencies]` (tempfile as a dev-dep convention).
- Do not remove `rusqlite` from workspace deps yet (kept until Step 7).

Dependencies: none.
Step 2: Scaffold `hero_codescalers_kvs` crate (types, config, errors, value envelope)

Files:
`crates/hero_codescalers_kvs/Cargo.toml`, `crates/hero_codescalers_kvs/src/lib.rs`, `crates/hero_codescalers_kvs/src/config.rs`, `crates/hero_codescalers_kvs/src/error.rs`, `crates/hero_codescalers_kvs/src/value.rs`, `crates/hero_codescalers_kvs/src/keys.rs`, `crates/hero_codescalers_kvs/src/events.rs`

- Error type (`KvError::{Io, Iroh, Docs, Blobs, Serde, NotFound, Shutdown, ...}`).
- `KvValue` envelope with `serde` impls; tombstones carry `ts_ms`; live values base64-encode bytes (see the sketch after this step).
- `KvConfig` struct + builder: `namespace_secret: NamespaceSecret`, `author_secret: Option<AuthorSecret>` (generated if `None`), `data_dir: PathBuf`, `seed_peers: Vec<NodeAddr>`, `anti_entropy_interval: Duration` (default 60s), `tombstone_ttl: Option<Duration>` (default 7d), `persistence: Persistence::{File, Memory}`.
- `KvEvent` enum + broadcaster type alias.

Dependencies: Step 1.
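As a sketch, the value envelope from this step could look like the following serde enum; the wire layout matches the `{ v, kind, ts_ms, data }` JSON described above, with base64 encoding of `data` left to the caller:

```rust
use serde::{Deserialize, Serialize};

/// Versioned value envelope stored in the blob referenced by each doc entry.
/// `kind` is the serde tag; `data` holds the base64-encoded payload of live values.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "kind", rename_all = "lowercase")]
pub enum KvValue {
    Live { v: u32, ts_ms: u64, data: String },
    Tombstone { v: u32, ts_ms: u64 },
}

impl KvValue {
    /// Tombstoned keys are hidden from `get`/`list`.
    pub fn is_tombstone(&self) -> bool {
        matches!(self, KvValue::Tombstone { .. })
    }
}
```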
Step 3: Implement persistent `KvStore` constructor + lifecycle

Files:
`crates/hero_codescalers_kvs/src/store.rs`

`KvStore::new_persistent(cfg: KvConfig) -> Result<Self>`:

- Create `data_dir`, subdirs `blobs/`, `docs/`, `keys/`.
- Load or generate the node secret key (`keys/node.secret`).
- `Endpoint::builder().secret_key(sk).bind()`.
- `iroh_blobs::store::fs::Store` at `blobs/`.
- `iroh_gossip::net::Gossip` bound to endpoint.
- `iroh_docs::protocol::Docs` with `fs` store at `docs/`.
- Register the protocols on a `Router`.
- Import `namespace_secret` to get a writable `Doc` handle (create if not present).
- Load or create the author via `docs.author_create`/import, store `author.id()`.
- Start live sync with the seed peers (`doc.start_sync(peers)`).

`KvStore::new_memory(cfg)` — same but using `iroh_blobs::store::mem::MemStore` and `iroh_docs::store::memory::Store`, no on-disk persistence.

`shutdown()` — cancel background tasks, close doc, close endpoint.

Dependencies: Step 2.
Step 4: Implement `put`/`get`/`delete`/`list`/`list_keys`

Files:
`crates/hero_codescalers_kvs/src/store.rs`, `crates/hero_codescalers_kvs/src/lww.rs`

`put(key, value)`:
- Build `KvValue::Live { ts_ms: now_ms(), data: value }`, serialise to JSON.
- `blobs.add_bytes(json_bytes)` -> hash.
- `doc.set_hash(author, key_bytes, hash, len)`.
- Emit `KvEvent::Put { key }`.

`delete(key)`:
- Build `KvValue::Tombstone { ts_ms: now_ms() }`, serialise.
- Same write path as `put`.
- Emit `KvEvent::Delete`.

`get(key)`:
- Fetch the entries stored under `key_bytes`.
- Reduce them with `lww::pick_winner`.
- `None` or `Tombstone` -> `Ok(None)`.

`list(prefix)` and `list_keys(prefix)` — prefix-range scan + LWW + skip tombstones.

`sync_once()` — call `doc.sync_with_peers(seed_peers.clone()).await`.

Dependencies: Step 3.
Step 5: Events subscription
Files:
`crates/hero_codescalers_kvs/src/events.rs`, `crates/hero_codescalers_kvs/src/store.rs`

- `KvStore::subscribe() -> broadcast::Receiver<KvEvent>`.
- Map doc `LiveEvent::InsertRemote`/`InsertLocal`/`NeighborUp`/`NeighborDown` into `KvEvent` and re-broadcast.
- Emit local events from `put`/`delete`.

Dependencies: Step 4.
Step 6: Anti-entropy + tombstone reaper
Files:
`crates/hero_codescalers_kvs/src/anti_entropy.rs`, `crates/hero_codescalers_kvs/src/store.rs`

- Spawn a `tokio::time::interval(anti_entropy_interval)` task.
- Periodically call `doc.sync_with_peers` as a safety belt.
- Prune old tombstones if `tombstone_ttl` is set.

Dependencies: Step 3, Step 4.
Step 7: Single-node unit tests
Files:
`crates/hero_codescalers_kvs/tests/roundtrip.rs`

- `put`/`get`, overwrite, delete, list, list_keys — all on `new_memory` (see the sketch after this step).

Dependencies: Step 4.
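A sketch of one such test against the `KvStore` API specified in this plan (constructor, builder, and method names are taken from this spec and may need adjustment against the final crate; a real test also has to supply or generate a namespace secret in the config):

```rust
use hero_codescalers_kvs::{KvConfig, KvStore, Persistence};

// Single-node roundtrip on the in-memory store: put, get, delete, get-after-delete.
#[tokio::test]
async fn put_get_delete_roundtrip() -> anyhow::Result<()> {
    let cfg = KvConfig::builder()
        .persistence(Persistence::Memory)
        .build()?;
    let store = KvStore::new_memory(cfg).await?;

    store.put("nodes/aaaa::1", b"{\"name\":\"n1\"}".to_vec()).await?;
    assert_eq!(
        store.get("nodes/aaaa::1").await?,
        Some(b"{\"name\":\"n1\"}".to_vec())
    );

    store.delete("nodes/aaaa::1").await?;
    assert_eq!(store.get("nodes/aaaa::1").await?, None);

    store.shutdown().await?;
    Ok(())
}
```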
Step 8: Two-node integration test
Files:
`crates/hero_codescalers_kvs/tests/two_node_sync.rs`

- Two `new_memory` nodes, mutual seeds; assert LWW, tombstone semantics, offline->online sync via `sync_once`.

Dependencies: Step 4, 5, 6.
Step 9: Server state facade (`KvState`) built on top of `KvStore`

Files:
`crates/hero_codescalers_server/src/model/state.rs` (new), `crates/hero_codescalers_server/src/model/mod.rs`, `crates/hero_codescalers_server/src/model/node.rs`, `crates/hero_codescalers_server/src/model/admin.rs`, `crates/hero_codescalers_server/src/model/group.rs`, `crates/hero_codescalers_server/Cargo.toml`

- Add the `hero_codescalers_kvs` path dep; remove `rusqlite`.
- `KvState { kvs: Arc<KvStore>, self_ipv6: String }` (see the sketch after this step).
- Key layout: `nodes/<ipv6>`, `admins/<ipv6>`, `groups/<name>`, `group_members/<group>/<ipv6>`.
- Reimplement node/admin/group methods against `KvStore`. Drop all `*_apply_*` methods.
- Delete `model/db.rs`.

Dependencies: Steps 2–6.
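A sketch of how one `KvState` helper could look under this step; the `Node.ipv6` field, the `&str` key parameter, and the method names are assumptions about the final facade:

```rust
// Hypothetical KvState helpers: persist a node record under nodes/<ipv6>
// and list all nodes back out via serde_json round-tripping.
impl KvState {
    pub async fn node_put(&self, node: &Node) -> anyhow::Result<()> {
        // `node.ipv6` is assumed to be the node's stable mycelium address.
        let key = format!("nodes/{}", node.ipv6);
        self.kvs.put(&key, serde_json::to_vec(node)?).await?;
        Ok(())
    }

    pub async fn node_list(&self) -> anyhow::Result<Vec<Node>> {
        let mut nodes = Vec::new();
        for (_key, bytes) in self.kvs.list("nodes/").await? {
            nodes.push(serde_json::from_slice(&bytes)?);
        }
        Ok(nodes)
    }
}
```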
Step 10: Gut the manual sync machinery in the server
Files:
`crates/hero_codescalers_server/src/main.rs`, `crates/hero_codescalers_server/src/sync_worker.rs`, `crates/hero_codescalers_server/src/proxy.rs`, `crates/hero_codescalers_server/openrpc.json`

- Delete `sync_worker.rs`, `proxy.rs`.
- `main.rs`: remove mycelium TCP listener, remove mod decls, add KvConfig construction from env vars, replace `Db::open` with `KvState::open`, drop all `*.apply*`/`sync.*`/`remote.rpc` dispatch arms, drop `pending_syncs` from `stats`, replace `sync_worker::spawn` with a small stats-only worker.
- `openrpc.json`: remove the listed methods and `pending_syncs` field.

Dependencies: Step 9.
Step 11: UI cleanup
Files:
`crates/hero_codescalers_ui/templates/index.html`, `crates/hero_codescalers_ui/static/js/dashboard.js`

Dependencies: Step 10.
Step 12: End-to-end validation + docs
Files:
`README.md`

Dependencies: all prior steps.
Parallelisation notes
Acceptance Criteria
- `cargo build --workspace` succeeds on a clean checkout.
- `cargo test -p hero_codescalers_kvs` passes, including the two-node integration test.
- `cargo test --workspace --lib` passes.
- `cargo clippy --workspace --all-targets -- -D warnings` passes.
- `crates/hero_codescalers_server/src/sync_worker.rs` and `crates/hero_codescalers_server/src/proxy.rs` no longer exist.
- `crates/hero_codescalers_server/src/model/db.rs` no longer exists; `KvState` is the only state facade.
- `grep -r sync_queue crates/`, `grep -r sync_enqueue_rpc crates/`, `grep -r node_apply crates/`, `grep -r admin_apply crates/`, `grep -r group_apply crates/`, `grep -r rusqlite crates/` all return no matches.
- `openrpc.json` contains no methods under `sync.*`, no `*.apply*`, no `remote.rpc`; the `stats` schema has no `pending_syncs`.
- A started server's data dir contains a `blobs/` dir and a `docs/` redb file.

Notes
- The author secret is stored at `data_dir/kvs/keys/author.secret`, 0600 perms.
- The old sync path listened on `[mycelium_ipv6]:9955` to receive sync RPC. That binding is no longer needed. Keep mycelium address detection only because `self_ipv6` remains the stable identifier used in keys.
- `node_update_stats` runs every 30s and now writes into the KVS each tick. Fine for a few nodes; flag if the cluster grows.
- `rpc-openrpc` client: `hero_codescalers_sdk` regenerates its client from `openrpc.json` via a proc macro; no manual edits required.
- Do not expose `hero_codescalers_kvs` as an OpenRPC service. It is a plain Rust library crate; no heroservice.json, no socket.

Critical Files for Implementation
- `Cargo.toml` (workspace root)
- `crates/hero_codescalers_kvs/src/store.rs`
- `crates/hero_codescalers_server/src/main.rs`
- `crates/hero_codescalers_server/src/model/state.rs`
- `crates/hero_codescalers_server/openrpc.json`

Test Results
All tests passing across the new KVS crate and the updated server.
`cargo test --workspace`:

- `hero_codescalers_kvs` — `roundtrip.rs` (single-node memory CRUD)
- `hero_codescalers_kvs` — `two_node_sync.rs` (two-node convergence)
- existing workspace crates (`hero_codescalers_sdk`, `nu_exec`)

Single-node tests exercise put/get/overwrite/delete semantics and LWW ordering against the in-memory blob + docs store. The two-node tests bring up two full
`KvStore` instances in the same process, wire them bidirectionally, and verify that writes, tombstones, and LWW-resolved overwrites propagate through the iroh-docs replica within 30 seconds.

Multi-instance sync script (`make test-sync`)

The new `scripts/test-kvs-sync.sh` spawns N full `hero_codescalers_server` processes, each with its own UDS, data directory, and node secret, sharing only a generated namespace secret. All communication happens over the UDS OpenRPC interface exactly as a real admin client would use it.

Scenarios exercised (NODES=2, SETTLE_MS=2000, TIMEOUT=30):
- `kv.put` on node 1 propagates to node 2 via gossip/sync
- `kv.put` on node 2 overwrites node 1's value under LWW
- `kv.delete` on node 1 produces a tombstone that node 2 observes
- with keys written under the `users/` and `groups/` prefixes, `kv.list_keys` on node 2 returns only the two `users/` keys in sorted order

Output of a clean run:
The script is wired into the Makefile as
`make test-sync` and respects `NODES=`, `TIMEOUT=`, and `SETTLE_MS=` environment overrides.

Implementation Summary
The issue is fully implemented. All server state now lives in a fully-replicated Iroh-backed KVS; the previous manual TCP/mycelium sync path has been removed.
New crate: `hero_codescalers_kvs`

A higher-level library over `iroh`, `iroh-docs`, `iroh-blobs`, and `iroh-gossip` (all at 0.97/0.99). Exposes a `KvStore` with `put`, `get`, `delete`, `list`, `list_keys`, `sync_once`, `start_sync_with`, `subscribe`, and `shutdown`.

Files added:
- `crates/hero_codescalers_kvs/Cargo.toml`
- `crates/hero_codescalers_kvs/src/lib.rs` — public re-exports
- `crates/hero_codescalers_kvs/src/config.rs` — `KvConfig` builder, namespace/author secret helpers, `Persistence::{Memory, File}`
- `crates/hero_codescalers_kvs/src/error.rs` — `KvError`/`Result` aliases
- `crates/hero_codescalers_kvs/src/events.rs` — `KvEvent` broadcast channel (PutLocal/DeleteLocal/PutRemote/DeleteRemote/SyncStarted/SyncFinished)
- `crates/hero_codescalers_kvs/src/keys.rs` — key-namespace helpers
- `crates/hero_codescalers_kvs/src/value.rs` — `KvValue::{Live, Tombstone}` JSON envelope with base64-encoded payloads and wall-clock timestamps
- `crates/hero_codescalers_kvs/src/lww.rs` — deterministic LWW reducer `(timestamp, author_id_bytes)`
- `crates/hero_codescalers_kvs/src/anti_entropy.rs` — periodic reconcile task backed by `tokio_util::CancellationToken`
- `crates/hero_codescalers_kvs/src/store.rs` — `KvStore` — wires `Endpoint` (N0 preset) + `FsStore`/`MemStore` blobs + `Gossip` + `Docs` + ALPN-dispatched `Router`, opens or imports the namespace, loads or creates an author, emits live events, starts initial sync, and exposes the public API

Tests added:
- `crates/hero_codescalers_kvs/tests/roundtrip.rs` — 5 single-node CRUD tests
- `crates/hero_codescalers_kvs/tests/two_node_sync.rs` — 3 two-node convergence tests (put propagation, LWW overwrite, tombstone win)

Server integration: `hero_codescalers_server`

- `src/model/state.rs` (new): `KvState` — wraps `Arc<KvStore>` and the local IPv6, provides async CRUD helpers over the `nodes/`, `admins/`, `groups/`, and `group_members/` prefixes
- `src/model/{node,admin,group}.rs`: rewritten to async methods on `KvState`; all data now flows through the KVS
- `src/model/mod.rs`: re-exports for the new `state` module, drops `db`
- `src/main.rs`: reads `HERO_CODESCALERS_KVS_{NAMESPACE_SECRET,AUTHOR_SECRET,SEEDS,DATA_DIR}`, constructs a single `KvStore`, drops the mycelium TCP listener, replaces `sync_worker` with an inline 30-second stats task, and routes 8 new OpenRPC methods through a UDS-only admin gate: `kv.put`, `kv.get`, `kv.delete`, `kv.list`, `kv.list_keys`, `kv.info`, `kv.peer_add`, `kv.sync_once`
- `openrpc.json` + `openrpc.client.generated.rs`: extended with the 8 new methods, param schemas, and result envelopes
- removed: `src/model/db.rs`, `src/proxy.rs`, `src/sync_worker.rs`, and the legacy `rusqlite` dependency

Admin model over UDS
`require_admin` treats any caller with `caller_ip == None` (i.e. UDS) as an admin, consistent with the existing OpenRPC-over-UDS design. Only HTTP/TCP callers have to pass the IP-based admin check.
`crates/hero_codescalers_ui/templates/index.html` and `static/js/dashboard.js` no longer reference the obsolete `stats.sync_pending`/`stat-sync-pending` fields that the old sync worker populated.
Tooling

- `Makefile`: added `test-kvs` (crate tests) and `test-sync` (multi-instance script) targets
- `scripts/test-kvs-sync.sh`: spawns N server instances, cross-wires them over the UDS OpenRPC interface with `kv.peer_add`, and asserts propagation, LWW, tombstones, and prefix listing within a bounded timeout

Test Results
All tests pass:
Notes
- `KvStore` returns entries through `pick_winner` so clients always see LWW-resolved reads.
- An immediate reconciliation can be forced over the UDS API via `kv.sync_once`.
- `EndpointAddr` from iroh 0.97 replaces the old `NodeAddr` alias — both names are re-exported from the crate for downstream ergonomics.