fix release mgmt and make sure everyone understands #1
Release Management & Build Pipeline
Status: Closed — core items completed in #12/#14. Remaining items moved to #15.
Completed
- `Dockerfile.pack` + `build-local.sh` (issue #12)
  - `make dist` compiles all binaries in a `rust:1.93-bookworm` container
  - `make pack` creates a thin `debian:bookworm-slim` image from `dist/`
  - `make push` pushes to `forge.ourworld.tf/lhumina_code/hero_zero`
  - `make deploy` does all of the above + deploys to herodev
  - `make demo` promotes `:dev` → `:demo` and deploys to herodemo
- Old Dockerfiles (`Dockerfile`, `Dockerfile.prod`) deleted (issue #14)
- `hero_services/README.md` — moved to #15
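The target chain above can be sketched as a Makefile. Target names, image names, and the registry path come from this issue; the recipe bodies (podman flags, tags, the `deploy.sh` helper) are illustrative assumptions, not the actual recipes from the repo:

```makefile
IMAGE := forge.ourworld.tf/lhumina_code/hero_zero

# Compile all binaries inside the rust:1.93-bookworm container (sketch).
dist:
	podman run --rm -v $(PWD):/src -w /src rust:1.93-bookworm cargo build --release
	mkdir -p dist && cp target/release/* dist/

# Package dist/ into a thin debian:bookworm-slim image via Dockerfile.pack.
pack: dist
	podman build -f Dockerfile.pack -t $(IMAGE):dev .

push: pack
	podman push $(IMAGE):dev

deploy: push
	./deploy.sh herodev        # hypothetical deploy helper

# Promote :dev to :demo and deploy to herodemo.
demo:
	podman tag $(IMAGE):dev $(IMAGE):demo
	podman push $(IMAGE):demo
	./deploy.sh herodemo       # hypothetical deploy helper
```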
Related Issues
Dev docs tgz release fixed: CI pipeline rebuilt (single job, versioned releases only), old stale releases cleaned up, v1.0.0 published and verified. One-liner install works: `curl -sSfL .../install.sh | bash`. See PRs #35, #36, #39, #41 on geomind_code/dev_docs.

Checkbox 3 update — hero_builder migration to Forge/Podman
Code complete. Migrated from Docker/ghcr.io to Podman/Forge: `forge.ourworld.tf/lhumina_code/hero_builder_base`.

CI blocked: runner host needs Podman installed. Filed mycelium/circle_ops#666.
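The registry move boils down to a retag-and-push. A minimal sketch, assuming the old image lived under the same `lhumina_code/hero_builder_base` path on ghcr.io (the `:latest` tag is illustrative):

```shell
# Rewrite the ghcr.io reference to the Forge registry path.
OLD=ghcr.io/lhumina_code/hero_builder_base:latest
NEW="forge.ourworld.tf/${OLD#ghcr.io/}"
echo "$NEW"   # forge.ourworld.tf/lhumina_code/hero_builder_base:latest

# Actual migration commands (commented out; require podman + registry auth):
# podman pull "$OLD"
# podman tag "$OLD" "$NEW"
# podman push "$NEW"
```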
PR ready to merge once unblocked: lhumina_code/hero_builder#4.
Checkbox 2 (arm/intel binaries) is blocked on the same ops dependency as checkbox 3 (mycelium/circle_ops#666 — Podman on runner host). Code changes for cross-compilation can proceed independently.
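The cross-compilation side of checkbox 2 can indeed be prepared while the ops dependency is pending. A minimal sketch of the Cargo config; the target triples are the standard Linux ones, and the linker names assume the Debian cross-gcc packages are installed in the build container:

```toml
# .cargo/config.toml — sketch, not the actual repo config.
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"

[target.x86_64-unknown-linux-gnu]
linker = "x86_64-linux-gnu-gcc"
```

With that in place, `rustup target add aarch64-unknown-linux-gnu` followed by `cargo build --release --target aarch64-unknown-linux-gnu` would produce the arm binaries.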
Related: #3 (service cleanup for demo readiness) — the binaries and container image are prerequisites for getting services running cleanly.
If we want to build container images inside workflows, we have a few options, as described in the Forgejo docs. Among these, the LXC approach is the best in terms of isolation/security, and it is the only one that is clearly compatible with Podman.
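In runner terms, the LXC approach means giving the runner a label that resolves to an LXC template. A sketch of the relevant `config.yml` fragment; the label name is made up, and the `lxc://` scheme follows my reading of the Forgejo runner docs, so verify the syntax against the current documentation:

```yaml
# forgejo-runner config.yml fragment (sketch)
runner:
  labels:
    - "lxc-bookworm:lxc://debian:bookworm"
```

Jobs would then select it with `runs-on: lxc-bookworm`.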
The final recommended setup looks like this:
As noted, these LXC runners can also host VMs inside, which we might find interesting for some use cases.
Some minor config is required to use Podman inside of the LXC environment. See my example here.
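For reference (this is my own sketch, not the linked example): when the LXC instance is managed by Incus/LXD, the usual prerequisite for running containers inside it is enabling nesting on the instance, and rootless Podman inside additionally needs subuid/subgid ranges for the user. Exact keys depend on the host setup:

```yaml
# Incus/LXD instance config (sketch)
config:
  security.nesting: "true"
```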
Ongoing work with ops: they are investigating how to make this work with Podman. We currently have issues with the runners.