Compare commits


30 Commits
v9.8...main

Author SHA1 Message Date
Mikhail Chusavitin
7640f20714 Consolidate dist/ into cache/ and release/ subdirs
All intermediate build artifacts (binaries, live-build work dirs, overlay
stages, NVIDIA/NCCL/cuBLAS/john caches) now live under dist/cache/.
Final ISOs go to dist/release/ instead of scattered dist/easy-bee-v*/ and iso/out/.
dist/ is already gitignored; the iso/out/ entry is removed as redundant.
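
The layout this commit settles on can be sketched as:

```
dist/               # gitignored as a whole
├── cache/          # intermediate artifacts: binaries, live-build work dirs,
│                   # overlay stages, NVIDIA/NCCL/cuBLAS/john caches
└── release/        # final ISOs
```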

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 12:28:47 +03:00
Mikhail Chusavitin
1593bf3e76 Add scripts/build.sh -- single entry point for ISO builds
Auto-detects build mode: remote VM if BUILDER_HOST is set in .env,
local Docker otherwise. Cache hardcoded to dist/container-cache (gitignored).
All flags forwarded to build-in-container.sh.
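
A minimal sketch of the auto-detect logic (BUILDER_HOST and build-in-container.sh come from the commit message; the surrounding structure is an assumption):

```shell
#!/bin/sh
# Sketch only: choose remote VM builds when BUILDER_HOST is set in .env,
# local Docker otherwise, then forward all flags unchanged.
set -eu

[ -f .env ] && . ./.env   # may define BUILDER_HOST

detect_build_mode() {
    if [ -n "${BUILDER_HOST:-}" ]; then
        echo remote
    else
        echo local
    fi
}

echo "build mode: $(detect_build_mode)"
# ./scripts/build-in-container.sh "$@"   # all flags forwarded
```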

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 12:24:09 +03:00
Mikhail Chusavitin
ae80d7711e Add continuous hardware health monitoring and component detail view
- kmsg watcher now records kernel errors (GPU Xid, MCE, EDAC, storage I/O) at all times,
  not only during SAT tasks; flushImmediate writes directly to ComponentStatusDB
- New health_poller: polls ipmitool sdr every 60s for PSU health (watchdog:psu source)
- Hardware Summary card auto-refreshes every 30s via htmx without page reload
- Component rows (CPU/Memory/Storage/GPU/PSU) are now clickable -- opens a modal
  with per-component status, source, timestamp and last 20 history entries

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 09:56:39 +03:00
Mikhail Chusavitin
ca78b9df65 Add initramfs-level Drive Wipe tool (bee.wipe=all)
Installs a local-premount initramfs hook that intercepts bee.wipe=all before
squashfs is mounted. Shows a numbered disk selection TUI (pure POSIX sh), wipes
selected disks (nvme format / blkdiscard / dd fallback), syncs, and reboots.
Works even when squashfs fails to mount.
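
The wipe fallback chain might look roughly like this (function name and the dd parameters are assumptions; the tool order nvme format → blkdiscard → dd is from the message):

```shell
#!/bin/sh
# Sketch of the per-disk wipe fallback, pure POSIX sh as in the hook.
# Order: nvme format for NVMe namespaces, then blkdiscard, then dd.

wipe_disk() {
    dev=$1
    case $dev in
    /dev/nvme*)
        if command -v nvme >/dev/null 2>&1; then
            nvme format "$dev" --force && return 0
        fi
        ;;
    esac
    if command -v blkdiscard >/dev/null 2>&1; then
        blkdiscard "$dev" && return 0
    fi
    # Last resort: overwrite with zeros (sizing simplified in this sketch).
    dd if=/dev/zero of="$dev" bs=1M conv=fsync
}
```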

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 09:23:05 +03:00
Mikhail Chusavitin
5cafe63f33 Add Drive Wipe boot menu entry and overlay wipe script
Adds a "WIPE ALL DISKS" entry to both GRUB and isolinux menus (bee.wipe=all).
Includes bee-wipe-disks for manual use from a running live system.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 09:22:59 +03:00
Mikhail Chusavitin
b75e65bcb1 Version-stamp squashfs filename and restrict live-boot media selection
Squashfs versioning:
- ISO now contains filesystem-v<VERSION>.squashfs instead of the generic
  filesystem.squashfs, making it immediately visible which build is
  running (visible in /run/live/medium/live/ at boot time).
- Full build path: rename filesystem.squashfs → filesystem-v*.squashfs
  after lb build, before lb binary_checksums/binary_iso.
- Fast path: find and unpack whatever filesystem*.squashfs exists, repack
  as the new versioned name, remove the old file, update the ISO.
- needs_full_build: accept any filesystem*.squashfs so version changes
  alone don't force a full rebuild.
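
The fast-path rename step described above might be sketched as (directory layout and function name are assumptions):

```shell
#!/bin/sh
# Sketch: find whatever filesystem*.squashfs exists in the live dir and
# give it the versioned name, removing the old file via the rename.
set -eu

version_squashfs() {
    live_dir=$1 version=$2
    old=$(find "$live_dir" -maxdepth 1 -name 'filesystem*.squashfs' | head -n 1)
    if [ -z "$old" ]; then
        echo "no squashfs in $live_dir" >&2
        return 1
    fi
    new="$live_dir/filesystem-v${version}.squashfs"
    [ "$old" = "$new" ] || mv "$old" "$new"
    printf '%s\n' "$new"
}
```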

Media selection hardening:
- Add live-media=/dev/disk/by-label/<LABEL> to the kernel boot line in
  addition to the existing live-media-label=<LABEL>. live-boot will now
  open exactly the labeled device rather than scanning all block devices,
  preventing accidental use of squashfs files from local disks or
  stale virtual media attached via IPMI.
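
The resulting kernel boot line might look like this (placeholder <LABEL> as in the message; boot=live, live-media-label= and live-media= are live-boot's own parameters):

```
linux /live/vmlinuz boot=live live-media-label=<LABEL> live-media=/dev/disk/by-label/<LABEL> toram
```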

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 18:44:47 +03:00
Mikhail Chusavitin
8d173175eb Add chroot hook to strip all xattrs before squashfs creation
If the source chroot contains POSIX ACL xattrs set by dpkg at install
time, mksquashfs 4.5.1 (bookworm) writes a non-SQUASHFS_INVALID_BLK
value for xattr_id_table_start in the superblock even when -no-xattrs
is passed. The Linux 6.1 squashfs driver then fails with "unable to
read xattr id index table" and refuses to mount the filesystem.

Strip all xattrs from the chroot via Python3 (already present) immediately
before mksquashfs runs. With an xattr-free source tree the resulting
squashfs is guaranteed to have SQUASHFS_INVALID_BLK in the xattr field.
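
The strip step might look roughly like this as a chroot hook (the commit only says it is done via Python3; the wrapper and walker below are assumptions):

```shell
#!/bin/sh
# Sketch: remove every extended attribute under the chroot before
# mksquashfs runs, so the squashfs carries no xattr table at all.
set -e

strip_xattrs() {
    python3 - "$1" <<'EOF'
import os
import sys

root = sys.argv[1]
for dirpath, dirnames, filenames in os.walk(root):
    for name in dirnames + filenames:
        path = os.path.join(dirpath, name)
        if os.path.islink(path):
            continue  # symlink xattrs would need follow_symlinks=False; skip
        try:
            for attr in os.listxattr(path):
                os.removexattr(path, attr)
        except OSError:
            pass  # no xattr support on this filesystem, or file vanished
EOF
}
```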

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 17:44:09 +03:00
Mikhail Chusavitin
5cbde0448e Update submodules (bible, internal/chart)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 15:41:45 +03:00
Mikhail Chusavitin
49a09fde05 Disable xattrs in all mksquashfs calls
--chroot-squashfs-compression-options does not exist in live-build
bookworm (1:20230502). The correct mechanism is the MKSQUASHFS_OPTIONS
environment variable read by binary_rootfs.

Export MKSQUASHFS_OPTIONS="-no-xattrs" before lb build so live-build's
binary_rootfs picks it up, and add -no-xattrs explicitly to every
direct mksquashfs call in build.sh (fast-path repack and the dormant
split-layers function). Remove the invalid lb config option.
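
The mechanism boils down to a sketch like this (per the message, MKSQUASHFS_OPTIONS is read by live-build's binary_rootfs stage; guarded here so it only builds where lb is installed):

```shell
#!/bin/sh
# Export MKSQUASHFS_OPTIONS so live-build's binary_rootfs appends the
# flags to its mksquashfs invocation.
export MKSQUASHFS_OPTIONS="-no-xattrs"
if command -v lb >/dev/null 2>&1; then
    lb build
else
    echo "lb not installed; skipping build"
fi
```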

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 15:29:15 +03:00
Mikhail Chusavitin
f3962422c8 Fix lb config option name for squashfs compression options
--chroot-squashfs-options is not a valid lb_config option; the correct
name is --chroot-squashfs-compression-options. Without this fix lb config
aborts immediately, so the -no-xattrs flag (which prevents the
"unable to read xattr id index table" boot failure) was never applied.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 14:03:41 +03:00
Mikhail Chusavitin
ee36e3c711 Strip xattrs from squashfs to fix boot failure
Kernel squashfs driver fails with "unable to read xattr id index table"
when the squashfs contains POSIX ACL xattrs (system.posix_acl_*) written
by mksquashfs as unrecognised entries. This caused every built ISO to
drop to an initramfs shell at boot.

Add -no-xattrs to mksquashfs options so xattrs are stripped at build
time. xattrs are not needed in a live read-only rootfs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 13:56:26 +03:00
Mikhail Chusavitin
cca3b21d35 Revert squashfs layer split — live-boot cannot mount partial rootfs
split_live_squashfs_layers moved /usr out of filesystem.squashfs into a
separate 10-usr.squashfs, leaving a rootfs skeleton that live-boot
(1:20230131+deb12u1) cannot mount: the initramfs panics with
"Can not mount /dev/loop0 ... filesystem.squashfs".

live-boot in bookworm expects a single self-contained filesystem.squashfs.
Revert to the standard single-squashfs layout and remove the dead
multi-squashfs guard in needs_full_build().

The split_live_squashfs_layers function is kept for future reference.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 11:14:10 +03:00
Mikhail Chusavitin
75c33e073e Fix split_live_squashfs_layers crash under POSIX sh (dash)
trap RETURN is a bash extension not supported by /bin/sh on Debian.
With set -e active the unsupported trap call exited the build immediately
after lb build, before bootloader sync and ISO copy steps ran.

Remove both trap RETURN lines — explicit rm -rf at the end of the
function is sufficient for cleanup on the happy path.
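
The portable pattern the fix settles on, sketched (function body assumed):

```shell
#!/bin/sh
# trap ... RETURN is bash-only; under dash the trap call itself fails,
# and with set -e that failure aborts the whole build. The portable
# alternative is explicit cleanup at the end of the function.
set -e

split_layers() {
    tmp=$(mktemp -d)
    : "work happens in $tmp here"
    rm -rf "$tmp"        # explicit happy-path cleanup, no trap RETURN
    last_tmp=$tmp        # recorded only so the sketch is checkable
}
```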

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 09:52:31 +03:00
7b4bcc745a Split live rootfs into smaller squashfs layers 2026-05-03 23:15:22 +03:00
42774d44a6 Restore post-build GRUB and isolinux sync 2026-05-03 21:49:54 +03:00
5dc022ddf8 Drop post-build EFI bootloader patching 2026-05-03 21:22:53 +03:00
6623e159f5 Grow EFI image before syncing GRUB theme assets 2026-05-03 21:18:37 +03:00
bbd6d009f8 Avoid EFI image overflow when syncing GRUB theme 2026-05-03 21:16:36 +03:00
6c2b188ec9 Add no-GUI boot mode and quieter boot diagnostics 2026-05-03 21:14:45 +03:00
14505ef24a Remove easy bee ASCII logo banners 2026-05-03 21:07:13 +03:00
4f20c9246d Make UEFI boot safe and remove GRUB logo 2026-05-03 20:11:42 +03:00
eed157c2db Pin live boot medium to versioned ISO label 2026-05-03 15:52:07 +03:00
a2c8aea0df Fix GRUB theme ISO validation helper name 2026-05-03 14:45:16 +03:00
b21f03cd26 Reduce bee-selfheal log noise and slow timer 2026-05-03 14:38:12 +03:00
cac5b9c86e Detach install media after install-to-ram 2026-05-03 14:16:45 +03:00
b5d04ef045 Synchronize project versioning across build artifacts 2026-05-03 14:16:32 +03:00
fcd64438ea Update submodule references 2026-05-03 14:12:00 +03:00
0e39e7d960 Make toram default and add install-to-ram CLI 2026-05-03 14:07:47 +03:00
Mikhail Chusavitin
58d6da0e4f Fix live task logs and SAT windows 2026-04-30 17:26:45 +03:00
Mikhail Chusavitin
7ce73e34a4 Add NVMe block format tool 2026-04-30 16:27:25 +03:00
44 changed files with 2188 additions and 235 deletions

.gitignore

@@ -1,6 +1,5 @@
.env
.DS_Store
dist/
iso/out/
build-cache/
audit/bee


@@ -64,6 +64,8 @@ func run(args []string, stdout, stderr io.Writer) (exitCode int) {
return runExport(args[1:], stdout, stderr)
case "preflight":
return runPreflight(args[1:], stdout, stderr)
case "install-to-ram":
return runInstallToRAM(args[1:], stdout, stderr)
case "support-bundle":
return runSupportBundle(args[1:], stdout, stderr)
case "web":
@@ -90,6 +92,7 @@ func printRootUsage(w io.Writer) {
fmt.Fprintln(w, `bee commands:
bee audit --runtime auto|local|livecd --output stdout|file:<path>
bee preflight --output stdout|file:<path>
bee install-to-ram
bee export --target <device>
bee support-bundle --output stdout|file:<path>
bee web --listen :80 [--audit-path `+app.DefaultAuditJSONPath+`]
@@ -109,6 +112,8 @@ func runHelp(args []string, stdout, stderr io.Writer) int {
return runExport([]string{"--help"}, stdout, stdout)
case "preflight":
return runPreflight([]string{"--help"}, stdout, stdout)
case "install-to-ram":
return runInstallToRAM([]string{"--help"}, stdout, stdout)
case "support-bundle":
return runSupportBundle([]string{"--help"}, stdout, stdout)
case "web":
@@ -252,6 +257,32 @@ func runPreflight(args []string, stdout, stderr io.Writer) int {
return 0
}
func runInstallToRAM(args []string, stdout, stderr io.Writer) int {
fs := flag.NewFlagSet("install-to-ram", flag.ContinueOnError)
fs.SetOutput(stderr)
fs.Usage = func() {
fmt.Fprintln(stderr, "usage: bee install-to-ram")
}
if err := fs.Parse(args); err != nil {
if err == flag.ErrHelp {
return 0
}
return 2
}
if fs.NArg() != 0 {
fs.Usage()
return 2
}
application := app.New(platform.New())
logLine := func(s string) { fmt.Fprintln(stdout, s) }
if err := application.RunInstallToRAM(context.Background(), logLine); err != nil {
slog.Error("run install-to-ram", "err", err)
return 1
}
return 0
}
func runSupportBundle(args []string, stdout, stderr io.Writer) int {
fs := flag.NewFlagSet("support-bundle", flag.ContinueOnError)
fs.SetOutput(stderr)


@@ -24,6 +24,8 @@ var supportBundleServices = []string{
"bee-selfheal.service",
"bee-selfheal.timer",
"bee-sshsetup.service",
"display-manager.service",
"lightdm.service",
"nvidia-dcgm.service",
"nvidia-fabricmanager.service",
}
@@ -44,12 +46,128 @@ var supportBundleCommands = []struct {
{name: "system/mount.txt", cmd: []string{"mount"}},
{name: "system/df-h.txt", cmd: []string{"df", "-h"}},
{name: "system/dmesg.txt", cmd: []string{"dmesg"}},
{name: "system/dmesg-gui-video-input.txt", cmd: []string{"sh", "-c", `
if command -v dmesg >/dev/null 2>&1; then
dmesg | grep -iE 'nvidia|drm|fb|framebuffer|vesa|efi|lightdm|Xorg|input|hid|usb|keyboard|mouse|virtual keyboard|virtual mouse|ami|aspeed|ast' || echo "no GUI/video/input kernel messages found"
else
echo "dmesg not found"
fi
`}},
{name: "system/kernel-aer-nvidia.txt", cmd: []string{"sh", "-c", `
if command -v dmesg >/dev/null 2>&1; then
dmesg | grep -iE 'AER|NVRM|Xid|pcieport|nvidia' || echo "no AER/NVRM/Xid kernel messages found"
else
echo "dmesg not found"
fi
`}},
{name: "system/loginctl-sessions.txt", cmd: []string{"sh", "-c", `
if command -v loginctl >/dev/null 2>&1; then
loginctl list-sessions 2>&1 || true
else
echo "loginctl not found"
fi
`}},
{name: "system/loginctl-seats.txt", cmd: []string{"sh", "-c", `
if command -v loginctl >/dev/null 2>&1; then
loginctl list-seats 2>&1 || true
echo
for seat in $(loginctl list-seats --no-legend 2>/dev/null | awk '{print $1}'); do
echo "=== $seat ==="
loginctl seat-status "$seat" 2>&1 || true
echo
done
else
echo "loginctl not found"
fi
`}},
{name: "system/ps-gui.txt", cmd: []string{"sh", "-c", `
ps -ef | grep -iE 'lightdm|Xorg|X$|openbox|chromium|chrome|xinit|xsession' | grep -v grep || echo "no GUI processes found"
`}},
{name: "system/lspci-video-vv.txt", cmd: []string{"sh", "-c", `
if ! command -v lspci >/dev/null 2>&1; then
echo "lspci not found"
exit 0
fi
found=0
for dev in $(lspci -Dn | awk '$2 ~ /^03(00|02):$/ {print $1}'); do
found=1
echo "=== $dev ==="
lspci -s "$dev" -vv 2>&1 || true
echo
done
if [ "$found" -eq 0 ]; then
echo "no display-class PCI devices found"
fi
`}},
{name: "system/proc-fb.txt", cmd: []string{"cat", "/proc/fb"}},
{name: "system/drm-cards.txt", cmd: []string{"sh", "-c", `
if [ -d /sys/class/drm ]; then
for path in /sys/class/drm/card*; do
[ -e "$path" ] || continue
card=$(basename "$path")
echo "=== $card ==="
for f in status enabled dpms modes; do
[ -r "$path/$f" ] && printf " %-8s %s\n" "$f" "$(cat "$path/$f" 2>/dev/null)"
done
device=$(readlink -f "$path/device" 2>/dev/null || true)
[ -n "$device" ] && echo " device ${device##*/}"
echo
done
else
echo "/sys/class/drm not present"
fi
`}},
{name: "system/input-devices.txt", cmd: []string{"sh", "-c", `
if [ -r /proc/bus/input/devices ]; then
cat /proc/bus/input/devices
else
echo "/proc/bus/input/devices not readable"
fi
`}},
{name: "system/udevadm-input.txt", cmd: []string{"sh", "-c", `
if ! command -v udevadm >/dev/null 2>&1; then
echo "udevadm not found"
exit 0
fi
found=0
for dev in /dev/input/event*; do
[ -e "$dev" ] || continue
found=1
echo "=== $dev ==="
udevadm info --query=all --name="$dev" 2>&1 || true
echo
done
if [ "$found" -eq 0 ]; then
echo "no /dev/input/event* devices found"
fi
`}},
{name: "system/xinput-list.txt", cmd: []string{"sh", "-c", `
if command -v xinput >/dev/null 2>&1; then
DISPLAY=:0 xinput --list 2>&1 || true
else
echo "xinput not found"
fi
`}},
{name: "system/libinput-list-devices.txt", cmd: []string{"sh", "-c", `
if command -v libinput >/dev/null 2>&1; then
libinput list-devices 2>&1 || true
else
echo "libinput not found"
fi
`}},
{name: "system/systemctl-gui-units.txt", cmd: []string{"sh", "-c", `
if ! command -v systemctl >/dev/null 2>&1; then
echo "systemctl not found"
exit 0
fi
echo "=== unit files ==="
systemctl list-unit-files --no-pager --all 'lightdm*' 'display-manager*' 2>&1 || true
echo
echo "=== active units ==="
systemctl list-units --no-pager --all 'lightdm*' 'display-manager*' 2>&1 || true
echo
echo "=== failed units ==="
systemctl --failed --no-pager 2>&1 | grep -iE 'lightdm|display-manager|Xorg' || echo "no failed GUI units"
`}},
{name: "system/nvidia-smi-q.txt", cmd: []string{"nvidia-smi", "-q"}},
{name: "system/nvidia-smi-topo.txt", cmd: []string{"sh", "-c", `
@@ -236,6 +354,13 @@ var supportBundleOptionalFiles = []struct {
}{
{name: "system/kern.log", src: "/var/log/kern.log"},
{name: "system/syslog.txt", src: "/var/log/syslog"},
{name: "system/Xorg.0.log", src: "/var/log/Xorg.0.log"},
{name: "system/Xorg.0.log.old", src: "/var/log/Xorg.0.log.old"},
{name: "system/lightdm/lightdm.log", src: "/var/log/lightdm/lightdm.log"},
{name: "system/lightdm/x-0.log", src: "/var/log/lightdm/x-0.log"},
{name: "system/lightdm/x-0-greeter.log", src: "/var/log/lightdm/x-0-greeter.log"},
{name: "system/home-bee-xsession-errors.log", src: "/home/bee/.xsession-errors"},
{name: "system/home-bee-chromium-debug.log", src: "/tmp/bee-chrome/chrome_debug.log"},
{name: "system/fabricmanager.log", src: "/var/log/fabricmanager.log"},
{name: "system/nvlsm.log", src: "/var/log/nvlsm.log"},
{name: "system/fabricmanager/fabricmanager.log", src: "/var/log/fabricmanager/fabricmanager.log"},


@@ -5,6 +5,7 @@ import (
"fmt"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
)
@@ -18,7 +19,7 @@ type InstallDisk struct {
MountedParts []string // partition mount points currently active
}
const squashfsPath = "/run/live/medium/live/filesystem.squashfs"
const squashfsGlob = "/run/live/medium/live/*.squashfs"
// ListInstallDisks returns block devices suitable for installation.
// Excludes the current live boot medium but includes USB drives.
@@ -176,11 +177,22 @@ func inferLiveBootKind(fsType, source, deviceType, transport string) string {
// squashfs size × 1.5 to allow for extracted filesystem and bootloader.
// Returns 0 if the squashfs is not available (non-live environment).
func MinInstallBytes() int64 {
fi, err := os.Stat(squashfsPath)
if err != nil {
files, err := filepath.Glob(squashfsGlob)
if err != nil || len(files) == 0 {
return 0
}
return fi.Size() * 3 / 2
var total int64
for _, path := range files {
fi, statErr := os.Stat(path)
if statErr != nil {
continue
}
total += fi.Size()
}
if total == 0 {
return 0
}
return total * 3 / 2
}
// toramActive returns true when the live system was booted with toram.
@@ -222,12 +234,10 @@ func DiskWarnings(d InstallDisk) []string {
humanBytes(min), humanBytes(d.SizeBytes)))
}
if toramActive() {
sqFi, err := os.Stat(squashfsPath)
if err == nil {
free := freeMemBytes()
if free > 0 && free < sqFi.Size()*2 {
w = append(w, "toram mode — low RAM, extraction may be slow or fail")
}
free := freeMemBytes()
min := MinInstallBytes()
if free > 0 && min > 0 && free < (min*4/3) {
w = append(w, "toram mode — low RAM, extraction may be slow or fail")
}
}
return w


@@ -14,6 +14,22 @@ import (
const installToRAMDir = "/dev/shm/bee-live"
const copyProgressLogStep int64 = 100 * 1024 * 1024
var liveMediumSquashfsGlob = func() ([]string, error) {
return filepath.Glob("/run/live/medium/live/*.squashfs")
}
var runRemountMedium = func() ([]byte, error) {
return exec.Command("bee-remount-medium").CombinedOutput()
}
var umountLiveMedium = func() error {
return exec.Command("umount", "/run/live/medium").Run()
}
var ejectDevice = func(device string) error {
return exec.Command("eject", device).Run()
}
func (s *System) IsLiveMediaInRAM() bool {
return s.LiveMediaRAMState().InRAM
}
@@ -140,8 +156,7 @@ func (s *System) RunInstallToRAM(ctx context.Context, logFunc func(string)) (ret
return nil
}
squashfsFiles, err := filepath.Glob("/run/live/medium/live/*.squashfs")
sourceAvailable := err == nil && len(squashfsFiles) > 0
squashfsFiles, sourceAvailable := ensureLiveMediumAvailable(log)
dstDir := installToRAMDir
@@ -171,7 +186,7 @@ func (s *System) RunInstallToRAM(ctx context.Context, logFunc func(string)) (ret
}
goto bindMedium
}
return fmt.Errorf("no squashfs files found in /run/live/medium/live/ and no prior RAM copy in %s — reconnect the installation medium and retry", dstDir)
return fmt.Errorf("no squashfs files found in /run/live/medium/live/ and no prior RAM copy in %s — reconnect the installation medium and retry (or run bee-remount-medium as root)", dstDir)
}
{
@@ -254,10 +269,83 @@ bindMedium:
if status.InRAM {
log(fmt.Sprintf("Verification passed: live medium now served from %s.", describeLiveBootSource(status)))
}
log("Done. Squashfs files are in RAM. Installation media can be safely disconnected.")
detachInstallMedium(status, log)
log("Done. Squashfs files are in RAM. Installation media has been detached when possible.")
return nil
}
func tryRemountLiveMedium(log func(string)) error {
output, err := runRemountMedium()
trimmed := strings.TrimSpace(string(output))
if err != nil {
if trimmed != "" && log != nil {
for _, line := range strings.Split(trimmed, "\n") {
log("bee-remount-medium: " + line)
}
}
return err
}
if trimmed != "" && log != nil {
for _, line := range strings.Split(trimmed, "\n") {
log("bee-remount-medium: " + line)
}
}
return nil
}
func ensureLiveMediumAvailable(log func(string)) ([]string, bool) {
squashfsFiles, err := liveMediumSquashfsGlob()
sourceAvailable := err == nil && len(squashfsFiles) > 0
if sourceAvailable {
return squashfsFiles, true
}
if log != nil {
log("Live medium not mounted at /run/live/medium — attempting automatic remount scan...")
}
if remountErr := tryRemountLiveMedium(log); remountErr != nil {
if log != nil {
log(fmt.Sprintf("Automatic remount did not restore the live medium: %v", remountErr))
}
return squashfsFiles, false
}
squashfsFiles, err = liveMediumSquashfsGlob()
sourceAvailable = err == nil && len(squashfsFiles) > 0
if sourceAvailable && log != nil {
log("Live medium restored after remount scan.")
}
return squashfsFiles, sourceAvailable
}
func detachInstallMedium(status LiveBootSource, log func(string)) {
if log == nil {
log = func(string) {}
}
log("Detaching original installation medium...")
if err := umountLiveMedium(); err != nil {
log(fmt.Sprintf("Warning: could not unmount /run/live/medium: %v", err))
} else {
log("Unmounted /run/live/medium.")
}
device := strings.TrimSpace(status.Device)
if device == "" {
device = strings.TrimSpace(status.Source)
}
if device == "" || !strings.HasPrefix(device, "/dev/") {
log("No block device identified for eject; skipping media eject.")
return
}
if err := ejectDevice(device); err != nil {
log(fmt.Sprintf("Warning: could not eject %s: %v", device, err))
return
}
log(fmt.Sprintf("Ejected %s.", device))
}
func verifyInstallToRAMStatus(status LiveBootSource, dstDir string, mediumRebound bool, log func(string)) error {
if status.InRAM {
return nil


@@ -1,6 +1,9 @@
package platform
import "testing"
import (
"fmt"
"testing"
)
func TestInferLiveBootKind(t *testing.T) {
t.Parallel()
@@ -124,3 +127,156 @@ func TestShouldLogCopyProgress(t *testing.T) {
t.Fatal("expected final completion log")
}
}
func TestTryRemountLiveMedium(t *testing.T) {
t.Parallel()
orig := runRemountMedium
t.Cleanup(func() {
runRemountMedium = orig
})
t.Run("success", func(t *testing.T) {
runRemountMedium = func() ([]byte, error) {
return []byte("[10:57:31] Mounted /dev/sr1 on /run/live/medium\n"), nil
}
var logs []string
if err := tryRemountLiveMedium(func(msg string) { logs = append(logs, msg) }); err != nil {
t.Fatalf("tryRemountLiveMedium() error = %v", err)
}
if len(logs) != 1 || logs[0] != "bee-remount-medium: [10:57:31] Mounted /dev/sr1 on /run/live/medium" {
t.Fatalf("logs=%v", logs)
}
})
t.Run("failure", func(t *testing.T) {
runRemountMedium = func() ([]byte, error) {
return []byte("must be run as root\n"), fmt.Errorf("exit status 1")
}
var logs []string
err := tryRemountLiveMedium(func(msg string) { logs = append(logs, msg) })
if err == nil {
t.Fatal("expected error")
}
if len(logs) != 1 || logs[0] != "bee-remount-medium: must be run as root" {
t.Fatalf("logs=%v", logs)
}
})
}
func TestEnsureLiveMediumAvailableRemountsSource(t *testing.T) {
t.Parallel()
origGlob := liveMediumSquashfsGlob
origRemount := runRemountMedium
t.Cleanup(func() {
liveMediumSquashfsGlob = origGlob
runRemountMedium = origRemount
})
callCount := 0
liveMediumSquashfsGlob = func() ([]string, error) {
callCount++
if callCount == 1 {
return nil, nil
}
return []string{"/run/live/medium/live/filesystem.squashfs"}, nil
}
runRemountMedium = func() ([]byte, error) {
return []byte("Mounted /dev/sr1 on /run/live/medium\n"), nil
}
var logs []string
files, ok := ensureLiveMediumAvailable(func(msg string) { logs = append(logs, msg) })
if !ok {
t.Fatal("expected live medium to become available after remount")
}
if callCount < 2 {
t.Fatalf("liveMediumSquashfsGlob called %d times, want at least 2", callCount)
}
if len(files) != 1 || files[0] != "/run/live/medium/live/filesystem.squashfs" {
t.Fatalf("files=%v", files)
}
found := false
for _, msg := range logs {
if msg == "Live medium restored after remount scan." {
found = true
break
}
}
if !found {
t.Fatalf("expected remount success log, logs=%v", logs)
}
}
func TestDetachInstallMedium(t *testing.T) {
t.Parallel()
origUmount := umountLiveMedium
origEject := ejectDevice
t.Cleanup(func() {
umountLiveMedium = origUmount
ejectDevice = origEject
})
t.Run("success", func(t *testing.T) {
var umountCalled bool
var ejected string
umountLiveMedium = func() error {
umountCalled = true
return nil
}
ejectDevice = func(device string) error {
ejected = device
return nil
}
var logs []string
detachInstallMedium(LiveBootSource{Kind: "cdrom", Device: "/dev/sr1"}, func(msg string) { logs = append(logs, msg) })
if !umountCalled {
t.Fatal("expected umountLiveMedium to be called")
}
if ejected != "/dev/sr1" {
t.Fatalf("ejected=%q want /dev/sr1", ejected)
}
if len(logs) < 3 {
t.Fatalf("logs=%v", logs)
}
})
t.Run("no device", func(t *testing.T) {
umountLiveMedium = func() error { return nil }
ejectDevice = func(device string) error {
t.Fatalf("unexpected eject for %q", device)
return nil
}
var logs []string
detachInstallMedium(LiveBootSource{Kind: "ram", Source: "tmpfs"}, func(msg string) { logs = append(logs, msg) })
found := false
for _, msg := range logs {
if msg == "No block device identified for eject; skipping media eject." {
found = true
break
}
}
if !found {
t.Fatalf("logs=%v", logs)
}
})
t.Run("eject failure is warning only", func(t *testing.T) {
umountLiveMedium = func() error { return nil }
ejectDevice = func(device string) error { return fmt.Errorf("exit status 1") }
var logs []string
detachInstallMedium(LiveBootSource{Kind: "usb", Device: "/dev/sdb1"}, func(msg string) { logs = append(logs, msg) })
found := false
for _, msg := range logs {
if msg == "Warning: could not eject /dev/sdb1: exit status 1" {
found = true
break
}
}
if !found {
t.Fatalf("logs=%v", logs)
}
})
}


@@ -125,6 +125,8 @@ func defaultTaskPriority(target string, params taskParams) int {
return taskPriorityInstall
case "install-to-ram":
return taskPriorityInstallToRAM
case "nvme-format":
return taskPriorityInstall
case "audit":
return taskPriorityAudit
case "nvidia-bench-perf", "nvidia-bench-power", "nvidia-bench-autotune":
@@ -1677,6 +1679,56 @@ func (h *handler) handleAPIBenchmarkResults(w http.ResponseWriter, r *http.Reque
fmt.Fprint(w, renderBenchmarkResultsCard(h.opts.ExportDir))
}
// ── Hardware summary / component detail ──────────────────────────────────────
// handleAPIHardwareSummary returns the hardware summary card HTML fragment for
// htmx polling (hx-get="/api/hardware-summary" hx-swap="outerHTML").
func (h *handler) handleAPIHardwareSummary(w http.ResponseWriter, _ *http.Request) {
w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.Header().Set("Cache-Control", "no-store")
fmt.Fprint(w, renderHardwareSummaryCard(h.opts))
}
// handleAPIComponentDetail returns an HTML fragment describing the current and
// historical status for one component type (cpu, memory, storage, gpu, psu).
func (h *handler) handleAPIComponentDetail(w http.ResponseWriter, r *http.Request) {
compType := r.PathValue("type")
var exact, prefixes []string
var title string
switch compType {
case "cpu":
title = "CPU"
exact = []string{"cpu:all"}
case "memory":
title = "Memory"
exact = []string{"memory:all"}
prefixes = []string{"memory:"}
case "storage":
title = "Storage"
exact = []string{"storage:all"}
prefixes = []string{"storage:"}
case "gpu":
title = "GPU"
prefixes = []string{"pcie:gpu:"}
case "psu":
title = "PSU"
prefixes = []string{"psu:"}
default:
http.NotFound(w, r)
return
}
var records []app.ComponentStatusRecord
if h.opts.App != nil && h.opts.App.StatusDB != nil {
all := h.opts.App.StatusDB.All()
records = matchedRecords(all, exact, prefixes)
}
w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.Header().Set("Cache-Control", "no-store")
fmt.Fprint(w, renderComponentDetail(title, records))
}
func (h *handler) rollbackPendingNetworkChange() error {
h.pendingNetMu.Lock()
pnc := h.pendingNet


@@ -85,6 +85,27 @@ func TestHandleAPIBlackboxStatusReturnsPersistedState(t *testing.T) {
}
}
func TestParseNVMeFormatModes(t *testing.T) {
raw := `
lbaf 0 : ms:0 lbads:9 rp:0x2 (in use)
lbaf 1 : ms:8 lbads:9 rp:0x1
lbaf 2 : ms:0 lbads:12 rp:0
`
modes := parseNVMeFormatModes(raw)
if len(modes) != 3 {
t.Fatalf("modes=%#v want 3 modes", modes)
}
if modes[0].Mode != 0 || modes[0].DataBytes != 512 || modes[0].MetadataBytes != 0 || !modes[0].InUse {
t.Fatalf("mode 0=%#v", modes[0])
}
if modes[1].Label != "MODE 1 (512+8)" {
t.Fatalf("mode 1 label=%q", modes[1].Label)
}
if modes[2].DataBytes != 4096 || modes[2].MetadataBytes != 0 {
t.Fatalf("mode 2=%#v", modes[2])
}
}
func TestHandleAPIBenchmarkNvidiaRunQueuesSelectedGPUs(t *testing.T) {
globalQueue.mu.Lock()
originalTasks := globalQueue.tasks


@@ -0,0 +1,76 @@
package webui
import (
"bytes"
"context"
"log/slog"
"os/exec"
"time"
"bee/audit/internal/app"
"bee/audit/internal/collector"
)
const healthPollInterval = 60 * time.Second
const psuIPMITimeout = 15 * time.Second
// healthPoller runs periodic health checks for hardware components that do not
// emit kernel log events (e.g. PSU). Results are written to ComponentStatusDB.
type healthPoller struct {
statusDB *app.ComponentStatusDB
}
func newHealthPoller(statusDB *app.ComponentStatusDB) *healthPoller {
return &healthPoller{statusDB: statusDB}
}
func (p *healthPoller) start() {
goRecoverLoop("health poller", 5*time.Second, p.run)
}
func (p *healthPoller) run() {
ticker := time.NewTicker(healthPollInterval)
defer ticker.Stop()
for range ticker.C {
p.pollPSU()
}
}
func (p *healthPoller) pollPSU() {
if p.statusDB == nil {
return
}
ctx, cancel := context.WithTimeout(context.Background(), psuIPMITimeout)
defer cancel()
cmd := exec.CommandContext(ctx, "ipmitool", "sdr")
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
// IPMI not available or not a server — skip silently.
slog.Debug("health poller: ipmitool sdr unavailable", "err", err)
return
}
slots := collector.PSUSlotsFromSDR(out.String())
if len(slots) == 0 {
return
}
const source = "watchdog:psu"
for slot, psu := range slots {
key := "psu:" + slot
status := psu.Status
if status == "" {
status = "Unknown"
}
detail := ""
switch status {
case "Critical":
detail = "PSU sensor reported non-OK state"
case "Warning":
detail = "PSU sensor in warning state"
}
p.statusDB.Record(key, source, status, detail)
}
}


@@ -91,6 +91,7 @@ func (j *jobState) writeLogLineLocked(line string) {
j.logBuf = bufio.NewWriterSize(f, 64*1024)
}
_, _ = j.logBuf.WriteString(line + "\n")
_ = j.logBuf.Flush()
}
// closeLog flushes and closes the log file. Called after all task output is done.


@@ -73,6 +73,9 @@ func (w *kmsgWatcher) run() {
w.mu.Lock()
if w.window != nil {
w.recordEvent(evt)
} else {
evtCopy := evt
goRecoverOnce("kmsg flush immediate", func() { w.flushImmediate(evtCopy) })
}
w.mu.Unlock()
}
@@ -180,6 +183,52 @@ func (w *kmsgWatcher) flushWindow(window *kmsgWindow) {
}
}
// flushImmediate writes a single kmsg event directly to the status DB without a SAT window.
// Called when an error is detected outside of any SAT task (always-on watching).
func (w *kmsgWatcher) flushImmediate(evt kmsgEvent) {
if w.statusDB == nil {
return
}
const source = "watchdog:kmsg"
detail := "kernel: " + truncate(evt.raw, 120)
var severity string
for _, p := range platform.HardwareErrorPatterns {
if p.Re.MatchString(evt.raw) {
if p.Severity == "critical" {
severity = "Critical"
} else {
severity = "Warning"
}
break
}
}
if severity == "" {
severity = "Warning"
}
if len(evt.ids) == 0 {
key := "cpu:all"
if evt.category == "memory" {
key = "memory:all"
}
w.statusDB.Record(key, source, severity, detail)
return
}
for _, id := range evt.ids {
var key string
switch evt.category {
case "gpu", "pcie":
key = "pcie:" + normalizeBDF(id)
case "storage":
key = "storage:" + id
default:
key = "pcie:" + normalizeBDF(id)
}
w.statusDB.Record(key, source, severity, detail)
}
}
// parseKmsgLine parses a single /dev/kmsg line and returns an event if it matches
// any pattern in platform.HardwareErrorPatterns.
// kmsg format: "<priority>,<sequence>,<timestamp_usec>,-;message text"


@@ -0,0 +1,368 @@
package webui
import (
"context"
"encoding/json"
"fmt"
"net/http"
"os/exec"
"path/filepath"
"regexp"
"sort"
"strconv"
"strings"
"time"
)
type nvmeFormatMode struct {
Mode int `json:"mode"`
DataBytes int64 `json:"data_bytes"`
MetadataBytes int64 `json:"metadata_bytes"`
InUse bool `json:"in_use"`
Label string `json:"label"`
}
type nvmeFormatDisk struct {
Device string `json:"device"`
Model string `json:"model,omitempty"`
Serial string `json:"serial,omitempty"`
Size string `json:"size,omitempty"`
CurrentMode int `json:"current_mode"`
CurrentFormat string `json:"current_format"`
Modes []nvmeFormatMode `json:"modes"`
Error string `json:"error,omitempty"`
}
type nvmeListJSON struct {
Devices []struct {
DevicePath string `json:"DevicePath"`
ModelNumber string `json:"ModelNumber"`
SerialNumber string `json:"SerialNumber"`
PhysicalSize int64 `json:"PhysicalSize"`
} `json:"Devices"`
}
var (
nvmeFormatDeviceRE = regexp.MustCompile(`^/dev/nvme[0-9]+n[0-9]+$`)
nvmeLBAFCompactLineRE = regexp.MustCompile(`(?im)^\s*lbaf\s+(\d+)\s*:\s*ms:(\d+)\s+lbads:(\d+).*$`)
nvmeLBAFVerboseLineRE = regexp.MustCompile(`(?im)^\s*LBA Format\s+(\d+)\s*:\s*Metadata Size:\s*(\d+)\s+bytes\s*-\s*Data Size:\s*(\d+)\s+bytes.*$`)
nvmeCommandContext = exec.CommandContext
nvmeListFormatsTimeout = 20 * time.Second
)
func listNVMeFormatDisks(ctx context.Context) ([]nvmeFormatDisk, error) {
ctx, cancel := context.WithTimeout(ctx, nvmeListFormatsTimeout)
defer cancel()
out, err := nvmeCommandContext(ctx, "nvme", "list", "-o", "json").Output()
if err != nil {
return nil, err
}
var root nvmeListJSON
if err := json.Unmarshal(out, &root); err != nil {
return nil, err
}
disks := make([]nvmeFormatDisk, 0, len(root.Devices))
seen := map[string]struct{}{}
for _, dev := range root.Devices {
path := strings.TrimSpace(dev.DevicePath)
if !nvmeFormatDeviceRE.MatchString(path) {
continue
}
if _, ok := seen[path]; ok {
continue
}
seen[path] = struct{}{}
disk := nvmeFormatDisk{
Device: path,
Model: strings.TrimSpace(dev.ModelNumber),
Serial: strings.TrimSpace(dev.SerialNumber),
Size: formatNVMeBytes(dev.PhysicalSize),
CurrentMode: -1,
}
modes, parseErr := readNVMeFormatModes(ctx, path)
if parseErr != nil {
disk.Error = parseErr.Error()
}
disk.Modes = modes
for _, mode := range modes {
if mode.InUse {
disk.CurrentMode = mode.Mode
disk.CurrentFormat = formatNVMeBlock(mode.DataBytes, mode.MetadataBytes)
break
}
}
disks = append(disks, disk)
}
sort.Slice(disks, func(i, j int) bool { return disks[i].Device < disks[j].Device })
return disks, nil
}
func readNVMeFormatModes(ctx context.Context, device string) ([]nvmeFormatMode, error) {
if !nvmeFormatDeviceRE.MatchString(device) {
return nil, fmt.Errorf("invalid NVMe device")
}
out, err := nvmeCommandContext(ctx, "nvme", "id-ns", device, "-H").CombinedOutput()
if err != nil {
msg := strings.TrimSpace(string(out))
if msg == "" {
msg = err.Error()
}
return nil, fmt.Errorf("%s", msg)
}
modes := parseNVMeFormatModes(string(out))
if len(modes) == 0 {
return nil, fmt.Errorf("no LBA format modes found")
}
return modes, nil
}
func parseNVMeFormatModes(raw string) []nvmeFormatMode {
byMode := map[int]nvmeFormatMode{}
for _, m := range nvmeLBAFCompactLineRE.FindAllStringSubmatch(raw, -1) {
mode, errMode := strconv.Atoi(m[1])
metadata, errMS := strconv.ParseInt(m[2], 10, 64)
lbads, errLBADS := strconv.Atoi(m[3])
// reject lbads values that would overflow int64(1) << lbads
if errMode != nil || errMS != nil || errLBADS != nil || lbads < 0 || lbads >= 63 {
continue
}
data := int64(1) << lbads
line := m[0]
byMode[mode] = nvmeFormatMode{
Mode: mode,
DataBytes: data,
MetadataBytes: metadata,
InUse: strings.Contains(strings.ToLower(line), "in use"),
Label: fmt.Sprintf("MODE %d (%s)", mode, formatNVMeBlock(data, metadata)),
}
}
for _, m := range nvmeLBAFVerboseLineRE.FindAllStringSubmatch(raw, -1) {
mode, errMode := strconv.Atoi(m[1])
metadata, errMS := strconv.ParseInt(m[2], 10, 64)
data, errData := strconv.ParseInt(m[3], 10, 64)
if errMode != nil || errMS != nil || errData != nil || data <= 0 {
continue
}
line := m[0]
byMode[mode] = nvmeFormatMode{
Mode: mode,
DataBytes: data,
MetadataBytes: metadata,
InUse: strings.Contains(strings.ToLower(line), "in use"),
Label: fmt.Sprintf("MODE %d (%s)", mode, formatNVMeBlock(data, metadata)),
}
}
modes := make([]nvmeFormatMode, 0, len(byMode))
for _, mode := range byMode {
modes = append(modes, mode)
}
sort.Slice(modes, func(i, j int) bool { return modes[i].Mode < modes[j].Mode })
return modes
}
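For reference, the compact lbaf line that nvmeLBAFCompactLineRE targets looks roughly like the sample below (an assumed `nvme id-ns -H` excerpt; spacing varies across nvme-cli versions, which is why the pattern is whitespace-tolerant). lbads is a power-of-two exponent, so lbads:9 yields the 512-byte data size used in the MODE labels:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// Same pattern as nvmeLBAFCompactLineRE; the sample line below is an
// assumed nvme-cli excerpt, not output captured from this repository.
var compactRE = regexp.MustCompile(`(?im)^\s*lbaf\s+(\d+)\s*:\s*ms:(\d+)\s+lbads:(\d+).*$`)

func main() {
	sample := "  lbaf  0 : ms:0   lbads:9  rp:0x2 (in use)"
	m := compactRE.FindStringSubmatch(sample)
	if m == nil {
		panic("no match")
	}
	mode, _ := strconv.Atoi(m[1])
	metadata, _ := strconv.ParseInt(m[2], 10, 64)
	lbads, _ := strconv.Atoi(m[3])
	// lbads is an exponent: lbads:9 means 2^9 = 512-byte data blocks.
	fmt.Printf("MODE %d (%d+%d)\n", mode, int64(1)<<lbads, metadata) // → MODE 0 (512+0)
}
```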
func runNVMeFormatTask(ctx context.Context, j *jobState, device string, lbaf int) error {
if !nvmeFormatDeviceRE.MatchString(device) {
return fmt.Errorf("invalid NVMe device")
}
modes, err := readNVMeFormatModes(ctx, device)
if err != nil {
return err
}
var selected nvmeFormatMode
found := false
for _, mode := range modes {
if mode.Mode == lbaf {
selected = mode
found = true
break
}
}
if !found {
return fmt.Errorf("MODE %d is not available on %s", lbaf, device)
}
ms := 0
if selected.MetadataBytes > 0 {
ms = 1
}
j.append(fmt.Sprintf("Formatting %s to %s with --lbaf=%d --ms=%d --force", device, formatNVMeBlock(selected.DataBytes, selected.MetadataBytes), selected.Mode, ms))
cmd := nvmeCommandContext(ctx, "nvme", "format", device, fmt.Sprintf("--lbaf=%d", selected.Mode), fmt.Sprintf("--ms=%d", ms), "--force")
return streamCmdJob(j, cmd)
}
func (h *handler) handleAPINVMeFormats(w http.ResponseWriter, r *http.Request) {
disks, err := listNVMeFormatDisks(r.Context())
if err != nil {
writeError(w, http.StatusInternalServerError, err.Error())
return
}
writeJSON(w, disks)
}
func (h *handler) handleAPINVMeFormatRun(w http.ResponseWriter, r *http.Request) {
var req struct {
Device string `json:"device"`
LBAF int `json:"lbaf"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid request body")
return
}
if !nvmeFormatDeviceRE.MatchString(req.Device) {
writeError(w, http.StatusBadRequest, "invalid NVMe device")
return
}
disks, err := listNVMeFormatDisks(r.Context())
if err != nil {
writeError(w, http.StatusInternalServerError, err.Error())
return
}
var label string
allowed := false
for _, disk := range disks {
if disk.Device != req.Device {
continue
}
for _, mode := range disk.Modes {
if mode.Mode == req.LBAF {
allowed = true
label = mode.Label
break
}
}
}
if !allowed {
writeError(w, http.StatusBadRequest, "LBA format mode is not available for this device")
return
}
name := fmt.Sprintf("NVMe Format %s to %s", filepath.Base(req.Device), label)
t := &Task{
ID: newJobID("nvme-format"),
Name: name,
Target: "nvme-format",
Priority: defaultTaskPriority("nvme-format", taskParams{}),
Status: TaskPending,
CreatedAt: time.Now(),
params: taskParams{
Device: req.Device,
LBAF: req.LBAF,
},
}
globalQueue.enqueue(t)
writeJSON(w, map[string]string{"task_id": t.ID, "job_id": t.ID})
}
func formatNVMeBlock(dataBytes, metadataBytes int64) string {
return strconv.FormatInt(dataBytes, 10) + "+" + strconv.FormatInt(metadataBytes, 10)
}
func formatNVMeBytes(n int64) string {
if n <= 0 {
return ""
}
units := []string{"B", "KB", "MB", "GB", "TB", "PB"}
v := float64(n)
unit := 0
for v >= 1000 && unit < len(units)-1 {
v /= 1000
unit++
}
if unit == 0 {
return fmt.Sprintf("%d B", n)
}
return fmt.Sprintf("%.1f %s", v, units[unit])
}
func renderNVMeFormatInline() string {
return `<div id="nvme-format-status" style="font-size:13px;color:var(--muted);margin-bottom:12px">Loading NVMe disks...</div>
<div id="nvme-format-table"><p style="color:var(--muted);font-size:13px">Loading...</p></div>
<script>
function nvmeFormatEsc(s) {
return String(s == null ? '' : s).replace(/[&<>"']/g, function(c) {
return {'&':'&amp;','<':'&lt;','>':'&gt;','"':'&quot;',"'":'&#39;'}[c];
});
}
function loadNVMeFormats() {
var status = document.getElementById('nvme-format-status');
var table = document.getElementById('nvme-format-table');
status.textContent = 'Loading NVMe disks...';
status.style.color = 'var(--muted)';
table.innerHTML = '<p style="color:var(--muted);font-size:13px">Loading...</p>';
fetch('/api/tools/nvme-formats').then(function(r) { return r.json().then(function(d) { if (!r.ok) throw new Error(d.error || ('HTTP ' + r.status)); return d; }); }).then(function(disks) {
window._nvmeFormatDisks = Array.isArray(disks) ? disks : [];
if (!window._nvmeFormatDisks.length) {
status.textContent = 'No NVMe disks found.';
table.innerHTML = '';
return;
}
status.textContent = window._nvmeFormatDisks.length + ' NVMe disk(s) found.';
var rows = window._nvmeFormatDisks.map(function(d, idx) {
var current = d.current_format ? (d.current_format + ' / MODE ' + d.current_mode) : 'unknown';
var detail = [d.model || '', d.serial || '', d.size || ''].filter(Boolean).join(' | ');
var options = (d.modes || []).map(function(m) {
return '<option value="' + m.mode + '"' + (m.in_use ? ' selected' : '') + '>' + nvmeFormatEsc(m.label) + '</option>';
}).join('');
var disabled = options ? '' : ' disabled';
var err = d.error ? '<div style="font-size:12px;color:var(--crit-fg,#9f3a38);margin-top:4px">' + nvmeFormatEsc(d.error) + '</div>' : '';
return '<tr>'
+ '<td style="font-family:monospace;white-space:nowrap">' + nvmeFormatEsc(d.device) + (detail ? '<div style="font-family:inherit;font-size:12px;color:var(--muted)">' + nvmeFormatEsc(detail) + '</div>' : '') + '</td>'
+ '<td style="white-space:nowrap">' + nvmeFormatEsc(current) + err + '</td>'
+ '<td style="white-space:nowrap"><select id="nvme-format-select-' + idx + '"' + disabled + '>' + options + '</select></td>'
+ '<td style="white-space:nowrap"><button class="btn btn-sm btn-primary" onclick="nvmeFormatRun(' + idx + ', this)"' + disabled + '>Apply</button><div class="nvme-format-row-msg" style="margin-top:6px;font-size:12px;color:var(--muted)"></div></td>'
+ '</tr>';
}).join('');
table.innerHTML = '<table><tr><th>Disk</th><th>Current block / mode</th><th>New mode</th><th>Action</th></tr>' + rows + '</table>';
}).catch(function(e) {
status.textContent = 'Error loading NVMe disks: ' + e.message;
status.style.color = 'var(--crit-fg,#9f3a38)';
table.innerHTML = '';
});
}
function nvmeWaitTaskDone(taskID, rowMsg) {
var timer = setInterval(function() {
fetch('/api/tasks').then(function(r) { return r.json(); }).then(function(tasks) {
var task = (tasks || []).find(function(t) { return t.id === taskID; });
if (!task) return;
if (task.status === 'done' || task.status === 'failed' || task.status === 'cancelled') {
clearInterval(timer);
rowMsg.textContent = 'Task ' + taskID + ': ' + task.status + (task.error ? ' - ' + task.error : '');
rowMsg.style.color = task.status === 'done' ? 'var(--ok,green)' : 'var(--crit-fg,#9f3a38)';
loadNVMeFormats();
}
}).catch(function(){});
}, 1500);
}
function nvmeFormatRun(idx, btn) {
var disk = (window._nvmeFormatDisks || [])[idx];
var select = document.getElementById('nvme-format-select-' + idx);
var row = btn.closest('td');
var rowMsg = row.querySelector('.nvme-format-row-msg');
if (!disk || !select) return;
var lbaf = parseInt(select.value, 10);
var mode = (disk.modes || []).find(function(m) { return m.mode === lbaf; });
if (!mode) return;
if (!window.confirm('Format ' + disk.device + ' to ' + mode.label + '? This erases data on the namespace.')) return;
btn.disabled = true;
rowMsg.style.color = 'var(--muted)';
rowMsg.textContent = 'Queued...';
fetch('/api/tools/nvme-format/run', {
method:'POST',
headers:{'Content-Type':'application/json'},
body:JSON.stringify({device: disk.device, lbaf: lbaf})
}).then(function(r) { return r.json().then(function(d) { if (!r.ok) throw new Error(d.error || ('HTTP ' + r.status)); return d; }); }).then(function(d) {
rowMsg.textContent = 'Task ' + d.task_id + ' queued.';
nvmeWaitTaskDone(d.task_id, rowMsg);
}).catch(function(e) {
rowMsg.style.color = 'var(--crit-fg,#9f3a38)';
rowMsg.textContent = 'Error: ' + e.message;
}).finally(function() {
btn.disabled = false;
});
}
loadNVMeFormats();
</script>`
}
func renderNVMeFormatCard() string {
return `<div class="card"><div class="card-head">NVMe Block Format <button class="btn btn-sm btn-secondary" onclick="loadNVMeFormats()" style="margin-left:auto">&#8635; Refresh</button></div><div class="card-body">` +
`<p style="font-size:13px;color:var(--muted);margin-bottom:12px">Lists NVMe namespaces and changes their LBA format through a queued task.</p>` +
renderNVMeFormatInline() + `</div></div>`
}

View File

@@ -431,7 +431,7 @@ fetch('/api/system/ram-status').then(r=>r.json()).then(d=>{
else if (kind === 'disk') label = 'disk (' + source + ')';
else label = source;
boot.textContent = 'Current boot source: ' + label + '.';
txt.textContent = d.message || 'Checking...';
txt.textContent = d.blocked_reason || d.message || 'Checking...';
if (d.status === 'ok' || d.in_ram) {
txt.style.color = 'var(--ok, green)';
} else if (d.status === 'failed') {
@@ -475,6 +475,7 @@ function installToRAM() {
<div class="card"><div class="card-head">Services</div><div class="card-body">` +
renderServicesInline() + `</div></div>
` + renderNVMeFormatCard() + `
<script>
function checkTools() {

View File

@@ -85,6 +85,7 @@ func renderPage(page string, opts HandlerOptions) string {
body +
`</div></div>` +
renderAuditModal() +
`<dialog id="component-detail-dialog" style="min-width:600px;max-width:900px;width:90vw;padding:0;border:1px solid var(--border);border-radius:8px;background:var(--surface)"><div id="component-detail-body" style="padding-bottom:20px"></div></dialog>` +
`<script>
// Add copy button to every .terminal on the page
document.querySelectorAll('.terminal').forEach(function(t){
@@ -184,13 +185,14 @@ func renderAudit() string {
}
func renderHardwareSummaryCard(opts HandlerOptions) string {
const cardAttrs = ` hx-get="/api/hardware-summary" hx-trigger="every 30s" hx-swap="outerHTML"`
data, err := loadSnapshot(opts.AuditPath)
if err != nil {
return `<div class="card"><div class="card-head card-head-actions"><span>Hardware Summary</span><div class="card-head-buttons"><button class="btn btn-primary btn-sm" onclick="auditModalRun()">Run audit</button></div></div><div class="card-body"></div></div>`
return `<div class="card"` + cardAttrs + `><div class="card-head card-head-actions"><span>Hardware Summary</span><div class="card-head-buttons"><button class="btn btn-primary btn-sm" onclick="auditModalRun()">Run audit</button></div></div><div class="card-body"></div></div>`
}
var ingest schema.HardwareIngestRequest
if err := json.Unmarshal(data, &ingest); err != nil {
return `<div class="card"><div class="card-head">Hardware Summary</div><div class="card-body"><span class="badge badge-err">Parse error</span></div></div>`
return `<div class="card"` + cardAttrs + `><div class="card-head">Hardware Summary</div><div class="card-body"><span class="badge badge-err">Parse error</span></div></div>`
}
hw := ingest.Hardware
@@ -200,7 +202,7 @@ func renderHardwareSummaryCard(opts HandlerOptions) string {
}
var b strings.Builder
b.WriteString(`<div class="card"><div class="card-head">Hardware Summary</div><div class="card-body">`)
b.WriteString(`<div class="card"` + cardAttrs + `><div class="card-head">Hardware Summary</div><div class="card-body">`)
// Server identity block above the component table.
{
@@ -229,22 +231,32 @@ func renderHardwareSummaryCard(opts HandlerOptions) string {
}
b.WriteString(`<table style="width:auto">`)
writeRow := func(label, value, badgeHTML string) {
b.WriteString(fmt.Sprintf(`<tr><td style="padding:6px 14px 6px 0;font-weight:700;white-space:nowrap">%s</td><td style="padding:6px 0;color:var(--muted);font-size:13px">%s</td><td style="padding:6px 0 6px 12px">%s</td></tr>`,
html.EscapeString(label), html.EscapeString(value), badgeHTML))
// writeRow renders one component row. compType is the URL path segment for the detail
// endpoint (e.g. "cpu"). Pass "" for rows that have no detail view.
writeRow := func(label, value, badgeHTML, compType string) {
var labelHTML string
if compType != "" {
labelHTML = fmt.Sprintf(
`<span style="cursor:pointer;text-decoration:underline dotted;text-underline-offset:3px" hx-get="/api/components/%s" hx-target="#component-detail-body" hx-swap="innerHTML" onclick="document.getElementById('component-detail-dialog').showModal()">%s</span>`,
compType, html.EscapeString(label))
} else {
labelHTML = html.EscapeString(label)
}
fmt.Fprintf(&b, `<tr><td style="padding:6px 14px 6px 0;font-weight:700;white-space:nowrap">%s</td><td style="padding:6px 0;color:var(--muted);font-size:13px">%s</td><td style="padding:6px 0 6px 12px">%s</td></tr>`,
labelHTML, html.EscapeString(value), badgeHTML)
}
writeRow("CPU", hwDescribeCPU(hw),
renderComponentChips(matchedRecords(records, []string{"cpu:all"}, nil)))
renderComponentChips(matchedRecords(records, []string{"cpu:all"}, nil)), "cpu")
writeRow("Memory", hwDescribeMemory(hw),
renderComponentChips(matchedRecords(records, []string{"memory:all"}, []string{"memory:"})))
renderComponentChips(matchedRecords(records, []string{"memory:all"}, []string{"memory:"})), "memory")
writeRow("Storage", hwDescribeStorage(hw),
renderComponentChips(matchedRecords(records, []string{"storage:all"}, []string{"storage:"})))
renderComponentChips(matchedRecords(records, []string{"storage:all"}, []string{"storage:"})), "storage")
writeRow("GPU", hwDescribeGPU(hw),
renderComponentChips(matchedRecords(records, nil, []string{"pcie:gpu:"})))
renderComponentChips(matchedRecords(records, nil, []string{"pcie:gpu:"})), "gpu")
psuMatched := matchedRecords(records, nil, []string{"psu:"})
if len(psuMatched) == 0 && len(hw.PowerSupplies) > 0 {
@@ -252,10 +264,10 @@ func renderHardwareSummaryCard(opts HandlerOptions) string {
psuStatus := hwPSUStatus(hw.PowerSupplies)
psuMatched = []app.ComponentStatusRecord{{ComponentKey: "psu:ipmi", Status: psuStatus}}
}
writeRow("PSU", hwDescribePSU(hw), renderComponentChips(psuMatched))
writeRow("PSU", hwDescribePSU(hw), renderComponentChips(psuMatched), "psu")
if nicDesc := hwDescribeNIC(hw); nicDesc != "" {
writeRow("Network", nicDesc, "")
writeRow("Network", nicDesc, "", "")
}
b.WriteString(`</table>`)
@@ -999,3 +1011,67 @@ func rowIssueHTML(issue string) string {
}
return html.EscapeString(issue)
}
// renderComponentDetail renders a modal content fragment for one component type.
// Called by handleAPIComponentDetail and displayed inside #component-detail-dialog.
func renderComponentDetail(title string, records []app.ComponentStatusRecord) string {
var b strings.Builder
fmt.Fprintf(&b, `<div style="padding:20px 24px 0">`)
fmt.Fprintf(&b, `<div style="display:flex;align-items:center;justify-content:space-between;margin-bottom:16px">`)
fmt.Fprintf(&b, `<span style="font-size:16px;font-weight:700">%s — Status Detail</span>`, html.EscapeString(title))
b.WriteString(`<button class="btn btn-sm btn-secondary" onclick="document.getElementById('component-detail-dialog').close()">Close</button>`)
b.WriteString(`</div>`)
if len(records) == 0 {
b.WriteString(`<p style="color:var(--muted)">No status data recorded yet for this component type.</p>`)
b.WriteString(`</div>`)
return b.String()
}
sort.Slice(records, func(i, j int) bool {
return records[i].ComponentKey < records[j].ComponentKey
})
for _, rec := range records {
letter, cls := chipLetterClass(rec.Status)
fmt.Fprintf(&b, `<div style="margin-bottom:20px">`)
fmt.Fprintf(&b, `<div style="display:flex;align-items:center;gap:8px;margin-bottom:8px">`)
fmt.Fprintf(&b, `<span class="chip %s">%s</span>`, cls, letter)
fmt.Fprintf(&b, `<span style="font-weight:700;font-size:13px">%s</span>`, html.EscapeString(rec.ComponentKey))
if !rec.LastCheckedAt.IsZero() {
fmt.Fprintf(&b, `<span style="color:var(--muted);font-size:12px">checked %s</span>`, rec.LastCheckedAt.Format("2006-01-02 15:04:05"))
}
b.WriteString(`</div>`)
if rec.ErrorSummary != "" {
fmt.Fprintf(&b, `<div style="font-size:12px;margin-bottom:8px;color:var(--muted)">%s</div>`, html.EscapeString(rec.ErrorSummary))
}
// History table — newest first, cap at 20 entries.
history := rec.History
if len(history) > 20 {
history = history[len(history)-20:]
}
b.WriteString(`<table style="width:100%;font-size:12px;border-collapse:collapse">`)
b.WriteString(`<tr style="color:var(--muted)"><th style="text-align:left;padding:2px 10px 2px 0;white-space:nowrap">Time</th><th style="text-align:left;padding:2px 10px 2px 0">Status</th><th style="text-align:left;padding:2px 10px 2px 0">Source</th><th style="text-align:left;padding:2px 0">Detail</th></tr>`)
for i := len(history) - 1; i >= 0; i-- {
e := history[i]
eLetter, eCls := chipLetterClass(e.Status)
detail := e.Detail
if detail == "" {
detail = "—"
}
fmt.Fprintf(&b,
`<tr><td style="padding:3px 10px 3px 0;white-space:nowrap;color:var(--muted)">%s</td><td style="padding:3px 10px 3px 0"><span class="chip %s" style="font-size:10px;width:16px;height:16px">%s</span></td><td style="padding:3px 10px 3px 0;white-space:nowrap">%s</td><td style="padding:3px 0;color:var(--muted)">%s</td></tr>`,
html.EscapeString(e.At.Format("2006-01-02 15:04:05")),
eCls, eLetter,
html.EscapeString(e.Source),
html.EscapeString(detail),
)
}
b.WriteString(`</table>`)
b.WriteString(`</div>`)
}
b.WriteString(`</div>`)
return b.String()
}

View File

@@ -221,6 +221,11 @@ func NewHandler(opts HandlerOptions) http.Handler {
h.kmsg = newKmsgWatcher(opts.App.StatusDB)
h.kmsg.start()
globalQueue.kmsgWatcher = h.kmsg
// Start periodic health poller for components that don't emit kernel log events (e.g. PSU).
if opts.App.StatusDB != nil {
newHealthPoller(opts.App.StatusDB).start()
}
}
globalQueue.startWorker(&opts)
@@ -307,6 +312,8 @@ func NewHandler(opts HandlerOptions) http.Handler {
// Tools
mux.HandleFunc("GET /api/tools/check", h.handleAPIToolsCheck)
mux.HandleFunc("GET /api/tools/nvme-formats", h.handleAPINVMeFormats)
mux.HandleFunc("POST /api/tools/nvme-format/run", h.handleAPINVMeFormatRun)
// GPU presence / tools
mux.HandleFunc("GET /api/gpu/presence", h.handleAPIGPUPresence)
@@ -326,6 +333,10 @@ func NewHandler(opts HandlerOptions) http.Handler {
mux.HandleFunc("GET /api/install/disks", h.handleAPIInstallDisks)
mux.HandleFunc("POST /api/install/run", h.handleAPIInstallRun)
// Hardware component detail (fragment for modal in Hardware Summary card)
mux.HandleFunc("GET /api/hardware-summary", h.handleAPIHardwareSummary)
mux.HandleFunc("GET /api/components/{type}", h.handleAPIComponentDetail)
// Metrics — SSE stream of live sensor data + server-side SVG charts + CSV export
mux.HandleFunc("GET /api/metrics/stream", h.handleAPIMetricsStream)
mux.HandleFunc("GET /api/metrics/latest", h.handleAPIMetricsLatest)
@@ -1292,8 +1303,8 @@ const loadingPageHTML = `<!DOCTYPE html>
*{margin:0;padding:0;box-sizing:border-box}
html,body{height:100%;background:#0f1117;display:flex;align-items:center;justify-content:center;font-family:'Courier New',monospace;color:#e2e8f0}
.wrap{text-align:center;width:420px}
.logo{font-size:11px;line-height:1.4;color:#f6c90e;margin-bottom:6px;white-space:pre;text-align:left}
.subtitle{font-size:12px;color:#a0aec0;text-align:left;margin-bottom:24px;padding-left:2px}
.brand{font-size:22px;letter-spacing:.18em;color:#f6c90e;margin-bottom:6px;text-align:left}
.subtitle{font-size:12px;color:#a0aec0;text-align:left;margin-bottom:24px}
.spinner{width:36px;height:36px;border:3px solid #2d3748;border-top-color:#f6c90e;border-radius:50%;animation:spin .8s linear infinite;margin:0 auto 14px}
.spinner.hidden{display:none}
@keyframes spin{to{transform:rotate(360deg)}}
@@ -1311,12 +1322,7 @@ td:first-child{color:#718096;width:55%}
</head>
<body>
<div class="wrap">
<div class="logo"> ███████╗ █████╗ ███████╗██╗ ██╗ ██████╗ ███████╗███████╗
██╔════╝██╔══██╗██╔════╝╚██╗ ██╔╝ ██╔══██╗██╔════╝██╔════╝
█████╗ ███████║███████╗ ╚████╔╝ █████╗██████╔╝█████╗ █████╗
██╔══╝ ██╔══██║╚════██║ ╚██╔╝ ╚════╝██╔══██╗██╔══╝ ██╔══╝
███████╗██║ ██║███████║ ██║ ██████╔╝███████╗███████╗
╚══════╝╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═════╝ ╚══════╝╚══════╝</div>
<div class="brand">EASY BEE</div>
<div class="subtitle">Hardware Audit LiveCD</div>
<div class="spinner" id="spin"></div>
<div class="status" id="st">Connecting to bee-web...</div>
@@ -1326,8 +1332,20 @@ td:first-child{color:#718096;width:55%}
<script>
(function(){
var gone = false;
var pollStarted = false;
var fallbackOpenTimer = null;
var AUTO_OPEN_DELAY_MS = 15000;
function go(){ if(!gone){gone=true;window.location.replace('/');} }
function scheduleFallbackOpen(){
if(fallbackOpenTimer!==null) return;
fallbackOpenTimer=setTimeout(function(){
document.getElementById('spin').className='spinner hidden';
document.getElementById('st').textContent='Startup checks are taking too long — opening app...';
go();
},AUTO_OPEN_DELAY_MS);
}
function icon(s){
if(s==='active') return '<span class="ok">&#9679; active</span>';
if(s==='failed') return '<span class="fail">&#10005; failed</span>';
@@ -1359,6 +1377,7 @@ function pollServices(){
tbl.innerHTML=html;
if(allSettled(svcs)){
clearInterval(pollTimer);
if(fallbackOpenTimer!==null) clearTimeout(fallbackOpenTimer);
document.getElementById('spin').className='spinner hidden';
document.getElementById('st').textContent='Ready \u2014 opening...';
setTimeout(go,800);
@@ -1373,8 +1392,12 @@ function probe(){
if(r.ok){
document.getElementById('st').textContent='bee-web running \u2014 checking services...';
document.getElementById('btn').style.display='';
pollServices();
pollTimer=setInterval(pollServices,1500);
scheduleFallbackOpen();
if(!pollStarted){
pollStarted=true;
pollServices();
pollTimer=setInterval(pollServices,1500);
}
} else {
document.getElementById('st').textContent='bee-web starting (status '+r.status+')...';
setTimeout(probe,500);

View File

@@ -604,6 +604,25 @@ func TestReadyIsOKWhenAuditPathIsUnset(t *testing.T) {
}
}
func TestLoadingPageHasFallbackAutoOpen(t *testing.T) {
handler := NewHandler(HandlerOptions{})
rec := httptest.NewRecorder()
handler.ServeHTTP(rec, httptest.NewRequest(http.MethodGet, "/loading", nil))
if rec.Code != http.StatusOK {
t.Fatalf("status=%d body=%s", rec.Code, rec.Body.String())
}
body := rec.Body.String()
for _, needle := range []string{
`var AUTO_OPEN_DELAY_MS = 15000;`,
`function scheduleFallbackOpen(){`,
`Startup checks are taking too long — opening app...`,
} {
if !strings.Contains(body, needle) {
t.Fatalf("loading page missing %q: %s", needle, body)
}
}
}
func TestAuditPageRendersViewerFrameAndActions(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "audit.json")
@@ -677,6 +696,12 @@ func TestToolsPageRendersNvidiaSelfHealSection(t *testing.T) {
if !strings.Contains(body, `/api/blackbox/status`) {
t.Fatalf("tools page missing black-box status api usage: %s", body)
}
if !strings.Contains(body, `NVMe Block Format`) {
t.Fatalf("tools page missing nvme block format section: %s", body)
}
if !strings.Contains(body, `/api/tools/nvme-formats`) || !strings.Contains(body, `/api/tools/nvme-format/run`) {
t.Fatalf("tools page missing nvme format api usage: %s", body)
}
}
func TestBenchmarkPageRendersGPUSelectionControls(t *testing.T) {

View File

@@ -376,6 +376,12 @@ func executeTaskWithOptions(opts *HandlerOptions, t *Task, j *jobState, ctx cont
break
}
err = a.RunInstallToRAM(ctx, j.append)
case "nvme-format":
if strings.TrimSpace(t.params.Device) == "" {
err = fmt.Errorf("device is required")
break
}
err = runNVMeFormatTask(ctx, j, t.params.Device, t.params.LBAF)
default:
j.append("ERROR: unknown target: " + t.Target)
j.finish("unknown target")

View File

@@ -57,6 +57,7 @@ var taskNames = map[string]string{
"support-bundle": "Support Bundle",
"install": "Install to Disk",
"install-to-ram": "Install to RAM",
"nvme-format": "NVMe Block Format Change",
}
// burnNames maps target → human-readable name when a burn profile is set.
@@ -137,6 +138,7 @@ type taskParams struct {
RampRunID string `json:"ramp_run_id,omitempty"`
DisplayName string `json:"display_name,omitempty"`
Device string `json:"device,omitempty"` // for install
LBAF int `json:"lbaf,omitempty"`
PlatformComponents []string `json:"platform_components,omitempty"`
}
@@ -598,6 +600,17 @@ func (q *taskQueue) startRecoveredTaskMonitorLocked(t *Task, j *jobState) {
}
func (q *taskQueue) runTaskExternal(t *Task, j *jobState) {
startedKmsgWatch := false
if q.kmsgWatcher != nil && isSATTarget(t.Target) {
q.kmsgWatcher.NotifyTaskStarted(t.ID, t.Target)
startedKmsgWatch = true
}
defer func() {
if startedKmsgWatch && q.kmsgWatcher != nil {
q.kmsgWatcher.NotifyTaskFinished(t.ID)
}
}()
stopTail := make(chan struct{})
doneTail := make(chan struct{})
defer func() {

View File

@@ -126,6 +126,23 @@ func TestNewTaskJobStateLoadsExistingLog(t *testing.T) {
}
}
func TestJobAppendFlushesTaskLogImmediately(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "task.log")
j := newTaskJobState(path)
j.append("live-line")
data, err := os.ReadFile(path)
if err != nil {
t.Fatal(err)
}
if string(data) != "live-line\n" {
t.Fatalf("log=%q want live-line newline", string(data))
}
j.closeLog()
}
func TestTaskQueueSnapshotSortsNewestFirst(t *testing.T) {
now := time.Date(2026, 4, 2, 12, 0, 0, 0, time.UTC)
q := &taskQueue{
@@ -849,3 +866,82 @@ func TestExecuteTaskMarksPanicsAsFailedAndClosesKmsgWindow(t *testing.T) {
t.Fatalf("expected kmsg window to be cleared, got %+v", window)
}
}
func TestRunTaskExternalOpensAndClosesKmsgWindow(t *testing.T) {
dir := t.TempDir()
releasePath := filepath.Join(dir, "release")
readyPath := filepath.Join(dir, "ready")
q := &taskQueue{
opts: &HandlerOptions{ExportDir: dir},
logsDir: filepath.Join(dir, "tasks"),
kmsgWatcher: newKmsgWatcher(nil),
trigger: make(chan struct{}, 1),
}
if err := os.MkdirAll(q.logsDir, 0755); err != nil {
t.Fatal(err)
}
tk := &Task{
ID: "cpu-external-1",
Name: "CPU SAT",
Target: "cpu",
Status: TaskRunning,
CreatedAt: time.Now(),
}
q.assignTaskLogPathLocked(tk)
j := newTaskJobState(tk.LogPath)
orig := externalTaskRunnerCommand
externalTaskRunnerCommand = func(exportDir, taskID string) (*exec.Cmd, error) {
script := "printf ready > \"$1\"; while [ ! -f \"$2\" ]; do sleep 0.05; done"
return exec.Command("sh", "-c", script, "sh", readyPath, releasePath), nil
}
defer func() { externalTaskRunnerCommand = orig }()
done := make(chan struct{})
go func() {
q.runTaskExternal(tk, j)
close(done)
}()
deadline := time.Now().Add(2 * time.Second)
for time.Now().Before(deadline) {
if _, err := os.Stat(readyPath); err == nil {
break
}
time.Sleep(20 * time.Millisecond)
}
if _, err := os.Stat(readyPath); err != nil {
t.Fatalf("external runner did not start: %v", err)
}
q.kmsgWatcher.mu.Lock()
activeCount := q.kmsgWatcher.activeCount
window := q.kmsgWatcher.window
q.kmsgWatcher.mu.Unlock()
if activeCount != 1 {
t.Fatalf("activeCount while running=%d want 1", activeCount)
}
if window == nil || len(window.targets) != 1 || window.targets[0] != "cpu" {
t.Fatalf("window while running=%+v", window)
}
if err := os.WriteFile(releasePath, []byte("1\n"), 0644); err != nil {
t.Fatal(err)
}
select {
case <-done:
case <-time.After(2 * time.Second):
t.Fatal("runTaskExternal did not return")
}
q.kmsgWatcher.mu.Lock()
activeCount = q.kmsgWatcher.activeCount
window = q.kmsgWatcher.window
q.kmsgWatcher.mu.Unlock()
if activeCount != 0 {
t.Fatalf("activeCount after finish=%d want 0", activeCount)
}
if window != nil {
t.Fatalf("expected kmsg window to be cleared, got %+v", window)
}
}

View File

@@ -16,6 +16,12 @@ else
LB_LINUX_PACKAGES="linux-image"
fi
if [ -n "${BEE_ISO_VOLUME:-}" ]; then
LB_ISO_VOLUME="${BEE_ISO_VOLUME}"
else
LB_ISO_VOLUME="EASY_BEE_${BEE_GPU_VENDOR_UPPER:-NVIDIA}"
fi
lb config noauto \
--distribution bookworm \
--architectures amd64 \
@@ -30,9 +36,9 @@ lb config noauto \
--linux-flavours "amd64" \
--linux-packages "${LB_LINUX_PACKAGES}" \
--memtest memtest86+ \
--iso-volume "EASY_BEE_${BEE_GPU_VENDOR_UPPER:-NVIDIA}" \
--iso-volume "${LB_ISO_VOLUME}" \
--iso-application "EASY-BEE-${BEE_GPU_VENDOR_UPPER:-NVIDIA}" \
--bootappend-live "boot=live components video=1920x1080 console=ttyS0,115200n8 console=tty0 loglevel=3 systemd.show_status=1 username=bee user-fullname=Bee modprobe.blacklist=nouveau,snd_hda_intel,snd_hda_codec_realtek,snd_hda_codec_generic,soundcore" \
--bootappend-live "boot=live live-media=/dev/disk/by-label/${LB_ISO_VOLUME} live-media-label=${LB_ISO_VOLUME} components video=1920x1080 console=ttyS0,115200n8 console=tty0 loglevel=3 systemd.show_status=1 username=bee user-fullname=Bee modprobe.blacklist=nouveau,snd_hda_intel,snd_hda_codec_realtek,snd_hda_codec_generic,soundcore" \
--debootstrap-options "--include=ca-certificates" \
--apt-recommends false \
--chroot-squashfs-compression-type zstd \


@@ -8,7 +8,7 @@ BUILDER_DIR="${REPO_ROOT}/iso/builder"
CONTAINER_TOOL="${CONTAINER_TOOL:-docker}"
IMAGE_TAG="${BEE_BUILDER_IMAGE:-bee-iso-builder}"
BUILDER_PLATFORM="${BEE_BUILDER_PLATFORM:-linux/amd64}"
CACHE_DIR="${BEE_BUILDER_CACHE_DIR:-${REPO_ROOT}/dist/container-cache}"
CACHE_DIR="${BEE_BUILDER_CACHE_DIR:-${REPO_ROOT}/dist/cache}"
AUTH_KEYS=""
CLEAN_CACHE=0
VARIANT="all"
@@ -54,14 +54,14 @@ if [ "$CLEAN_CACHE" = "1" ]; then
"${CACHE_DIR:?}/bee" \
"${CACHE_DIR:?}/lb-packages"
echo "=== cleaning live-build work dirs ==="
rm -rf "${REPO_ROOT}/dist/live-build-work-nvidia"
rm -rf "${REPO_ROOT}/dist/live-build-work-nvidia-legacy"
rm -rf "${REPO_ROOT}/dist/live-build-work-amd"
rm -rf "${REPO_ROOT}/dist/live-build-work-nogpu"
rm -rf "${REPO_ROOT}/dist/overlay-stage-nvidia"
rm -rf "${REPO_ROOT}/dist/overlay-stage-nvidia-legacy"
rm -rf "${REPO_ROOT}/dist/overlay-stage-amd"
rm -rf "${REPO_ROOT}/dist/overlay-stage-nogpu"
rm -rf "${REPO_ROOT}/dist/cache/live-build-work-nvidia"
rm -rf "${REPO_ROOT}/dist/cache/live-build-work-nvidia-legacy"
rm -rf "${REPO_ROOT}/dist/cache/live-build-work-amd"
rm -rf "${REPO_ROOT}/dist/cache/live-build-work-nogpu"
rm -rf "${REPO_ROOT}/dist/cache/overlay-stage-nvidia"
rm -rf "${REPO_ROOT}/dist/cache/overlay-stage-nvidia-legacy"
rm -rf "${REPO_ROOT}/dist/cache/overlay-stage-amd"
rm -rf "${REPO_ROOT}/dist/cache/overlay-stage-nogpu"
echo "=== caches cleared, proceeding with build ==="
fi


@@ -51,8 +51,8 @@ case "$BUILD_VARIANT" in
;;
esac
BUILD_WORK_DIR="${DIST_DIR}/live-build-work-${BUILD_VARIANT}"
OVERLAY_STAGE_DIR="${DIST_DIR}/overlay-stage-${BUILD_VARIANT}"
BUILD_WORK_DIR="${DIST_DIR}/cache/live-build-work-${BUILD_VARIANT}"
OVERLAY_STAGE_DIR="${DIST_DIR}/cache/overlay-stage-${BUILD_VARIANT}"
export BEE_GPU_VENDOR BEE_NVIDIA_MODULE_FLAVOR BUILD_VARIANT
@@ -63,18 +63,33 @@ export PATH="$PATH:/usr/local/go/bin"
# Allow git to read the bind-mounted repo (different UID inside container).
git config --global safe.directory "${REPO_ROOT}"
mkdir -p "${DIST_DIR}"
mkdir -p "${DIST_DIR}/cache" "${DIST_DIR}/release"
mkdir -p "${CACHE_ROOT}"
: "${GOCACHE:=${CACHE_ROOT}/go-build}"
: "${GOMODCACHE:=${CACHE_ROOT}/go-mod}"
export GOCACHE GOMODCACHE
resolve_audit_version() {
resolve_project_version() {
if [ -n "${BEE_VERSION:-}" ]; then
echo "${BEE_VERSION}"
return 0
fi
if [ -n "${BEE_AUDIT_VERSION:-}" ] && [ -n "${BEE_ISO_VERSION:-}" ] && [ "${BEE_AUDIT_VERSION}" != "${BEE_ISO_VERSION}" ]; then
echo "ERROR: BEE_AUDIT_VERSION (${BEE_AUDIT_VERSION}) and BEE_ISO_VERSION (${BEE_ISO_VERSION}) differ; versioning must stay synchronized" >&2
exit 1
fi
if [ -n "${BEE_AUDIT_VERSION:-}" ]; then
echo "${BEE_AUDIT_VERSION}"
return 0
fi
if [ -n "${BEE_ISO_VERSION:-}" ]; then
echo "${BEE_ISO_VERSION}"
return 0
fi
tag="$(git -C "${REPO_ROOT}" describe --tags --match 'v[0-9]*' --abbrev=7 --dirty 2>/dev/null || true)"
case "${tag}" in
v*)
@@ -97,35 +112,6 @@ resolve_audit_version() {
date +%Y%m%d
}
# ISO image versioned separately from the audit binary (iso/v* tags).
resolve_iso_version() {
if [ -n "${BEE_ISO_VERSION:-}" ]; then
echo "${BEE_ISO_VERSION}"
return 0
fi
# Plain v* tags (e.g. v2.7) take priority — this is the current tagging scheme
tag="$(git -C "${REPO_ROOT}" describe --tags --match 'v[0-9]*' --abbrev=7 --dirty 2>/dev/null || true)"
case "${tag}" in
v*)
echo "${tag#v}"
return 0
;;
esac
# Legacy iso/v* tags fallback
tag="$(git -C "${REPO_ROOT}" describe --tags --match 'iso/v*' --abbrev=7 --dirty 2>/dev/null || true)"
case "${tag}" in
iso/v*)
echo "${tag#iso/v}"
return 0
;;
esac
# Fall back to audit version so the name is still meaningful
resolve_audit_version
}
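The consolidation above folds both resolvers into one precedence chain: explicit BEE_VERSION, then the (mutually checked) legacy variables, then git tags, then a date stamp. A minimal model of that precedence, with the git lookup stubbed out as a plain argument (function name and the stubbed argument are illustrative, not part of the build script):

```shell
# Sketch of resolve_project_version's precedence: override wins, then a
# v* tag with the prefix stripped, else a date-stamped fallback.
resolve_version_sketch() {
  override="$1"   # stands in for BEE_VERSION
  tag="$2"        # stands in for `git describe --match 'v[0-9]*'` output
  if [ -n "$override" ]; then
    echo "$override"
    return 0
  fi
  case "$tag" in
    v*) echo "${tag#v}"; return 0 ;;
  esac
  date +%Y%m%d    # last resort: date-stamped version
}
```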
sync_builder_workdir() {
src_dir="$1"
dst_dir="$2"
@@ -530,12 +516,12 @@ validate_iso_live_boot_entries() {
exit 1
fi
grep -q 'menuentry "EASY-BEE"' "$grub_cfg" || {
grep -q 'menuentry "EASY-BEE v' "$grub_cfg" || {
echo "ERROR: GRUB default EASY-BEE entry is missing" >&2
rm -f "$grub_cfg" "$isolinux_cfg"
exit 1
}
grep -q 'menuentry "EASY-BEE -- load to RAM (toram)"' "$grub_cfg" || {
grep -q 'menuentry "EASY-BEE v.* -- load to RAM (toram)"' "$grub_cfg" || {
echo "ERROR: GRUB toram entry is missing" >&2
rm -f "$grub_cfg" "$isolinux_cfg"
exit 1
@@ -550,6 +536,11 @@ validate_iso_live_boot_entries() {
rm -f "$grub_cfg" "$isolinux_cfg"
exit 1
}
grep -q 'linux .*live-media-label=EASY_BEE_' "$grub_cfg" || {
echo "ERROR: GRUB live entry is missing live-media-label pinning" >&2
rm -f "$grub_cfg" "$isolinux_cfg"
exit 1
}
grep -q 'append .*boot=live ' "$isolinux_cfg" || {
echo "ERROR: isolinux live entry is missing boot=live" >&2
@@ -561,11 +552,50 @@ validate_iso_live_boot_entries() {
rm -f "$grub_cfg" "$isolinux_cfg"
exit 1
}
grep -q 'append .*live-media-label=EASY_BEE_' "$isolinux_cfg" || {
echo "ERROR: isolinux live entry is missing live-media-label pinning" >&2
rm -f "$grub_cfg" "$isolinux_cfg"
exit 1
}
rm -f "$grub_cfg" "$isolinux_cfg"
echo "=== live boot validation OK ==="
}
validate_iso_grub_assets() {
iso_path="$1"
echo "=== validating GRUB assets in ISO ==="
[ -f "$iso_path" ] || {
echo "ERROR: ISO not found for GRUB asset validation: $iso_path" >&2
exit 1
}
require_iso_reader "$iso_path" >/dev/null 2>&1 || {
echo "ERROR: ISO reader unavailable for GRUB asset validation" >&2
exit 1
}
iso_files="$(mktemp)"
iso_list_files "$iso_path" > "$iso_files" || {
echo "ERROR: failed to list ISO files for GRUB asset validation" >&2
rm -f "$iso_files"
exit 1
}
for required in \
boot/grub/config.cfg \
boot/grub/grub.cfg; do
grep -q "^${required}$" "$iso_files" || {
echo "ERROR: missing GRUB asset in ISO: ${required}" >&2
rm -f "$iso_files"
exit 1
}
done
rm -f "$iso_files"
echo "=== GRUB asset validation OK ==="
}
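validate_iso_grub_assets follows a require-all pattern: every entry of a fixed manifest must appear verbatim as a full line of the listing. The same check in isolation (helper name and manifest entries below are examples):

```shell
# Require every path given as an argument to appear as a full line in the
# listing file; return 1 on the first miss (mirrors the validation loop).
require_all() {
  listing="$1"; shift
  for required in "$@"; do
    grep -q "^${required}\$" "$listing" || return 1
  done
  return 0
}
```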
validate_iso_nvidia_runtime() {
iso_path="$1"
[ "$BEE_GPU_VENDOR" = "nvidia" ] || return 0
@@ -578,29 +608,37 @@ validate_iso_nvidia_runtime() {
squashfs_tmp="$(mktemp)"
squashfs_list="$(mktemp)"
iso_read_member "$iso_path" live/filesystem.squashfs "$squashfs_tmp" || {
rm -f "$squashfs_tmp" "$squashfs_list"
nvidia_runtime_fail "failed to extract live/filesystem.squashfs from ISO"
}
unsquashfs -ll "$squashfs_tmp" > "$squashfs_list" 2>/dev/null || {
rm -f "$squashfs_tmp" "$squashfs_list"
nvidia_runtime_fail "failed to inspect filesystem.squashfs from ISO"
iso_files="$(mktemp)"
iso_list_files "$iso_path" > "$iso_files" || {
rm -f "$squashfs_tmp" "$squashfs_list" "$iso_files"
nvidia_runtime_fail "failed to list ISO files for NVIDIA runtime validation"
}
grep '^live/.*\.squashfs$' "$iso_files" | while IFS= read -r squashfs_member; do
iso_read_member "$iso_path" "$squashfs_member" "$squashfs_tmp" || {
rm -f "$squashfs_tmp" "$squashfs_list" "$iso_files"
nvidia_runtime_fail "failed to extract $squashfs_member from ISO"
}
unsquashfs -ll "$squashfs_tmp" >> "$squashfs_list" 2>/dev/null || {
rm -f "$squashfs_tmp" "$squashfs_list" "$iso_files"
nvidia_runtime_fail "failed to inspect $squashfs_member from ISO"
}
: > "$squashfs_tmp"
done
grep -Eq 'usr/bin/dcgmi$' "$squashfs_list" || {
rm -f "$squashfs_tmp" "$squashfs_list"
rm -f "$squashfs_tmp" "$squashfs_list" "$iso_files"
nvidia_runtime_fail "dcgmi missing from final NVIDIA ISO"
}
grep -Eq 'usr/bin/nv-hostengine$' "$squashfs_list" || {
rm -f "$squashfs_tmp" "$squashfs_list"
rm -f "$squashfs_tmp" "$squashfs_list" "$iso_files"
nvidia_runtime_fail "nv-hostengine missing from final NVIDIA ISO"
}
grep -Eq 'usr/bin/dcgmproftester([0-9]+)?$' "$squashfs_list" || {
rm -f "$squashfs_tmp" "$squashfs_list"
rm -f "$squashfs_tmp" "$squashfs_list" "$iso_files"
nvidia_runtime_fail "dcgmproftester missing from final NVIDIA ISO"
}
rm -f "$squashfs_tmp" "$squashfs_list"
rm -f "$squashfs_tmp" "$squashfs_list" "$iso_files"
echo "=== NVIDIA runtime validation OK ==="
}
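The dcgmi/nv-hostengine checks grep an `unsquashfs -ll` listing for path suffixes anchored at end-of-line. Extracted as a helper (the listing line used in testing is fabricated):

```shell
# True if an extended-regex path suffix appears at end-of-line anywhere in
# an `unsquashfs -ll`-style listing file.
has_binary() {
  listing="$1"; pattern="$2"
  grep -Eq "${pattern}\$" "$listing"
}
```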
@@ -694,30 +732,25 @@ write_canonical_grub_cfg() {
kernel="$2"
append_live="$3"
initrd="$4"
version_label="${PROJECT_VERSION_EFFECTIVE}"
cat > "$cfg" <<EOF
source /boot/grub/config.cfg
echo ""
echo " ███████╗ █████╗ ███████╗██╗ ██╗ ██████╗ ███████╗███████╗"
echo " ██╔════╝██╔══██╗██╔════╝╚██╗ ██╔╝ ██╔══██╗██╔════╝██╔════╝"
echo " █████╗ ███████║███████╗ ╚████╔╝ █████╗██████╔╝█████╗ █████╗"
echo " ██╔══╝ ██╔══██║╚════██║ ╚██╔╝ ╚════╝██╔══██╗██╔══╝ ██╔══╝"
echo " ███████╗██║ ██║███████║ ██║ ██████╔╝███████╗███████╗"
echo " ╚══════╝╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═════╝ ╚══════╝╚══════╝"
echo " Hardware Audit LiveCD"
echo ""
menuentry "EASY-BEE" {
linux ${kernel} ${append_live} bee.display=kms bee.nvidia.mode=normal pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
menuentry "EASY-BEE v${version_label}" {
linux ${kernel} ${append_live} nomodeset bee.nvidia.mode=normal pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
initrd ${initrd}
}
menuentry "EASY-BEE -- load to RAM (toram)" {
linux ${kernel} ${append_live} toram bee.display=kms bee.nvidia.mode=normal pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
menuentry "EASY-BEE v${version_label} -- load to RAM (toram)" {
linux ${kernel} ${append_live} toram nomodeset bee.nvidia.mode=normal pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
initrd ${initrd}
}
menuentry "EASY-BEE v${version_label} -- no GUI / no X11" {
linux ${kernel} ${append_live} nomodeset bee.gui=off bee.nvidia.mode=gsp-off pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
initrd ${initrd}
}
if [ "\${grub_platform}" = "efi" ]; then
menuentry "Memory Test (memtest86+)" {
@@ -742,21 +775,28 @@ write_canonical_isolinux_cfg() {
kernel="$2"
initrd="$3"
append_live="$4"
version_label="${PROJECT_VERSION_EFFECTIVE}"
cat > "$cfg" <<EOF
label live-@FLAVOUR@-normal
menu label ^EASY-BEE
menu default
menu label ^EASY-BEE v${version_label}
linux ${kernel}
initrd ${initrd}
append ${append_live} nomodeset bee.nvidia.mode=normal net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
label live-@FLAVOUR@-toram
menu label EASY-BEE (^load to RAM)
menu label EASY-BEE v${version_label} (^load to RAM)
menu default
linux ${kernel}
initrd ${initrd}
append ${append_live} toram nomodeset bee.nvidia.mode=normal net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
label live-@FLAVOUR@-console
menu label EASY-BEE v${version_label} (^no GUI / no X11)
linux ${kernel}
initrd ${initrd}
append ${append_live} nomodeset bee.gui=off bee.nvidia.mode=gsp-off net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
label live-@FLAVOUR@-gsp-off
menu label EASY-BEE (^NVIDIA GSP=off)
linux ${kernel}
@@ -800,10 +840,7 @@ enforce_live_build_bootloader_assets() {
if [ -f "$grub_cfg" ]; then
if extract_live_grub_entry "$grub_cfg"; then
mkdir -p "$grub_dir/live-theme"
cp "${BUILDER_DIR}/config/bootloaders/grub-efi/config.cfg" "$grub_dir/config.cfg"
cp "${BUILDER_DIR}/config/bootloaders/grub-efi/theme.cfg" "$grub_dir/theme.cfg"
cp -R "${BUILDER_DIR}/config/bootloaders/grub-efi/live-theme/." "$grub_dir/live-theme/"
write_canonical_grub_cfg "$grub_cfg" "$grub_kernel" "${live_build_append:-$grub_append}" "$grub_initrd"
echo "bootloader sync: rewrote binary/boot/grub/grub.cfg with canonical EASY-BEE menu"
else
@@ -857,8 +894,11 @@ FULL_BUILD_MARKER="${BUILD_WORK_DIR}/.bee-full-build-marker"
# hooks, archives, Dockerfile, auto/config) require a full lb build.
needs_full_build() {
[ -f "${FULL_BUILD_MARKER}" ] || return 0
[ -f "${BUILD_WORK_DIR}/binary/live/filesystem.squashfs" ] || return 0
[ -f "${BUILD_WORK_DIR}/live-image-amd64.hybrid.iso" ] || return 0
# Accept any versioned squashfs (filesystem-v*.squashfs or legacy filesystem.squashfs)
_any_sq=$(find "${BUILD_WORK_DIR}/binary/live" -maxdepth 1 \
-name 'filesystem*.squashfs' 2>/dev/null | head -1)
[ -n "$_any_sq" ] || return 0
_heavy=$(find \
"${BUILDER_DIR}/VERSIONS" \
@@ -881,40 +921,109 @@ needs_full_build() {
# Fast-path: unsquash existing filesystem, rsync overlay on top, repack.
# Requires ~10 GB free in BEE_CACHE_DIR for the unpacked squashfs.
fast_path_repack_squashfs() {
_sq="${BUILD_WORK_DIR}/binary/live/filesystem.squashfs"
_old_sq=$(find "${BUILD_WORK_DIR}/binary/live" -maxdepth 1 \
-name 'filesystem*.squashfs' | sort | head -1)
_sq="${BUILD_WORK_DIR}/binary/live/${SQUASHFS_FILENAME}"
_tmp="${BEE_CACHE_DIR}/fast-unsquash-${BUILD_VARIANT}"
echo "=== fast-path: unsquash ($(du -sh "$_sq" | cut -f1) compressed) ==="
echo "=== fast-path: unsquash $(basename "$_old_sq") ($(du -sh "$_old_sq" | cut -f1) compressed) ==="
rm -rf "$_tmp"
unsquashfs -d "$_tmp" "$_sq"
unsquashfs -d "$_tmp" "$_old_sq"
echo "=== fast-path: syncing overlay stage ==="
rsync -a --checksum "${OVERLAY_STAGE_DIR}/" "$_tmp/"
echo "=== fast-path: repacking squashfs ==="
echo "=== fast-path: repacking as ${SQUASHFS_FILENAME} ==="
_sq_new="${_sq}.new"
rm -f "$_sq_new"
mksquashfs "$_tmp" "$_sq_new" -comp zstd -b 1048576 -noappend -no-progress
mksquashfs "$_tmp" "$_sq_new" -comp zstd -b 1048576 -noappend -no-progress -no-xattrs
mv "$_sq_new" "$_sq"
rm -rf "$_tmp"
[ "$_old_sq" != "$_sq" ] && rm -f "$_old_sq"
echo "=== fast-path: squashfs repacked ($(du -sh "$_sq" | cut -f1)) ==="
}
# Fast-path: rebuild ISO by replacing only live/filesystem.squashfs via xorriso.
# Fast-path: rebuild ISO replacing the squashfs via xorriso.
# Boot structure (El Torito, EFI, MBR hybrid) is replayed from the prior ISO.
fast_path_rebuild_iso() {
_sq="${BUILD_WORK_DIR}/binary/live/filesystem.squashfs"
_sq="${BUILD_WORK_DIR}/binary/live/${SQUASHFS_FILENAME}"
_prior="${BUILD_WORK_DIR}/live-image-amd64.hybrid.iso"
_new="${BUILD_WORK_DIR}/live-image-amd64.hybrid.iso.new"
echo "=== fast-path: rebuilding ISO with xorriso ==="
rm -f "$_new"
# Remove any old squashfs entries from the prior ISO before adding the new one
_old_entries=$(xorriso -indev "$_prior" -find /live -name 'filesystem*.squashfs' -- 2>/dev/null \
| grep -E '^/live/filesystem.*\.squashfs$' || true)
_rm_args=""
for _e in $_old_entries; do
_rm_args="$_rm_args -rm $_e --"
done
# shellcheck disable=SC2086
xorriso \
-indev "$_prior" \
-outdev "$_new" \
-map "$_sq" /live/filesystem.squashfs \
${_rm_args} \
-map "$_sq" /live/${SQUASHFS_FILENAME} \
-boot_image any replay \
-commit
mv "$_new" "$_prior"
echo "=== fast-path: ISO rebuilt ==="
}
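The `_rm_args` string above is deliberately expanded unquoted (hence the SC2086 waiver) so each `-rm PATH --` triple becomes separate words. An alternative sketch that builds the same argument list on the positional parameters, avoiding word-splitting entirely (helper name is hypothetical):

```shell
# Count the words produced by turning a space-separated list of ISO paths
# into "-rm PATH --" triples via the positional parameters ("$@").
count_rm_args() {
  old_entries="$1"
  set --                          # reset $@ inside the function
  for _e in $old_entries; do      # entries are slash-paths, never contain spaces
    set -- "$@" -rm "$_e" --
  done
  echo $#                         # a real caller would pass "$@" to xorriso
}
```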
dir_has_entries() {
_dir="$1"
[ -d "$_dir" ] || return 1
find "$_dir" -mindepth 1 -print -quit 2>/dev/null | grep -q .
}
move_tree_to_layer() {
_src_root="$1"
_rel="$2"
_dst_root="$3"
[ -e "${_src_root}/${_rel}" ] || return 0
mkdir -p "${_dst_root}/$(dirname "$_rel")"
mv "${_src_root}/${_rel}" "${_dst_root}/${_rel}"
}
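move_tree_to_layer quietly succeeds when the subtree is absent, so callers can list optional paths (usr/lib/firmware, boot/firmware) unconditionally. A standalone exercise (the function is repeated here so the snippet runs on its own; paths are throwaway temp dirs):

```shell
# Relocate one subtree from a source root into a destination layer root,
# preserving its relative path; absent subtree is a successful no-op.
move_tree_to_layer() {
  _src_root="$1"; _rel="$2"; _dst_root="$3"
  [ -e "${_src_root}/${_rel}" ] || return 0
  mkdir -p "${_dst_root}/$(dirname "$_rel")"
  mv "${_src_root}/${_rel}" "${_dst_root}/${_rel}"
}

src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/usr/lib/firmware"
: > "$src/usr/lib/firmware/blob.bin"
move_tree_to_layer "$src" "usr/lib/firmware" "$dst"
move_tree_to_layer "$src" "boot/firmware" "$dst"   # absent: still exits 0
```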
split_live_squashfs_layers() {
lb_dir="$1"
live_dir="${lb_dir}/binary/live"
base_sq="${live_dir}/filesystem.squashfs"
usr_sq="${live_dir}/10-usr.squashfs"
fw_sq="${live_dir}/20-firmware.squashfs"
[ -f "$base_sq" ] || return 0
command -v unsquashfs >/dev/null 2>&1 || return 0
command -v mksquashfs >/dev/null 2>&1 || return 0
tmp_root="$(mktemp -d)"
tmp_usr="$(mktemp -d)"
tmp_fw="$(mktemp -d)"
echo "=== splitting live squashfs into smaller layers ==="
unsquashfs -d "$tmp_root/root" "$base_sq" >/dev/null
mkdir -p "$tmp_usr/root" "$tmp_fw/root"
move_tree_to_layer "$tmp_root/root" "usr" "$tmp_usr/root"
move_tree_to_layer "$tmp_root/root" "lib/firmware" "$tmp_fw/root"
move_tree_to_layer "$tmp_root/root" "usr/lib/firmware" "$tmp_fw/root"
move_tree_to_layer "$tmp_root/root" "boot/firmware" "$tmp_fw/root"
rm -f "$usr_sq" "$fw_sq"
mksquashfs "$tmp_root/root" "${base_sq}.new" -comp zstd -b 1048576 -noappend -no-progress -no-xattrs >/dev/null
mv "${base_sq}.new" "$base_sq"
if dir_has_entries "$tmp_usr/root"; then
mksquashfs "$tmp_usr/root" "${usr_sq}.new" -comp zstd -b 1048576 -noappend -no-progress -no-xattrs >/dev/null
mv "${usr_sq}.new" "$usr_sq"
fi
if dir_has_entries "$tmp_fw/root"; then
mksquashfs "$tmp_fw/root" "${fw_sq}.new" -comp zstd -b 1048576 -noappend -no-progress -no-xattrs >/dev/null
mv "${fw_sq}.new" "$fw_sq"
fi
echo "=== live squashfs layers ==="
find "$live_dir" -maxdepth 1 -type f -name '*.squashfs' -exec du -sh {} \; | sort
rm -rf "$tmp_root" "$tmp_usr" "$tmp_fw"
}
recover_iso_memtest() {
lb_dir="$1"
iso_path="$2"
@@ -992,11 +1101,12 @@ recover_iso_memtest() {
fi
}
AUDIT_VERSION_EFFECTIVE="$(resolve_audit_version)"
ISO_VERSION_EFFECTIVE="$(resolve_iso_version)"
ISO_BASENAME="easy-bee-${BUILD_VARIANT}-v${ISO_VERSION_EFFECTIVE}-amd64"
PROJECT_VERSION_EFFECTIVE="$(resolve_project_version)"
SQUASHFS_FILENAME="filesystem-v${PROJECT_VERSION_EFFECTIVE}.squashfs"
ISO_BASENAME="easy-bee-${BUILD_VARIANT}-v${PROJECT_VERSION_EFFECTIVE}-amd64"
# Versioned output directory: dist/release/easy-bee-v<version>/ -- all final artefacts live here.
OUT_DIR="${DIST_DIR}/easy-bee-v${ISO_VERSION_EFFECTIVE}"
OUT_DIR="${DIST_DIR}/release/easy-bee-v${PROJECT_VERSION_EFFECTIVE}"
ISO_VERSION_LABEL_TOKEN="$(printf '%s' "${PROJECT_VERSION_EFFECTIVE}" | tr '[:lower:].-' '[:upper:]__')"
mkdir -p "${OUT_DIR}"
LOG_DIR="${OUT_DIR}/${ISO_BASENAME}.logs"
LOG_ARCHIVE="${OUT_DIR}/${ISO_BASENAME}.logs.tar.gz"
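ISO_VERSION_LABEL_TOKEN upper-cases the version and maps '.' and '-' to '_' so the result is a valid ISO9660 volume-ID token. The mapping in isolation (function name is illustrative; the tr sets match the build script):

```shell
# "10.15-rc1" -> "10_15_RC1": tr maps lower-case to upper-case and both
# '.' and '-' to '_'.
to_label_token() {
  printf '%s' "$1" | tr '[:lower:].-' '[:upper:]__'
}
```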
@@ -1172,7 +1282,7 @@ fi
echo "=== bee ISO build (variant: ${BUILD_VARIANT}) ==="
echo "Debian: ${DEBIAN_VERSION}, Kernel ABI: ${DEBIAN_KERNEL_ABI}, Go: ${GO_VERSION}"
echo "Audit version: ${AUDIT_VERSION_EFFECTIVE}, ISO version: ${ISO_VERSION_EFFECTIVE}"
echo "Project version: ${PROJECT_VERSION_EFFECTIVE}"
echo ""
run_step "sync git submodules" "05-git-submodules" \
@@ -1180,7 +1290,7 @@ run_step "sync git submodules" "05-git-submodules" \
# --- compile bee binary (static, Linux amd64) ---
# Shared between variants — built once, reused on second pass.
BEE_BIN="${DIST_DIR}/bee-linux-amd64"
BEE_BIN="${DIST_DIR}/cache/bee-linux-amd64"
NEED_BUILD=1
if [ -f "$BEE_BIN" ]; then
NEWEST_SRC=$(find "${REPO_ROOT}/audit" -name '*.go' -newer "$BEE_BIN" | head -1)
@@ -1192,7 +1302,7 @@ if [ "$NEED_BUILD" = "1" ]; then
"cd '${REPO_ROOT}/audit' && \
env GOOS=linux GOARCH=amd64 CGO_ENABLED=0 \
go build \
-ldflags '-s -w -X main.Version=${AUDIT_VERSION_EFFECTIVE}' \
-ldflags '-s -w -X main.Version=${PROJECT_VERSION_EFFECTIVE}' \
-o '${BEE_BIN}' \
./cmd/bee"
echo "binary: $BEE_BIN"
@@ -1211,16 +1321,16 @@ else
fi
# --- NVIDIA-only build steps ---
GPU_BURN_WORKER_BIN="${DIST_DIR}/bee-gpu-burn-worker-linux-amd64"
GPU_BURN_WORKER_BIN="${DIST_DIR}/cache/bee-gpu-burn-worker-linux-amd64"
if [ "$BEE_GPU_VENDOR" = "nvidia" ]; then
run_step "download cuBLAS/cuBLASLt/cudart ${NCCL_CUDA_VERSION} userspace" "20-cublas" \
sh "${BUILDER_DIR}/build-cublas.sh" \
"${CUBLAS_VERSION}" \
"${CUDA_USERSPACE_VERSION}" \
"${NCCL_CUDA_VERSION}" \
"${DIST_DIR}"
"${DIST_DIR}/cache"
CUBLAS_CACHE="${DIST_DIR}/cublas-${CUBLAS_VERSION}+cuda${NCCL_CUDA_VERSION}"
CUBLAS_CACHE="${DIST_DIR}/cache/cublas-${CUBLAS_VERSION}+cuda${NCCL_CUDA_VERSION}"
echo "=== bee-gpu-burn FP4 header probe ==="
fp4_type_match="$(grep -Rsnm 1 'CUDA_R_4F_E2M1' "${CUBLAS_CACHE}/include" 2>/dev/null || true)"
@@ -1346,7 +1456,7 @@ fi
# --- copy bee binary into overlay ---
mkdir -p "${OVERLAY_STAGE_DIR}/usr/local/bin"
cp "${DIST_DIR}/bee-linux-amd64" "${OVERLAY_STAGE_DIR}/usr/local/bin/bee"
cp "$BEE_BIN" "${OVERLAY_STAGE_DIR}/usr/local/bin/bee"
chmod +x "${OVERLAY_STAGE_DIR}/usr/local/bin/bee"
if [ "$BEE_GPU_VENDOR" = "nvidia" ] && [ -f "$GPU_BURN_WORKER_BIN" ]; then
@@ -1376,10 +1486,10 @@ done
# --- NVIDIA kernel modules and userspace libs ---
if [ "$BEE_GPU_VENDOR" = "nvidia" ]; then
run_step "build NVIDIA ${NVIDIA_DRIVER_VERSION} modules" "40-nvidia-module" \
sh "${BUILDER_DIR}/build-nvidia-module.sh" "${NVIDIA_DRIVER_VERSION}" "${DIST_DIR}" "${DEBIAN_KERNEL_ABI}" "${BEE_NVIDIA_MODULE_FLAVOR}"
sh "${BUILDER_DIR}/build-nvidia-module.sh" "${NVIDIA_DRIVER_VERSION}" "${DIST_DIR}/cache" "${DEBIAN_KERNEL_ABI}" "${BEE_NVIDIA_MODULE_FLAVOR}"
KVER="${DEBIAN_KERNEL_ABI}-amd64"
NVIDIA_CACHE="${DIST_DIR}/nvidia-${BEE_NVIDIA_MODULE_FLAVOR}-${NVIDIA_DRIVER_VERSION}-${KVER}"
NVIDIA_CACHE="${DIST_DIR}/cache/nvidia-${BEE_NVIDIA_MODULE_FLAVOR}-${NVIDIA_DRIVER_VERSION}-${KVER}"
# Inject .ko files into overlay at /usr/local/lib/nvidia/
OVERLAY_KMOD_DIR="${OVERLAY_STAGE_DIR}/usr/local/lib/nvidia"
@@ -1405,9 +1515,9 @@ if [ "$BEE_GPU_VENDOR" = "nvidia" ]; then
# --- build / download NCCL ---
run_step "download NCCL ${NCCL_VERSION}+cuda${NCCL_CUDA_VERSION}" "50-nccl" \
sh "${BUILDER_DIR}/build-nccl.sh" "${NCCL_VERSION}" "${NCCL_CUDA_VERSION}" "${DIST_DIR}" "${NCCL_SHA256:-}"
sh "${BUILDER_DIR}/build-nccl.sh" "${NCCL_VERSION}" "${NCCL_CUDA_VERSION}" "${DIST_DIR}/cache" "${NCCL_SHA256:-}"
NCCL_CACHE="${DIST_DIR}/nccl-${NCCL_VERSION}+cuda${NCCL_CUDA_VERSION}"
NCCL_CACHE="${DIST_DIR}/cache/nccl-${NCCL_VERSION}+cuda${NCCL_CUDA_VERSION}"
# Inject libnccl.so.* into overlay alongside other NVIDIA userspace libs
cp "${NCCL_CACHE}/lib/"* "${OVERLAY_STAGE_DIR}/usr/lib/"
@@ -1423,19 +1533,19 @@ if [ "$BEE_GPU_VENDOR" = "nvidia" ]; then
"${NCCL_TESTS_VERSION}" \
"${NCCL_VERSION}" \
"${NCCL_CUDA_VERSION}" \
"${DIST_DIR}" \
"${DIST_DIR}/cache" \
"${NVCC_VERSION}" \
"${DEBIAN_VERSION}"
NCCL_TESTS_CACHE="${DIST_DIR}/nccl-tests-${NCCL_TESTS_VERSION}"
NCCL_TESTS_CACHE="${DIST_DIR}/cache/nccl-tests-${NCCL_TESTS_VERSION}"
cp "${NCCL_TESTS_CACHE}/bin/all_reduce_perf" "${OVERLAY_STAGE_DIR}/usr/local/bin/all_reduce_perf"
chmod +x "${OVERLAY_STAGE_DIR}/usr/local/bin/all_reduce_perf"
cp "${NCCL_TESTS_CACHE}/lib/"* "${OVERLAY_STAGE_DIR}/usr/lib/" 2>/dev/null || true
echo "=== all_reduce_perf injected ==="
run_step "build john jumbo ${JOHN_JUMBO_COMMIT}" "70-john" \
sh "${BUILDER_DIR}/build-john.sh" "${JOHN_JUMBO_COMMIT}" "${DIST_DIR}"
JOHN_CACHE="${DIST_DIR}/john-${JOHN_JUMBO_COMMIT}"
sh "${BUILDER_DIR}/build-john.sh" "${JOHN_JUMBO_COMMIT}" "${DIST_DIR}/cache"
JOHN_CACHE="${DIST_DIR}/cache/john-${JOHN_JUMBO_COMMIT}"
mkdir -p "${OVERLAY_STAGE_DIR}/usr/local/lib/bee/john"
rsync -a --delete "${JOHN_CACHE}/run/" "${OVERLAY_STAGE_DIR}/usr/local/lib/bee/john/run/"
ln -sfn ../lib/bee/john/run/john "${OVERLAY_STAGE_DIR}/usr/local/bin/john"
@@ -1467,8 +1577,10 @@ else
fi
cat > "${OVERLAY_STAGE_DIR}/etc/bee-release" <<EOF
BEE_ISO_VERSION=${ISO_VERSION_EFFECTIVE}
BEE_AUDIT_VERSION=${AUDIT_VERSION_EFFECTIVE}
BEE_VERSION=${PROJECT_VERSION_EFFECTIVE}
export BEE_VERSION
BEE_ISO_VERSION=${PROJECT_VERSION_EFFECTIVE}
BEE_AUDIT_VERSION=${PROJECT_VERSION_EFFECTIVE}
BEE_BUILD_VARIANT=${BUILD_VARIANT}
BEE_GPU_VENDOR=${BEE_GPU_VENDOR}
BUILD_DATE=${BUILD_DATE}
@@ -1561,6 +1673,7 @@ if ! needs_full_build; then
fast_path_rebuild_iso
ISO_RAW="${LB_DIR}/live-image-amd64.hybrid.iso"
validate_iso_live_boot_entries "$ISO_RAW"
validate_iso_grub_assets "$ISO_RAW"
validate_iso_nvidia_runtime "$ISO_RAW"
cp "$ISO_RAW" "$ISO_OUT"
echo ""
@@ -1575,15 +1688,25 @@ echo "=== building ISO (variant: ${BUILD_VARIANT}) ==="
# Export for auto/config
BEE_GPU_VENDOR_UPPER="$(echo "${BUILD_VARIANT}" | tr 'a-z-' 'A-Z_')"
export BEE_GPU_VENDOR_UPPER
BEE_ISO_VOLUME="EASY_BEE_${BEE_GPU_VENDOR_UPPER}_V${ISO_VERSION_LABEL_TOKEN}"
export BEE_GPU_VENDOR_UPPER BEE_ISO_VOLUME
cd "${LB_DIR}"
run_step_sh "live-build clean" "80-lb-clean" "lb clean --all 2>&1 | tail -3"
run_step_sh "live-build config" "81-lb-config" "lb config 2>&1 | tail -5"
dump_memtest_debug "pre-build" "${LB_DIR}"
export MKSQUASHFS_OPTIONS="-no-xattrs"
run_step_sh "live-build build" "90-lb-build" "lb build 2>&1"
echo "=== enforcing canonical bootloader assets ==="
enforce_live_build_bootloader_assets "${LB_DIR}"
# Rename lb's default filesystem.squashfs to the versioned filename so the
# ISO contains a version-stamped squashfs (e.g. filesystem-v10.15.squashfs).
_std_sq="${LB_DIR}/binary/live/filesystem.squashfs"
_ver_sq="${LB_DIR}/binary/live/${SQUASHFS_FILENAME}"
if [ -f "${_std_sq}" ] && [ "${_std_sq}" != "${_ver_sq}" ]; then
mv "${_std_sq}" "${_ver_sq}"
echo "=== squashfs renamed: filesystem.squashfs → ${SQUASHFS_FILENAME} ==="
fi
reset_live_build_stage "${LB_DIR}" "binary_checksums"
reset_live_build_stage "${LB_DIR}" "binary_iso"
reset_live_build_stage "${LB_DIR}" "binary_zsync"
@@ -1615,6 +1738,7 @@ if [ -f "$ISO_RAW" ]; then
fi
validate_iso_memtest "$ISO_RAW"
validate_iso_live_boot_entries "$ISO_RAW"
validate_iso_grub_assets "$ISO_RAW"
validate_iso_nvidia_runtime "$ISO_RAW"
cp "$ISO_RAW" "$ISO_OUT"
touch "${FULL_BUILD_MARKER}"


@@ -1,5 +1,7 @@
set default=0
set timeout=5
set default=1
set timeout=10
set color_normal=yellow/black
set color_highlight=white/brown
if [ x$feature_default_font_path = xy ] ; then
font=unicode
@@ -8,7 +10,7 @@ else
fi
if loadfont $font ; then
set gfxmode=1920x1080,1280x1024,auto
set gfxmode=1280x1024,auto
set gfxpayload=keep
insmod efi_gop
insmod efi_uga
@@ -26,6 +28,3 @@ insmod gfxterm
terminal_input console serial
terminal_output gfxterm serial
insmod tga
source /boot/grub/theme.cfg


@@ -1,15 +1,25 @@
source /boot/grub/config.cfg
menuentry "EASY-BEE" {
linux @KERNEL_LIVE@ @APPEND_LIVE@ bee.display=kms bee.nvidia.mode=normal pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
menuentry "EASY-BEE v@VERSION@" {
linux @KERNEL_LIVE@ @APPEND_LIVE@ nomodeset bee.nvidia.mode=normal pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
initrd @INITRD_LIVE@
}
menuentry "EASY-BEE -- load to RAM (toram)" {
linux @KERNEL_LIVE@ @APPEND_LIVE@ toram bee.display=kms bee.nvidia.mode=normal pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
menuentry "EASY-BEE v@VERSION@ -- load to RAM (toram)" {
linux @KERNEL_LIVE@ @APPEND_LIVE@ toram nomodeset bee.nvidia.mode=normal pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
initrd @INITRD_LIVE@
}
menuentry "EASY-BEE v@VERSION@ -- no GUI / no X11" {
linux @KERNEL_LIVE@ @APPEND_LIVE@ nomodeset bee.gui=off bee.nvidia.mode=gsp-off pci=realloc net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
initrd @INITRD_LIVE@
}
menuentry "*** WIPE ALL DISKS (irreversible!) ***" {
linux @KERNEL_LIVE@ @APPEND_LIVE@ toram nomodeset bee.gui=off bee.wipe=all net.ifnames=0 biosdevname=0
initrd @INITRD_LIVE@
}
if [ "${grub_platform}" = "efi" ]; then
menuentry "Memory Test (memtest86+)" {


@@ -5,13 +5,6 @@ title-text: ""
message-font: "Unifont Regular 16"
terminal-font: "Unifont Regular 16"
#bee logo - centered, upper third of screen
+ image {
top = 4%
left = 50%-200
file = "bee-logo.tga"
}
#help bar at the bottom
+ label {
top = 100%-50


@@ -1,16 +1,22 @@
label live-@FLAVOUR@-normal
menu label ^EASY-BEE
menu default
menu label ^EASY-BEE v@VERSION@
linux @LINUX@
initrd @INITRD@
append @APPEND_LIVE@ nomodeset bee.nvidia.mode=normal net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
label live-@FLAVOUR@-toram
menu label EASY-BEE (^load to RAM)
menu label EASY-BEE v@VERSION@ (^load to RAM)
menu default
linux @LINUX@
initrd @INITRD@
append @APPEND_LIVE@ toram nomodeset bee.nvidia.mode=normal net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
label live-@FLAVOUR@-console
menu label EASY-BEE v@VERSION@ (^no GUI / no X11)
linux @LINUX@
initrd @INITRD@
append @APPEND_LIVE@ nomodeset bee.gui=off bee.nvidia.mode=gsp-off net.ifnames=0 biosdevname=0 mitigations=off transparent_hugepage=always numa_balancing=disable pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1 nowatchdog nosoftlockup
label live-@FLAVOUR@-gsp-off
menu label EASY-BEE (^NVIDIA GSP=off)
linux @LINUX@
@@ -35,6 +41,12 @@ label live-@FLAVOUR@-failsafe
initrd @INITRD@
append @APPEND_LIVE@ nomodeset bee.nvidia.mode=gsp-off noapic noapm nodma nomce nolapic nosmp vga=normal net.ifnames=0 biosdevname=0
label wipe-disks
menu label *** WIPE ALL DISKS (irreversible!) ***
linux @LINUX@
initrd @INITRD@
append @APPEND_LIVE@ toram nomodeset bee.gui=off bee.wipe=all net.ifnames=0 biosdevname=0
label memtest
menu label ^Memory Test (memtest86+)
linux /boot/memtest86+x64.bin


@@ -67,6 +67,7 @@ chmod +x /usr/local/bin/bee-log-run 2>/dev/null || true
chmod +x /usr/local/bin/bee-selfheal 2>/dev/null || true
chmod +x /usr/local/bin/bee-boot-status 2>/dev/null || true
chmod +x /usr/local/bin/bee-install 2>/dev/null || true
chmod +x /usr/local/bin/bee-gui-gate 2>/dev/null || true
chmod +x /usr/local/bin/bee-remount-medium 2>/dev/null || true
if [ "$GPU_VENDOR" = "nvidia" ]; then
chmod +x /usr/local/bin/bee-nvidia-load 2>/dev/null || true


@@ -0,0 +1,57 @@
#!/bin/sh
# 9012-wipe.hook.chroot
#
# Adds bee-initramfs-wipe to the initramfs so that selecting the
# "WIPE ALL DISKS" boot menu entry runs the wipe tool before squashfs
# is mounted — i.e. it works even when live boot fails.
#
# Two files are installed inside the chroot:
# /etc/initramfs-tools/hooks/bee-wipe — copies binaries into initrd
# /etc/initramfs-tools/scripts/local-premount/bee-wipe — runs at boot
set -e
HOOK_DIR="/etc/initramfs-tools/hooks"
SCRIPT_DIR="/etc/initramfs-tools/scripts/local-premount"
mkdir -p "${HOOK_DIR}" "${SCRIPT_DIR}"
# ── initramfs hook: copy binaries ────────────────────────────────────────────
cat > "${HOOK_DIR}/bee-wipe" << 'EOF'
#!/bin/sh
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in prereqs) prereqs; exit 0 ;; esac
. /usr/share/initramfs-tools/hook-functions
for bin in lsblk blkid blkdiscard blockdev; do
b=$(command -v "$bin" 2>/dev/null) && copy_exec "$b" /bin
done
[ -x /usr/sbin/nvme ] && copy_exec /usr/sbin/nvme /sbin
copy_exec /usr/local/bin/bee-initramfs-wipe /bin/bee-wipe
EOF
chmod +x "${HOOK_DIR}/bee-wipe"
# ── initramfs premount script: trigger on bee.wipe=all ───────────────────────
cat > "${SCRIPT_DIR}/bee-wipe" << 'EOF'
#!/bin/sh
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in prereqs) prereqs; exit 0 ;; esac
grep -qw 'bee.wipe=all' /proc/cmdline 2>/dev/null || exit 0
exec /bin/bee-wipe
EOF
chmod +x "${SCRIPT_DIR}/bee-wipe"
echo "9012-wipe: installed initramfs hook and premount script"
KVER=$(ls /lib/modules | sort -V | tail -1)
echo "9012-wipe: rebuilding initramfs for kernel ${KVER}"
update-initramfs -u -k "${KVER}"
echo "9012-wipe: done"
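The premount script's only trigger is the kernel command line, which keeps the wipe path inert on normal boots. The match can be exercised standalone; `-w` rejects longer tokens (the cmdline strings below are fabricated examples):

```shell
# True if the wipe boot parameter is present as a whole word in a cmdline
# string (same grep invocation as the premount script, minus /proc).
wipe_requested() {
  printf '%s\n' "$1" | grep -qw 'bee.wipe=all'
}
```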


@@ -0,0 +1,37 @@
#!/usr/bin/env python3
# 9998-strip-xattrs.hook.chroot
#
# mksquashfs 4.5.1 (Debian bookworm) writes a non-INVALID xattr_id_table_start
# even with -no-xattrs when the source tree contains POSIX ACL xattrs set by
# dpkg/install-time. Linux 6.1 squashfs driver then fails with
# "unable to read xattr id index table" and aborts the mount.
#
# Strip all xattrs from the live chroot before mksquashfs sees the tree so the
# resulting squashfs has SQUASHFS_INVALID_BLK in xattr_id_table_start.
import os

def strip(path):
    try:
        for attr in os.listxattr(path, follow_symlinks=False):
            try:
                os.removexattr(path, attr, follow_symlinks=False)
            except OSError:
                pass
    except OSError:
        pass

removed = 0
strip('/')
for root, dirs, files in os.walk('/', topdown=True, followlinks=False):
    # Skip virtual filesystems if they happen to be mounted in the build
    # chroot; their xattrs are synthetic and walking them is slow.
    dirs[:] = [d for d in dirs
               if os.path.join(root, d) not in ('/proc', '/sys', '/dev', '/run')]
    for name in dirs + files:
        p = os.path.join(root, name)
        try:
            attrs = os.listxattr(p, follow_symlinks=False)
            if attrs:
                strip(p)
                removed += len(attrs)
        except OSError:
            pass
print(f"9998-strip-xattrs: removed {removed} xattrs")

View File

@@ -1,11 +1,4 @@
███████╗ █████╗ ███████╗██╗ ██╗ ██████╗ ███████╗███████╗
██╔════╝██╔══██╗██╔════╝╚██╗ ██╔╝ ██╔══██╗██╔════╝██╔════╝
█████╗ ███████║███████╗ ╚████╔╝ █████╗██████╔╝█████╗ █████╗
██╔══╝ ██╔══██║╚════██║ ╚██╔╝ ╚════╝██╔══██╗██╔══╝ ██╔══╝
███████╗██║ ██║███████║ ██║ ██████╔╝███████╗███████╗
╚══════╝╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═════╝ ╚══════╝╚══════╝
EASY BEE
Hardware Audit LiveCD
Build: %%BUILD_INFO%%

View File

@@ -1,6 +1,6 @@
[Unit]
Description=Bee: hardware audit
After=bee-preflight.service bee-network.service bee-nvidia.service bee-blackbox.service
After=bee-preflight.service bee-nvidia.service bee-blackbox.service
[Service]
Type=oneshot

View File

@@ -1,7 +1,6 @@
[Unit]
Description=Bee: bring up network interfaces via DHCP
After=local-fs.target bee-blackbox.service
Before=network-online.target bee-audit.service
After=bee-web.service bee-audit.service
[Service]
Type=oneshot

View File

@@ -1,6 +1,6 @@
[Unit]
Description=Bee: runtime preflight self-check
After=bee-network.service bee-nvidia.service bee-blackbox.service
After=bee-nvidia.service bee-blackbox.service
Before=bee-audit.service
[Service]

View File

@@ -3,7 +3,7 @@ Description=Bee: run self-heal checks periodically
[Timer]
OnBootSec=45sec
OnUnitActiveSec=60sec
OnUnitActiveSec=3min
AccuracySec=15sec
Unit=bee-selfheal.service

View File

@@ -0,0 +1,2 @@
[Service]
ExecCondition=/usr/local/bin/bee-gui-gate

View File

@@ -51,12 +51,7 @@ while true; do
printf '\033[H\033[2J'
printf '\n'
printf ' \033[33m███████╗ █████╗ ███████╗██╗ ██╗ ██████╗ ███████╗███████╗\033[0m\n'
printf ' \033[33m██╔════╝██╔══██╗██╔════╝╚██╗ ██╔╝ ██╔══██╗██╔════╝██╔════╝\033[0m\n'
printf ' \033[33m█████╗ ███████║███████╗ ╚████╔╝ █████╗██████╔╝█████╗ █████╗\033[0m\n'
printf ' \033[33m██╔══╝ ██╔══██║╚════██║ ╚██╔╝ ╚════╝██╔══██╗██╔══╝ ██╔══╝\033[0m\n'
printf ' \033[33m███████╗██║ ██║███████║ ██║ ██████╔╝███████╗███████╗\033[0m\n'
printf ' \033[33m╚══════╝╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═════╝ ╚══════╝╚══════╝\033[0m\n'
printf ' \033[33mEASY BEE\033[0m\n'
printf ' Hardware Audit LiveCD\n'
printf '\n'

View File

@@ -0,0 +1,27 @@
#!/bin/sh
# bee-gui-gate — skip starting the local GUI when bee.gui=off is set.
set -eu
cmdline_param() {
    key="$1"
    for token in $(cat /proc/cmdline 2>/dev/null); do
        case "$token" in
            "$key"=*)
                echo "${token#*=}"
                return 0
                ;;
        esac
    done
    return 1
}
mode="$(cmdline_param bee.gui || true)"
case "${mode}" in
    off|false|0|tty|console|text|nogui)
        echo "bee-gui-gate: bee.gui=${mode}; skipping lightdm"
        exit 1
        ;;
esac
exit 0
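The parsing logic is easy to exercise on its own. The sketch below is the same token walk as `cmdline_param`, but reading from a test string instead of `/proc/cmdline` (the cmdline contents here are hypothetical):

```shell
# cmdline_param, parameterized over the cmdline text for testing.
cmdline_param() {
    key="$1"
    cmdline="$2"
    for token in $cmdline; do
        case "$token" in
            "$key"=*) echo "${token#*=}"; return 0 ;;
        esac
    done
    return 1
}
mode=$(cmdline_param bee.gui "quiet splash bee.gui=off" || true)
echo "bee.gui -> ${mode:-<unset>}"
missing=$(cmdline_param bee.wipe "quiet splash" || true)
echo "bee.wipe -> ${missing:-<unset>}"
```

An absent key yields a non-zero return and empty output, which is why the caller wraps the invocation in `|| true` under `set -eu`.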

View File

@@ -0,0 +1,166 @@
#!/bin/sh
# bee-initramfs-wipe — interactive disk wipe running entirely in the initramfs.
# Triggered by bee.wipe=all on the kernel cmdline (via local-premount hook).
# Works before squashfs is mounted, so it runs even when live boot fails.
RED='\033[1;31m'
YEL='\033[1;33m'
GRN='\033[1;32m'
CYN='\033[1;36m'
NC='\033[0m'
p() { printf '%b\n' "$*"; }
pp() { printf '%b' "$*"; }
banner() {
    p ""
    p "${RED}╔══════════════════════════════════════════════════════════╗${NC}"
    p "${RED}║             BEE DRIVE WIPE — initramfs stage             ║${NC}"
    p "${RED}╚══════════════════════════════════════════════════════════╝${NC}"
    p ""
}
# ── find boot device ─────────────────────────────────────────────────────────
boot_dev() {
    local label token dev
    for token in $(cat /proc/cmdline 2>/dev/null); do
        case "$token" in
            live-media-label=*) label="${token#*=}" ;;
        esac
    done
    [ -z "$label" ] && return
    dev=$(blkid -L "$label" 2>/dev/null) || return
    # Strip partition suffix: /dev/sdb1 → /dev/sdb, /dev/nvme0n1p1 → /dev/nvme0n1.
    # NVMe namespaces themselves end in a digit (/dev/nvme0n1), so only a
    # trailing pN is a partition suffix there; sd/vd devices lose trailing digits.
    case "$dev" in
        /dev/nvme*) echo "$dev" | sed 's/p[0-9]\+$//' ;;
        *)          echo "$dev" | sed 's/[0-9]\+$//' ;;
    esac
}
# ── enumerate candidate disks ─────────────────────────────────────────────────
list_disks() {
    local boot
    boot=$(boot_dev)
    lsblk -d -n -o NAME,TYPE,SIZE,MODEL 2>/dev/null | while read -r name type size model; do
        [ "$type" = "disk" ] || continue
        [ "$size" = "0B" ] && continue
        # no `local` here: the pipeline body runs in a subshell anyway
        dev="/dev/$name"
        [ "$dev" = "$boot" ] && continue
        printf '%s\t%s\t%s\n' "$dev" "$size" "${model:-}"
    done
}
# ── wipe one disk ─────────────────────────────────────────────────────────────
wipe_one() {
    local dev="$1"
    p ""
    p "=== ${YEL}${dev}${NC} ==="
    if echo "$dev" | grep -q '^/dev/nvme'; then
        if nvme format --ses=1 "$dev" 2>&1; then
            p "  ${GRN}nvme format OK${NC}"
            blockdev --flushbufs "$dev" 2>/dev/null || true
            return
        fi
        p "  nvme format failed — falling back to blkdiscard"
    fi
    if blkdiscard -f "$dev" 2>&1; then
        p "  ${GRN}blkdiscard OK${NC}"
        blockdev --flushbufs "$dev" 2>/dev/null || true
        return
    fi
    p "  blkdiscard not supported — zeroing partition tables (HDD fallback)"
    local size_bytes mb32 skip
    size_bytes=$(blockdev --getsize64 "$dev" 2>/dev/null || echo 0)
    mb32=$(( 32 * 1024 * 1024 ))
    dd if=/dev/zero of="$dev" bs=4M count=8 conv=fsync status=progress 2>&1 || true
    if [ "$size_bytes" -gt $(( mb32 * 2 )) ]; then
        skip=$(( (size_bytes - mb32) / (4 * 1024 * 1024) ))
        dd if=/dev/zero of="$dev" bs=4M count=8 seek="$skip" conv=fsync status=progress 2>&1 || true
    fi
    blockdev --flushbufs "$dev" 2>/dev/null || true
    p "  ${GRN}done (partition tables zeroed)${NC}"
}
# ── main ──────────────────────────────────────────────────────────────────────
banner
BOOT=$(boot_dev)
[ -n "$BOOT" ] && p "Boot device (excluded): ${CYN}${BOOT}${NC}\n"
# build indexed list
i=0
DEVS=""
OLDIFS="$IFS"
IFS='
'
for line in $(list_disks); do
    i=$(( i + 1 ))
    dev=$(echo "$line" | cut -f1)
    size=$(echo "$line" | cut -f2)
    model=$(echo "$line" | cut -f3)
    DEVS="${DEVS}${i}:${dev}:${size}:${model}
"
    printf "  ${CYN}[%d]${NC} %-16s %8s %s\n" "$i" "$dev" "$size" "$model"
done
# restore default IFS — the selection below is split on spaces
IFS="$OLDIFS"
if [ "$i" -eq 0 ]; then
    p "\nNo physical disks found (boot device excluded)."
    p "Dropping to shell — type 'exit' to continue boot."
    exec /bin/sh
fi
p ""
pp "Enter numbers to wipe (space-separated), ${YEL}all${NC} for all, ${YEL}q${NC} to abort: "
read -r SELECTION
case "$SELECTION" in
    q|Q|'') p "\nAborted."; exec /bin/sh ;;
esac
# resolve selection → list of devs
SELECTED=""
if [ "$SELECTION" = "all" ] || [ "$SELECTION" = "ALL" ]; then
    SELECTED=$(echo "$DEVS" | grep -v '^$' | cut -d: -f2 | tr '\n' ' ')
else
    for num in $SELECTION; do
        match=$(echo "$DEVS" | grep "^${num}:" | cut -d: -f2)
        if [ -z "$match" ]; then
            p "${RED}Unknown index: ${num}${NC}"; exec /bin/sh
        fi
        SELECTED="${SELECTED}${match} "
    done
fi
SELECTED=$(echo "$SELECTED" | tr -s ' ' | sed 's/ $//')
p ""
p "Selected for wipe: ${YEL}${SELECTED}${NC}"
p "${RED}WARNING: This is IRREVERSIBLE. All data on the selected disks will be lost.${NC}"
p ""
pp "Type YES to confirm, anything else to abort: "
read -r CONFIRM
if [ "$CONFIRM" != "YES" ]; then
    p "\nAborted — no disks were touched."
    exec /bin/sh
fi
p "\nStarting wipe..."
for dev in $SELECTED; do
    wipe_one "$dev"
done
sync
p ""
p "${GRN}=== All selected disks wiped and flushed. ===${NC}"
p ""
pp "Press Enter to reboot..."
read -r _
reboot
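The partition-to-disk derivation used when excluding the boot device has one subtlety worth pinning down: NVMe namespace names end in a digit themselves, so blindly stripping trailing digits would turn the whole-disk node `/dev/nvme0n1` into the nonexistent `/dev/nvme0n`. A standalone helper sketch (device paths here are just examples, nothing is touched on disk):

```shell
# Map a partition node to its parent disk. Only a trailing pN is a
# partition suffix on NVMe; sd/vd devices just lose trailing digits.
strip_part() {
    case "$1" in
        /dev/nvme*) echo "$1" | sed 's/p[0-9]\+$//' ;;
        *)          echo "$1" | sed 's/[0-9]\+$//' ;;
    esac
}
echo "$(strip_part /dev/sdb1)"
echo "$(strip_part /dev/nvme0n1p1)"
echo "$(strip_part /dev/nvme0n1)"
```

The third case matters when the live ISO was written to a whole NVMe device rather than a partition.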

View File

@@ -8,7 +8,7 @@
# Layout (UEFI): GPT, /dev/sdX1=EFI 512MB vfat, /dev/sdX2=root ext4
# Layout (BIOS): MBR, /dev/sdX1=root ext4
#
# Squashfs source: /run/live/medium/live/filesystem.squashfs
# Squashfs sources: /run/live/medium/live/*.squashfs
set -euo pipefail
@@ -62,9 +62,9 @@ for tool in parted mkfs.vfat mkfs.ext4 unsquashfs grub-install update-grub; do
fi
done
SQUASHFS="/run/live/medium/live/filesystem.squashfs"
if [ ! -f "$SQUASHFS" ]; then
echo "ERROR: squashfs not found at $SQUASHFS" >&2
mapfile -t SQUASHFS_FILES < <(find /run/live/medium/live -maxdepth 1 -type f -name '*.squashfs' | sort)
if [ "${#SQUASHFS_FILES[@]}" -eq 0 ]; then
echo "ERROR: no squashfs files found under /run/live/medium/live" >&2
echo " The live medium may have been disconnected." >&2
echo " Reconnect the disc and run: bee-remount-medium --wait" >&2
echo " Then re-run bee-install." >&2
@@ -106,7 +106,10 @@ log "=== BEE DISK INSTALLER ==="
log "Target device : $DEVICE"
log "Root partition: $PART_ROOT"
[ "$UEFI" = "1" ] && log "EFI partition : $PART_EFI"
log "Squashfs : $SQUASHFS ($(du -sh "$SQUASHFS" | cut -f1))"
log "Squashfs : ${#SQUASHFS_FILES[@]} layer(s)"
for sf in "${SQUASHFS_FILES[@]}"; do
log " - $sf ($(du -sh "$sf" | cut -f1))"
done
log "Log : $LOGFILE"
log ""
@@ -163,7 +166,9 @@ log " Mounted."
# ------------------------------------------------------------------
log "--- Step 5/7: Unpacking filesystem (this takes 10-20 minutes) ---"
log " Source: $SQUASHFS"
for sf in "${SQUASHFS_FILES[@]}"; do
log " Source: $sf"
done
log " Target: $MOUNT_ROOT"
# unsquashfs does not support resume, so retry the entire unpack step if the
@@ -177,9 +182,9 @@ while true; do
fi
[ "$UNPACK_ATTEMPTS" -gt 1 ] && log " Retry attempt $UNPACK_ATTEMPTS / $UNPACK_MAX ..."
# Re-check squashfs is reachable before each attempt
if [ ! -f "$SQUASHFS" ]; then
log " SOURCE LOST: $SQUASHFS not found."
mapfile -t SQUASHFS_FILES < <(find /run/live/medium/live -maxdepth 1 -type f -name '*.squashfs' | sort)
if [ "${#SQUASHFS_FILES[@]}" -eq 0 ]; then
log " SOURCE LOST: no squashfs files found under /run/live/medium/live."
log " Reconnect the disc and run 'bee-remount-medium --wait' in another terminal,"
log " then press Enter here to retry."
read -r _
@@ -194,12 +199,17 @@ while true; do
fi
UNPACK_OK=0
unsquashfs -f -d "$MOUNT_ROOT" "$SQUASHFS" 2>&1 | \
grep -E '^\[|^inod|^created|^extract|^ERROR|failed' | \
while IFS= read -r line; do log " $line"; done || UNPACK_OK=$?
for sf in "${SQUASHFS_FILES[@]}"; do
log " Unpacking $(basename "$sf") ..."
unsquashfs -f -d "$MOUNT_ROOT" "$sf" 2>&1 | \
grep -E '^\[|^inod|^created|^extract|^ERROR|failed' | \
while IFS= read -r line; do log " $line"; done || UNPACK_OK=$?
[ "$UNPACK_OK" -eq 0 ] || break
done
# Check squashfs is still reachable (gone = disc pulled during copy)
if [ ! -f "$SQUASHFS" ]; then
mapfile -t SQUASHFS_FILES < <(find /run/live/medium/live -maxdepth 1 -type f -name '*.squashfs' | sort)
if [ "${#SQUASHFS_FILES[@]}" -eq 0 ]; then
log " WARNING: source medium lost during unpack — will retry after remount."
log " Run 'bee-remount-medium --wait' in another terminal, then press Enter."
read -r _
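Since the installer now unpacks every `*.squashfs` layer with `unsquashfs -f` in `sort` order, the filename prefix decides which layer wins on conflicts: later archives overwrite files from earlier ones. A sketch of the ordering (layer names below are hypothetical, not the actual ISO contents):

```shell
# Sorted order = unpack order; the last layer overwrites conflicting files.
layers='20-nvidia.squashfs
10-base.squashfs'
ordered=$(printf '%s\n' "$layers" | sort)
first=$(printf '%s\n' "$ordered" | head -1)
last=$(printf '%s\n' "$ordered" | tail -1)
echo "unpacked first: $first"
echo "unpacked last (wins on conflicts): $last"
```

Numeric prefixes keep the ordering explicit rather than relying on whatever order `find` happens to emit.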

View File

@@ -1,8 +1,9 @@
#!/bin/sh
# bee-network.sh — bring up all physical network interfaces via DHCP
# Unattended: runs silently, logs results, never blocks.
# Unattended: starts later in boot, runs quietly, and gives up after a bounded timeout.
LOG_PREFIX="bee-network"
DHCP_TIMEOUT_SECS=300
log() { echo "[$LOG_PREFIX] $*"; }
@@ -19,9 +20,50 @@ if command -v udevadm >/dev/null 2>&1; then
udevadm settle --timeout=5 >/dev/null 2>&1 || log "WARN: udevadm settle timed out"
fi
start_dhcp() {
    iface="$1"
    if ! ip link set "$iface" up; then
        log "WARN: could not bring up $iface"
        return 1
    fi
    carrier=$(cat "/sys/class/net/$iface/carrier" 2>/dev/null || true)
    if [ "$carrier" = "1" ]; then
        log "carrier detected on $iface"
    else
        log "carrier not detected on $iface"
    fi
    dhclient -r "$iface" >/dev/null 2>&1 || true
    # Capture the exit status directly: after an `if ... fi` with no else,
    # $? would be 0 even when the condition command failed.
    rc=0
    timeout "${DHCP_TIMEOUT_SECS}" dhclient -4 -q -1 "$iface" >/dev/null 2>&1 || rc=$?
    if [ "$rc" -eq 0 ]; then
        addr="$(ip -4 -o addr show dev "$iface" scope global 2>/dev/null | awk '{print $4}' | head -1)"
        if [ -n "$addr" ]; then
            log "DHCP lease acquired on $iface ($addr)"
        else
            log "DHCP lease acquired on $iface"
        fi
        return 0
    fi
    case "$rc" in
        124)
            log "DHCP timed out on $iface after ${DHCP_TIMEOUT_SECS}s"
            ;;
        *)
            log "DHCP failed on $iface (exit $rc)"
            ;;
    esac
    dhclient -r "$iface" >/dev/null 2>&1 || true
    return 1
}
started_ifaces=""
started_count=0
scan_pass=1
pids=""
pid_ifaces=""
# Some server NICs appear a bit later after module/firmware init. Do a small
# bounded rescan window without turning network bring-up into a boot blocker.
@@ -34,22 +76,11 @@ while [ "$scan_pass" -le 3 ]; do
*" $iface "*) continue ;;
esac
log "bringing up $iface"
if ! ip link set "$iface" up; then
log "WARN: could not bring up $iface"
continue
fi
carrier=$(cat "/sys/class/net/$iface/carrier" 2>/dev/null || true)
if [ "$carrier" = "1" ]; then
log "carrier detected on $iface"
else
log "carrier not detected yet on $iface"
fi
# DHCP in background — non-blocking, keep dhclient verbose output in the service log.
dhclient -4 -v -nw "$iface" &
log "DHCP started for $iface (pid $!)"
log "starting DHCP on $iface (timeout ${DHCP_TIMEOUT_SECS}s)"
start_dhcp "$iface" &
pid="$!"
pids="$pids $pid"
pid_ifaces="$pid_ifaces $pid:$iface"
started_ifaces="$started_ifaces $iface"
started_count=$((started_count + 1))
@@ -68,4 +99,15 @@ if [ "$started_count" -eq 0 ]; then
exit 0
fi
log "done (interfaces started: $started_count)"
success_count=0
for pid_iface in $pid_ifaces; do
    pid="${pid_iface%%:*}"
    iface="${pid_iface#*:}"
    if wait "$pid"; then
        success_count=$((success_count + 1))
    else
        log "DHCP did not complete successfully on $iface"
    fi
done
log "done (interfaces scanned: $started_count, leases acquired: $success_count)"
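The `pid:iface` bookkeeping lets a plain POSIX shell associate each background DHCP job with its interface without arrays. The same pack/unpack pattern in isolation (the pids below are fake, no processes are involved):

```shell
# Pack pid:iface pairs into one string, then split them back with
# parameter expansion — %%:* keeps the pid, #*: keeps the iface.
pid_ifaces=" 101:eth0 102:eth1"
summary=""
for pid_iface in $pid_ifaces; do
    pid="${pid_iface%%:*}"
    iface="${pid_iface#*:}"
    summary="${summary}${iface}=${pid} "
done
echo "$summary"
```

This only works because interface names cannot contain `:` or whitespace.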

View File

@@ -2,7 +2,7 @@
# bee-remount-medium — find and remount the live ISO medium to /run/live/medium
#
# Run this after reconnecting the ISO source disc (USB/CD) if the live medium
# was lost and /run/live/medium/live/filesystem.squashfs is missing.
# was lost and /run/live/medium/live/*.squashfs are missing.
#
# Usage: bee-remount-medium [--wait]
# --wait keep retrying every 5 seconds until the medium is found (useful
@@ -11,7 +11,7 @@
set -euo pipefail
MEDIUM_DIR="/run/live/medium"
SQUASHFS_REL="live/filesystem.squashfs"
SQUASHFS_GLOB="live/*.squashfs"
WAIT_MODE=0
for arg in "$@"; do
@@ -28,6 +28,10 @@ done
log() { echo "[$(date +%H:%M:%S)] $*"; }
die() { log "ERROR: $*" >&2; exit 1; }
if [ "$(id -u)" -ne 0 ]; then
die "bee-remount-medium must be run as root (use sudo or a root shell)"
fi
# Return all candidate block devices (optical + removable USB mass storage)
find_candidates() {
# CD/DVD drives
@@ -52,7 +56,7 @@ try_mount() {
local tmpdir
tmpdir=$(mktemp -d /tmp/bee-probe-XXXXXX)
if mount -o ro "$dev" "$tmpdir" 2>/dev/null; then
if [ -f "${tmpdir}/${SQUASHFS_REL}" ]; then
if find "${tmpdir}/live" -maxdepth 1 -type f -name '*.squashfs' 2>/dev/null | grep -q .; then
# Unmount probe mount and mount properly onto live path
umount "$tmpdir" 2>/dev/null || true
rmdir "$tmpdir" 2>/dev/null || true
@@ -78,8 +82,9 @@ attempt() {
for dev in $(find_candidates); do
log " Trying $dev ..."
if try_mount "$dev"; then
local sq="${MEDIUM_DIR}/${SQUASHFS_REL}"
log "SUCCESS: squashfs available at $sq ($(du -sh "$sq" | cut -f1))"
local count
count=$(find "${MEDIUM_DIR}/live" -maxdepth 1 -type f -name '*.squashfs' 2>/dev/null | wc -l | tr -d ' ')
log "SUCCESS: ${count} squashfs layer(s) available under ${MEDIUM_DIR}/live"
return 0
fi
done
@@ -96,5 +101,5 @@ if [ "$WAIT_MODE" = "1" ]; then
sleep 5
done
else
attempt || die "No ISO medium with ${SQUASHFS_REL} found. Reconnect the disc and re-run, or use --wait."
attempt || die "No ISO medium with ${SQUASHFS_GLOB} found. Reconnect the disc and re-run, or use --wait."
fi
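The probe's presence test changed from a single hardcoded filename to "any `*.squashfs` directly under `live/`". The check in miniature, against a throwaway directory (the layer filename is hypothetical):

```shell
# Any *.squashfs at depth 1 under live/ counts as a valid medium.
d=$(mktemp -d)
mkdir -p "$d/live"
touch "$d/live/10-base.squashfs"
if find "$d/live" -maxdepth 1 -type f -name '*.squashfs' 2>/dev/null | grep -q .; then
    found=yes
else
    found=no
fi
count=$(find "$d/live" -maxdepth 1 -type f -name '*.squashfs' | wc -l | tr -d ' ')
echo "found=$found count=$count"
rm -rf "$d"
```

`grep -q .` short-circuits on the first match, so the check stays cheap even on a slow optical medium.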

View File

@@ -8,11 +8,17 @@ EXPORT_DIR="/appdata/bee/export"
AUDIT_JSON="${EXPORT_DIR}/bee-audit.json"
RUNTIME_JSON="${EXPORT_DIR}/runtime-health.json"
LOCK_DIR="/run/bee-selfheal.lock"
EVENTS=0
log() {
echo "[${LOG_PREFIX}] $*"
}
log_event() {
EVENTS=$((EVENTS + 1))
log "$*"
}
have_nvidia_gpu() {
lspci -Dn 2>/dev/null | awk '$2 ~ /^03(00|02):$/ && $3 ~ /^10de:/ { found=1; exit } END { exit(found ? 0 : 1) }'
}
@@ -56,24 +62,22 @@ web_healthy() {
mkdir -p "${EXPORT_DIR}" /run
if ! mkdir "${LOCK_DIR}" 2>/dev/null; then
log "another self-heal run is already active"
log_event "another self-heal run is already active"
exit 0
fi
trap 'rmdir "${LOCK_DIR}" >/dev/null 2>&1 || true' EXIT
log "start"
if have_nvidia_gpu && [ ! -e /dev/nvidia0 ]; then
log "NVIDIA GPU detected but /dev/nvidia0 is missing"
log_event "NVIDIA GPU detected but /dev/nvidia0 is missing"
restart_service bee-nvidia.service || true
fi
runtime_state="$(artifact_state "${RUNTIME_JSON}")"
if [ "${runtime_state}" != "ready" ]; then
if [ "${runtime_state}" = "interrupted" ]; then
log "runtime-health.json.tmp exists — interrupted runtime-health write detected"
log_event "runtime-health.json.tmp exists — interrupted runtime-health write detected"
else
log "runtime-health.json missing or empty"
log_event "runtime-health.json missing or empty"
fi
restart_service bee-preflight.service || true
fi
@@ -81,19 +85,17 @@ fi
audit_state="$(artifact_state "${AUDIT_JSON}")"
if [ "${audit_state}" != "ready" ]; then
if [ "${audit_state}" = "interrupted" ]; then
log "bee-audit.json.tmp exists — interrupted audit write detected"
log_event "bee-audit.json.tmp exists — interrupted audit write detected"
else
log "bee-audit.json missing or empty"
log_event "bee-audit.json missing or empty"
fi
restart_service bee-audit.service || true
fi
if ! service_active bee-web.service; then
log "bee-web.service is not active"
log_event "bee-web.service is not active"
restart_service bee-web.service || true
elif ! web_healthy; then
log "bee-web health check failed"
log_event "bee-web health check failed"
restart_service bee-web.service || true
fi
log "done"
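The `mkdir`-based lock guarding self-heal runs deserves a note: `mkdir` is atomic, so it doubles as a race-free mutex in plain shell without needing `flock`. Demonstrated standalone with a temporary path:

```shell
# mkdir as an atomic lock: the second acquisition attempt fails cleanly.
LOCK="$(mktemp -u)"
if mkdir "$LOCK" 2>/dev/null; then first=acquired; else first=busy; fi
if mkdir "$LOCK" 2>/dev/null; then second=acquired; else second=busy; fi
rmdir "$LOCK"
echo "first=$first second=$second"
```

The `trap ... EXIT` in bee-selfheal plays the role of the `rmdir` here, releasing the lock even on error paths.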

View File

@@ -0,0 +1,132 @@
#!/bin/bash
# bee-wipe-disks — erase all physical disks (interactive, confirmation required)
#
# Triggered automatically when the kernel cmdline contains bee.wipe=all.
# Can also be run manually from a root shell.
#
# Wipe strategy:
# NVMe — nvme format (ATA-style secure erase, fast)
# Other — blkdiscard -f (TRIM/UNMAP, fast on SSDs)
# dd if=/dev/zero (fallback for HDDs, zeros first+last 32 MB)
set -euo pipefail
RED=$'\033[1;31m'
YEL=$'\033[1;33m'
GRN=$'\033[1;32m'
NC=$'\033[0m'
banner() {
    echo ""
    echo "${RED}╔══════════════════════════════════════════════════════════╗${NC}"
    echo "${RED}║        BEE DISK WIPE — ALL DATA WILL BE DESTROYED        ║${NC}"
    echo "${RED}╚══════════════════════════════════════════════════════════╝${NC}"
    echo ""
}
# ── find boot device to skip ──────────────────────────────────────────────────
live_dev() {
    local src
    src=$(findmnt -n -o SOURCE /run/live/medium 2>/dev/null || true)
    [ -z "$src" ] && return
    # Strip partition suffix: /dev/sdb1 → /dev/sdb, /dev/nvme0n1p1 → /dev/nvme0n1.
    # NVMe namespaces themselves end in a digit, so only strip a pN suffix there.
    case "$src" in
        /dev/nvme*) echo "$src" | sed 's/p[0-9]\+$//' ;;
        *)          echo "$src" | sed 's/[0-9]\+$//' ;;
    esac
}
# ── enumerate target disks ────────────────────────────────────────────────────
find_disks() {
    local boot_dev
    boot_dev=$(live_dev)
    lsblk -d -n -o NAME,TYPE,SIZE,MODEL | while read -r name type size model; do
        [ "$type" = "disk" ] || continue
        [ "$size" = "0B" ] && continue       # empty virtual media
        local dev="/dev/$name"
        [ "$dev" = "$boot_dev" ] && continue # skip boot device
        printf '%s\t%s\t%s\n' "$dev" "$size" "$model"
    done
}
# ── wipe one disk ─────────────────────────────────────────────────────────────
wipe_disk() {
    local dev="$1"
    echo ""
    echo "=== ${YEL}${dev}${NC} ==="
    if echo "$dev" | grep -q '^/dev/nvme'; then
        # NVMe format (ses=1 = user data erase)
        if nvme format --ses=1 "$dev" 2>&1; then
            echo "  ${GRN}nvme format OK${NC}"
            return
        fi
        echo "  nvme format failed, falling back to blkdiscard"
    fi
    if blkdiscard -f "$dev" 2>&1; then
        echo "  ${GRN}blkdiscard OK${NC}"
        return
    fi
    echo "  blkdiscard not supported — zeroing partition tables (HDD fallback)"
    local size_bytes
    # Fall back to 0 so a failed size query skips the backup-GPT pass
    # instead of aborting the whole script under set -e.
    size_bytes=$(blockdev --getsize64 "$dev" 2>/dev/null || echo 0)
    local mb32=$(( 32 * 1024 * 1024 ))
    # Zero first 32 MB (MBR, GPT, filesystem superblocks)
    dd if=/dev/zero of="$dev" bs=4M count=8 conv=fsync status=progress 2>&1 || true
    # Zero last 32 MB (backup GPT)
    if [ "$size_bytes" -gt $(( mb32 * 2 )) ]; then
        local skip=$(( (size_bytes - mb32) / (4 * 1024 * 1024) ))
        dd if=/dev/zero of="$dev" bs=4M count=8 seek="$skip" conv=fsync status=progress 2>&1 || true
    fi
    echo "  ${GRN}done (partition tables zeroed)${NC}"
}
# ── main ──────────────────────────────────────────────────────────────────────
banner
mapfile -t DISKS < <(find_disks | awk '{print $1}')
if [ ${#DISKS[@]} -eq 0 ]; then
    echo "No physical disks found (boot device excluded)."
    echo "Nothing to wipe."
    exit 0
fi
echo "Disks to be ${RED}COMPLETELY ERASED${NC}:"
echo ""
find_disks | while IFS=$'\t' read -r dev size model; do
    printf "  ${YEL}%-16s${NC} %8s %s\n" "$dev" "$size" "$model"
done
echo ""
echo "${RED}WARNING: This is IRREVERSIBLE. All data on the listed disks will be lost.${NC}"
echo ""
printf "Type YES to confirm wipe, anything else to abort: "
read -r CONFIRM
if [ "$CONFIRM" != "YES" ]; then
    echo ""
    echo "Aborted — no disks were touched."
    exit 0
fi
echo ""
echo "Starting wipe..."
for dev in "${DISKS[@]}"; do
    wipe_disk "$dev"
done
echo ""
echo "${GRN}=== All disks wiped. ===${NC}"
echo ""
printf "Reboot now to return to the boot menu? [Y/n] "
read -r REBOOT
case "${REBOOT:-Y}" in
    [Nn]*) echo "You can reboot manually when ready." ;;
    *) echo "Rebooting..."; sleep 2; reboot ;;
esac
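The HDD-fallback arithmetic above is worth checking once by hand: the final `dd` must land exactly on the last 32 MiB (where the backup GPT lives), so the `seek` offset is computed in 4 MiB blocks. A sketch with a pretend 500 MiB disk (no real device involved):

```shell
# seek offset in 4 MiB blocks so that 8 blocks cover the last 32 MiB.
size_bytes=$(( 500 * 1024 * 1024 ))
mb32=$(( 32 * 1024 * 1024 ))
skip=$(( (size_bytes - mb32) / (4 * 1024 * 1024) ))
echo "seek=$skip"
echo "last pass starts at byte $(( skip * 4 * 1024 * 1024 )) of $size_bytes"
```

Integer division rounds the start down, so for disks whose size is not a multiple of 4 MiB the zeroed window only grows, never shrinks.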

scripts/build.sh Executable file
View File

@@ -0,0 +1,125 @@
#!/bin/sh
# build.sh -- single entry point for ISO builds.
#
# Local build (default):
# sh scripts/build.sh
# sh scripts/build.sh --variant nvidia
# sh scripts/build.sh --clean-build
#
# Remote build (set BUILDER_HOST + BUILDER_USER in .env):
# sh scripts/build.sh
# sh scripts/build.sh --authorized-keys ~/.ssh/authorized_keys
#
# All flags are forwarded to build-in-container.sh.
set -e
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
ENV_FILE="${REPO_ROOT}/.env"
if [ -f "$ENV_FILE" ]; then
    # shellcheck disable=SC1090
    . "$ENV_FILE"
fi
BUILDER_HOST="${BUILDER_HOST:-}"
BUILDER_USER="${BUILDER_USER:-}"
# Cache lives inside the repo under dist/ (gitignored).
CACHE_DIR="${REPO_ROOT}/dist/cache"
# Forward all arguments as-is to the underlying build script.
EXTRA_ARGS="$*"
# ── Remote build ────────────────────────────────────────────────────────────
if [ -n "$BUILDER_HOST" ]; then
    if [ -z "$BUILDER_USER" ]; then
        echo "ERROR: BUILDER_USER not set. Set it in .env."
        exit 1
    fi
    echo "=== bee builder (remote: ${BUILDER_USER}@${BUILDER_HOST}) ==="
    echo ""
    cd "${REPO_ROOT}"
    git fetch --quiet origin main
    LOCAL=$(git rev-parse HEAD)
    REMOTE=$(git rev-parse origin/main)
    if [ "$LOCAL" != "$REMOTE" ]; then
        echo "ERROR: local repo is not in sync with remote."
        echo "  local:  $LOCAL"
        echo "  remote: $REMOTE"
        echo ""
        echo "Push or pull before building:"
        echo "  git push  -- if you have unpushed commits"
        echo "  git pull  -- if remote is ahead"
        exit 1
    fi
    echo "repo: in sync with remote ($LOCAL)"
    echo ""
    ssh -o StrictHostKeyChecking=no "${BUILDER_USER}@${BUILDER_HOST}" /bin/sh <<ENDSSH
set -e
REPO="/home/${BUILDER_USER}/bee"
LOG=/tmp/bee-build.log
if [ ! -d "\$REPO/.git" ]; then
    echo "--- cloning bee repo ---"
    git clone https://git.mchus.pro/reanimator/bee.git "\$REPO"
fi
cd "\$REPO"
echo "--- pulling latest ---"
sudo git checkout -- .
git pull --ff-only
chmod +x iso/overlay/usr/local/bin/* 2>/dev/null || true
screen -S bee-build -X quit 2>/dev/null || true
echo "--- starting build in screen session (survives SSH disconnect) ---"
echo "--- log: \$LOG ---"
screen -dmS bee-build sh -c "sh iso/builder/build-in-container.sh --cache-dir \$REPO/dist/cache ${EXTRA_ARGS} > \$LOG 2>&1; echo \$? > /tmp/bee-build-exit"
echo "--- streaming build log (Ctrl+C safe -- build continues on VM) ---"
tail -n +1 -f "\$LOG" 2>/dev/null &
TAIL_PID=\$!
while screen -list 2>/dev/null | grep -q bee-build; do
    sleep 2
done
sleep 1
kill \$TAIL_PID 2>/dev/null || true
tail -n 20 "\$LOG" 2>/dev/null || true
EXIT_CODE=\$(cat /tmp/bee-build-exit 2>/dev/null || echo 1)
exit \$EXIT_CODE
ENDSSH
    echo ""
    echo "=== downloading ISO ==="
    LOCAL_ISO_DIR="${REPO_ROOT}/dist/release"
    mkdir -p "${LOCAL_ISO_DIR}"
    if command -v rsync >/dev/null 2>&1 && ssh -o StrictHostKeyChecking=no "${BUILDER_USER}@${BUILDER_HOST}" command -v rsync >/dev/null 2>&1; then
        rsync -az --progress \
            -e "ssh -o StrictHostKeyChecking=no" \
            "${BUILDER_USER}@${BUILDER_HOST}:/home/${BUILDER_USER}/bee/dist/*.iso" \
            "${LOCAL_ISO_DIR}/"
    else
        scp -o StrictHostKeyChecking=no \
            "${BUILDER_USER}@${BUILDER_HOST}:/home/${BUILDER_USER}/bee/dist/*.iso" \
            "${LOCAL_ISO_DIR}/"
    fi
    echo ""
    echo "=== build complete ==="
    echo "ISO saved to: ${LOCAL_ISO_DIR}/"
    ls -lh "${LOCAL_ISO_DIR}/"*.iso 2>/dev/null || true
    exit 0
fi
# ── Local build ─────────────────────────────────────────────────────────────
echo "=== bee builder (local) ==="
echo "cache: ${CACHE_DIR}"
echo ""
# shellcheck disable=SC2086
exec sh "${REPO_ROOT}/iso/builder/build-in-container.sh" --cache-dir "${CACHE_DIR}" $EXTRA_ARGS
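The mode switch is driven entirely by whether `.env` defines `BUILDER_HOST`. The sourcing-and-branch pattern in isolation, using a throwaway env file with placeholder values (not real hosts):

```shell
# Source a .env-style file, then branch on BUILDER_HOST like build.sh does.
ENV_FILE=$(mktemp)
printf 'BUILDER_HOST=203.0.113.7\nBUILDER_USER=builder\n' > "$ENV_FILE"
. "$ENV_FILE"
if [ -n "${BUILDER_HOST:-}" ]; then
    target="${BUILDER_USER}@${BUILDER_HOST}"
else
    target="local"
fi
echo "build target: $target"
rm -f "$ENV_FILE"
```

Because `.env` is sourced as shell, it must contain plain `KEY=value` assignments; anything else executes in the build script's context.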