Compare commits

...

23 Commits

Author SHA1 Message Date
Mikhail Chusavitin
6082c7953e Add console tools and bee menu startup 2026-03-14 08:36:38 +03:00
Mikhail Chusavitin
f37ef0d844 Run live TUI as root via sudo 2026-03-14 08:34:23 +03:00
Mikhail Chusavitin
e32fa6e477 Use live-config autologin for bee user 2026-03-14 08:33:36 +03:00
Mikhail Chusavitin
20118bb400 Fix tty1 autologin override order 2026-03-14 08:17:23 +03:00
Mikhail Chusavitin
55d6876297 Avoid tty1 black screen on live boot 2026-03-14 08:14:49 +03:00
Mikhail Chusavitin
e8e176ab7f Add zstd to live image packages 2026-03-14 08:04:18 +03:00
Mikhail Chusavitin
caeafa836b Improve VM boot diagnostics and guest support 2026-03-14 07:51:16 +03:00
Mikhail Chusavitin
e8a52562e7 Persist builder caches outside container 2026-03-14 07:40:32 +03:00
Mikhail Chusavitin
6aca1682b9 Refactor bee CLI and LiveCD integration 2026-03-13 16:52:16 +03:00
Mikhail Chusavitin
b7c888edb1 fix: getty autologin root, inject GSP firmware for H100, bump 0.1.1 2026-03-08 22:12:02 +03:00
Mikhail Chusavitin
17d5d74a8d fix: nomodeset + remove splash (framebuffer hangs on headless H100 server) 2026-03-08 21:39:31 +03:00
Mikhail Chusavitin
d487e539bb fix: use sudo git checkout to reset root-owned build artifacts 2026-03-08 20:54:15 +03:00
Mikhail Chusavitin
441ab3adbd fix: blacklist nouveau driver (hangs on H100 unknown chipset) 2026-03-08 20:51:49 +03:00
Mikhail Chusavitin
c91c8d8cf9 feat: bee-themed grub splash (amber/black honeycomb) with progress bar 2026-03-08 20:44:19 +03:00
Mikhail Chusavitin
83e1910281 feat: custom grub bootloader - bee branding, 5s auto-boot, no splash 2026-03-08 20:35:23 +03:00
Mikhail Chusavitin
2252c5af56 fix: use isc-dhcp-client for dhclient, remove standalone lsblk (in util-linux) 2026-03-08 19:43:59 +03:00
Mikhail Chusavitin
7a4d75c143 fix: remove unsupported --hostname/--username from lb config
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 19:28:01 +03:00
Mikhail Chusavitin
7c62d100d4 fix: use SYSSRC=common SYSOUT=amd64 for NVIDIA build on Debian split headers
Debian 12 splits kernel headers into two packages:
  linux-headers-<kver>        (arch-specific: generated/, config/)
  linux-headers-<kver>-common (source headers: linux/, asm-generic/, etc.)

NVIDIA conftest.sh builds include paths as HEADERS=$SOURCES/include.
When SYSSRC=amd64, HEADERS=amd64/include/ which is nearly empty —
conftest can't compile any kernel header tests, all compile-tests fail
silently, and NVIDIA assumes all kernel APIs are present. This causes
link errors for APIs added in kernel 6.3+ (vm_flags_set, vm_flags_clear)
and removed APIs (phys_to_dma, dma_is_direct, get_dma_ops).

Fix: pass SYSSRC=common (real headers) and SYSOUT=amd64 (generated headers).
NVIDIA Makefile maps SYSSRC→NV_KERNEL_SOURCES, SYSOUT→NV_KERNEL_OUTPUT,
and runs 'make -C common KBUILD_OUTPUT=amd64'. Conftest then correctly
detects which APIs are present in kernel 6.1 and uses proper wrappers.

Tested: 5 .ko files built successfully on Debian 12 kernel 6.1.0-43-amd64.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 19:23:47 +03:00
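The corrected invocation this message describes can be sketched as a standalone command sequence (the header paths follow the Debian package names quoted above; the surrounding script and exact flags are assumptions):

```sh
# Build NVIDIA kernel modules against Debian's split headers.
# SYSSRC → NV_KERNEL_SOURCES: the -common package (real source headers)
# SYSOUT → NV_KERNEL_OUTPUT:  the arch package (generated/, config/)
KVER=6.1.0-43
cd NVIDIA-Linux-x86_64-*/kernel || exit 1
make -j"$(nproc)" modules \
    SYSSRC="/usr/src/linux-headers-${KVER}-common" \
    SYSOUT="/usr/src/linux-headers-${KVER}-amd64"
```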
Mikhail Chusavitin
c843ff95a2 fix: add -Wno-error to CFLAGS_MODULE for NVIDIA kernel 6.1 compat
get_dma_ops() return type changed in kernel 6.1 — GCC treats int-conversion
warning as error. Suppress with -Wno-error to allow build to complete.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 18:55:25 +03:00
Mikhail Chusavitin
0057686769 fix: pass GCC include dir to NVIDIA make to resolve stdarg.h not found
Debian kernel build uses -nostdinc which strips GCC's own includes.
NVIDIA's nv_stdarg.h needs <stdarg.h> from GCC.
Pass -I$(gcc --print-file-name=include) via CFLAGS_MODULE.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 18:53:37 +03:00
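The include-path trick can be checked in isolation — `gcc --print-file-name=include` prints the compiler's private header directory (the `make` line in the comment is a sketch of how the commit wires it in, not the script verbatim):

```sh
# stdarg.h lives in GCC's own include dir, which -nostdinc strips.
GCC_INC="$(gcc --print-file-name=include 2>/dev/null || true)"
echo "GCC private headers: ${GCC_INC:-not found}"
# Sketch: passed to the NVIDIA module build roughly as
#   make modules CFLAGS_MODULE="-I${GCC_INC}"
```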
Mikhail Chusavitin
68b5e02a74 fix: run-builder.sh uses BUILDER_USER from .env, not hardcoded
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 18:48:33 +03:00
Mikhail Chusavitin
fa553c3f20 fix: update DEBIAN_KERNEL_ABI to 6.1.0-43 (actual kernel on build host)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 18:35:44 +03:00
Mikhail Chusavitin
345a93512a migrate ISO build from Alpine to Debian 12 (Bookworm)
Replace the entire live CD build pipeline:
- Alpine SDK + mkimage + genapkovl → Debian live-build (lb config/build)
- OpenRC init scripts → systemd service units
- dropbear → openssh-server (native to Debian live)
- udhcpc → dhclient for DHCP
- apk → apt-get in setup-builder.sh and build-nvidia-module.sh
- Add auto/config (lb config options) and auto/build wrapper
- Add config/package-lists/bee.list.chroot replacing Alpine apks
- Add config/hooks/normal/9000-bee-setup.hook.chroot to enable services
- Add bee-nvidia-load and bee-sshsetup helper scripts
- Keep NVIDIA pre-compile pipeline (Option B): compile on builder VM against
  pinned Debian kernel headers (DEBIAN_KERNEL_ABI), inject .ko into includes.chroot
- Fixes: native glibc (no gcompat shims), proper udev, writable /lib/modules,
  no Alpine modloop read-only constraint, no stale apk cache issues

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 18:01:38 +03:00
77 changed files with 3747 additions and 1787 deletions


@@ -1 +1,2 @@
BUILDER_HOST=
BUILDER_USER=

PLAN.md

@@ -25,31 +25,29 @@ Fills the gaps where logpile/Redfish is blind: NVMe, DIMM serials, GPU serials,
- 1.8b Component wear / age telemetry — **DONE** (storage + NVMe + NVIDIA + NIC SFP/DOM + NIC packet stats)
- 1.9 Mellanox/NVIDIA NIC enrichment — **DONE** (mstflint + ethtool firmware fallback)
- 1.10 RAID controller enrichment — **DONE (initial multi-tool support)** (storcli + sas2/3ircu + arcconf + ssacli + VROC/mdstat)
- 1.11 Output and USB write — **DONE** (usb + /tmp fallback)
- 1.11 Output and export workflow — **DONE** (explicit file output + manual removable export via TUI)
- 1.12 Integration test (local) — **DONE** (`scripts/test-local.sh`)
### Phase 2 — Alpine LiveCD
### Phase 2 — Debian Live ISO
- Debug ISO track is active (builder + overlay-debug + OpenRC services + TUI workflow).
- Production ISO track — **IN PROGRESS**.
- 2.3 Alpine mkimage profile — **DONE (production profile scaffold)**
- 2.4 Network bring-up on boot — **DONE**
- 2.5 OpenRC boot service (bee-audit) — **DONE** (with explicit bee-nvidia ordering)
- 2.6 Vendor utilities in overlay — **DONE (fetch script + iso/vendor scaffold)**
- 2.7 Auto-update wiring (USB first, network second) — **PARTIAL** (shell flow done; strict Ed25519 verification intentionally deferred to final stage)
- 2.8 Release workflow — **PARTIAL** (production build now injects audit binary, NVIDIA modules/tools, vendor tools, and build metadata)
- Current implementation uses Debian 12 `live-build`, `systemd`, and OpenSSH.
- Network bring-up on boot — **DONE**
- Boot services (`bee-network`, `bee-nvidia`, `bee-audit`, `bee-sshsetup`) — **DONE**
- Vendor utilities in overlay — **DONE**
- Build metadata + staged overlay injection — **DONE**
- Auto-update flow remains deferred; current focus is deterministic offline audit ISO behavior.
---
## Phase 1 — Go Audit Binary
Self-contained static binary. Runs on any Linux (including Alpine LiveCD).
Self-contained static binary. Runs on any Linux (including the Debian live ISO).
Calls system utilities, parses their output, produces `HardwareIngestRequest` JSON.
### 1.1 — Project scaffold
- `audit/go.mod` — module `bee/audit`
- `audit/cmd/audit/main.go` — CLI entry point: flags, orchestration, JSON output
- `audit/cmd/bee/main.go` — main CLI entry point: subcommands, runtime selection, JSON output
- `audit/internal/schema/` — copy of `HardwareIngestRequest` types from core (no import dependency)
- `audit/internal/collector/` — empty package stubs for all collectors
- `const Version = "1.0"` in main
@@ -237,305 +235,137 @@ No hardcoded vendor names in detection logic — pure PCI vendor_id map.
Tests: table tests with storcli/sas2ircu text fixtures
### 1.11 — Output and USB write
### 1.11 — Output and export workflow
`--output stdout` (default): pretty-printed JSON to stdout
`--output file:<path>`: write JSON to explicit path
`--output usb`: auto-detect first removable block device, mount it, write `audit-<board_serial>-<YYYYMMDD-HHMMSS>.json`
USB detection: scan `/sys/block/*/removable`, pick first `1`, mount to `/tmp/bee-usb`
Live ISO default service output: `/var/log/bee-audit.json`
QR summary to stdout (always): board serial + model + component counts — fits in one QR code
Uses `qrencode` if present, else skips silently
Removable-media export is manual via `bee tui` (or the LiveCD wrapper `bee-tui`):
- operator chooses a removable filesystem explicitly
- TUI mounts it if needed
- TUI asks for confirmation before copying the JSON
- TUI unmounts temporary mountpoints after export
No auto-write to arbitrary removable media is allowed.
### 1.12 — Integration test (local)
`scripts/test-local.sh` — runs audit binary on developer machine (Linux), captures JSON,
`scripts/test-local.sh` — runs `bee audit` on developer machine (Linux), captures JSON,
validates required fields are present (board.serial_number non-empty, cpus non-empty, etc.)
Not a unit test — requires real hardware access. Documents how to run for verification.
---
## Phase 2 — Alpine LiveCD
## Phase 2 — Debian Live ISO
ISO image bootable via BMC virtual media. Runs audit binary automatically on boot.
ISO image bootable via BMC virtual media or USB. Runs boot services automatically and writes the audit result to `/var/log/bee-audit.json`.
### 2.1 — Builder environment
`iso/builder/Dockerfile` — Alpine 3.21 build environment with:
- `alpine-sdk`, `abuild`, `squashfs-tools`, `xorriso`
- Go toolchain (for binary compilation inside builder)
- NVIDIA driver `.run` pre-fetched during image build
`iso/builder/setup-builder.sh` prepares a Debian 12 host/VM with:
- `live-build`, `debootstrap`, bootloader tooling, kernel headers
- Go toolchain
- everything needed to compile the `bee` binary and NVIDIA modules
`iso/builder/build.sh` — orchestrates full ISO build:
1. Compile Go binary (static, `CGO_ENABLED=0`)
2. Compile NVIDIA kernel module against Alpine 3.21 LTS kernel headers
3. Run `mkimage.sh` with bee profile
4. Output: `dist/bee-<version>.iso`
`iso/builder/build-in-container.sh` offers the same builder stack in a Debian 12 container image.
The container run is privileged because `live-build` needs mount/chroot/loop capabilities.
`iso/builder/build.sh` orchestrates the full ISO build:
1. compile the Go `bee` binary
2. create a staged overlay under `dist/overlay-stage`
3. inject SSH auth, vendor tools, NVIDIA artifacts, and build metadata into the staged overlay
4. create a disposable `live-build` workdir under `dist/live-build-work`
5. sync the staged overlay into `config/includes.chroot/`
6. run `lb config && lb build`
7. copy the final ISO into `dist/`
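Steps 4–7 above can be condensed into a sketch (rsync flags and the artifact name are assumptions; live-build's default output is `live-image-amd64.hybrid.iso`):

```sh
cd dist/live-build-work || exit 1
rsync -a ../overlay-stage/ config/includes.chroot/   # step 5
lb config                                            # reads auto/config
sudo lb build
cp live-image-amd64.hybrid.iso ../                   # step 7
```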
### 2.2 — NVIDIA driver build
Alpine 3.21, LTS kernel 6.6 — fixed versions in builder.
`iso/builder/build-nvidia-module.sh`:
- downloads the pinned NVIDIA `.run` installer
- verifies SHA256
- builds kernel modules against the pinned Debian kernel ABI
- caches modules, userspace tools, and libs in `dist/nvidia-<version>-<kver>/`
`iso/builder/build-nvidia.sh`:
- Download `NVIDIA-Linux-x86_64-<ver>.run` (version pinned in `iso/builder/VERSIONS`)
- Extract kernel module sources
- Compile against `linux-lts-dev` headers
- Strip and package as `nvidia-<ver>-k6.6.ko.tar.gz` for inclusion in overlay
`iso/overlay/usr/local/bin/bee-nvidia-load`:
- loads `nvidia`, `nvidia-modeset`, `nvidia-uvm` via `insmod`
- creates `/dev/nvidia*` nodes if the driver registered successfully
- logs failures but does not block the rest of boot
`iso/overlay/usr/local/bin/load-nvidia.sh`:
- `insmod` sequence: nvidia.ko → nvidia-modeset.ko → nvidia-uvm.ko
- Verify: `nvidia-smi -L` → log result
- On failure: log warning, continue (audit runs without GPU enrichment)
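A hedged sketch of that load sequence (the module staging path and logging details are assumptions, not the shipped script):

```sh
MODDIR=/usr/local/lib/bee/nvidia    # assumed staging path
for m in nvidia nvidia-modeset nvidia-uvm; do
    insmod "$MODDIR/$m.ko" || logger -t bee-nvidia "insmod $m failed"
done
# Verify, but never block the rest of boot on a GPU failure.
nvidia-smi -L 2>&1 | logger -t bee-nvidia || true
```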
### 2.3 — ISO assembly and overlay policy
### 2.3 — Alpine mkimage profile
`iso/overlay/` is source-only input for the build.
`iso/builder/mkimg.bee.sh` — Alpine mkimage profile:
- Base: `alpine-base`
- Kernel: `linux-lts`
- Packages: `dmidecode smartmontools nvme-cli pciutils ipmitool util-linux e2fsprogs qrencode`
- Overlay: `iso/overlay/` included as apkovl
Build-time files are injected into the staged overlay only:
- `bee`
- `bee-smoketest`
- `authorized_keys`
- password-fallback marker
- `/etc/bee-release`
- vendor tools from `iso/vendor/`
### 2.4 — Network bring-up on boot
The source tree must stay clean after a build.
`iso/overlay/usr/local/bin/bee-network.sh`:
- Enumerate all network interfaces: `ip link show` → filter out loopback and virtual (docker/bridge)
- For each physical interface: `ip link set <iface> up` + `udhcpc -i <iface> -t 5 -T 3 -n`
- Log each interface result (got IP / timeout / no carrier)
- Continue regardless — network is best-effort for auto-update
### 2.4 — Boot services
`iso/overlay/etc/init.d/bee-network`:
- runlevel: default, before: bee-update
- Calls bee-network.sh
- Does not block boot if DHCP fails on all interfaces
`systemd` service order:
- `bee-sshsetup.service` → configures SSH auth before `ssh.service`
- `bee-network.service` → starts best-effort DHCP on all physical interfaces
- `bee-nvidia.service` → loads NVIDIA modules if present
- `bee-audit.service` → runs audit and logs failures without turning partial collector bugs into a boot blocker
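The ordering above could be written in unit form roughly as follows (only the After=/Wants= ordering comes from this plan; everything else in the unit is an assumption):

```
# bee-audit.service — ordering sketch
[Unit]
Description=bee hardware audit
After=bee-network.service bee-nvidia.service
Wants=bee-network.service bee-nvidia.service

[Service]
Type=oneshot
# Collector failures are logged inside the binary; the unit itself exits 0.
ExecStart=/usr/local/bin/bee audit --runtime livecd --output file:/var/log/bee-audit.json

[Install]
WantedBy=multi-user.target
```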
### 2.5 — OpenRC boot service (bee-audit)
### 2.4b — Runtime split
`iso/overlay/etc/init.d/bee-audit`:
- runlevel: default, after: bee-update
- start(): load-nvidia.sh → /usr/local/bin/audit --output usb
- on completion: print QR summary to /dev/tty1 (always, even if USB write failed)
- log everything to /var/log/bee-audit.log
- exits 0 regardless of partial failures — unattended, no prompts, no waits
Target split:
- main Go application works on a normal Linux host and on the live ISO
- live-ISO specifics stay in integration glue under `iso/`
- the live ISO passes `--runtime livecd` to the Go binary
- local runs default to `--runtime auto`, which resolves to `local` unless a live marker is detected
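A minimal sketch of that resolution rule (the function name, marker path, and injectable marker check are illustrative — the real implementation lives in `audit/internal/runtimeenv/`):

```go
package main

import (
    "fmt"
    "os"
)

// detectMode resolves the --runtime flag: explicit modes pass through,
// "auto" becomes "livecd" only when a live marker is present, else "local".
// The marker check is injected so the rule is testable without a live ISO.
func detectMode(flag string, markerExists func(path string) bool) (string, error) {
    switch flag {
    case "local", "livecd":
        return flag, nil
    case "auto":
        // /run/live/medium is a hypothetical marker path for illustration.
        if markerExists("/run/live/medium") {
            return "livecd", nil
        }
        return "local", nil
    default:
        return "", fmt.Errorf("unknown runtime %q", flag)
    }
}

func main() {
    mode, _ := detectMode("auto", func(p string) bool {
        _, err := os.Stat(p)
        return err == nil
    })
    fmt.Println(mode)
}
```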
Unattended invariants:
- No TTY prompts ever. All decisions are automatic.
- Missing USB: output goes to /tmp/bee-audit-<serial>-<date>.json, QR shown on screen.
- Missing NVIDIA driver: GPU records have status UNKNOWN, audit continues.
- Missing ipmitool/storcli/any tool: that collector is skipped, rest continue.
- Timeout on any external command: 30s hard limit via `timeout` wrapper, then skip.
- Boot never hangs waiting for user input.
Planned code shape:
- `audit/cmd/bee/` — main CLI entrypoint
- `audit/internal/runtimeenv/` — runtime detection and mode selection
- future `audit/internal/tui/` — host/live shared TUI logic
- `iso/overlay/` — boot-time livecd integration only
`iso/overlay/etc/runlevels/default/bee-audit` symlink
### 2.5 — Operator workflows
### 2.6 — Vendor utilities in overlay
- Automatic boot audit writes JSON to `/var/log/bee-audit.json`
- `bee tui` can rerun the audit manually
- `bee tui` can export the latest audit JSON to removable media
- removable export requires explicit target selection, mount, confirmation, copy, and cleanup
`iso/overlay/usr/local/bin/` includes pre-fetched proprietary tools:
- `storcli64` (Broadcom)
- `sas2ircu`, `sas3ircu` (Broadcom/LSI)
- `mstflint` (NVIDIA Networking / Mellanox)
### 2.6 — Vendor utilities and optional assets
`scripts/fetch-vendor.sh` — downloads and places these before ISO build.
Checksums verified. Tools not committed to git — fetched at build time.
Optional binaries live in `iso/vendor/` and are included when present:
- `storcli64`
- `sas2ircu`, `sas3ircu`
- `mstflint`
`iso/vendor/.gitkeep` — placeholder, directory gitignored except .gitkeep
Missing optional tools do not fail the build or boot.
### 2.7 — Auto-update of audit binary (USB + network)
### 2.7 — Release workflow
Two update paths, tried in order on every boot:
`iso/builder/VERSIONS` pins the current release inputs:
- audit version
- Debian version / kernel ABI
- Go version
- NVIDIA driver version
**Path A — USB (no network required, higher priority):**
`bee-update.sh` scans mounted removable media for an update package before checking network.
Looks for: `<usb>/bee-update/bee-audit-linux-amd64` + `<usb>/bee-update/bee-audit-linux-amd64.sha256`
Steps:
1. Find USB mount point (same detection as audit output: `/sys/block/*/removable`)
2. Check for `bee-update/bee-audit-linux-amd64` on the USB root
3. Read version from `bee-update/VERSION` file (plain text, e.g. `1.3`)
4. Compare with running binary version (`/usr/local/bin/audit --version`)
5. If USB version > running: verify SHA256 checksum, replace binary, log update
6. Re-run audit if updated
**Authenticity verification — Ed25519 multi-key trust (stdlib only, no external tools):**
Problem: SHA256 alone does not prevent a crafted attack — an attacker places their binary
and a matching SHA256 next to it. The LiveCD would accept it.
Solution: Ed25519 asymmetric signatures via Go stdlib `crypto/ed25519`.
Multiple developer public keys are supported. A binary update is accepted if its signature
verifies against ANY of the embedded trusted public keys.
This mirrors the SSH authorized_keys model: add a developer → add their public key.
Remove a developer → rebuild without their key.
**Key management — centralized across all projects:**
Public keys live in a dedicated repo at git.mchus.pro/mchus/keys (or similar):
```
keys/
  developers/
    mchusavitin.pub   ← Ed25519 public key, base64, one line
    developer2.pub
  README.md           ← how to generate a key pair
```
Public keys are safe to commit — they are not secret.
Private keys stay on each developer's machine, never committed anywhere.
Key generation (one-time per developer, run locally):
```sh
# scripts/keygen.sh — also lives in the keys repo
openssl genpkey -algorithm ed25519 -out ~/.bee-release.key
openssl pkey -in ~/.bee-release.key -pubout -outform DER \
  | tail -c 32 | base64 > mchusavitin.pub
```
**Embedding public keys at release time (not compile time):**
Public keys are injected via `-ldflags` at build time from the keys repo.
The binary does not hardcode keys — they are provided by the release script.
```go
// audit/internal/updater/trust.go
// trustedKeysRaw is injected at build time via -ldflags
// format: base64(key1):base64(key2):...
var trustedKeysRaw string

func trustedKeys() ([]ed25519.PublicKey, error) {
    if trustedKeysRaw == "" {
        return nil, fmt.Errorf("binary built without trusted keys — updates disabled")
    }
    var keys []ed25519.PublicKey
    for _, enc := range strings.Split(trustedKeysRaw, ":") {
        b, err := base64.StdEncoding.DecodeString(strings.TrimSpace(enc))
        if err != nil {
            return nil, fmt.Errorf("invalid trusted key: %w", err)
        }
        if len(b) != ed25519.PublicKeySize {
            return nil, fmt.Errorf("invalid trusted key: got %d bytes, want %d", len(b), ed25519.PublicKeySize)
        }
        keys = append(keys, ed25519.PublicKey(b))
    }
    return keys, nil
}

func verifySignature(binaryPath, sigPath string) error {
    keys, err := trustedKeys()
    if err != nil {
        return err
    }
    data, err := os.ReadFile(binaryPath)
    if err != nil {
        return err
    }
    sig, err := os.ReadFile(sigPath) // 64 bytes raw Ed25519 signature
    if err != nil {
        return err
    }
    for _, key := range keys {
        if ed25519.Verify(key, data, sig) {
            return nil // any trusted key accepts → pass
        }
    }
    return fmt.Errorf("signature verification failed: no trusted key matched")
}
```
Release build injects keys:
```sh
# scripts/build-release.sh
KEYS=$(paste -sd: keys/developers/*.pub)
go build -ldflags "-X bee/audit/internal/updater/trust.trustedKeysRaw=${KEYS}" \
  -o dist/bee-audit-linux-amd64 ./cmd/audit
```
Signing (release engineer signs with their private key):
```sh
# scripts/sign-release.sh <binary>
openssl pkeyutl -sign -inkey ~/.bee-release.key \
  -rawin -in "$1" -out "$1.sig"
```
Binary built without `-ldflags` injection (e.g. local dev build) has `trustedKeysRaw=""`
→ updates are disabled, logged as INFO, audit continues normally.
Update rejected silently (logged as WARNING, audit continues with current binary) if:
- `.sig` file missing
- Signature does not match any trusted key
- `trustedKeysRaw` empty (dev build)
Update package layout on USB:
```
/bee-update/
  bee-audit-linux-amd64        ← new binary (also signed with embedded keys)
  bee-audit-linux-amd64.sig    ← Ed25519 signature (64 bytes raw)
  VERSION                      ← plain version string e.g. "1.3"
```
Admin workflow: download `bee-audit-linux-amd64` + `bee-audit-linux-amd64.sig` from Gitea
release assets, place in `bee-update/` on USB.
**Path B — Network (requires DHCP on at least one interface):**
1. Check network: ping git.mchus.pro -c 1 -W 3 || skip
2. Fetch: `GET https://git.mchus.pro/api/v1/repos/<org>/bee/releases/latest`
3. Parse tag_name, asset URLs for `bee-audit-linux-amd64` + `bee-audit-linux-amd64.sig`
4. Compare tag with running version
5. If newer: download both files to /tmp, verify Ed25519 signature against all trusted keys
6. Replace binary on pass, log and skip on fail
7. Re-run audit if updated
**Ordering:** USB update checked first, network checked second.
If USB update applied and verified, network check is skipped.
`iso/overlay/etc/init.d/bee-update`:
- runlevel: default
- after: bee-network (network path needs interfaces up)
- before: bee-audit (audit runs with latest binary)
- Calls bee-update.sh
Triggered after bee-audit completes, only if network is available.
`iso/overlay/usr/local/bin/bee-update.sh`:
```
1. Check network: ping git.mchus.pro -c 1 -W 3 || exit 0
2. Fetch latest release metadata:
GET https://git.mchus.pro/api/v1/repos/<org>/bee/releases/latest
3. Parse: extract tag_name, asset URL for bee-audit-linux-amd64
4. Compare tag_name with /usr/local/bin/audit --version output
5. If newer: download to /tmp/bee-audit-new, verify SHA256 checksum from release assets
6. Replace /usr/local/bin/audit (tmpfs — survives until reboot)
7. Log: updated from vX.Y to vX.Z
8. Re-run audit if update happened: /usr/local/bin/audit --output usb
```
`iso/overlay/etc/init.d/bee-update`:
- runlevel: default
- after: bee-audit, network
- Calls bee-update.sh
Release naming convention: binary asset named `bee-audit-linux-amd64` per release tag.
### 2.8 — Release workflow
`iso/builder/VERSIONS` — pinned versions:
```
AUDIT_VERSION=1.0
ALPINE_VERSION=3.21
KERNEL_VERSION=6.12
NVIDIA_DRIVER_VERSION=590.48.01
```
LiveCD release = full ISO rebuild. Binary-only patch = new Gitea release with binary asset.
On boot with network: ISO auto-patches its binary without full rebuild.
ISO version embedded in `/etc/bee-release`:
```
BEE_ISO_VERSION=1.0
BEE_AUDIT_VERSION=1.0
BUILD_DATE=2026-03-05
```
Current release model:
- shipping a new ISO means a full rebuild
- build metadata is embedded into `/etc/bee-release` and `motd`
- binary self-update remains deferred; no automatic USB/network patching is part of the current runtime
---
## Eating order
Builder environment is set up early (after 1.3) so every subsequent collector
is developed and tested directly on real hardware in the actual Alpine environment.
is developed and tested directly on real hardware in the actual Debian live ISO environment.
No "works on my Mac" drift.
```
@@ -546,8 +376,8 @@ No "works on my Mac" drift.
--- BUILDER + DEBUG ISO (unblock real-hardware testing) ---
2.1 builder VM setup → Alpine VM with build deps + Go toolchain
2.2 debug ISO profile → minimal Alpine ISO: audit binary + dropbear SSH + all packages
2.1 builder setup → Debian host/VM or privileged container with build deps
2.2 debug ISO profile → minimal Debian ISO: `bee` binary + OpenSSH + all packages
2.3 boot on real server → SSH in, verify packages present, run audit manually
--- CONTINUE COLLECTORS (tested on real hardware from here) ---
@@ -560,14 +390,14 @@ No "works on my Mac" drift.
1.8b wear/age telemetry → +SMART hours, NVMe % used, SFP DOM, ECC
1.9 Mellanox NIC enrichment → +NIC firmware/serial
1.10 RAID enrichment → +physical disks behind RAID
1.11 output + USB write → production-ready output
1.11 output + export workflow → file output + explicit removable export
--- PRODUCTION ISO ---
2.4 NVIDIA driver build → driver compiled into overlay
2.5 network bring-up on boot → DHCP on all interfaces
2.6 OpenRC boot service → audit runs on boot automatically
2.6 systemd boot service → audit runs on boot automatically
2.7 vendor utilities → storcli/sas2ircu/mstflint in image
2.8 auto-update → binary self-patches from Gitea
2.9 release workflow → versioning + release notes
2.8 release workflow → versioning + release notes
2.9 operator export flow → explicit TUI export to removable media
```


@@ -1,167 +0,0 @@
package main

import (
    "encoding/json"
    "flag"
    "fmt"
    "log/slog"
    "os"
    "os/exec"
    "path/filepath"
    "sort"
    "strings"
    "time"

    "bee/audit/internal/collector"
)

// Version is the audit binary version.
// Injected at release build time via:
//
//	-ldflags "-X main.Version=1.2"
//
// Defaults to "dev" in local builds.
var Version = "dev"

func main() {
    output := flag.String("output", "stdout", `output destination:
stdout — print JSON to stdout (default)
file:<path> — write JSON to file
usb — auto-detect removable media, write JSON there`)
    showVersion := flag.Bool("version", false, "print version and exit")
    flag.Parse()

    if *showVersion {
        fmt.Println(Version)
        return
    }

    slog.SetDefault(slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
        Level: slog.LevelInfo,
    })))

    result := collector.Run()
    data, err := json.MarshalIndent(result, "", " ")
    if err != nil {
        slog.Error("marshal result", "err", err)
        os.Exit(1)
    }
    if err := writeOutput(*output, data); err != nil {
        slog.Error("write output", "destination", *output, "err", err)
        os.Exit(1)
    }
}

func writeOutput(dest string, data []byte) error {
    switch {
    case dest == "stdout":
        _, err := os.Stdout.Write(append(data, '\n'))
        return err
    case strings.HasPrefix(dest, "file:"):
        path := strings.TrimPrefix(dest, "file:")
        return os.WriteFile(path, append(data, '\n'), 0644)
    case dest == "usb":
        return writeToUSB(data)
    default:
        return fmt.Errorf("unknown output destination %q — use stdout, file:<path>, or usb", dest)
    }
}

// writeToUSB auto-detects the first removable block device, mounts it,
// and writes the audit JSON. Falls back to /tmp on any failure.
func writeToUSB(data []byte) error {
    boardSerial := extractBoardSerial(data)
    filename := auditFilename(boardSerial, time.Now().UTC())

    device, err := firstRemovableDevice()
    if err != nil {
        slog.Warn("usb output: no removable device, writing to /tmp", "err", err)
        return writeAuditToPath(filepath.Join("/tmp", filename), data)
    }
    mountpoint := "/tmp/bee-usb"
    if err := os.MkdirAll(mountpoint, 0755); err != nil {
        return err
    }
    if err := exec.Command("mount", device, mountpoint).Run(); err != nil {
        slog.Warn("usb output: mount failed, writing to /tmp", "device", device, "err", err)
        return writeAuditToPath(filepath.Join("/tmp", filename), data)
    }
    defer func() {
        if err := exec.Command("umount", mountpoint).Run(); err != nil {
            slog.Warn("usb output: umount failed", "mountpoint", mountpoint, "err", err)
        }
    }()

    path := filepath.Join(mountpoint, filename)
    if err := writeAuditToPath(path, data); err != nil {
        slog.Warn("usb output: write failed, falling back to /tmp", "path", path, "err", err)
        return writeAuditToPath(filepath.Join("/tmp", filename), data)
    }
    slog.Info("usb output: written", "path", path)
    return nil
}

func writeAuditToPath(path string, data []byte) error {
    if err := os.WriteFile(path, append(data, '\n'), 0644); err != nil {
        return err
    }
    slog.Info("audit output written", "path", path)
    return nil
}

func extractBoardSerial(data []byte) string {
    var doc struct {
        Hardware struct {
            Board struct {
                SerialNumber string `json:"serial_number"`
            } `json:"board"`
        } `json:"hardware"`
    }
    if err := json.Unmarshal(data, &doc); err != nil {
        return "unknown"
    }
    serial := strings.TrimSpace(doc.Hardware.Board.SerialNumber)
    if serial == "" {
        return "unknown"
    }
    return serial
}

func auditFilename(boardSerial string, now time.Time) string {
    boardSerial = strings.TrimSpace(boardSerial)
    if boardSerial == "" {
        boardSerial = "unknown"
    }
    return fmt.Sprintf("audit-%s-%s.json", boardSerial, now.Format("20060102-150405"))
}

func firstRemovableDevice() (string, error) {
    entries, err := os.ReadDir("/sys/block")
    if err != nil {
        return "", err
    }
    sort.Slice(entries, func(i, j int) bool { return entries[i].Name() < entries[j].Name() })
    for _, e := range entries {
        name := e.Name()
        if strings.HasPrefix(name, "loop") || strings.HasPrefix(name, "ram") {
            continue
        }
        removableFlag, err := os.ReadFile(filepath.Join("/sys/block", name, "removable"))
        if err != nil {
            continue
        }
        if strings.TrimSpace(string(removableFlag)) == "1" {
            return filepath.Join("/dev", name), nil
        }
    }
    return "", fmt.Errorf("no removable block device found")
}

audit/cmd/bee/main.go

@@ -0,0 +1,185 @@
package main

import (
    "flag"
    "fmt"
    "io"
    "log/slog"
    "os"
    "strings"

    "bee/audit/internal/app"
    "bee/audit/internal/platform"
    "bee/audit/internal/runtimeenv"
    "bee/audit/internal/tui"
)

var Version = "dev"

func main() {
    os.Exit(run(os.Args[1:], os.Stdout, os.Stderr))
}

func run(args []string, stdout, stderr io.Writer) int {
    slog.SetDefault(slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
        Level: slog.LevelInfo,
    })))
    if len(args) == 0 {
        printRootUsage(stderr)
        return 1
    }
    switch args[0] {
    case "help", "--help", "-h":
        printRootUsage(stdout)
        return 0
    case "audit":
        return runAudit(args[1:], stdout, stderr)
    case "tui":
        return runTUI(args[1:], stdout, stderr)
    case "export":
        return runExport(args[1:], stdout, stderr)
    case "sat":
        return runSAT(args[1:], stdout, stderr)
    case "version", "--version", "-version":
        fmt.Fprintln(stdout, Version)
        return 0
    default:
        fmt.Fprintf(stderr, "bee: unknown command %q\n\n", args[0])
        printRootUsage(stderr)
        return 1
    }
}

func printRootUsage(w io.Writer) {
    fmt.Fprintln(w, `bee commands:
bee audit --runtime auto|local|livecd --output stdout|file:<path>
bee tui --runtime auto|local|livecd
bee export --target <device>
bee sat nvidia
bee version`)
}

func runAudit(args []string, stdout, stderr io.Writer) int {
    fs := flag.NewFlagSet("audit", flag.ContinueOnError)
    fs.SetOutput(stderr)
    output := fs.String("output", "stdout", "output destination: stdout or file:<path>")
    runtimeFlag := fs.String("runtime", "auto", "runtime environment: auto, local, livecd")
    showVersion := fs.Bool("version", false, "print version and exit")
    fs.Usage = func() {
        fmt.Fprintln(stderr, "usage: bee audit [--runtime auto|local|livecd] [--output stdout|file:<path>]")
        fs.PrintDefaults()
    }
    if err := fs.Parse(args); err != nil {
        return 2
    }
    if *showVersion {
        fmt.Fprintln(stdout, Version)
        return 0
    }
    runtimeInfo, err := runtimeenv.Detect(*runtimeFlag)
    if err != nil {
        slog.Error("resolve runtime", "err", err)
        return 1
    }
    slog.Info("runtime resolved", "mode", runtimeInfo.Mode, "reason", runtimeInfo.Reason)
    application := app.New(platform.New())
    path, err := application.RunAudit(runtimeInfo.Mode, *output)
    if err != nil {
        slog.Error("run audit", "err", err)
        return 1
    }
    if path != "stdout" {
        slog.Info("audit output written", "path", path)
    }
    return 0
}

func runTUI(args []string, stdout, stderr io.Writer) int {
    fs := flag.NewFlagSet("tui", flag.ContinueOnError)
    fs.SetOutput(stderr)
    runtimeFlag := fs.String("runtime", "auto", "runtime environment: auto, local, livecd")
    fs.Usage = func() {
        fmt.Fprintln(stderr, "usage: bee tui [--runtime auto|local|livecd]")
        fs.PrintDefaults()
    }
    if err := fs.Parse(args); err != nil {
        return 2
    }
    runtimeInfo, err := runtimeenv.Detect(*runtimeFlag)
    if err != nil {
        slog.Error("resolve runtime", "err", err)
        return 1
    }
    application := app.New(platform.New())
    if err := tui.Run(application, runtimeInfo.Mode); err != nil {
        slog.Error("run tui", "err", err)
        return 1
    }
    return 0
}

func runExport(args []string, stdout, stderr io.Writer) int {
    fs := flag.NewFlagSet("export", flag.ContinueOnError)
    fs.SetOutput(stderr)
    targetDevice := fs.String("target", "", "removable device path, e.g. /dev/sdb1")
    fs.Usage = func() {
        fmt.Fprintln(stderr, "usage: bee export --target <device>")
        fs.PrintDefaults()
    }
    if err := fs.Parse(args); err != nil {
        return 2
    }
    if strings.TrimSpace(*targetDevice) == "" {
        fmt.Fprintln(stderr, "bee export: --target is required")
        fs.Usage()
        return 2
    }
    application := app.New(platform.New())
    targets, err := application.ListRemovableTargets()
    if err != nil {
        slog.Error("list removable targets", "err", err)
        return 1
    }
    for _, target := range targets {
        if target.Device == *targetDevice {
            path, err := application.ExportLatestAudit(target)
            if err != nil {
                slog.Error("export latest audit", "err", err)
                return 1
            }
            slog.Info("audit exported", "path", path)
            return 0
        }
    }
    slog.Error("target device not found among removable filesystems", "device", *targetDevice)
    return 1
}

func runSAT(args []string, stdout, stderr io.Writer) int {
    if len(args) == 0 || args[0] == "help" || args[0] == "--help" || args[0] == "-h" {
        fmt.Fprintln(stderr, "usage: bee sat nvidia")
        return 2
    }
    if args[0] != "nvidia" {
        fmt.Fprintf(stderr, "bee sat: unknown target %q\n", args[0])
        fmt.Fprintln(stderr, "usage: bee sat nvidia")
        return 2
    }
    application := app.New(platform.New())
    archive, err := application.RunNvidiaAcceptancePack("")
    if err != nil {
        slog.Error("run nvidia sat", "err", err)
        return 1
    }
    slog.Info("nvidia sat archive written", "path", archive)
    return 0
}

audit/cmd/bee/main_test.go

@@ -0,0 +1,115 @@
package main
import (
"bytes"
"strings"
"testing"
)
func TestRunRootHelp(t *testing.T) {
t.Parallel()
var stdout, stderr bytes.Buffer
rc := run([]string{"help"}, &stdout, &stderr)
if rc != 0 {
t.Fatalf("rc=%d want 0", rc)
}
if !strings.Contains(stdout.String(), "bee commands:") {
t.Fatalf("stdout missing root usage:\n%s", stdout.String())
}
}
func TestRunNoArgsPrintsUsage(t *testing.T) {
t.Parallel()
var stdout, stderr bytes.Buffer
rc := run(nil, &stdout, &stderr)
if rc != 1 {
t.Fatalf("rc=%d want 1", rc)
}
if !strings.Contains(stderr.String(), "bee commands:") {
t.Fatalf("stderr missing root usage:\n%s", stderr.String())
}
}
func TestRunUnknownCommand(t *testing.T) {
t.Parallel()
var stdout, stderr bytes.Buffer
rc := run([]string{"wat"}, &stdout, &stderr)
if rc != 1 {
t.Fatalf("rc=%d want 1", rc)
}
if !strings.Contains(stderr.String(), `unknown command "wat"`) {
t.Fatalf("stderr missing unknown command message:\n%s", stderr.String())
}
}
func TestRunVersion(t *testing.T) {
t.Parallel()
old := Version
Version = "test-version"
t.Cleanup(func() { Version = old })
var stdout, stderr bytes.Buffer
rc := run([]string{"version"}, &stdout, &stderr)
if rc != 0 {
t.Fatalf("rc=%d want 0", rc)
}
if strings.TrimSpace(stdout.String()) != "test-version" {
t.Fatalf("stdout=%q want %q", strings.TrimSpace(stdout.String()), "test-version")
}
}
func TestRunExportRequiresTarget(t *testing.T) {
t.Parallel()
var stdout, stderr bytes.Buffer
rc := run([]string{"export"}, &stdout, &stderr)
if rc != 2 {
t.Fatalf("rc=%d want 2", rc)
}
if !strings.Contains(stderr.String(), "--target is required") {
t.Fatalf("stderr missing target error:\n%s", stderr.String())
}
if !strings.Contains(stderr.String(), "usage: bee export --target <device>") {
t.Fatalf("stderr missing export usage:\n%s", stderr.String())
}
}
func TestRunSATUsage(t *testing.T) {
t.Parallel()
var stdout, stderr bytes.Buffer
rc := run([]string{"sat"}, &stdout, &stderr)
if rc != 2 {
t.Fatalf("rc=%d want 2", rc)
}
if !strings.Contains(stderr.String(), "usage: bee sat nvidia") {
t.Fatalf("stderr missing sat usage:\n%s", stderr.String())
}
}
func TestRunSATUnknownTarget(t *testing.T) {
t.Parallel()
var stdout, stderr bytes.Buffer
rc := run([]string{"sat", "amd"}, &stdout, &stderr)
if rc != 2 {
t.Fatalf("rc=%d want 2", rc)
}
if !strings.Contains(stderr.String(), `unknown target "amd"`) {
t.Fatalf("stderr missing sat target error:\n%s", stderr.String())
}
}
func TestRunAuditInvalidRuntime(t *testing.T) {
t.Parallel()
var stdout, stderr bytes.Buffer
rc := run([]string{"audit", "--runtime", "bad"}, &stdout, &stderr)
if rc != 1 {
t.Fatalf("rc=%d want 1", rc)
}
}

audit/go.mod

@@ -1,3 +1,24 @@
module bee/audit
go 1.23
require github.com/charmbracelet/bubbletea v1.3.4
require (
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
github.com/charmbracelet/lipgloss v1.0.0 // indirect
github.com/charmbracelet/x/ansi v0.8.0 // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.15.2 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
golang.org/x/sync v0.11.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.3.8 // indirect
)

audit/go.sum

@@ -0,0 +1,37 @@
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
github.com/charmbracelet/bubbletea v1.3.4 h1:kCg7B+jSCFPLYRA52SDZjr51kG/fMUEoPoZrkaDHyoI=
github.com/charmbracelet/bubbletea v1.3.4/go.mod h1:dtcUCyCGEX3g9tosuYiut3MXgY/Jsv9nKVdibKKRRXo=
github.com/charmbracelet/lipgloss v1.0.0 h1:O7VkGDvqEdGi93X+DeqsQ7PKHDgtQfF8j8/O2qFMQNg=
github.com/charmbracelet/lipgloss v1.0.0/go.mod h1:U5fy9Z+C38obMs+T+tJqst9VGzlOYGj4ri9reL3qUlo=
github.com/charmbracelet/x/ansi v0.8.0 h1:9GTq3xq9caJW8ZrBTe0LIe2fvfLR/bYXKTx2llXn7xE=
github.com/charmbracelet/x/ansi v0.8.0/go.mod h1:wdYl/ONOLHLIVmQaxbIYEC/cRKOQyjTkowiI4blgS9Q=
github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4=
github.com/mattn/go-localereader v0.0.1/go.mod h1:8fBrzywKY7BI3czFoHkuzRoWE9C+EiG4R1k4Cjx5p88=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6/go.mod h1:CJlz5H+gyd6CUWT45Oy4q24RdLyn7Md9Vj2/ldJBSIo=
github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA=
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
github.com/muesli/termenv v0.15.2 h1:GohcuySI0QmI3wN8Ok9PtKGkgkFIk7y6Vpb5PvrY+Wo=
github.com/muesli/termenv v0.15.2/go.mod h1:Epx+iuz8sNs7mNKhxzH4fWXGNpZwUaJKRS1noLXviQ8=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.8 h1:nAL+RVCQ9uMn3vJZbV+MRnydTJFPf8qqY42YiA6MrqY=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=

audit/internal/app/app.go

@@ -0,0 +1,311 @@
package app
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"bee/audit/internal/collector"
"bee/audit/internal/platform"
"bee/audit/internal/runtimeenv"
)
const (
DefaultAuditJSONPath = "/var/log/bee-audit.json"
DefaultAuditLogPath = "/var/log/bee-audit.log"
)
type App struct {
network networkManager
services serviceManager
exports exportManager
tools toolManager
sat satRunner
}
type ActionResult struct {
Title string
Body string
}
type networkManager interface {
ListInterfaces() ([]platform.InterfaceInfo, error)
DefaultRoute() string
DHCPOne(iface string) (string, error)
DHCPAll() (string, error)
SetStaticIPv4(cfg platform.StaticIPv4Config) (string, error)
}
type serviceManager interface {
ListBeeServices() ([]string, error)
ServiceStatus(name string) (string, error)
ServiceDo(name string, action platform.ServiceAction) (string, error)
}
type exportManager interface {
ListRemovableTargets() ([]platform.RemovableTarget, error)
ExportFileToTarget(src string, target platform.RemovableTarget) (string, error)
}
type toolManager interface {
TailFile(path string, lines int) string
CheckTools(names []string) []platform.ToolStatus
}
type satRunner interface {
RunNvidiaAcceptancePack(baseDir string) (string, error)
}
func New(platform *platform.System) *App {
return &App{
network: platform,
services: platform,
exports: platform,
tools: platform,
sat: platform,
}
}
func (a *App) RunAudit(runtimeMode runtimeenv.Mode, output string) (string, error) {
result := collector.Run(runtimeMode)
data, err := json.MarshalIndent(result, "", " ")
if err != nil {
return "", err
}
switch {
case output == "stdout":
_, err := os.Stdout.Write(append(data, '\n'))
return "stdout", err
case strings.HasPrefix(output, "file:"):
path := strings.TrimPrefix(output, "file:")
if err := os.WriteFile(path, append(data, '\n'), 0644); err != nil {
return "", err
}
return path, nil
default:
return "", fmt.Errorf("unknown output destination %q — use stdout or file:<path>", output)
}
}
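The output-destination switch in `RunAudit` can be isolated into a small helper; this standalone sketch (the `resolveOutput` name is mine, not part of the package) shows the three cases:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveOutput mirrors RunAudit's switch: "stdout" selects standard
// output, "file:<path>" selects a file path, anything else is rejected.
func resolveOutput(output string) (string, error) {
	switch {
	case output == "stdout":
		return "stdout", nil
	case strings.HasPrefix(output, "file:"):
		return strings.TrimPrefix(output, "file:"), nil
	default:
		return "", fmt.Errorf("unknown output destination %q", output)
	}
}

func main() {
	path, _ := resolveOutput("file:/var/log/bee-audit.json")
	fmt.Println(path) // /var/log/bee-audit.json
}
```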
func (a *App) RunAuditNow(runtimeMode runtimeenv.Mode) (ActionResult, error) {
path, err := a.RunAudit(runtimeMode, "file:"+DefaultAuditJSONPath)
body := "Audit completed."
if path != "" {
body = "Audit output: " + path
}
return ActionResult{Title: "Run audit", Body: body}, err
}
func (a *App) RunAuditToDefaultFile(runtimeMode runtimeenv.Mode) (string, error) {
return a.RunAudit(runtimeMode, "file:"+DefaultAuditJSONPath)
}
func (a *App) ExportLatestAudit(target platform.RemovableTarget) (string, error) {
if _, err := os.Stat(DefaultAuditJSONPath); err != nil {
return "", err
}
filename := fmt.Sprintf("audit-%s-%s.json", sanitizeFilename(hostnameOr("unknown")), time.Now().UTC().Format("20060102-150405"))
tmpPath := filepath.Join(os.TempDir(), filename)
data, err := os.ReadFile(DefaultAuditJSONPath)
if err != nil {
return "", err
}
if err := os.WriteFile(tmpPath, data, 0644); err != nil {
return "", err
}
defer os.Remove(tmpPath)
return a.exports.ExportFileToTarget(tmpPath, target)
}
func (a *App) ExportLatestAuditResult(target platform.RemovableTarget) (ActionResult, error) {
path, err := a.ExportLatestAudit(target)
return ActionResult{Title: "Export audit", Body: "Audit exported to " + path}, err
}
func (a *App) ListInterfaces() ([]platform.InterfaceInfo, error) {
return a.network.ListInterfaces()
}
func (a *App) DefaultRoute() string {
return a.network.DefaultRoute()
}
func (a *App) DHCPOne(iface string) (string, error) {
return a.network.DHCPOne(iface)
}
func (a *App) DHCPOneResult(iface string) (ActionResult, error) {
body, err := a.network.DHCPOne(iface)
return ActionResult{Title: "DHCP on " + iface, Body: body}, err
}
func (a *App) DHCPAll() (string, error) {
return a.network.DHCPAll()
}
func (a *App) DHCPAllResult() (ActionResult, error) {
body, err := a.network.DHCPAll()
return ActionResult{Title: "DHCP all interfaces", Body: body}, err
}
func (a *App) SetStaticIPv4(cfg platform.StaticIPv4Config) (string, error) {
return a.network.SetStaticIPv4(cfg)
}
func (a *App) SetStaticIPv4Result(cfg platform.StaticIPv4Config) (ActionResult, error) {
body, err := a.network.SetStaticIPv4(cfg)
return ActionResult{Title: "Static IPv4 on " + cfg.Interface, Body: body}, err
}
func (a *App) NetworkStatus() (ActionResult, error) {
ifaces, err := a.network.ListInterfaces()
if err != nil {
return ActionResult{Title: "Network status"}, err
}
var body strings.Builder
for _, iface := range ifaces {
ipv4 := "(no IPv4)"
if len(iface.IPv4) > 0 {
ipv4 = strings.Join(iface.IPv4, ", ")
}
fmt.Fprintf(&body, "- %s: state=%s ip=%s\n", iface.Name, iface.State, ipv4)
}
if gw := a.network.DefaultRoute(); gw != "" {
fmt.Fprintf(&body, "\nDefault route: %s\n", gw)
}
return ActionResult{Title: "Network status", Body: strings.TrimSpace(body.String())}, nil
}
func (a *App) DefaultStaticIPv4FormFields(iface string) []string {
return []string{
"",
"24",
strings.TrimSpace(a.network.DefaultRoute()),
"77.88.8.8 77.88.8.1 1.1.1.1 8.8.8.8",
}
}
func (a *App) ParseStaticIPv4Config(iface string, fields []string) platform.StaticIPv4Config {
get := func(index int) string {
if index >= 0 && index < len(fields) {
return strings.TrimSpace(fields[index])
}
return ""
}
return platform.StaticIPv4Config{
Interface: iface,
Address: get(0),
Prefix: get(1),
Gateway: get(2),
DNS: strings.Fields(get(3)),
}
}
func (a *App) ListBeeServices() ([]string, error) {
return a.services.ListBeeServices()
}
func (a *App) ServiceStatus(name string) (string, error) {
return a.services.ServiceStatus(name)
}
func (a *App) ServiceStatusResult(name string) (ActionResult, error) {
body, err := a.services.ServiceStatus(name)
return ActionResult{Title: "service: " + name, Body: body}, err
}
func (a *App) ServiceDo(name string, action platform.ServiceAction) (string, error) {
return a.services.ServiceDo(name, action)
}
func (a *App) ServiceActionResult(name string, action platform.ServiceAction) (ActionResult, error) {
body, err := a.services.ServiceDo(name, action)
return ActionResult{Title: "service: " + name, Body: body}, err
}
func (a *App) ListRemovableTargets() ([]platform.RemovableTarget, error) {
return a.exports.ListRemovableTargets()
}
func (a *App) TailFile(path string, lines int) string {
return a.tools.TailFile(path, lines)
}
func (a *App) CheckTools(names []string) []platform.ToolStatus {
return a.tools.CheckTools(names)
}
func (a *App) ToolCheckResult(names []string) ActionResult {
var body strings.Builder
for _, tool := range a.tools.CheckTools(names) {
status := "MISSING"
if tool.OK {
status = "OK (" + tool.Path + ")"
}
fmt.Fprintf(&body, "- %s: %s\n", tool.Name, status)
}
return ActionResult{Title: "Required tools", Body: strings.TrimSpace(body.String())}
}
func (a *App) AuditLogTailResult() ActionResult {
body := a.tools.TailFile(DefaultAuditLogPath, 40) + "\n\n" + a.tools.TailFile(DefaultAuditJSONPath, 20)
return ActionResult{Title: "Audit log tail", Body: body}
}
func (a *App) RunNvidiaAcceptancePack(baseDir string) (string, error) {
return a.sat.RunNvidiaAcceptancePack(baseDir)
}
func (a *App) RunNvidiaAcceptancePackResult(baseDir string) (ActionResult, error) {
path, err := a.sat.RunNvidiaAcceptancePack(baseDir)
return ActionResult{Title: "NVIDIA SAT", Body: "Archive written to " + path}, err
}
func (a *App) FormatToolStatuses(statuses []platform.ToolStatus) string {
var body strings.Builder
for _, tool := range statuses {
status := "MISSING"
if tool.OK {
status = "OK (" + tool.Path + ")"
}
fmt.Fprintf(&body, "- %s: %s\n", tool.Name, status)
}
return strings.TrimSpace(body.String())
}
func (a *App) ParsePrefix(raw string, fallback int) int {
value, err := strconv.Atoi(strings.TrimSpace(raw))
if err != nil || value <= 0 {
return fallback
}
return value
}
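Note that `ParsePrefix` falls back for non-numeric or non-positive input but does not clamp to 32, so callers are trusted to supply a sane CIDR prefix. A standalone check of that behavior (function duplicated here for illustration):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePrefix duplicates App.ParsePrefix: trim, parse, and fall back on
// any parse error or non-positive value. Values above 32 are not rejected.
func parsePrefix(raw string, fallback int) int {
	value, err := strconv.Atoi(strings.TrimSpace(raw))
	if err != nil || value <= 0 {
		return fallback
	}
	return value
}

func main() {
	fmt.Println(parsePrefix(" 23 ", 24)) // 23
	fmt.Println(parsePrefix("abc", 24))  // 24
	fmt.Println(parsePrefix("-1", 24))   // 24
}
```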
func hostnameOr(fallback string) string {
hn, err := os.Hostname()
if err != nil || strings.TrimSpace(hn) == "" {
return fallback
}
return hn
}
func sanitizeFilename(v string) string {
var out []rune
for _, r := range v {
switch {
case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9', r == '-', r == '_', r == '.':
out = append(out, r)
default:
out = append(out, '-')
}
}
if len(out) == 0 {
return "unknown"
}
return string(out)
}
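The sanitizer above maps every rune outside `[A-Za-z0-9._-]` to `-`, which keeps exported filenames safe on FAT-formatted USB sticks. A self-contained demonstration (inputs are made up):

```go
package main

import "fmt"

// sanitizeFilename mirrors the helper in app.go: every rune outside
// [A-Za-z0-9._-] becomes '-', and an empty result becomes "unknown".
func sanitizeFilename(v string) string {
	var out []rune
	for _, r := range v {
		switch {
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9', r == '-', r == '_', r == '.':
			out = append(out, r)
		default:
			out = append(out, '-')
		}
	}
	if len(out) == 0 {
		return "unknown"
	}
	return string(out)
}

func main() {
	fmt.Println(sanitizeFilename("node 01/rack#3")) // node-01-rack-3
	fmt.Println(sanitizeFilename(""))               // unknown
}
```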


@@ -0,0 +1,279 @@
package app
import (
"errors"
"testing"
"bee/audit/internal/platform"
)
type fakeNetwork struct {
listInterfacesFn func() ([]platform.InterfaceInfo, error)
defaultRouteFn func() string
dhcpOneFn func(string) (string, error)
dhcpAllFn func() (string, error)
setStaticIPv4Fn func(platform.StaticIPv4Config) (string, error)
}
func (f fakeNetwork) ListInterfaces() ([]platform.InterfaceInfo, error) {
return f.listInterfacesFn()
}
func (f fakeNetwork) DefaultRoute() string {
return f.defaultRouteFn()
}
func (f fakeNetwork) DHCPOne(iface string) (string, error) {
return f.dhcpOneFn(iface)
}
func (f fakeNetwork) DHCPAll() (string, error) {
return f.dhcpAllFn()
}
func (f fakeNetwork) SetStaticIPv4(cfg platform.StaticIPv4Config) (string, error) {
return f.setStaticIPv4Fn(cfg)
}
type fakeServices struct {
serviceStatusFn func(string) (string, error)
serviceDoFn func(string, platform.ServiceAction) (string, error)
}
func (f fakeServices) ListBeeServices() ([]string, error) {
return nil, nil
}
func (f fakeServices) ServiceStatus(name string) (string, error) {
return f.serviceStatusFn(name)
}
func (f fakeServices) ServiceDo(name string, action platform.ServiceAction) (string, error) {
return f.serviceDoFn(name, action)
}
type fakeExports struct{}
func (f fakeExports) ListRemovableTargets() ([]platform.RemovableTarget, error) {
return nil, nil
}
func (f fakeExports) ExportFileToTarget(src string, target platform.RemovableTarget) (string, error) {
return "", nil
}
type fakeTools struct {
tailFileFn func(string, int) string
checkToolsFn func([]string) []platform.ToolStatus
}
func (f fakeTools) TailFile(path string, lines int) string {
return f.tailFileFn(path, lines)
}
func (f fakeTools) CheckTools(names []string) []platform.ToolStatus {
return f.checkToolsFn(names)
}
type fakeSAT struct {
runFn func(string) (string, error)
}
func (f fakeSAT) RunNvidiaAcceptancePack(baseDir string) (string, error) {
return f.runFn(baseDir)
}
func TestNetworkStatusFormatsInterfacesAndRoute(t *testing.T) {
t.Parallel()
a := &App{
network: fakeNetwork{
listInterfacesFn: func() ([]platform.InterfaceInfo, error) {
return []platform.InterfaceInfo{
{Name: "eth0", State: "UP", IPv4: []string{"10.0.0.2/24"}},
{Name: "eth1", State: "DOWN", IPv4: nil},
}, nil
},
defaultRouteFn: func() string { return "10.0.0.1" },
},
}
result, err := a.NetworkStatus()
if err != nil {
t.Fatalf("NetworkStatus error: %v", err)
}
if result.Title != "Network status" {
t.Fatalf("title=%q want %q", result.Title, "Network status")
}
if want := "- eth0: state=UP ip=10.0.0.2/24"; !contains(result.Body, want) {
t.Fatalf("body missing %q\nbody=%s", want, result.Body)
}
if want := "- eth1: state=DOWN ip=(no IPv4)"; !contains(result.Body, want) {
t.Fatalf("body missing %q\nbody=%s", want, result.Body)
}
if want := "Default route: 10.0.0.1"; !contains(result.Body, want) {
t.Fatalf("body missing %q\nbody=%s", want, result.Body)
}
}
func TestNetworkStatusPropagatesListError(t *testing.T) {
t.Parallel()
a := &App{
network: fakeNetwork{
listInterfacesFn: func() ([]platform.InterfaceInfo, error) {
return nil, errors.New("boom")
},
defaultRouteFn: func() string { return "" },
},
}
result, err := a.NetworkStatus()
if err == nil {
t.Fatal("expected error")
}
if result.Title != "Network status" {
t.Fatalf("title=%q want %q", result.Title, "Network status")
}
}
func TestParseStaticIPv4ConfigAndDefaults(t *testing.T) {
t.Parallel()
a := &App{
network: fakeNetwork{
defaultRouteFn: func() string { return " 192.168.1.1 " },
listInterfacesFn: func() ([]platform.InterfaceInfo, error) {
return nil, nil
},
dhcpOneFn: func(string) (string, error) { return "", nil },
dhcpAllFn: func() (string, error) { return "", nil },
setStaticIPv4Fn: func(platform.StaticIPv4Config) (string, error) { return "", nil },
},
}
defaults := a.DefaultStaticIPv4FormFields("eth0")
if len(defaults) != 4 {
t.Fatalf("len(defaults)=%d want 4", len(defaults))
}
if defaults[1] != "24" || defaults[2] != "192.168.1.1" {
t.Fatalf("unexpected defaults: %#v", defaults)
}
cfg := a.ParseStaticIPv4Config("eth0", []string{
" 10.10.0.5 ",
" 23 ",
" 10.10.0.1 ",
" 1.1.1.1 8.8.8.8 ",
})
if cfg.Interface != "eth0" || cfg.Address != "10.10.0.5" || cfg.Prefix != "23" || cfg.Gateway != "10.10.0.1" {
t.Fatalf("unexpected cfg: %#v", cfg)
}
if len(cfg.DNS) != 2 || cfg.DNS[0] != "1.1.1.1" || cfg.DNS[1] != "8.8.8.8" {
t.Fatalf("unexpected dns: %#v", cfg.DNS)
}
}
func TestServiceActionResults(t *testing.T) {
t.Parallel()
a := &App{
services: fakeServices{
serviceStatusFn: func(name string) (string, error) {
return "active", nil
},
serviceDoFn: func(name string, action platform.ServiceAction) (string, error) {
return string(action) + " ok", nil
},
},
}
statusResult, err := a.ServiceStatusResult("bee-audit")
if err != nil {
t.Fatalf("ServiceStatusResult error: %v", err)
}
if statusResult.Title != "service: bee-audit" || statusResult.Body != "active" {
t.Fatalf("unexpected status result: %#v", statusResult)
}
actionResult, err := a.ServiceActionResult("bee-audit", platform.ServiceRestart)
if err != nil {
t.Fatalf("ServiceActionResult error: %v", err)
}
if actionResult.Title != "service: bee-audit" || actionResult.Body != "restart ok" {
t.Fatalf("unexpected action result: %#v", actionResult)
}
}
func TestToolCheckAndLogTailResults(t *testing.T) {
t.Parallel()
a := &App{
tools: fakeTools{
tailFileFn: func(path string, lines int) string {
return path
},
checkToolsFn: func(names []string) []platform.ToolStatus {
return []platform.ToolStatus{
{Name: "dmidecode", OK: true, Path: "/usr/bin/dmidecode"},
{Name: "smartctl", OK: false},
}
},
},
}
toolsResult := a.ToolCheckResult([]string{"dmidecode", "smartctl"})
if toolsResult.Title != "Required tools" {
t.Fatalf("title=%q want %q", toolsResult.Title, "Required tools")
}
if want := "- dmidecode: OK (/usr/bin/dmidecode)"; !contains(toolsResult.Body, want) {
t.Fatalf("body missing %q\nbody=%s", want, toolsResult.Body)
}
if want := "- smartctl: MISSING"; !contains(toolsResult.Body, want) {
t.Fatalf("body missing %q\nbody=%s", want, toolsResult.Body)
}
logResult := a.AuditLogTailResult()
if logResult.Title != "Audit log tail" {
t.Fatalf("title=%q want %q", logResult.Title, "Audit log tail")
}
if want := DefaultAuditLogPath + "\n\n" + DefaultAuditJSONPath; logResult.Body != want {
t.Fatalf("body=%q want %q", logResult.Body, want)
}
}
func TestRunNvidiaAcceptancePackResult(t *testing.T) {
t.Parallel()
a := &App{
sat: fakeSAT{
runFn: func(baseDir string) (string, error) {
if baseDir != "/tmp/sat" {
t.Fatalf("baseDir=%q want %q", baseDir, "/tmp/sat")
}
return "/tmp/sat/out.tar.gz", nil
},
},
}
result, err := a.RunNvidiaAcceptancePackResult("/tmp/sat")
if err != nil {
t.Fatalf("RunNvidiaAcceptancePackResult error: %v", err)
}
if result.Title != "NVIDIA SAT" || result.Body != "Archive written to /tmp/sat/out.tar.gz" {
t.Fatalf("unexpected result: %#v", result)
}
}
// contains reports whether needle occurs in haystack; it is a local
// stand-in for strings.Contains that keeps the test file's imports minimal.
func contains(haystack, needle string) bool {
return len(needle) == 0 || (len(haystack) >= len(needle) && (haystack == needle || containsAt(haystack, needle)))
}
func containsAt(haystack, needle string) bool {
for i := 0; i+len(needle) <= len(haystack); i++ {
if haystack[i:i+len(needle)] == needle {
return true
}
}
return false
}


@@ -4,6 +4,7 @@
package collector
import (
"bee/audit/internal/runtimeenv"
"bee/audit/internal/schema"
"log/slog"
"time"
@@ -11,7 +12,7 @@ import (
// Run executes all collectors and returns the combined snapshot.
// Partial failures are logged as warnings; collection always completes.
-func Run() schema.HardwareIngestRequest {
+func Run(runtimeMode runtimeenv.Mode) schema.HardwareIngestRequest {
start := time.Now()
slog.Info("audit started")
@@ -39,7 +40,7 @@ func Run() schema.HardwareIngestRequest {
slog.Info("audit completed", "duration", time.Since(start).Round(time.Millisecond))
-sourceType := "livcd"
+sourceType := string(runtimeMode)
protocol := "os-direct"
return schema.HardwareIngestRequest{


@@ -27,6 +27,9 @@ type nvidiaGPUInfo struct {
// If the driver/tool is unavailable, NVIDIA devices get UNKNOWN status and
// a stable serial fallback based on board serial + slot.
func enrichPCIeWithNVIDIA(devs []schema.HardwarePCIeDevice, boardSerial string) []schema.HardwarePCIeDevice {
if !hasNVIDIADevices(devs) {
return devs
}
gpuByBDF, err := queryNVIDIAGPUs()
if err != nil {
slog.Info("nvidia: enrichment skipped", "err", err)
@@ -35,6 +38,15 @@ func enrichPCIeWithNVIDIA(devs []schema.HardwarePCIeDevice, boardSerial string)
return enrichPCIeWithNVIDIAData(devs, gpuByBDF, boardSerial, true)
}
func hasNVIDIADevices(devs []schema.HardwarePCIeDevice) bool {
for _, dev := range devs {
if isNVIDIADevice(dev) {
return true
}
}
return false
}
func enrichPCIeWithNVIDIAData(devs []schema.HardwarePCIeDevice, gpuByBDF map[string]nvidiaGPUInfo, boardSerial string, driverLoaded bool) []schema.HardwarePCIeDevice {
enriched := 0
for i := range devs {


@@ -0,0 +1,94 @@
package platform
import (
"fmt"
"os"
"os/exec"
"path/filepath"
"sort"
"strings"
)
func (s *System) ListRemovableTargets() ([]RemovableTarget, error) {
raw, err := exec.Command("lsblk", "-P", "-o", "NAME,TYPE,PKNAME,RM,FSTYPE,MOUNTPOINT,SIZE,LABEL,MODEL").Output()
if err != nil {
return nil, err
}
var out []RemovableTarget
for _, line := range strings.Split(strings.TrimSpace(string(raw)), "\n") {
if strings.TrimSpace(line) == "" {
continue
}
fields := parseLSBLKPairs(line)
deviceType := fields["TYPE"]
if deviceType == "rom" || deviceType == "loop" {
continue
}
removable := fields["RM"] == "1"
if !removable {
if parent := fields["PKNAME"]; parent != "" {
if data, err := os.ReadFile(filepath.Join("/sys/class/block", parent, "removable")); err == nil {
removable = strings.TrimSpace(string(data)) == "1"
}
}
}
if !removable || fields["FSTYPE"] == "" {
continue
}
out = append(out, RemovableTarget{
Device: "/dev/" + fields["NAME"],
FSType: fields["FSTYPE"],
Size: fields["SIZE"],
Label: fields["LABEL"],
Model: fields["MODEL"],
Mountpoint: fields["MOUNTPOINT"],
})
}
sort.Slice(out, func(i, j int) bool { return out[i].Device < out[j].Device })
return out, nil
}
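The filtering logic above can be summarized as a pure predicate over the parsed lsblk fields. In this sketch, `isRemovableCandidate` is my own helper name and `parentRemovable` stands in for the value normally read from `/sys/class/block/<PKNAME>/removable`:

```go
package main

import "fmt"

// isRemovableCandidate mirrors the filter in ListRemovableTargets:
// skip rom/loop devices, require either lsblk's RM flag or the parent
// disk's sysfs removable flag, and require a filesystem to be present.
func isRemovableCandidate(fields map[string]string, parentRemovable bool) bool {
	t := fields["TYPE"]
	if t == "rom" || t == "loop" {
		return false
	}
	removable := fields["RM"] == "1" || parentRemovable
	return removable && fields["FSTYPE"] != ""
}

func main() {
	usb := map[string]string{"TYPE": "part", "RM": "0", "FSTYPE": "vfat"}
	fmt.Println(isRemovableCandidate(usb, true)) // true: parent disk is removable

	cd := map[string]string{"TYPE": "rom", "RM": "1", "FSTYPE": "iso9660"}
	fmt.Println(isRemovableCandidate(cd, false)) // false: rom devices are skipped
}
```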
func (s *System) ExportFileToTarget(src string, target RemovableTarget) (string, error) {
if src == "" || target.Device == "" {
return "", fmt.Errorf("source and target are required")
}
if _, err := os.Stat(src); err != nil {
return "", err
}
mountpoint := strings.TrimSpace(target.Mountpoint)
mountedHere := false
if mountpoint == "" {
mountpoint = filepath.Join("/tmp", "bee-export-"+filepath.Base(target.Device))
if err := os.MkdirAll(mountpoint, 0755); err != nil {
return "", err
}
if raw, err := exec.Command("mount", target.Device, mountpoint).CombinedOutput(); err != nil {
_ = os.Remove(mountpoint)
return string(raw), err
}
mountedHere = true
}
filename := filepath.Base(src)
dst := filepath.Join(mountpoint, filename)
data, err := os.ReadFile(src)
if err != nil {
return "", err
}
if err := os.WriteFile(dst, data, 0644); err != nil {
return "", err
}
_ = exec.Command("sync").Run()
if mountedHere {
_ = exec.Command("umount", mountpoint).Run()
_ = os.Remove(mountpoint)
}
return dst, nil
}


@@ -0,0 +1,156 @@
package platform
import (
"bytes"
"fmt"
"os"
"os/exec"
"sort"
"strings"
)
func (s *System) ListInterfaces() ([]InterfaceInfo, error) {
names, err := listInterfaceNames()
if err != nil {
return nil, err
}
out := make([]InterfaceInfo, 0, len(names))
for _, name := range names {
state := "unknown"
if raw, err := exec.Command("ip", "-o", "link", "show", name).Output(); err == nil {
fields := strings.Fields(string(raw))
if len(fields) >= 9 {
state = fields[8]
}
}
var ipv4 []string
if raw, err := exec.Command("ip", "-o", "-4", "addr", "show", "dev", name).Output(); err == nil {
for _, line := range strings.Split(strings.TrimSpace(string(raw)), "\n") {
fields := strings.Fields(line)
if len(fields) >= 4 {
ipv4 = append(ipv4, fields[3])
}
}
}
out = append(out, InterfaceInfo{Name: name, State: state, IPv4: ipv4})
}
return out, nil
}
func (s *System) DefaultRoute() string {
raw, err := exec.Command("ip", "route", "show", "default").Output()
if err != nil {
return ""
}
fields := strings.Fields(string(raw))
for i := 0; i < len(fields)-1; i++ {
if fields[i] == "via" {
return fields[i+1]
}
}
return ""
}
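`DefaultRoute` finds the token after `via` in the `ip route show default` output. Extracting that scan into a function of the raw text (the `defaultGateway` name is mine) lets it be exercised without running ip(8):

```go
package main

import (
	"fmt"
	"strings"
)

// defaultGateway applies DefaultRoute's token scan to raw
// `ip route show default` output: return the field after "via", or "".
func defaultGateway(raw string) string {
	fields := strings.Fields(raw)
	for i := 0; i < len(fields)-1; i++ {
		if fields[i] == "via" {
			return fields[i+1]
		}
	}
	return ""
}

func main() {
	fmt.Println(defaultGateway("default via 10.0.0.1 dev eth0 proto dhcp metric 100")) // 10.0.0.1
}
```

A route without a `via` hop (for example a link-scope default on a point-to-point interface) yields the empty string, matching the original's behavior.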
func (s *System) DHCPOne(iface string) (string, error) {
var out bytes.Buffer
if err := exec.Command("ip", "link", "set", iface, "up").Run(); err != nil {
fmt.Fprintf(&out, "WARN: ip link set up failed: %v\n", err)
}
if raw, err := exec.Command("dhclient", "-r", iface).CombinedOutput(); err == nil {
out.Write(raw)
} else if len(raw) > 0 {
out.Write(raw)
}
raw, err := exec.Command("dhclient", "-4", "-v", iface).CombinedOutput()
out.Write(raw)
if err != nil {
return out.String(), err
}
return out.String(), nil
}
func (s *System) DHCPAll() (string, error) {
ifaces, err := listInterfaceNames()
if err != nil {
return "", err
}
var out strings.Builder
for _, iface := range ifaces {
fmt.Fprintf(&out, "[%s]\n", iface)
log, err := s.DHCPOne(iface)
out.WriteString(log)
if err != nil {
fmt.Fprintf(&out, "ERROR: %v\n", err)
}
out.WriteString("\n")
}
return out.String(), nil
}
func (s *System) SetStaticIPv4(cfg StaticIPv4Config) (string, error) {
if cfg.Interface == "" || cfg.Address == "" || cfg.Prefix == "" {
return "", fmt.Errorf("interface, address, and prefix are required")
}
dns := cfg.DNS
if len(dns) == 0 {
dns = []string{"77.88.8.8", "77.88.8.1", "1.1.1.1", "8.8.8.8"}
}
var out strings.Builder
_ = exec.Command("ip", "link", "set", cfg.Interface, "up").Run()
_ = exec.Command("ip", "addr", "flush", "dev", cfg.Interface).Run()
if raw, err := exec.Command("ip", "addr", "add", cfg.Address+"/"+cfg.Prefix, "dev", cfg.Interface).CombinedOutput(); err != nil {
return string(raw), err
}
out.WriteString("address configured\n")
if cfg.Gateway != "" {
_ = exec.Command("ip", "route", "del", "default").Run()
if raw, err := exec.Command("ip", "route", "add", "default", "via", cfg.Gateway, "dev", cfg.Interface).CombinedOutput(); err != nil {
return out.String() + string(raw), err
}
out.WriteString("default route configured\n")
}
var resolv strings.Builder
for _, dnsServer := range dns {
dnsServer = strings.TrimSpace(dnsServer)
if dnsServer == "" {
continue
}
fmt.Fprintf(&resolv, "nameserver %s\n", dnsServer)
}
if err := os.WriteFile("/etc/resolv.conf", []byte(resolv.String()), 0644); err != nil {
return out.String(), err
}
out.WriteString("dns configured\n")
return out.String(), nil
}
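The resolv.conf assembly at the end of `SetStaticIPv4` drops blank entries and emits one `nameserver` line per server. Isolated as a pure function (the `buildResolvConf` name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// buildResolvConf mirrors the resolv.conf assembly in SetStaticIPv4:
// blank entries are dropped, each server becomes one nameserver line.
func buildResolvConf(dns []string) string {
	var b strings.Builder
	for _, server := range dns {
		server = strings.TrimSpace(server)
		if server == "" {
			continue
		}
		fmt.Fprintf(&b, "nameserver %s\n", server)
	}
	return b.String()
}

func main() {
	fmt.Print(buildResolvConf([]string{"1.1.1.1", "", " 8.8.8.8 "}))
	// nameserver 1.1.1.1
	// nameserver 8.8.8.8
}
```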
func listInterfaceNames() ([]string, error) {
raw, err := exec.Command("ip", "-o", "link", "show").Output()
if err != nil {
return nil, err
}
var out []string
for _, line := range strings.Split(strings.TrimSpace(string(raw)), "\n") {
fields := strings.SplitN(line, ": ", 3)
if len(fields) < 2 {
continue
}
name := fields[1]
if name == "lo" || strings.HasPrefix(name, "docker") || strings.HasPrefix(name, "virbr") ||
strings.HasPrefix(name, "veth") || strings.HasPrefix(name, "tun") ||
strings.HasPrefix(name, "tap") || strings.HasPrefix(name, "br-") ||
strings.HasPrefix(name, "bond") || strings.HasPrefix(name, "dummy") {
continue
}
out = append(out, name)
}
sort.Strings(out)
return out, nil
}


@@ -0,0 +1,43 @@
package platform
import "strings"
func parseLSBLKPairs(line string) map[string]string {
out := map[string]string{}
for _, part := range splitQuotedFields(line) {
idx := strings.Index(part, "=")
if idx <= 0 {
continue
}
key := part[:idx]
value := strings.Trim(part[idx+1:], `"`)
out[key] = value
}
return out
}
func splitQuotedFields(s string) []string {
var out []string
var cur strings.Builder
inQuotes := false
for _, r := range s {
switch r {
case '"':
inQuotes = !inQuotes
cur.WriteRune(r)
case ' ':
if inQuotes {
cur.WriteRune(r)
} else if cur.Len() > 0 {
out = append(out, cur.String())
cur.Reset()
}
default:
cur.WriteRune(r)
}
}
if cur.Len() > 0 {
out = append(out, cur.String())
}
return out
}


@@ -0,0 +1,101 @@
package platform
import (
"archive/tar"
"compress/gzip"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
)
func (s *System) RunNvidiaAcceptancePack(baseDir string) (string, error) {
if baseDir == "" {
baseDir = "/var/log/bee-sat"
}
ts := time.Now().UTC().Format("20060102-150405")
runDir := filepath.Join(baseDir, "gpu-nvidia-"+ts)
if err := os.MkdirAll(runDir, 0755); err != nil {
return "", err
}
type job struct {
name string
cmd []string
}
jobs := []job{
{name: "01-nvidia-smi-q.log", cmd: []string{"nvidia-smi", "-q"}},
{name: "02-dmidecode-baseboard.log", cmd: []string{"dmidecode", "-t", "baseboard"}},
{name: "03-dmidecode-system.log", cmd: []string{"dmidecode", "-t", "system"}},
{name: "04-nvidia-bug-report.log", cmd: []string{"nvidia-bug-report.sh", "--output", filepath.Join(runDir, "nvidia-bug-report.log")}},
}
var summary strings.Builder
fmt.Fprintf(&summary, "run_at_utc=%s\n", time.Now().UTC().Format(time.RFC3339))
for _, job := range jobs {
out, err := exec.Command(job.cmd[0], job.cmd[1:]...).CombinedOutput()
if writeErr := os.WriteFile(filepath.Join(runDir, job.name), out, 0644); writeErr != nil {
return "", writeErr
}
rc := 0
if err != nil {
rc = 1
}
fmt.Fprintf(&summary, "%s_rc=%d\n", strings.TrimSuffix(strings.TrimPrefix(job.name, "0"), ".log"), rc)
}
if err := os.WriteFile(filepath.Join(runDir, "summary.txt"), []byte(summary.String()), 0644); err != nil {
return "", err
}
archive := filepath.Join(baseDir, "gpu-nvidia-"+ts+".tar.gz")
if err := createTarGz(archive, runDir); err != nil {
return "", err
}
return archive, nil
}
func createTarGz(dst, srcDir string) error {
file, err := os.Create(dst)
if err != nil {
return err
}
defer file.Close()
gz := gzip.NewWriter(file)
defer gz.Close()
tw := tar.NewWriter(gz)
defer tw.Close()
base := filepath.Dir(srcDir)
return filepath.Walk(srcDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
header, err := tar.FileInfoHeader(info, "")
if err != nil {
return err
}
rel, err := filepath.Rel(base, path)
if err != nil {
return err
}
header.Name = rel
if err := tw.WriteHeader(header); err != nil {
return err
}
file, err := os.Open(path)
if err != nil {
return err
}
defer file.Close()
_, err = io.Copy(tw, file)
return err
})
}
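The archive produced by `createTarGz` keeps paths relative to the parent of the run directory, so entries come out as `gpu-nvidia-<ts>/...`. A small read-back sketch for verifying that layout (`tarGzNames` and `demo` are hypothetical helpers, not part of the package):

```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// tarGzNames lists the entry names inside a .tar.gz archive.
func tarGzNames(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	gz, err := gzip.NewReader(f)
	if err != nil {
		return nil, err
	}
	defer gz.Close()
	tr := tar.NewReader(gz)
	var names []string
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		names = append(names, hdr.Name)
	}
	return names, nil
}

// demo writes a one-file archive in the same run-dir layout and reads it back.
func demo() ([]string, error) {
	dir, err := os.MkdirTemp("", "pack")
	if err != nil {
		return nil, err
	}
	defer os.RemoveAll(dir)
	archive := filepath.Join(dir, "pack.tar.gz")
	out, err := os.Create(archive)
	if err != nil {
		return nil, err
	}
	gz := gzip.NewWriter(out)
	tw := tar.NewWriter(gz)
	data := []byte("ok\n")
	if err := tw.WriteHeader(&tar.Header{Name: "gpu-nvidia-demo/summary.txt", Mode: 0644, Size: int64(len(data))}); err != nil {
		return nil, err
	}
	if _, err := tw.Write(data); err != nil {
		return nil, err
	}
	tw.Close()
	gz.Close()
	out.Close()
	return tarGzNames(archive)
}

func main() {
	names, err := demo()
	fmt.Println(names, err)
}
```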


@@ -0,0 +1,54 @@
package platform
import (
"os/exec"
"path/filepath"
"sort"
"strings"
)
func (s *System) ListBeeServices() ([]string, error) {
seen := map[string]bool{}
var out []string
for _, pattern := range []string{"/etc/systemd/system/bee-*.service", "/lib/systemd/system/bee-*.service"} {
matches, err := filepath.Glob(pattern)
if err != nil {
return nil, err
}
for _, match := range matches {
name := strings.TrimSuffix(filepath.Base(match), ".service")
if !seen[name] {
seen[name] = true
out = append(out, name)
}
}
}
sort.Strings(out)
return out, nil
}
func (s *System) ServiceState(name string) string {
raw, err := exec.Command("systemctl", "is-active", name).CombinedOutput()
if err == nil {
return strings.TrimSpace(string(raw))
}
raw, err = exec.Command("systemctl", "show", name, "--property=ActiveState", "--value").CombinedOutput()
if err != nil {
return "unknown"
}
state := strings.TrimSpace(string(raw))
if state == "" {
return "unknown"
}
return state
}
func (s *System) ServiceDo(name string, action ServiceAction) (string, error) {
raw, err := exec.Command("systemctl", string(action), name).CombinedOutput()
return string(raw), err
}
func (s *System) ServiceStatus(name string) (string, error) {
raw, err := exec.Command("systemctl", "status", name, "--no-pager").CombinedOutput()
return string(raw), err
}


@@ -0,0 +1,49 @@
package platform
import "testing"
func TestSplitQuotedFields(t *testing.T) {
t.Parallel()
line := `NAME="sdb1" TYPE="part" LABEL="BEE EXPORT" MODEL="USB DISK 3.0"`
got := splitQuotedFields(line)
want := []string{
`NAME="sdb1"`,
`TYPE="part"`,
`LABEL="BEE EXPORT"`,
`MODEL="USB DISK 3.0"`,
}
if len(got) != len(want) {
t.Fatalf("len(got)=%d len(want)=%d; got=%q", len(got), len(want), got)
}
for i := range want {
if got[i] != want[i] {
t.Fatalf("got[%d]=%q want %q", i, got[i], want[i])
}
}
}
func TestParseLSBLKPairs(t *testing.T) {
t.Parallel()
line := `NAME="sdb1" TYPE="part" PKNAME="sdb" RM="1" FSTYPE="vfat" MOUNTPOINT="" SIZE="57.3G" LABEL="BEE EXPORT" MODEL="USB DISK 3.0"`
got := parseLSBLKPairs(line)
checks := map[string]string{
"NAME": "sdb1",
"TYPE": "part",
"PKNAME": "sdb",
"RM": "1",
"FSTYPE": "vfat",
"MOUNTPOINT": "",
"SIZE": "57.3G",
"LABEL": "BEE EXPORT",
"MODEL": "USB DISK 3.0",
}
for key, want := range checks {
if got[key] != want {
t.Fatalf("got[%s]=%q want %q", key, got[key], want)
}
}
}


@@ -0,0 +1,29 @@
package platform
import (
"fmt"
"os"
"os/exec"
"strings"
)
func (s *System) TailFile(path string, lines int) string {
raw, err := os.ReadFile(path)
if err != nil {
return fmt.Sprintf("read %s: %v", path, err)
}
all := strings.Split(strings.TrimRight(string(raw), "\n"), "\n")
if lines <= 0 || len(all) <= lines {
return string(raw)
}
return strings.Join(all[len(all)-lines:], "\n")
}
func (s *System) CheckTools(names []string) []ToolStatus {
out := make([]ToolStatus, 0, len(names))
for _, name := range names {
path, err := exec.LookPath(name)
out = append(out, ToolStatus{Name: name, Path: path, OK: err == nil})
}
return out
}
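`TailFile` trims to the last N lines but returns the whole file when N is non-positive or the file is short. The same trimming logic on an in-memory string (the helper name `tailLines` is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// tailLines keeps the last n lines of s, or returns s unchanged when
// n <= 0 or the text has at most n lines (matching TailFile above).
func tailLines(s string, n int) string {
	all := strings.Split(strings.TrimRight(s, "\n"), "\n")
	if n <= 0 || len(all) <= n {
		return s
	}
	return strings.Join(all[len(all)-n:], "\n")
}

func main() {
	log := "a\nb\nc\nd\n"
	fmt.Printf("%q\n", tailLines(log, 2))  // "c\nd"
	fmt.Printf("%q\n", tailLines(log, 10)) // full text, trailing newline kept
}
```

One subtlety carried over from the original: the trimmed result drops the trailing newline, while the short-file path preserves it.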


@@ -0,0 +1,44 @@
package platform
type System struct{}
type InterfaceInfo struct {
Name string
State string
IPv4 []string
}
type ServiceAction string
const (
ServiceStart ServiceAction = "start"
ServiceStop ServiceAction = "stop"
ServiceRestart ServiceAction = "restart"
)
type StaticIPv4Config struct {
Interface string
Address string
Prefix string
Gateway string
DNS []string
}
type RemovableTarget struct {
Device string
FSType string
Size string
Label string
Model string
Mountpoint string
}
type ToolStatus struct {
Name string
Path string
OK bool
}
func New() *System {
return &System{}
}


@@ -0,0 +1,77 @@
package runtimeenv
import (
"fmt"
"os"
"strings"
)
type Mode string
const (
ModeAuto Mode = "auto"
ModeLocal Mode = "local"
ModeLiveCD Mode = "livecd"
)
type Info struct {
Mode Mode
Detected bool
Reason string
}
func ParseMode(raw string) (Mode, error) {
mode := Mode(strings.TrimSpace(strings.ToLower(raw)))
switch mode {
case "", ModeAuto:
return ModeAuto, nil
case ModeLocal, ModeLiveCD:
return mode, nil
default:
return "", fmt.Errorf("invalid runtime %q — use auto, local, or livecd", raw)
}
}
func Detect(flagValue string) (Info, error) {
flagMode, err := ParseMode(flagValue)
if err != nil {
return Info{}, err
}
if flagMode != ModeAuto {
return Info{Mode: flagMode, Reason: "flag"}, nil
}
if envMode, ok := getenvMode("BEE_RUNTIME"); ok {
return Info{Mode: envMode, Reason: "env:BEE_RUNTIME"}, nil
}
if fileExists("/etc/bee-release") {
return Info{Mode: ModeLiveCD, Detected: true, Reason: "marker:/etc/bee-release"}, nil
}
if data, err := os.ReadFile("/proc/cmdline"); err == nil {
cmdline := string(data)
if strings.Contains(cmdline, " boot=live") || strings.HasPrefix(cmdline, "boot=live ") || strings.Contains(cmdline, "live-media") {
return Info{Mode: ModeLiveCD, Detected: true, Reason: "kernel:boot=live"}, nil
}
}
return Info{Mode: ModeLocal, Detected: true, Reason: "default:local"}, nil
}
func getenvMode(name string) (Mode, bool) {
value := strings.TrimSpace(os.Getenv(name))
if value == "" {
return "", false
}
mode, err := ParseMode(value)
if err != nil || mode == ModeAuto {
return "", false
}
return mode, true
}
func fileExists(path string) bool {
info, err := os.Stat(path)
return err == nil && !info.IsDir()
}


@@ -0,0 +1,67 @@
package runtimeenv
import (
"os"
"testing"
)
func TestParseMode(t *testing.T) {
t.Parallel()
tests := []struct {
in string
want Mode
ok bool
}{
{in: "", want: ModeAuto, ok: true},
{in: "auto", want: ModeAuto, ok: true},
{in: "local", want: ModeLocal, ok: true},
{in: "livecd", want: ModeLiveCD, ok: true},
{in: "bad", ok: false},
}
for _, test := range tests {
got, err := ParseMode(test.in)
if test.ok && err != nil {
t.Fatalf("ParseMode(%q): %v", test.in, err)
}
if !test.ok && err == nil {
t.Fatalf("ParseMode(%q): expected error", test.in)
}
if test.ok && got != test.want {
t.Fatalf("ParseMode(%q): got %q want %q", test.in, got, test.want)
}
}
}
func TestDetectHonorsFlag(t *testing.T) {
t.Parallel()
info, err := Detect("livecd")
if err != nil {
t.Fatalf("Detect(flag): %v", err)
}
if info.Mode != ModeLiveCD || info.Reason != "flag" {
t.Fatalf("unexpected info: %+v", info)
}
}
func TestDetectHonorsEnv(t *testing.T) {
t.Parallel()
old := os.Getenv("BEE_RUNTIME")
t.Cleanup(func() {
_ = os.Setenv("BEE_RUNTIME", old)
})
if err := os.Setenv("BEE_RUNTIME", "local"); err != nil {
t.Fatalf("Setenv: %v", err)
}
info, err := Detect("auto")
if err != nil {
t.Fatalf("Detect(env): %v", err)
}
if info.Mode != ModeLocal || info.Reason != "env:BEE_RUNTIME" {
t.Fatalf("unexpected info: %+v", info)
}
}


@@ -2,7 +2,7 @@
// core/internal/ingest/parser_hardware.go. No import dependency on core.
package schema
- // HardwareIngestRequest is the top-level output document produced by the audit binary.
+ // HardwareIngestRequest is the top-level output document produced by `bee audit`.
// It is accepted as-is by the core /api/ingest/hardware endpoint.
type HardwareIngestRequest struct {
Filename *string `json:"filename"`


@@ -0,0 +1,98 @@
package tui
import tea "github.com/charmbracelet/bubbletea"
func (m model) updateStaticForm(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
switch msg.String() {
case "esc":
m.screen = screenNetwork
m.formFields = nil
m.formIndex = 0
return m, nil
case "up", "shift+tab":
if m.formIndex > 0 {
m.formIndex--
}
case "down", "tab":
if m.formIndex < len(m.formFields)-1 {
m.formIndex++
}
case "enter":
if m.formIndex < len(m.formFields)-1 {
m.formIndex++
return m, nil
}
cfg := m.app.ParseStaticIPv4Config(m.selectedIface, []string{
m.formFields[0].Value,
m.formFields[1].Value,
m.formFields[2].Value,
m.formFields[3].Value,
})
m.busy = true
return m, func() tea.Msg {
result, err := m.app.SetStaticIPv4Result(cfg)
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenNetwork}
}
case "backspace":
field := &m.formFields[m.formIndex]
if len(field.Value) > 0 {
field.Value = field.Value[:len(field.Value)-1]
}
default:
if msg.Type == tea.KeyRunes && len(msg.Runes) > 0 {
m.formFields[m.formIndex].Value += string(msg.Runes)
}
}
return m, nil
}
func (m model) updateConfirm(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
switch msg.String() {
case "left", "up", "tab":
if m.cursor > 0 {
m.cursor--
}
case "right", "down":
if m.cursor < 1 {
m.cursor++
}
case "esc":
m.screen = m.confirmCancelTarget()
m.cursor = 0
return m, nil
case "enter":
if m.cursor == 1 {
m.screen = m.confirmCancelTarget()
m.cursor = 0
return m, nil
}
// Mark busy only when an action actually dispatches; otherwise an
// unknown pendingAction would leave the UI stuck in the busy state.
switch m.pendingAction {
case actionExportAudit:
m.busy = true
target := *m.selectedTarget
return m, func() tea.Msg {
result, err := m.app.ExportLatestAuditResult(target)
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenMain}
}
case actionRunNvidiaSAT:
m.busy = true
return m, func() tea.Msg {
result, err := m.app.RunNvidiaAcceptancePackResult("")
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenAcceptance}
}
}
case "ctrl+c":
return m, tea.Quit
}
return m, nil
}
func (m model) confirmCancelTarget() screen {
switch m.pendingAction {
case actionExportAudit:
return screenExportTargets
case actionRunNvidiaSAT:
return screenAcceptance
default:
return screenMain
}
}


@@ -0,0 +1,25 @@
package tui
import "bee/audit/internal/platform"
type resultMsg struct {
title string
body string
err error
back screen
}
type servicesMsg struct {
services []string
err error
}
type interfacesMsg struct {
ifaces []platform.InterfaceInfo
err error
}
type exportTargetsMsg struct {
targets []platform.RemovableTarget
err error
}


@@ -0,0 +1,14 @@
package tui
import tea "github.com/charmbracelet/bubbletea"
func (m model) handleAcceptanceMenu() (tea.Model, tea.Cmd) {
if m.cursor == 1 {
m.screen = screenMain
m.cursor = 0
return m, nil
}
m.pendingAction = actionRunNvidiaSAT
m.screen = screenConfirm
return m, nil
}


@@ -0,0 +1,14 @@
package tui
import tea "github.com/charmbracelet/bubbletea"
func (m model) handleExportTargetsMenu() (tea.Model, tea.Cmd) {
if len(m.targets) == 0 {
return m, resultCmd("Export audit", "No removable filesystems found", nil, screenMain)
}
target := m.targets[m.cursor]
m.selectedTarget = &target
m.pendingAction = actionExportAudit
m.screen = screenConfirm
return m, nil
}


@@ -0,0 +1,51 @@
package tui
import (
tea "github.com/charmbracelet/bubbletea"
)
func (m model) handleMainMenu() (tea.Model, tea.Cmd) {
switch m.cursor {
case 0:
m.screen = screenNetwork
m.cursor = 0
return m, nil
case 1:
m.busy = true
return m, func() tea.Msg {
services, err := m.app.ListBeeServices()
return servicesMsg{services: services, err: err}
}
case 2:
m.screen = screenAcceptance
m.cursor = 0
return m, nil
case 3:
m.busy = true
return m, func() tea.Msg {
result, err := m.app.RunAuditNow(m.runtimeMode)
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenMain}
}
case 4:
m.busy = true
return m, func() tea.Msg {
targets, err := m.app.ListRemovableTargets()
return exportTargetsMsg{targets: targets, err: err}
}
case 5:
m.busy = true
return m, func() tea.Msg {
result := m.app.ToolCheckResult([]string{"dmidecode", "smartctl", "nvme", "ipmitool", "lspci", "bee", "nvidia-smi", "dhclient", "lsblk", "mount"})
return resultMsg{title: result.Title, body: result.Body, back: screenMain}
}
case 6:
m.busy = true
return m, func() tea.Msg {
result := m.app.AuditLogTailResult()
return resultMsg{title: result.Title, body: result.Body, back: screenMain}
}
case 7:
return m, tea.Quit
}
return m, nil
}


@@ -0,0 +1,71 @@
package tui
import (
"strings"
tea "github.com/charmbracelet/bubbletea"
)
func (m model) handleNetworkMenu() (tea.Model, tea.Cmd) {
switch m.cursor {
case 0:
m.busy = true
return m, func() tea.Msg {
result, err := m.app.NetworkStatus()
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenNetwork}
}
case 1:
m.busy = true
return m, func() tea.Msg {
result, err := m.app.DHCPAllResult()
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenNetwork}
}
case 2:
m.pendingAction = actionDHCPOne
m.busy = true
return m, func() tea.Msg {
ifaces, err := m.app.ListInterfaces()
return interfacesMsg{ifaces: ifaces, err: err}
}
case 3:
m.pendingAction = actionStaticIPv4
m.busy = true
return m, func() tea.Msg {
ifaces, err := m.app.ListInterfaces()
return interfacesMsg{ifaces: ifaces, err: err}
}
case 4:
m.screen = screenMain
m.cursor = 0
return m, nil
}
return m, nil
}
func (m model) handleInterfacePickMenu() (tea.Model, tea.Cmd) {
if len(m.interfaces) == 0 {
return m, resultCmd("interfaces", "No physical interfaces found", nil, screenNetwork)
}
m.selectedIface = m.interfaces[m.cursor].Name
switch m.pendingAction {
case actionDHCPOne:
m.busy = true
return m, func() tea.Msg {
result, err := m.app.DHCPOneResult(m.selectedIface)
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenNetwork}
}
case actionStaticIPv4:
defaults := m.app.DefaultStaticIPv4FormFields(m.selectedIface)
m.formFields = []formField{
{Label: "IPv4 address", Value: defaults[0]},
{Label: "Prefix", Value: defaults[1]},
{Label: "Gateway", Value: strings.TrimSpace(defaults[2])},
{Label: "DNS (space-separated)", Value: defaults[3]},
}
m.formIndex = 0
m.screen = screenStaticForm
return m, nil
default:
return m, nil
}
}


@@ -0,0 +1,46 @@
package tui
import (
"bee/audit/internal/platform"
tea "github.com/charmbracelet/bubbletea"
)
func (m model) handleServicesMenu() (tea.Model, tea.Cmd) {
if len(m.services) == 0 {
return m, resultCmd("bee services", "No bee-* services found", nil, screenMain)
}
m.selectedService = m.services[m.cursor]
m.screen = screenServiceAction
m.cursor = 0
return m, nil
}
func (m model) handleServiceActionMenu() (tea.Model, tea.Cmd) {
action := m.serviceMenu[m.cursor]
if action == "back" {
m.screen = screenServices
m.cursor = 0
return m, nil
}
m.busy = true
return m, func() tea.Msg {
switch action {
case "status":
result, err := m.app.ServiceStatusResult(m.selectedService)
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenServiceAction}
case "restart":
result, err := m.app.ServiceActionResult(m.selectedService, platform.ServiceRestart)
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenServiceAction}
case "start":
result, err := m.app.ServiceActionResult(m.selectedService, platform.ServiceStart)
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenServiceAction}
case "stop":
result, err := m.app.ServiceActionResult(m.selectedService, platform.ServiceStop)
return resultMsg{title: result.Title, body: result.Body, err: err, back: screenServiceAction}
default:
return resultMsg{title: "service", body: "unknown action", back: screenServiceAction}
}
}
}


@@ -0,0 +1,349 @@
package tui
import (
"testing"
"bee/audit/internal/app"
"bee/audit/internal/platform"
"bee/audit/internal/runtimeenv"
tea "github.com/charmbracelet/bubbletea"
)
func newTestModel() model {
return newModel(app.New(platform.New()), runtimeenv.ModeLocal)
}
func sendKey(t *testing.T, m model, key tea.KeyType) model {
t.Helper()
next, _ := m.Update(tea.KeyMsg{Type: key})
return next.(model)
}
func TestUpdateMainMenuCursorNavigation(t *testing.T) {
t.Parallel()
m := newTestModel()
m = sendKey(t, m, tea.KeyDown)
if m.cursor != 1 {
t.Fatalf("cursor=%d want 1 after down", m.cursor)
}
m = sendKey(t, m, tea.KeyDown)
if m.cursor != 2 {
t.Fatalf("cursor=%d want 2 after second down", m.cursor)
}
m = sendKey(t, m, tea.KeyUp)
if m.cursor != 1 {
t.Fatalf("cursor=%d want 1 after up", m.cursor)
}
}
func TestUpdateMainMenuEnterActions(t *testing.T) {
t.Parallel()
tests := []struct {
name string
cursor int
wantScreen screen
wantBusy bool
wantCmd bool
}{
{name: "network", cursor: 0, wantScreen: screenNetwork},
{name: "services", cursor: 1, wantScreen: screenMain, wantBusy: true, wantCmd: true},
{name: "acceptance", cursor: 2, wantScreen: screenAcceptance},
{name: "run audit", cursor: 3, wantScreen: screenMain, wantBusy: true, wantCmd: true},
{name: "export", cursor: 4, wantScreen: screenMain, wantBusy: true, wantCmd: true},
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
m := newTestModel()
m.cursor = test.cursor
next, cmd := m.Update(tea.KeyMsg{Type: tea.KeyEnter})
got := next.(model)
if got.screen != test.wantScreen {
t.Fatalf("screen=%q want %q", got.screen, test.wantScreen)
}
if got.busy != test.wantBusy {
t.Fatalf("busy=%v want %v", got.busy, test.wantBusy)
}
if (cmd != nil) != test.wantCmd {
t.Fatalf("cmd present=%v want %v", cmd != nil, test.wantCmd)
}
})
}
}
func TestUpdateConfirmCancelViaKeys(t *testing.T) {
t.Parallel()
m := newTestModel()
m.screen = screenConfirm
m.pendingAction = actionRunNvidiaSAT
next, _ := m.Update(tea.KeyMsg{Type: tea.KeyRight})
got := next.(model)
if got.cursor != 1 {
t.Fatalf("cursor=%d want 1 after right", got.cursor)
}
next, _ = got.Update(tea.KeyMsg{Type: tea.KeyEnter})
got = next.(model)
if got.screen != screenAcceptance {
t.Fatalf("screen=%q want %q", got.screen, screenAcceptance)
}
if got.cursor != 0 {
t.Fatalf("cursor=%d want 0 after cancel", got.cursor)
}
}
func TestMainMenuSimpleTransitions(t *testing.T) {
t.Parallel()
tests := []struct {
name string
cursor int
wantScreen screen
}{
{name: "network", cursor: 0, wantScreen: screenNetwork},
{name: "acceptance", cursor: 2, wantScreen: screenAcceptance},
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
m := newTestModel()
m.cursor = test.cursor
next, cmd := m.handleMainMenu()
got := next.(model)
if cmd != nil {
t.Fatalf("expected nil cmd for %s", test.name)
}
if got.screen != test.wantScreen {
t.Fatalf("screen=%q want %q", got.screen, test.wantScreen)
}
if got.cursor != 0 {
t.Fatalf("cursor=%d want 0", got.cursor)
}
})
}
}
func TestMainMenuAsyncActionsSetBusy(t *testing.T) {
t.Parallel()
tests := []struct {
name string
cursor int
}{
{name: "services", cursor: 1},
{name: "run audit", cursor: 3},
{name: "export", cursor: 4},
{name: "check tools", cursor: 5},
{name: "log tail", cursor: 6},
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
m := newTestModel()
m.cursor = test.cursor
next, cmd := m.handleMainMenu()
got := next.(model)
if !got.busy {
t.Fatalf("busy=false for %s", test.name)
}
if cmd == nil {
t.Fatalf("expected async cmd for %s", test.name)
}
})
}
}
func TestEscapeNavigation(t *testing.T) {
t.Parallel()
tests := []struct {
name string
screen screen
wantScreen screen
}{
{name: "network to main", screen: screenNetwork, wantScreen: screenMain},
{name: "services to main", screen: screenServices, wantScreen: screenMain},
{name: "acceptance to main", screen: screenAcceptance, wantScreen: screenMain},
{name: "service action to services", screen: screenServiceAction, wantScreen: screenServices},
{name: "export targets to main", screen: screenExportTargets, wantScreen: screenMain},
{name: "interface pick to network", screen: screenInterfacePick, wantScreen: screenNetwork},
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
m := newTestModel()
m.screen = test.screen
m.cursor = 3
next, _ := m.updateKey(tea.KeyMsg{Type: tea.KeyEsc})
got := next.(model)
if got.screen != test.wantScreen {
t.Fatalf("screen=%q want %q", got.screen, test.wantScreen)
}
if got.cursor != 0 {
t.Fatalf("cursor=%d want 0", got.cursor)
}
})
}
}
func TestOutputScreenReturnsToPreviousScreen(t *testing.T) {
t.Parallel()
m := newTestModel()
m.screen = screenOutput
m.prevScreen = screenNetwork
m.title = "title"
m.body = "body"
next, _ := m.updateKey(tea.KeyMsg{Type: tea.KeyEnter})
got := next.(model)
if got.screen != screenNetwork {
t.Fatalf("screen=%q want %q", got.screen, screenNetwork)
}
if got.title != "" || got.body != "" {
t.Fatalf("expected output state cleared, got title=%q body=%q", got.title, got.body)
}
}
func TestAcceptanceConfirmFlow(t *testing.T) {
t.Parallel()
m := newTestModel()
m.screen = screenAcceptance
m.cursor = 0
next, cmd := m.handleAcceptanceMenu()
got := next.(model)
if cmd != nil {
t.Fatal("expected nil cmd")
}
if got.screen != screenConfirm {
t.Fatalf("screen=%q want %q", got.screen, screenConfirm)
}
if got.pendingAction != actionRunNvidiaSAT {
t.Fatalf("pendingAction=%q want %q", got.pendingAction, actionRunNvidiaSAT)
}
next, _ = got.updateConfirm(tea.KeyMsg{Type: tea.KeyEsc})
got = next.(model)
if got.screen != screenAcceptance {
t.Fatalf("screen after esc=%q want %q", got.screen, screenAcceptance)
}
}
func TestExportTargetSelectionOpensConfirm(t *testing.T) {
t.Parallel()
m := newTestModel()
m.screen = screenExportTargets
m.targets = []platform.RemovableTarget{{Device: "/dev/sdb1", FSType: "vfat", Size: "16G"}}
next, cmd := m.handleExportTargetsMenu()
got := next.(model)
if cmd != nil {
t.Fatal("expected nil cmd")
}
if got.screen != screenConfirm {
t.Fatalf("screen=%q want %q", got.screen, screenConfirm)
}
if got.pendingAction != actionExportAudit {
t.Fatalf("pendingAction=%q want %q", got.pendingAction, actionExportAudit)
}
if got.selectedTarget == nil || got.selectedTarget.Device != "/dev/sdb1" {
t.Fatalf("selectedTarget=%+v want /dev/sdb1", got.selectedTarget)
}
}
func TestInterfacePickStaticIPv4OpensForm(t *testing.T) {
t.Parallel()
m := newTestModel()
m.pendingAction = actionStaticIPv4
m.interfaces = []platform.InterfaceInfo{{Name: "eth0"}}
next, cmd := m.handleInterfacePickMenu()
got := next.(model)
if cmd != nil {
t.Fatal("expected nil cmd")
}
if got.screen != screenStaticForm {
t.Fatalf("screen=%q want %q", got.screen, screenStaticForm)
}
if got.selectedIface != "eth0" {
t.Fatalf("selectedIface=%q want eth0", got.selectedIface)
}
if len(got.formFields) != 4 {
t.Fatalf("len(formFields)=%d want 4", len(got.formFields))
}
}
func TestResultMsgUsesExplicitBackScreen(t *testing.T) {
t.Parallel()
m := newTestModel()
m.screen = screenConfirm
next, _ := m.Update(resultMsg{title: "done", body: "ok", back: screenNetwork})
got := next.(model)
if got.screen != screenOutput {
t.Fatalf("screen=%q want %q", got.screen, screenOutput)
}
if got.prevScreen != screenNetwork {
t.Fatalf("prevScreen=%q want %q", got.prevScreen, screenNetwork)
}
}
func TestConfirmCancelTarget(t *testing.T) {
t.Parallel()
m := newTestModel()
m.pendingAction = actionExportAudit
if got := m.confirmCancelTarget(); got != screenExportTargets {
t.Fatalf("export cancel target=%q want %q", got, screenExportTargets)
}
m.pendingAction = actionRunNvidiaSAT
if got := m.confirmCancelTarget(); got != screenAcceptance {
t.Fatalf("sat cancel target=%q want %q", got, screenAcceptance)
}
m.pendingAction = actionNone
if got := m.confirmCancelTarget(); got != screenMain {
t.Fatalf("default cancel target=%q want %q", got, screenMain)
}
}

audit/internal/tui/types.go

@@ -0,0 +1,111 @@
package tui
import (
"bee/audit/internal/app"
"bee/audit/internal/platform"
"bee/audit/internal/runtimeenv"
tea "github.com/charmbracelet/bubbletea"
)
type screen string
const (
screenMain screen = "main"
screenNetwork screen = "network"
screenInterfacePick screen = "interface_pick"
screenServices screen = "services"
screenServiceAction screen = "service_action"
screenAcceptance screen = "acceptance"
screenExportTargets screen = "export_targets"
screenOutput screen = "output"
screenStaticForm screen = "static_form"
screenConfirm screen = "confirm"
)
type actionKind string
const (
actionNone actionKind = ""
actionDHCPOne actionKind = "dhcp_one"
actionStaticIPv4 actionKind = "static_ipv4"
actionExportAudit actionKind = "export_audit"
actionRunNvidiaSAT actionKind = "run_nvidia_sat"
)
type model struct {
app *app.App
runtimeMode runtimeenv.Mode
screen screen
prevScreen screen
cursor int
busy bool
title string
body string
mainMenu []string
networkMenu []string
serviceMenu []string
services []string
interfaces []platform.InterfaceInfo
targets []platform.RemovableTarget
selectedService string
selectedIface string
selectedTarget *platform.RemovableTarget
pendingAction actionKind
formFields []formField
formIndex int
}
type formField struct {
Label string
Value string
}
func Run(application *app.App, runtimeMode runtimeenv.Mode) error {
options := []tea.ProgramOption{}
if runtimeMode != runtimeenv.ModeLiveCD {
options = append(options, tea.WithAltScreen())
}
program := tea.NewProgram(newModel(application, runtimeMode), options...)
_, err := program.Run()
return err
}
func newModel(application *app.App, runtimeMode runtimeenv.Mode) model {
return model{
app: application,
runtimeMode: runtimeMode,
screen: screenMain,
mainMenu: []string{
"Network setup",
"bee service management",
"System acceptance tests",
"Run audit now",
"Export audit to removable drive",
"Check required tools",
"Show last audit log tail",
"Exit",
},
networkMenu: []string{
"Show network status",
"DHCP on all interfaces",
"DHCP on one interface",
"Set static IPv4 on one interface",
"Back",
},
serviceMenu: []string{
"status",
"restart",
"start",
"stop",
"back",
},
}
}
func (m model) Init() tea.Cmd {
return nil
}


@@ -0,0 +1,154 @@
package tui
import (
"fmt"
"strings"
tea "github.com/charmbracelet/bubbletea"
)
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case tea.KeyMsg:
if m.busy {
switch msg.String() {
case "ctrl+c":
return m, tea.Quit
default:
return m, nil
}
}
return m.updateKey(msg)
case resultMsg:
m.busy = false
m.title = msg.title
if msg.err != nil {
m.body = fmt.Sprintf("%s\n\nERROR: %v", strings.TrimSpace(msg.body), msg.err)
} else {
m.body = msg.body
}
if msg.back != "" {
m.prevScreen = msg.back
} else {
m.prevScreen = m.screen
}
m.screen = screenOutput
m.cursor = 0
return m, nil
case servicesMsg:
m.busy = false
if msg.err != nil {
m.title = "bee services"
m.body = msg.err.Error()
m.prevScreen = screenMain
m.screen = screenOutput
return m, nil
}
m.services = msg.services
m.screen = screenServices
m.cursor = 0
return m, nil
case interfacesMsg:
m.busy = false
if msg.err != nil {
m.title = "interfaces"
m.body = msg.err.Error()
m.prevScreen = screenMain
m.screen = screenOutput
return m, nil
}
m.interfaces = msg.ifaces
m.screen = screenInterfacePick
m.cursor = 0
return m, nil
case exportTargetsMsg:
m.busy = false
if msg.err != nil {
m.title = "export"
m.body = msg.err.Error()
m.prevScreen = screenMain
m.screen = screenOutput
return m, nil
}
m.targets = msg.targets
m.screen = screenExportTargets
m.cursor = 0
return m, nil
}
return m, nil
}
func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
switch m.screen {
case screenMain:
return m.updateMenu(msg, len(m.mainMenu), m.handleMainMenu)
case screenNetwork:
return m.updateMenu(msg, len(m.networkMenu), m.handleNetworkMenu)
case screenServices:
return m.updateMenu(msg, len(m.services), m.handleServicesMenu)
case screenServiceAction:
return m.updateMenu(msg, len(m.serviceMenu), m.handleServiceActionMenu)
case screenAcceptance:
return m.updateMenu(msg, 2, m.handleAcceptanceMenu)
case screenExportTargets:
return m.updateMenu(msg, len(m.targets), m.handleExportTargetsMenu)
case screenInterfacePick:
return m.updateMenu(msg, len(m.interfaces), m.handleInterfacePickMenu)
case screenOutput:
switch msg.String() {
case "esc", "enter", "q":
m.screen = m.prevScreen
m.body = ""
m.title = ""
return m, nil
case "ctrl+c":
return m, tea.Quit
}
case screenStaticForm:
return m.updateStaticForm(msg)
case screenConfirm:
return m.updateConfirm(msg)
}
if msg.String() == "ctrl+c" {
return m, tea.Quit
}
return m, nil
}
func (m model) updateMenu(msg tea.KeyMsg, size int, onEnter func() (tea.Model, tea.Cmd)) (tea.Model, tea.Cmd) {
if size == 0 {
size = 1
}
switch msg.String() {
case "up", "k":
if m.cursor > 0 {
m.cursor--
}
case "down", "j":
if m.cursor < size-1 {
m.cursor++
}
case "enter":
return onEnter()
case "esc":
switch m.screen {
case screenNetwork, screenServices, screenAcceptance:
m.screen = screenMain
m.cursor = 0
case screenServiceAction:
m.screen = screenServices
m.cursor = 0
case screenExportTargets:
m.screen = screenMain
m.cursor = 0
case screenInterfacePick:
m.screen = screenNetwork
m.cursor = 0
}
case "q", "ctrl+c":
return m, tea.Quit
}
return m, nil
}
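The `esc` branch of `updateMenu` encodes the screen hierarchy: nested screens fall back one level, everything else stays put. A standalone sketch of that mapping (`escTarget` is a hypothetical helper; the actual code handles this inline in the switch above):

```go
package main

import "fmt"

type screen string

const (
	screenMain          screen = "main"
	screenNetwork       screen = "network"
	screenServices      screen = "services"
	screenServiceAction screen = "service_action"
	screenAcceptance    screen = "acceptance"
	screenExportTargets screen = "export_targets"
	screenInterfacePick screen = "interface_pick"
)

// escTarget returns the screen esc falls back to, mirroring updateMenu:
// top-level menus return to main, service actions to the service list,
// interface picking to the network menu.
func escTarget(s screen) screen {
	switch s {
	case screenNetwork, screenServices, screenAcceptance, screenExportTargets:
		return screenMain
	case screenServiceAction:
		return screenServices
	case screenInterfacePick:
		return screenNetwork
	default:
		return s
	}
}

func main() {
	fmt.Println(escTarget(screenServiceAction))
	fmt.Println(escTarget(screenInterfacePick))
	fmt.Println(escTarget(screenMain))
}
```

This is the same hierarchy the `TestEscapeNavigation` table exercises.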

audit/internal/tui/view.go

@@ -0,0 +1,137 @@
package tui
import (
"fmt"
"strings"
"bee/audit/internal/platform"
tea "github.com/charmbracelet/bubbletea"
)
func (m model) View() string {
if m.busy {
return "bee\n\nWorking...\n"
}
switch m.screen {
case screenMain:
return renderMenu("bee", "Select action", m.mainMenu, m.cursor)
case screenNetwork:
return renderMenu("Network", "Select action", m.networkMenu, m.cursor)
case screenServices:
return renderMenu("bee services", "Select service", m.services, m.cursor)
case screenServiceAction:
items := make([]string, len(m.serviceMenu))
copy(items, m.serviceMenu)
return renderMenu("Service: "+m.selectedService, "Select action", items, m.cursor)
case screenAcceptance:
return renderMenu("System acceptance tests", "Select action", []string{"Run NVIDIA command pack", "Back"}, m.cursor)
case screenExportTargets:
return renderMenu("Export audit", "Select removable filesystem", renderTargetItems(m.targets), m.cursor)
case screenInterfacePick:
return renderMenu("Interfaces", "Select interface", renderInterfaceItems(m.interfaces), m.cursor)
case screenStaticForm:
return renderForm("Static IPv4: "+m.selectedIface, m.formFields, m.formIndex)
case screenConfirm:
title, body := m.confirmBody()
return renderConfirm(title, body, m.cursor)
case screenOutput:
return fmt.Sprintf("%s\n\n%s\n\n[enter/esc] back [ctrl+c] quit\n", m.title, strings.TrimSpace(m.body))
default:
return "bee\n"
}
}
func (m model) confirmBody() (string, string) {
switch m.pendingAction {
case actionExportAudit:
if m.selectedTarget == nil {
return "Export audit", "No target selected"
}
return "Export audit", fmt.Sprintf("Copy latest audit JSON to %s?", m.selectedTarget.Device)
case actionRunNvidiaSAT:
return "NVIDIA SAT", "Run NVIDIA acceptance command pack?"
default:
return "Confirm", "Proceed?"
}
}
func renderTargetItems(targets []platform.RemovableTarget) []string {
items := make([]string, 0, len(targets))
for _, target := range targets {
desc := fmt.Sprintf("%s [%s %s]", target.Device, target.FSType, target.Size)
if target.Label != "" {
desc += " label=" + target.Label
}
if target.Mountpoint != "" {
desc += " mounted=" + target.Mountpoint
}
items = append(items, desc)
}
return items
}
func renderInterfaceItems(interfaces []platform.InterfaceInfo) []string {
items := make([]string, 0, len(interfaces))
for _, iface := range interfaces {
label := iface.Name
if len(iface.IPv4) > 0 {
label += " [" + strings.Join(iface.IPv4, ", ") + "]"
}
items = append(items, label)
}
return items
}
func renderMenu(title, subtitle string, items []string, cursor int) string {
var body strings.Builder
fmt.Fprintf(&body, "%s\n\n%s\n\n", title, subtitle)
if len(items) == 0 {
body.WriteString("(no items)\n")
} else {
for i, item := range items {
prefix := " "
if i == cursor {
prefix = "> "
}
fmt.Fprintf(&body, "%s%s\n", prefix, item)
}
}
body.WriteString("\n[↑/↓] move [enter] select [esc] back [ctrl+c] quit\n")
return body.String()
}
func renderForm(title string, fields []formField, idx int) string {
var body strings.Builder
fmt.Fprintf(&body, "%s\n\n", title)
for i, field := range fields {
prefix := " "
if i == idx {
prefix = "> "
}
fmt.Fprintf(&body, "%s%s: %s\n", prefix, field.Label, field.Value)
}
body.WriteString("\n[tab/↑/↓] move [enter] next/submit [backspace] delete [esc] cancel\n")
return body.String()
}
func renderConfirm(title, body string, cursor int) string {
options := []string{"Confirm", "Cancel"}
var out strings.Builder
fmt.Fprintf(&out, "%s\n\n%s\n\n", title, body)
for i, option := range options {
prefix := " "
if i == cursor {
prefix = "> "
}
fmt.Fprintf(&out, "%s%s\n", prefix, option)
}
out.WriteString("\n[←/→/↑/↓] move [enter] select [esc] cancel\n")
return out.String()
}
func resultCmd(title, body string, err error, back screen) tea.Cmd {
return func() tea.Msg {
return resultMsg{title: title, body: body, err: err, back: back}
}
}


@@ -4,100 +4,68 @@
**The live CD runs in an isolated network segment with no internet access.**
All binaries, kernel modules, and tools must be baked into the ISO at build time.
No package installation, no downloads, and no package manager calls are allowed at boot.
DHCP is used only for LAN (operator SSH access). Internet is NOT available.
## Boot sequence (single ISO)
`systemd` boot order:
```
local-fs.target
├── bee-sshsetup.service (enables SSH key auth; password fallback only if marker exists)
│   └── ssh.service (OpenSSH on port 22 — starts without network)
├── bee-network.service (starts `dhclient -nw` on all physical interfaces, non-blocking)
│   └── bee-nvidia.service (insmod nvidia*.ko from /usr/local/lib/nvidia/,
│       creates /dev/nvidia* nodes)
└── bee-audit.service (runs `bee audit` → /var/log/bee-audit.json,
    never blocks boot on partial collector failures)
```
**Critical invariants:**
- OpenSSH MUST start without network. `bee-sshsetup.service` runs before `ssh.service`.
- `bee-network.service` uses `dhclient -nw` (background) — network bring-up is best effort and non-blocking.
- `bee-nvidia.service` loads modules via `insmod` with absolute paths — NOT `modprobe`.
Reason: the modules are shipped in the ISO overlay under `/usr/local/lib/nvidia/`, not in the host module tree.
- `bee-audit.service` does not wait for `network-online.target`; audit is local and must run even if DHCP is broken.
- `bee-audit.service` logs audit failures but does not turn partial collector problems into a boot blocker.
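The insmod invariant can be sketched as a small command emitter (a hypothetical helper, not the shipped unit; the two-module list is an assumption, since the `.run` installer ships more modules such as nvidia-modeset):

```shell
# Hypothetical sketch of the bee-nvidia load step: insmod does no dependency
# resolution (unlike modprobe), so the module order is spelled out explicitly.
# Emits the commands; pipe the output to `sh` as root to actually load.
emit_nvidia_insmods() {
    mod_dir="${1:-/usr/local/lib/nvidia}"
    for mod in nvidia nvidia-uvm; do
        echo "insmod ${mod_dir}/${mod}.ko"
    done
}
```

Running `emit_nvidia_insmods | sh` keeps the unit itself trivial and makes the load order auditable.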
## ISO build sequence
```
build.sh [--authorized-keys /path/to/keys]
1. compile `bee` binary (skip if .go files older than binary)
2. create a temporary overlay staging dir under `dist/`
3. inject authorized_keys into staged `root/.ssh/` (or set password fallback marker)
4. copy `bee` binary → staged `/usr/local/bin/bee`
5. copy vendor binaries from `iso/vendor/` → staged `/usr/local/bin/`
(`storcli64`, `sas2ircu`, `sas3ircu`, `mstflint` — each optional)
6. `build-nvidia-module.sh`:
a. install Debian kernel headers if missing
b. download NVIDIA `.run` installer (sha256 verified, cached in `dist/`)
c. extract installer
d. build kernel modules against Debian headers
e. create `libnvidia-ml.so.1` / `libcuda.so.1` symlinks in cache
f. cache in `dist/nvidia-<version>-<kver>/`
7. inject NVIDIA `.ko` → staged `/usr/local/lib/nvidia/`
8. inject `nvidia-smi` → staged `/usr/local/bin/nvidia-smi`
9. inject `libnvidia-ml` + `libcuda` → staged `/usr/lib/`
10. write staged `/etc/bee-release` (versions + git commit)
11. patch staged `motd` with build metadata
12. copy `iso/builder/` into a temporary live-build workdir under `dist/`
13. sync staged overlay into workdir `config/includes.chroot/`
14. run `lb config && lb build` inside the temporary workdir
(either on a Debian host/VM or inside the privileged builder container)
```
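Step 1's skip check can be sketched as follows (a sketch mirroring the `find -newer` logic in `build.sh`; the helper name is hypothetical):

```shell
# Hypothetical helper sketching step 1's skip logic: rebuild only when some
# .go source is newer than the existing binary, or the binary is missing.
needs_rebuild() {
    bin="$1" srcdir="$2"
    [ -f "$bin" ] || return 0                              # no binary yet
    [ -n "$(find "$srcdir" -name '*.go' -newer "$bin" | head -1)" ]
}
```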
**Critical invariants:**
- `DEBIAN_KERNEL_ABI` in `iso/builder/VERSIONS` pins the exact kernel ABI used in BOTH places:
1. `setup-builder.sh` / `build-in-container.sh` / `build-nvidia-module.sh` — Debian kernel headers for module build
2. `auto/config``linux-image-${DEBIAN_KERNEL_ABI}` in the ISO
- NVIDIA modules go to staged `usr/local/lib/nvidia/` — NOT to `/lib/modules/<kver>/extra/`.
- The source overlay in `iso/overlay/` is treated as immutable source. Build-time files are injected only into the staged overlay.
- The live-build workdir under `dist/` is disposable; source files under `iso/builder/` stay clean.
- Container build requires `--privileged` because `live-build` uses mounts/chroots/loop devices during ISO assembly.
## Post-boot smoke test
@@ -109,26 +77,19 @@ ssh root@<ip> 'sh -s' < iso/builder/smoketest.sh
Exit code 0 = all required checks pass. All `FAIL` lines must be zero before shipping.
Key checks: NVIDIA modules loaded, `nvidia-smi` sees all GPUs, lib symlinks present,
systemd services running, audit completed with NVIDIA enrichment, LAN reachability.
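The "all `FAIL` lines must be zero" gate can be expressed as a tiny filter (a hypothetical helper; it assumes smoketest.sh prefixes failed checks with `FAIL` at the start of the line):

```shell
# Hypothetical gate: count FAIL lines on stdin; any count other than 0 means
# the ISO must not ship. grep -c prints the count but exits 1 when the count
# is 0, so the exit status is normalized with `|| true`.
count_fails() {
    grep -c '^FAIL' || true
}
```

Usage sketch: `ssh root@<ip> 'sh -s' < iso/builder/smoketest.sh | tee smoke.log | count_fails`.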
## Overlay mechanism
`live-build` copies files from `config/includes.chroot/` into the ISO filesystem.
`build.sh` prepares a staged overlay, then syncs it into a temporary workdir's
`config/includes.chroot/` before running `lb build`.
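The stage-then-sync flow can be sketched as follows (directory layout follows `build.sh`; the helper name is hypothetical, and `cp -a` stands in for the rsync calls):

```shell
# Hypothetical sketch of build.sh's overlay staging: the source overlay is
# copied to a staged dir, build-time files are injected into the stage only,
# and the stage is synced into the live-build workdir's includes.chroot/.
stage_overlay() {
    src="$1" stage="$2" workdir="$3"
    rm -rf "$stage"
    mkdir -p "$stage" "$workdir/config/includes.chroot"
    cp -a "$src/." "$stage/"
    : # build-time injections (bee binary, authorized_keys, bee-release) go here
    cp -a "$stage/." "$workdir/config/includes.chroot/"
}
```

This keeps `iso/overlay/` immutable while everything `lb build` sees is disposable.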
## Collector flow
```
`bee audit` start
1. board collector (dmidecode -t 0,1,2)
2. cpu collector (dmidecode -t 4)
3. memory collector (dmidecode -t 17)


@@ -21,16 +21,15 @@ Fills gaps where Redfish/logpile is blind:
- Read-only hardware inventory: board, CPU, memory, storage, PCIe, PSU, GPU, NIC, RAID
- Unattended operation — no user interaction required
- NVIDIA proprietary driver loaded at boot for GPU enrichment via `nvidia-smi`
- SSH access (OpenSSH) always available for inspection and debugging
- Interactive Go TUI via `bee tui` for network setup, service management, and acceptance tests
## Network isolation — CRITICAL
**The live CD runs in an isolated network segment with no internet access.**
- All tools, drivers, and binaries MUST be pre-baked into the ISO at build time
- No package installation at boot — packages are installed during ISO creation, not at runtime
- No downloads at boot — NVIDIA modules, vendor tools, and all binaries come from the ISO overlay
- DHCP is used only for LAN access (SSH from operator laptop); internet is NOT assumed
- Any feature requiring network downloads cannot be added to the live CD
@@ -49,26 +48,32 @@ Fills gaps where Redfish/logpile is blind:
| Component | Technology |
|---|---|
| Audit binary | Go, static, `CGO_ENABLED=0` |
| Live ISO | Debian 12 (bookworm), amd64 live-build image |
| ISO build | Debian `live-build` + overlay sync into `config/includes.chroot/` |
| Init system | `systemd` |
| SSH | OpenSSH server |
| NVIDIA driver | Proprietary `.run` installer, built against Debian kernel headers |
| NVIDIA modules | Loaded via `insmod` from `/usr/local/lib/nvidia/` |
| Builder | Debian 12 host/VM or Debian 12 container image |
## Runtime split
- The main Go application must run both on a normal Linux host and inside the live ISO
- Live-ISO-only responsibilities stay in `iso/` integration code
- Live ISO launches the Go CLI with `--runtime livecd`
- Local/manual runs use `--runtime auto` or `--runtime local`
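A sketch of how `--runtime auto` could resolve (an assumption: the real detection lives in the Go CLI; `/run/live` is the mountpoint Debian's live-boot creates under `boot=live`):

```shell
# Hypothetical resolution of the --runtime flag: explicit values win, and
# `auto` falls back to probing for the live-boot marker directory.
resolve_runtime() {
    flag="${1:-auto}"
    marker="${2:-/run/live}"
    if [ "$flag" != "auto" ]; then
        echo "$flag"
    elif [ -d "$marker" ]; then
        echo "livecd"
    else
        echo "local"
    fi
}
```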
## Key paths
| Path | Purpose |
|---|---|
| `audit/cmd/bee/` | Main CLI entry point |
| `audit/internal/collector/` | Per-subsystem collectors |
| `audit/internal/schema/` | HardwareIngestRequest types |
| `iso/builder/` | ISO build scripts and `live-build` profile |
| `iso/overlay/` | Source overlay copied into a staged build overlay |
| `iso/vendor/` | Optional pre-built vendor binaries (storcli64, sas2ircu, sas3ircu, mstflint, …) |
| `iso/builder/VERSIONS` | Pinned versions: Debian, Go, NVIDIA driver, kernel ABI |
| `iso/builder/smoketest.sh` | Post-boot smoke test — run via SSH to verify live ISO |
| `dist/` | Build outputs (gitignored) |
| `iso/out/` | Downloaded ISO files (gitignored) |


@@ -2,19 +2,20 @@
## GPU stress test (H100)
**Task:** add a GPU burn/stress test to bee-tui without significantly growing the ISO.
**Status:** deferred. The current ISO neither includes nor runs `gpu_burn`.
**Why this is still in the backlog:**
- `gpu_burn` remains heavy and awkward in terms of dependencies
- we want a standard lightweight stress tool without `libcublas.so` and without noticeable ISO bloat
- the H100 needs a predictable offline tool that can ship reliably inside the ISO
**Desired next step:** write a minimal stress tool against the CUDA Driver API
- uses only `libcuda.so`, which is already present in the ISO
- runs a simple compute / memory workload via `cuLaunchKernel`
- built separately on the builder VM and placed in `iso/vendor/`
- may later be invoked from `bee tui` as the preferred built-in GPU SAT/stress path
**Rejected / problematic options:**
- `gpu_burn` — needs libcublas (~500MB)
- `nvbandwidth` — bandwidth only, does not burn FLOPs; needs libcudart (~8MB)
- DCGM diag — the right tool for H100, but a ~100MB install

iso/builder/Dockerfile Normal file

@@ -0,0 +1,44 @@
FROM debian:12
ARG GO_VERSION=1.23.6
ARG DEBIAN_KERNEL_ABI=6.1.0-43
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -qq && apt-get install -y \
ca-certificates \
live-build \
debootstrap \
squashfs-tools \
xorriso \
grub-pc-bin \
grub-efi-amd64-bin \
mtools \
git \
wget \
curl \
tar \
xz-utils \
rsync \
build-essential \
gcc \
make \
perl \
"linux-headers-${DEBIAN_KERNEL_ABI}-amd64" \
&& rm -rf /var/lib/apt/lists/*
RUN arch="$(dpkg --print-architecture)" \
&& case "$arch" in \
amd64) goarch=amd64 ;; \
arm64) goarch=arm64 ;; \
*) echo "unsupported architecture: $arch" >&2; exit 1 ;; \
esac \
&& wget -q -O /tmp/go.tar.gz "https://go.dev/dl/go${GO_VERSION}.linux-${goarch}.tar.gz" \
&& rm -rf /usr/local/go \
&& tar -C /usr/local -xzf /tmp/go.tar.gz \
&& rm -f /tmp/go.tar.gz
ENV PATH=/usr/local/go/bin:${PATH}
WORKDIR /work
CMD ["/bin/bash"]


@@ -1,4 +1,5 @@
DEBIAN_VERSION=12
DEBIAN_KERNEL_ABI=6.1.0-43
NVIDIA_DRIVER_VERSION=590.48.01
GO_VERSION=1.23.6
AUDIT_VERSION=0.1.1

iso/builder/auto/build Executable file

@@ -0,0 +1,5 @@
#!/bin/sh
# auto/build — live-build build wrapper for bee ISO
set -e
lb build noauto "${@}" 2>&1

iso/builder/auto/config Executable file

@@ -0,0 +1,28 @@
#!/bin/sh
# auto/config — live-build configuration for bee ISO
# Runs automatically when lb config is called.
# See: man lb_config
set -e
. "$(dirname "$0")/../VERSIONS"
lb config noauto \
--distribution bookworm \
--architectures amd64 \
--binary-images iso-hybrid \
--bootloaders "grub-efi,syslinux" \
--debian-installer none \
--archive-areas "main contrib non-free non-free-firmware" \
--mirror-bootstrap "https://deb.debian.org/debian" \
--mirror-chroot "https://deb.debian.org/debian" \
--mirror-binary "https://deb.debian.org/debian" \
--security true \
--linux-flavours "amd64" \
--linux-packages "linux-image-${DEBIAN_KERNEL_ABI}" \
--memtest none \
--iso-volume "BEE-DEBUG" \
--iso-application "Bee Hardware Audit" \
--bootappend-live "boot=live components console=tty0 console=ttyS0,115200n8 username=bee user-fullname=Bee modprobe.blacklist=nouveau" \
--apt-recommends false \
"${@}"


@@ -0,0 +1,95 @@
#!/bin/sh
# build-in-container.sh — build the bee ISO inside a Debian container.
set -e
REPO_ROOT="$(cd "$(dirname "$0")/../.." && pwd)"
BUILDER_DIR="${REPO_ROOT}/iso/builder"
CONTAINER_TOOL="${CONTAINER_TOOL:-docker}"
IMAGE_TAG="${BEE_BUILDER_IMAGE:-bee-iso-builder}"
CACHE_DIR="${BEE_BUILDER_CACHE_DIR:-${REPO_ROOT}/dist/container-cache}"
AUTH_KEYS=""
REBUILD_IMAGE=0
. "${BUILDER_DIR}/VERSIONS"
while [ $# -gt 0 ]; do
case "$1" in
--cache-dir)
CACHE_DIR="$2"
shift 2
;;
--rebuild-image)
REBUILD_IMAGE=1
shift
;;
--authorized-keys)
AUTH_KEYS="$2"
shift 2
;;
*)
echo "unknown arg: $1" >&2
echo "usage: $0 [--cache-dir /path] [--rebuild-image] [--authorized-keys /path/to/authorized_keys]" >&2
exit 1
;;
esac
done
if ! command -v "$CONTAINER_TOOL" >/dev/null 2>&1; then
echo "container tool not found: $CONTAINER_TOOL" >&2
exit 1
fi
if [ -n "$AUTH_KEYS" ]; then
[ -f "$AUTH_KEYS" ] || { echo "authorized_keys not found: $AUTH_KEYS" >&2; exit 1; }
AUTH_KEYS_ABS="$(cd "$(dirname "$AUTH_KEYS")" && pwd)/$(basename "$AUTH_KEYS")"
AUTH_KEYS_DIR="$(dirname "$AUTH_KEYS_ABS")"
AUTH_KEYS_BASE="$(basename "$AUTH_KEYS_ABS")"
fi
mkdir -p \
"${CACHE_DIR}" \
"${CACHE_DIR}/go-build" \
"${CACHE_DIR}/go-mod" \
"${CACHE_DIR}/tmp" \
"${CACHE_DIR}/bee"
IMAGE_REF="${IMAGE_TAG}:debian${DEBIAN_VERSION}"
if [ "$REBUILD_IMAGE" = "1" ] || ! "$CONTAINER_TOOL" image inspect "${IMAGE_REF}" >/dev/null 2>&1; then
"$CONTAINER_TOOL" build \
--build-arg GO_VERSION="${GO_VERSION}" \
--build-arg DEBIAN_KERNEL_ABI="${DEBIAN_KERNEL_ABI}" \
-t "${IMAGE_REF}" \
"${BUILDER_DIR}"
else
echo "=== using existing builder image ${IMAGE_REF} ==="
fi
set -- \
run --rm --privileged \
-v "${REPO_ROOT}:/work" \
-v "${CACHE_DIR}:/cache" \
-e GOCACHE=/cache/go-build \
-e GOMODCACHE=/cache/go-mod \
-e TMPDIR=/cache/tmp \
-e BEE_CACHE_DIR=/cache/bee \
-w /work \
"${IMAGE_REF}" \
sh /work/iso/builder/build.sh
if [ -n "$AUTH_KEYS" ]; then
set -- run --rm --privileged \
-v "${REPO_ROOT}:/work" \
-v "${CACHE_DIR}:/cache" \
-v "${AUTH_KEYS_DIR}:/tmp/bee-authkeys:ro" \
-e GOCACHE=/cache/go-build \
-e GOMODCACHE=/cache/go-mod \
-e TMPDIR=/cache/tmp \
-e BEE_CACHE_DIR=/cache/bee \
-w /work \
"${IMAGE_REF}" \
sh /work/iso/builder/build.sh --authorized-keys "/tmp/bee-authkeys/${AUTH_KEYS_BASE}"
fi
"$CONTAINER_TOOL" "$@"


@@ -1,5 +1,5 @@
#!/bin/sh
# build-nvidia-module.sh — compile NVIDIA proprietary driver modules for Debian 12
#
# Downloads the official NVIDIA .run installer, extracts kernel modules and
# userspace tools (nvidia-smi, libnvidia-ml). Everything is proprietary NVIDIA.
@@ -16,34 +16,36 @@ set -e
NVIDIA_VERSION="$1"
DIST_DIR="$2"
DEBIAN_KERNEL_ABI="$3"
[ -n "$NVIDIA_VERSION" ] || { echo "usage: $0 <nvidia-version> <dist-dir> <debian-kernel-abi>"; exit 1; }
[ -n "$DIST_DIR" ] || { echo "usage: $0 <nvidia-version> <dist-dir> <debian-kernel-abi>"; exit 1; }
[ -n "$DEBIAN_KERNEL_ABI" ] || { echo "usage: $0 <nvidia-version> <dist-dir> <debian-kernel-abi>"; exit 1; }
KVER="${DEBIAN_KERNEL_ABI}-amd64"
# On Debian, kernel headers are split into two packages:
# linux-headers-<kver> — arch-specific (generated, Makefile)
# linux-headers-<kver>-common — common source headers (linux/, asm-generic/, etc.)
# NVIDIA conftest needs SYSSRC pointing to common (for source headers like linux/mm.h)
# and SYSOUT pointing to amd64 (for generated headers like autoconf.h, asm/).
KDIR_ARCH="/usr/src/linux-headers-${KVER}"
KDIR_COMMON="/usr/src/linux-headers-${DEBIAN_KERNEL_ABI}-common"
echo "=== NVIDIA ${NVIDIA_VERSION} (proprietary) for kernel ${KVER} ==="
if [ ! -d "$KDIR_ARCH" ] || [ ! -d "$KDIR_COMMON" ]; then
echo "=== installing linux-headers-${KVER} ==="
DEBIAN_FRONTEND=noninteractive apt-get install -y \
"linux-headers-${KVER}" \
gcc make perl
fi
echo "kernel headers (arch): $KDIR_ARCH"
echo "kernel headers (common): $KDIR_COMMON"
CACHE_DIR="${DIST_DIR}/nvidia-${NVIDIA_VERSION}-${KVER}"
CACHE_ROOT="${BEE_CACHE_DIR:-${DIST_DIR}/cache}"
DOWNLOAD_CACHE_DIR="${CACHE_ROOT}/nvidia-downloads"
EXTRACT_CACHE_DIR="${CACHE_ROOT}/nvidia-extract"
if [ -d "$CACHE_DIR/modules" ] && [ -f "$CACHE_DIR/bin/nvidia-smi" ]; then
echo "=== NVIDIA cached, skipping build ==="
echo "cache: $CACHE_DIR"
@@ -51,20 +53,16 @@ if [ -d "$CACHE_DIR/modules" ] && [ -f "$CACHE_DIR/bin/nvidia-smi" ]; then
exit 0
fi
# Download official NVIDIA .run installer with sha256 verification
BASE_URL="https://download.nvidia.com/XFree86/Linux-x86_64/${NVIDIA_VERSION}"
mkdir -p "$DOWNLOAD_CACHE_DIR" "$EXTRACT_CACHE_DIR"
RUN_FILE="${DOWNLOAD_CACHE_DIR}/NVIDIA-Linux-x86_64-${NVIDIA_VERSION}.run"
SHA_FILE="${DOWNLOAD_CACHE_DIR}/NVIDIA-Linux-x86_64-${NVIDIA_VERSION}.run.sha256sum"
verify_run() {
[ -s "$SHA_FILE" ] || return 1
[ -s "$RUN_FILE" ] || return 1
cd "$DOWNLOAD_CACHE_DIR"
sha256sum -c "$SHA_FILE" --status 2>/dev/null
}
@@ -75,7 +73,7 @@ if ! verify_run; then
echo "sha256: $(cat "$SHA_FILE")"
wget --show-progress -O "$RUN_FILE" "${BASE_URL}/NVIDIA-Linux-x86_64-${NVIDIA_VERSION}.run"
echo "=== verifying sha256 ==="
cd "$DOWNLOAD_CACHE_DIR" && sha256sum -c "$SHA_FILE" || { echo "ERROR: sha256 mismatch"; rm -f "$RUN_FILE"; exit 1; }
echo "sha256 OK"
else
echo "=== NVIDIA installer verified from cache ==="
@@ -84,7 +82,7 @@ fi
# Extract installer contents
echo "=== extracting installer ==="
chmod +x "$RUN_FILE"
EXTRACT_DIR="${EXTRACT_CACHE_DIR}/nvidia-extract-${NVIDIA_VERSION}"
rm -rf "$EXTRACT_DIR"
"$RUN_FILE" --extract-only --target "$EXTRACT_DIR"
@@ -96,10 +94,20 @@ done
[ -n "$KERNEL_SRC" ] || { echo "ERROR: kernel source dir not found in:"; ls "$EXTRACT_DIR/"; exit 1; }
echo "kernel source: $KERNEL_SRC"
# Build kernel modules
# CFLAGS_MODULE: add GCC include dir so NVIDIA's nv_stdarg.h can find stdarg.h.
# Kernel build uses -nostdinc which strips GCC's own includes; we restore it here.
echo "=== building kernel modules ($(nproc) cores) ==="
cd "$KERNEL_SRC"
# SYSSRC=common: conftest finds real kernel headers (linux/mm.h etc.)
# SYSOUT=amd64: generated headers (autoconf.h, asm/) from arch package
# Without this split, conftest uses amd64/include/ which is nearly empty,
# all compile-tests fail silently, and NVIDIA assumes all APIs present → link errors.
make -j$(nproc) \
KERNEL_UNAME="$KVER" \
SYSSRC="$KDIR_COMMON" \
SYSOUT="$KDIR_ARCH" \
modules 2>&1 | tail -10
# Collect outputs
mkdir -p "$CACHE_DIR/modules" "$CACHE_DIR/bin" "$CACHE_DIR/lib"
@@ -112,32 +120,40 @@ done
cp "$EXTRACT_DIR/nvidia-smi" "$CACHE_DIR/bin/"
cp "$EXTRACT_DIR/nvidia-bug-report.sh" "$CACHE_DIR/bin/" 2>/dev/null || true
# Copy GSP firmware (required for Hopper/Ada GPUs — H100, H800, etc.)
mkdir -p "$CACHE_DIR/firmware"
if [ -d "$EXTRACT_DIR/firmware" ]; then
cp -r "$EXTRACT_DIR/firmware/." "$CACHE_DIR/firmware/"
echo "firmware: $(ls "$CACHE_DIR/firmware/" | wc -l) files"
else
echo "WARNING: no firmware/ dir found in installer (may be needed for Hopper GPUs)"
fi
# Copy ALL userspace library files
for lib in libnvidia-ml libcuda; do
count=0
for f in $(find "$EXTRACT_DIR" -maxdepth 1 -name "${lib}.so.*" 2>/dev/null); do
cp "$f" "$CACHE_DIR/lib/" && count=$((count+1))
done
if [ "$count" -eq 0 ]; then
echo "ERROR: ${lib}.so.* not found in $EXTRACT_DIR"
ls "$EXTRACT_DIR/"*.so* 2>/dev/null | head -20 || true
exit 1
fi
done
# Verify .ko files were built
ko_count=$(ls "$CACHE_DIR/modules/"*.ko 2>/dev/null | wc -l)
[ "$ko_count" -gt 0 ] || { echo "ERROR: no .ko files built in $CACHE_DIR/modules/"; exit 1; }
# Create soname symlinks: use [0-9][0-9]* to avoid circular symlink (.so.1 has single digit)
for lib in libnvidia-ml libcuda; do
versioned=$(ls "$CACHE_DIR/lib/${lib}.so."[0-9][0-9]* 2>/dev/null | head -1)
[ -n "$versioned" ] || continue
base=$(basename "$versioned")
ln -sf "$base" "$CACHE_DIR/lib/${lib}.so.1"
ln -sf "${lib}.so.1" "$CACHE_DIR/lib/${lib}.so" 2>/dev/null || true
echo "${lib}: .so.1 -> $base"
done
echo "=== NVIDIA build complete ==="


@@ -1,9 +1,9 @@
#!/bin/sh
# build.sh — build bee ISO (Debian 12 / live-build)
#
# Single build script. Produces a bootable live ISO with SSH access, TUI, NVIDIA drivers.
#
# Run on Debian 12 builder VM as root after setup-builder.sh.
# Usage:
# sh iso/builder/build.sh [--authorized-keys /path/to/authorized_keys]
@@ -14,6 +14,9 @@ BUILDER_DIR="${REPO_ROOT}/iso/builder"
OVERLAY_DIR="${REPO_ROOT}/iso/overlay"
DIST_DIR="${REPO_ROOT}/dist"
VENDOR_DIR="${REPO_ROOT}/iso/vendor"
BUILD_WORK_DIR="${DIST_DIR}/live-build-work"
OVERLAY_STAGE_DIR="${DIST_DIR}/overlay-stage"
CACHE_ROOT="${BEE_CACHE_DIR:-${DIST_DIR}/cache}"
AUTH_KEYS=""
# parse args
@@ -26,49 +29,68 @@ done
. "${BUILDER_DIR}/VERSIONS"
export PATH="$PATH:/usr/local/go/bin"
mkdir -p "${DIST_DIR}"
mkdir -p "${CACHE_ROOT}"
: "${GOCACHE:=${CACHE_ROOT}/go-build}"
: "${GOMODCACHE:=${CACHE_ROOT}/go-mod}"
export GOCACHE GOMODCACHE
echo "=== bee ISO build ==="
echo "Debian: ${DEBIAN_VERSION}, Kernel ABI: ${DEBIAN_KERNEL_ABI}, Go: ${GO_VERSION}"
echo ""
# --- compile bee binary (static, Linux amd64) ---
BEE_BIN="${DIST_DIR}/bee-linux-amd64"
NEED_BUILD=1
if [ -f "$BEE_BIN" ]; then
NEWEST_SRC=$(find "${REPO_ROOT}/audit" -name '*.go' -newer "$BEE_BIN" | head -1)
[ -z "$NEWEST_SRC" ] && NEED_BUILD=0
fi
if [ "$NEED_BUILD" = "1" ]; then
echo "=== building bee binary ==="
cd "${REPO_ROOT}/audit"
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 \
go build \
-ldflags "-s -w -X main.Version=${AUDIT_VERSION:-$(date +%Y%m%d)}" \
-o "$BEE_BIN" \
./cmd/bee
echo "binary: $BEE_BIN"
if command -v stat >/dev/null 2>&1; then
BEE_SIZE_BYTES="$(stat -c '%s' "$BEE_BIN" 2>/dev/null || stat -f '%z' "$BEE_BIN")"
else
BEE_SIZE_BYTES="$(wc -c < "$BEE_BIN" | tr -d ' ')"
fi
if command -v numfmt >/dev/null 2>&1; then
echo "size: $(numfmt --to=iec --suffix=B "$BEE_SIZE_BYTES")"
else
echo "size: ${BEE_SIZE_BYTES} bytes"
fi
else
echo "=== bee binary up to date, skipping build ==="
fi
echo "=== preparing staged overlay ==="
rm -rf "${BUILD_WORK_DIR}" "${OVERLAY_STAGE_DIR}"
mkdir -p "${BUILD_WORK_DIR}" "${OVERLAY_STAGE_DIR}"
rsync -a "${BUILDER_DIR}/" "${BUILD_WORK_DIR}/"
rsync -a "${OVERLAY_DIR}/" "${OVERLAY_STAGE_DIR}/"
rm -f \
"${OVERLAY_STAGE_DIR}/etc/bee-ssh-password-fallback" \
"${OVERLAY_STAGE_DIR}/etc/bee-release" \
"${OVERLAY_STAGE_DIR}/root/.ssh/authorized_keys" \
"${OVERLAY_STAGE_DIR}/usr/local/bin/bee" \
"${OVERLAY_STAGE_DIR}/usr/local/bin/bee-smoketest"
# --- inject authorized_keys for SSH access ---
# Uses the same Ed25519 keys as release signing (from git.mchus.pro/mchus/keys).
# SSH public keys are stored alongside signing keys as ~/.keys/<name>.key.pub
AUTHORIZED_KEYS_FILE="${OVERLAY_STAGE_DIR}/root/.ssh/authorized_keys"
mkdir -p "${OVERLAY_STAGE_DIR}/root/.ssh"
if [ -n "$AUTH_KEYS" ]; then
cp "$AUTH_KEYS" "$AUTHORIZED_KEYS_FILE"
chmod 600 "$AUTHORIZED_KEYS_FILE"
echo "SSH authorized_keys: installed from $AUTH_KEYS"
else
# auto-collect all developer SSH public keys from ~/.keys/*.key.pub
> "$AUTHORIZED_KEYS_FILE"
FOUND=0
for ssh_pub in "$HOME"/.keys/*.key.pub; do
@@ -82,127 +104,131 @@ else
echo "SSH authorized_keys: $FOUND key(s) from ~/.keys/*.key.pub"
else
echo "WARNING: no SSH public keys found — falling back to password auth"
echo " SSH login: bee / eeb"
USE_PASSWORD_FALLBACK=1
fi
fi
# --- password fallback: write marker file read by init script ---
if [ "${USE_PASSWORD_FALLBACK:-0}" = "1" ]; then
touch "${OVERLAY_STAGE_DIR}/etc/bee-ssh-password-fallback"
else
rm -f "${OVERLAY_STAGE_DIR}/etc/bee-ssh-password-fallback"
fi
# --- copy bee binary into overlay ---
mkdir -p "${OVERLAY_STAGE_DIR}/usr/local/bin"
cp "${DIST_DIR}/bee-linux-amd64" "${OVERLAY_STAGE_DIR}/usr/local/bin/bee"
chmod +x "${OVERLAY_STAGE_DIR}/usr/local/bin/bee"
# --- inject smoketest into overlay so it runs directly on the live CD ---
cp "${BUILDER_DIR}/smoketest.sh" "${OVERLAY_STAGE_DIR}/usr/local/bin/bee-smoketest"
chmod +x "${OVERLAY_STAGE_DIR}/usr/local/bin/bee-smoketest"
# --- vendor utilities (optional pre-fetched binaries) ---
for tool in storcli64 sas2ircu sas3ircu mstflint; do
if [ -f "${VENDOR_DIR}/${tool}" ]; then
cp "${VENDOR_DIR}/${tool}" "${OVERLAY_STAGE_DIR}/usr/local/bin/${tool}"
chmod +x "${OVERLAY_STAGE_DIR}/usr/local/bin/${tool}" || true
echo "vendor tool: ${tool} (included)"
else
echo "vendor tool: ${tool} (not found, skipped)"
fi
done
# --- build NVIDIA kernel modules ---
echo ""
echo "=== building NVIDIA ${NVIDIA_DRIVER_VERSION} modules ==="
sh "${BUILDER_DIR}/build-nvidia-module.sh" "${NVIDIA_DRIVER_VERSION}" "${DIST_DIR}" "${DEBIAN_KERNEL_ABI}"
KVER="${DEBIAN_KERNEL_ABI}-amd64"
NVIDIA_CACHE="${DIST_DIR}/nvidia-${NVIDIA_DRIVER_VERSION}-${KVER}"
# Inject .ko files into overlay at /usr/local/lib/nvidia/
OVERLAY_KMOD_DIR="${OVERLAY_STAGE_DIR}/usr/local/lib/nvidia"
mkdir -p "${OVERLAY_KMOD_DIR}"
cp "${NVIDIA_CACHE}/modules/"*.ko "${OVERLAY_KMOD_DIR}/"
# Inject nvidia-smi and libnvidia-ml
mkdir -p "${OVERLAY_STAGE_DIR}/usr/local/bin" "${OVERLAY_STAGE_DIR}/usr/lib"
cp "${NVIDIA_CACHE}/bin/nvidia-smi" "${OVERLAY_STAGE_DIR}/usr/local/bin/"
chmod +x "${OVERLAY_STAGE_DIR}/usr/local/bin/nvidia-smi"
cp "${NVIDIA_CACHE}/bin/nvidia-bug-report.sh" "${OVERLAY_STAGE_DIR}/usr/local/bin/" 2>/dev/null || true
chmod +x "${OVERLAY_STAGE_DIR}/usr/local/bin/nvidia-bug-report.sh" 2>/dev/null || true
cp "${NVIDIA_CACHE}/lib/"* "${OVERLAY_STAGE_DIR}/usr/lib/" 2>/dev/null || true
# Inject GSP firmware into /lib/firmware/nvidia/<version>/
if [ -d "${NVIDIA_CACHE}/firmware" ] && [ "$(ls -A "${NVIDIA_CACHE}/firmware" 2>/dev/null)" ]; then
mkdir -p "${OVERLAY_STAGE_DIR}/lib/firmware/nvidia/${NVIDIA_DRIVER_VERSION}"
cp "${NVIDIA_CACHE}/firmware/"* "${OVERLAY_STAGE_DIR}/lib/firmware/nvidia/${NVIDIA_DRIVER_VERSION}/"
echo "=== firmware: $(ls "${OVERLAY_STAGE_DIR}/lib/firmware/nvidia/${NVIDIA_DRIVER_VERSION}/" | wc -l) files injected ==="
fi
# --- embed build metadata ---
mkdir -p "${OVERLAY_STAGE_DIR}/etc"
BUILD_DATE="$(date +%Y-%m-%d)"
GIT_COMMIT="$(git -C "${REPO_ROOT}" rev-parse --short HEAD 2>/dev/null || echo unknown)"
cat > "${OVERLAY_STAGE_DIR}/etc/bee-release" <<EOF
BEE_ISO_VERSION=${AUDIT_VERSION}
BEE_AUDIT_VERSION=${AUDIT_VERSION}
BUILD_DATE=${BUILD_DATE}
GIT_COMMIT=${GIT_COMMIT}
DEBIAN_VERSION=${DEBIAN_VERSION}
DEBIAN_KERNEL_ABI=${DEBIAN_KERNEL_ABI}
NVIDIA_DRIVER_VERSION=${NVIDIA_DRIVER_VERSION}
EOF
# Patch motd with build info
BEE_BUILD_INFO="${BUILD_DATE} git:${GIT_COMMIT} debian:${DEBIAN_VERSION} nvidia:${NVIDIA_DRIVER_VERSION}"
if [ -f "${OVERLAY_STAGE_DIR}/etc/motd" ]; then
sed "s/%%BUILD_INFO%%/${BEE_BUILD_INFO}/" "${OVERLAY_STAGE_DIR}/etc/motd" \
> "${OVERLAY_STAGE_DIR}/etc/motd.patched"
mv "${OVERLAY_STAGE_DIR}/etc/motd.patched" "${OVERLAY_STAGE_DIR}/etc/motd"
fi
# --- sync overlay into live-build includes.chroot ---
LB_DIR="${BUILD_WORK_DIR}"
LB_INCLUDES="${LB_DIR}/config/includes.chroot"
mkdir -p "${LB_INCLUDES}"
rsync -a "${OVERLAY_STAGE_DIR}/" "${LB_INCLUDES}/"
# Ensure SSH authorized_keys perms are correct (rsync may alter)
if [ -f "${LB_INCLUDES}/root/.ssh/authorized_keys" ]; then
chmod 700 "${LB_INCLUDES}/root/.ssh"
chmod 600 "${LB_INCLUDES}/root/.ssh/authorized_keys"
fi
# --- build ISO using live-build ---
echo ""
echo "=== building ISO (live-build) ==="
cd "${LB_DIR}"
lb clean 2>&1 | tail -3
lb config 2>&1 | tail -5
lb build 2>&1
# live-build outputs live-image-amd64.hybrid.iso in LB_DIR
ISO_RAW="${LB_DIR}/live-image-amd64.hybrid.iso"
ISO_OUT="${DIST_DIR}/bee-debian${DEBIAN_VERSION}-v${AUDIT_VERSION}-amd64.iso"
if [ -f "$ISO_RAW" ]; then
cp "$ISO_RAW" "$ISO_OUT"
echo ""
echo "=== done ==="
echo "ISO: $ISO_OUT"
if command -v stat >/dev/null 2>&1; then
ISO_SIZE_BYTES="$(stat -c '%s' "$ISO_OUT" 2>/dev/null || stat -f '%z' "$ISO_OUT")"
else
ISO_SIZE_BYTES="$(wc -c < "$ISO_OUT" | tr -d ' ')"
fi
if command -v numfmt >/dev/null 2>&1; then
echo "Size: $(numfmt --to=iec --suffix=B "$ISO_SIZE_BYTES")"
else
echo "Size: ${ISO_SIZE_BYTES} bytes"
fi
else
echo "ERROR: ISO not found at $ISO_RAW"
exit 1
fi
echo ""
echo "Boot via BMC virtual media and SSH to the server IP on port 22 as root."
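Before going through BMC virtual media, the ISO can be smoke-booted headless in KVM with the serial console mirrored to stdout (the grub config enables `serial --unit=0 --speed=115200`). The version values below are illustrative stand-ins for what `VERSIONS` pins:

```shell
DEBIAN_VERSION=12     # assumed values from VERSIONS (illustrative)
AUDIT_VERSION=0.1.1
ISO="dist/bee-debian${DEBIAN_VERSION}-v${AUDIT_VERSION}-amd64.iso"
echo "$ISO"   # dist/bee-debian12-v0.1.1-amd64.iso
# Headless smoke boot with serial console on stdout (Ctrl-A X to quit):
#   qemu-system-x86_64 -m 4096 -enable-kvm -cdrom "$ISO" -serial mon:stdio -display none
```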

View File

@@ -0,0 +1,31 @@
set default=0
set timeout=5
if [ x$feature_default_font_path = xy ] ; then
font=unicode
else
font=$prefix/unicode.pf2
fi
if loadfont $font ; then
set gfxmode=800x600
set gfxpayload=keep
insmod efi_gop
insmod efi_uga
insmod video_bochs
insmod video_cirrus
else
set gfxmode=auto
insmod all_video
fi
insmod serial
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
insmod gfxterm
insmod png
source /boot/grub/theme.cfg
terminal_input console serial
terminal_output gfxterm serial

View File

@@ -0,0 +1,17 @@
source /boot/grub/config.cfg
menuentry "Bee Hardware Audit" {
linux @KERNEL_LIVE@ @APPEND_LIVE@
initrd @INITRD_LIVE@
}
menuentry "Bee Hardware Audit (fail-safe)" {
linux @KERNEL_LIVE@ @APPEND_LIVE@ memtest noapic noapm nodma nomce nolapic nosmp vga=normal
initrd @INITRD_LIVE@
}
if [ "${grub_platform}" = "efi" ]; then
menuentry "UEFI Firmware Settings" {
fwsetup
}
fi

View File

@@ -0,0 +1,51 @@
desktop-image: "../splash.png"
title-color: "#f5a800"
title-font: "Unifont Regular 16"
title-text: ""
message-font: "Unifont Regular 16"
terminal-font: "Unifont Regular 16"
#help bar at the bottom
+ label {
top = 100%-50
left = 0
width = 100%
height = 20
text = "@KEYMAP_SHORT@"
align = "center"
color = "#5a4800"
font = "Unifont Regular 16"
}
#boot menu
+ boot_menu {
left = 20%
width = 60%
top = 62%
height = 38%-80
item_color = "#c88000"
item_font = "Unifont Regular 16"
selected_item_color = "#f5a800"
selected_item_font = "Unifont Regular 16"
item_height = 16
item_padding = 0
item_spacing = 4
icon_width = 0
icon_height = 0
item_icon_space = 0
}
#progress bar
+ progress_bar {
id = "__timeout__"
left = 20%
top = 100%-80
height = 14
width = 60%
font = "Unifont Regular 16"
text_color = "#0a0a00"
fg_color = "#f5a800"
bg_color = "#2a2200"
border_color = "#5a4800"
text = "@TIMEOUT_NOTIFICATION_LONG@"
}

Binary file not shown (splash.png, 8.7 KiB)

View File

@@ -0,0 +1,9 @@
set color_normal=light-gray/black
set color_highlight=white/dark-gray
if [ -e /boot/grub/splash.png ]; then
set theme=/boot/grub/live-theme/theme.txt
else
set menu_color_normal=cyan/black
set menu_color_highlight=white/dark-gray
fi

View File

@@ -0,0 +1,35 @@
#!/bin/sh
# 9000-bee-setup.hook.chroot — runs inside Debian chroot during live-build
# Enables bee systemd services and configures the live environment.
set -e
echo "=== bee chroot setup ==="
# Enable bee services
systemctl enable bee-network.service
systemctl enable bee-nvidia.service
systemctl enable bee-audit.service
systemctl enable bee-sshsetup.service
systemctl enable ssh.service
systemctl enable qemu-guest-agent.service 2>/dev/null || true
systemctl enable serial-getty@ttyS0.service 2>/dev/null || true
# Ensure scripts are executable
chmod +x /usr/local/bin/bee-network.sh 2>/dev/null || true
chmod +x /usr/local/bin/bee-nvidia-load 2>/dev/null || true
chmod +x /usr/local/bin/bee-sshsetup 2>/dev/null || true
chmod +x /usr/local/bin/bee-smoketest 2>/dev/null || true
chmod +x /usr/local/bin/bee-tui 2>/dev/null || true
chmod +x /usr/local/bin/bee 2>/dev/null || true
# Reload udev rules
udevadm control --reload-rules 2>/dev/null || true
# Create log directory
mkdir -p /var/log
if [ -f /etc/sudoers.d/bee ]; then
chmod 0440 /etc/sudoers.d/bee
fi
echo "=== bee chroot setup complete ==="

View File

@@ -0,0 +1,12 @@
# Generated at build time — do not commit
usr/local/bin/audit
usr/local/bin/bee-smoketest
usr/local/bin/nvidia-smi
usr/local/bin/nvidia-bug-report.sh
usr/local/lib/
usr/lib/libnvidia-ml*
usr/lib/libcuda*
root/.ssh/authorized_keys
etc/bee-release
etc/bee-ssh-password-fallback
etc/motd

View File

@@ -0,0 +1,38 @@
# Hardware audit tools
dmidecode
smartmontools
nvme-cli
pciutils
ipmitool
util-linux
e2fsprogs
lshw
# Network
iproute2
isc-dhcp-client
iputils-ping
qemu-guest-agent
# SSH
openssh-server
# Utilities
bash
procps
lsof
file
less
vim-tiny
mc
htop
sudo
zstd
# QR codes (for displaying audit results)
qrencode
# Firmware
firmware-linux-free
# glibc compat helpers (for any external binaries that need it)
libc6
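live-build ignores `#` comment lines and blank lines in a `.list.chroot` package list, so the effective package set can be extracted and, on a Debian builder, pre-checked with `apt-cache show` before a full `lb build`. A sketch against a synthetic list (the real list's path in this repo is not shown here):

```shell
# Sample list (stand-in for the real package-list file)
cat > /tmp/bee.list.chroot <<'EOF'
# Hardware audit tools
dmidecode
smartmontools

sudo
EOF
# Strip comments/blank lines the way live-build does; on a Debian builder
# each surviving name could then be verified with: apt-cache show "$pkg"
grep -vE '^[[:space:]]*(#|$)' /tmp/bee.list.chroot
```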

View File

@@ -1,108 +0,0 @@
#!/bin/sh -e
HOSTNAME="$1"
[ -n "$HOSTNAME" ] || { echo "usage: $0 hostname"; exit 1; }
OVERLAY="${BEE_OVERLAY_DIR}"
[ -n "$OVERLAY" ] || { echo "ERROR: BEE_OVERLAY_DIR not set"; exit 1; }
cleanup() { rm -rf "$tmp"; }
tmp="$(mktemp -d)"
trap cleanup EXIT
makefile() { OWNER="$1" PERMS="$2" FILENAME="$3"; cat > "$FILENAME"; chown "$OWNER" "$FILENAME"; chmod "$PERMS" "$FILENAME"; }
rc_add() { mkdir -p "$tmp/etc/runlevels/$2"; ln -sf /etc/init.d/"$1" "$tmp/etc/runlevels/$2/$1"; }
mkdir -p "$tmp/etc"
makefile root:root 0644 "$tmp/etc/hostname" <<EOF
$HOSTNAME
EOF
# Empty interfaces file — prevents ifupdown from erroring, bee-network handles DHCP
mkdir -p "$tmp/etc/network"
makefile root:root 0644 "$tmp/etc/network/interfaces" <<EOF
auto lo
iface lo inet loopback
EOF
mkdir -p "$tmp/etc/apk"
makefile root:root 0644 "$tmp/etc/apk/world" <<EOF
alpine-base
dmidecode
smartmontools
nvme-cli
pciutils
ipmitool
util-linux
lsblk
e2fsprogs
lshw
dropbear
libqrencode-tools
tzdata
ca-certificates
strace
procps
lsof
file
less
vim
dialog
gcompat
libc6-compat
EOF
rc_add devfs sysinit
rc_add dmesg sysinit
rc_add mdev sysinit
rc_add hwdrivers sysinit
rc_add modloop sysinit
rc_add hwclock boot
rc_add modules boot
rc_add sysctl boot
rc_add hostname boot
rc_add bootmisc boot
rc_add syslog boot
rc_add mount-ro shutdown
rc_add killprocs shutdown
rc_add savecache shutdown
rc_add bee-sshsetup default
rc_add bee-network default
rc_add dropbear default
rc_add bee-nvidia default
rc_add bee-audit default
if [ -d "$OVERLAY/etc" ]; then
cp -r "$OVERLAY/etc/." "$tmp/etc/"
chmod +x "$tmp/etc/init.d/"* 2>/dev/null || true
[ -n "$BEE_BUILD_INFO" ] && sed -i "s/%%BUILD_INFO%%/${BEE_BUILD_INFO}/" "$tmp/etc/motd" 2>/dev/null || true
fi
mkdir -p "$tmp/usr"
if [ -d "$OVERLAY/usr" ]; then
cp -r "$OVERLAY/usr/." "$tmp/usr/"
chmod +x "$tmp/usr/local/bin/"* 2>/dev/null || true
fi
if [ -d "$OVERLAY/root" ]; then
mkdir -p "$tmp/root"
cp -r "$OVERLAY/root/." "$tmp/root/"
chmod 700 "$tmp/root/.ssh" 2>/dev/null || true
chmod 600 "$tmp/root/.ssh/authorized_keys" 2>/dev/null || true
fi
if [ -d "$OVERLAY/lib" ]; then
mkdir -p "$tmp/lib"
cp -r "$OVERLAY/lib/." "$tmp/lib/"
fi
mkdir -p "$tmp/etc/dropbear" "$tmp/etc/conf.d"
# -R: auto-generate host keys if missing
# no dependency on networking service — bee-network handles DHCP independently
makefile root:root 0644 "$tmp/etc/conf.d/dropbear" <<EOF
DROPBEAR_OPTS="-R -B"
EOF
tar -c -C "$tmp" etc usr root lib 2>/dev/null | gzip -9n > "$HOSTNAME.apkovl.tar.gz"

View File

@@ -1,58 +0,0 @@
#!/bin/sh
# Alpine mkimage profile: bee
profile_bee() {
title="Bee Hardware Audit"
desc="Hardware audit LiveCD"
arch="x86_64"
hostname="alpine-bee"
apkovl="genapkovl-bee.sh"
image_ext="iso"
output_format="iso"
kernel_flavors="lts"
kernel_addons=""
initfs_cmdline="modules=loop,squashfs,sd-mod,usb-storage modloop=/boot/modloop-lts quiet"
initfs_features="ata base cdrom ext4 mmc nvme raid scsi squashfs usb virtio nfit"
grub_mod="all_video disk part_gpt part_msdos linux normal configfile search search_label efi_gop fat iso9660 cat echo ls test true help gzio multiboot2 efi_uga"
syslinux_serial="0 115200"
apks="
alpine-base
linux-firmware-none
linux-firmware-rtl_nic
linux-firmware-bnx2
linux-firmware-bnx2x
linux-firmware-tigon
linux-firmware-qlogic
linux-firmware-netronome
linux-firmware-mellanox
linux-firmware-intel
linux-firmware-other
dmidecode
smartmontools
nvme-cli
pciutils
ipmitool
util-linux
lsblk
e2fsprogs
lshw
dropbear
openrc
libqrencode-tools
tzdata
ca-certificates
strace
procps
lsof
file
less
vim
dialog
gcompat
libc6-compat
"
}

View File

@@ -1,10 +1,11 @@
#!/bin/sh
# setup-builder.sh — prepare Debian 12 host/VM as bee ISO builder
#
# Run once on a fresh Debian 12 (Bookworm) host/VM as root.
# After this script completes, the machine can build bee ISO images directly.
# Container alternative: use `iso/builder/build-in-container.sh`.
#
# Usage (on Debian VM):
# wget -O- https://git.mchus.pro/mchus/bee/raw/branch/main/iso/builder/setup-builder.sh | sh
# or: sh setup-builder.sh
@@ -12,65 +13,41 @@ set -e
. "$(dirname "$0")/VERSIONS" 2>/dev/null || true
GO_VERSION="${GO_VERSION:-1.23.6}"
DEBIAN_VERSION="${DEBIAN_VERSION:-12}"
DEBIAN_KERNEL_ABI="${DEBIAN_KERNEL_ABI:-6.1.0-28}"
echo "=== bee builder setup ==="
echo "Debian: $(cat /etc/debian_version)"
echo "Go target: ${GO_VERSION}"
echo "Kernel ABI: ${DEBIAN_KERNEL_ABI}"
echo ""
# --- system packages ---
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y \
live-build \
debootstrap \
squashfs-tools \
xorriso \
grub-pc-bin \
grub-efi-amd64-bin \
mtools \
git \
wget \
curl \
tar \
xz-utils \
screen \
rsync \
build-essential \
gcc \
make \
perl \
"linux-headers-${DEBIAN_KERNEL_ABI}-amd64"
echo "linux-headers installed: $(dpkg -l "linux-headers-${DEBIAN_KERNEL_ABI}-amd64" | awk '/^ii/{print $3}')"
# --- Go toolchain ---
echo ""
@@ -93,38 +70,6 @@ fi
export PATH="$PATH:/usr/local/go/bin"
echo "Go: $(go version)"
echo ""
echo "=== builder setup complete ==="
echo "Next: sh iso/builder/build.sh"

View File

@@ -24,13 +24,12 @@ echo " date: $(date -u)"
echo "========================================"
echo ""
# --- kernel version ---
KVER=$(uname -r)
info "kernel: $KVER"
# --- PATH & binaries ---
echo "-- PATH & binaries --"
for tool in dmidecode smartctl nvme ipmitool lspci bee; do
if p=$(PATH="/usr/local/bin:$PATH" command -v "$tool" 2>/dev/null); then
ok "$tool found: $p"
else
@@ -96,48 +95,37 @@ for lib in libnvidia-ml libcuda; do
done
echo ""
echo "-- systemd services --"
for svc in bee-nvidia bee-network bee-audit; do
if systemctl is-active --quiet "$svc" 2>/dev/null; then
ok "service active: $svc"
else
fail "service NOT active: $svc"
fi
done
for svc in ssh bee-sshsetup; do
if systemctl is-active --quiet "$svc" 2>/dev/null \
|| systemctl show "$svc" --property=ActiveState 2>/dev/null | grep -q "inactive\|exited"; then
ok "service ok: $svc"
else
warn "service status unknown: $svc"
fi
done
echo ""
echo "-- bee binary --"
BEE=/usr/local/bin/bee
if [ -x "$BEE" ]; then
ok "bee binary: present"
ver=$("$BEE" version 2>/dev/null || echo "unknown")
info "bee version: $ver"
else
fail "bee binary: NOT FOUND at $BEE"
fi
echo ""
echo "-- audit last run --"
# The bee binary logs via slog to stderr (bee-audit.log); JSON output goes to bee-audit.json.
# slog format: time=... level=INFO msg="audit output written" path=...
if [ -f /var/log/bee-audit.json ] && [ -s /var/log/bee-audit.json ]; then
ok "audit: bee-audit.json present and non-empty"
info "size: $(du -sh /var/log/bee-audit.json | cut -f1)"
@@ -148,13 +136,11 @@ fi
if [ -f /var/log/bee-audit.log ]; then
last_line=$(tail -1 /var/log/bee-audit.log)
info "last log line: $last_line"
# slog writes: msg="audit output written" on success
if grep -q "audit output written" /var/log/bee-audit.log 2>/dev/null; then
ok "audit: completed successfully"
else
warn "audit: 'audit output written' not found in log — may have failed"
fi
# check for nvidia enrichment skip (slog message from nvidia collector)
if grep -q "nvidia: enrichment skipped\|nvidia.*skipped\|enrichment skipped" /var/log/bee-audit.log 2>/dev/null; then
reason=$(grep -E "nvidia.*skipped|enrichment skipped" /var/log/bee-audit.log | tail -1)
fail "audit: nvidia enrichment skipped — $reason"

View File

@@ -1 +0,0 @@
DROPBEAR_OPTS="-p 22 -R -B"

View File

@@ -1,21 +0,0 @@
#!/sbin/openrc-run
description="Bee: run hardware audit"
depend() {
need localmount
after bee-network bee-nvidia
}
start() {
ebegin "Running hardware audit"
/usr/local/bin/audit --output "file:/var/log/bee-audit.json" 2>/var/log/bee-audit.log
local rc=$?
if [ $rc -eq 0 ]; then
einfo "Audit complete: /var/log/bee-audit.json"
einfo "SSH in and inspect results. Dropbear is running."
else
ewarn "Audit finished with errors — check /var/log/bee-audit.log"
fi
eend 0
}

View File

@@ -1,14 +0,0 @@
#!/sbin/openrc-run
description="Bee: bring up network interfaces via DHCP"
depend() {
need localmount
before bee-audit
}
start() {
ebegin "Bringing up network interfaces"
/usr/local/bin/bee-network.sh >> /var/log/bee-network.log 2>&1
eend 0
}

View File

@@ -1,79 +0,0 @@
#!/sbin/openrc-run
description="Bee: load NVIDIA kernel modules"
NVIDIA_KO_DIR="/usr/local/lib/nvidia"
depend() {
need localmount
before bee-audit
}
start() {
ebegin "Loading NVIDIA modules"
einfo "kernel: $(uname -r)"
if [ ! -d "$NVIDIA_KO_DIR" ]; then
ewarn "NVIDIA module dir missing: $NVIDIA_KO_DIR"
eend 1
return 1
fi
einfo "module dir: $NVIDIA_KO_DIR"
ls "$NVIDIA_KO_DIR"/*.ko 2>/dev/null | sed 's/^/ /' || true
# Create libnvidia-ml soname symlinks needed by nvidia-smi (glibc binary on Alpine/musl)
for lib in libnvidia-ml libcuda; do
versioned=$(ls /usr/lib/${lib}.so.[0-9]* 2>/dev/null | head -1)
[ -n "$versioned" ] || continue
base=$(basename "$versioned")
ln -sf "$base" "/usr/lib/${lib}.so.1" 2>/dev/null || true
ln -sf "${lib}.so.1" "/usr/lib/${lib}.so" 2>/dev/null || true
done
# Load modules via insmod (bypasses modules.dep — modloop squashfs is read-only)
for mod in nvidia nvidia-modeset nvidia-uvm; do
ko="$NVIDIA_KO_DIR/${mod}.ko"
[ -f "$ko" ] || ko="$NVIDIA_KO_DIR/${mod//-/_}.ko"
if [ -f "$ko" ]; then
if insmod "$ko" 2>/dev/null; then
einfo "loaded: $mod"
else
ewarn "failed to load: $mod"
dmesg | tail -n 5 | sed 's/^/ dmesg: /' || true
fi
else
ewarn "not found: $ko"
fi
done
# Create /dev/nvidia* device nodes — mdev on Alpine does not have NVIDIA rules,
# so the kernel hotplug events are not handled and nodes are never created.
# Without /dev/nvidiactl nvidia-smi returns NVML_ERROR_LIBRARY_NOT_FOUND (exit 12).
nvidia_major=$(grep -m1 ' nvidiactl$' /proc/devices 2>/dev/null | awk '{print $1}')
if [ -n "$nvidia_major" ]; then
mknod -m 666 /dev/nvidiactl c "$nvidia_major" 255 2>/dev/null \
&& einfo "created /dev/nvidiactl (major $nvidia_major)" \
|| ewarn "/dev/nvidiactl already exists or mknod failed"
for i in 0 1 2 3 4 5 6 7; do
mknod -m 666 "/dev/nvidia$i" c "$nvidia_major" "$i" 2>/dev/null || true
done
einfo "created /dev/nvidia{0-7}"
else
ewarn "/dev/nvidiactl: nvidia not in /proc/devices — no GPU hardware present?"
eend 0
return 0
fi
uvm_major=$(grep -m1 ' nvidia-uvm$' /proc/devices 2>/dev/null | awk '{print $1}')
if [ -n "$uvm_major" ]; then
mknod -m 666 /dev/nvidia-uvm c "$uvm_major" 0 2>/dev/null \
&& einfo "created /dev/nvidia-uvm (major $uvm_major)" \
|| ewarn "/dev/nvidia-uvm already exists or mknod failed"
mknod -m 666 /dev/nvidia-uvm-tools c "$uvm_major" 1 2>/dev/null || true
else
ewarn "/dev/nvidia-uvm: nvidia-uvm not in /proc/devices"
fi
eend 0
}

View File

@@ -1,28 +0,0 @@
#!/sbin/openrc-run
description="Bee: configure SSH access (keys or password fallback)"
depend() {
need localmount
before dropbear
}
start() {
# Always create dedicated 'bee' user for password fallback.
# If no SSH keys embedded: login with bee / eeb
if ! id bee > /dev/null 2>&1; then
adduser -D -s /bin/sh bee > /dev/null 2>&1
fi
printf 'eeb\neeb\n' | passwd bee > /dev/null 2>&1
if [ -f /etc/bee-ssh-password-fallback ]; then
ebegin "SSH key auth unavailable — password fallback active"
ewarn "Login: bee / eeb"
ewarn "Generate a key: sh keys/scripts/keygen.sh <name>"
eend 0
else
ebegin "SSH key auth configured"
# bee user exists but password login less useful when keys work
eend 0
fi
}

View File

@@ -1,37 +0,0 @@
#!/sbin/openrc-run
description="Dropbear SSH server"
depend() {
need localmount
after bee-sshsetup
use logger
}
check_config() {
if [ ! -e /etc/dropbear/dropbear_rsa_host_key ]; then
einfo "Generating RSA host key..."
/usr/bin/dropbearkey -t rsa -f /etc/dropbear/dropbear_rsa_host_key
fi
if [ ! -e /etc/dropbear/dropbear_ecdsa_host_key ]; then
einfo "Generating ECDSA host key..."
/usr/bin/dropbearkey -t ecdsa -f /etc/dropbear/dropbear_ecdsa_host_key
fi
if [ ! -e /etc/dropbear/dropbear_ed25519_host_key ]; then
einfo "Generating ED25519 host key..."
/usr/bin/dropbearkey -t ed25519 -f /etc/dropbear/dropbear_ed25519_host_key
fi
}
start() {
check_config || return 1
ebegin "Starting dropbear"
/usr/sbin/dropbear ${DROPBEAR_OPTS}
eend $?
}
stop() {
ebegin "Stopping dropbear"
start-stop-daemon --stop --pidfile /var/run/dropbear.pid
eend $?
}

View File

@@ -0,0 +1,5 @@
# Virtual GPU drivers for KVM/VMware guests
virtio_gpu
bochs_drm
qxl
vmwgfx

View File

@@ -1,13 +0,0 @@
::sysinit:/sbin/openrc sysinit
::sysinit:/sbin/openrc boot
::wait:/sbin/openrc default
# Autologin on tty1
tty1::respawn:/sbin/agetty --autologin root --noclear tty1 linux
tty2::respawn:/sbin/getty 38400 tty2
tty3::respawn:/sbin/getty 38400 tty3
ttyS0::respawn:/sbin/getty -L 115200 ttyS0 vt100
::ctrlaltdel:/sbin/reboot
::shutdown:/sbin/openrc shutdown

View File

@@ -9,13 +9,12 @@ menu() {
fi
}
# On the local console, keep the shell visible and let the operator
# start the TUI explicitly. This avoids black-screen failures if the
# terminal implementation does not support the TUI well.
if [ -z "${SSH_CONNECTION:-}" ] \
&& [ -z "${SSH_TTY:-}" ] \
&& [ "$(tty 2>/dev/null)" = "/dev/tty1" ]; then
echo "Bee live environment ready."
echo "Run 'menu' to open the TUI."
fi

View File

@@ -0,0 +1 @@
bee ALL=(ALL) NOPASSWD: ALL

View File

@@ -0,0 +1,13 @@
[Unit]
Description=Bee: run hardware audit
After=bee-network.service bee-nvidia.service
[Service]
Type=oneshot
ExecStart=/bin/sh -c '/usr/local/bin/bee audit --runtime livecd --output file:/var/log/bee-audit.json; rc=$?; if [ "$rc" -ne 0 ]; then echo "[bee-audit] WARN: audit exited with rc=$rc"; fi; exit 0'
StandardOutput=append:/var/log/bee-audit.log
StandardError=append:/var/log/bee-audit.log
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
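The ExecStart line deliberately swallows a non-zero audit status: it logs a warning but always exits 0, so a failed audit never marks the oneshot unit as failed or blocks boot ordering. The pattern in isolation, with a hypothetical failing stand-in for `bee audit`:

```shell
# fake_audit stands in for a `bee audit` run that fails with rc=3.
fake_audit() { return 3; }
fake_audit; rc=$?
msg=""
if [ "$rc" -ne 0 ]; then
  msg="[bee-audit] WARN: audit exited with rc=$rc"
  echo "$msg"
fi
# The real wrapper ends with `exit 0` so systemd records success regardless.
unit_rc=0
```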


@@ -0,0 +1,14 @@
[Unit]
Description=Bee: bring up network interfaces via DHCP
After=local-fs.target
Before=network-online.target bee-audit.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/bee-network.sh
StandardOutput=append:/var/log/bee-network.log
StandardError=append:/var/log/bee-network.log
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,14 @@
[Unit]
Description=Bee: load NVIDIA kernel modules and create device nodes
After=local-fs.target udev.service
Before=bee-audit.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/bee-nvidia-load
StandardOutput=journal
StandardError=journal
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,12 @@
[Unit]
Description=Bee: configure SSH access (keys or password fallback)
After=local-fs.target
Before=ssh.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/bee-sshsetup
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,13 @@
export PATH="/usr/local/bin:$PATH"
if [ -z "${SSH_CONNECTION:-}" ] \
&& [ -z "${SSH_TTY:-}" ] \
&& [ "$(tty 2>/dev/null)" = "/dev/tty1" ]; then
if command -v menu >/dev/null 2>&1; then
menu
elif [ -x /usr/local/bin/bee-tui ]; then
/usr/local/bin/bee-tui
else
echo "Bee menu is unavailable."
fi
fi


@@ -3,6 +3,7 @@
 for iface in $(ip -o link show | awk -F': ' '{print $2}' | grep -v '^lo$' | grep -vE '^(docker|virbr|veth|tun|tap|br-|bond|dummy)'); do
   echo "[$iface] bringing up..."
-  ip link set "$iface" up 2>/dev/null
-  udhcpc -i "$iface" -t 5 -T 3
+  ip link set "$iface" up 2>/dev/null || true
+  dhclient -r "$iface" >/dev/null 2>&1 || true
+  dhclient -4 -v "$iface"
 done
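The interface filter above skips loopback and virtual devices by name prefix. The same pipeline, run against canned `ip -o link` output (interface names are illustrative):

```shell
# Fake `ip -o link show` output: one line per interface, "index: name: flags".
mock_links() {
  cat <<'EOF'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST> mtu 1500
4: veth1a2b: <BROADCAST,MULTICAST> mtu 1500
5: enp1s0f1: <BROADCAST,MULTICAST> mtu 1500
EOF
}
# Field 2 (split on ": ") is the name; drop lo and virtual-device prefixes.
ifaces=$(mock_links | awk -F': ' '{print $2}' \
  | grep -v '^lo$' \
  | grep -vE '^(docker|virbr|veth|tun|tap|br-|bond|dummy)')
echo "$ifaces"
```

Only `eth0` and `enp1s0f1` survive the filter; the loopback, bridge, and veth entries are dropped.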


@@ -22,9 +22,8 @@ for iface in $interfaces; do
   log "bringing up $iface"
   ip link set "$iface" up 2>/dev/null || { log "WARN: could not bring up $iface"; continue; }
-  # DHCP in background: -b forks if no immediate lease, & ensures non-blocking always.
-  # -t 0: unlimited retries, -T 3: 3s per attempt. No -q: stay running to renew lease.
-  udhcpc -i "$iface" -b -t 0 -T 3 &
+  # DHCP in background: -nw returns immediately and keeps retrying for a lease.
+  dhclient -nw "$iface" 2>/dev/null &
   log "DHCP started for $iface (pid $!)"
 done


@@ -0,0 +1,59 @@
#!/bin/sh
# bee-nvidia-load — load NVIDIA kernel modules and create device nodes
# Called by bee-nvidia.service at boot.
NVIDIA_KO_DIR="/usr/local/lib/nvidia"
log() { echo "[bee-nvidia] $*"; }
log "kernel: $(uname -r)"
if [ ! -d "$NVIDIA_KO_DIR" ]; then
log "ERROR: NVIDIA module dir missing: $NVIDIA_KO_DIR"
exit 1
fi
log "module dir: $NVIDIA_KO_DIR"
ls "$NVIDIA_KO_DIR"/*.ko 2>/dev/null | sed 's/^/ /' || true
# Load modules via insmod (direct load — no depmod needed)
for mod in nvidia nvidia-modeset nvidia-uvm; do
ko="$NVIDIA_KO_DIR/${mod}.ko"
[ -f "$ko" ] || ko="$NVIDIA_KO_DIR/$(echo "$mod" | tr '-' '_').ko"  # ${mod//-/_} is a bashism; this script runs under /bin/sh
if [ -f "$ko" ]; then
if insmod "$ko" 2>/dev/null; then
log "loaded: $mod"
else
log "WARN: failed to load: $mod"
dmesg | tail -n 5 | sed 's/^/ dmesg: /' || true
fi
else
log "WARN: not found: $ko"
fi
done
# Create /dev/nvidia* device nodes (udev rules absent since we use .run installer)
nvidia_major=$(grep -m1 ' nvidiactl$' /proc/devices 2>/dev/null | awk '{print $1}')
if [ -n "$nvidia_major" ]; then
mknod -m 666 /dev/nvidiactl c "$nvidia_major" 255 2>/dev/null \
&& log "created /dev/nvidiactl (major $nvidia_major)" \
|| log "WARN: /dev/nvidiactl already exists or mknod failed"
for i in 0 1 2 3 4 5 6 7; do
mknod -m 666 "/dev/nvidia$i" c "$nvidia_major" "$i" 2>/dev/null || true
done
log "created /dev/nvidia{0-7}"
else
log "WARN: nvidiactl not in /proc/devices — no GPU hardware present?"
fi
uvm_major=$(grep -m1 ' nvidia-uvm$' /proc/devices 2>/dev/null | awk '{print $1}')
if [ -n "$uvm_major" ]; then
mknod -m 666 /dev/nvidia-uvm c "$uvm_major" 0 2>/dev/null \
&& log "created /dev/nvidia-uvm (major $uvm_major)" \
|| log "WARN: /dev/nvidia-uvm already exists"
mknod -m 666 /dev/nvidia-uvm-tools c "$uvm_major" 1 2>/dev/null || true
else
log "WARN: nvidia-uvm not in /proc/devices"
fi
log "done"
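The device-node step keys off major numbers parsed from `/proc/devices`. The same parsing, exercised against a mock copy of that file (the major numbers below are illustrative, not what a real driver registers):

```shell
# /proc/devices lists "major name" pairs; extract the majors for the
# nvidiactl and nvidia-uvm character devices.
mock=$(mktemp)
cat > "$mock" <<'EOF'
Character devices:
  1 mem
195 nvidiactl
508 nvidia-uvm
EOF
nvidia_major=$(grep -m1 ' nvidiactl$' "$mock" 2>/dev/null | awk '{print $1}')
uvm_major=$(grep -m1 ' nvidia-uvm$' "$mock" 2>/dev/null | awk '{print $1}')
echo "nvidiactl major: ${nvidia_major:-none}"
echo "nvidia-uvm major: ${uvm_major:-none}"
rm -f "$mock"
```

The trailing `$` anchors keep `nvidiactl` from also matching `nvidia-uvm` style names, and the leading space requires a whole-word match after the major number.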


@@ -0,0 +1,38 @@
#!/bin/sh
# bee-sshsetup — configure SSH access
# Called by bee-sshsetup.service before SSH starts.
log() { echo "[bee-sshsetup] $*"; }
SSHD_DIR="/etc/ssh/sshd_config.d"
AUTH_CONF="${SSHD_DIR}/99-bee-auth.conf"
mkdir -p "$SSHD_DIR"
if [ -f /etc/bee-ssh-password-fallback ]; then
if ! id bee > /dev/null 2>&1; then
useradd -m -s /bin/sh bee > /dev/null 2>&1
fi
echo "bee:eeb" | chpasswd > /dev/null 2>&1
cat > "$AUTH_CONF" <<'EOF'
PermitRootLogin prohibit-password
PasswordAuthentication yes
KbdInteractiveAuthentication yes
ChallengeResponseAuthentication yes
UsePAM yes
EOF
log "SSH key auth unavailable — password fallback active"
log "Login: bee / eeb"
else
if id bee > /dev/null 2>&1; then
passwd -l bee > /dev/null 2>&1 || true
fi
cat > "$AUTH_CONF" <<'EOF'
PermitRootLogin prohibit-password
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
EOF
log "SSH key auth configured"
fi
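The whole branch above hinges on the presence of one flag file. A sketch of that decision with a temp path standing in for `/etc/bee-ssh-password-fallback`, so both branches can be observed without touching the real sshd config:

```shell
# Decide auth mode from a flag file, as bee-sshsetup does.
flag="$(mktemp -d)/bee-ssh-password-fallback"
auth_mode() {
  if [ -f "$1" ]; then echo "password-fallback"; else echo "key-only"; fi
}
mode_before=$(auth_mode "$flag")   # flag absent: keys only
touch "$flag"
mode_after=$(auth_mode "$flag")    # flag present: password fallback
echo "$mode_before -> $mode_after"
rm -rf "$(dirname "$flag")"
```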

iso/overlay/usr/local/bin/bee-tui Executable file → Normal file

@@ -1,577 +1,7 @@
 #!/bin/sh
-# bee-tui: interactive text menu for debug LiveCD operations.
-set -u
-if ! command -v dialog >/dev/null 2>&1; then
-  echo "ERROR: dialog is required but not installed"
-  exit 1
+if [ "$(id -u)" -ne 0 ]; then
+  exec sudo -n /usr/local/bin/bee tui --runtime livecd "$@"
 fi
pause() {
echo
printf 'Press Enter to continue... '
read -r _
}
header() {
clear
echo "=============================================="
echo " bee TUI (debug)"
echo "=============================================="
echo
}
menu_choice() {
title="$1"
prompt="$2"
shift 2
dialog --clear --stdout --title "$title" --menu "$prompt" 20 90 12 "$@"
}
list_ifaces() {
ip -o link show \
| awk -F': ' '{print $2}' \
| grep -v '^lo$' \
| grep -vE '^(docker|virbr|veth|tun|tap|br-|bond|dummy)' \
| sort
}
show_network_status() {
header
echo "Network interfaces"
echo
for iface in $(list_ifaces); do
state=$(ip -o link show "$iface" | awk '{print $9}')
ipv4=$(ip -o -4 addr show dev "$iface" | awk '{print $4}' | paste -sd ',')
[ -n "$ipv4" ] || ipv4="(no IPv4)"
echo "- $iface: state=$state ip=$ipv4"
done
echo
ip route | sed 's/^/ route: /'
pause
}
choose_interface() {
ifaces="$(list_ifaces)"
if [ -z "$ifaces" ]; then
echo "No physical interfaces found"
return 1
fi
set --
for iface in $ifaces; do
set -- "$@" "$iface" "$iface"
done
iface=$(menu_choice "Network" "Select interface" "$@") || return 1
CHOSEN_IFACE="$iface"
return 0
}
network_dhcp_one() {
header
echo "DHCP on one interface"
echo
choose_interface || { pause; return; }
iface="$CHOSEN_IFACE"
echo
echo "Starting DHCP on $iface..."
ip link set "$iface" up 2>/dev/null || true
udhcpc -i "$iface" -t 5 -T 3
pause
}
network_dhcp_all() {
header
echo "Restarting DHCP on all physical interfaces..."
echo
/usr/local/bin/bee-net-restart
pause
}
network_static_one() {
header
echo "Static IPv4 setup"
echo
choose_interface || { pause; return; }
iface="$CHOSEN_IFACE"
echo
printf 'IPv4 address (example 192.168.1.10): '
read -r ip
if [ -z "$ip" ]; then
echo "IP address is required"
pause
return
fi
# derive default gateway: first three octets of IP + .1
ip_base="$(echo "$ip" | cut -d. -f1-3)"
default_gw="${ip_base}.1"
printf 'Netmask [24]: '
read -r mask
[ -z "$mask" ] && mask="24"
prefix=$(mask_to_prefix "$mask")
if [ -z "$prefix" ]; then
echo "Invalid netmask: $mask"
pause
return
fi
cidr="$ip/$prefix"
printf 'Default gateway [%s]: ' "$default_gw"
read -r gw
[ -z "$gw" ] && gw="$default_gw"
printf 'DNS servers [77.88.8.8 77.88.8.1 1.1.1.1 8.8.8.8]: '
read -r dns
ip link set "$iface" up 2>/dev/null || true
ip addr flush dev "$iface"
if ! ip addr add "$cidr" dev "$iface"; then
echo "Failed to set IP"
pause
return
fi
if [ -n "$gw" ]; then
ip route del default >/dev/null 2>&1 || true
ip route add default via "$gw" dev "$iface"
fi
if [ -z "$dns" ]; then
dns="77.88.8.8 77.88.8.1 1.1.1.1 8.8.8.8"
fi
: > /etc/resolv.conf
for d in $dns; do
printf 'nameserver %s\n' "$d" >> /etc/resolv.conf
done
echo
echo "Static config applied to $iface"
pause
}
mask_to_prefix() {
mask="$(echo "$1" | tr -d '[:space:]')"
case "$mask" in
0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32)
echo "$mask"
return 0
;;
esac
case "$mask" in
255.0.0.0) echo 8 ;;
255.128.0.0) echo 9 ;;
255.192.0.0) echo 10 ;;
255.224.0.0) echo 11 ;;
255.240.0.0) echo 12 ;;
255.248.0.0) echo 13 ;;
255.252.0.0) echo 14 ;;
255.254.0.0) echo 15 ;;
255.255.0.0) echo 16 ;;
255.255.128.0) echo 17 ;;
255.255.192.0) echo 18 ;;
255.255.224.0) echo 19 ;;
255.255.240.0) echo 20 ;;
255.255.248.0) echo 21 ;;
255.255.252.0) echo 22 ;;
255.255.254.0) echo 23 ;;
255.255.255.0) echo 24 ;;
255.255.255.128) echo 25 ;;
255.255.255.192) echo 26 ;;
255.255.255.224) echo 27 ;;
255.255.255.240) echo 28 ;;
255.255.255.248) echo 29 ;;
255.255.255.252) echo 30 ;;
255.255.255.254) echo 31 ;;
255.255.255.255) echo 32 ;;
*) return 1 ;;
esac
}
network_menu() {
while true; do
choice=$(menu_choice "Network" "Select action" \
"1" "Show network status" \
"2" "DHCP on all interfaces" \
"3" "DHCP on one interface" \
"4" "Set static IPv4 on one interface" \
"5" "Back") || return
case "$choice" in
1) show_network_status ;;
2) network_dhcp_all ;;
3) network_dhcp_one ;;
4) network_static_one ;;
5) return ;;
*) echo "Invalid choice"; pause ;;
esac
done
}
bee_services_list() {
for path in /etc/init.d/bee-*; do
[ -e "$path" ] || continue
basename "$path"
done
}
services_status_all() {
header
echo "bee service status"
echo
for svc in $(bee_services_list); do
if rc-service "$svc" status >/dev/null 2>&1; then
echo "- $svc: running"
else
echo "- $svc: stopped"
fi
done
pause
}
choose_service() {
svcs="$(bee_services_list)"
if [ -z "$svcs" ]; then
echo "No bee-* services found"
return 1
fi
set --
for svc in $svcs; do
set -- "$@" "$svc" "$svc"
done
svc=$(menu_choice "bee Services" "Select service" "$@") || return 1
CHOSEN_SERVICE="$svc"
return 0
}
service_action_menu() {
header
echo "Service action"
echo
choose_service || { pause; return; }
svc="$CHOSEN_SERVICE"
act=$(menu_choice "Service: $svc" "Select action" \
"1" "status" \
"2" "restart" \
"3" "start" \
"4" "stop" \
"5" "toggle start/stop" \
"6" "Back") || return
case "$act" in
1) rc-service "$svc" status || true ;;
2) rc-service "$svc" restart || true ;;
3) rc-service "$svc" start || true ;;
4) rc-service "$svc" stop || true ;;
5)
if rc-service "$svc" status >/dev/null 2>&1; then
rc-service "$svc" stop || true
else
rc-service "$svc" start || true
fi
;;
6) return ;;
*) echo "Invalid action" ;;
esac
pause
}
services_menu() {
while true; do
choice=$(menu_choice "bee Services" "Select action" \
"1" "Status of all bee-* services" \
"2" "Manage one service (status/restart/start/stop/toggle)" \
"3" "Back") || return
case "$choice" in
1) services_status_all ;;
2) service_action_menu ;;
3) return ;;
*) echo "Invalid choice"; pause ;;
esac
done
}
confirm_phrase() {
phrase="$1"
prompt="$2"
echo
printf '%s (%s): ' "$prompt" "$phrase"
read -r value
[ "$value" = "$phrase" ]
}
shutdown_menu() {
while true; do
choice=$(menu_choice "Shutdown/Reboot Tests" "Select action" \
"1" "Reboot now" \
"2" "Power off now" \
"3" "Schedule poweroff in 60s" \
"4" "Cancel scheduled shutdown" \
"5" "IPMI chassis power status" \
"6" "IPMI chassis power soft" \
"7" "IPMI chassis power cycle" \
"8" "Back") || return
case "$choice" in
1)
confirm_phrase "REBOOT" "Type confirmation" || { echo "Canceled"; pause; continue; }
reboot
;;
2)
confirm_phrase "POWEROFF" "Type confirmation" || { echo "Canceled"; pause; continue; }
poweroff
;;
3)
confirm_phrase "SCHEDULE" "Type confirmation" || { echo "Canceled"; pause; continue; }
shutdown -P +1 "bee test: scheduled poweroff in 60 seconds"
echo "Scheduled"
pause
;;
4)
shutdown -c || true
echo "Canceled (if any schedule existed)"
pause
;;
5)
ipmitool chassis power status || echo "ipmitool power status failed"
pause
;;
6)
confirm_phrase "IPMI-SOFT" "Type confirmation" || { echo "Canceled"; pause; continue; }
ipmitool chassis power soft || echo "ipmitool soft power failed"
pause
;;
7)
confirm_phrase "IPMI-CYCLE" "Type confirmation" || { echo "Canceled"; pause; continue; }
ipmitool chassis power cycle || echo "ipmitool power cycle failed"
pause
;;
8)
return
;;
*)
echo "Invalid choice"
pause
;;
esac
done
}
gpu_burn_10m() {
header
echo "GPU Burn (10 minutes)"
echo
if ! command -v gpu_burn >/dev/null 2>&1; then
echo "gpu_burn binary not found in PATH"
echo "Expected command: gpu_burn"
pause
return
fi
if ! command -v nvidia-smi >/dev/null 2>&1 || ! nvidia-smi -L >/dev/null 2>&1; then
echo "NVIDIA driver/GPU not ready (nvidia-smi failed)"
pause
return
fi
confirm_phrase "GPU-BURN" "Type confirmation to start benchmark" || { echo "Canceled"; pause; return; }
echo "Running: gpu_burn 600"
echo "Log: /var/log/bee-gpuburn.log"
gpu_burn 600 2>&1 | tee /var/log/bee-gpuburn.log
echo
echo "GPU Burn finished"
pause
}
gpu_benchmarks_menu() {
while true; do
choice=$(menu_choice "Benchmarks -> GPU" "Select action" \
"1" "GPU Burn (10 minutes)" \
"2" "Back") || return
case "$choice" in
1) gpu_burn_10m ;;
2) return ;;
*) echo "Invalid choice"; pause ;;
esac
done
}
benchmarks_menu() {
while true; do
choice=$(menu_choice "Benchmarks" "Select category" \
"1" "GPU" \
"2" "Back") || return
case "$choice" in
1) gpu_benchmarks_menu ;;
2) return ;;
*) echo "Invalid choice"; pause ;;
esac
done
}
run_cmd_log() {
label="$1"
cmd="$2"
log_file="$3"
{
echo "=== $label ==="
echo "time: $(date -u '+%Y-%m-%dT%H:%M:%SZ')"
echo "cmd: $cmd"
echo
sh -c "$cmd"
} >"$log_file" 2>&1
return $?
}
run_gpu_nvidia_acceptance_test() {
header
echo "System acceptance tests -> GPU NVIDIA"
echo
confirm_phrase "SAT-GPU" "Type confirmation to start tests" || { echo "Canceled"; pause; return; }
ts="$(date -u '+%Y%m%d-%H%M%S')"
base_dir="/var/log/bee-sat"
run_dir="$base_dir/gpu-nvidia-$ts"
archive="$base_dir/gpu-nvidia-$ts.tar.gz"
mkdir -p "$run_dir"
summary="$run_dir/summary.txt"
: >"$summary"
echo "Running acceptance commands..."
echo "Logs directory: $run_dir"
echo "Archive target: $archive"
echo
c1="nvidia-smi -q"
c2="dmidecode -t baseboard"
c3="dmidecode -t system"
c4="nvidia-bug-report.sh --output $run_dir/nvidia-bug-report.log"
run_cmd_log "nvidia_smi_q" "$c1" "$run_dir/01-nvidia-smi-q.log"; rc1=$?
run_cmd_log "dmidecode_baseboard" "$c2" "$run_dir/02-dmidecode-baseboard.log"; rc2=$?
run_cmd_log "dmidecode_system" "$c3" "$run_dir/03-dmidecode-system.log"; rc3=$?
run_cmd_log "nvidia_bug_report" "$c4" "$run_dir/04-nvidia-bug-report.log"; rc4=$?
{
echo "run_at_utc=$(date -u '+%Y-%m-%dT%H:%M:%SZ')"
echo "cmd_nvidia_smi_q_rc=$rc1"
echo "cmd_dmidecode_baseboard_rc=$rc2"
echo "cmd_dmidecode_system_rc=$rc3"
echo "cmd_nvidia_bug_report_rc=$rc4"
} >>"$summary"
tar -czf "$archive" -C "$base_dir" "gpu-nvidia-$ts"
tar_rc=$?
echo "archive_rc=$tar_rc" >>"$summary"
echo
echo "Done."
echo "- Logs: $run_dir"
echo "- Archive: $archive (rc=$tar_rc)"
pause
}
gpu_nvidia_sat_menu() {
while true; do
choice=$(menu_choice "System acceptance tests -> GPU NVIDIA" "Select action" \
"1" "Run command pack" \
"2" "Back") || return
case "$choice" in
1) run_gpu_nvidia_acceptance_test ;;
2) return ;;
*) echo "Invalid choice"; pause ;;
esac
done
}
system_acceptance_tests_menu() {
while true; do
choice=$(menu_choice "System acceptance tests" "Select category" \
"1" "GPU NVIDIA" \
"2" "Back") || return
case "$choice" in
1) gpu_nvidia_sat_menu ;;
2) return ;;
*) echo "Invalid choice"; pause ;;
esac
done
}
run_audit_now() {
header
echo "Run audit now"
echo
/usr/local/bin/audit --output stdout > /var/log/bee-audit.json 2>/var/log/bee-audit.log
rc=$?
if [ "$rc" -eq 0 ]; then
echo "Audit completed successfully"
else
echo "Audit finished with errors (rc=$rc)"
fi
echo "Logs: /var/log/bee-audit.log, /var/log/bee-audit.json"
pause
}
check_required_tools() {
header
echo "Required tools check"
echo
for tool in dmidecode smartctl nvme ipmitool lspci audit nvidia-smi gpu_burn dialog; do
if command -v "$tool" >/dev/null 2>&1; then
echo "- $tool: OK ($(command -v "$tool"))"
else
echo "- $tool: MISSING"
fi
done
pause
}
main_menu() {
while true; do
choice=$(menu_choice "Bee TUI (debug)" "Select action" \
"1" "Network setup" \
"2" "bee service management" \
"3" "Shutdown/reboot tests" \
"4" "Benchmarks" \
"5" "System acceptance tests" \
"6" "Run audit now" \
"7" "Check required tools" \
"8" "Show last audit log tail" \
"9" "Exit to console") || exit 0
case "$choice" in
1) network_menu ;;
2) services_menu ;;
3) shutdown_menu ;;
4) benchmarks_menu ;;
5) system_acceptance_tests_menu ;;
6) run_audit_now ;;
7) check_required_tools ;;
8)
header
tail -n 40 /var/log/bee-audit.log 2>/dev/null || echo "No /var/log/bee-audit.log"
echo
tail -n 20 /var/log/bee-audit.json 2>/dev/null || true
pause
;;
9) exit 0 ;;
*) echo "Invalid choice"; pause ;;
esac
done
}
-main_menu
+exec /usr/local/bin/bee tui --runtime livecd "$@"


@@ -1,5 +1,5 @@
 #!/bin/sh
-# run-builder.sh — trigger debug ISO build on remote Alpine builder VM
+# run-builder.sh — trigger ISO build on remote Debian 12 builder VM
 #
 # Usage:
 #   sh scripts/run-builder.sh
@@ -18,10 +18,15 @@ if [ -f "$ENV_FILE" ]; then
 fi
 BUILDER_HOST="${BUILDER_HOST:-}"
+BUILDER_USER="${BUILDER_USER:-}"
 if [ -z "$BUILDER_HOST" ]; then
   echo "ERROR: BUILDER_HOST not set. Copy .env.example to .env and set the address."
   exit 1
 fi
+if [ -z "$BUILDER_USER" ]; then
+  echo "ERROR: BUILDER_USER not set. Set it in .env."
+  exit 1
+fi
 EXTRA_ARGS=""
 while [ $# -gt 0 ]; do
@@ -32,7 +37,7 @@ while [ $# -gt 0 ]; do
 done
 echo "=== bee builder ==="
-echo "Builder: ${BUILDER_HOST}"
+echo "Builder: ${BUILDER_USER}@${BUILDER_HOST}"
 echo ""
# --- check local repo is in sync with remote ---
@@ -53,10 +58,10 @@ fi
 echo "repo: in sync with remote ($LOCAL)"
 echo ""
-ssh -o StrictHostKeyChecking=no root@"${BUILDER_HOST}" /bin/sh <<ENDSSH
+ssh -o StrictHostKeyChecking=no "${BUILDER_USER}@${BUILDER_HOST}" /bin/sh <<ENDSSH
 set -e
-REPO=/root/bee
-LOG=/var/log/bee-build.log
+REPO="/home/${BUILDER_USER}/bee"
+LOG=/tmp/bee-build.log
 if [ ! -d "\$REPO/.git" ]; then
   echo "--- cloning bee repo ---"
@@ -65,26 +70,27 @@ fi
 cd "\$REPO"
 echo "--- pulling latest ---"
-git checkout -- .
+sudo git checkout -- .
 git pull --ff-only
-chmod +x iso/overlay/etc/init.d/* iso/overlay/usr/local/bin/* 2>/dev/null || true
+chmod +x iso/overlay/usr/local/bin/* 2>/dev/null || true
 # Kill any previous build session
 screen -S bee-build -X quit 2>/dev/null || true
 echo "--- starting build in screen session (survives SSH disconnect) ---"
 echo "--- log: \$LOG ---"
-screen -dmS bee-build sh -c "sh iso/builder/build.sh ${EXTRA_ARGS} > \$LOG 2>&1; echo \$? > /tmp/bee-build-exit"
+screen -dmS bee-build sh -c "sudo sh iso/builder/build.sh ${EXTRA_ARGS} > \$LOG 2>&1; echo \$? > /tmp/bee-build-exit"
 # Stream log until build finishes
 echo "--- streaming build log (Ctrl+C safe — build continues on VM) ---"
-while screen -list | grep -q bee-build; do
-  tail -n +1 -f "\$LOG" 2>/dev/null & TAIL_PID=\$!
-  sleep 1
-  while screen -list | grep -q bee-build; do sleep 2; done
-  kill \$TAIL_PID 2>/dev/null || true
-  break
+tail -n +1 -f "\$LOG" 2>/dev/null &
+TAIL_PID=\$!
+while screen -list 2>/dev/null | grep -q bee-build; do
+  sleep 2
 done
+sleep 1
+kill \$TAIL_PID 2>/dev/null || true
 tail -n 20 "\$LOG" 2>/dev/null || true
 EXIT_CODE=\$(cat /tmp/bee-build-exit 2>/dev/null || echo 1)
@@ -95,14 +101,14 @@ echo ""
 echo "=== downloading ISO ==="
 LOCAL_ISO_DIR="${REPO_ROOT}/iso/out"
 mkdir -p "${LOCAL_ISO_DIR}"
-if command -v rsync >/dev/null 2>&1 && ssh -o StrictHostKeyChecking=no root@"${BUILDER_HOST}" command -v rsync >/dev/null 2>&1; then
+if command -v rsync >/dev/null 2>&1 && ssh -o StrictHostKeyChecking=no "${BUILDER_USER}@${BUILDER_HOST}" command -v rsync >/dev/null 2>&1; then
   rsync -az --progress \
     -e "ssh -o StrictHostKeyChecking=no" \
-    "root@${BUILDER_HOST}:/root/bee/dist/*.iso" \
+    "${BUILDER_USER}@${BUILDER_HOST}:/home/${BUILDER_USER}/bee/dist/*.iso" \
     "${LOCAL_ISO_DIR}/"
 else
   scp -o StrictHostKeyChecking=no \
-    "root@${BUILDER_HOST}:/root/bee/dist/*.iso" \
+    "${BUILDER_USER}@${BUILDER_HOST}:/home/${BUILDER_USER}/bee/dist/*.iso" \
     "${LOCAL_ISO_DIR}/"
 fi
 echo ""


@@ -1,5 +1,5 @@
 #!/bin/sh
-# Local integration test for bee audit binary (plan step 1.12).
+# Local integration test for `bee audit` (plan step 1.12).
 # Runs audit on current machine and validates required JSON fields.
 set -eu
@@ -17,10 +17,10 @@ if ! command -v go >/dev/null 2>&1; then
   exit 1
 fi
-echo "[test-local] running audit -> $OUT_FILE"
+echo "[test-local] running bee audit -> $OUT_FILE"
 (
   cd "$ROOT_DIR/audit"
-  go run ./cmd/audit --output "file:$OUT_FILE"
+  go run ./cmd/bee audit --output "file:$OUT_FILE"
 )
 if [ ! -s "$OUT_FILE" ]; then