Compare commits
5 Commits
34b457d654 ... 72e10622ba

Commits in range:
- 72e10622ba
- 0e61346d20
- a38c35ce2d
- c73ece6c7c
- 456c1f022c
@@ -24,6 +24,7 @@ rules/patterns/ — shared engineering rule contracts
go-background-tasks/ — Task Manager pattern, polling
go-code-style/ — layering, error wrapping, startup sequence
go-project-bible/ — how to write and maintain a project bible
bom-decomposition/ — one BOM row to many component/LOT mappings
import-export/ — CSV Excel-compatible format, streaming export
table-management/ — toolbar, filtering, pagination
modal-workflows/ — state machine, htmx pattern, confirmation
142 rules/patterns/alpine-livecd/contract.md Normal file
@@ -0,0 +1,142 @@
# Contract: Alpine LiveCD Build

Version: 1.0

## Purpose

Rules for building bootable Alpine Linux ISO images with custom overlays using `mkimage.sh`.
Applies to any project that needs a LiveCD: hardware audit, rescue environments, kiosks.

---
## mkimage Profile

Every project must have a profile file `mkimg.<name>.sh` defining:

```sh
profile_<name>() {
    arch="x86_64"  # REQUIRED — without this mkimage silently skips the profile
    hostname="<hostname>"
    apkovl="genapkovl-<name>.sh"
    image_ext="iso"
    output_format="iso"
    kernel_flavors="lts"
    initfs_cmdline="modules=loop,squashfs,sd-mod,usb-storage quiet"
    initfs_features="ata base cdrom ext4 mmc nvme raid scsi squashfs usb virtio"
    grub_mod="all_video disk part_gpt part_msdos linux normal configfile search search_label efi_gop fat iso9660 cat echo ls test true help gzio"
    apks="alpine-base linux-lts linux-firmware-none ..."
}
```

**`arch` is mandatory.** If it is missing, mkimage silently builds nothing and exits 0.

---
## apkovl Mechanism

The apkovl is a `.tar.gz` overlay extracted by the initramfs at boot, overlaying `/etc`, `/usr`, and `/root`.

`genapkovl-<name>.sh` generates the tarball:
- It must be in the **CWD** when mkimage runs — not only in `~/.mkimage/`
- `~/.mkimage/` is searched for mkimg profiles only, not for genapkovl scripts

```sh
# Copy both scripts to ~/.mkimage AND to the CWD (typically /var/tmp)
cp "genapkovl-<name>.sh" ~/.mkimage/
cp "genapkovl-<name>.sh" /var/tmp/
cd /var/tmp
sh mkimage.sh --workdir /var/tmp/work ...
```

---
## Build Environment

**Always use `/var/tmp`, not `/tmp`:**

```sh
export TMPDIR=/var/tmp
cd /var/tmp
sh mkimage.sh ...
```

`/tmp` on Alpine builder VMs is typically a 1 GB tmpfs; the kernel firmware squashfs alone exceeds this.
`/var/tmp` uses actual disk space.

---
## Workdir Caching

mkimage stores each ISO section in a hash-named subdirectory. Preserve expensive sections across builds:

```sh
# Delete everything EXCEPT the cached sections:
#   apks_*     — downloaded packages
#   kernel_*   — modloop squashfs
#   syslinux_* — syslinux bootloader
#   grub_*     — grub EFI
# (Comments must not follow the line-continuation backslashes below.)
if [ -d /var/tmp/bee-iso-work ]; then
    find /var/tmp/bee-iso-work -maxdepth 1 -mindepth 1 \
        -not -name 'apks_*' \
        -not -name 'kernel_*' \
        -not -name 'syslinux_*' \
        -not -name 'grub_*' \
        -exec rm -rf {} +
fi
```

The apkovl section is always regenerated (it contains project-specific config that changes per build).

---
## Squashfs Compression

The default compression is `xz` — slow but small. For RAM-loaded modloops, size rarely matters.
Use `lz4` for faster builds:

```sh
mkdir -p /etc/mkinitfs
grep -q 'MKSQUASHFS_OPTS' /etc/mkinitfs/mkinitfs.conf 2>/dev/null || \
    echo 'MKSQUASHFS_OPTS="-comp lz4 -Xhc"' >> /etc/mkinitfs/mkinitfs.conf
```

Apply this before running mkimage. The modloop is rebuilt only when the kernel version changes.

---
## Long Builds

NVIDIA driver downloads, kernel compiles, and package fetches can take 10–30 minutes.
Run them in a `screen` session so builds survive SSH disconnects:

```sh
apk add screen
screen -dmS build sh -c "sh build.sh > /var/log/build.log 2>&1"
tail -f /var/log/build.log
```

---
## NIC Firmware

`linux-firmware-none` (the default) contains zero firmware files. Real hardware NICs often require firmware.
Include firmware packages matching the expected hardware:

```
linux-firmware-intel      # Intel NICs (X710, E810, etc.)
linux-firmware-mellanox   # Mellanox/NVIDIA ConnectX
linux-firmware-bnx2x      # Broadcom NetXtreme
linux-firmware-rtl_nic    # Realtek
linux-firmware-other      # catch-all
```

---
## Versioning

Pin all versions in a single `VERSIONS` file sourced by all build scripts:

```sh
ALPINE_VERSION=3.21
KERNEL_VERSION=6.12
GO_VERSION=1.23.6
NVIDIA_DRIVER_VERSION=590.48.01
```

Never hardcode versions inside build scripts.
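The "sourced by all build scripts" rule can be sketched as follows. This is an illustrative demo, not the real build script: the `VERSIONS` file is recreated inline only so the sketch runs standalone.

```shell
#!/bin/sh
# Illustrative sketch: build scripts never hardcode versions; they source
# the single pinned VERSIONS file instead.
set -eu

# Demo setup only; in the repo this file is committed once at the root.
cat > VERSIONS <<'EOF'
ALPINE_VERSION=3.21
KERNEL_VERSION=6.12
GO_VERSION=1.23.6
EOF

. ./VERSIONS   # POSIX "dot" sourcing works in ash, dash, and bash

echo "alpine=${ALPINE_VERSION} kernel=${KERNEL_VERSION} go=${GO_VERSION}"
```

Any script that needs a version sources the same file, so a version bump is a one-line change.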
@@ -8,6 +8,46 @@ Version: 1.0
---

## Host Layout

> This rule applies when the AI deploys the application or runs commands on the build machine itself (deployment, file copying, starting services).

The application binary is placed in the directory:

```
/appdata/<appname>/
```

where `<appname>` is the application name (lowercase, no spaces).

Example: application `myservice` → `/appdata/myservice/myservice`.

All files tied to a specific application (the binary, helper launch scripts, `docker-compose.yml`) live inside this directory. Config and data follow the rules of the sections below.

### Deployment Examples

When deploying, copying files, or starting services, the AI **always defaults** to this path:

```bash
# Copy the binary
scp bin/myservice user@host:/appdata/myservice/myservice

# Copy docker-compose
scp docker-compose.yml user@host:/appdata/myservice/docker-compose.yml

# Start on the host
ssh user@host "cd /appdata/myservice && docker compose up -d"
```

```bash
# Create the directory if it does not exist
ssh user@host "mkdir -p /appdata/myservice"
```

Do not suggest alternative paths (`/opt/`, `/usr/local/bin/`, `~/`) — only `/appdata/<appname>/`.

---

## Binary

The binary is self-contained — all resources are embedded via `//go:embed`:
302 rules/patterns/bom-decomposition/contract.md Normal file
@@ -0,0 +1,302 @@
# Contract: BOM Decomposition Mapping

Version: 1.0

## Purpose

Defines the canonical way to represent a BOM row that decomposes one external/vendor item into
multiple internal component or LOT rows.

This is not an alternate-choice mapping.
All mappings in the row apply simultaneously.

Use this contract when:
- one vendor part number expands into multiple LOTs
- one bundle SKU expands into multiple internal components
- one external line item contributes quantities to multiple downstream rows
## Canonical Data Model

One BOM row has one item quantity and zero or more mapping entries:

```json
{
  "sort_order": 10,
  "item_code": "SYS-821GE-TNHR",
  "quantity": 3,
  "description": "Vendor bundle",
  "unit_price": 12000.00,
  "total_price": 36000.00,
  "component_mappings": [
    { "component_ref": "CHASSIS_X13_8GPU", "quantity_per_item": 1 },
    { "component_ref": "PS_3000W_Titanium", "quantity_per_item": 2 },
    { "component_ref": "RAILKIT_X13", "quantity_per_item": 1 }
  ]
}
```

Rules:
- `component_mappings[]` is the only canonical persisted decomposition format.
- Each mapping entry contains:
  - `component_ref` — stable identifier of the downstream component/LOT
  - `quantity_per_item` — how many units of that component are produced by one BOM row unit
- Derived or UI-only fields may exist at runtime, but they are not the source of truth.

Project-specific names are allowed if the semantics stay identical:
- `item_code` may be `vendor_partnumber`
- `component_ref` may be `lot_name`, `lot_code`, or another stable project identifier
- `component_mappings` may be `lot_mappings`
## Quantity Semantics

The total downstream quantity is always:

```text
downstream_total_qty = row.quantity * mapping.quantity_per_item
```

Example:
- BOM row quantity = `3`
- mapping A quantity per item = `1`
- mapping B quantity per item = `2`

Result:
- component A total = `3`
- component B total = `6`

This multiplication rule is mandatory for estimate/cart/build expansion.
## Persistence Contract

The source of truth is the persisted BOM row JSON payload.

If the project stores BOM rows:
- in a SQL JSON column, the JSON payload is the source of truth
- in a text column containing JSON, that JSON payload is the source of truth
- in an API document later persisted as JSON, the row payload shape must remain unchanged

Example persisted payload:

```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "quantity": 2,
      "description": "Bundle",
      "lot_mappings": [
        { "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
        { "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
      ]
    }
  ]
}
```

Persistence rules:
- the decomposition must be stored inside each BOM row
- all mapping entries for that row must live in one array field
- no secondary storage format may act as a competing source of truth
## API Contract

API read and write payloads must expose the same decomposition shape that is persisted.

Rules:
- `GET` returns BOM rows with `component_mappings[]` or the project-specific equivalent
- `PUT` / `POST` accepts the same shape
- rebuild/apply/cart expansion must read only from the persisted mapping array
- if the mapping array is empty, the row contributes nothing downstream
- row order is defined by `sort_order`
- mapping entry order may be preserved for UX, but business logic must not depend on it

Correct:

```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "quantity": 2,
      "lot_mappings": [
        { "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
        { "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
      ]
    }
  ]
}
```

Wrong:

```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "primary_lot": "LOT_CPU",
      "secondary_lots": ["LOT_RAIL"]
    }
  ]
}
```
## UI Invariants

The UI may render the mapping list in any layout, but it must preserve the same semantics.

Rules:
- the first visible mapping row is not special; it is only the first entry in the array
- additional rows may be added via `+`, a modal, inline insert, or another UI affordance
- every mapping row is equally editable and removable
- `quantity_per_item` is edited per mapping row, not once for the whole row
- blank mapping rows may exist temporarily in draft UI state, but they must not be persisted
- new UI rows should default `quantity_per_item` to `1`
## Normalization and Validation

Two stages are allowed:
- draft UI normalization for convenience
- server-side persistence validation for correctness

Canonical rules before persistence:
- trim `component_ref`
- drop rows with an empty `component_ref`
- reject `quantity_per_item <= 0` with a validation error
- merge duplicate `component_ref` values within one BOM row by summing `quantity_per_item`
- preserve first-seen order when merging duplicates

Example input:

```json
[
  { "component_ref": "LOT_A", "quantity_per_item": 1 },
  { "component_ref": " LOT_A ", "quantity_per_item": 2 },
  { "component_ref": "", "quantity_per_item": 5 }
]
```

Normalized result:

```json
[
  { "component_ref": "LOT_A", "quantity_per_item": 3 }
]
```

Why validation instead of silent repair:
- API contracts between applications must fail loudly on invalid quantities
- the UI may prefill `1`, but the server must not silently reinterpret `0` or negative values
## Forbidden Patterns

Do not introduce incompatible storage or logic variants such as:
- `primary_lot`, `secondary_lots`, `main_component`, `bundle_lots`
- one field for the component and a separate field for its quantity outside the mapping array
- special-case logic where the first mapping row is "main" and later rows are optional add-ons
- computing downstream rows from temporary UI fields instead of the persisted mapping array
- storing the same decomposition in multiple shapes at once
## Reference Go Types

```go
type BOMItem struct {
	SortOrder         int                `json:"sort_order"`
	ItemCode          string             `json:"item_code"`
	Quantity          int                `json:"quantity"`
	Description       string             `json:"description,omitempty"`
	UnitPrice         *float64           `json:"unit_price,omitempty"`
	TotalPrice        *float64           `json:"total_price,omitempty"`
	ComponentMappings []ComponentMapping `json:"component_mappings,omitempty"`
}

type ComponentMapping struct {
	ComponentRef    string `json:"component_ref"`
	QuantityPerItem int    `json:"quantity_per_item"`
}
```

Project-specific aliases are acceptable if they preserve identical semantics:

```go
type VendorSpecItem struct {
	SortOrder        int                    `json:"sort_order"`
	VendorPartnumber string                 `json:"vendor_partnumber"`
	Quantity         int                    `json:"quantity"`
	Description      string                 `json:"description,omitempty"`
	UnitPrice        *float64               `json:"unit_price,omitempty"`
	TotalPrice       *float64               `json:"total_price,omitempty"`
	LotMappings      []VendorSpecLotMapping `json:"lot_mappings,omitempty"`
}

type VendorSpecLotMapping struct {
	LotName       string `json:"lot_name"`
	QuantityPerPN int    `json:"quantity_per_pn"`
}
```
## Reference Normalization (Go)

```go
func NormalizeComponentMappings(in []ComponentMapping) ([]ComponentMapping, error) {
	if len(in) == 0 {
		return nil, nil
	}

	merged := map[string]int{}
	order := make([]string, 0, len(in))

	for _, m := range in {
		ref := strings.TrimSpace(m.ComponentRef)
		if ref == "" {
			continue
		}
		if m.QuantityPerItem <= 0 {
			return nil, fmt.Errorf("component %q has invalid quantity_per_item %d", ref, m.QuantityPerItem)
		}
		if _, exists := merged[ref]; !exists {
			order = append(order, ref)
		}
		merged[ref] += m.QuantityPerItem
	}

	out := make([]ComponentMapping, 0, len(order))
	for _, ref := range order {
		out = append(out, ComponentMapping{
			ComponentRef:    ref,
			QuantityPerItem: merged[ref],
		})
	}
	if len(out) == 0 {
		return nil, nil
	}
	return out, nil
}
```
## Reference Expansion (Go)

```go
type CartItem struct {
	ComponentRef string
	Quantity     int
}

func ExpandBOMRow(row BOMItem) []CartItem {
	result := make([]CartItem, 0, len(row.ComponentMappings))
	for _, m := range row.ComponentMappings {
		qty := row.Quantity * m.QuantityPerItem
		if qty <= 0 {
			continue
		}
		result = append(result, CartItem{
			ComponentRef: m.ComponentRef,
			Quantity:     qty,
		})
	}
	return result
}
```
56 rules/patterns/kiss/contract.md Normal file
@@ -0,0 +1,56 @@
# Contract: Keep It Simple

Version: 1.0

## Principle

Working solutions do not need to be interesting.

Prefer the simplest solution that correctly solves the problem. Complexity must be justified by a real, present requirement — not by anticipation of future needs or the desire to use a particular technique.

## Rules

- Choose boring technology. A well-understood, dull solution beats a clever one.
- Do not introduce abstractions, patterns, or frameworks before they are needed by at least two concrete use cases.
- Do not design for hypothetical future requirements. Build for what exists now.
- Prefer sequential, readable code over clever one-liners.
- If you can delete code and the system still works, delete it.
- Extra configurability, generalization, and extensibility are costs, not features. Add them only when explicitly required.

## Anti-patterns

- Adding helpers or utilities for one-time operations.
- Wrapping simple logic in interfaces "for testability" when a direct call works.
- Using a framework or library to solve a problem the standard library already handles.
- Writing error handling, fallbacks, or validation for scenarios that cannot happen.
- Refactoring working code because it "could be cleaner."

## Bulletproof features

A feature must be correct by construction, not by circumstance.

Do not write mechanisms that silently rely on:
- another feature being in a specific state,
- input data having a particular shape that "usually" holds,
- a certain call order or timing,
- a global flag, ambient variable, or external condition being set upstream.

Such mechanisms are thin: they work only when the world cooperates. When any surrounding assumption shifts, they break in ways that are hard to trace. This is the primary source of bugs.

**Design rules:**

- A feature owns its preconditions. If it requires data in a certain state, it must enforce or produce that state itself — not inherit it silently from a caller.
- Never write logic that only works if a sibling feature runs first and succeeds. If coordination is needed, make it explicit (a parameter, a return value, a clear contract).
- Avoid implicit state machines — sequences where operations must happen in the right order with no enforcement. Either enforce the order structurally or eliminate the dependency.
- Prefer thick, unconditional logic over thin conditional chains that assume stable context. A mechanism that always does the right thing is more reliable than one that does the right thing only when conditions are favorable.

A feature is done when it is correct on its own, not when it is correct given that everything else is also correct.
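A minimal Go sketch of the thin-vs-thick distinction (function names are illustrative, not from any codebase): the first helper silently assumes the caller sorted the input, while the second owns its precondition.

```go
package main

import (
	"fmt"
	"sort"
)

// Thin: correct only if the caller remembered to sort the slice first.
// When that assumption shifts, it silently returns wrong answers.
func containsAssumingSorted(xs []int, target int) bool {
	i := sort.SearchInts(xs, target)
	return i < len(xs) && xs[i] == target
}

// Thick: the function owns its precondition. It sorts a copy itself,
// so it is correct regardless of what the caller did.
func contains(xs []int, target int) bool {
	sorted := append([]int(nil), xs...)
	sort.Ints(sorted)
	i := sort.SearchInts(sorted, target)
	return i < len(sorted) && sorted[i] == target
}

func main() {
	unsorted := []int{9, 1, 5}
	fmt.Println(containsAssumingSorted(unsorted, 1)) // false: assumption violated
	fmt.Println(contains(unsorted, 1))               // true: precondition enforced
}
```

The thick version costs a copy and a sort, but it is correct on its own rather than correct given that every caller cooperates.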
## Checklist before committing

1. Could this be done with fewer lines without losing clarity?
2. Does every abstraction here have more than one caller?
3. Is any of this code handling a case that cannot actually occur?
4. Did I add anything beyond what was asked?

If the answer to any of 1–4 is "yes," simplify before committing.
149 rules/patterns/release-signing/contract.md Normal file
@@ -0,0 +1,149 @@
# Contract: Release Signing

Version: 1.0

## Purpose

Ed25519 asymmetric signing for Go release binaries.
Guarantees that a binary accepted by a running application was produced by a trusted developer.
Applies to any Go binary that is distributed or supports self-update.

---

## Key Management

Public keys are stored in the centralized keys repository: `git.mchus.pro/mchus/keys`

```
keys/
  developers/
    <name>.pub            ← raw Ed25519 public key, base64-encoded, one line per developer
  scripts/
    keygen.sh             ← generates a keypair
    sign-release.sh       ← signs a binary
    verify-signature.sh   ← verifies locally
```

Public keys are safe to commit. Private keys stay on each developer's machine — never committed, never shared.

**Adding a developer:** add their `.pub` file → commit → rebuild affected releases.
**Removing a developer:** delete their `.pub` file → commit → rebuild releases.
Binaries previously signed with a removed key remain valid (they are already distributed), but that key cannot sign new releases.

---

## Multi-Key Trust Model

A binary is accepted if its signature verifies against **any** of the embedded trusted public keys.
This mirrors the SSH `authorized_keys` model.

- One developer signs a release with their private key → produces one `.sig` file.
- The binary trusts all active developers — any of them can make a valid release.
- Signature format: raw 64-byte Ed25519 signature (not PEM, not armored).
---

## Embedding Keys at Build Time

Public keys are injected via `-ldflags` at release build time — not hardcoded in the source.
This allows adding/removing developers without changing application source code.

```go
// internal/updater/trust.go

// trustedKeysRaw is injected at build time via -ldflags.
// Format: base64(key1):base64(key2):...
// Empty string = dev build, updates disabled.
var trustedKeysRaw string

func trustedKeys() ([]ed25519.PublicKey, error) {
	if trustedKeysRaw == "" {
		return nil, fmt.Errorf("dev build: trusted keys not embedded, updates disabled")
	}
	var keys []ed25519.PublicKey
	for _, enc := range strings.Split(trustedKeysRaw, ":") {
		b, err := base64.StdEncoding.DecodeString(strings.TrimSpace(enc))
		if err != nil || len(b) != ed25519.PublicKeySize {
			return nil, fmt.Errorf("invalid trusted key: %w", err)
		}
		keys = append(keys, ed25519.PublicKey(b))
	}
	return keys, nil
}
```

The release build script injects all current developer keys:

```sh
# scripts/build-release.sh
KEYS=$(paste -sd: /path/to/keys/developers/*.pub)
go build \
    -ldflags "-s -w -X <module>/internal/updater.trustedKeysRaw=${KEYS}" \
    -o dist/<binary>-linux-amd64 \
    ./cmd/<binary>
```

Dev build (no `-ldflags` injection): `trustedKeysRaw` is empty → updates are disabled, and the binary works normally.
---

## Signature Verification (stdlib only, no external tools)

Use `crypto/ed25519` from the Go standard library. No third-party dependencies.

```go
// internal/updater/trust.go

func verifySignature(binaryPath, sigPath string) error {
	keys, err := trustedKeys()
	if err != nil {
		return err // dev build or misconfiguration
	}
	data, err := os.ReadFile(binaryPath)
	if err != nil {
		return fmt.Errorf("read binary: %w", err)
	}
	sig, err := os.ReadFile(sigPath)
	if err != nil {
		return fmt.Errorf("read signature: %w", err)
	}
	for _, key := range keys {
		if ed25519.Verify(key, data, sig) {
			return nil // any trusted key accepts → pass
		}
	}
	return fmt.Errorf("signature verification failed: no trusted key matched")
}
```

Rejection behavior: log a WARNING and continue with the current binary. Never crash, never block operation.
---

## Release Asset Convention

Every release must attach two files to the Gitea release:

```
<binary>-linux-amd64       ← the binary
<binary>-linux-amd64.sig   ← raw 64-byte Ed25519 signature
```

Signing:

```sh
sh keys/scripts/sign-release.sh <developer-name> dist/<binary>-linux-amd64
```

Both files are uploaded to the Gitea release as downloadable assets.

---

## Rules

- Never hardcode public keys as string literals in source code — always use ldflags injection.
- Never commit private keys (`.key` files) anywhere.
- A binary built without ldflags injection must work normally — it just cannot perform verified updates.
- Signature verification failure must be a logged warning, not a crash or a user-visible error.
- Use `crypto/ed25519` (stdlib) only — no external signing libraries.
- The `.sig` file contains raw 64 bytes (not base64, not PEM), produced by `openssl pkeyutl -sign -rawin`.
21 rules/patterns/task-discipline/contract.md Normal file
@@ -0,0 +1,21 @@
# Contract: Task Discipline

Version: 1.0

## Principle

Finish before switching. A task is not done until it reaches a logical end.

## Rules

- Do not start a new task while the current one is unfinished. Switching mid-task leaves half-done work that is harder to recover than if it had never been started.
- If a new idea or requirement surfaces during work, note it and address it after the current task is complete.
- "Logical end" means: the change works, is committed, and leaves the codebase in a coherent state — not just "the immediate code compiles."
- Do not open new files, refactor adjacent code, or fix unrelated issues while implementing a specific task. Stay focused on the defined scope.
- If the current task is blocked, resolve the blocker or explicitly hand off — do not silently pivot to something else.

## Anti-patterns

- Starting a refactor while in the middle of a bug fix.
- Leaving a feature half-implemented because something more interesting came up.
- Responding to a new requirement by abandoning the current one without documenting what was left unfinished.
133 rules/patterns/unattended-boot-services/contract.md Normal file
@@ -0,0 +1,133 @@
# Contract: Unattended Boot Services (OpenRC)

Version: 1.0

## Purpose

Rules for OpenRC services that run in unattended environments: LiveCDs, kiosks, embedded systems.
No user is present. No TTY prompts. Every failure path must have a silent fallback.

---

## Core Invariants

- **Never block boot.** A service failure must not prevent other services from starting.
- **Never prompt.** No `read`, no `pause`, no interactive input of any kind.
- **Always exit 0.** Use `eend 0` at the end of `start()` regardless of the operation result.
- **Log everything.** Write results to `/var/log/` so SSH inspection is possible after boot.
- **Fail silently, degrade gracefully.** Missing tool → skip that collector. No network → skip network-dependent steps.

---

## Service Dependencies

Use the minimum necessary dependencies:

```sh
depend() {
    need localmount     # almost always needed
    after some-service  # ordering without hard dependency
    use logger          # optional soft dependency
    # DO NOT add: need net, need networking, need network-online
}
```

**Never use `need net` or `need networking`** unless the service is genuinely useless without
network and you want it to fail loudly when no cable is connected.
For services that work with or without network, use `after` instead.

---
## Network-Independent SSH

Dropbear (and any SSH server) must start without network being available.
A common mistake is installing dropbear-openrc, which adds `need net` in its default init script.

Override with a custom init:

```sh
#!/sbin/openrc-run
description="SSH server"

depend() {
    need localmount
    after bee-sshsetup  # key/user setup, not networking
    use logger
    # NO need net
}

start() {
    check_config || return 1
    ebegin "Starting dropbear"
    /usr/sbin/dropbear ${DROPBEAR_OPTS}
    eend $?
}
```

Place this file in the overlay at `etc/init.d/dropbear` — it overrides the package-installed version.

---
## Persistent DHCP

Do not use blocking DHCP (the `-n` flag exits if no offer arrives). Use background mode so the client
retries automatically when a cable is connected after boot:

```sh
# Wrong — exits immediately if there is no DHCP offer
udhcpc -i "$iface" -t 3 -T 5 -n -q

# Correct — background daemon, retries indefinitely
udhcpc -i "$iface" -b -t 0 -T 3 -q
```

The network service itself should complete immediately (exit 0) — the udhcpc daemons run in the background.

---
## Service Start Order (typical LiveCD)

```
localmount
└── sshsetup    (user creation, key injection — before dropbear)
    └── dropbear    (SSH — independent of network)
        └── network     (DHCP on all interfaces — does not block anything)
            └── nvidia      (or other hardware init — after network in case firmware needs it)
                └── audit       (main workload — after all hardware is ready)
```

Services at the same level can start in parallel. Use `after`, not `need`, for ordering without a hard dependency.

---
## Error Handling in start()

```sh
start() {
    ebegin "Running audit"
    /usr/local/bin/audit --output /var/log/audit.json >> /var/log/audit.log 2>&1
    local rc=$?
    if [ $rc -eq 0 ]; then
        einfo "Audit complete"
    else
        ewarn "Audit finished with errors — check /var/log/audit.log"
    fi
    eend 0  # always 0 — never fail the runlevel
}
```

- Capture the exit code into a local variable.
- Log the result with `einfo` or `ewarn`.
- Always `eend 0` — a failed audit is not a reason to block the boot runlevel.
- The exception: services whose failure makes SSH impossible (e.g. key setup) may `return 1`.

---
## Rules

- Every `start()` ends with `eend 0` unless failure makes the entire environment unusable.
- Network is always best-effort. Test for it, don't depend on it.
- Proprietary drivers (NVIDIA, etc.): load failure → log warning → continue without enrichment.
- External tools (ipmitool, smartctl, etc.): not found → skip that data source → do not abort.
- Timeout all external commands: `timeout 30 smartctl ...` prevents infinite hangs.
- Write all output to `/var/log/` — TTY output is secondary.
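The external-tool rules above can be sketched as a single collector function (the collector and log file names are illustrative; a real service would write to `/var/log/`):

```shell
#!/bin/sh
# Illustrative sketch: skip missing tools, timeout external commands,
# log instead of failing. Always return 0 so boot is never blocked.
LOG=./audit-demo.log

collect_smart() {
    # Tool missing: skip that data source, do not abort
    command -v smartctl >/dev/null 2>&1 || {
        echo "smartctl not found, skipping SMART data" >> "$LOG"
        return 0
    }
    # Timeout so a wedged disk controller cannot hang the boot sequence
    timeout 30 smartctl --scan >> "$LOG" 2>&1 || \
        echo "smartctl failed or timed out, continuing" >> "$LOG"
    return 0
}

collect_smart
echo "collectors done rc=$?" >> "$LOG"
```

Whatever happens inside the collector, the caller sees exit status 0 and a log line explaining what was skipped.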
82 rules/patterns/vendor-installer-verification/contract.md Normal file
@@ -0,0 +1,82 @@
# Contract: Vendor Installer Verification

Version: 1.0

## Purpose

Rules for downloading and verifying proprietary vendor installers (`.run`, `.exe`, `.tar.gz`)
where the vendor publishes a checksum alongside the binary.
Applies to: NVIDIA drivers, vendor CLI tools, firmware packages.

---

## Download Order

Always download the checksum file **before** the installer:

```sh
BASE_URL="https://vendor.example.com/downloads/${VERSION}"
BIN_FILE="/var/cache/vendor-${VERSION}.run"
SHA_FILE="/var/cache/vendor-${VERSION}.run.sha256sum"

# 1. Download the checksum first
wget -q -O "$SHA_FILE" "${BASE_URL}/vendor-${VERSION}.run.sha256sum"

# 2. Download the installer
wget --show-progress -O "$BIN_FILE" "${BASE_URL}/vendor-${VERSION}.run"

# 3. Verify
cd /var/cache
sha256sum -c "$SHA_FILE" || { echo "ERROR: sha256 mismatch"; rm -f "$BIN_FILE"; exit 1; }
```

Reason: if the download is interrupted, you still have the expected checksum to verify against on retry.

---
## Cache with Verification

Never assume a cached file is valid — a previous download may have been interrupted (leaving a 0-byte file):

```sh
verify_cached() {
    [ -s "$SHA_FILE" ] || return 1   # sha256 file missing or empty
    [ -s "$BIN_FILE" ] || return 1   # binary missing or empty
    cd "$(dirname "$BIN_FILE")"
    sha256sum -c "$SHA_FILE" --status 2>/dev/null
}

if ! verify_cached; then
    rm -f "$BIN_FILE" "$SHA_FILE"
    # ... download and verify
else
    echo "verified from cache"
fi
```

**Never check only for file existence.** Check that the file is non-empty (`-s`) AND passes the checksum.

---
## Version Validation

Before writing build scripts, verify that the version URL actually exists:

```sh
curl -sIL "https://vendor.example.com/downloads/${VERSION}/installer.run" \
    | grep -i 'http/\|content-length'
```

A `404` or `content-length: 0` means the version does not exist on that CDN.
Vendor version numbering may have gaps (e.g. NVIDIA skips minor versions on some CDNs).

---
## Rules

- Download the checksum before the installer — never after.
- Verify the checksum before extracting or executing.
- On mismatch: delete the file and exit with an error. Never proceed with a bad installer.
- Cache by `version` + any secondary key (e.g. kernel version for compiled modules).
- Never commit installer files to git — always download at build time.
- Log the expected hash when downloading so failures are diagnosable.