Compare commits
18 Commits
0c829182a1
...
52444350c1
@@ -24,6 +24,7 @@ rules/patterns/ — shared engineering rule contracts
go-background-tasks/ — Task Manager pattern, polling
go-code-style/ — layering, error wrapping, startup sequence
go-project-bible/ — how to write and maintain a project bible
bom-decomposition/ — one BOM row to many component/LOT mappings
import-export/ — CSV Excel-compatible format, streaming export
table-management/ — toolbar, filtering, pagination
modal-workflows/ — state machine, htmx pattern, confirmation
142 rules/patterns/alpine-livecd/contract.md Normal file
@@ -0,0 +1,142 @@
# Contract: Alpine LiveCD Build

Version: 1.0

## Purpose

Rules for building bootable Alpine Linux ISO images with custom overlays using `mkimage.sh`.
Applies to any project that needs a LiveCD: hardware audit, rescue environments, kiosks.

---

## mkimage Profile

Every project must have a profile file `mkimg.<name>.sh` defining:

```sh
profile_<name>() {
	arch="x86_64"  # REQUIRED — without this mkimage silently skips the profile
	hostname="<hostname>"
	apkovl="genapkovl-<name>.sh"
	image_ext="iso"
	output_format="iso"
	kernel_flavors="lts"
	initfs_cmdline="modules=loop,squashfs,sd-mod,usb-storage quiet"
	initfs_features="ata base cdrom ext4 mmc nvme raid scsi squashfs usb virtio"
	grub_mod="all_video disk part_gpt part_msdos linux normal configfile search search_label efi_gop fat iso9660 cat echo ls test true help gzio"
	apks="alpine-base linux-lts linux-firmware-none ..."
}
```

**`arch` is mandatory.** If it is missing, mkimage silently builds nothing and exits 0.
---

## apkovl Mechanism

The apkovl is a `.tar.gz` overlay that initramfs extracts at boot, overlaying `/etc`, `/usr`, and `/root`.

`genapkovl-<name>.sh` generates the tarball:

- It must be in the **CWD** when mkimage runs — not only in `~/.mkimage/`
- `~/.mkimage/` is searched for mkimg profiles only, not for genapkovl scripts

```sh
# Copy both scripts: the profile to ~/.mkimage, the genapkovl to CWD (typically /var/tmp)
cp "mkimg.<name>.sh" ~/.mkimage/
cp "genapkovl-<name>.sh" /var/tmp/
cd /var/tmp
sh mkimage.sh --workdir /var/tmp/work ...
```
---

## Build Environment

**Always use `/var/tmp`, not `/tmp`:**

```sh
export TMPDIR=/var/tmp
cd /var/tmp
sh mkimage.sh ...
```

`/tmp` on Alpine builder VMs is typically a 1 GB tmpfs, and the kernel firmware squashfs alone exceeds this. `/var/tmp` is backed by actual disk space.
---

## Workdir Caching

mkimage stores each ISO section in a hash-named subdirectory. Preserve the expensive sections across builds (note: inline `#` comments after a line-continuation backslash break the command, so the comments go above it):

```sh
# Delete everything EXCEPT cached sections:
#   apks_*     — downloaded packages
#   kernel_*   — modloop squashfs
#   syslinux_* — syslinux bootloader
#   grub_*     — grub EFI
if [ -d /var/tmp/bee-iso-work ]; then
	find /var/tmp/bee-iso-work -maxdepth 1 -mindepth 1 \
		-not -name 'apks_*' \
		-not -name 'kernel_*' \
		-not -name 'syslinux_*' \
		-not -name 'grub_*' \
		-exec rm -rf {} +
fi
```

The apkovl section is always regenerated: it contains project-specific config that changes per build.
---

## Squashfs Compression

The default compression is `xz` — slow but small. For RAM-loaded modloops, size rarely matters, so use `lz4` for faster builds:

```sh
mkdir -p /etc/mkinitfs
grep -q 'MKSQUASHFS_OPTS' /etc/mkinitfs/mkinitfs.conf 2>/dev/null || \
	echo 'MKSQUASHFS_OPTS="-comp lz4 -Xhc"' >> /etc/mkinitfs/mkinitfs.conf
```

Apply this before running mkimage. The modloop is rebuilt only when the kernel version changes.
---

## Long Builds

NVIDIA driver downloads, kernel compiles, and package fetches can take 10–30 minutes.
Run builds in a `screen` session so they survive SSH disconnects:

```sh
apk add screen
screen -dmS build sh -c "sh build.sh > /var/log/build.log 2>&1"
tail -f /var/log/build.log
```
---

## NIC Firmware

`linux-firmware-none` (the default) contains zero firmware files, but real hardware NICs often require firmware.
Include firmware packages matching the expected hardware:

```
linux-firmware-intel     # Intel NICs (X710, E810, etc.)
linux-firmware-mellanox  # Mellanox/NVIDIA ConnectX
linux-firmware-bnx2x     # Broadcom NetXtreme
linux-firmware-rtl_nic   # Realtek
linux-firmware-other     # catch-all
```
---

## Versioning

Pin all versions in a single `VERSIONS` file sourced by every build script:

```sh
ALPINE_VERSION=3.21
KERNEL_VERSION=6.12
GO_VERSION=1.23.6
NVIDIA_DRIVER_VERSION=590.48.01
```

Never hardcode versions inside build scripts.
157 rules/patterns/app-binary/contract.md Normal file
@@ -0,0 +1,157 @@
# Contract: Application Binary

Version: 1.0

## Purpose

Rules for building Go applications, embedding resources, and handling first run.

---

## Location on the Host

> This rule applies when the AI deploys the application itself or runs commands on the build machine (deployment, file copying, starting services).

The application binary lives in:

```
/appdata/<appname>/
```

where `<appname>` is the application name (lowercase, no spaces).

Example: the application `myservice` → `/appdata/myservice/myservice`.

All files tied to a specific application (the binary, helper launch scripts, `docker-compose.yml`) live inside this directory. Config and data follow the rules in the sections below.
### Deployment Examples

When deploying, copying files, or starting services, the AI **always defaults** to this path:

```bash
# Copy the binary
scp bin/myservice user@host:/appdata/myservice/myservice

# Copy docker-compose
scp docker-compose.yml user@host:/appdata/myservice/docker-compose.yml

# Start on the host
ssh user@host "cd /appdata/myservice && docker compose up -d"
```

```bash
# Create the directory if it does not exist
ssh user@host "mkdir -p /appdata/myservice"
```

Do not suggest alternative paths (`/opt/`, `/usr/local/bin/`, `~/`) — only `/appdata/<appname>/`.
---

## Binary

The binary is self-contained — all resources are embedded via `//go:embed`:

- HTML templates
- Static assets (JS, CSS, icons)
- The config file template (`config.template.yaml`)
- DB migrations

No external directories next to the binary are required to run it.
---

## Config File

Created automatically on first run if it does not exist.

### Location

| Application mode | Path |
|---|---|
| Single-user | `~/.config/<appname>/config.yaml` |
| Server / multi-user | `/etc/<appname>/config.yaml` or next to the binary |

The application determines the path itself and creates the directory if needed.
### Contents

The config stores:

- Application settings (port, language, timeouts, feature flags)
- Connection parameters for the centralized DBMS (host, port, user, password, dbname)

The config does **not** store:

- User data
- Cache or state
- Anything related to SQLite (see below)

### Template

The config template is embedded in the binary. When the file is created, the template is copied to the target path.
The template contains every key, with comments and default values.

```yaml
# <appname> configuration
# Generated on first run. Edit as needed.

server:
  port: 8080

database:
  host: localhost
  port: 5432
  user: ""
  password: ""
  dbname: ""

# ... remaining settings
```
---

## SQLite (single-user mode)

If the application uses a local SQLite database:

- The file lives next to the config: `~/.config/<appname>/<appname>.db`
- The file path is not exposed in the config — the application derives it from the config path
- SQLite does **not** store connection parameters for the centralized DBMS — only the application's local data
---

## First Run — Algorithm

```
Application start
│
├── Config exists? → No → create directory → copy template → report the path to the user
│                         → exit with code 0
│                         (the user fills in the config)
└── Config exists? → Yes → validate → start the application
```

When the config is first created, the application does **not** start — it prints:

```
Config created: ~/.config/<appname>/config.yaml
Edit the file and restart the application.
```
---

## Build

Build the final binary without CGO when possible (with SQLite — with CGO):

```
CGO_ENABLED=0 go build -ldflags="-s -w" -o bin/<appname> ./cmd/<appname>
```

With SQLite:

```
CGO_ENABLED=1 go build -ldflags="-s -w" -o bin/<appname> ./cmd/<appname>
```

The binary must not depend on the working directory it is launched from.
76 rules/patterns/backup-management/contract.md Normal file
@@ -0,0 +1,76 @@
# Contract: Backup Management

Version: 1.2

## Purpose

Shared rules for creating, storing, naming, rotating, and restoring backups, regardless of what is being saved: SQLite, a centralized DB, a config, user files, or a mixed bundle.
## Backup Capability Must Be Shipped

Backup/restore must be built into the application runtime or into binaries/scripts shipped as part of the application itself. Do not assume the operator already has suitable software installed on their machine.

Rules:
- AI must not rely on random machine-local applications (DB GUI clients, IDE plugins, desktop backup tools, ad-hoc admin utilities) being present on the user's machine.
- Backup helpers must not depend on locally installed database clients such as `mysql`, `mysqldump`, `psql`, `pg_dump`, `sqlite3`, or similar tools being present on the user's machine.
- If the application persists non-ephemeral state and does not already have backup functionality, implement it.
- Preferred delivery is one of: built-in UI action, CLI subcommand, background scheduler, or another application-owned mechanism implemented in the project.
- The backup path must work through application mechanics: application code, bundled libraries, and application-owned configuration.
- Rollout instructions must reference only shipped or implemented backup/restore paths.
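The CLI-subcommand delivery option can be sketched as below; `dispatch` and the hook signatures are hypothetical, standing in for application-owned backup code:

```go
package main

import "fmt"

// dispatch routes backup/restore subcommands to application-owned hooks, so
// the operator never needs external tools. runBackup and runRestore stand in
// for the application's real backup implementation.
func dispatch(args []string, runBackup, runRestore func() error) error {
	if len(args) < 1 {
		return fmt.Errorf("usage: <appname> backup|restore")
	}
	switch args[0] {
	case "backup":
		return runBackup()
	case "restore":
		return runRestore()
	default:
		return fmt.Errorf("unknown subcommand %q", args[0])
	}
}
```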
## Backup Storage

Backups are operational artifacts, not source artifacts.

Rules:
- Never write backups into the git repository tree.
- Backup files must never be staged or committed to git.
- Every application must have an explicit backup root outside the repository.
- Before creating, rotating, or restoring backups, the application must verify that the backup root resolves outside the git worktree.
- Before creating, rotating, or restoring backups, the application must verify again that the target backup files are not tracked or staged in git.
- Default local-app location: store backups next to the user config, for example `~/.config/<appname>/backups/`.
- Default server/centralized location: store backups in an application-owned path outside the repository, for example `/appdata/<appname>/backups/` or `/var/backups/<appname>/`.
- Keep retention tiers in separate directories: `daily/`, `weekly/`, `monthly/`, `yearly/`.
## Backup Naming and Format

Rules:
- Each snapshot must be a single archive or dump artifact when feasible.
- Backup filenames must include a timestamp and a version marker relevant to restore safety, for example schema version, migration number, app version, or backup format version.
- If multiple artifacts are backed up independently, include the artifact identity in the filename.
- Backups should be archived/compressed by default (`.zip`, `.tar.gz`, `.sql.gz`, `.dump.zst`, or equivalent) unless restore tooling requires a raw dump.
- Include all sidecar files required for a correct restore.
- Include the application config in the backup when it is required for a meaningful restore.
## Retention and Rotation

Use bounded retention. Do not keep an unbounded pile of snapshots.

Default policy:
- Daily: keep 7
- Weekly: keep 4
- Monthly: keep 12
- Yearly: keep 10

Rules:
- Prevent duplicate backups within the same retention period.
- Rotation/pruning must be automatic when the application manages recurring backups.
- Pre-migration or pre-repair safety backups may be kept outside normal rotation until the change is verified.
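The pruning side of rotation can be sketched per tier as below — a minimal sketch assuming the timestamped filenames required in the naming section, which sort chronologically as plain strings:

```go
package main

import "sort"

// prune returns the snapshots to delete for one retention tier, keeping the
// `keep` newest names. Timestamped names sort chronologically, so a plain
// string sort is enough.
func prune(names []string, keep int) []string {
	if len(names) <= keep {
		return nil
	}
	sorted := append([]string(nil), names...)
	sort.Strings(sorted)
	return sorted[:len(sorted)-keep] // oldest first
}
```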
## Automated Backup Behavior

For applications that manage recurring local or operator-triggered backups:

Rules:
- On application startup, create a backup immediately if none exists yet for the current period.
- Support scheduled daily backups at a configured local time.
- Before migrations or other risky state-changing maintenance steps, trigger a fresh backup from the application-owned backup mechanism.
- Before migrations or other risky state-changing maintenance steps, double-check that backup output is outside the git tree so it cannot be pushed to a remote by accident.
- If backup location, schedule, or retention is configurable, provide safe defaults and an explicit disable switch.
## Restore Readiness

Rules:
- The operator must know how to restore from the backup before applying risky changes.
- Restore steps must be documented next to the backup workflow.
- A backup that has never been validated for restore is only partially trusted.
101 rules/patterns/batch-file-upload/contract.md Normal file
@@ -0,0 +1,101 @@
# Contract: Batch File Upload

Version: 1.0

## Purpose

ADR: a strategy for uploading a large number of files via multipart requests
without reworking the server pipeline and without hitting hidden limits.

---
## ADR

**Date:** 2026-03-01
**Status:** Accepted

### Context

The client must upload a list of files to the server for processing.
Uploading all files in a single multipart request runs into hidden limits:
the number of parts, the request body size (413), and connection timeouts.
Reworking the server pipeline for streaming upload is a separate, costly task.

### Decision

The client splits the file list into fixed-size batches and sends each batch
as a separate multipart request.

- The batch size is defined by the constant `MAX_FILES_PER_BATCH` (chosen per project).
- Batches are counted by the **number of files**, not only by bytes.
- Batches are processed **sequentially** by default.
  Parallel processing is allowed only if explicitly agreed and documented.
- Each batch produces a separate downloadable artifact,
  or the results are aggregated in a final step — decided at the project level.
### Consequences

**Pros:**
- Avoids hidden limits on the number of multipart parts
- Reduces the risk of timeouts and 413 errors
- No immediate rework of the server parser pipeline required

**Cons:**
- More round-trips (N batches = N requests)
- Multiple output files if artifacts are not aggregated
- Longer end-to-end UX for the user

---
## Implementation Rules

### Batch Constant

```go
const MaxFilesPerBatch = 1000 // chosen per project
```

The constant is declared explicitly — never hardcoded inline.
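The splitting rule can be sketched as below; `splitBatches` is a hypothetical helper name, and the upload transport itself is omitted:

```go
package main

// splitBatches divides a file list into batches of at most max files each,
// counted by number of files per the decision above.
func splitBatches(files []string, max int) [][]string {
	var batches [][]string
	for len(files) > 0 {
		n := max
		if len(files) < n {
			n = len(files)
		}
		batches = append(batches, files[:n])
		files = files[n:]
	}
	return batches
}
```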
### Artifact Naming

Batch results are named with a `_partN` suffix:

```
report_part1.csv
report_part2.csv
report_part3.csv
```

Or they are aggregated into a single file in a final step — in that case no suffix is needed.
### Error Handling

- Do **not** mask the parts-limit error (`multipart: too many parts`) as a size error.
- Log the root cause with an explicit indication of which limit was hit.
- On a limit error, the client must shrink the batch and retry, or inform the user.

```go
// Correct
log.Error("batch upload failed: multipart parts limit exceeded", "batch", batchNum, "files", len(batch))

// Wrong
log.Error("batch upload failed: file too large") // masks the cause
```
### UI

The UI must show the user:

- The total number of batches (passes): `Step 2 of 5`
- The current batch's progress: a progress bar or percentage
- The final status of each batch before the next one starts

The user must never see the technical term "batch" — use "step" or "pass".

---
## Related Contracts

- `go-background-tasks` — each batch runs as a background task
- `import-export` — artifact naming and delivery
302 rules/patterns/bom-decomposition/contract.md Normal file
@@ -0,0 +1,302 @@
# Contract: BOM Decomposition Mapping

Version: 1.0

## Purpose

Defines the canonical way to represent a BOM row that decomposes one external/vendor item into
multiple internal component or LOT rows.

This is not an alternate-choice mapping.
All mappings in the row apply simultaneously.

Use this contract when:
- one vendor part number expands into multiple LOTs
- one bundle SKU expands into multiple internal components
- one external line item contributes quantities to multiple downstream rows
## Canonical Data Model

One BOM row has one item quantity and zero or more mapping entries:

```json
{
  "sort_order": 10,
  "item_code": "SYS-821GE-TNHR",
  "quantity": 3,
  "description": "Vendor bundle",
  "unit_price": 12000.00,
  "total_price": 36000.00,
  "component_mappings": [
    { "component_ref": "CHASSIS_X13_8GPU", "quantity_per_item": 1 },
    { "component_ref": "PS_3000W_Titanium", "quantity_per_item": 2 },
    { "component_ref": "RAILKIT_X13", "quantity_per_item": 1 }
  ]
}
```

Rules:
- `component_mappings[]` is the only canonical persisted decomposition format.
- Each mapping entry contains:
  - `component_ref` — stable identifier of the downstream component/LOT
  - `quantity_per_item` — how many units of that component are produced by one BOM row unit
- Derived or UI-only fields may exist at runtime, but they are not the source of truth.

Project-specific names are allowed if the semantics stay identical:
- `item_code` may be `vendor_partnumber`
- `component_ref` may be `lot_name`, `lot_code`, or another stable project identifier
- `component_mappings` may be `lot_mappings`
## Quantity Semantics

The total downstream quantity is always:

```text
downstream_total_qty = row.quantity * mapping.quantity_per_item
```

Example:
- BOM row quantity = `3`
- mapping A quantity per item = `1`
- mapping B quantity per item = `2`

Result:
- component A total = `3`
- component B total = `6`

This multiplication rule is mandatory for estimate/cart/build expansion.
## Persistence Contract

The source of truth is the persisted BOM row JSON payload.

If the project stores BOM rows:
- in a SQL JSON column, the JSON payload is the source of truth
- in a text column containing JSON, that JSON payload is the source of truth
- in an API document later persisted as JSON, the row payload shape must remain unchanged

Example persisted payload:

```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "quantity": 2,
      "description": "Bundle",
      "lot_mappings": [
        { "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
        { "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
      ]
    }
  ]
}
```

Persistence rules:
- the decomposition must be stored inside each BOM row
- all mapping entries for that row must live in one array field
- no secondary storage format may act as a competing source of truth
## API Contract

API read and write payloads must expose the same decomposition shape that is persisted.

Rules:
- `GET` returns BOM rows with `component_mappings[]` or the project-specific equivalent
- `PUT` / `POST` accepts the same shape
- rebuild/apply/cart expansion must read only from the persisted mapping array
- if the mapping array is empty, the row contributes nothing downstream
- row order is defined by `sort_order`
- mapping entry order may be preserved for UX, but business logic must not depend on it

Correct:

```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "quantity": 2,
      "lot_mappings": [
        { "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
        { "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
      ]
    }
  ]
}
```

Wrong:

```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "primary_lot": "LOT_CPU",
      "secondary_lots": ["LOT_RAIL"]
    }
  ]
}
```
## UI Invariants

The UI may render the mapping list in any layout, but it must preserve the same semantics.

Rules:
- the first visible mapping row is not special; it is only the first entry in the array
- additional rows may be added via `+`, modal, inline insert, or another UI affordance
- every mapping row is equally editable and removable
- `quantity_per_item` is edited per mapping row, not once for the whole row
- blank mapping rows may exist temporarily in draft UI state, but they must not be persisted
- new UI rows should default `quantity_per_item` to `1`
## Normalization and Validation

Two stages are allowed:
- draft UI normalization for convenience
- server-side persistence validation for correctness

Canonical rules before persistence:
- trim `component_ref`
- drop rows with empty `component_ref`
- reject `quantity_per_item <= 0` with a validation error
- merge duplicate `component_ref` values within one BOM row by summing `quantity_per_item`
- preserve first-seen order when merging duplicates

Example input:

```json
[
  { "component_ref": "LOT_A", "quantity_per_item": 1 },
  { "component_ref": " LOT_A ", "quantity_per_item": 2 },
  { "component_ref": "", "quantity_per_item": 5 }
]
```

Normalized result:

```json
[
  { "component_ref": "LOT_A", "quantity_per_item": 3 }
]
```

Why validation instead of silent repair:
- API contracts between applications must fail loudly on invalid quantities
- UI may prefill `1`, but the server must not silently reinterpret `0` or negative values
## Forbidden Patterns

Do not introduce incompatible storage or logic variants such as:
- `primary_lot`, `secondary_lots`, `main_component`, `bundle_lots`
- one field for the component and a separate field for its quantity outside the mapping array
- special-case logic where the first mapping row is "main" and later rows are optional add-ons
- computing downstream rows from temporary UI fields instead of the persisted mapping array
- storing the same decomposition in multiple shapes at once
## Reference Go Types

```go
type BOMItem struct {
	SortOrder         int                `json:"sort_order"`
	ItemCode          string             `json:"item_code"`
	Quantity          int                `json:"quantity"`
	Description       string             `json:"description,omitempty"`
	UnitPrice         *float64           `json:"unit_price,omitempty"`
	TotalPrice        *float64           `json:"total_price,omitempty"`
	ComponentMappings []ComponentMapping `json:"component_mappings,omitempty"`
}

type ComponentMapping struct {
	ComponentRef    string `json:"component_ref"`
	QuantityPerItem int    `json:"quantity_per_item"`
}
```

Project-specific aliases are acceptable if they preserve identical semantics:

```go
type VendorSpecItem struct {
	SortOrder        int                    `json:"sort_order"`
	VendorPartnumber string                 `json:"vendor_partnumber"`
	Quantity         int                    `json:"quantity"`
	Description      string                 `json:"description,omitempty"`
	UnitPrice        *float64               `json:"unit_price,omitempty"`
	TotalPrice       *float64               `json:"total_price,omitempty"`
	LotMappings      []VendorSpecLotMapping `json:"lot_mappings,omitempty"`
}

type VendorSpecLotMapping struct {
	LotName       string `json:"lot_name"`
	QuantityPerPN int    `json:"quantity_per_pn"`
}
```
## Reference Normalization (Go)

```go
import (
	"fmt"
	"strings"
)

func NormalizeComponentMappings(in []ComponentMapping) ([]ComponentMapping, error) {
	if len(in) == 0 {
		return nil, nil
	}

	merged := map[string]int{}
	order := make([]string, 0, len(in))

	for _, m := range in {
		ref := strings.TrimSpace(m.ComponentRef)
		if ref == "" {
			continue
		}
		if m.QuantityPerItem <= 0 {
			return nil, fmt.Errorf("component %q has invalid quantity_per_item %d", ref, m.QuantityPerItem)
		}
		if _, exists := merged[ref]; !exists {
			order = append(order, ref)
		}
		merged[ref] += m.QuantityPerItem
	}

	out := make([]ComponentMapping, 0, len(order))
	for _, ref := range order {
		out = append(out, ComponentMapping{
			ComponentRef:    ref,
			QuantityPerItem: merged[ref],
		})
	}
	if len(out) == 0 {
		return nil, nil
	}
	return out, nil
}
```
## Reference Expansion (Go)

```go
type CartItem struct {
	ComponentRef string
	Quantity     int
}

func ExpandBOMRow(row BOMItem) []CartItem {
	result := make([]CartItem, 0, len(row.ComponentMappings))
	for _, m := range row.ComponentMappings {
		qty := row.Quantity * m.QuantityPerItem
		if qty <= 0 {
			continue
		}
		result = append(result, CartItem{
			ComponentRef: m.ComponentRef,
			Quantity:     qty,
		})
	}
	return result
}
```
66 rules/patterns/build-version-display/contract.md Normal file
@@ -0,0 +1,66 @@
# Contract: Build Version Display

Version: 1.0

## Purpose

Every web application must display the current build version in the page footer so that users and support staff can identify exactly which version is running.

---

## Rule

The build version **must** be visible in the footer on every page of the web application.

---

## Requirements

- The version is shown in the footer on **all** pages, including error pages (404, 500, etc.).
- The version string is injected at **build time** — it is never hardcoded in source and never fetched at runtime.
- The version value comes from a single authoritative source (e.g. `package.json`, `version.go`, a CI environment variable). It is not duplicated manually.
- Format: any human-readable string that uniquely identifies the build — a semver tag, a git commit SHA, or a combination (e.g. `1.4.2`, `1.4.2-abc1234`, `abc1234`).
- The version text must be legible but visually subordinate — use a muted color and small font size so it does not compete with page content.

---
## Recommended implementation

**Frontend (JS/TS build tools)**

Expose the version through an environment variable at build time and reference it in the footer component:

```ts
// vite.config.ts / webpack.config.js
define: {
  __APP_VERSION__: JSON.stringify(process.env.APP_VERSION ?? "dev"),
}

// Footer component
<footer>v{__APP_VERSION__}</footer>
```

**Go (server-rendered HTML)**

Inject via `-ldflags` at build time and pass it to the template:

```go
// main.go
var Version = "dev"

// Build: go build -ldflags "-X main.Version=1.4.2"
```

```html
<!-- base template -->
<footer>v{{ .Version }}</footer>
```

---
## What is NOT allowed
|
||||
|
||||
- Omitting the version from any page, including error pages.
|
||||
- Fetching the version from an API endpoint at runtime (network dependency for a static value).
|
||||
- Hardcoding a version string in source code.
|
||||
- Storing the version in more than one place.
|
||||
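To keep the single-source rule even for untagged local builds, the ldflags value can fall back to the VCS revision that the Go toolchain records in the binary. A minimal sketch, assuming a `main.Version` variable as in the Go example above (the `buildVersion` helper is illustrative, not part of the contract):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// Version is injected at build time:
//   go build -ldflags "-X main.Version=1.4.2-abc1234"
var Version = ""

// buildVersion returns the injected version, falling back to the
// vcs.revision stamped by the Go toolchain for untagged builds.
func buildVersion() string {
	if Version != "" {
		return Version
	}
	if info, ok := debug.ReadBuildInfo(); ok {
		for _, s := range info.Settings {
			if s.Key == "vcs.revision" && len(s.Value) >= 7 {
				return s.Value[:7] // short commit SHA
			}
		}
	}
	return "dev"
}

func main() {
	fmt.Println("v" + buildVersion())
}
```

The fallback keeps dev builds traceable without adding a second authoritative source: the value still originates from git, not from a hand-edited constant.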
22
rules/patterns/git-sync-check/contract.md
Normal file
@@ -0,0 +1,22 @@
# Git Sync Check

## Rule

Before starting any work on a task, check whether the remote repository has commits that are not yet present locally.

## Required Steps

1. Run `git fetch` to update remote-tracking refs without merging.
2. Check for upstream commits: `git log HEAD..@{u} --oneline`.
3. If the output is non-empty (there are new remote commits):
   - **Stop immediately. Do not make any changes.**
   - Inform the user that the remote has new commits and ask how to proceed (e.g., pull, rebase, or ignore).
4. If the output is empty, proceed with the task normally.

## Rationale

Working on an outdated local state risks merge conflicts, duplicate work, and overwriting changes made by other contributors. Checking remote state first keeps the working tree aligned and prevents avoidable conflicts.

## Exceptions

- Offline environments where `git fetch` is not possible: notify the user that the check could not be performed before proceeding.
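The required steps can be sketched in Go; the decision itself is a pure function of the `git log HEAD..@{u} --oneline` output, which keeps it testable (the helper names are illustrative, not mandated by this rule):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// upstreamCommits parses `git log HEAD..@{u} --oneline` output into
// one entry per remote commit that is missing locally.
func upstreamCommits(gitLogOutput string) []string {
	var commits []string
	for _, line := range strings.Split(gitLogOutput, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			commits = append(commits, line)
		}
	}
	return commits
}

// checkSync runs the required steps: fetch, then compare against upstream.
func checkSync() error {
	if err := exec.Command("git", "fetch").Run(); err != nil {
		return fmt.Errorf("git fetch: %w", err)
	}
	out, err := exec.Command("git", "log", "HEAD..@{u}", "--oneline").Output()
	if err != nil {
		return fmt.Errorf("git log: %w", err)
	}
	if commits := upstreamCommits(string(out)); len(commits) > 0 {
		// Stop immediately: the remote has commits we do not have locally.
		return fmt.Errorf("remote is ahead by %d commit(s); ask the user how to proceed", len(commits))
	}
	return nil // up to date, proceed with the task
}

func main() {
	if err := checkSync(); err != nil {
		fmt.Println(err)
	}
}
```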
@@ -1,6 +1,6 @@
 # Contract: Go Code Style and Project Conventions
 
-Version: 1.0
+Version: 1.1
 
 ## Logging
 
@@ -64,6 +64,7 @@ Never reverse steps 2 and 5. Never start serving before migrations complete.
 - Never hardcode ports, DSNs, or file paths in application code.
 - Provide a `config.example.yaml` committed to the repo.
 - The actual `config.yaml` is gitignored.
+- Secret handling and pre-commit/pre-push leak checks must follow the `secret-management` contract.
 
 ## Template / UI Rendering
@@ -1,6 +1,6 @@
 # Contract: Database Patterns (Go / MySQL / MariaDB)
 
-Version: 1.0
+Version: 1.8
 
 ## MySQL Transaction Cursor Safety (CRITICAL)
 
@@ -104,8 +104,44 @@ items, _ := repo.GetItemsByPricelistIDs(ids) // 1 query with WHERE id IN (...)
 // then group in Go
 ```
 
+## Automatic Backup During Migration
+
+The migration engine is responsible for all backup steps. The operator must never be required to take a backup manually.
+
+Backup naming, storage, archive format, retention, and restore-readiness must follow the `backup-management` contract.
+
+### Full DB Backup on New Migrations
+
+When the migration engine detects that new (unapplied) migrations exist, it must take a full database backup before applying any of them.
+
+Rules:
+- The full backup must complete and be verified before the first migration step runs.
+- The backup must be triggered by the application's own backup mechanism; do not assume `mysql`, `mysqldump`, `pg_dump`, or any other external DB client tool is present on the operator's machine.
+- Before creating the backup, verify that the backup output path resolves outside the git worktree and is not tracked or staged in git.
+
+### Per-Table Backup Before Each Table Migration
+
+Before applying a migration step that affects a specific table, take a targeted backup of that table.
+
+Rules:
+- A per-table backup must be created immediately before the migration step that modifies that table.
+- If a single migration step touches multiple tables, back up each affected table before the step runs.
+- Per-table backups are in addition to the full DB backup; they are not a substitute for it.
+
+### Session Rollback on Failure
+
+If any migration step fails during a session, the engine must roll back all migrations applied in that session.
+
+Rules:
+- "Session" means all migration steps started in a single run of the migration engine.
+- On failure, roll back every step applied in the current session in reverse order before surfacing the error.
+- If rollback of a step is not possible (e.g., the operation is not reversible in MySQL without the per-table backup), restore from the per-table backup taken before that step.
+- After rollback or restore, the database must be in the same state as before the session started.
+- The engine must emit structured diagnostics that identify which step failed, which steps were rolled back, and the final database state.
+
 ## Migration Policy
 
+- For local-first desktop applications, startup and migration recovery must follow the `local-first-recovery` contract.
 - Migrations are numbered sequentially and never modified after merge.
 - Each migration must be reversible where possible (document rollback in a comment).
 - Never rename a column in one migration step — add new, backfill, drop old across separate deploys.
93
rules/patterns/identifier-normalization/contract.md
Normal file
@@ -0,0 +1,93 @@
# Contract: Identifier Normalization

Version: 1.0

## Purpose

Rules for storing and comparing hardware identifiers:
serial numbers, vendors, firmware versions, part numbers, SKUs.

---

## Rule

The original value is **stored exactly as received**: case is never changed.
All comparison, search, and deduplication are **case-insensitive**.

```
Received: "SN-001-ABC" → stored: "SN-001-ABC"
Received: "sn-001-abc" → the same object, not a duplicate
Received: "Sn-001-Abc" → the same again
```

---

## Applies to these fields

- Serial number (`serial_number`, `serial`)
- Vendor / manufacturer (`vendor`, `manufacturer`)
- Firmware version (`firmware_version`, `fw_version`)
- Part number (`part_number`, `part_no`)
- Article / SKU (`article`, `sku`)

---

## Implementation

### Go: comparison

```go
import "strings"

func SameIdentifier(a, b string) bool {
	return strings.EqualFold(a, b)
}
```

### Go: deduplication

```go
func deduplicateBySerial(items []Device) []Device {
	seen := make(map[string]struct{})
	result := items[:0] // filter in place, reusing the backing array
	for _, item := range items {
		key := strings.ToLower(item.SerialNumber)
		if _, exists := seen[key]; !exists {
			seen[key] = struct{}{}
			result = append(result, item)
		}
	}
	return result
}
```

The map key is always `strings.ToLower(value)`. The object itself keeps its original value.

### SQL: search and uniqueness

Search:
```sql
SELECT * FROM devices WHERE LOWER(serial_number) = LOWER(?);
```

Unique index (MySQL / MariaDB):
```sql
-- A ci collation gives case-insensitive uniqueness
ALTER TABLE devices MODIFY serial_number VARCHAR(255)
  CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

ALTER TABLE devices ADD UNIQUE INDEX uniq_serial (serial_number);
```

SQLite:
```sql
CREATE UNIQUE INDEX uniq_serial ON devices (LOWER(serial_number));
```

---

## What not to do

- Do not lowercase or uppercase a value before storing it.
- Do not treat `"SN-001"` and `"sn-001"` as different objects.
- Do not compare identifiers with `==` in Go; use `strings.EqualFold` only.
56
rules/patterns/kiss/contract.md
Normal file
@@ -0,0 +1,56 @@
# Contract: Keep It Simple

Version: 1.0

## Principle

Working solutions do not need to be interesting.

Prefer the simplest solution that correctly solves the problem. Complexity must be justified by a real, present requirement — not by anticipation of future needs or desire to use a particular technique.

## Rules

- Choose boring technology. A well-understood, dull solution beats a clever one.
- Do not introduce abstractions, patterns, or frameworks before they are needed by at least two concrete use cases.
- Do not design for hypothetical future requirements. Build for what exists now.
- Prefer sequential, readable code over clever one-liners.
- If you can delete code and the system still works, delete it.
- Extra configurability, generalization, and extensibility are costs, not features. Add them only when explicitly required.

## Anti-patterns

- Adding helpers or utilities for one-time operations.
- Wrapping simple logic in interfaces "for testability" when a direct call works.
- Using a framework or library to solve a problem the standard library already handles.
- Writing error handling, fallbacks, or validation for scenarios that cannot happen.
- Refactoring working code because it "could be cleaner."

## Bulletproof features

A feature must be correct by construction, not by circumstance.

Do not write mechanisms that silently rely on:
- another feature being in a specific state,
- input data having a particular shape that "usually" holds,
- a certain call order or timing,
- a global flag, ambient variable, or external condition being set upstream.

Such mechanisms are thin: they work only when the world cooperates. When any surrounding assumption shifts, they break in ways that are hard to trace. This is the primary source of bugs.

**Design rules:**

- A feature owns its preconditions. If it requires data in a certain state, it must enforce or produce that state itself — not inherit it silently from a caller.
- Never write logic that only works if a sibling feature runs first and succeeds. If coordination is needed, make it explicit (a parameter, a return value, a clear contract).
- Avoid implicit state machines — sequences where operations must happen in the right order with no enforcement. Either enforce the order structurally or eliminate the dependency.
- Prefer thick, unconditional logic over thin conditional chains that assume stable context. A mechanism that always does the right thing is more reliable than one that does the right thing only when conditions are favorable.

A feature is done when it is correct on its own, not when it is correct given that everything else is also correct.

## Checklist before committing

1. Could this be done with fewer lines without losing clarity?
2. Does every abstraction here have more than one caller?
3. Is any of this code handling a case that cannot actually occur?
4. Did I add anything beyond what was asked?

If the answer to any of 1–4 is "yes," simplify before committing.
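A minimal Go illustration of a feature owning its precondition (the scenario and names are hypothetical): the thick variant normalizes its lookup key itself instead of assuming every caller already did.

```go
package main

import (
	"fmt"
	"strings"
)

// Thin: correct only if every caller remembered to normalize the key
// first. The precondition lives outside the feature.
func lookupThin(index map[string]int, key string) (int, bool) {
	v, ok := index[key] // silently wrong for "SN-001" vs "sn-001"
	return v, ok
}

// Thick: the feature enforces its own precondition, so it is correct
// regardless of what callers do.
func lookup(index map[string]int, key string) (int, bool) {
	v, ok := index[strings.ToLower(key)]
	return v, ok
}

// buildIndex enforces the same invariant on the write side.
func buildIndex(keys []string) map[string]int {
	index := make(map[string]int, len(keys))
	for i, k := range keys {
		index[strings.ToLower(k)] = i
	}
	return index
}

func main() {
	index := buildIndex([]string{"SN-001"})
	_, okThin := lookupThin(index, "SN-001") // depends on caller behavior
	_, okThick := lookup(index, "SN-001")
	fmt.Println(okThin, okThick) // prints: false true
}
```

The thin version works whenever the surrounding code happens to cooperate; the thick version is correct on its own, which is exactly the quality bar stated above.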
95
rules/patterns/local-first-recovery/contract.md
Normal file
@@ -0,0 +1,95 @@
# Contract: Local-First Recovery

Version: 1.2

## Purpose

Shared recovery and migration rules for local-first desktop applications that keep local state and may rebuild part of that state from sync, reload, import, or other deterministic upstream sources.

## Core Rule

A migration or startup strategy is not considered sufficient merely because it succeeded once on the current developer database.

Priority order:
- Priority 1: protect user data. Do not do anything that can damage, discard, or silently rewrite non-recoverable user data.
- Priority 2: preserve availability. Do not do anything that unnecessarily prevents the application from starting or operating in a reduced mode.

If user data is safe, prefer degraded startup over startup failure. If minimum useful functionality can be started safely, start it.

Startup and schema-migration behavior must be designed for degraded real-world states, including:
- legacy schema versions
- interrupted migrations
- stale temp tables
- invalid payloads
- duplicates
- `NULL` in required columns
- partially migrated tables

## Required Data Classification

The architecture must explicitly separate:
- disposable cache tables
- protected user data tables

Definitions:
- Disposable cache tables are read-only, sync-derived, imported, or otherwise rebuildable from a trusted source.
- Protected user data tables contain user-authored or otherwise non-rebuildable data.

Do not mix both classes in one table if recovery semantics differ.

## Availability Policy For Disposable Data

For disposable cache tables, availability has priority.

Rules:
- If a table cannot be migrated safely, it may be quarantined, dropped, or recreated empty.
- The application must continue startup after such recovery.
- The application must restore disposable data through the normal sync, reload, import, or rebuild path.
- Recovery must not require manual SQL intervention for routine degraded states.

## Protection Policy For User Data

For protected user data, destructive reset is forbidden.

Rules:
- Do not drop, truncate, or recreate protected tables as a recovery shortcut.
- Backup-before-change is mandatory, must be performed automatically by the migration engine (never by the operator), and must follow the `backup-management` and `go-database` contracts.
- Validate-before-migrate is mandatory.
- Migration logic must use fail-safe semantics: stop before applying a risky destructive step when invariants are broken or input is invalid.
- The application must emit explicit diagnostics that identify the blocked table, migration step, and reason.

## Recovery Logic Requirements

Rules:
- Recovery logic must be deterministic.
- Recovery logic must be idempotent.
- Recovery logic must be retry-safe on every startup.
- Recovery logic must be observable through structured logs.
- Re-running startup after a partial failure must move the system toward a valid state, not deeper into corruption.

## Quality Bar

The application must either:
- self-recover and continue startup

or:

- stop only when continuing would risk loss or corruption of non-recoverable user data

Stopping for disposable cache corruption alone is not acceptable when the data can be rebuilt safely.

If the full feature set cannot be restored safely during startup, the application should start with the minimum safe functionality instead of failing startup, as long as protected user data remains safe.

## Testing Requirements

Degraded and legacy states must be tested explicitly, not only happy-path fresh installs.

Required test coverage includes:
- legacy schema upgrades
- interrupted migration recovery
- partially migrated tables
- duplicate rows where uniqueness is expected
- `NULL` in required columns
- invalid payloads in persisted rows
- disposable-table reset and rebuild flow
- protected-data migration refusal with explicit diagnostics
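A sketch of how the two data classes can drive the startup decision. The type and constant names are illustrative; only the priority order comes from this contract:

```go
package main

import "fmt"

type TableClass int

const (
	DisposableCache   TableClass = iota // rebuildable from sync/import
	ProtectedUserData                   // user-authored, non-rebuildable
)

type RecoveryAction int

const (
	Continue            RecoveryAction = iota // table migrated safely
	ResetAndRebuild                           // drop/recreate empty, rebuild via the normal sync path
	StopWithDiagnostics                       // fail-safe: refuse the step, report table + step + reason
)

// decide applies the priority order: protect user data first,
// then preserve availability.
func decide(class TableClass, migratable bool) RecoveryAction {
	if migratable {
		return Continue
	}
	if class == DisposableCache {
		return ResetAndRebuild // availability has priority for disposable data
	}
	return StopWithDiagnostics // never destructively reset protected data
}

func main() {
	fmt.Println(decide(DisposableCache, false), decide(ProtectedUserData, false))
}
```

Because the decision is a pure function of the table's class and migratability, it is deterministic, idempotent, and trivially retry-safe, which is what the recovery requirements above demand.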
79
rules/patterns/no-hardcoded-vendors/contract.md
Normal file
@@ -0,0 +1,79 @@
# Contract: No Hardcoded Vendors or Models

Version: 1.0

## Purpose

Prohibits hardcoding vendor names, model names, and part numbers in code.

---

## Rule

Vendor names, model names, hardware series, and part numbers **never appear in code**.
They come from data: the database, config, the input document, a reference table.

---

## What is forbidden

```go
// Forbidden
if device.Vendor == "Dell" { ... }
if strings.Contains(model, "PowerEdge") { ... }
switch vendor {
case "HP", "HPE", "Hewlett Packard": ...
}
```

```go
// Forbidden: a vendor list in code
var knownVendors = []string{"Dell", "HP", "Cisco", "Lenovo"}
```

---

## What to do instead

Logic is driven by data fields, not by the vendor name:

```go
// Correct: inspect the object's capabilities, not the vendor name
if device.HasIPMI { ... }
if device.ParserType == "redfish" { ... }
```

If a mapping is needed, it lives in config or a reference table in the DB, not in code:

```yaml
# config.yaml
vendor_parsers:
  dell: redfish
  hp: ilo
  cisco: ucs
```

```sql
-- reference table in the DB
SELECT parser_type FROM vendor_registry WHERE LOWER(vendor) = LOWER(?);
```

---

## Exceptions

A vendor name may appear **only** in:

- Parser package/directory names: `internal/parser/vendors/dell/`
- Comments and documentation
- Test fixtures (XML/JSON with real hardware data)

In these places the vendor name is a module identifier, not a condition in logic.

---

## Why

Hardcoding a vendor makes the code brittle: a new vendor requires code changes instead of data changes.
A typo in the string (`"HPE"` vs `"HP"`) creates a bug that is easy to miss.
Case is not controlled (see the `identifier-normalization` contract).
149
rules/patterns/release-signing/contract.md
Normal file
@@ -0,0 +1,149 @@
# Contract: Release Signing

Version: 1.0

## Purpose

Ed25519 asymmetric signing for Go release binaries.
Guarantees that a binary accepted by a running application was produced by a trusted developer.
Applies to any Go binary that is distributed or supports self-update.

---

## Key Management

Public keys are stored in the centralized keys repository: `git.mchus.pro/mchus/keys`

```
keys/
  developers/
    <name>.pub            ← raw Ed25519 public key, base64-encoded, one line per developer
  scripts/
    keygen.sh             ← generates keypair
    sign-release.sh       ← signs a binary
    verify-signature.sh   ← verifies locally
```

Public keys are safe to commit. Private keys stay on each developer's machine — never committed, never shared.

**Adding a developer:** add their `.pub` file → commit → rebuild affected releases.
**Removing a developer:** delete their `.pub` file → commit → rebuild releases.
Binaries previously signed with their key remain valid (already distributed), but they cannot sign new releases.

---

## Multi-Key Trust Model

A binary is accepted if its signature verifies against **any** of the embedded trusted public keys.
This mirrors the SSH `authorized_keys` model.

- One developer signs a release with their private key → produces one `.sig` file.
- The binary trusts all active developers — any of them can make a valid release.
- Signature format: raw 64-byte Ed25519 signature (not PEM, not armored).

---

## Embedding Keys at Build Time

Public keys are injected via `-ldflags` at release build time — not hardcoded in source.
This allows adding and removing developers without changing application source code.

```go
// internal/updater/trust.go

// trustedKeysRaw is injected at build time via -ldflags.
// Format: base64(key1):base64(key2):...
// Empty string = dev build, updates disabled.
var trustedKeysRaw string

func trustedKeys() ([]ed25519.PublicKey, error) {
	if trustedKeysRaw == "" {
		return nil, fmt.Errorf("dev build: trusted keys not embedded, updates disabled")
	}
	var keys []ed25519.PublicKey
	for _, enc := range strings.Split(trustedKeysRaw, ":") {
		b, err := base64.StdEncoding.DecodeString(strings.TrimSpace(enc))
		if err != nil {
			return nil, fmt.Errorf("invalid trusted key: %w", err)
		}
		if len(b) != ed25519.PublicKeySize {
			return nil, fmt.Errorf("invalid trusted key: got %d bytes, want %d", len(b), ed25519.PublicKeySize)
		}
		keys = append(keys, ed25519.PublicKey(b))
	}
	return keys, nil
}
```

Release build script injects all current developer keys:

```sh
# scripts/build-release.sh
KEYS=$(paste -sd: /path/to/keys/developers/*.pub)
go build \
	-ldflags "-s -w -X <module>/internal/updater.trustedKeysRaw=${KEYS}" \
	-o dist/<binary>-linux-amd64 \
	./cmd/<binary>
```

Dev build (no `-ldflags` injection): `trustedKeysRaw` is empty → updates disabled, binary works normally.

---

## Signature Verification (stdlib only, no external tools)

Use `crypto/ed25519` from the Go standard library. No third-party dependencies.

```go
// internal/updater/trust.go

func verifySignature(binaryPath, sigPath string) error {
	keys, err := trustedKeys()
	if err != nil {
		return err // dev build or misconfiguration
	}
	data, err := os.ReadFile(binaryPath)
	if err != nil {
		return fmt.Errorf("read binary: %w", err)
	}
	sig, err := os.ReadFile(sigPath)
	if err != nil {
		return fmt.Errorf("read signature: %w", err)
	}
	for _, key := range keys {
		if ed25519.Verify(key, data, sig) {
			return nil // any trusted key accepts → pass
		}
	}
	return fmt.Errorf("signature verification failed: no trusted key matched")
}
```

Rejection behavior: log as WARNING, continue with current binary. Never crash, never block operation.

---

## Release Asset Convention

Every release must attach two files to the Gitea release:

```
<binary>-linux-amd64       ← the binary
<binary>-linux-amd64.sig   ← raw 64-byte Ed25519 signature
```

Signing:

```sh
sh keys/scripts/sign-release.sh <developer-name> dist/<binary>-linux-amd64
```

Both files are uploaded to the Gitea release as downloadable assets.

---

## Rules

- Never hardcode public keys as string literals in source code — always use ldflags injection.
- Never commit private keys (`.key` files) anywhere.
- A binary built without ldflags injection must work normally — it just cannot perform verified updates.
- Signature verification failure must be a silent logged warning, not a crash or user-visible error.
- Use `crypto/ed25519` (stdlib) only — no external signing libraries.
- `.sig` file contains raw 64 bytes (not base64, not PEM). Produced by `openssl pkeyutl -sign -rawin`.
80
rules/patterns/secret-management/contract.md
Normal file
@@ -0,0 +1,80 @@
# Contract: Secret Management

Version: 1.1

## Purpose

Shared rules that prevent secrets from leaking into git, logs, configs, templates, and release artifacts.

## No Secrets In Git

Secrets must never be committed to the repository, even temporarily.

This includes:
- API keys
- access tokens
- passwords
- DSNs with credentials
- private keys
- session secrets
- OAuth client secrets
- `.env` files with real values
- production or staging config files with real credentials

Rules:
- Real secrets must never appear in tracked files, commit history, tags, release assets, examples, fixtures, tests, or docs.
- `.gitignore` is required for runtime config and local secret files, but `.gitignore` alone is not considered sufficient protection.
- Commit only templates and examples with obvious placeholders, for example `CHANGEME`, `example`, or empty strings.
- Never place secrets in screenshots, pasted logs, SQL dumps, backups, or exported archives that could later be committed.

## Where Secrets Live

Rules:
- Store real secrets only in local runtime config, secret stores, environment injection, or deployment-specific configuration outside git.
- Keep committed config files secret-free: `config.example.yaml`, `.env.example`, and similar files must contain placeholders only.
- If a feature requires a new secret, document the config key name and format, not the real value.

## Required Git Checks

Before every commit:
- Verify that files with real secrets are gitignored.
- Inspect staged changes for secrets, not just working tree files.
- Run an automated secret scan against staged content using project tooling or a repository-approved scanner.
- If the scan cannot be run, stop and do not commit until an equivalent staged-content check is performed.

Before every push:
- Scan the commits being pushed for secrets again.
- Refuse the push if any potential secret is detected until it is reviewed and removed.

High-risk patterns that must be checked explicitly:
- PEM blocks (`BEGIN PRIVATE KEY`, `BEGIN OPENSSH PRIVATE KEY`, `BEGIN RSA PRIVATE KEY`)
- tokens in URLs or DSNs
- `password=`, `token=`, `secret=`, `apikey=`, `api_key=`
- cloud credentials
- webhook secrets
- JWT signing keys

## Scheduled Security Audit

Rules:
- Perform a security audit at least once per week.
- At least once per week, scan the git repository for leaked secrets, including current files, staged changes, commit history, and reachable tags.
- Treat weekly secret scanning as mandatory even if pre-commit and pre-push checks already exist.
- If the weekly audit finds a leaked secret, follow the Incident Response rules immediately.

## Logging and Generated Artifacts

Rules:
- Do not print secrets into terminal output, structured logs, panic messages, or debug dumps.
- Do not embed secrets into generated backups, exports, support bundles, or crash reports unless the artifact is explicitly treated as secret operational data and guaranteed to stay outside git.
- If secrets must appear in an operational artifact, that artifact inherits the same "never in git" rule as backups.

## Incident Response

If a secret is committed or pushed:
- Treat it as compromised immediately.
- Rotate or revoke the secret.
- Remove it from the current tree and from any generated artifacts.
- Remove it from all affected commits and from repository history, not just from the latest revision.
- Inform the user that history cleanup may be required.
- Do not claim safety merely because the repo is private.
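The staged-content scan can be approximated with a small matcher over `git diff --cached` output. The regexes below mirror the high-risk pattern list and are a starting point, not a complete scanner (the function and variable names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// highRisk mirrors the patterns the contract says must be checked explicitly.
var highRisk = []*regexp.Regexp{
	regexp.MustCompile(`BEGIN (OPENSSH |RSA )?PRIVATE KEY`),
	regexp.MustCompile(`(?i)(password|token|secret|api_?key)\s*=\s*\S+`),
	regexp.MustCompile(`://[^/\s:]+:[^@\s]+@`), // credentials embedded in URLs/DSNs
}

// findSecrets returns every high-risk match in staged content,
// e.g. the output of `git diff --cached`.
func findSecrets(staged string) []string {
	var hits []string
	for _, re := range highRisk {
		hits = append(hits, re.FindAllString(staged, -1)...)
	}
	return hits
}

func main() {
	staged := "db = \"mysql://root:hunter2@localhost/app\"\nname = CHANGEME"
	for _, hit := range findSecrets(staged) {
		fmt.Println("potential secret:", hit)
	}
}
```

In a pre-commit hook, a non-empty result would abort the commit; the match text is reviewed by a human rather than trusted as a definitive verdict, since regex scanning produces both false positives and false negatives.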
21
rules/patterns/task-discipline/contract.md
Normal file
@@ -0,0 +1,21 @@
# Contract: Task Discipline

Version: 1.0

## Principle

Finish before switching. A task is not done until it reaches a logical end.

## Rules

- Do not start a new task while the current one is unfinished. Switching mid-task leaves half-done work that is harder to recover than if it had never been started.
- If a new idea or requirement surfaces during work, note it and address it after the current task is complete.
- "Logical end" means: the change works, is committed, and leaves the codebase in a coherent state — not just "the immediate code compiles."
- Do not open new files, refactor adjacent code, or fix unrelated issues while implementing a specific task. Stay focused on the defined scope.
- If the current task is blocked, resolve the blocker or explicitly hand off — do not silently pivot to something else.

## Anti-patterns

- Starting a refactor while in the middle of a bug fix.
- Leaving a feature half-implemented because something more interesting came up.
- Responding to a new requirement by abandoning the current one without documenting what was left unfinished.
85
rules/patterns/testing-policy/contract.md
Normal file
@@ -0,0 +1,85 @@
|
||||
# Contract: Testing Policy
|
||||
|
||||
Version: 1.1
|
||||
|
||||
## Purpose
|
||||
|
||||
Определяет когда писать тесты, когда не писать, и как их поддерживать.
|
||||
Применяется ко всем проектам на Go. Агенты следуют этим правилам самостоятельно, без запроса подтверждения.
|
||||
|
||||
---
|
||||
|
||||
## Когда тест обязателен
|
||||
|
||||
Тест пишется всегда, когда код делает нетривиальное преобразование данных или реализует бизнес-логику:
|
||||
|
||||
- **Парсеры** — любой код, читающий внешний формат (XML, JSON, CSV, бинарный)
|
||||
- **Трансформации** — конвертация единиц, нормализация, маппинг полей
|
||||
- **Бизнес-правила** — расчёты, фильтрация, агрегация, приоритизация
|
||||
- **Граничные случаи** — пустой ввод, нулевые значения, переполнение, отсутствующие поля
|
||||
- **Degraded / legacy states** — legacy schema, interrupted migrations, duplicates, invalid persisted rows, partially migrated tables
|
||||
- **Регрессии** — если баг был найден, тест фиксирует его до исправления
|
||||
|
||||
Тест пишется в том же коммите, что и функциональность. Функциональность без теста (там где он обязателен) — неполный коммит.
|
||||
|
||||
Для local-first desktop приложений правила деградированных состояний и recovery-тестов определяются также `local-first-recovery` contract.
|
||||
|
||||
---
|
||||
|
||||
## Когда тест не нужен
|
||||
|
||||
Тест не пишется на код, где он не даёт ценности:
|
||||
|
||||
- Геттеры и сеттеры: `func (s *Server) Port() int { return s.port }`
|
||||
- Конфиг-структуры и константы
|
||||
- Тривиальный клей: передача параметров, инициализация, dependency wiring
|
||||
- Логирование и форматирование вывода
|
||||
- HTTP-хендлеры без бизнес-логики (только роутинг и вызов сервиса)
|
||||
|
||||
---
|
||||
|
||||
## Структура теста
|
||||
|
||||
Использовать стандартный Go `testing`. Табличные тесты (`[]struct{ ... }`) — когда случаев больше двух.
|
||||
|
||||
```go
|
||||
func TestParseGPUSensor(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
xml string
|
||||
want int
|
||||
}{
|
||||
{"normal", `<VALUE>290</VALUE>`, 29},
|
||||
{"zero", `<VALUE>0</VALUE>`, 0},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
got := parseGPUTemp(tt.xml)
|
||||
if got != tt.want {
|
||||
t.Fatalf("got %d, want %d", got, tt.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
```

Fixtures (XML, JSON, binary data) are inline constants or files in `testdata/`.
Do not use real network calls or a real database in unit tests.

---

## Maintenance

- A broken test is fixed or deleted in the same commit that broke it.
- Commented-out tests are not allowed. If a test is obsolete, delete it.
- A test that covers deleted functionality is deleted together with that functionality.

---

## Instructions for Agents (Codex, Claude)

1. When adding functionality, check the list above to see whether the code falls into a "test required" category.
2. If it does, write the test in the same commit, without asking for confirmation.
3. If it does not, do not write a test and do not mention its absence.
4. When deleting functionality, delete the corresponding tests.
5. When you find commented-out or broken tests, delete or fix them.
133
rules/patterns/unattended-boot-services/contract.md
Normal file
@@ -0,0 +1,133 @@
# Contract: Unattended Boot Services (OpenRC)

Version: 1.0

## Purpose

Rules for OpenRC services that run in unattended environments: LiveCDs, kiosks, embedded systems.
No user is present. No TTY prompts. Every failure path must have a silent fallback.

---

## Core Invariants

- **Never block boot.** A service failure must not prevent other services from starting.
- **Never prompt.** No `read`, no `pause`, no interactive input of any kind.
- **Always exit 0.** Use `eend 0` at the end of `start()` regardless of the operation result.
- **Log everything.** Write results to `/var/log/` so SSH inspection is possible after boot.
- **Fail silently, degrade gracefully.** Missing tool → skip that collector. No network → skip network-dependent steps.

---

## Service Dependencies

Use the minimum necessary dependencies:

```sh
depend() {
    need localmount     # almost always needed
    after some-service  # ordering without hard dependency
    use logger          # optional soft dependency
    # DO NOT add: need net, need networking, need network-online
}
```

**Never use `need net` or `need networking`** unless the service is genuinely useless without
network and you want it to fail loudly when no cable is connected.
For services that work with or without network, use `after` instead.

---

## Network-Independent SSH

Dropbear (and any SSH server) must start without network being available.
Common mistake: installing dropbear-openrc, which adds `need net` in its default init.

Override with a custom init:

```sh
#!/sbin/openrc-run
description="SSH server"

depend() {
    need localmount
    after bee-sshsetup  # key/user setup, not networking
    use logger
    # NO need net
}

start() {
    check_config || return 1
    ebegin "Starting dropbear"
    /usr/sbin/dropbear ${DROPBEAR_OPTS}
    eend $?
}
```

Place this file in the overlay at `etc/init.d/dropbear` — it overrides the package-installed version.

---

## Persistent DHCP

Do not use blocking DHCP (`-n` flag exits if no offer). Use background mode so the client
retries automatically when a cable is connected after boot:

```sh
# Wrong — exits immediately if no DHCP offer
udhcpc -i "$iface" -t 3 -T 5 -n -q

# Correct — background daemon, retries indefinitely
udhcpc -i "$iface" -b -t 0 -T 3 -q
```

The network service itself should complete immediately (exit 0) — udhcpc daemons run in background.

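Under these rules, a minimal network service sketch might look like the following. This is a sketch under stated assumptions: interface discovery via `/sys/class/net` is one possible approach, and `ebegin`/`eend` are provided by the OpenRC framework at runtime.

```sh
start() {
    ebegin "Starting DHCP on all interfaces"
    for path in /sys/class/net/*; do
        iface=${path##*/}
        [ "$iface" = "lo" ] && continue
        ip link set "$iface" up 2>/dev/null
        # -b backgrounds the client, -t 0 retries forever
        udhcpc -i "$iface" -b -t 0 -T 3 -q 2>/dev/null
    done
    eend 0  # boot continues; leases arrive whenever a cable appears
}
```

The loop returns immediately for every interface; lease acquisition is entirely the background daemons' problem.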
---

## Service Start Order (typical LiveCD)

```
localmount
└── sshsetup  (user creation, key injection — before dropbear)
    └── dropbear  (SSH — independent of network)
        └── network  (DHCP on all interfaces — does not block anything)
            └── nvidia  (or other hardware init — after network in case firmware needs it)
                └── audit  (main workload — after all hardware ready)
```

Services at the same level can start in parallel. Use `after`, not `need`, for ordering without a hard dependency.

---

## Error Handling in start()

```sh
start() {
    ebegin "Running audit"
    /usr/local/bin/audit --output /var/log/audit.json >> /var/log/audit.log 2>&1
    local rc=$?
    if [ $rc -eq 0 ]; then
        einfo "Audit complete"
    else
        ewarn "Audit finished with errors — check /var/log/audit.log"
    fi
    eend 0  # always 0 — never fail the runlevel
}
```

- Capture the exit code into a local variable.
- Log the result with `einfo` or `ewarn`.
- Always `eend 0` — a failed audit is not a reason to block the boot runlevel.
- The exception: services whose failure makes SSH impossible (e.g. key setup) may `return 1`.

---

## Rules

- Every `start()` ends with `eend 0` unless failure makes the entire environment unusable.
- Network is always best-effort. Test for it, don't depend on it.
- Proprietary drivers (NVIDIA, etc.): load failure → log warning → continue without enrichment.
- External tools (ipmitool, smartctl, etc.): not found → skip that data source → do not abort.
- Timeout all external commands: `timeout 30 smartctl ...` prevents infinite hangs.
- Write all output to `/var/log/` — TTY output is secondary.

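The tool-missing, timeout, and always-return-0 rules above can be combined into one best-effort wrapper. A minimal sketch; `run_collector` and the log path are illustrative names, not from any real project:

```sh
#!/bin/sh
# run_collector NAME TIMEOUT_SECS CMD [ARGS...]
# Skips a missing tool, bounds runtime with timeout(1), logs the
# outcome, and always returns 0 so start() can still end with eend 0.
LOG="${LOG:-/var/log/collectors.log}"

run_collector() {
    name=$1; limit=$2; shift 2
    if ! command -v "$1" >/dev/null 2>&1; then
        echo "$name: tool '$1' not found, skipping" >> "$LOG"
        return 0
    fi
    if timeout "$limit" "$@" >> "$LOG" 2>&1; then
        echo "$name: ok" >> "$LOG"
    else
        echo "$name: failed (rc=$?), continuing" >> "$LOG"
    fi
    return 0
}
```

Called as `run_collector smart 30 smartctl --scan`, it degrades a missing `smartctl` or a 30-second hang into a log line instead of a blocked runlevel.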
82
rules/patterns/vendor-installer-verification/contract.md
Normal file
@@ -0,0 +1,82 @@
# Contract: Vendor Installer Verification

Version: 1.0

## Purpose

Rules for downloading and verifying proprietary vendor installers (`.run`, `.exe`, `.tar.gz`)
where the vendor publishes a checksum alongside the binary.
Applies to: NVIDIA drivers, vendor CLI tools, firmware packages.

---

## Download Order

Always download the checksum file **before** the installer:

```sh
BASE_URL="https://vendor.example.com/downloads/${VERSION}"
BIN_FILE="/var/cache/vendor-${VERSION}.run"
SHA_FILE="/var/cache/vendor-${VERSION}.run.sha256sum"

# 1. Download checksum first
wget -q -O "$SHA_FILE" "${BASE_URL}/vendor-${VERSION}.run.sha256sum"

# 2. Download installer
wget --show-progress -O "$BIN_FILE" "${BASE_URL}/vendor-${VERSION}.run"

# 3. Verify
cd /var/cache
sha256sum -c "$SHA_FILE" || { echo "ERROR: sha256 mismatch"; rm -f "$BIN_FILE"; exit 1; }
```

Reason: if the download is interrupted, you have the expected checksum to verify against on retry.

---

## Cache with Verification

Never assume a cached file is valid — a previous download may have been interrupted (0-byte file):

```sh
verify_cached() {
    [ -s "$SHA_FILE" ] || return 1  # sha256 file missing or empty
    [ -s "$BIN_FILE" ] || return 1  # binary missing or empty
    cd "$(dirname "$BIN_FILE")"
    sha256sum -c "$SHA_FILE" --status 2>/dev/null
}

if ! verify_cached; then
    rm -f "$BIN_FILE" "$SHA_FILE"
    # ... download and verify
else
    echo "verified from cache"
fi
```

**Never check only for file existence.** Check that the file is non-empty (`-s`) AND passes checksum.

---

## Version Validation

Before writing build scripts, verify the version URL actually exists:

```sh
curl -sIL "https://vendor.example.com/downloads/${VERSION}/installer.run" \
  | grep -i 'http/\|content-length'
```

A `404` or `content-length: 0` means the version does not exist on that CDN.
Vendor version numbering may have gaps (e.g. NVIDIA skips minor versions on some CDNs).

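The same check can be wrapped in a predicate for use in build scripts. A sketch; `headers_indicate_present` is a hypothetical helper name, not vendor tooling:

```sh
# Decide from captured `curl -sIL` headers whether the versioned file
# exists: require a 200 status line and a non-zero Content-Length.
headers_indicate_present() {
    hdrs=$(printf '%s' "$1" | tr -d '\r')
    printf '%s\n' "$hdrs" | grep -qi '^http/[0-9.]* 200' || return 1
    len=$(printf '%s\n' "$hdrs" | grep -i '^content-length:' \
        | tail -n 1 | awk '{print $2}')
    [ -n "$len" ] && [ "$len" -gt 0 ]
}
```

Usage: `headers_indicate_present "$(curl -sIL "$URL")" || echo "version absent on CDN"`.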
---

## Rules

- Download checksum before installer — never after.
- Verify checksum before extracting or executing.
- On mismatch: delete the file, exit with error. Never proceed with a bad installer.
- Cache by `version` + any secondary key (e.g. kernel version for compiled modules).
- Never commit installer files to git — always download at build time.
- Log the expected hash when downloading so failures are diagnosable.