docs: add agent bootstrap and contract read router
@@ -1,55 +1,24 @@
# {{ .project_name }} — Instructions for Claude

## Shared Engineering Rules

Read `bible/` — shared rules for all projects (CSV, logging, DB, tables, background tasks, code style).
Start with `bible/rules/patterns/` for specific contracts.
Read `bible/AGENT-BOOTSTRAP.md` first.
Do not read the whole `bible/` submodule by default.
Read only the contracts that `AGENT-BOOTSTRAP.md` routes you to for the current task.

## Project Architecture

Read `bible-local/` — project-specific architecture.
Read `bible-local/README.md` first, then only the relevant project-specific architecture files.
Every architectural decision specific to this project must be recorded in `bible-local/`.

---

## Quick Reference (full contracts in `bible/rules/patterns/`)

## Minimum Read Path

### Go Code Style (`go-code-style/contract.md`)

- Handler → Service → Repository. No SQL in handlers, no HTTP writes in services.
- Errors: `fmt.Errorf("context: %w", err)`. Never discard with `_`.
- `gofmt` before every commit.
- Thresholds and status logic on the server — UI only reflects what the server returns.

1. `bible/AGENT-BOOTSTRAP.md`
2. `bible-local/README.md`
3. Relevant files in `bible-local/architecture/` and `bible-local/decisions/`
4. Relevant `bible/rules/patterns/*/contract.md`

### Logging (`go-logging/contract.md`)

- `slog`, stdout/stderr only. Never `console.log` as a substitute for server logging.
- Always log: startup, task start/finish/error, export row counts, ingest results, any 500.

## Default Rule

### Database (`go-database/contract.md`)

- **CRITICAL**: never run SQL on the same tx while iterating a cursor. Two-phase: read all → close → write.
- Soft delete via `is_active = false`.
- Fail-fast DB ping before starting the HTTP server.
- No N+1: use JOINs or batch `WHERE id IN (...)`.
- GORM: `gorm:"-"` = fully ignored; `gorm:"-:migration"` = skip migration only.

### REST API (`go-api/contract.md`)

- Plural nouns: `/api/assets`, `/api/components`.
- Never `200 OK` for errors — use `422` for validation, `404`, `500`.
- Error body: `{"error": "message", "fields": {"field": "reason"}}`.
- List response always includes `total_count`, `page`, `per_page`, `total_pages`.
- `/health` and `/api/db-status` required in every app.
### Background Tasks (`go-background-tasks/contract.md`)

- Slow ops (>300ms): POST → `{task_id}` → client polls `/api/tasks/:id`.
- No SSE. Polling only. Return `202 Accepted`.
### Tables, Filtering, Pagination (`table-management/contract.md`)

- Server-side only. Filter state in URL params. Filter resets to page 1.
- Display: "51–100 из 342".

### Modals (`modal-workflows/contract.md`)

- States: open → submitting → success | error.
- Destructive actions require a confirmation modal naming the target.
- Never close on error. Use `422` for validation errors in htmx flows.

### CSV Export (`import-export/contract.md`)

- BOM: `\xEF\xBB\xBF`. Delimiter: `;`. Decimal: `,` (`1 234,56`). Dates: `DD.MM.YYYY`.
- Stream via callback — never load all rows into memory.
- Always call `w.Flush()` after the loop.
Do not claim you "read bible" unless you actually read the relevant files.
Do not walk all shared contracts unless the task is explicitly about changing the rules library itself.
rules/ai/codex/AGENTS.template.md (new file, 24 lines)

@@ -0,0 +1,24 @@
# {{ .project_name }} — Instructions for Codex

## Shared Engineering Rules

Read `bible/AGENT-BOOTSTRAP.md` first.
Do not read the whole `bible/` submodule by default.
Read only the contracts that `AGENT-BOOTSTRAP.md` routes you to for the current task.

## Project Architecture

Read `bible-local/README.md` first, then only the relevant project-specific architecture files.
Every architectural decision specific to this project must be recorded in `bible-local/`.

---

## Minimum Read Path

1. `bible/AGENT-BOOTSTRAP.md`
2. `bible-local/README.md`
3. Relevant files in `bible-local/architecture/` and `bible-local/decisions/`
4. Relevant `bible/rules/patterns/*/contract.md`

## Default Rule

Do not claim you "read bible" unless you actually read the relevant files.
Do not walk all shared contracts unless the task is explicitly about changing the rules library itself.
rules/patterns/alpine-livecd/README.md (new file, 90 lines)

@@ -0,0 +1,90 @@
# Alpine LiveCD Pattern Notes

This file keeps examples and rationale. The normative rules live in `contract.md`.

## Minimal mkimage Profile

```sh
profile_<name>() {
  arch="x86_64"
  hostname="<hostname>"
  apkovl="genapkovl-<name>.sh"
  image_ext="iso"
  output_format="iso"
  kernel_flavors="lts"
  initfs_cmdline="modules=loop,squashfs,sd-mod,usb-storage quiet"
  initfs_features="ata base cdrom ext4 mmc nvme raid scsi squashfs usb virtio"
  grub_mod="all_video disk part_gpt part_msdos linux normal configfile search search_label efi_gop fat iso9660 cat echo ls test true help gzio"
  apks="alpine-base linux-lts linux-firmware-none ..."
}
```

`arch` is the easiest field to miss. Without it, mkimage may silently skip the profile.

## apkovl Placement

`genapkovl-<name>.sh` must be in the current working directory when mkimage runs.

Example:

```sh
cp "genapkovl-<name>.sh" ~/.mkimage/
cp "genapkovl-<name>.sh" /var/tmp/
cd /var/tmp
sh mkimage.sh --workdir /var/tmp/work ...
```

## `/var/tmp` Build Root

Use `/var/tmp` instead of `/tmp`:

```sh
export TMPDIR=/var/tmp
cd /var/tmp
sh mkimage.sh ...
```

On Alpine builders, `/tmp` is often a small tmpfs and firmware/modloop builds overflow it.

## Cache Reuse

Typical cache-preserving cleanup:

```sh
if [ -d /var/tmp/bee-iso-work ]; then
  find /var/tmp/bee-iso-work -maxdepth 1 -mindepth 1 \
    -not -name 'apks_*' \
    -not -name 'kernel_*' \
    -not -name 'syslinux_*' \
    -not -name 'grub_*' \
    -exec rm -rf {} +
fi
```

The apkovl section should still be rebuilt every time.

## Faster Squashfs

```sh
mkdir -p /etc/mkinitfs
grep -q 'MKSQUASHFS_OPTS' /etc/mkinitfs/mkinitfs.conf 2>/dev/null || \
  echo 'MKSQUASHFS_OPTS="-comp lz4 -Xhc"' >> /etc/mkinitfs/mkinitfs.conf
```

## Long-Running Builds

```sh
apk add screen
screen -dmS build sh -c "sh build.sh > /var/log/build.log 2>&1"
tail -f /var/log/build.log
```

## Firmware Reminder

Typical extra firmware packages:

- `linux-firmware-intel`
- `linux-firmware-mellanox`
- `linux-firmware-bnx2x`
- `linux-firmware-rtl_nic`
- `linux-firmware-other`
@@ -7,136 +7,17 @@ Version: 1.0

Rules for building bootable Alpine Linux ISO images with custom overlays using `mkimage.sh`.
Applies to any project that needs a LiveCD: hardware audit, rescue environments, kiosks.

---

See `README.md` for detailed examples and build snippets.

## mkimage Profile

## Rules

Every project must have a profile file `mkimg.<name>.sh` defining:

```sh
profile_<name>() {
  arch="x86_64"  # REQUIRED: without this mkimage silently skips the profile
  hostname="<hostname>"
  apkovl="genapkovl-<name>.sh"
  image_ext="iso"
  output_format="iso"
  kernel_flavors="lts"
  initfs_cmdline="modules=loop,squashfs,sd-mod,usb-storage quiet"
  initfs_features="ata base cdrom ext4 mmc nvme raid scsi squashfs usb virtio"
  grub_mod="all_video disk part_gpt part_msdos linux normal configfile search search_label efi_gop fat iso9660 cat echo ls test true help gzio"
  apks="alpine-base linux-lts linux-firmware-none ..."
}
```

**`arch` is mandatory.** If missing, mkimage silently builds nothing and exits 0.

---

## apkovl Mechanism

The apkovl is a `.tar.gz` overlay extracted by the initramfs at boot, overlaying `/etc`, `/usr`, `/root`.

`genapkovl-<name>.sh` generates the tarball:
- Must be in the **CWD** when mkimage runs — not only in `~/.mkimage/`
- `~/.mkimage/` is searched for mkimg profiles only, not genapkovl scripts

```sh
# Copy both scripts to ~/.mkimage AND to the CWD (typically /var/tmp)
cp "genapkovl-<name>.sh" ~/.mkimage/
cp "genapkovl-<name>.sh" /var/tmp/
cd /var/tmp
sh mkimage.sh --workdir /var/tmp/work ...
```

---

## Build Environment

**Always use `/var/tmp`, not `/tmp`:**

```sh
export TMPDIR=/var/tmp
cd /var/tmp
sh mkimage.sh ...
```

`/tmp` on Alpine builder VMs is typically a 1GB tmpfs. The kernel firmware squashfs alone exceeds this.
`/var/tmp` uses actual disk space.

---

## Workdir Caching

mkimage stores each ISO section in a hash-named subdirectory. Preserve expensive sections across builds:

```sh
# Delete everything EXCEPT the cached sections:
#   apks_*     — downloaded packages
#   kernel_*   — modloop squashfs
#   syslinux_* — syslinux bootloader
#   grub_*     — grub EFI
# (comments must not follow a trailing backslash, so they live up here)
if [ -d /var/tmp/bee-iso-work ]; then
  find /var/tmp/bee-iso-work -maxdepth 1 -mindepth 1 \
    -not -name 'apks_*' \
    -not -name 'kernel_*' \
    -not -name 'syslinux_*' \
    -not -name 'grub_*' \
    -exec rm -rf {} +
fi
```

The apkovl section is always regenerated (it contains project-specific config that changes per build).

---

## Squashfs Compression

Default compression is `xz` — slow but small. For RAM-loaded modloops, size rarely matters.
Use `lz4` for faster builds:

```sh
mkdir -p /etc/mkinitfs
grep -q 'MKSQUASHFS_OPTS' /etc/mkinitfs/mkinitfs.conf 2>/dev/null || \
  echo 'MKSQUASHFS_OPTS="-comp lz4 -Xhc"' >> /etc/mkinitfs/mkinitfs.conf
```

Apply before running mkimage. mkimage rebuilds the modloop only when the kernel version changes.

---

## Long Builds

NVIDIA driver downloads, kernel compiles, and package fetches can take 10–30 minutes.
Run in a `screen` session so builds survive SSH disconnects:

```sh
apk add screen
screen -dmS build sh -c "sh build.sh > /var/log/build.log 2>&1"
tail -f /var/log/build.log
```

---

## NIC Firmware

`linux-firmware-none` (the default) contains zero firmware files. Real hardware NICs often require firmware.
Include firmware packages matching the expected hardware:

```
linux-firmware-intel     # Intel NICs (X710, E810, etc.)
linux-firmware-mellanox  # Mellanox/NVIDIA ConnectX
linux-firmware-bnx2x     # Broadcom NetXtreme
linux-firmware-rtl_nic   # Realtek
linux-firmware-other     # catch-all
```

---

## Versioning

Pin all versions in a single `VERSIONS` file sourced by all build scripts:

```sh
ALPINE_VERSION=3.21
KERNEL_VERSION=6.12
GO_VERSION=1.23.6
NVIDIA_DRIVER_VERSION=590.48.01
```

Never hardcode versions inside build scripts.
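The sourcing pattern can be sketched like this (the `/tmp/VERSIONS` path and the version values are demo-only stand-ins for the shared file in the build directory):

```shell
#!/bin/sh
set -eu

# Demo only: write a VERSIONS file, then source it the way build scripts would.
cat > /tmp/VERSIONS <<'EOF'
ALPINE_VERSION=3.21
KERNEL_VERSION=6.12
EOF

# Every build script starts by sourcing the shared file instead of
# hardcoding versions inline.
. /tmp/VERSIONS
echo "alpine=$ALPINE_VERSION kernel=$KERNEL_VERSION"
```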
- Every project must have `mkimg.<name>.sh`.
- `arch` is mandatory in the mkimage profile. If it is missing, `mkimage.sh` may exit 0 without building anything.
- The `apkovl` generator `genapkovl-<name>.sh` must be present in the current working directory when `mkimage.sh` runs.
- `~/.mkimage/` is for mkimg profiles only. Do not assume mkimage will find `genapkovl` there.
- Run builds in `/var/tmp`, not `/tmp`. LiveCD builds often exceed the typical `/tmp` tmpfs size.
- Preserve expensive mkimage cache sections between builds when possible. Regenerate the apkovl section every build.
- For RAM-loaded modloops, prefer faster squashfs settings such as `lz4` unless the project explicitly optimizes for the smallest ISO size.
- Long builds must run in a resilient session (`screen`, `tmux`, or equivalent) so SSH disconnects do not kill the build.
- `linux-firmware-none` alone is not sufficient for real hardware targets. Include firmware packages matching the expected NIC/storage hardware.
- Pin all build-critical versions in one shared versions file sourced by the build scripts. Do not hardcode versions inline in multiple scripts.
rules/patterns/app-binary/README.md (new file, 78 lines)

@@ -0,0 +1,78 @@
# Application Binary Pattern Notes

This file keeps examples and rollout snippets. The normative rules live in `contract.md`.

## Host Layout

Default application root:

```text
/appdata/<appname>/
```

Example (create the directory before copying into it):

```bash
ssh user@host "mkdir -p /appdata/myservice"
scp bin/myservice user@host:/appdata/myservice/myservice
scp docker-compose.yml user@host:/appdata/myservice/docker-compose.yml
ssh user@host "cd /appdata/myservice && docker compose up -d"
```

## Embedded Resources

Typical embedded assets:

- HTML templates
- static JS/CSS/icons
- `config.template.yaml`
- DB migrations

## Config Template Example

```yaml
# <appname> configuration
# Generated on first run. Edit as needed.

server:
  port: 8080

database:
  host: localhost
  port: 5432
  user: ""
  password: ""
  dbname: ""
```

## First-Run Behavior

```text
Start
  -> config missing
  -> create directory
  -> write template
  -> print config path
  -> exit 0
```

Expected message:

```text
Config created: ~/.config/<appname>/config.yaml
Edit the file and restart the application.
```
## Build Examples

Without CGO:

```bash
CGO_ENABLED=0 go build -ldflags="-s -w" -o bin/<appname> ./cmd/<appname>
```

With CGO where required:

```bash
CGO_ENABLED=1 go build -ldflags="-s -w" -o bin/<appname> ./cmd/<appname>
```
@@ -6,152 +6,19 @@ Version: 1.0

Rules for building Go applications, packaging their resources, and handling first run.

---

See `README.md` for deployment examples and a sample config template.

## Host Location

## Rules

> This rule applies when the AI deploys the application itself or runs commands on the build machine (deployment, copying files, starting services).

The application binary lives in:

```
/appdata/<appname>/
```

where `<appname>` is the application name (lowercase, no spaces).

Example: application `myservice` → `/appdata/myservice/myservice`.

All files belonging to a given application (the binary, helper launch scripts, `docker-compose.yml`) live inside this directory. Config and data follow the rules in the sections below.

### Rollout Examples

When deploying, copying files, or starting services, the AI **always defaults** to this path:

```bash
# Copy the binary
scp bin/myservice user@host:/appdata/myservice/myservice

# Copy docker-compose
scp docker-compose.yml user@host:/appdata/myservice/docker-compose.yml

# Start on the host
ssh user@host "cd /appdata/myservice && docker compose up -d"
```

```bash
# Create the directory if it does not exist
ssh user@host "mkdir -p /appdata/myservice"
```

Do not suggest alternative paths (`/opt/`, `/usr/local/bin/`, `~/`); use only `/appdata/<appname>/`.

---

## Binary

The binary is self-contained; all resources are embedded via `//go:embed`:

- HTML templates
- Static assets (JS, CSS, icons)
- Config file template (`config.template.yaml`)
- DB migrations

No external directories next to the binary are required to run.

---

## Config File

Created automatically on first run if it does not exist.

### Location

| Application mode | Path |
|---|---|
| Single-user | `~/.config/<appname>/config.yaml` |
| Server / multi-user | `/etc/<appname>/config.yaml` or next to the binary |

The application determines the path itself and creates the directory if it is missing.

### Contents

The config stores:

- Application settings (port, language, timeouts, feature flags)
- Connection parameters for the centralized DBMS (host, port, user, password, dbname)

The config **does not store**:

- User data
- Cache or state
- Anything related to SQLite (see below)

### Template

The config template is embedded in the binary. When the file is created, the template is copied to the target path.
The template contains all keys, with comments and default values.

```yaml
# <appname> configuration
# Generated on first run. Edit as needed.

server:
  port: 8080

database:
  host: localhost
  port: 5432
  user: ""
  password: ""
  dbname: ""

# ... remaining settings
```

---

## SQLite (single-user mode)

If the application uses a local SQLite database:

- The file lives next to the config: `~/.config/<appname>/<appname>.db`
- The file path is not exposed in the config; the application derives it from the config path
- SQLite **does not store** connection parameters for the centralized DBMS, only local application data

---

## First Run: Algorithm

```
Application start
│
├── Config exists? → No → create the directory → copy the template → tell the user the path
│        → exit with code 0
│        (the user fills in the config)
└── Config exists? → Yes → validate → start the application
```

When the config is first created, the application **does not start**; it prints:

```
Config created: ~/.config/<appname>/config.yaml
Edit the file and restart the application.
```

---

## Build

The final binary is built without CGO when possible (with CGO for SQLite):

```
CGO_ENABLED=0 go build -ldflags="-s -w" -o bin/<appname> ./cmd/<appname>
```

With SQLite:

```
CGO_ENABLED=1 go build -ldflags="-s -w" -o bin/<appname> ./cmd/<appname>
```

The binary does not depend on the working directory it is launched from.

- When the agent deploys or runs commands on a host, the application lives in `/appdata/<appname>/`.
- Do not suggest alternate default install paths such as `/opt`, `/usr/local/bin`, or `~/`.
- The binary must be self-contained. Templates, static assets, config templates, and DB migrations are embedded with `//go:embed` or an equivalent application-owned mechanism.
- The application creates its config automatically on first run if it does not exist yet.
- Default config path:
  - single-user mode: `~/.config/<appname>/config.yaml`
  - server or multi-user mode: `/etc/<appname>/config.yaml` or next to the binary
- Config stores application settings and centralized DB credentials only. It must not store user data, cache/state, or SQLite path configuration.
- For local SQLite mode, the database file lives next to the config and its path is derived by the application, not configured separately.
- On first run with no config, the application must create the config, print its path, exit 0, and stop. It must not continue startup with a fresh placeholder config.
- The binary must not depend on the caller's working directory.
- Build with `CGO_ENABLED=0` when possible. Enable CGO only when the chosen storage/runtime actually requires it, such as SQLite drivers that need CGO.
rules/patterns/bom-decomposition/README.md (new file, 117 lines)

@@ -0,0 +1,117 @@
# BOM Decomposition Pattern Notes

This file keeps examples and reference types. The normative rules live in `contract.md`.

## Canonical JSON Shape

```json
{
  "sort_order": 10,
  "item_code": "SYS-821GE-TNHR",
  "quantity": 3,
  "description": "Vendor bundle",
  "unit_price": 12000.00,
  "total_price": 36000.00,
  "component_mappings": [
    { "component_ref": "CHASSIS_X13_8GPU", "quantity_per_item": 1 },
    { "component_ref": "PS_3000W_Titanium", "quantity_per_item": 2 },
    { "component_ref": "RAILKIT_X13", "quantity_per_item": 1 }
  ]
}
```

Project-specific aliases are acceptable if the semantics stay identical:

- `item_code` -> `vendor_partnumber`
- `component_ref` -> `lot_name`
- `component_mappings` -> `lot_mappings`
- `quantity_per_item` -> `quantity_per_pn`

## Persistence Example

```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "quantity": 2,
      "description": "Bundle",
      "lot_mappings": [
        { "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
        { "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
      ]
    }
  ]
}
```

## Wrong Shape

```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "primary_lot": "LOT_CPU",
      "secondary_lots": ["LOT_RAIL"]
    }
  ]
}
```

## Reference Go Types

```go
type BOMItem struct {
	SortOrder         int                `json:"sort_order"`
	ItemCode          string             `json:"item_code"`
	Quantity          int                `json:"quantity"`
	Description       string             `json:"description,omitempty"`
	UnitPrice         *float64           `json:"unit_price,omitempty"`
	TotalPrice        *float64           `json:"total_price,omitempty"`
	ComponentMappings []ComponentMapping `json:"component_mappings,omitempty"`
}

type ComponentMapping struct {
	ComponentRef    string `json:"component_ref"`
	QuantityPerItem int    `json:"quantity_per_item"`
}
```

## Normalization Sketch

```go
import (
	"fmt"
	"strings"
)

func NormalizeComponentMappings(in []ComponentMapping) ([]ComponentMapping, error) {
	if len(in) == 0 {
		return nil, nil
	}

	merged := map[string]int{}
	order := make([]string, 0, len(in))

	for _, m := range in {
		ref := strings.TrimSpace(m.ComponentRef)
		if ref == "" {
			continue
		}
		if m.QuantityPerItem <= 0 {
			return nil, fmt.Errorf("component %q has invalid quantity_per_item %d", ref, m.QuantityPerItem)
		}
		if _, exists := merged[ref]; !exists {
			order = append(order, ref)
		}
		merged[ref] += m.QuantityPerItem
	}

	out := make([]ComponentMapping, 0, len(order))
	for _, ref := range order {
		out = append(out, ComponentMapping{
			ComponentRef:    ref,
			QuantityPerItem: merged[ref],
		})
	}
	return out, nil
}
```
@@ -15,288 +15,43 @@ Use this contract when:

- one bundle SKU expands into multiple internal components
- one external line item contributes quantities to multiple downstream rows

## Canonical Data Model

See `README.md` for full JSON and Go examples.

One BOM row has one item quantity and zero or more mapping entries:

## Canonical Shape

```json
{
  "sort_order": 10,
  "item_code": "SYS-821GE-TNHR",
  "quantity": 3,
  "description": "Vendor bundle",
  "unit_price": 12000.00,
  "total_price": 36000.00,
  "component_mappings": [
    { "component_ref": "CHASSIS_X13_8GPU", "quantity_per_item": 1 },
    { "component_ref": "PS_3000W_Titanium", "quantity_per_item": 2 },
    { "component_ref": "RAILKIT_X13", "quantity_per_item": 1 }
  ]
}
```

Rules:
- A BOM row contains one quantity plus zero or more mapping entries in one array field.
- `component_mappings[]` is the only canonical persisted decomposition format.
- Each mapping entry contains:
  - `component_ref` — stable identifier of the downstream component/LOT
  - `quantity_per_item` — how many units of that component are produced by one BOM row unit
- Derived or UI-only fields may exist at runtime, but they are not the source of truth.
- Each mapping entry has:
  - `component_ref`
  - `quantity_per_item`
- Project-specific field names are allowed only if the semantics stay identical.

Project-specific names are allowed if the semantics stay identical:
- `item_code` may be `vendor_partnumber`
- `component_ref` may be `lot_name`, `lot_code`, or another stable project identifier
- `component_mappings` may be `lot_mappings`

## Quantity and Persistence Rules

## Quantity Semantics

- Downstream quantity is always `row.quantity * mapping.quantity_per_item`.
- The persisted row payload is the source of truth.
- The same mapping shape must be used for persistence, API read/write payloads, and downstream expansion logic.
- If the mapping array is empty, the row contributes nothing downstream.
- Row order is defined by `sort_order`.
- Mapping entry order may be preserved for UX, but business logic must not depend on it.

The total downstream quantity is always:

## UI and Validation Rules

```text
downstream_total_qty = row.quantity * mapping.quantity_per_item
```

Example:
- BOM row quantity = `3`
- mapping A quantity per item = `1`
- mapping B quantity per item = `2`

Result:
- component A total = `3`
- component B total = `6`

This multiplication rule is mandatory for estimate/cart/build expansion.
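The multiplication rule can be exercised directly with the reference types (field names mirror the canonical shape; the refs below are illustrative):

```go
package main

import "fmt"

// Types mirror the reference shapes from this contract.
type ComponentMapping struct {
	ComponentRef    string
	QuantityPerItem int
}

type BOMItem struct {
	Quantity          int
	ComponentMappings []ComponentMapping
}

// ExpandRow applies downstream_total_qty = row.quantity * quantity_per_item
// for every mapping entry of one BOM row.
func ExpandRow(row BOMItem) map[string]int {
	totals := map[string]int{}
	for _, m := range row.ComponentMappings {
		totals[m.ComponentRef] += row.Quantity * m.QuantityPerItem
	}
	return totals
}

func main() {
	row := BOMItem{
		Quantity: 3,
		ComponentMappings: []ComponentMapping{
			{ComponentRef: "A", QuantityPerItem: 1},
			{ComponentRef: "B", QuantityPerItem: 2},
		},
	}
	fmt.Println(ExpandRow(row)) // map[A:3 B:6], matching the worked example
}
```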
|
||||
## Persistence Contract
|
||||
|
||||
The source of truth is the persisted BOM row JSON payload.
|
||||
|
||||
If the project stores BOM rows:
|
||||
- in a SQL JSON column, the JSON payload is the source of truth
|
||||
- in a text column containing JSON, that JSON payload is the source of truth
|
||||
- in an API document later persisted as JSON, the row payload shape must remain unchanged
|
||||
|
||||
Example persisted payload:
|
||||
|
||||
```json
|
||||
{
|
||||
"vendor_spec": [
|
||||
{
|
||||
"sort_order": 10,
|
||||
"vendor_partnumber": "ABC-123",
|
||||
"quantity": 2,
|
||||
"description": "Bundle",
|
||||
"lot_mappings": [
|
||||
{ "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
|
||||
{ "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Persistence rules:
|
||||
- the decomposition must be stored inside each BOM row
|
||||
- all mapping entries for that row must live in one array field
|
||||
- no secondary storage format may act as a competing source of truth
|
||||
|
||||
## API Contract
|
||||
|
||||
API read and write payloads must expose the same decomposition shape that is persisted.
|
||||
|
||||
Rules:
|
||||
- `GET` returns BOM rows with `component_mappings[]` or the project-specific equivalent
|
||||
- `PUT` / `POST` accepts the same shape
|
||||
- rebuild/apply/cart expansion must read only from the persisted mapping array
|
||||
- if the mapping array is empty, the row contributes nothing downstream
|
||||
- row order is defined by `sort_order`
|
||||
- mapping entry order may be preserved for UX, but business logic must not depend on it
|
||||
|
||||
Correct:
|
||||
|
||||
```json
|
||||
{
|
||||
"vendor_spec": [
|
||||
{
|
||||
"sort_order": 10,
|
||||
"vendor_partnumber": "ABC-123",
|
||||
"quantity": 2,
|
||||
"lot_mappings": [
|
||||
{ "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
|
||||
{ "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Wrong:
|
||||
|
||||
```json
|
||||
{
|
||||
"vendor_spec": [
|
||||
{
|
||||
"sort_order": 10,
|
||||
"vendor_partnumber": "ABC-123",
|
||||
"primary_lot": "LOT_CPU",
|
||||
"secondary_lots": ["LOT_RAIL"]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## UI Invariants
|
||||
|
||||
The UI may render the mapping list in any layout, but it must preserve the same semantics.
|
||||
|
||||
Rules:
|
||||
- the first visible mapping row is not special; it is only the first entry in the array
|
||||
- additional rows may be added via `+`, modal, inline insert, or another UI affordance
|
||||
- every mapping row is equally editable and removable
|
||||
- `quantity_per_item` is edited per mapping row, not once for the whole row
|
||||
- blank mapping rows may exist temporarily in draft UI state, but they must not be persisted
|
||||
- new UI rows should default `quantity_per_item` to `1`
|
||||
|
||||
## Normalization and Validation
|
||||
|
||||
Two stages are allowed:
|
||||
- draft UI normalization for convenience
|
||||
- server-side persistence validation for correctness
|
||||
|
||||
Canonical rules before persistence:
|
||||
- trim `component_ref`
|
||||
- drop rows with empty `component_ref`
|
||||
- reject `quantity_per_item <= 0` with a validation error
|
||||
- merge duplicate `component_ref` values within one BOM row by summing `quantity_per_item`
|
||||
- preserve first-seen order when merging duplicates
|
||||
|
||||
Example input:
|
||||
|
||||
```json
|
||||
[
|
||||
{ "component_ref": "LOT_A", "quantity_per_item": 1 },
|
||||
{ "component_ref": " LOT_A ", "quantity_per_item": 2 },
|
||||
{ "component_ref": "", "quantity_per_item": 5 }
|
||||
]
|
||||
```
|
||||
|
||||
Normalized result:
|
||||
|
||||
```json
|
||||
[
|
||||
{ "component_ref": "LOT_A", "quantity_per_item": 3 }
|
||||
]
|
||||
```
|
||||
|
||||

Why validation instead of silent repair:

- API contracts between applications must fail loudly on invalid quantities
- UI may prefill `1`, but the server must not silently reinterpret `0` or negative values

## Forbidden Patterns

Do not introduce incompatible storage or logic variants such as:

- `primary_lot`, `secondary_lots`, `main_component`, `bundle_lots`
- one field for the component and a separate field for its quantity outside the mapping array
- special-case logic where the first mapping row is "main" and later rows are optional add-ons
- computing downstream rows from temporary UI fields instead of the persisted mapping array
- storing the same decomposition in multiple shapes at once

## Reference Go Types

```go
type BOMItem struct {
	SortOrder         int                `json:"sort_order"`
	ItemCode          string             `json:"item_code"`
	Quantity          int                `json:"quantity"`
	Description       string             `json:"description,omitempty"`
	UnitPrice         *float64           `json:"unit_price,omitempty"`
	TotalPrice        *float64           `json:"total_price,omitempty"`
	ComponentMappings []ComponentMapping `json:"component_mappings,omitempty"`
}

type ComponentMapping struct {
	ComponentRef    string `json:"component_ref"`
	QuantityPerItem int    `json:"quantity_per_item"`
}
```

Project-specific aliases are acceptable if they preserve identical semantics:

```go
type VendorSpecItem struct {
	SortOrder        int                    `json:"sort_order"`
	VendorPartnumber string                 `json:"vendor_partnumber"`
	Quantity         int                    `json:"quantity"`
	Description      string                 `json:"description,omitempty"`
	UnitPrice        *float64               `json:"unit_price,omitempty"`
	TotalPrice       *float64               `json:"total_price,omitempty"`
	LotMappings      []VendorSpecLotMapping `json:"lot_mappings,omitempty"`
}

type VendorSpecLotMapping struct {
	LotName       string `json:"lot_name"`
	QuantityPerPN int    `json:"quantity_per_pn"`
}
```

## Reference Normalization (Go)

```go
func NormalizeComponentMappings(in []ComponentMapping) ([]ComponentMapping, error) {
	if len(in) == 0 {
		return nil, nil
	}

	merged := map[string]int{}
	order := make([]string, 0, len(in))

	for _, m := range in {
		ref := strings.TrimSpace(m.ComponentRef)
		if ref == "" {
			continue
		}
		if m.QuantityPerItem <= 0 {
			return nil, fmt.Errorf("component %q has invalid quantity_per_item %d", ref, m.QuantityPerItem)
		}
		if _, exists := merged[ref]; !exists {
			order = append(order, ref)
		}
		merged[ref] += m.QuantityPerItem
	}

	out := make([]ComponentMapping, 0, len(order))
	for _, ref := range order {
		out = append(out, ComponentMapping{
			ComponentRef:    ref,
			QuantityPerItem: merged[ref],
		})
	}
	if len(out) == 0 {
		return nil, nil
	}
	return out, nil
}
```
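
The normalization above can be exercised end to end. This is a runnable sketch that condenses the reference type and function into one file; the empty-input and empty-output early returns are omitted for brevity:

```go
package main

import (
	"fmt"
	"strings"
)

// Condensed copy of the reference type and normalizer.
type ComponentMapping struct {
	ComponentRef    string
	QuantityPerItem int
}

func NormalizeComponentMappings(in []ComponentMapping) ([]ComponentMapping, error) {
	merged := map[string]int{}
	order := make([]string, 0, len(in))
	for _, m := range in {
		ref := strings.TrimSpace(m.ComponentRef)
		if ref == "" {
			continue // drop blank rows before the quantity check
		}
		if m.QuantityPerItem <= 0 {
			return nil, fmt.Errorf("component %q has invalid quantity_per_item %d", ref, m.QuantityPerItem)
		}
		if _, exists := merged[ref]; !exists {
			order = append(order, ref) // preserve first-seen order
		}
		merged[ref] += m.QuantityPerItem
	}
	out := make([]ComponentMapping, 0, len(order))
	for _, ref := range order {
		out = append(out, ComponentMapping{ComponentRef: ref, QuantityPerItem: merged[ref]})
	}
	return out, nil
}

func main() {
	out, err := NormalizeComponentMappings([]ComponentMapping{
		{ComponentRef: "LOT_A", QuantityPerItem: 1},
		{ComponentRef: " LOT_A ", QuantityPerItem: 2},
		{ComponentRef: "", QuantityPerItem: 5},
	})
	fmt.Println(out, err) // prints [{LOT_A 3}] <nil>
}
```

Running it on the example input from this section reproduces the documented normalized result.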

## Reference Expansion (Go)

```go
type CartItem struct {
	ComponentRef string
	Quantity     int
}

func ExpandBOMRow(row BOMItem) []CartItem {
	result := make([]CartItem, 0, len(row.ComponentMappings))
	for _, m := range row.ComponentMappings {
		qty := row.Quantity * m.QuantityPerItem
		if qty <= 0 {
			continue
		}
		result = append(result, CartItem{
			ComponentRef: m.ComponentRef,
			Quantity:     qty,
		})
	}
	return result
}
```
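
A runnable usage sketch of the expansion, with the types trimmed to only the fields the expansion uses (lot names are illustrative):

```go
package main

import "fmt"

// Trimmed copies of the reference types.
type ComponentMapping struct {
	ComponentRef    string
	QuantityPerItem int
}

type BOMItem struct {
	Quantity          int
	ComponentMappings []ComponentMapping
}

type CartItem struct {
	ComponentRef string
	Quantity     int
}

// ExpandBOMRow multiplies the row quantity by each mapping's quantity_per_item.
func ExpandBOMRow(row BOMItem) []CartItem {
	result := make([]CartItem, 0, len(row.ComponentMappings))
	for _, m := range row.ComponentMappings {
		qty := row.Quantity * m.QuantityPerItem
		if qty <= 0 {
			continue
		}
		result = append(result, CartItem{ComponentRef: m.ComponentRef, Quantity: qty})
	}
	return result
}

func main() {
	row := BOMItem{
		Quantity: 2,
		ComponentMappings: []ComponentMapping{
			{ComponentRef: "LOT_CPU", QuantityPerItem: 1},
			{ComponentRef: "LOT_RAIL", QuantityPerItem: 4},
		},
	}
	fmt.Println(ExpandBOMRow(row)) // prints [{LOT_CPU 2} {LOT_RAIL 8}]
}
```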

72 rules/patterns/go-database/README.md Normal file
@@ -0,0 +1,72 @@

# Database Pattern Notes

This file keeps examples and rationale. The normative rules live in `contract.md`.

## Cursor Safety

Wrong:

```go
rows, _ := tx.Query("SELECT id FROM machines")
for rows.Next() {
	var id string
	rows.Scan(&id)
	tx.Exec("UPDATE machines SET processed=1 WHERE id=?", id) // write on the same tx while the cursor is open: deadlock / driver panic
}
```

Correct:

```go
rows, _ := tx.Query("SELECT id FROM machines")
var ids []string
for rows.Next() {
	var id string
	rows.Scan(&id)
	ids = append(ids, id)
}
rows.Close() // explicit close before any write on the same tx

for _, id := range ids {
	tx.Exec("UPDATE machines SET processed=1 WHERE id=?", id)
}
```

## GORM Virtual Fields

```go
Count       int    `gorm:"-"`           // ignored entirely: no migration, no read, no write
DisplayName string `gorm:"-:migration"` // excluded from migrations, still populated by queries
```

## SQL Header Example

```sql
-- Tables affected: supplier, lot_log
-- recovery.not-started: No action required.
-- recovery.partial: DELETE FROM parts_log WHERE created_by = 'migration';
-- recovery.completed: Same as partial.
-- verify: No orphaned supplier_code | SELECT supplier_code FROM parts_log pl LEFT JOIN supplier s ON s.supplier_code = pl.supplier_code WHERE s.supplier_code IS NULL AND pl.supplier_code IS NOT NULL AND pl.supplier_code != '' LIMIT 1
```

## Docker Validation Example

```bash
docker run -d --name pf_test \
  -e MYSQL_ROOT_PASSWORD=test -e MYSQL_DATABASE=RFQ_LOG \
  mariadb:11.8 --character-set-server=utf8mb4 --collation-server=utf8mb4_uca1400_ai_ci

docker exec -i pf_test mariadb -uroot -ptest RFQ_LOG < prod_dump.sql

./pfs -migrate-dsn "root:test@tcp(127.0.0.1:3306)/RFQ_LOG?parseTime=true&charset=utf8mb4&multiStatements=true" \
  -no-backup -verbose
```

## Legacy FK Repair Pattern

```sql
INSERT IGNORE INTO parent (name)
SELECT DISTINCT c.fk_col FROM child c
LEFT JOIN parent p ON p.name = c.fk_col
WHERE p.name IS NULL AND c.fk_col IS NOT NULL AND c.fk_col != '';
```

@@ -2,225 +2,56 @@

Version: 1.9

See `README.md` for examples, migration snippets, and Docker test commands.

## Query and Startup Rules

- Never execute SQL on the same transaction while iterating an open result cursor. Use a two-phase flow: read all rows, close the cursor, then execute writes. This is the most common source of `invalid connection` and `unexpected EOF` driver panics.
- This rule applies to `database/sql`, GORM transactions, and any repository call made while another cursor in the same transaction is still open.
- User-visible records use soft delete or archive flags. Do not hard-delete records with history or foreign-key references.
- Archive operations must be reversible from the UI.
- Use `gorm:"-"` only for fields that must be ignored entirely. Use `gorm:"-:migration"` for fields populated by queries but excluded from migrations.
- Always verify the DB connection before starting the HTTP server. Never serve traffic with an unverified DB connection.
- Prevent N+1 queries. Do not query inside loops over rows from another query; use JOINs or batched `IN (...)` queries.

## Migration and Backup Rules

- The migration engine owns backup creation. The operator must never be required to take a manual pre-migration backup.
- Backup storage, retention, archive format, and restore-readiness must follow `backup-management`.
- Before applying any unapplied migrations, take and verify a full DB backup.
- Before applying a migration step that changes a table, take a targeted backup of each affected table.
- Before writing any backup, verify that the output path resolves outside the git worktree and is not tracked or staged in git.
- If any migration step in a session fails, roll back all steps applied in that session in reverse order.
- If rollback is not sufficient, restore from the targeted backup taken before the failing step.
- After rollback or restore, the DB must be back in the same state it had before the session started.
- Migration failures must emit structured diagnostics naming the failed step, rollback actions, and final DB state.

## Migration Authoring Rules

- For local-first desktop apps, migration recovery must also follow `local-first-recovery`.
- Migrations are sequential and immutable after merge.
- Each migration should be reversible where possible.
- Do not rename a column in one step. Add new, backfill, and drop old across separate deploys.
- Auto-apply on startup is allowed for internal tools only if the behavior is documented.
- Every `.sql` migration file must start with:
  - `-- Tables affected: ...`
  - `-- recovery.not-started: ...`
  - `-- recovery.partial: ...`
  - `-- recovery.completed: ...`
  - one or more `-- verify: <description> | <SQL>` checks
- Verify queries must return rows only when something is wrong.
- Verify queries must exclude NULL and empty values when those would create false positives.
- A migration is recorded as applied only after all verify checks pass.

## Pre-Production Validation Rules

- Test pending migrations on a dump of the current production DB, not on fixtures.
- Use a local MariaDB Docker container matching the production version and collation.
- Execute each migration file as one DB session so session variables such as `SET FOREIGN_KEY_CHECKS = 0` remain in effect for the whole file.
- If migrations fail in Docker, fix them before touching production.

## Common Pitfalls

- Do not use tools that naively split SQL on bare `;`. String literals may contain semicolons.
- `SET FOREIGN_KEY_CHECKS = 0` is session-scoped. If the file is split across multiple sessions, FK checks come back on.
- When adding a new FK to legacy data, repair missing parent rows before enforcing the constraint unless data loss is explicitly acceptable.

@@ -11,3 +11,31 @@ Canonical file transfer UX patterns for Go web applications:

This pattern covers UI and UX contracts. Business-specific validation and file schemas remain in the host project's own architecture docs.

## Export Handler Sketch

```go
func ExportCSV(c *gin.Context) {
	c.Header("Content-Type", "text/csv; charset=utf-8")
	c.Header("Content-Disposition", `attachment; filename="export.csv"`)
	c.Writer.Write([]byte{0xEF, 0xBB, 0xBF}) // UTF-8 BOM for Excel

	w := csv.NewWriter(c.Writer)
	w.Comma = ';'
	w.Write([]string{"ID", "Name", "Price"}) // header row

	err := svc.StreamRows(ctx, filters, func(row Row) error {
		return w.Write([]string{row.ID, row.Name, formatPrice(row.Price)})
	})
	w.Flush()
	if err != nil {
		// headers already sent; log only, the status code cannot change mid-stream
		slog.Error("csv export failed mid-stream", "err", err)
	}
}
```

## Locale Notes

- BOM avoids broken UTF-8 in Excel on Windows.
- Semicolon avoids single-column imports in RU/EU locales.
- Decimal comma keeps numbers numeric in Excel.
- `DD.MM.YYYY` is preferred over ISO dates for user-facing spreadsheet exports.

@@ -2,124 +2,38 @@

Version: 1.0

See `README.md` for the reference export handler and locale examples.

## Import Rules

- Recommended flow: `Upload -> Preview / Validate -> Confirm -> Execute -> Result summary`.
- Validation preview must be human-readable.
- Warnings and errors should be visible per row and in aggregate.
- The confirm step must communicate scope and side effects clearly.

## Export Rules

- The user must explicitly choose export scope when ambiguity exists, such as `selected`, `filtered`, or `all`.
- Export format must be explicit.
- Download responses must set `Content-Type` and `Content-Disposition` correctly.

## CSV Rules

- For spreadsheet-facing CSV, write UTF-8 BOM as the first bytes.
- Use semicolon `;` as the CSV delimiter.
- Use comma as the decimal separator for user-facing numeric values.
- Use `DD.MM.YYYY` for user-facing dates.
- Use `encoding/csv` with `csv.Writer.Comma = ';'` so quoting and escaping stay correct.

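The decimal-comma rule can be sketched as a small helper; `formatPrice` matches the helper name used by the export handler sketch in `README.md`:

```go
package main

import (
	"fmt"
	"strings"
)

// formatPrice renders a price with two decimals and a decimal comma, which
// Excel in RU/EU locales recognizes as numeric.
func formatPrice(v float64) string {
	return strings.ReplaceAll(fmt.Sprintf("%.2f", v), ".", ",")
}

func main() {
	fmt.Println(formatPrice(1234.5)) // prints 1234,50
}
```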
## Streaming Rules

- Large exports must stream rows directly to the response. Do not load the full dataset into memory first.
- Use the canonical flow: `Handler -> Service -> Repository callback -> csv.Writer`.
- Repository queries should avoid N+1 by using JOINs or another batched shape.
- Always call `csv.Writer.Flush()` after writing rows.

## Error Handling

- Import errors must map to clear user-facing messages.
- Once streaming has started, export failures are logged server-side only. Do not try to change the HTTP status after header or body bytes have already been sent.

61 rules/patterns/module-versioning/README.md Normal file
@@ -0,0 +1,61 @@

# Module Versioning Pattern Notes

This file keeps examples and the decision tree. The normative rules live in `contract.md`.

## Version Format

```text
N.M
```

Examples:

- `1.0`
- `2.0`
- `2.1`
- `2.3`

## Canonical Storage Options

Go constant:

```go
const Version = "2.1"
```

Document header:

```text
Version: 2.1
```

Config field:

```json
{ "version": "2.1" }
```

## Tag Format

```text
<module-name>/v<N.M>
```

Examples:

- `parser/v2.0`
- `api-client/v1.3`

## Decision Tree

```text
Module changed?
  -> no: version unchanged
  -> yes: behavior or interface changed?
      -> yes: N+1, reset minor to 0
      -> no: narrow bugfix only -> N+0.1
```

## Commit Reminder

If a commit changes a module, the same commit should update the module version.
rules/patterns/module-versioning/contract.md (updated)

Version: 1.0

Modules are logical layers within a single repository, not separate packages.

---

See `README.md` for examples and the decision tree.

## Rules

- Module version format is `N.M`: `N` is the major version (an integer, starting at 1), `M` is the minor version (starting at 0).
- New modules start at `1.0`. `0.x` is not used.
- Any functional change bumps the major version and resets minor to `0` (`2.3 → 3.0`): adding a function, method, or field; changing or removing existing behavior; refactoring that changes module structure; changing the interface to other layers.
- A narrow bugfix that does not change behavior or interface bumps minor by `0.1`: fixing incorrect behavior, an edge case, or a typo in logic.
- Store the version in one canonical place only: code constant, module document header, or config field. Do not duplicate it across several places in one module.
- If the module is released as a separate deliverable, tag it as `<module-name>/v<N.M>`, e.g. `parser/v2.0`.
- Do not create a tag without updating the module's canonical version first. Tag only the commit that updates the version inside the module.
- When a commit changes a module, update that module's version in the same commit.

## Agent Workflow (Codex, Claude)

Required on every commit:

1. Identify which module the changes belong to.
2. Read the module's current version from its canonical place (constant, header, or config).
3. Pick the increment: behavior added, changed, or removed → **N+1**, reset minor to 0; pure bugfix with behavior unchanged → **N+0.1**.
4. Update the version in code before committing.
5. Include the new version in the commit message: `feat(parser): add csv dialect — v2.0`

An agent must not commit without updating the affected module's version.
rules/patterns/release-signing/README.md (new file, 84 lines)
# Release Signing Pattern Notes

This file keeps examples and rationale. The normative rules live in `contract.md`.

## Keys Repository Shape

```text
keys/
  developers/
    <name>.pub
  scripts/
    keygen.sh
    sign-release.sh
    verify-signature.sh
```

## Runtime Trust Loader

```go
// trustedKeysRaw is injected via -ldflags.
// Format: base64(key1):base64(key2):...
var trustedKeysRaw string
```

Typical parsing pattern:

```go
func trustedKeys() ([]ed25519.PublicKey, error) {
	if trustedKeysRaw == "" {
		return nil, fmt.Errorf("dev build: trusted keys not embedded, updates disabled")
	}
	var keys []ed25519.PublicKey
	for _, enc := range strings.Split(trustedKeysRaw, ":") {
		b, err := base64.StdEncoding.DecodeString(strings.TrimSpace(enc))
		if err != nil {
			return nil, fmt.Errorf("invalid trusted key: %w", err)
		}
		if len(b) != ed25519.PublicKeySize {
			return nil, fmt.Errorf("invalid trusted key: got %d bytes, want %d", len(b), ed25519.PublicKeySize)
		}
		keys = append(keys, ed25519.PublicKey(b))
	}
	return keys, nil
}
```

## Build Example

```sh
KEYS=$(paste -sd: /path/to/keys/developers/*.pub)
go build \
  -ldflags "-s -w -X <module>/internal/updater.trustedKeysRaw=${KEYS}" \
  -o dist/<binary>-linux-amd64 \
  ./cmd/<binary>
```

## Verification Sketch

```go
func verifySignature(binaryPath, sigPath string) error {
	keys, err := trustedKeys()
	if err != nil {
		return err
	}
	data, err := os.ReadFile(binaryPath)
	if err != nil {
		return fmt.Errorf("read binary: %w", err)
	}
	sig, err := os.ReadFile(sigPath)
	if err != nil {
		return fmt.Errorf("read signature: %w", err)
	}
	for _, key := range keys {
		if ed25519.Verify(key, data, sig) {
			return nil
		}
	}
	return fmt.Errorf("signature verification failed: no trusted key matched")
}
```

## Release Assets

```text
<binary>-linux-amd64
<binary>-linux-amd64.sig
```
rules/patterns/release-signing/contract.md (updated)

Ed25519 asymmetric signing for Go release binaries.

Guarantees that a binary accepted by a running application was produced by a trusted developer.
Applies to any Go binary that is distributed or supports self-update.
---

## Key Management

Public keys are stored in the centralized keys repository `git.mchus.pro/mchus/keys`:

```
keys/
  developers/
    <name>.pub           ← raw Ed25519 public key, base64-encoded, one line per developer
  scripts/
    keygen.sh            ← generates a keypair
    sign-release.sh      ← signs a binary
    verify-signature.sh  ← verifies locally
```

Public keys are safe to commit. Private keys stay on each developer's machine — never committed, never shared.

**Adding a developer:** add their `.pub` file → commit → rebuild affected releases.
**Removing a developer:** delete their `.pub` file → commit → rebuild releases.
Binaries previously signed with a removed key remain valid (they are already distributed), but that key cannot sign new releases.

---

## Multi-Key Trust Model

A binary is accepted if its signature verifies against **any** of the embedded trusted public keys.
This mirrors the SSH `authorized_keys` model.

- One developer signs a release with their private key and produces one `.sig` file.
- The binary trusts all active developers — any of them can make a valid release.
- Signature format: raw 64-byte Ed25519 signature (not PEM, not armored).

---

## Embedding Keys at Build Time

Public keys are injected via `-ldflags` at release build time, never hardcoded in source. This allows adding or removing developers without changing application code. A dev build without the injection has an empty `trustedKeysRaw`: updates are disabled, and the binary works normally. See `README.md` for the loader and build-script code.

---

## Signature Verification

Use `crypto/ed25519` from the Go standard library only: no third-party dependencies, no external tools. See `README.md` for the verification sketch.

Rejection behavior: log as WARNING, continue with the current binary. Never crash, never block operation.

---

## Release Asset Convention

Every release must attach two files to the Gitea release:

```
<binary>-linux-amd64        ← the binary
<binary>-linux-amd64.sig    ← raw 64-byte Ed25519 signature
```

Signing:

```sh
sh keys/scripts/sign-release.sh <developer-name> dist/<binary>-linux-amd64
```

Both files are uploaded to the Gitea release as downloadable assets.

---

See `README.md` for reference code and build snippets.

## Rules

- Trusted public keys must be injected at build time via `-ldflags`. Never hardcode them as string literals in source code.
- Never commit private keys (`.key` files) anywhere.
- A build without injected keys is a valid dev build. It must keep working normally; only verified updates are disabled.
- Signature verification uses Go stdlib `crypto/ed25519` only — no external signing libraries.
- Signature verification failure must log a warning and keep the current binary. It must not crash the app and must not block unrelated operation.
- The `.sig` asset is a raw 64-byte Ed25519 signature (not base64, not PEM), produced by `openssl pkeyutl -sign -rawin`.
- Public keys are stored in the centralized keys repository and may be committed; private keys must stay on each developer machine and must never be committed or shared.
- Adding or removing a trusted developer means changing the committed `.pub` set and rebuilding affected releases.
- A release is trusted if its signature verifies against any embedded trusted public key.
- Every signed release must ship the binary and its matching `.sig` asset.
rules/patterns/unattended-boot-services/README.md (new file, 80 lines)
# Unattended Boot Services Pattern Notes

This file keeps examples and rationale. The normative rules live in `contract.md`.

## Dependency Skeleton

```sh
depend() {
    need localmount
    after some-service
    use logger
}
```

Avoid `need net` for best-effort services.

## Network-Independent SSH

```sh
#!/sbin/openrc-run
description="SSH server"

depend() {
    need localmount
    after bee-sshsetup
    use logger
}

start() {
    check_config || return 1
    ebegin "Starting dropbear"
    /usr/sbin/dropbear ${DROPBEAR_OPTS}
    eend $?
}
```

Place this in `etc/init.d/dropbear` in the overlay to override package defaults that require network.

## Persistent DHCP

Wrong:

```sh
udhcpc -i "$iface" -t 3 -T 5 -n -q
```

Correct:

```sh
udhcpc -i "$iface" -b -t 0 -T 3 -q
```

## Typical Start Order

```text
localmount
  -> sshsetup
  -> dropbear
  -> network
  -> nvidia
  -> audit
```

Use `after` for ordering without turning soft dependencies into hard boot blockers.

## Error Handling Skeleton

```sh
start() {
    ebegin "Running audit"
    /usr/local/bin/audit --output /var/log/audit.json >> /var/log/audit.log 2>&1
    local rc=$?
    if [ $rc -eq 0 ]; then
        einfo "Audit complete"
    else
        ewarn "Audit finished with errors — check /var/log/audit.log"
    fi
    eend 0
}
```
rules/patterns/unattended-boot-services/contract.md (updated)

Version: 1.0

Rules for OpenRC services that run in unattended environments: LiveCDs, kiosks, embedded systems.
No user is present. No TTY prompts. Every failure path must have a silent fallback.
---

## Service Dependencies

Use the minimum dependency set: `need localmount` is almost always required; use `after` for ordering without a hard dependency and `use` for optional soft dependencies. Never add `need net` or `need networking` unless the service is genuinely useless without network and you want it to fail loudly when no cable is connected.

## Network-Independent SSH

Dropbear (and any SSH server) must start without network being available. A common mistake is installing dropbear-openrc, whose default init adds `need net`. Override it with a custom init placed in the overlay at `etc/init.d/dropbear`.

## Persistent DHCP

Do not use blocking DHCP (the `-n` flag exits if no offer arrives). Run the client in background mode so it retries automatically when a cable is connected after boot. The network service itself completes immediately with exit 0; the udhcpc daemons keep running in the background.

## Service Start Order (typical LiveCD)

localmount, then sshsetup (user creation, key injection), then dropbear (SSH, independent of network), then network (DHCP on all interfaces), then hardware init such as nvidia, then the main workload (audit). Services at the same level can start in parallel. Use `after`, not `need`, for ordering without a hard dependency.

## Error Handling in start()

Capture the exit code into a local variable and log the result with `einfo` or `ewarn`. Always finish with `eend 0`: a failed workload is not a reason to block the boot runlevel. The exception: services whose failure makes SSH impossible (e.g. key setup) may `return 1`.

---

See `README.md` for sample init scripts and ordering sketches.

## Rules

- Never block boot. A service failure must not stop the rest of the runlevel.
- Never prompt. Do not use `read`, pause logic, or any interactive fallback.
- Every `start()` must end with `eend 0` unless failure makes the environment fundamentally unusable, such as breaking SSH setup.
- Write service diagnostics to `/var/log/` so SSH inspection is possible after boot. TTY output is secondary.
- Network is always best-effort. Test for it, don't depend on it.
- Missing tools (e.g. ipmitool, smartctl), absent network, or proprietary driver load failures (e.g. NVIDIA) must degrade gracefully: log and continue.
- Use the minimum dependency set. Prefer `after` and `use`; do not add `need net`, `need networking`, or `need network-online` unless the service is truly useless without network and failure should be loud.
- SSH services must start without requiring network availability.
- DHCP must be non-blocking and persistent. Run the client in background retry mode rather than failing the boot sequence when no lease is immediately available.
- External commands must be timeout-bounded (e.g. `timeout 30 smartctl ...`) so a bad device or tool cannot hang boot indefinitely.