Compare commits

...

10 Commits

Author SHA1 Message Date
Mikhail Chusavitin
5a69e0bba8 Add local-first recovery contract 2026-03-07 23:16:57 +03:00
Mikhail Chusavitin
d2e11b8bdd Strengthen backup and secret handling contracts 2026-03-07 22:03:49 +03:00
Mikhail Chusavitin
f55bd84668 Require application-owned backups for migrations 2026-03-07 21:56:51 +03:00
Mikhail Chusavitin
61ed2717d0 Extract shared backup management contract 2026-03-07 21:49:07 +03:00
Mikhail Chusavitin
548eb70d55 Add mandatory DB backup rule 2026-03-07 21:39:50 +03:00
Mikhail Chusavitin
72e10622ba Add BOM decomposition contract 2026-03-07 15:08:45 +03:00
Mikhail Chusavitin
0e61346d20 feat: add KISS and task-discipline contracts
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-06 18:07:47 +03:00
a38c35ce2d docs: add three LiveCD/embedded patterns from bee project
- alpine-livecd: mkimage profile rules, apkovl mechanics, workdir caching,
  squashfs compression, NIC firmware, long build survival via screen
- vendor-installer-verification: checksum-before-download, cache validation,
  version URL verification before writing build scripts
- unattended-boot-services: OpenRC invariants for headless environments,
  network-independent SSH, persistent DHCP, graceful degradation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-05 18:18:22 +03:00
c73ece6c7c feat(app-binary): add host deployment path convention /appdata/<appname>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-05 14:07:54 +03:00
456c1f022c feat(release-signing): add Ed25519 multi-key release signing contract
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-05 10:27:21 +03:00
15 changed files with 1207 additions and 3 deletions

View File

@@ -24,6 +24,7 @@ rules/patterns/ — shared engineering rule contracts
go-background-tasks/ — Task Manager pattern, polling
go-code-style/ — layering, error wrapping, startup sequence
go-project-bible/ — how to write and maintain a project bible
bom-decomposition/ — one BOM row to many component/LOT mappings
import-export/ — CSV Excel-compatible format, streaming export
table-management/ — toolbar, filtering, pagination
modal-workflows/ — state machine, htmx pattern, confirmation

View File

@@ -0,0 +1,142 @@
# Contract: Alpine LiveCD Build
Version: 1.0
## Purpose
Rules for building bootable Alpine Linux ISO images with custom overlays using `mkimage.sh`.
Applies to any project that needs a LiveCD: hardware audit, rescue environments, kiosks.
---
## mkimage Profile
Every project must have a profile file `mkimg.<name>.sh` defining:
```sh
profile_<name>() {
    arch="x86_64"  # REQUIRED — without this mkimage silently skips the profile
    hostname="<hostname>"
    apkovl="genapkovl-<name>.sh"
    image_ext="iso"
    output_format="iso"
    kernel_flavors="lts"
    initfs_cmdline="modules=loop,squashfs,sd-mod,usb-storage quiet"
    initfs_features="ata base cdrom ext4 mmc nvme raid scsi squashfs usb virtio"
    grub_mod="all_video disk part_gpt part_msdos linux normal configfile search search_label efi_gop fat iso9660 cat echo ls test true help gzio"
    apks="alpine-base linux-lts linux-firmware-none ..."
}
```
**`arch` is mandatory.** If missing, mkimage silently builds nothing and exits 0.
---
## apkovl Mechanism
The apkovl is a `.tar.gz` overlay extracted by initramfs at boot, overlaying `/etc`, `/usr`, `/root`.
`genapkovl-<name>.sh` generates the tarball:
- Must be in the **CWD** when mkimage runs — not only in `~/.mkimage/`
- `~/.mkimage/` is searched for mkimg profiles only, not genapkovl scripts
```sh
# Copy both scripts to ~/.mkimage AND to CWD (typically /var/tmp)
cp "genapkovl-<name>.sh" ~/.mkimage/
cp "genapkovl-<name>.sh" /var/tmp/
cd /var/tmp
sh mkimage.sh --workdir /var/tmp/work ...
```
---
## Build Environment
**Always use `/var/tmp`, not `/tmp`:**
```sh
export TMPDIR=/var/tmp
cd /var/tmp
sh mkimage.sh ...
```
`/tmp` on Alpine builder VMs is typically a 1GB tmpfs. Kernel firmware squashfs alone exceeds this.
`/var/tmp` uses actual disk space.
---
## Workdir Caching
mkimage stores each ISO section in a hash-named subdirectory. Preserve expensive sections across builds:
```sh
# Delete everything EXCEPT cached sections:
#   apks_*     (downloaded packages)
#   kernel_*   (modloop squashfs)
#   syslinux_* (syslinux bootloader)
#   grub_*     (grub EFI)
if [ -d /var/tmp/bee-iso-work ]; then
    find /var/tmp/bee-iso-work -maxdepth 1 -mindepth 1 \
        -not -name 'apks_*' \
        -not -name 'kernel_*' \
        -not -name 'syslinux_*' \
        -not -name 'grub_*' \
        -exec rm -rf {} +
fi
```
The apkovl section is always regenerated (contains project-specific config that changes per build).
---
## Squashfs Compression
Default compression is `xz` — slow but small. For RAM-loaded modloops, size rarely matters.
Use `lz4` for faster builds:
```sh
mkdir -p /etc/mkinitfs
grep -q 'MKSQUASHFS_OPTS' /etc/mkinitfs/mkinitfs.conf 2>/dev/null || \
echo 'MKSQUASHFS_OPTS="-comp lz4 -Xhc"' >> /etc/mkinitfs/mkinitfs.conf
```
Apply this before running mkimage; the modloop is rebuilt only when the kernel version changes.
---
## Long Builds
NVIDIA driver downloads, kernel compiles, and package fetches can take 10-30 minutes.
Run in a `screen` session so builds survive SSH disconnects:
```sh
apk add screen
screen -dmS build sh -c "sh build.sh > /var/log/build.log 2>&1"
tail -f /var/log/build.log
```
---
## NIC Firmware
`linux-firmware-none` (default) contains zero firmware files. Real hardware NICs often require firmware.
Include firmware packages matching expected hardware:
```
linux-firmware-intel # Intel NICs (X710, E810, etc.)
linux-firmware-mellanox # Mellanox/NVIDIA ConnectX
linux-firmware-bnx2x # Broadcom NetXtreme
linux-firmware-rtl_nic # Realtek
linux-firmware-other # catch-all
```
---
## Versioning
Pin all versions in a single `VERSIONS` file sourced by all build scripts:
```sh
ALPINE_VERSION=3.21
KERNEL_VERSION=6.12
GO_VERSION=1.23.6
NVIDIA_DRIVER_VERSION=590.48.01
```
Never hardcode versions inside build scripts.

View File

@@ -8,6 +8,46 @@ Version: 1.0
---
## Location on the Host
> This rule applies when the AI deploys the application on its own or runs commands on the build machine (deployment, copying files, starting services).
The application binary is placed in the directory:
```
/appdata/<appname>/
```
where `<appname>` is the application name (lowercase, no spaces).
Example: the application `myservice` → `/appdata/myservice/myservice`.
All files related to a specific application (the binary, auxiliary launch scripts, `docker-compose.yml`) live inside this directory. Config and data follow the rules in the sections below.
### Usage Examples
When deploying, copying files, or starting services, the AI **always defaults** to this path:
```bash
# Copy the binary
scp bin/myservice user@host:/appdata/myservice/myservice
# Copy docker-compose
scp docker-compose.yml user@host:/appdata/myservice/docker-compose.yml
# Run on the host
ssh user@host "cd /appdata/myservice && docker compose up -d"
```
```bash
# Create the directory if it does not exist
ssh user@host "mkdir -p /appdata/myservice"
```
Do not suggest alternative paths (`/opt/`, `/usr/local/bin/`, `~/`); use only `/appdata/<appname>/`.
---
## Binary
The binary is self-contained; all resources are embedded via `//go:embed`:

View File

@@ -0,0 +1,76 @@
# Contract: Backup Management
Version: 1.2
## Purpose
Shared rules for creating, storing, naming, rotating, and restoring backups, regardless of what exactly is being preserved: SQLite, a centralized database, config, user files, or a mixed bundle.
## Backup Capability Must Be Shipped
Backup/restore must be built into the application runtime or into binaries/scripts shipped as part of the application itself. Do not assume the operator already has suitable software installed on their machine.
Rules:
- AI must not rely on random machine-local applications (DB GUI clients, IDE plugins, desktop backup tools, ad-hoc admin utilities) being present on the user's machine.
- Backup helpers must not depend on locally installed database clients such as `mysql`, `mysqldump`, `psql`, `pg_dump`, `sqlite3`, or similar tools being present on the user's machine.
- If the application persists non-ephemeral state and does not already have backup functionality, implement it.
- Preferred delivery is one of: built-in UI action, CLI subcommand, background scheduler, or another application-owned mechanism implemented in the project.
- The backup path must work through application mechanics: application code, bundled libraries, and application-owned configuration.
- Rollout instructions must reference only shipped or implemented backup/restore paths.
## Backup Storage
Backups are operational artifacts, not source artifacts.
Rules:
- Never write backups into the git repository tree.
- Backup files must never be staged or committed to git.
- Every application must have an explicit backup root outside the repository.
- Before creating, rotating, or restoring backups, the application must verify that the backup root resolves outside the git worktree.
- Before creating, rotating, or restoring backups, the application must verify again that the target backup files are not tracked or staged in git.
- Default local-app location: store backups next to the user config, for example `~/.config/<appname>/backups/`.
- Default server/centralized location: store backups in an application-owned path outside the repository, for example `/appdata/<appname>/backups/` or `/var/backups/<appname>/`.
- Keep retention tiers in separate directories: `daily/`, `weekly/`, `monthly/`, `yearly/`.
## Backup Naming and Format
Rules:
- Each snapshot must be a single archive or dump artifact when feasible.
- Backup filenames must include a timestamp and a version marker relevant to restore safety, for example schema version, migration number, app version, or backup format version.
- If multiple artifacts are backed up independently, include the artifact identity in the filename.
- Backups should be archived/compressed by default (`.zip`, `.tar.gz`, `.sql.gz`, `.dump.zst`, or equivalent) unless restore tooling requires a raw dump.
- Include all sidecar files required for a correct restore.
- Include the application config in the backup when it is required for a meaningful restore.
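A minimal sketch of a path builder that satisfies these naming rules, assuming a hypothetical `BackupPath` helper, a retention-tier subdirectory, and a numeric schema version (all illustrative, not mandated by this contract):
```go
// Illustrative only: builds a path such as
//   /appdata/myapp/backups/daily/myapp-2026-03-07T23-16-57-schema42.tar.gz
// Root, tier, and schema version are assumptions for the example.
package backup

import (
	"fmt"
	"path/filepath"
	"time"
)

// BackupPath returns an archive path with a timestamp and a schema-version
// marker, placed under a retention-tier subdirectory of the backup root.
func BackupPath(root, appName, tier string, schemaVersion int, now time.Time) string {
	name := fmt.Sprintf("%s-%s-schema%d.tar.gz",
		appName, now.Format("2006-01-02T15-04-05"), schemaVersion)
	return filepath.Join(root, tier, name)
}
```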
## Retention and Rotation
Use bounded retention. Do not keep an unbounded pile of snapshots.
Default policy:
- Daily: keep 7
- Weekly: keep 4
- Monthly: keep 12
- Yearly: keep 10
Rules:
- Prevent duplicate backups within the same retention period.
- Rotation/pruning must be automatic when the application manages recurring backups.
- Pre-migration or pre-repair safety backups may be kept outside normal rotation until the change is verified.
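A sketch of bounded pruning for one retention tier, assuming snapshots are plain files whose modification time reflects creation time; the keep count comes from the default policy above:
```go
// Illustrative pruning of one retention tier: keep the newest `keep` snapshot
// files in dir and delete the rest.
package backup

import (
	"os"
	"path/filepath"
	"sort"
)

func PruneTier(dir string, keep int) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	type snap struct {
		name string
		mod  int64
	}
	snaps := make([]snap, 0, len(entries))
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		info, err := e.Info()
		if err != nil {
			return err
		}
		snaps = append(snaps, snap{e.Name(), info.ModTime().UnixNano()})
	}
	// Newest first; everything past the keep limit is removed.
	sort.Slice(snaps, func(i, j int) bool { return snaps[i].mod > snaps[j].mod })
	for i := keep; i < len(snaps); i++ {
		if err := os.Remove(filepath.Join(dir, snaps[i].name)); err != nil {
			return err
		}
	}
	return nil
}
```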
## Automated Backup Behavior
For applications that manage recurring local or operator-triggered backups:
Rules:
- On application startup, create a backup immediately if none exists yet for the current period.
- Support scheduled daily backups at a configured local time.
- Before migrations or other risky state-changing maintenance steps, trigger a fresh backup from the application-owned backup mechanism.
- Before migrations or other risky state-changing maintenance steps, double-check that backup output is outside the git tree so it cannot be pushed to a remote by accident.
- If backup location, schedule, or retention is configurable, provide safe defaults and an explicit disable switch.
## Restore Readiness
Rules:
- The operator must know how to restore from the backup before applying risky changes.
- Restore steps must be documented next to the backup workflow.
- A backup that has never been validated for restore is only partially trusted.

View File

@@ -0,0 +1,302 @@
# Contract: BOM Decomposition Mapping
Version: 1.0
## Purpose
Defines the canonical way to represent a BOM row that decomposes one external/vendor item into
multiple internal component or LOT rows.
This is not an alternate-choice mapping.
All mappings in the row apply simultaneously.
Use this contract when:
- one vendor part number expands into multiple LOTs
- one bundle SKU expands into multiple internal components
- one external line item contributes quantities to multiple downstream rows
## Canonical Data Model
One BOM row has one item quantity and zero or more mapping entries:
```json
{
  "sort_order": 10,
  "item_code": "SYS-821GE-TNHR",
  "quantity": 3,
  "description": "Vendor bundle",
  "unit_price": 12000.00,
  "total_price": 36000.00,
  "component_mappings": [
    { "component_ref": "CHASSIS_X13_8GPU", "quantity_per_item": 1 },
    { "component_ref": "PS_3000W_Titanium", "quantity_per_item": 2 },
    { "component_ref": "RAILKIT_X13", "quantity_per_item": 1 }
  ]
}
```
Rules:
- `component_mappings[]` is the only canonical persisted decomposition format.
- Each mapping entry contains:
- `component_ref` — stable identifier of the downstream component/LOT
- `quantity_per_item` — how many units of that component are produced by one BOM row unit
- Derived or UI-only fields may exist at runtime, but they are not the source of truth.
Project-specific names are allowed if the semantics stay identical:
- `item_code` may be `vendor_partnumber`
- `component_ref` may be `lot_name`, `lot_code`, or another stable project identifier
- `component_mappings` may be `lot_mappings`
## Quantity Semantics
The total downstream quantity is always:
```text
downstream_total_qty = row.quantity * mapping.quantity_per_item
```
Example:
- BOM row quantity = `3`
- mapping A quantity per item = `1`
- mapping B quantity per item = `2`
Result:
- component A total = `3`
- component B total = `6`
This multiplication rule is mandatory for estimate/cart/build expansion.
## Persistence Contract
The source of truth is the persisted BOM row JSON payload.
If the project stores BOM rows:
- in a SQL JSON column, the JSON payload is the source of truth
- in a text column containing JSON, that JSON payload is the source of truth
- in an API document later persisted as JSON, the row payload shape must remain unchanged
Example persisted payload:
```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "quantity": 2,
      "description": "Bundle",
      "lot_mappings": [
        { "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
        { "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
      ]
    }
  ]
}
```
Persistence rules:
- the decomposition must be stored inside each BOM row
- all mapping entries for that row must live in one array field
- no secondary storage format may act as a competing source of truth
## API Contract
API read and write payloads must expose the same decomposition shape that is persisted.
Rules:
- `GET` returns BOM rows with `component_mappings[]` or the project-specific equivalent
- `PUT` / `POST` accepts the same shape
- rebuild/apply/cart expansion must read only from the persisted mapping array
- if the mapping array is empty, the row contributes nothing downstream
- row order is defined by `sort_order`
- mapping entry order may be preserved for UX, but business logic must not depend on it
Correct:
```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "quantity": 2,
      "lot_mappings": [
        { "lot_name": "LOT_CPU", "quantity_per_pn": 1 },
        { "lot_name": "LOT_RAIL", "quantity_per_pn": 1 }
      ]
    }
  ]
}
```
Wrong:
```json
{
  "vendor_spec": [
    {
      "sort_order": 10,
      "vendor_partnumber": "ABC-123",
      "primary_lot": "LOT_CPU",
      "secondary_lots": ["LOT_RAIL"]
    }
  ]
}
```
## UI Invariants
The UI may render the mapping list in any layout, but it must preserve the same semantics.
Rules:
- the first visible mapping row is not special; it is only the first entry in the array
- additional rows may be added via `+`, modal, inline insert, or another UI affordance
- every mapping row is equally editable and removable
- `quantity_per_item` is edited per mapping row, not once for the whole row
- blank mapping rows may exist temporarily in draft UI state, but they must not be persisted
- new UI rows should default `quantity_per_item` to `1`
## Normalization and Validation
Two stages are allowed:
- draft UI normalization for convenience
- server-side persistence validation for correctness
Canonical rules before persistence:
- trim `component_ref`
- drop rows with empty `component_ref`
- reject `quantity_per_item <= 0` with a validation error
- merge duplicate `component_ref` values within one BOM row by summing `quantity_per_item`
- preserve first-seen order when merging duplicates
Example input:
```json
[
{ "component_ref": "LOT_A", "quantity_per_item": 1 },
{ "component_ref": " LOT_A ", "quantity_per_item": 2 },
{ "component_ref": "", "quantity_per_item": 5 }
]
```
Normalized result:
```json
[
{ "component_ref": "LOT_A", "quantity_per_item": 3 }
]
```
Why validation instead of silent repair:
- API contracts between applications must fail loudly on invalid quantities
- UI may prefill `1`, but the server must not silently reinterpret `0` or negative values
## Forbidden Patterns
Do not introduce incompatible storage or logic variants such as:
- `primary_lot`, `secondary_lots`, `main_component`, `bundle_lots`
- one field for the component and a separate field for its quantity outside the mapping array
- special-case logic where the first mapping row is "main" and later rows are optional add-ons
- computing downstream rows from temporary UI fields instead of the persisted mapping array
- storing the same decomposition in multiple shapes at once
## Reference Go Types
```go
type BOMItem struct {
	SortOrder         int                `json:"sort_order"`
	ItemCode          string             `json:"item_code"`
	Quantity          int                `json:"quantity"`
	Description       string             `json:"description,omitempty"`
	UnitPrice         *float64           `json:"unit_price,omitempty"`
	TotalPrice        *float64           `json:"total_price,omitempty"`
	ComponentMappings []ComponentMapping `json:"component_mappings,omitempty"`
}

type ComponentMapping struct {
	ComponentRef    string `json:"component_ref"`
	QuantityPerItem int    `json:"quantity_per_item"`
}
```
Project-specific aliases are acceptable if they preserve identical semantics:
```go
type VendorSpecItem struct {
	SortOrder        int                    `json:"sort_order"`
	VendorPartnumber string                 `json:"vendor_partnumber"`
	Quantity         int                    `json:"quantity"`
	Description      string                 `json:"description,omitempty"`
	UnitPrice        *float64               `json:"unit_price,omitempty"`
	TotalPrice       *float64               `json:"total_price,omitempty"`
	LotMappings      []VendorSpecLotMapping `json:"lot_mappings,omitempty"`
}

type VendorSpecLotMapping struct {
	LotName       string `json:"lot_name"`
	QuantityPerPN int    `json:"quantity_per_pn"`
}
```
## Reference Normalization (Go)
```go
func NormalizeComponentMappings(in []ComponentMapping) ([]ComponentMapping, error) {
	if len(in) == 0 {
		return nil, nil
	}
	merged := map[string]int{}
	order := make([]string, 0, len(in))
	for _, m := range in {
		ref := strings.TrimSpace(m.ComponentRef)
		if ref == "" {
			continue
		}
		if m.QuantityPerItem <= 0 {
			return nil, fmt.Errorf("component %q has invalid quantity_per_item %d", ref, m.QuantityPerItem)
		}
		if _, exists := merged[ref]; !exists {
			order = append(order, ref)
		}
		merged[ref] += m.QuantityPerItem
	}
	out := make([]ComponentMapping, 0, len(order))
	for _, ref := range order {
		out = append(out, ComponentMapping{
			ComponentRef:    ref,
			QuantityPerItem: merged[ref],
		})
	}
	if len(out) == 0 {
		return nil, nil
	}
	return out, nil
}
```
## Reference Expansion (Go)
```go
type CartItem struct {
	ComponentRef string
	Quantity     int
}

func ExpandBOMRow(row BOMItem) []CartItem {
	result := make([]CartItem, 0, len(row.ComponentMappings))
	for _, m := range row.ComponentMappings {
		qty := row.Quantity * m.QuantityPerItem
		if qty <= 0 {
			continue
		}
		result = append(result, CartItem{
			ComponentRef: m.ComponentRef,
			Quantity:     qty,
		})
	}
	return result
}
```
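A short usage example of the reference types above (assuming they are in the same package and `fmt` is imported), reproducing the quantity semantics from earlier: row quantity 3, mappings of 1 and 2 per item.
```go
// Example usage of the reference types above.
func ExampleExpandBOMRow() {
	row := BOMItem{
		ItemCode: "SYS-821GE-TNHR",
		Quantity: 3,
		ComponentMappings: []ComponentMapping{
			{ComponentRef: "CHASSIS_X13_8GPU", QuantityPerItem: 1},
			{ComponentRef: "PS_3000W_Titanium", QuantityPerItem: 2},
		},
	}
	for _, item := range ExpandBOMRow(row) {
		fmt.Printf("%s -> %d\n", item.ComponentRef, item.Quantity)
	}
	// Output:
	// CHASSIS_X13_8GPU -> 3
	// PS_3000W_Titanium -> 6
}
```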

View File

@@ -1,6 +1,6 @@
# Contract: Go Code Style and Project Conventions
Version: 1.0
Version: 1.1
## Logging
@@ -64,6 +64,7 @@ Never reverse steps 2 and 5. Never start serving before migrations complete.
- Never hardcode ports, DSNs, or file paths in application code.
- Provide a `config.example.yaml` committed to the repo.
- The actual `config.yaml` is gitignored.
- Secret handling and pre-commit/pre-push leak checks must follow the `secret-management` contract.
## Template / UI Rendering

View File

@@ -1,6 +1,6 @@
# Contract: Database Patterns (Go / MySQL / MariaDB)
Version: 1.0
Version: 1.7
## MySQL Transaction Cursor Safety (CRITICAL)
@@ -104,9 +104,32 @@ items, _ := repo.GetItemsByPricelistIDs(ids) // 1 query with WHERE id IN (...)
// then group in Go
```
## Backup Before Any DB Change
Any operation that changes persisted database state must have a fresh backup taken immediately before execution.
This applies to:
- Go migrations
- Manual SQL runbooks
- Data backfills and repair scripts
- Imports, bulk updates, and bulk deletes
- Admin tools or one-off operator commands
Backup naming, storage, archive format, retention, and restore-readiness must follow the `backup-management` contract.
Rules:
- No schema change or data mutation is allowed on a non-ephemeral database without a current backup.
- "Small" or "safe" changes are not exceptions.
- The operator must know how to restore from that backup before applying the change.
- If a migration or script is intended for production/staging, the rollout instructions must state the backup step explicitly.
- The backup taken before a migration must be triggered by the application's own backup mechanism, not by assuming `mysql`, `mysqldump`, or other DB client tools exist on the user's machine.
- Before a migration starts, double-check that backup output resolves outside the git worktree and is not tracked or staged in git.
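A minimal sketch of wiring this rule into a migration entry point; the `BackupService` interface and function names are illustrative, not part of this contract:
```go
package db

import (
	"context"
	"database/sql"
	"fmt"
	"log"
)

// BackupService is the application-owned backup mechanism (illustrative name).
type BackupService interface {
	CreateSnapshot(ctx context.Context) (path string, err error)
}

// MigrateWithBackup refuses to apply migrations until a fresh, application-owned
// backup exists. It does not shell out to mysqldump or any other host tool.
func MigrateWithBackup(ctx context.Context, dbConn *sql.DB, backups BackupService,
	apply func(context.Context, *sql.DB) error) error {
	path, err := backups.CreateSnapshot(ctx)
	if err != nil {
		return fmt.Errorf("pre-migration backup failed, refusing to migrate: %w", err)
	}
	log.Printf("pre-migration backup written to %s", path)
	return apply(ctx, dbConn)
}
```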
## Migration Policy
- For local-first desktop applications, startup and migration recovery must follow the `local-first-recovery` contract.
- Migrations are numbered sequentially and never modified after merge.
- Trigger, take, and verify a fresh backup through the application-owned backup mechanism before applying migrations to any non-ephemeral database.
- Each migration must be reversible where possible (document rollback in a comment).
- Never rename a column in one migration step — add new, backfill, drop old across separate deploys.
- Auto-apply migrations on startup is acceptable for internal tools; document if used.

View File

@@ -0,0 +1,56 @@
# Contract: Keep It Simple
Version: 1.0
## Principle
Working solutions do not need to be interesting.
Prefer the simplest solution that correctly solves the problem. Complexity must be justified by a real, present requirement — not by anticipation of future needs or desire to use a particular technique.
## Rules
- Choose boring technology. A well-understood, dull solution beats a clever one.
- Do not introduce abstractions, patterns, or frameworks before they are needed by at least two concrete use cases.
- Do not design for hypothetical future requirements. Build for what exists now.
- Prefer sequential, readable code over clever one-liners.
- If you can delete code and the system still works, delete it.
- Extra configurability, generalization, and extensibility are costs, not features. Add them only when explicitly required.
## Anti-patterns
- Adding helpers or utilities for one-time operations.
- Wrapping simple logic in interfaces "for testability" when a direct call works.
- Using a framework or library to solve a problem the standard library already handles.
- Writing error handling, fallbacks, or validation for scenarios that cannot happen.
- Refactoring working code because it "could be cleaner."
## Bulletproof features
A feature must be correct by construction, not by circumstance.
Do not write mechanisms that silently rely on:
- another feature being in a specific state,
- input data having a particular shape that "usually" holds,
- a certain call order or timing,
- a global flag, ambient variable, or external condition being set upstream.
Such mechanisms are thin: they work only when the world cooperates. When any surrounding assumption shifts, they break in ways that are hard to trace. This is the primary source of bugs.
**Design rules:**
- A feature owns its preconditions. If it requires data in a certain state, it must enforce or produce that state itself — not inherit it silently from a caller.
- Never write logic that only works if a sibling feature runs first and succeeds. If coordination is needed, make it explicit (a parameter, a return value, a clear contract).
- Avoid implicit state machines — sequences where operations must happen in the right order with no enforcement. Either enforce the order structurally or eliminate the dependency.
- Prefer thick, unconditional logic over thin conditional chains that assume stable context. A mechanism that always does the right thing is more reliable than one that does the right thing only when conditions are favorable.
A feature is done when it is correct on its own, not when it is correct given that everything else is also correct.
## Checklist before committing
1. Could this be done with fewer lines without losing clarity?
2. Is there an abstraction here with only one caller?
3. Is any of this code handling a case that cannot actually occur?
4. Did I add anything beyond what was asked?
If the answer to any of 1-4 is "yes," simplify before committing.

View File

@@ -0,0 +1,95 @@
# Contract: Local-First Recovery
Version: 1.1
## Purpose
Shared recovery and migration rules for local-first desktop applications that keep local state and may rebuild part of that state from sync, reload, import, or other deterministic upstream sources.
## Core Rule
A migration or startup strategy is not considered sufficient merely because it succeeded once on the current developer database.
Priority order:
- Priority 1: protect user data. Do not do anything that can damage, discard, or silently rewrite non-recoverable user data.
- Priority 2: preserve availability. Do not do anything that unnecessarily prevents the application from starting or operating in a reduced mode.
If user data is safe, prefer degraded startup over startup failure. If minimum useful functionality can be started safely, start it.
Startup and schema-migration behavior must be designed for degraded real-world states, including:
- legacy schema versions
- interrupted migrations
- stale temp tables
- invalid payloads
- duplicates
- `NULL` in required columns
- partially migrated tables
## Required Data Classification
The architecture must explicitly separate:
- disposable cache tables
- protected user data tables
Definitions:
- Disposable cache tables are read-only, sync-derived, imported, or otherwise rebuildable from a trusted source.
- Protected user data tables contain user-authored or otherwise non-rebuildable data.
Do not mix both classes in one table if recovery semantics differ.
## Availability Policy For Disposable Data
For disposable cache tables, availability has priority.
Rules:
- If a table cannot be migrated safely, it may be quarantined, dropped, or recreated empty.
- The application must continue startup after such recovery.
- The application must restore disposable data through the normal sync, reload, import, or rebuild path.
- Recovery must not require manual SQL intervention for routine degraded states.
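A sketch of this availability policy for a single disposable cache table; the table name and `recreate` callback are hypothetical, and protected user-data tables must never pass through this path:
```go
package startup

import (
	"database/sql"
	"fmt"
	"log"
)

// recoverDisposableCache resets a cache-only table that cannot be migrated
// safely. Startup continues and the normal sync/import path repopulates it.
// The table name must come from a fixed internal list of disposable tables.
func recoverDisposableCache(db *sql.DB, table string, recreate func(*sql.DB) error) error {
	if _, err := db.Exec(fmt.Sprintf("DROP TABLE IF EXISTS %s", table)); err != nil {
		return fmt.Errorf("quarantine %s: %w", table, err)
	}
	if err := recreate(db); err != nil {
		return fmt.Errorf("recreate %s: %w", table, err)
	}
	log.Printf("disposable table %s reset; it will be rebuilt by the normal sync path", table)
	return nil
}
```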
## Protection Policy For User Data
For protected user data, destructive reset is forbidden.
Rules:
- Do not drop, truncate, or recreate protected tables as a recovery shortcut.
- Backup-before-change is mandatory and must follow the `backup-management` contract.
- Validate-before-migrate is mandatory.
- Migration logic must use fail-safe semantics: stop before applying a risky destructive step when invariants are broken or input is invalid.
- The application must emit explicit diagnostics that identify the blocked table, migration step, and reason.
## Recovery Logic Requirements
Rules:
- Recovery logic must be deterministic.
- Recovery logic must be idempotent.
- Recovery logic must be retry-safe on every startup.
- Recovery logic must be observable through structured logs.
- Re-running startup after a partial failure must move the system toward a valid state, not deeper into corruption.
## Quality Bar
The application must either:
- self-recover and continue startup
or:
- stop only when continuing would risk loss or corruption of non-recoverable user data
Stopping for disposable cache corruption alone is not acceptable when the data can be rebuilt safely.
If the full feature set cannot be restored safely during startup, the application should start with the minimum safe functionality instead of failing startup, as long as protected user data remains safe.
## Testing Requirements
Degraded and legacy states must be tested explicitly, not only happy-path fresh installs.
Required test coverage includes:
- legacy schema upgrades
- interrupted migration recovery
- partially migrated tables
- duplicate rows where uniqueness is expected
- `NULL` in required columns
- invalid payloads in persisted rows
- disposable-table reset and rebuild flow
- protected-data migration refusal with explicit diagnostics

View File

@@ -0,0 +1,149 @@
# Contract: Release Signing
Version: 1.0
## Purpose
Ed25519 asymmetric signing for Go release binaries.
Guarantees that a binary accepted by a running application was produced by a trusted developer.
Applies to any Go binary that is distributed or supports self-update.
---
## Key Management
Public keys are stored in the centralized keys repository: `git.mchus.pro/mchus/keys`
```
keys/
  developers/
    <name>.pub             ← raw Ed25519 public key, base64-encoded, one line per developer
  scripts/
    keygen.sh              ← generates keypair
    sign-release.sh        ← signs a binary
    verify-signature.sh    ← verifies locally
```
Public keys are safe to commit. Private keys stay on each developer's machine — never committed, never shared.
**Adding a developer:** add their `.pub` file → commit → rebuild affected releases.
**Removing a developer:** delete their `.pub` file → commit → rebuild releases.
Binaries previously signed with their key remain valid (they are already distributed), but that key can no longer sign new releases.
---
## Multi-Key Trust Model
A binary is accepted if its signature verifies against **any** of the embedded trusted public keys.
This mirrors the SSH `authorized_keys` model.
- One developer signs a release with their private key → produces one `.sig` file.
- The binary trusts all active developers — any of them can make a valid release.
- Signature format: raw 64-byte Ed25519 signature (not PEM, not armored).
---
## Embedding Keys at Build Time
Public keys are injected via `-ldflags` at release build time, not hardcoded in the application source.
This allows adding/removing developers without changing application source code.
```go
// internal/updater/trust.go

// trustedKeysRaw is injected at build time via -ldflags.
// Format: base64(key1):base64(key2):...
// Empty string = dev build, updates disabled.
var trustedKeysRaw string

func trustedKeys() ([]ed25519.PublicKey, error) {
	if trustedKeysRaw == "" {
		return nil, fmt.Errorf("dev build: trusted keys not embedded, updates disabled")
	}
	var keys []ed25519.PublicKey
	for _, enc := range strings.Split(trustedKeysRaw, ":") {
		b, err := base64.StdEncoding.DecodeString(strings.TrimSpace(enc))
		if err != nil {
			return nil, fmt.Errorf("invalid trusted key: %w", err)
		}
		if len(b) != ed25519.PublicKeySize {
			return nil, fmt.Errorf("invalid trusted key length: %d bytes", len(b))
		}
		keys = append(keys, ed25519.PublicKey(b))
	}
	return keys, nil
}
```
Release build script injects all current developer keys:
```sh
# scripts/build-release.sh
KEYS=$(paste -sd: /path/to/keys/developers/*.pub)
go build \
    -ldflags "-s -w -X <module>/internal/updater.trustedKeysRaw=${KEYS}" \
    -o dist/<binary>-linux-amd64 \
    ./cmd/<binary>
```
Dev build (no `-ldflags` injection): `trustedKeysRaw` is empty → updates disabled, binary works normally.
---
## Signature Verification (stdlib only, no external tools)
Use `crypto/ed25519` from Go standard library. No third-party dependencies.
```go
// internal/updater/trust.go
func verifySignature(binaryPath, sigPath string) error {
	keys, err := trustedKeys()
	if err != nil {
		return err // dev build or misconfiguration
	}
	data, err := os.ReadFile(binaryPath)
	if err != nil {
		return fmt.Errorf("read binary: %w", err)
	}
	sig, err := os.ReadFile(sigPath)
	if err != nil {
		return fmt.Errorf("read signature: %w", err)
	}
	for _, key := range keys {
		if ed25519.Verify(key, data, sig) {
			return nil // any trusted key accepts → pass
		}
	}
	return fmt.Errorf("signature verification failed: no trusted key matched")
}
```
Rejection behavior: log as WARNING, continue with current binary. Never crash, never block operation.
---
## Release Asset Convention
Every release must attach two files to the Gitea release:
```
<binary>-linux-amd64 ← the binary
<binary>-linux-amd64.sig ← raw 64-byte Ed25519 signature
```
Signing:
```sh
sh keys/scripts/sign-release.sh <developer-name> dist/<binary>-linux-amd64
```
Both files are uploaded to the Gitea release as downloadable assets.
---
## Rules
- Never hardcode public keys as string literals in source code — always use ldflags injection.
- Never commit private keys (`.key` files) anywhere.
- A binary built without ldflags injection must work normally — it just cannot perform verified updates.
- Signature verification failure must be a silent logged warning, not a crash or user-visible error.
- Use `crypto/ed25519` (stdlib) only — no external signing libraries.
- `.sig` file contains raw 64 bytes (not base64, not PEM). Produced by `openssl pkeyutl -sign -rawin`.
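For reference, the same raw 64-byte signature can also be produced with the Go stdlib instead of `openssl`; a sketch, assuming the private key is stored as a raw 64-byte `ed25519.PrivateKey` file (the key file format is an assumption, not part of this contract):
```go
// Illustrative signing helper: usage `go run sign.go <private-key-file> <binary>`.
// The raw 64-byte ed25519.PrivateKey file format is an assumption.
package main

import (
	"crypto/ed25519"
	"log"
	"os"
)

func main() {
	if len(os.Args) != 3 {
		log.Fatalf("usage: %s <private-key-file> <binary>", os.Args[0])
	}
	priv, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatalf("read private key: %v", err)
	}
	if len(priv) != ed25519.PrivateKeySize {
		log.Fatalf("private key must be %d raw bytes, got %d", ed25519.PrivateKeySize, len(priv))
	}
	binary, err := os.ReadFile(os.Args[2])
	if err != nil {
		log.Fatalf("read binary: %v", err)
	}
	sig := ed25519.Sign(ed25519.PrivateKey(priv), binary) // raw 64-byte signature
	if err := os.WriteFile(os.Args[2]+".sig", sig, 0o644); err != nil {
		log.Fatalf("write signature: %v", err)
	}
}
```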

View File

@@ -0,0 +1,80 @@
# Contract: Secret Management
Version: 1.1
## Purpose
Shared rules that prevent secrets from leaking into git, logs, configs, templates, and release artifacts.
## No Secrets In Git
Secrets must never be committed to the repository, even temporarily.
This includes:
- API keys
- access tokens
- passwords
- DSNs with credentials
- private keys
- session secrets
- OAuth client secrets
- `.env` files with real values
- production or staging config files with real credentials
Rules:
- Real secrets must never appear in tracked files, commit history, tags, release assets, examples, fixtures, tests, or docs.
- `.gitignore` is required for runtime config and local secret files, but `.gitignore` alone is not considered sufficient protection.
- Commit only templates and examples with obvious placeholders, for example `CHANGEME`, `example`, or empty strings.
- Never place secrets in screenshots, pasted logs, SQL dumps, backups, or exported archives that could later be committed.
## Where Secrets Live
Rules:
- Store real secrets only in local runtime config, secret stores, environment injection, or deployment-specific configuration outside git.
- Keep committed config files secret-free: `config.example.yaml`, `.env.example`, and similar files must contain placeholders only.
- If a feature requires a new secret, document the config key name and format, not the real value.
## Required Git Checks
Before every commit:
- Verify that files with real secrets are gitignored.
- Inspect staged changes for secrets, not just working tree files.
- Run an automated secret scan against staged content using project tooling or a repository-approved scanner.
- If the scan cannot be run, stop and do not commit until an equivalent staged-content check is performed.
Before every push:
- Scan the commits being pushed for secrets again.
- Refuse the push if any potential secret is detected until it is reviewed and removed.
High-risk patterns that must be checked explicitly:
- PEM blocks (`BEGIN PRIVATE KEY`, `BEGIN OPENSSH PRIVATE KEY`, `BEGIN RSA PRIVATE KEY`)
- tokens in URLs or DSNs
- `password=`, `token=`, `secret=`, `apikey=`, `api_key=`
- cloud credentials
- webhook secrets
- JWT signing keys
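A minimal staged-content scanner sketch covering some of the patterns above; the regex set and exit behavior are assumptions, and a repository-approved scanner remains the preferred tool:
```go
// Illustrative pre-commit helper: scans `git diff --cached` for high-risk
// patterns and exits non-zero so the commit can be blocked by a hook.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
)

var patterns = []*regexp.Regexp{
	regexp.MustCompile(`BEGIN (OPENSSH |RSA )?PRIVATE KEY`),
	regexp.MustCompile(`(?i)(password|token|secret|apikey|api_key)\s*=\s*\S+`),
	regexp.MustCompile(`://[^/\s:]+:[^@\s]+@`), // credentials embedded in URLs/DSNs
}

func main() {
	out, err := exec.Command("git", "diff", "--cached", "-U0").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read staged changes:", err)
		os.Exit(1)
	}
	for _, re := range patterns {
		if m := re.Find(out); m != nil {
			fmt.Fprintf(os.Stderr, "possible secret in staged changes: %s\n", m)
			os.Exit(1)
		}
	}
}
```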
## Scheduled Security Audit
Rules:
- Perform a security audit at least once per week.
- At least once per week, scan the git repository for leaked secrets, including current files, staged changes, commit history, and reachable tags.
- Treat weekly secret scanning as mandatory even if pre-commit and pre-push checks already exist.
- If the weekly audit finds a leaked secret, follow the Incident Response rules immediately.
## Logging and Generated Artifacts
Rules:
- Do not print secrets into terminal output, structured logs, panic messages, or debug dumps.
- Do not embed secrets into generated backups, exports, support bundles, or crash reports unless the artifact is explicitly treated as secret operational data and guaranteed to stay outside git.
- If secrets must appear in an operational artifact, that artifact inherits the same "never in git" rule as backups.
## Incident Response
If a secret is committed or pushed:
- Treat it as compromised immediately.
- Rotate or revoke the secret.
- Remove it from the current tree and from any generated artifacts.
- Remove it from all affected commits and from repository history, not just from the latest revision.
- Inform the user that history cleanup may be required.
- Do not claim safety merely because the repo is private.

View File

@@ -0,0 +1,21 @@
# Contract: Task Discipline
Version: 1.0
## Principle
Finish before switching. A task is not done until it reaches a logical end.
## Rules
- Do not start a new task while the current one is unfinished. Switching mid-task leaves half-done work that is harder to recover than if it had never been started.
- If a new idea or requirement surfaces during work, note it and address it after the current task is complete.
- "Logical end" means: the change works, is committed, and leaves the codebase in a coherent state — not just "the immediate code compiles."
- Do not open new files, refactor adjacent code, or fix unrelated issues while implementing a specific task. Stay focused on the defined scope.
- If the current task is blocked, resolve the blocker or explicitly hand off — do not silently pivot to something else.
## Anti-patterns
- Starting a refactor while in the middle of a bug fix.
- Leaving a feature half-implemented because something more interesting came up.
- Responding to a new requirement by abandoning the current one without documenting what was left unfinished.

View File

@@ -1,6 +1,6 @@
# Contract: Testing Policy
Version: 1.0
Version: 1.1
## Purpose
@@ -17,10 +17,13 @@ Version: 1.0
- **Transformations**: unit conversion, normalization, field mapping
- **Business rules**: calculations, filtering, aggregation, prioritization
- **Edge cases**: empty input, zero values, overflow, missing fields
- **Degraded / legacy states**: legacy schema, interrupted migrations, duplicates, invalid persisted rows, partially migrated tables
- **Regressions**: when a bug is found, a test captures it before the fix
A test is written in the same commit as the functionality. Functionality without a test (where one is required) is an incomplete commit.
For local-first desktop applications, the rules for degraded states and recovery tests are additionally defined by the `local-first-recovery` contract.
---
## When a Test Is Not Needed

View File

@@ -0,0 +1,133 @@
# Contract: Unattended Boot Services (OpenRC)
Version: 1.0
## Purpose
Rules for OpenRC services that run in unattended environments: LiveCDs, kiosks, embedded systems.
No user is present. No TTY prompts. Every failure path must have a silent fallback.
---
## Core Invariants
- **Never block boot.** A service failure must not prevent other services from starting.
- **Never prompt.** No `read`, no `pause`, no interactive input of any kind.
- **Always exit 0.** Use `eend 0` at the end of `start()` regardless of the operation result.
- **Log everything.** Write results to `/var/log/` so SSH inspection is possible after boot.
- **Fail silently, degrade gracefully.** Missing tool → skip that collector. No network → skip network-dependent steps.
---
## Service Dependencies
Use the minimum necessary dependencies:
```sh
depend() {
    need localmount       # almost always needed
    after some-service    # ordering without hard dependency
    use logger            # optional soft dependency
    # DO NOT add: need net, need networking, need network-online
}
```
**Never use `need net` or `need networking`** unless the service is genuinely useless without
network and you want it to fail loudly when no cable is connected.
For services that work with or without network, use `after` instead.
---
## Network-Independent SSH
Dropbear (and any SSH server) must start without network being available.
Common mistake: installing dropbear-openrc which adds `need net` in its default init.
Override with a custom init:
```sh
#!/sbin/openrc-run
description="SSH server"

depend() {
    need localmount
    after bee-sshsetup    # key/user setup, not networking
    use logger
    # NO need net
}

start() {
    check_config || return 1
    ebegin "Starting dropbear"
    /usr/sbin/dropbear ${DROPBEAR_OPTS}
    eend $?
}
```
Place this file in the overlay at `etc/init.d/dropbear` — it overrides the package-installed version.
---
## Persistent DHCP
Do not use blocking DHCP (`-n` flag exits if no offer). Use background mode so the client
retries automatically when a cable is connected after boot:
```sh
# Wrong — exits immediately if no DHCP offer
udhcpc -i "$iface" -t 3 -T 5 -n -q
# Correct — background daemon, retries indefinitely
udhcpc -i "$iface" -b -t 0 -T 3 -q
```
The network service itself should complete immediately (exit 0) — udhcpc daemons run in background.
---
## Service Start Order (typical LiveCD)
```
localmount
└── sshsetup (user creation, key injection — before dropbear)
└── dropbear (SSH — independent of network)
└── network (DHCP on all interfaces — does not block anything)
└── nvidia (or other hardware init — after network in case firmware needs it)
└── audit (main workload — after all hardware ready)
```
Services at the same level can start in parallel. Use `after` not `need` for ordering without hard dependency.
---
## Error Handling in start()
```sh
start() {
    ebegin "Running audit"
    /usr/local/bin/audit --output /var/log/audit.json >> /var/log/audit.log 2>&1
    local rc=$?
    if [ $rc -eq 0 ]; then
        einfo "Audit complete"
    else
        ewarn "Audit finished with errors — check /var/log/audit.log"
    fi
    eend 0  # always 0 — never fail the runlevel
}
```
- Capture exit code into a local variable.
- Log the result with `einfo` or `ewarn`.
- Always `eend 0` — a failed audit is not a reason to block the boot runlevel.
- The exception: services whose failure makes SSH impossible (e.g. key setup) may `return 1`.
---
## Rules
- Every `start()` ends with `eend 0` unless failure makes the entire environment unusable.
- Network is always best-effort. Test for it, don't depend on it.
- Proprietary drivers (NVIDIA, etc.): load failure → log warning → continue without enrichment.
- External tools (ipmitool, smartctl, etc.): not-found → skip that data source → do not abort.
- Timeout all external commands: `timeout 30 smartctl ...` prevents infinite hangs.
- Write all output to `/var/log/` — TTY output is secondary.

View File

@@ -0,0 +1,82 @@
# Contract: Vendor Installer Verification
Version: 1.0
## Purpose
Rules for downloading and verifying proprietary vendor installers (`.run`, `.exe`, `.tar.gz`)
where the vendor publishes a checksum alongside the binary.
Applies to: NVIDIA drivers, vendor CLI tools, firmware packages.
---
## Download Order
Always download the checksum file **before** the installer:
```sh
BASE_URL="https://vendor.example.com/downloads/${VERSION}"
BIN_FILE="/var/cache/vendor-${VERSION}.run"
SHA_FILE="/var/cache/vendor-${VERSION}.run.sha256sum"
# 1. Download checksum first
wget -q -O "$SHA_FILE" "${BASE_URL}/vendor-${VERSION}.run.sha256sum"
# 2. Download installer
wget --show-progress -O "$BIN_FILE" "${BASE_URL}/vendor-${VERSION}.run"
# 3. Verify
cd /var/cache
sha256sum -c "$SHA_FILE" || { echo "ERROR: sha256 mismatch"; rm -f "$BIN_FILE"; exit 1; }
```
Reason: if the download is interrupted, you have the expected checksum to verify against on retry.
---
## Cache with Verification
Never assume a cached file is valid — a previous download may have been interrupted (0-byte file):
```sh
verify_cached() {
    [ -s "$SHA_FILE" ] || return 1   # sha256 file missing or empty
    [ -s "$BIN_FILE" ] || return 1   # binary missing or empty
    cd "$(dirname "$BIN_FILE")"
    sha256sum -c "$SHA_FILE" --status 2>/dev/null
}

if ! verify_cached; then
    rm -f "$BIN_FILE" "$SHA_FILE"
    # ... download and verify
else
    echo "verified from cache"
fi
```
**Never check only for file existence.** Check that the file is non-empty (`-s`) AND passes checksum.
---
## Version Validation
Before writing build scripts, verify the version URL actually exists:
```sh
curl -sIL "https://vendor.example.com/downloads/${VERSION}/installer.run" \
| grep -i 'http/\|content-length'
```
A `404` or `content-length: 0` means the version does not exist on that CDN.
Vendor version numbering may have gaps (e.g. NVIDIA skips minor versions on some CDNs).
---
## Rules
- Download checksum before installer — never after.
- Verify checksum before extracting or executing.
- On mismatch: delete the file, exit with error. Never proceed with a bad installer.
- Cache by `version` + any secondary key (e.g. kernel version for compiled modules).
- Never commit installer files to git — always download at build time.
- Log the expected hash when downloading so failures are diagnosable.