Compare commits

33 Commits

| SHA1 |
|---|
| 4bc7979a70 |
| 1137c6d4db |
| 7e1e2ac18d |
| aea6bf91ab |
| d58d52c5e7 |
| 7a628deb8a |
| 7f6be786a8 |
| a360992a01 |
| 1ea21ece33 |
| 7ae804d2d3 |
| da5414c708 |
| 7a69c1513d |
| f448111e77 |
| a5dafd37d3 |
| 3661e345b1 |
| f915866f83 |
| c34a42aaf5 |
| 7de0f359b6 |
| a8d8d7dfa9 |
| 20ce0124be |
| b0a106415f |
| a054fc7564 |
| 68cd087356 |
| 579ff46a7f |
| 35c5600b36 |
| c599897142 |
| c964d66e64 |
| f0e6bba7e9 |
| 61d7e493bd |
| f930c79b34 |
| a0a57e0969 |
| b3003c4858 |
| e2da8b4253 |
.gitignore (vendored, 7 changes)

@@ -75,7 +75,12 @@ Network Trash Folder
 Temporary Items
 .apdisk

-# Release artifacts (binaries, archives, checksums), but DO track releases/memory/ for changelog
-releases/*
-!releases/README.md
-!releases/memory/
-!releases/memory/**
+# Release artifacts (binaries, archives, checksums), but keep markdown notes tracked
+releases/**/*
+!releases/**/
+!releases/README.md
+!releases/*/RELEASE_NOTES.md
README.md (77 changes)

@@ -1,66 +1,53 @@
 # QuoteForge

-**Corporate server configurator and quotation calculator**
+Local-first desktop web app for server configuration, quotation, and project work.

-Offline-first architecture: user operations go through local SQLite; MariaDB is used only for synchronization.
+Runtime model:
+- user work is stored in local SQLite;
+- MariaDB is used only for setup checks and background sync;
+- HTTP server binds to loopback only.

-Go
-License
-Platform
+## What the app does

----
+- configuration editor with price refresh from synced pricelists;
+- projects with variants and ordered configurations;
+- vendor BOM import and PN -> LOT resolution;
+- revision history with rollback;
+- rotating local backups.

-## Documentation
-
-The full architecture documentation lives in **[bible/](bible/README.md)**:
-
-| File | Topic |
-|------|------|
-| [bible/01-overview.md](bible/01-overview.md) | Product, features, tech stack, repo structure |
-| [bible/02-architecture.md](bible/02-architecture.md) | Local-first, sync, pricing, versioning |
-| [bible/03-database.md](bible/03-database.md) | SQLite and MariaDB schemas, permissions, migrations |
-| [bible/04-api.md](bible/04-api.md) | All API endpoints and web routes |
-| [bible/05-config.md](bible/05-config.md) | Configuration, env vars, installation |
-| [bible/06-backup.md](bible/06-backup.md) | Backups |
-| [bible/07-dev.md](bible/07-dev.md) | Development commands, code style, guardrails |
-
----
-
-## Quick start
+## Run

 ```bash
-# Apply migrations
 go run ./cmd/qfs -migrate

-# Run
 go run ./cmd/qfs
 # or
 make run
 ```

-The app runs at http://localhost:8080 and opens `/setup` to configure the MariaDB connection.
+Useful commands:
+
+```bash
+# build
+go run ./cmd/qfs -migrate
+go test ./...
+go vet ./...
+make build-release
+
+# check
+go build ./cmd/qfs && go vet ./...
+```

----
+On first run the app creates a minimal `config.yaml`, starts on `http://127.0.0.1:8080`, and opens `/setup` if DB credentials were not saved yet.

-## Releases & Changelog
+## Documentation

-Changelog between versions: `releases/memory/v{major}.{minor}.{patch}.md`
+- Shared engineering rules: [bible/README.md](bible/README.md)
+- Project architecture: [bible-local/README.md](bible-local/README.md)
+- Release notes: `releases/<version>/RELEASE_NOTES.md`

----
+`bible-local/` is the source of truth for QuoteForge-specific architecture. If code changes behavior, update the matching file there in the same commit.

-## Support
+## Repository map

-- Email: mike@mchus.pro
-- Internal: @mchus
-
-## License
-
-Company property, internal use only. See [LICENSE](LICENSE).
+```text
+cmd/                 entry points and migration tools
+internal/            application code
+web/                 templates and static assets
+bible/               shared engineering rules
+bible-local/         project architecture and contracts
+releases/            packaged release artifacts and release notes
+config.example.yaml  runtime config reference
+```
bible (submodule, 2 changes)

Submodule bible updated: 5a69e0bba8...52444350c1
@@ -1,130 +1,70 @@
-# 01 — Product Overview
+# 01 - Overview

-## What is QuoteForge
+## Product

-A corporate server configuration and quotation tool.
-Operates in **strict local-first** mode: all user operations go through local SQLite; MariaDB is used only by synchronization and dedicated setup/migration tooling.
+QuoteForge is a local-first tool for server configuration, quotation, and project tracking.

----
+Core user flows:
+- create and edit configurations locally;
+- calculate prices from synced pricelists;
+- group configurations into projects and variants;
+- import vendor workspaces and map vendor PNs to internal LOTs;
+- review revision history and roll back safely.

-## Features
+## Runtime model

-### For Users
-
-- Mobile-first interface — works comfortably on phones and tablets
-- Server configurator — step-by-step component selection
-- Automatic price calculation — based on pricelists from local cache
-- CSV export — ready-to-use specifications for clients
-- Configuration history — versioned snapshots with rollback support
-- Full offline operation — continue working without network, sync later
-- Guarded synchronization — sync is blocked by preflight check if local schema is not ready
+QuoteForge is a single-user thick client.

-### Local Client Security Model
-
-QuoteForge is currently a **single-user thick client** bound to `localhost`.
-
-- The local HTTP/UI layer is not treated as a multi-user security boundary.
-- RBAC is not part of the active product contract for the local client.
-- The authoritative authentication boundary is the remote sync server and its DB credentials captured during setup.
-- If the app is ever exposed beyond `localhost`, auth/RBAC must be reintroduced as an enforced perimeter before release.
+Rules:
+- runtime HTTP binds to loopback only;
+- browser requests are treated as part of the same local user session;
+- MariaDB is not a live dependency for normal CRUD;
+- if non-loopback deployment is ever introduced, auth/RBAC must be added first.

+## Product scope
+
+In scope:
+- configurator and quote calculation;
+- projects, variants, and configuration ordering;
+- local revision history;
+- read-only pricelist browsing from SQLite cache;
+- background sync with MariaDB;
+- rotating local backups.
+
+Out of scope and intentionally removed:
+- admin pricing UI/API;
+- alerts and notification workflows;
+- stock import tooling;
+- cron jobs and importer utilities.

-### Price Freshness Indicators
-
-| Color | Status | Condition |
-|-------|--------|-----------|
-| Green | Fresh | < 30 days, ≥ 3 sources |
-| Yellow | Normal | 30–60 days |
-| Orange | Aging | 60–90 days |
-| Red | Stale | > 90 days or no data |
-
----

-## Tech Stack
+## Tech stack

 | Layer | Stack |
-|-------|-------|
-| Backend | Go 1.22+, Gin, GORM |
-| Frontend | HTML, Tailwind CSS, htmx |
-| Local DB | SQLite (`qfs.db`) |
-| Server DB | MariaDB 11+ (sync transport only for app runtime) |
-| Export | encoding/csv, excelize (XLSX) |
+| --- | --- |
+| Backend | Go, Gin, GORM |
+| Frontend | HTML templates, htmx, Tailwind CSS |
+| Local storage | SQLite |
+| Sync transport | MariaDB |
+| Export | CSV and XLSX generation |

----
-
-## Product Scope
-
-**In scope:**
-- Component configurator and quotation calculation
-- Projects and configurations
-- Read-only pricelist viewing from local cache
-- Sync (pull components/pricelists, push local changes)
-
-**Out of scope (removed intentionally — do not restore):**
-- Admin pricing UI/API
-- Stock import
-- Alerts
-- Cron/importer utilities
-
----

-## Repository Structure
+## Repository map

+```text
+cmd/
+  qfs/                   main HTTP runtime
+  migrate/               server migration tool
+  migrate_ops_projects/  OPS project migration helper
+internal/
+  appstate/    backup and runtime state
+  config/      runtime config parsing
+  handlers/    HTTP handlers
+  localdb/     SQLite models and migrations
+  repository/  repositories
+  services/    business logic and sync
+web/
+  templates/   HTML templates
+  static/      static assets
+bible/         shared engineering rules
+bible-local/   project-specific architecture
+releases/      release artifacts and notes
+```

-```
-quoteforge/
-├── cmd/
-│   ├── qfs/main.go              # HTTP server entry point
-│   ├── migrate/                 # Migration tool
-│   └── migrate_ops_projects/    # OPS project migrator
-├── internal/
-│   ├── appmeta/      # App version metadata
-│   ├── appstate/     # State management, backup
-│   ├── article/      # Article generation
-│   ├── config/       # Config parsing
-│   ├── db/           # DB initialization
-│   ├── handlers/     # HTTP handlers
-│   ├── localdb/      # SQLite layer
-│   ├── middleware/   # Auth, CORS, etc.
-│   ├── models/       # GORM models
-│   ├── repository/   # Repository layer
-│   └── services/     # Business logic
-├── web/
-│   ├── templates/    # HTML templates + partials
-│   └── static/       # CSS, JS, assets
-├── migrations/       # SQL migration files (30+)
-├── bible/            # Architectural documentation (this section)
-├── releases/memory/  # Per-version changelogs
-├── config.example.yaml  # Config template (the only one in repo)
-└── go.mod
-```

---

## Integration with Existing DB

QuoteForge integrates with the existing `RFQ_LOG` database.

Hard boundary:

- normal runtime HTTP handlers, UI flows, pricing, export, BOM resolution, and project/config CRUD must use SQLite only;
- MariaDB access is allowed only inside `internal/services/sync/*` and dedicated setup/migration tools under `cmd/`;
- any new direct MariaDB query in non-sync runtime code is an architectural violation.

**Read-only:**

- `lot` — component catalog
- `qt_lot_metadata` — extended component data
- `qt_categories` — categories
- `qt_pricelists`, `qt_pricelist_items` — pricelists
- `stock_log` — stock quantities consumed during sync enrichment
- `qt_partnumber_books`, `qt_partnumber_book_items` — partnumber book snapshots consumed during sync pull

**Read + Write:**

- `qt_configurations` — configurations
- `qt_projects` — projects

**Sync service tables:**

- `qt_client_schema_state` — applied migrations state and operational client status per device (`username + hostname`).
  Fields written by QuoteForge: `app_version`, `last_sync_at`, `last_sync_status`, `pending_changes_count`, `pending_errors_count`, `configurations_count`, `projects_count`, `estimate_pricelist_version`, `warehouse_pricelist_version`, `competitor_pricelist_version`, `last_sync_error_code`, `last_sync_error_text`, `last_checked_at`, `updated_at`
- `qt_pricelist_sync_status` — pricelist sync status
@@ -1,251 +1,127 @@
-# 02 — Architecture
+# 02 - Architecture

-## Local-First Principle
+## Local-first rule

-**SQLite** is the single source of truth for the user.
-**MariaDB** is a sync server only — it never blocks local operations.
+SQLite is the runtime source of truth.
+MariaDB is sync transport plus setup and migration tooling.

-```
-User
-  │
-  ▼
-SQLite (qfs.db)   ← all CRUD operations go here
-  │
-  │ background sync (every 5 min)
-  ▼
-MariaDB (RFQ_LOG) ← pull/push only
-```
+```text
+browser -> Gin handlers -> SQLite
+                             -> pending_changes
+background sync <------> MariaDB
+```

-**Rules:**
-- All CRUD operations go through SQLite only
-- If MariaDB is unavailable → local work continues without restrictions
-- Changes are queued in `pending_changes` and pushed on next sync
+Rules:
+- user CRUD must continue when MariaDB is offline;
+- runtime handlers and pages must read and write SQLite only;
+- MariaDB access in runtime code is allowed only inside sync and setup flows;
+- no live MariaDB fallback for reads that already exist in local cache.

-## MariaDB Boundary
+## Sync contract

-MariaDB is not part of the runtime read/write path for user features.
+Bidirectional:
+- projects;
+- configurations;
+- `vendor_spec`;
+- pending change metadata.

-Hard rules:
+Pull-only:
+- components;
+- pricelists and pricelist items;
+- partnumber books and partnumber book items.

-- HTTP handlers, web pages, quote calculation, export, vendor BOM resolution, pricelist browsing, project browsing, and configuration CRUD must read/write SQLite only.
-- MariaDB access from the app runtime is allowed only inside the sync subsystem (`internal/services/sync/*`) for explicit pull/push work.
-- Dedicated tooling under `cmd/migrate` and `cmd/migrate_ops_projects` may access MariaDB for operator-run schema/data migration tasks.
-- Setup may test/store connection settings, but after setup the application must treat MariaDB as sync transport only.
-- Any new repository/service/handler that issues MariaDB queries outside sync is a regression and must be rejected in review.
-- Local SQLite migrations are code-defined only (`AutoMigrate` + `runLocalMigrations`); there is no server-driven client migration registry.
-- Read-only local sync caches are disposable. If a local cache table cannot be migrated safely at startup, the client may quarantine/reset that cache and continue booting.
+Readiness guard:
+- every sync push/pull runs a preflight check;
+- blocked sync returns `423 Locked` with a machine-readable reason;
+- local work continues even when sync is blocked;
+- sync metadata updates must preserve project `updated_at`; sync time belongs in `synced_at`, not in the user-facing last-modified timestamp;
+- pricelist pull must persist a new local snapshot atomically: header and items appear together, and `last_pricelist_sync` advances only after item download succeeds;
+- UI sync status must distinguish "last sync failed" from "up to date"; if the app can prove newer server pricelist data exists, the indicator must say the local cache is incomplete.
-Forbidden patterns:
-
-- calling `connMgr.GetDB()` from non-sync runtime business code;
-- constructing MariaDB-backed repositories in handlers for normal user requests;
-- using MariaDB as online fallback for reads when local SQLite already contains the synced dataset;
-- adding UI/API features that depend on live MariaDB availability.
+## Pricing contract

-## Local Client Boundary
+Prices come only from `local_pricelist_items`.

-The running app is a localhost-only thick client.
-
-- Browser/UI requests on the local machine are treated as part of the same trusted user session.
-- Local routes are not modeled as a hardened multi-user API perimeter.
-- Authorization to the central server happens through the saved MariaDB connection configured during setup.
-- Any future deployment that binds beyond `127.0.0.1` must add enforced auth/RBAC before exposure.
+Rules:
+- `local_components` is metadata-only;
+- quote calculation must not read prices from components;
+- latest pricelist selection ignores snapshots without items;
+- auto pricelist mode stays auto and must not be persisted as an explicit resolved ID.

----
-
-## Synchronization
-
-### Data Flow Diagram
+## Pricing tab layout

-```
-                  [ SERVER / MariaDB ]
-              ┌───────────────────────────┐
-              │ qt_projects               │
-              │ qt_configurations         │
-              │ qt_pricelists             │
-              │ qt_pricelist_items        │
-              │ qt_pricelist_sync_status  │
-              └─────────────┬─────────────┘
-                            │
-        pull (projects/configs/pricelists)
-                            │
-       ┌────────────────────┴────────────────────┐
-       │                                         │
-[ CLIENT A / SQLite ]                  [ CLIENT B / SQLite ]
- local_projects                         local_projects
- local_configurations                   local_configurations
- local_pricelists                       local_pricelists
- local_pricelist_items                  local_pricelist_items
- pending_changes                        pending_changes
-       │                                         │
-       └────── push (projects/configs only) ─────┘
-                            │
-                  [ SERVER / MariaDB ]
-```
+The Pricing tab (Ценообразование) has two tables: Buy (Цена покупки) and Sale (Цена продажи).
+
+Column order (both tables):
+
+```text
+PN вендора | Описание | LOT | Кол-во | Estimate | Склад | Конкуренты | Ручная цена
+```

-### Sync Direction by Entity
+Per-LOT row expansion rules:
+- each `lot_mappings` entry in a BOM row becomes its own table row with its own quantity and prices;
+- `baseLot` (resolved LOT without an explicit mapping) is treated as the first sub-row with `quantity_per_pn` from `_getRowLotQtyPerPN`;
+- when one vendor PN expands into N LOT sub-rows, the PN вендора and Описание cells use `rowspan="N"` and appear only on the first sub-row;
+- a visual top border (`border-t border-gray-200`) separates each vendor PN group.

-| Entity | Direction |
-|--------|-----------|
-| Configurations | Client ↔ Server ↔ Other Clients |
-| Projects | Client ↔ Server ↔ Other Clients |
-| Pricelists | Server → Clients only (no push) |
-| Components | Server → Clients only |
-| Partnumber books | Server → Clients only |
+Vendor price attachment:
+- `vendorOrig` and `vendorOrigUnit` (BOM unit/total price) are attached to the first LOT sub-row only;
+- subsequent sub-rows carry empty `data-vendor-orig` so `setPricingCustomPriceFromVendor` counts each vendor PN exactly once.
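The expansion rules above can be sketched as a pure function. The `bomRow`/`subRow` shapes and the base-LOT quantity of 1 are illustrative assumptions; only the rowspan and the attach-vendor-price-once behavior follow the stated rules.

```go
package main

import "fmt"

// bomRow and subRow are illustrative shapes, not the real template data model.
type lotQty struct {
	Lot string
	Qty int
}

type bomRow struct {
	VendorPN   string
	Desc       string
	BaseLot    string   // resolved LOT without an explicit mapping
	Mappings   []lotQty // lot_mappings[] entries
	VendorOrig float64  // BOM total price for this vendor PN
}

type subRow struct {
	VendorPN   string  // empty on continuation rows (covered by rowspan)
	Desc       string
	Lot        string
	Qty        int
	Rowspan    int     // > 0 only on the first sub-row
	VendorOrig float64 // attached to the first sub-row only
}

// expand turns one vendor PN row into N LOT sub-rows, per the rules above.
func expand(r bomRow) []subRow {
	lots := append([]lotQty{{r.BaseLot, 1}}, r.Mappings...) // base LOT first; qty assumed 1 here
	out := make([]subRow, 0, len(lots))
	for i, l := range lots {
		s := subRow{Lot: l.Lot, Qty: l.Qty}
		if i == 0 {
			s.VendorPN, s.Desc = r.VendorPN, r.Desc
			s.Rowspan = len(lots)
			s.VendorOrig = r.VendorOrig // counted once per vendor PN
		}
		out = append(out, s)
	}
	return out
}

func main() {
	rows := expand(bomRow{VendorPN: "PN-1", Desc: "disk", BaseLot: "LOT-A",
		Mappings: []lotQty{{"LOT-B", 2}}, VendorOrig: 100})
	fmt.Println(len(rows), rows[0].Rowspan, rows[1].VendorPN == "")
}
```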
-Local pricelists not present on the server and not referenced by active configurations are deleted automatically on sync.
+Controls terminology:
+- the custom price input is labeled **Ручная цена** (not "Своя цена");
+- the button that fills custom price from BOM totals is labeled **BOM Цена** (not "Проставить цены BOM").

-### Soft Deletes (Archive Pattern)
+CSV export reads PN вендора, Описание, and LOT from the `data-vendor-pn`, `data-desc`, and `data-lot` row attributes to bypass the rowspan cell offset problem.

-Configurations and projects are **never hard-deleted**. Deletion is archive via `is_active = false`.
+## Configuration versioning

-- `DELETE /api/configs/:uuid` → sets `is_active = false` (archived); can be restored via `reactivate`
-- `DELETE /api/projects/:uuid` → archives a project **variant** only (the `variant` field must be non-empty); main projects cannot be deleted via this endpoint
+Configuration revisions are append-only snapshots stored in `local_configuration_versions`.

-## Sync Readiness Guard
+Rules:
+- the editable working configuration is always the implicit head named `main`; the UI must not switch the user to a numbered revision after save;
+- create a new revision when spec, BOM, or pricing content changes;
+- revision history is retrospective: the revisions page shows past snapshots, not the current `main` state;
+- rollback creates a new head revision from an old snapshot;
+- rename, reorder, project move, and similar operational edits do not create a new revision snapshot;
+- revision deduplication includes `items`, `server_count`, `total_price`, `custom_price`, `vendor_spec`, pricelist selectors, `disable_price_refresh`, and `only_in_stock`;
+- BOM updates must use the version-aware save flow, not a direct SQL field update;
+- the current revision pointer must be recoverable if legacy or damaged rows are found locally.
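The deduplication list above can be sketched as a canonical-JSON fingerprint: if the fingerprint of the new state equals the head revision's, no snapshot is created. The struct shape, hashing choice, and helper names are assumptions; only the field list comes from the rules.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// revisionKey is an illustrative canonical form of the fields that take part
// in revision deduplication, per the list above.
type revisionKey struct {
	Items                 json.RawMessage `json:"items"`
	ServerCount           int             `json:"server_count"`
	TotalPrice            float64         `json:"total_price"`
	CustomPrice           float64         `json:"custom_price"`
	VendorSpec            json.RawMessage `json:"vendor_spec"`
	PricelistID           *int            `json:"pricelist_id"`
	WarehousePricelistID  *int            `json:"warehouse_pricelist_id"`
	CompetitorPricelistID *int            `json:"competitor_pricelist_id"`
	DisablePriceRefresh   bool            `json:"disable_price_refresh"`
	OnlyInStock           bool            `json:"only_in_stock"`
}

// fingerprint hashes the canonical JSON of the dedup-relevant fields.
func fingerprint(k revisionKey) string {
	b, _ := json.Marshal(k)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

// needsNewRevision reports whether saving should append a new snapshot.
func needsNewRevision(prev, next revisionKey) bool {
	return fingerprint(prev) != fingerprint(next)
}

func main() {
	fmt.Println(needsNewRevision(revisionKey{ServerCount: 2}, revisionKey{ServerCount: 3}))
}
```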
-Before every push/pull, a preflight check runs:
-
-1. Is the server (MariaDB) reachable?
-2. Is the local client schema initialized and writable?
+## Sync UX

-**If the check fails:**
-- Local CRUD continues without restriction
-- Sync API returns `423 Locked` with `reason_code` and `reason_text`
-- UI shows a red indicator with the block reason
+UI-facing sync status must never block on live MariaDB calls.

----
+Rules:
+- the navbar sync indicator and sync info modal read only local cached state from SQLite/app settings;
+- background/manual sync may talk to MariaDB, but polling endpoints must stay fast even on slow or broken connections;
+- any MariaDB timeout/invalid-connection during sync must invalidate the cached remote handle immediately so the UI stops treating the connection as healthy.

-## Pricing
+## Naming collisions

-### Principle
-
-**Prices come only from `local_pricelist_items`.**
-Components (`local_components`) are metadata-only — they contain no pricing information.
-Stock enrichment for pricelist rows is persisted into `local_pricelist_items` during sync; UI/runtime must not resolve it live from MariaDB.
+UI-driven rename and copy flows use one suffix convention for conflicts.
+
+Rules:
+- configuration and variant names must auto-resolve collisions with `_копия`, then `_копия2`, `_копия3`, and so on;
+- copy checkboxes and copy modals must prefill `_копия`, not ` (копия)`;
+- the literal variant name `main` is reserved and must not be allowed for non-main variants.
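The suffix convention can be sketched as a small helper. The function and the `taken` lookup are hypothetical; only the `_копия`, `_копия2`, ... sequence comes from the rules above.

```go
package main

import (
	"fmt"
	"strconv"
)

// nextCopyName applies the collision convention: first "_копия",
// then "_копия2", "_копия3", ... until the name is free.
// taken stands in for a real uniqueness lookup against local storage.
func nextCopyName(base string, taken map[string]bool) string {
	name := base + "_копия"
	for n := 2; taken[name]; n++ {
		name = base + "_копия" + strconv.Itoa(n)
	}
	return name
}

func main() {
	taken := map[string]bool{"R740_копия": true, "R740_копия2": true}
	fmt.Println(nextCopyName("R740", taken)) // R740_копия3
}
```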
-### Lookup Pattern
+## Configuration types

-```go
-// Look up a price for a line item.
-price, found := s.lookupPriceByPricelistID(pricelistID, lotName)
-if found && price > 0 {
-	// use price
-}
-
-// Inside lookupPriceByPricelistID:
-localPL, err := s.localDB.GetLocalPricelistByServerID(pricelistID)
-price, err := s.localDB.GetLocalPriceForLot(localPL.ID, lotName)
-```
+Configurations have a `config_type` field: `"server"` (default) or `"storage"`.
+
+Rules:
+- `config_type` defaults to `"server"` for all existing and new configurations unless explicitly set;
+- the configurator page is shared for both types; the SW tab is always visible regardless of type;
+- storage configurations use the same vendor_spec + PN→LOT + pricing flow as server configurations;
+- storage component categories map to existing tabs: `ENC`/`DKC`/`CTL` → Base, `HIC` → PCI (storage HIC cards; `HBA`/`NIC` are server-side, do not mix them), `SSD`/`HDD` → Storage (reuse existing server LOTs), `ACC` → Accessories (reuse existing server LOTs), `SW` → SW;
+- `DKC` = controller enclosure (storage model + disk type + slot count + controller count); `CTL` = controller (cache + built-in ports); `ENC` = disk shelf without a controller.

-### Multi-Level Pricelists
+## Vendor BOM contract

-A configuration can reference up to three pricelists simultaneously:
+Vendor BOM is stored in `vendor_spec` on the configuration row.

-| Field | Purpose |
-|-------|---------|
-| `pricelist_id` | Primary (estimate) |
-| `warehouse_pricelist_id` | Warehouse pricing |
-| `competitor_pricelist_id` | Competitor pricing |
-
-Pricelist sources: `estimate` | `warehouse` | `competitor`
-
-### "Auto" Pricelist Selection
-
-The configurator supports explicit and automatic selection per source (`estimate`, `warehouse`, `competitor`):
-
-- **Explicit mode:** a concrete `pricelist_id` is set by the user in settings.
-- **Auto mode:** the client sends no explicit ID for that source; the backend resolves the current latest active pricelist.
-
-`auto` must stay `auto` after price-level refresh and after manual "refresh prices":
-- resolved IDs are runtime-only and must not overwrite the user's mode;
-- switching to explicit selection must clear runtime auto resolution for that source.
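The auto-stays-auto rule can be sketched in-memory. `sourceSelection` and its fields are hypothetical; the point is that only the explicit ID is persisted, while the resolved ID lives in runtime state.

```go
package main

import "fmt"

// sourceSelection models one pricelist source (estimate/warehouse/competitor).
type sourceSelection struct {
	ExplicitID *int // persisted; nil means auto mode
	resolvedID *int // runtime-only cache for auto mode, never persisted
}

// effectiveID returns the pricelist to use: the explicit ID if set,
// otherwise the latest active pricelist resolved at call time.
func (s *sourceSelection) effectiveID(latestActive int) int {
	if s.ExplicitID != nil {
		s.resolvedID = nil // explicit selection clears runtime auto resolution
		return *s.ExplicitID
	}
	s.resolvedID = &latestActive // cached, but never written back to ExplicitID
	return latestActive
}

func main() {
	var s sourceSelection // auto mode
	fmt.Println(s.effectiveID(42), s.ExplicitID == nil) // 42 true: still auto after refresh
}
```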
-### Latest Pricelist Resolution Rules
-
-For both server (`qt_pricelists`) and local cache (`local_pricelists`), "latest by source" is resolved with:
-
-1. only pricelists that have at least one item (`EXISTS ...pricelist_items`);
-2. a deterministic sort: `created_at DESC, id DESC`.
-
-This prevents selecting empty/incomplete snapshots and removes nondeterministic ties.
----
-
-## Configuration Versioning
-
-### Principle
-
-Append-only for **spec+price** changes: immutable snapshots are stored in `local_configuration_versions`.
-
-```
-local_configurations
-└── current_version_id ──► local_configuration_versions (v3)  ← active
-                           local_configuration_versions (v2)
-                           local_configuration_versions (v1)
-```
-
-- `version_no = max + 1` when the configuration **spec+price** changes
-- Old versions are never modified or deleted in normal flow
-- Rollback does **not** rewind history — it creates a **new** version from the snapshot
-- Operational updates (`line_no` reorder, server count, project move, rename) are synced via `pending_changes` but do **not** create a new revision snapshot
-
-### Rollback
-
-```bash
-POST /api/configs/:uuid/rollback
-{
-  "target_version": 3,
-  "note": "optional comment"
-}
-```
-
-Result:
-- A new version `vN` is created with `data` from the target version
-- `change_note = "rollback to v{target_version}"` (+ note if provided)
-- `current_version_id` is switched to the new version
-- The configuration moves to `sync_status = pending`
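The rollback result above can be sketched in-memory. The types here are hypothetical; the real flow persists to SQLite and queues a `pending_changes` event.

```go
package main

import "fmt"

// version and config are illustrative shapes for the rollback flow.
type version struct {
	No   int
	Data string // snapshot JSON in the real model
	Note string
}

type config struct {
	Versions   []version
	CurrentIdx int
	SyncStatus string
}

// rollback creates a NEW head version from the target snapshot;
// it never rewinds or deletes history.
func (c *config) rollback(targetNo int, note string) error {
	var target *version
	for i := range c.Versions {
		if c.Versions[i].No == targetNo {
			target = &c.Versions[i]
		}
	}
	if target == nil {
		return fmt.Errorf("version %d not found", targetNo)
	}
	newVer := version{
		No:   c.Versions[len(c.Versions)-1].No + 1,
		Data: target.Data,
		Note: fmt.Sprintf("rollback to v%d", targetNo),
	}
	if note != "" {
		newVer.Note += ": " + note
	}
	c.Versions = append(c.Versions, newVer)
	c.CurrentIdx = len(c.Versions) - 1
	c.SyncStatus = "pending"
	return nil
}

func main() {
	c := config{
		Versions:   []version{{No: 1, Data: "a"}, {No: 2, Data: "b"}, {No: 3, Data: "c"}},
		CurrentIdx: 2, SyncStatus: "synced",
	}
	_ = c.rollback(1, "")
	fmt.Println(c.Versions[c.CurrentIdx].No, c.Versions[c.CurrentIdx].Data, c.SyncStatus)
}
```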
-### Sync Status Flow
-
-```
-local → pending → synced
-```
-
----
-
-## Project Specification Ordering (`Line`)
-
-- Each project configuration has a persistent `line_no` (`10,20,30...`) in both SQLite and MariaDB.
-- Project list ordering is deterministic: `line_no ASC`, then `created_at DESC`, then `id DESC`.
-- Drag-and-drop reorder in the project UI updates `line_no` for active project configurations.
-- Reorder writes are queued as configuration `update` events in `pending_changes` without creating new configuration versions.
-- Backward compatibility: if the remote MariaDB schema does not yet include `line_no`, sync falls back to create/update without `line_no` instead of failing.
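The deterministic ordering can be sketched with `sort.SliceStable`. The `row` shape is illustrative; the real ordering is an SQL `ORDER BY`.

```go
package main

import (
	"fmt"
	"sort"
)

// row carries the three sort keys used for project list ordering.
type row struct {
	LineNo    int
	CreatedAt int64 // unix seconds, illustrative
	ID        int
}

// orderRows sorts line_no ASC, then created_at DESC, then id DESC.
func orderRows(rows []row) {
	sort.SliceStable(rows, func(i, j int) bool {
		if rows[i].LineNo != rows[j].LineNo {
			return rows[i].LineNo < rows[j].LineNo
		}
		if rows[i].CreatedAt != rows[j].CreatedAt {
			return rows[i].CreatedAt > rows[j].CreatedAt
		}
		return rows[i].ID > rows[j].ID
	})
}

func main() {
	rows := []row{{20, 100, 7}, {10, 100, 1}, {10, 200, 2}, {10, 100, 5}}
	orderRows(rows)
	for _, r := range rows {
		fmt.Println(r.LineNo, r.ID)
	}
}
```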
----
-
-## Sync Payload for Versioning
-
-Events in `pending_changes` for configurations contain:
-
-| Field | Description |
-|-------|-------------|
-| `configuration_uuid` | Identifier |
-| `operation` | `create` / `update` / `rollback` |
-| `current_version_id` | Active version ID |
-| `current_version_no` | Version number |
-| `snapshot` | Current configuration state |
-| `idempotency_key` | For idempotent push |
-| `conflict_policy` | `last_write_wins` |
----
-
-## Background Processes
-
-| Process | Interval | What it does |
-|---------|----------|--------------|
-| Sync worker | 5 min | push pending + pull all |
-| Backup scheduler | configurable (`backup.time`) | creates ZIP archives |
+Rules:
+- PN to LOT resolution uses the active local partnumber book;
+- the canonical persisted mapping is `lot_mappings[]`;
+- QuoteForge does not use legacy BOM tables such as `qt_bom`, `qt_lot_bundles`, or `qt_lot_bundle_items`.
@@ -1,267 +1,405 @@
-# 03 — Database
+# 03 - Database

-## SQLite (local, client-side)
+## SQLite

-File: `qfs.db` in the user-state directory (see [05-config.md](05-config.md)).
+SQLite is the local runtime database.

### Tables

#### Components and Reference Data

| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_components` | Component metadata (NO prices) | `lot_name` (PK), `lot_description`, `category`, `model` |
| `connection_settings` | MariaDB connection settings | key-value store |
| `app_settings` | Application settings | `key` (PK), `value`, `updated_at` |

Read-only cache contract:

- `local_components`, `local_pricelists`, `local_pricelist_items`, `local_partnumber_books`, and `local_partnumber_book_items` are synchronized caches, not user-authored data.
- Startup must prefer application availability over preserving a broken cache schema.
- If one of these tables cannot be migrated safely, the client may quarantine or drop it and recreate it empty; the next sync repopulates it.

#### Pricelists

| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_pricelists` | Pricelist headers | `id`, `server_id` (unique), `source`, `version`, `created_at` |
| `local_pricelist_items` | Pricelist line items ← **sole source of prices** | `id`, `pricelist_id` (FK), `lot_name`, `price`, `lot_category` |

#### Partnumber Books (PN → LOT mapping, pull-only from PriceForge)

| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_partnumber_books` | Version snapshots of PN→LOT mappings | `id`, `server_id` (unique), `version`, `created_at`, `is_active` |
| `local_partnumber_book_items` | Canonical PN catalog rows | `id`, `partnumber`, `lots_json`, `description` |

Active book: `WHERE is_active=1 ORDER BY created_at DESC, id DESC LIMIT 1`
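The active-book selection can be sketched over in-memory rows. The real selection is the SQL above; `book` is an illustrative shape.

```go
package main

import (
	"fmt"
	"sort"
)

// book is an illustrative header row for local_partnumber_books.
type book struct {
	ID        int
	CreatedAt int64 // unix seconds
	IsActive  bool
}

// activeBook mirrors the query above:
// WHERE is_active=1 ORDER BY created_at DESC, id DESC LIMIT 1.
func activeBook(books []book) (book, bool) {
	active := books[:0:0]
	for _, b := range books {
		if b.IsActive {
			active = append(active, b)
		}
	}
	sort.Slice(active, func(i, j int) bool {
		if active[i].CreatedAt != active[j].CreatedAt {
			return active[i].CreatedAt > active[j].CreatedAt
		}
		return active[i].ID > active[j].ID
	})
	if len(active) == 0 {
		return book{}, false
	}
	return active[0], true
}

func main() {
	b, ok := activeBook([]book{{1, 100, true}, {2, 100, true}, {3, 50, false}})
	fmt.Println(b.ID, ok) // 2 true: inactive skipped, id DESC tie-break
}
```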
|
||||
|
||||
#### Configurations and Projects

| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_configurations` | Saved configurations | `id`, `uuid` (unique), `items` (JSON), `vendor_spec` (JSON: PN/qty/description + canonical `lot_mappings[]`), `line_no`, `pricelist_id`, `warehouse_pricelist_id`, `competitor_pricelist_id`, `current_version_id`, `sync_status` |
| `local_configuration_versions` | Immutable snapshots (revisions) | `id`, `configuration_id` (FK), `version_no`, `data` (JSON), `change_note`, `created_at` |
| `local_projects` | Projects | `id`, `uuid` (unique), `name`, `code`, `sync_status` |
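The `vendor_spec` column carries a row-level PN → LOT decomposition. The exact payload shape is not spelled out here, so the field names inside `lot_mappings` below (`lot_name`, `qty`) are assumptions mirroring the `lots_json` shape documented for partnumber book items; the expansion logic is an illustrative sketch.

```python
import json

# Hypothetical vendor_spec payload; lot_mappings field names are assumed.
vendor_spec = json.loads("""
[
  {"partnumber": "P12345", "qty": 2, "description": "CPU kit",
   "lot_mappings": [{"lot_name": "CPU_X", "qty": 1}]},
  {"partnumber": "P67890", "qty": 1, "description": "RAM kit",
   "lot_mappings": [{"lot_name": "RAM_32G", "qty": 8}]}
]
""")

def expand_to_cart(spec):
    """Row-level PN -> LOT decomposition: multiply BOM qty by mapping qty."""
    cart = {}
    for row in spec:
        for m in row["lot_mappings"]:
            cart[m["lot_name"]] = cart.get(m["lot_name"], 0) + row["qty"] * m["qty"]
    return cart

print(expand_to_cart(vendor_spec))  # {'CPU_X': 2, 'RAM_32G': 8}
```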
#### Sync

Main tables:

| Table | Purpose |
|-------|---------|
| `pending_changes` | Queue of changes to push to MariaDB |
| `local_schema_migrations` | Applied migrations (idempotency guard) |

All local tables at a glance:

| Table | Purpose |
|-------|---------|
| `local_components` | synced component metadata |
| `local_pricelists` | local pricelist headers |
| `local_pricelist_items` | local pricelist rows, the only runtime price source |
| `local_projects` | user projects |
| `local_configurations` | user configurations |
| `local_configuration_versions` | immutable revision snapshots |
| `local_partnumber_books` | partnumber book headers |
| `local_partnumber_book_items` | PN → LOT catalog payload |
| `pending_changes` | sync queue |
| `connection_settings` | encrypted MariaDB connection settings |
| `app_settings` | local app state |
| `local_schema_migrations` | applied local migration markers |

---

Rules:

- cache tables may be rebuilt if local migration recovery requires it;
- user-authored tables must not be dropped as a recovery shortcut;
- `local_pricelist_items` is the only valid runtime source of prices;
- configuration `items` and `vendor_spec` are stored as JSON payloads inside configuration rows.
### Key SQLite Indexes

- `local_pricelist_items(pricelist_id)`
- `local_pricelists(server_id)` (unique)
- `local_pricelists(source, created_at)`: used for "latest by source" queries; the runtime query also applies a deterministic tie-break by `id DESC`
- `local_configurations(pricelist_id)`, `local_configurations(warehouse_pricelist_id)`, `local_configurations(competitor_pricelist_id)`
- `local_configurations(project_uuid, line_no)`: project ordering (Line column)
- `local_configurations(uuid)` (unique)

## MariaDB

MariaDB is the central sync database (`RFQ_LOG`). Final schema as of 2026-04-15.
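The `(source, created_at)` index exists to serve the "latest by source" lookup. A minimal sketch of that query, with the documented `id DESC` tie-break, on illustrative data:

```python
import sqlite3

# Sketch of the "latest pricelist by source" lookup the index supports.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE local_pricelists (
        id INTEGER PRIMARY KEY, server_id INTEGER UNIQUE,
        source TEXT, version TEXT, created_at TEXT
    )""")
conn.execute(
    "CREATE INDEX idx_pl_source_created ON local_pricelists(source, created_at)"
)
conn.executemany(
    "INSERT INTO local_pricelists VALUES (?, ?, ?, ?, ?)",
    [
        (1, 11, "estimate", "v1", "2026-04-01"),
        (2, 12, "estimate", "v2", "2026-04-01"),  # same created_at: id DESC wins
        (3, 13, "warehouse", "v9", "2026-03-01"),
    ],
)
latest = conn.execute(
    "SELECT version FROM local_pricelists WHERE source = ? "
    "ORDER BY created_at DESC, id DESC LIMIT 1",
    ("estimate",),
).fetchone()
print(latest[0])  # v2
```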
### QuoteForge tables (qt_*)

Runtime read:

- `qt_categories` — pricelist categories
- `qt_lot_metadata` — component metadata, price settings
- `qt_pricelists` — pricelist headers (source: estimate / warehouse / competitor)
- `qt_pricelist_items` — pricelist rows
- `qt_partnumber_books` — partnumber book headers
- `qt_partnumber_book_items` — PN→LOT catalog payload

Runtime read/write:

- `qt_projects` — projects
- `qt_configurations` — configurations
- `qt_client_schema_state` — per-client sync status and version tracking
- `qt_pricelist_sync_status` — pricelist sync timestamps per user

Insert-only tracking:

- `qt_vendor_partnumber_seen` — vendor partnumbers encountered during sync

Server-side only (not queried by client runtime):

- `qt_component_usage_stats` — aggregated component popularity stats (written by server jobs)
- `qt_pricing_alerts` — price anomaly alerts (models exist in Go; feature disabled in runtime)
- `qt_schema_migrations` — server migration history (applied via `go run ./cmd/qfs -migrate`)
- `qt_scheduler_runs` — server background job tracking (no Go code references it in this repo)

### Competitor subsystem (server-side only, not used by QuoteForge Go code)

- `qt_competitors` — competitor registry
- `partnumber_log_competitors` — competitor price log (FK → qt_competitors)

These tables exist in the schema and are maintained by another tool or workflow.
QuoteForge references competitor pricelists only via `qt_pricelists` (source='competitor').
### Legacy RFQ tables (pre-QuoteForge, no Go code references)

- `lot` — original component registry (data preserved; superseded by `qt_lot_metadata`)
- `lot_log` — original supplier price log
- `supplier` — supplier registry (FK target for lot_log and machine_log)
- `machine` — device model registry
- `machine_log` — device price/quote log
- `parts_log` — supplier partnumber log used by server-side import/pricing workflows, not by QuoteForge runtime

These tables are retained for historical data. QuoteForge does not read or write them at runtime.

Rules:

- QuoteForge runtime must not depend on any legacy RFQ tables;
- QuoteForge sync reads prices and categories from `qt_pricelists` / `qt_pricelist_items` only;
- QuoteForge does not enrich local pricelist rows from `parts_log` or any other raw supplier log table;
- normal UI requests must not query MariaDB tables directly;
- `qt_client_local_migrations` exists in the 2026-04-15 schema dump, but runtime sync does not depend on it.
## MariaDB Table Structures

Full column reference as of 2026-03-21 (`RFQ_LOG` final schema).

### qt_categories

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| code | varchar(20) UNIQUE NOT NULL | |
| name | varchar(100) NOT NULL | |
| name_ru | varchar(100) | |
| display_order | bigint DEFAULT 0 | |
| is_required | tinyint(1) DEFAULT 0 | |
### qt_client_schema_state

PK: (username, hostname)

| Column | Type | Notes |
|--------|------|-------|
| username | varchar(100) | |
| hostname | varchar(255) DEFAULT '' | |
| last_applied_migration_id | varchar(128) | |
| app_version | varchar(64) | |
| last_sync_at | datetime | |
| last_sync_status | varchar(32) | |
| pending_changes_count | int DEFAULT 0 | |
| pending_errors_count | int DEFAULT 0 | |
| configurations_count | int DEFAULT 0 | |
| projects_count | int DEFAULT 0 | |
| estimate_pricelist_version | varchar(128) | |
| warehouse_pricelist_version | varchar(128) | |
| competitor_pricelist_version | varchar(128) | |
| last_sync_error_code | varchar(128) | |
| last_sync_error_text | text | |
| last_checked_at | datetime NOT NULL | |
| updated_at | datetime NOT NULL | |
### qt_component_usage_stats

PK: lot_name

| Column | Type | Notes |
|--------|------|-------|
| lot_name | varchar(255) | |
| quotes_total | bigint DEFAULT 0 | |
| quotes_last30d | bigint DEFAULT 0 | |
| quotes_last7d | bigint DEFAULT 0 | |
| total_quantity | bigint DEFAULT 0 | |
| total_revenue | decimal(14,2) DEFAULT 0 | |
| trend_direction | enum('up','stable','down') DEFAULT 'stable' | |
| trend_percent | decimal(5,2) DEFAULT 0 | |
| last_used_at | datetime(3) | |
### qt_competitors

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| name | varchar(255) NOT NULL | |
| code | varchar(100) UNIQUE NOT NULL | |
| delivery_basis | varchar(50) DEFAULT 'DDP' | |
| currency | varchar(10) DEFAULT 'USD' | |
| column_mapping | longtext JSON | |
| is_active | tinyint(1) DEFAULT 1 | |
| created_at | timestamp | |
| updated_at | timestamp ON UPDATE | |
| price_uplift | decimal(8,4) DEFAULT 1.3 | effective_price = price / price_uplift |
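The `price_uplift` rule is a straight division, worked through here with illustrative numbers and `Decimal` to keep the money math exact (1.3 is the documented default uplift):

```python
from decimal import Decimal

# Worked example of the qt_competitors rule:
#   effective_price = price / price_uplift
price = Decimal("130.00")
price_uplift = Decimal("1.3")  # documented default
effective_price = (price / price_uplift).quantize(Decimal("0.01"))
print(effective_price)  # 100.00
```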
### qt_configurations

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| uuid | varchar(36) UNIQUE NOT NULL | |
| user_id | bigint UNSIGNED | |
| owner_username | varchar(100) NOT NULL | |
| app_version | varchar(64) | |
| project_uuid | char(36) | FK → qt_projects.uuid ON DELETE SET NULL |
| name | varchar(200) NOT NULL | |
| items | longtext JSON NOT NULL | component list |
| total_price | decimal(12,2) | |
| notes | text | |
| is_template | tinyint(1) DEFAULT 0 | |
| created_at | datetime(3) | |
| custom_price | decimal(12,2) | |
| server_count | bigint DEFAULT 1 | |
| server_model | varchar(100) | |
| support_code | varchar(20) | |
| article | varchar(80) | |
| pricelist_id | bigint UNSIGNED | FK → qt_pricelists.id |
| warehouse_pricelist_id | bigint UNSIGNED | FK → qt_pricelists.id |
| competitor_pricelist_id | bigint UNSIGNED | FK → qt_pricelists.id |
| disable_price_refresh | tinyint(1) DEFAULT 0 | |
| only_in_stock | tinyint(1) DEFAULT 0 | |
| line_no | int | position within project |
| price_updated_at | timestamp | |
| vendor_spec | longtext JSON | |
### qt_lot_metadata

PK: lot_name

| Column | Type | Notes |
|--------|------|-------|
| lot_name | varchar(255) | |
| category_id | bigint UNSIGNED | FK → qt_categories.id |
| vendor | varchar(50) | |
| model | varchar(100) | |
| specs | longtext JSON | |
| current_price | decimal(12,2) | cached computed price |
| price_method | enum('manual','median','average','weighted_median') DEFAULT 'median' | |
| price_period_days | bigint DEFAULT 90 | |
| price_updated_at | datetime(3) | |
| request_count | bigint DEFAULT 0 | |
| last_request_date | date | |
| popularity_score | decimal(10,4) DEFAULT 0 | |
| price_coefficient | decimal(5,2) DEFAULT 0 | markup % |
| manual_price | decimal(12,2) | |
| meta_prices | varchar(1000) | raw price samples JSON |
| meta_method | varchar(20) | method used for last compute |
| meta_period_days | bigint DEFAULT 90 | |
| is_hidden | tinyint(1) DEFAULT 0 | |
### qt_partnumber_books

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| version | varchar(30) UNIQUE NOT NULL | |
| created_at | timestamp | |
| created_by | varchar(100) | |
| is_active | tinyint(1) DEFAULT 0 | only one active at a time |
| partnumbers_json | longtext DEFAULT '[]' | flat list of partnumbers |

### qt_partnumber_book_items

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| partnumber | varchar(255) UNIQUE NOT NULL | |
| lots_json | longtext NOT NULL | JSON array of `{lot_name, qty}` objects |
| description | varchar(10000) | |
### qt_pricelists

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| source | varchar(20) DEFAULT 'estimate' | 'estimate' / 'warehouse' / 'competitor' |
| version | varchar(20) NOT NULL | UNIQUE with source |
| created_at | datetime(3) | |
| created_by | varchar(100) | |
| is_active | tinyint(1) DEFAULT 1 | |
| usage_count | bigint DEFAULT 0 | |
| expires_at | datetime(3) | |
| notification | varchar(500) | shown to clients on sync |

### qt_pricelist_items

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| pricelist_id | bigint UNSIGNED NOT NULL | FK → qt_pricelists.id |
| lot_name | varchar(255) NOT NULL | INDEX with pricelist_id |
| lot_category | varchar(50) | |
| price | decimal(12,2) NOT NULL | |
| price_method | varchar(20) | |
| price_period_days | bigint DEFAULT 90 | |
| price_coefficient | decimal(5,2) DEFAULT 0 | |
| manual_price | decimal(12,2) | |
| meta_prices | varchar(1000) | |
### qt_pricelist_sync_status

PK: username

| Column | Type | Notes |
|--------|------|-------|
| username | varchar(100) | |
| last_sync_at | datetime NOT NULL | |
| updated_at | datetime NOT NULL | |
| app_version | varchar(64) | |

### qt_pricing_alerts

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| lot_name | varchar(255) NOT NULL | |
| alert_type | enum('high_demand_stale_price','price_spike','price_drop','no_recent_quotes','trending_no_price') | |
| severity | enum('low','medium','high','critical') DEFAULT 'medium' | |
| message | text NOT NULL | |
| details | longtext JSON | |
| status | enum('new','acknowledged','resolved','ignored') DEFAULT 'new' | |
| created_at | datetime(3) | |
### qt_projects

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| uuid | char(36) UNIQUE NOT NULL | |
| owner_username | varchar(100) NOT NULL | |
| code | varchar(100) NOT NULL | UNIQUE with variant |
| variant | varchar(100) DEFAULT '' | UNIQUE with code |
| name | varchar(200) | |
| tracker_url | varchar(500) | |
| is_active | tinyint(1) DEFAULT 1 | |
| is_system | tinyint(1) DEFAULT 0 | |
| created_at | timestamp | |
| updated_at | timestamp ON UPDATE | |

### qt_schema_migrations

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| filename | varchar(255) UNIQUE NOT NULL | |
| applied_at | datetime(3) | |
### qt_scheduler_runs

PK: job_name

| Column | Type | Notes |
|--------|------|-------|
| job_name | varchar(100) | |
| last_started_at | datetime | |
| last_finished_at | datetime | |
| last_status | varchar(20) DEFAULT 'idle' | |
| last_error | text | |
| updated_at | timestamp ON UPDATE | |

### qt_vendor_partnumber_seen

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| source_type | varchar(32) NOT NULL | |
| vendor | varchar(255) DEFAULT '' | |
| partnumber | varchar(255) UNIQUE NOT NULL | |
| description | varchar(10000) | |
| last_seen_at | datetime(3) NOT NULL | |
| is_ignored | tinyint(1) DEFAULT 0 | |
| is_pattern | tinyint(1) DEFAULT 0 | |
| ignored_at | datetime(3) | |
| ignored_by | varchar(100) | |
| created_at | datetime(3) | |
| updated_at | datetime(3) | |
### stock_ignore_rules

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| target | varchar(20) NOT NULL | UNIQUE with match_type+pattern |
| match_type | varchar(20) NOT NULL | |
| pattern | varchar(500) NOT NULL | |
| created_at | timestamp | |

### stock_log

| Column | Type | Notes |
|--------|------|-------|
| stock_log_id | bigint UNSIGNED PK AUTO_INCREMENT | |
| partnumber | varchar(255) NOT NULL | INDEX with date |
| supplier | varchar(255) | |
| date | date NOT NULL | |
| price | decimal(12,2) NOT NULL | |
| quality | varchar(255) | |
| comments | text | |
| vendor | varchar(255) | INDEX |
| qty | decimal(14,3) | |
### partnumber_log_competitors

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| competitor_id | bigint UNSIGNED NOT NULL | FK → qt_competitors.id |
| partnumber | varchar(255) NOT NULL | |
| description | varchar(500) | |
| vendor | varchar(255) | |
| price | decimal(12,2) NOT NULL | |
| price_loccur | decimal(12,2) | local currency price |
| currency | varchar(10) | |
| qty | decimal(12,4) DEFAULT 1 | |
| date | date NOT NULL | |
| created_at | timestamp | |
### Legacy tables (lot / lot_log / machine / machine_log / supplier)

Retained for historical data only. Not queried by QuoteForge.

**lot**: lot_name (PK, char 255), lot_category, lot_description
**lot_log**: lot_log_id AUTO_INCREMENT, lot (FK→lot), supplier (FK→supplier), date, price double, quality, comments
**supplier**: supplier_name (PK, char 255), supplier_comment
**machine**: machine_name (PK, char 255), machine_description
**machine_log**: machine_log_id AUTO_INCREMENT, date, supplier (FK→supplier), country, opty, type, machine (FK→machine), customer_requirement, variant, price_gpl, price_estimate, qty, quality, carepack, lead_time_weeks, prepayment_percent, price_got, Comment
## MariaDB User Permissions

The application user needs read-only access to reference tables and read/write access to runtime tables. The exact `GRANT` statements are listed under "Grant Permissions to Existing User" and "Create a New User" below.
---

### `items` JSON Structure in Configurations

```json
{
  "items": [
    {
      "lot_name": "CPU_AMD_9654",
      "quantity": 2,
      "unit_price": 123456.78,
      "section": "Processors"
    }
  ]
}
```

Prices are stored inside the `items` JSON field and refreshed from the pricelist on configuration refresh.

---
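A configuration total follows directly from this payload. The sketch below is illustrative (the app maintains `total_price` itself); it only shows the per-line math, and the second line item is invented for the example.

```python
import json

# Sketch: configuration total from the items JSON payload
# (quantity * unit_price per line, summed).
config = json.loads("""
{
  "items": [
    {"lot_name": "CPU_AMD_9654", "quantity": 2, "unit_price": 123456.78,
     "section": "Processors"},
    {"lot_name": "RAM_64G", "quantity": 8, "unit_price": 1000.00,
     "section": "Memory"}
  ]
}
""")
total = sum(i["quantity"] * i["unit_price"] for i in config["items"])
print(round(total, 2))  # 254913.56
```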
## MariaDB (server-side, sync-only)

Database: `RFQ_LOG`

### Tables and Permissions

| Table | Purpose | Permissions |
|-------|---------|-------------|
| `lot` | Component catalog | SELECT |
| `qt_lot_metadata` | Extended component data | SELECT |
| `qt_categories` | Component categories | SELECT |
| `qt_pricelists` | Pricelists | SELECT |
| `qt_pricelist_items` | Pricelist line items | SELECT |
| `stock_log` | Latest stock qty by partnumber (pricelist enrichment during sync only) | SELECT |
| `qt_configurations` | Saved configurations (includes `line_no`) | SELECT, INSERT, UPDATE |
| `qt_projects` | Projects | SELECT, INSERT, UPDATE |
| `qt_client_schema_state` | Applied migrations state + client operational status per `username + hostname` | SELECT, INSERT, UPDATE |
| `qt_pricelist_sync_status` | Pricelist sync status | SELECT, INSERT, UPDATE |
| `qt_partnumber_books` | Partnumber book headers with snapshot membership in `partnumbers_json` (written by PriceForge) | SELECT |
| `qt_partnumber_book_items` | Canonical PN catalog with `lots_json` composition (written by PriceForge) | SELECT |
| `qt_vendor_partnumber_seen` | Vendor PN tracking for unresolved/ignored BOM rows (`is_ignored`) | INSERT only for new `partnumber`; existing rows must not be modified |
Legacy server tables not used by QuoteForge runtime anymore:

- `qt_bom`
- `qt_lot_bundles`
- `qt_lot_bundle_items`

QuoteForge canonical BOM storage is:

- `qt_configurations.vendor_spec`
- row-level PN -> multiple LOT decomposition in `vendor_spec[].lot_mappings[]`

Partnumber book server read contract:

1. Read active or target book from `qt_partnumber_books`.
2. Parse `partnumbers_json`.
3. Load payloads from `qt_partnumber_book_items WHERE partnumber IN (...)`.
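The three-step read contract can be sketched against an in-memory stand-in for the two MariaDB tables (simplified schemas, illustrative data):

```python
import json
import sqlite3

# Stand-ins for qt_partnumber_books / qt_partnumber_book_items.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE qt_partnumber_books (id INTEGER PRIMARY KEY, "
             "is_active INTEGER, partnumbers_json TEXT)")
conn.execute("CREATE TABLE qt_partnumber_book_items (partnumber TEXT PRIMARY KEY, "
             "lots_json TEXT, description TEXT)")
conn.execute("INSERT INTO qt_partnumber_books VALUES (1, 1, ?)",
             (json.dumps(["P12345", "P67890"]),))
conn.executemany("INSERT INTO qt_partnumber_book_items VALUES (?, ?, ?)", [
    ("P12345", json.dumps([{"lot_name": "CPU_X", "qty": 2}]), "CPU kit"),
    ("P67890", json.dumps([{"lot_name": "RAM_32G", "qty": 8}]), "RAM kit"),
])

# 1. Read the active book.  2. Parse membership.  3. Load payloads.
pns = json.loads(conn.execute(
    "SELECT partnumbers_json FROM qt_partnumber_books WHERE is_active = 1"
).fetchone()[0])
placeholders = ",".join("?" * len(pns))
items = conn.execute(
    f"SELECT partnumber, lots_json FROM qt_partnumber_book_items "
    f"WHERE partnumber IN ({placeholders})", pns
).fetchall()
print(sorted(p for p, _ in items))  # ['P12345', 'P67890']
```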
Pricelist stock enrichment contract:

1. Sync pulls base pricelist rows from `qt_pricelist_items`.
2. Sync reads latest stock quantities from `stock_log`.
3. Sync resolves `partnumber -> lot` through the local mirror of `qt_partnumber_book_items` (`local_partnumber_book_items.lots_json`).
4. Sync stores enriched `available_qty` and `partnumbers` into `local_pricelist_items`.

Runtime rule:

- pricelist UI and quote logic read only `local_pricelist_items`;
- runtime code must not query `stock_log`, `qt_pricelist_items`, or `qt_partnumber_book_items` directly outside sync.

`qt_partnumber_book_items` no longer contains `book_id` or `lot_name`.
It stores one row per `partnumber` with:

- `partnumber`
- `lots_json` as `[{"lot_name":"CPU_X","qty":2}, ...]`
- `description`
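Steps 2–4 of the enrichment contract can be sketched with simplified in-memory stand-ins for the tables involved; the data structures and values below are illustrative, not the real sync code.

```python
# Sketch: map stock_log partnumbers to lots via the local book mirror,
# then accumulate available_qty and partnumbers per lot.
stock_rows = [("P12345", 4), ("P99999", 7)]  # (partnumber, qty) from stock_log
book_items = {  # stand-in for local_partnumber_book_items.lots_json
    "P12345": [{"lot_name": "CPU_X", "qty": 1}],
}

enriched = {"CPU_X": {"available_qty": 0, "partnumbers": []}}
for pn, qty in stock_rows:
    for mapping in book_items.get(pn, []):  # unresolved PNs are skipped
        lot = enriched[mapping["lot_name"]]
        lot["available_qty"] += qty
        lot["partnumbers"].append(pn)

print(enriched["CPU_X"])  # {'available_qty': 4, 'partnumbers': ['P12345']}
```

Note that `P99999` contributes nothing: a partnumber with no book mapping is exactly the unresolved case tracked via `qt_vendor_partnumber_seen`.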
`qt_client_schema_state` current contract:

- identity key: `username + hostname`
- client/runtime state: `app_version`, `last_checked_at`, `updated_at`
- operational state: `last_sync_at`, `last_sync_status`
- queue health: `pending_changes_count`, `pending_errors_count`
- local dataset size: `configurations_count`, `projects_count`
- price context: `estimate_pricelist_version`, `warehouse_pricelist_version`, `competitor_pricelist_version`
- last known sync problem: `last_sync_error_code`, `last_sync_error_text`

`last_sync_error_*` source priority:

1. blocked readiness state from `local_sync_guard_state`
2. latest non-empty `pending_changes.last_error`
3. `NULL` when no known sync problem exists
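The source priority above is a simple short-circuit. A minimal sketch; the function and argument names are hypothetical, only the ordering mirrors the contract:

```python
from typing import Optional

def resolve_sync_error(guard_blocked_reason: Optional[str],
                       latest_pending_error: Optional[str]) -> Optional[str]:
    if guard_blocked_reason:      # 1. blocked readiness state wins
        return guard_blocked_reason
    if latest_pending_error:      # 2. newest pending_changes.last_error
        return latest_pending_error
    return None                   # 3. no known sync problem

print(resolve_sync_error(None, "push failed: timeout"))  # push failed: timeout
```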
### Grant Permissions to Existing User

```sql
-- Read-only: reference and pricing data
GRANT SELECT ON RFQ_LOG.lot TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.stock_log TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.stock_ignore_rules TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_books TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_book_items TO '<DB_USER>'@'%';

-- Read/write: runtime sync and user data
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO '<DB_USER>'@'%';
GRANT INSERT, UPDATE ON RFQ_LOG.qt_vendor_partnumber_seen TO '<DB_USER>'@'%';

FLUSH PRIVILEGES;
```
### Create a New User

```sql
CREATE USER IF NOT EXISTS 'quote_user'@'%' IDENTIFIED BY '<DB_PASSWORD>';

GRANT SELECT ON RFQ_LOG.lot TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.stock_log TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_books TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_book_items TO 'quote_user'@'%';
GRANT INSERT, UPDATE ON RFQ_LOG.qt_vendor_partnumber_seen TO 'quote_user'@'%';

FLUSH PRIVILEGES;
SHOW GRANTS FOR 'quote_user'@'%';
```

**Note:** If pricelists sync but stock enrichment is empty, verify `SELECT` on `qt_pricelist_items`, `qt_partnumber_books`, `qt_partnumber_book_items`, and `stock_log`.

**Note:** If you see `Access denied for user ...@'<ip>'`, check for conflicting user entries (`'user'@'localhost'` vs `'user'@'%'`).
---

Rules:

- `qt_client_schema_state` requires INSERT + UPDATE for sync status tracking (uses `ON DUPLICATE KEY UPDATE`);
- `qt_vendor_partnumber_seen` requires INSERT + UPDATE (vendor PN discovery during sync);
- no DELETE is needed on sync/tracking tables — rows are never removed by the client;
- `lot` SELECT is required for the connection validation probe in `/setup`;
- the setup page shows `can_write: true` only when `qt_client_schema_state` INSERT succeeds.
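The `qt_client_schema_state` upsert uses MariaDB's `ON DUPLICATE KEY UPDATE`. The runnable sketch below substitutes SQLite's equivalent `ON CONFLICT` clause to demonstrate the same insert-or-update behaviour on the `(username, hostname)` key; the schema is trimmed to one payload column for brevity.

```python
import sqlite3

# SQLite stand-in for the MariaDB ON DUPLICATE KEY UPDATE upsert.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE qt_client_schema_state (
        username TEXT, hostname TEXT, last_sync_status TEXT,
        PRIMARY KEY (username, hostname)
    )""")
upsert = """
    INSERT INTO qt_client_schema_state (username, hostname, last_sync_status)
    VALUES (?, ?, ?)
    ON CONFLICT(username, hostname) DO UPDATE SET
        last_sync_status = excluded.last_sync_status
"""
conn.execute(upsert, ("alice", "host-1", "ok"))
conn.execute(upsert, ("alice", "host-1", "error"))  # same key: row is updated
status = conn.execute(
    "SELECT last_sync_status FROM qt_client_schema_state"
).fetchone()[0]
print(status)  # error
```

This is why the client needs INSERT and UPDATE but never DELETE on the table: the upsert only ever adds or overwrites its own row.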
## Migrations

### SQLite Migrations (local): two levels, run on every startup

SQLite:

- schema creation and additive changes go through GORM `AutoMigrate`;
- data fixes, index repair, and one-off rewrites go through `runLocalMigrations`;
- local migration state is tracked in `local_schema_migrations`.

**1. GORM AutoMigrate** (`internal/localdb/localdb.go`) is the first and primary level.
The list of Go models is passed to `db.AutoMigrate(...)`. GORM creates missing tables and adds new columns; it **never drops** columns or tables.
To add a new table or column, add the model or field and include the model in the AutoMigrate call.

**2. `runLocalMigrations`** (`internal/localdb/migrations.go`) is the second level, for operations AutoMigrate cannot perform: data backfills, table rebuilds, index creation.
Each migration function runs exactly once; idempotency is enforced by recording its `id` in `local_schema_migrations`.

Local SQLite partnumber book cache contract:

- `local_partnumber_books.partnumbers_json` stores PN membership for a pulled book.
- `local_partnumber_book_items` is a deduplicated local catalog by `partnumber`.
- `local_partnumber_book_items.lots_json` mirrors the server `lots_json` payload.
- SQLite migration `2026_03_07_local_partnumber_book_catalog` rebuilds old `book_id + lot_name` rows into the new local cache shape.

QuoteForge does not use centralized server-driven SQLite migrations.
All local SQLite schema/data migrations live in the client codebase.
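The run-once guarantee of the second level can be sketched as a tiny migration runner. This is an illustrative stand-in for `runLocalMigrations`, not the actual Go implementation; the migration body is a placeholder.

```python
import sqlite3

# Sketch of the runLocalMigrations idempotency scheme: each migration id
# is recorded in local_schema_migrations and skipped on later startups.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE local_schema_migrations (id TEXT PRIMARY KEY)")

runs = []

def migrate_example(db):
    runs.append("ran")  # placeholder for a backfill / table rebuild

MIGRATIONS = [("2026_03_07_local_partnumber_book_catalog", migrate_example)]

def run_local_migrations(db):
    for mig_id, fn in MIGRATIONS:
        done = db.execute(
            "SELECT 1 FROM local_schema_migrations WHERE id = ?", (mig_id,)
        ).fetchone()
        if done:
            continue  # already applied on a previous startup
        fn(db)
        db.execute("INSERT INTO local_schema_migrations (id) VALUES (?)",
                   (mig_id,))
        db.commit()

run_local_migrations(conn)  # first startup: migration runs
run_local_migrations(conn)  # second startup: skipped
print(len(runs))  # 1
```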
### MariaDB Migrations (server-side)

- Stored in `migrations/` (SQL files)
- Applied via `go run ./cmd/qfs -migrate`
- `min_app_version` — minimum app version required for the migration
---

## DB Debugging

```bash
# Inspect schema
sqlite3 ~/.local/state/quoteforge/qfs.db ".schema local_components"
sqlite3 ~/.local/state/quoteforge/qfs.db ".schema local_configurations"

# Check pricelist item count
sqlite3 ~/.local/state/quoteforge/qfs.db "SELECT COUNT(*) FROM local_pricelist_items"

# Check pending sync queue
sqlite3 ~/.local/state/quoteforge/qfs.db "SELECT COUNT(*) FROM pending_changes"
```
# 04 - API

## Public web routes

| Route | Purpose |
| --- | --- |
| `/` | configurator |
| `/configs` | configuration list |
| `/configs/:uuid/revisions` | revision history page |
| `/projects` | project list |
| `/projects/:uuid` | project detail |
| `/pricelists` | pricelist list |
| `/pricelists/:id` | pricelist detail |
| `/partnumber-books` | partnumber book page |
| `/setup` | DB setup page |
## Setup and health

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/health` | process health |
| `GET` | `/setup` | setup page |
| `POST` | `/setup` | save tested DB settings |
| `POST` | `/setup/test` | test DB connection |
| `GET` | `/setup/status` | setup status |
| `GET` | `/api/db-status` | current DB/sync status |
| `GET` | `/api/current-user` | local user identity |
| `GET` | `/api/ping` | lightweight API ping |
`POST /api/restart` exists only in `debug` mode.

## Reference data

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/api/components` | list component metadata |
| `GET` | `/api/components/:lot_name` | one component |
| `GET` | `/api/categories` | list categories |
| `GET` | `/api/pricelists` | list local pricelists |
| `GET` | `/api/pricelists/latest` | latest pricelist by source |
| `GET` | `/api/pricelists/:id` | pricelist header |
| `GET` | `/api/pricelists/:id/items` | pricelist rows |
| `GET` | `/api/pricelists/:id/lots` | lot names in a pricelist |
| `GET` | `/api/partnumber-books` | local partnumber books |
| `GET` | `/api/partnumber-books/:id` | book items by `server_id` |
## Quote and export

| Method | Path | Purpose |
| --- | --- | --- |
| `POST` | `/api/quote/validate` | validate config items |
| `POST` | `/api/quote/calculate` | calculate quote totals (prices from the pricelist) |
| `POST` | `/api/quote/price-levels` | resolve estimate/warehouse/competitor prices |
| `POST` | `/api/export/csv` | export a single configuration |
| `GET` | `/api/configs/:uuid/export` | export a stored configuration |
| `GET` | `/api/projects/:uuid/export` | legacy project BOM export |
| `POST` | `/api/projects/:uuid/export` | pricing-tab project export |

`GET /api/pricelists?active_only=true` returns only pricelists that have synced items (`item_count > 0`).

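For orientation, a `POST /api/quote/calculate` request body might look like the sketch below. `pricelist_id`, `items`, and `lot_name` appear elsewhere in these docs; `quantity` and the exact nesting are assumptions, not a documented contract.

```json
{
  "pricelist_id": 12,
  "items": [
    { "lot_name": "CPU-XEON-4314", "quantity": 2 },
    { "lot_name": "MEM-32G-RDIMM", "quantity": 8 }
  ]
}
```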
## Configurations

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/api/configs` | list configurations |
| `POST` | `/api/configs/import` | import configurations from the server |
| `POST` | `/api/configs` | create configuration |
| `POST` | `/api/configs/preview-article` | preview the generated article |
| `GET` | `/api/configs/:uuid` | get configuration |
| `PUT` | `/api/configs/:uuid` | update configuration |
| `DELETE` | `/api/configs/:uuid` | archive configuration |
| `POST` | `/api/configs/:uuid/reactivate` | reactivate an archived configuration |
| `PATCH` | `/api/configs/:uuid/rename` | rename configuration |
| `POST` | `/api/configs/:uuid/clone` | clone configuration |
| `POST` | `/api/configs/:uuid/refresh-prices` | refresh prices from the pricelist |
| `PATCH` | `/api/configs/:uuid/project` | move configuration to another project |
| `GET` | `/api/configs/:uuid/versions` | list revisions |
| `GET` | `/api/configs/:uuid/versions/:version` | get one revision |
| `POST` | `/api/configs/:uuid/rollback` | roll back by creating a new head revision |
| `PATCH` | `/api/configs/:uuid/server-count` | update server count |
| `GET` | `/api/configs/:uuid/vendor-spec` | read the stored vendor BOM |
| `PUT` | `/api/configs/:uuid/vendor-spec` | replace the vendor BOM |
| `POST` | `/api/configs/:uuid/vendor-spec/resolve` | resolve PNs to LOTs |
| `POST` | `/api/configs/:uuid/vendor-spec/apply` | apply the BOM to the cart |

## Projects

The `line` field in configuration payloads is backed by the persistent `line_no` column in the DB.

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/api/projects` | paginated project list |
| `GET` | `/api/projects/all` | lightweight list for dropdowns |
| `POST` | `/api/projects` | create project |
| `GET` | `/api/projects/:uuid` | get project |
| `PUT` | `/api/projects/:uuid` | update project |
| `POST` | `/api/projects/:uuid/archive` | archive project |
| `POST` | `/api/projects/:uuid/reactivate` | reactivate project |
| `DELETE` | `/api/projects/:uuid` | delete a project variant only (soft-delete via `is_active=false`; fails if the project has no `variant` set, so main projects cannot be deleted this way) |
| `GET` | `/api/projects/:uuid/configs` | list project configurations |
| `PATCH` | `/api/projects/:uuid/configs/reorder` | persist line order (`ordered_uuids`) |
| `POST` | `/api/projects/:uuid/configs` | create a configuration inside the project |
| `POST` | `/api/projects/:uuid/configs/:config_uuid/clone` | clone a config into the project |
| `POST` | `/api/projects/:uuid/vendor-import` | import a CFXML workspace into the project |

Vendor import contract:

- the multipart field name is `file` (a vendor configurator export in `CFXML` format);
- the file limit is 1 GiB;
- oversized payloads are rejected before XML parsing.

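The `GET /api/projects/:uuid/configs` listing order (`line ASC`, then `created_at DESC`, then `id DESC`) can be sketched as a comparator. This is an illustrative sketch: the struct and field names below are assumptions, not the app's real model.

```go
package main

import (
	"fmt"
	"sort"
)

// projectConfig mirrors only the fields relevant to ordering.
type projectConfig struct {
	Line      int
	CreatedAt int64 // unix seconds
	ID        int64
}

// sortProjectConfigs applies the documented ordering:
// line ASC, then created_at DESC, then id DESC.
func sortProjectConfigs(cfgs []projectConfig) {
	sort.SliceStable(cfgs, func(i, j int) bool {
		a, b := cfgs[i], cfgs[j]
		if a.Line != b.Line {
			return a.Line < b.Line
		}
		if a.CreatedAt != b.CreatedAt {
			return a.CreatedAt > b.CreatedAt
		}
		return a.ID > b.ID
	})
}

func main() {
	cfgs := []projectConfig{
		{Line: 2, CreatedAt: 100, ID: 1},
		{Line: 1, CreatedAt: 100, ID: 2},
		{Line: 1, CreatedAt: 200, ID: 3},
	}
	sortProjectConfigs(cfgs)
	fmt.Println(cfgs[0].ID, cfgs[1].ID, cfgs[2].ID)
}
```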
`GET /api/projects/:uuid/configs` ordering: `line ASC`, then `created_at DESC`, then `id DESC`.

## Sync
| Method | Path | Purpose | Flow |
| --- | --- | --- | --- |
| `GET` | `/api/sync/status` | overall sync status | read-only |
| `GET` | `/api/sync/readiness` | preflight status (ready/blocked/unknown) | read-only |
| `GET` | `/api/sync/info` | data for the sync modal | read-only |
| `GET` | `/api/sync/users-status` | remote user status | read-only |
| `GET` | `/api/sync/pending/count` | pending queue count | read-only |
| `GET` | `/api/sync/pending` | pending queue rows | read-only |
| `POST` | `/api/sync/components` | pull components | MariaDB → SQLite |
| `POST` | `/api/sync/pricelists` | pull pricelists | MariaDB → SQLite |
| `POST` | `/api/sync/partnumber-books` | pull partnumber books | MariaDB → SQLite |
| `POST` | `/api/sync/partnumber-seen` | report unresolved or ignored vendor PNs | QuoteForge → MariaDB |
| `POST` | `/api/sync/push` | push pending changes | SQLite → MariaDB |
| `POST` | `/api/sync/all` | full sync: push, pull, and import | bidirectional |
| `POST` | `/api/sync/repair` | repair broken rows in `pending_changes` | SQLite |

**If sync is blocked by the readiness guard:** all `POST` sync endpoints return `423 Locked` with `reason_code` and `reason_text`.
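A blocked sync call then carries a body of this shape. The field names come from the guard description above; the example values are invented for illustration.

```json
{
  "reason_code": "remote_unreachable",
  "reason_text": "MariaDB is not reachable; sync is blocked"
}
```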
### Vendor Spec (BOM)

Notes:

- `GET`/`PUT /api/configs/:uuid/vendor-spec` exchange normalized BOM rows (`vendor_spec`), not the raw pasted Excel layout.
- Each BOM row stores its canonical LOT mapping list as shown in the BOM UI: `lot_mappings[]`, where each mapping contains `lot_name` and `quantity_per_pn`.
- `POST /api/configs/:uuid/vendor-spec/apply` rebuilds cart items from the explicit BOM mappings, covering all LOTs in `lot_mappings[]`.
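One normalized BOM row might look like the fragment below. `lot_mappings`, `lot_name`, and `quantity_per_pn` come from the contract above; `pn` and `quantity` are assumed field names used only for illustration.

```json
{
  "pn": "P12345-B21",
  "quantity": 2,
  "lot_mappings": [
    { "lot_name": "MEM-32G-RDIMM", "quantity_per_pn": 1 }
  ]
}
```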
See [09-vendor-spec.md](09-vendor-spec.md) for the `vendor_spec` JSON schema, the partnumber-book snapshot pull logic, and the BOM UI mapping contract.
### Export

`POST /api/projects/:uuid/export` produces the pricing-tab CSV with selectable columns (`include_lot`, `include_bom`, `include_estimate`, `include_stock`, `include_competitor`).

**Export filename format:** `YYYY-MM-DD (ProjectCode) ConfigName Article.csv` (uses `project.Code`, not `project.Name`).
---

## Web Routes

| Route | Page |
|-------|------|
| `/configs` | Configuration list |
| `/configurator` | Configurator |
| `/configs/:uuid/revisions` | Configuration revision history |
| `/projects` | Project list |
| `/projects/:uuid` | Project details |
| `/pricelists` | Pricelist list |
| `/pricelists/:id` | Pricelist details |
| `/partnumber-books` | Partnumber books (active book summary + snapshot history) |
| `/setup` | Connection settings |

---

## Rollback API (details)

```http
POST /api/configs/:uuid/rollback
Content-Type: application/json

{
  "target_version": 3,
  "note": "optional comment"
}
```

Response: the updated configuration with the new head version.

---
# 05 — Configuration and Environment
## Runtime files
### SQLite database (`qfs.db`)

| Artifact | Default location |
| --- | --- |
| `qfs.db` | OS-specific user state directory |
| `config.yaml` | same state directory as `qfs.db` |
| `local_encryption.key` | same state directory as `qfs.db` |
| `backups/` | next to `qfs.db` unless overridden |

| OS | Default path |
|----|-------------|
| macOS | `~/Library/Application Support/QuoteForge/qfs.db` |
| Linux | `$XDG_STATE_HOME/quoteforge/qfs.db` or `~/.local/state/quoteforge/qfs.db` |
| Windows | `%LOCALAPPDATA%\QuoteForge\qfs.db` |

The runtime state directory can be overridden with `QFS_STATE_DIR`.
Direct paths can be overridden with `QFS_DB_PATH` and `QFS_CONFIG_PATH`, or with the `-localdb <path>` and `-config <path>` flags.
## Runtime config shape

### config.yaml

By default, `config.yaml` lives in the same user-state directory as `qfs.db`.
If the file does not exist, it is created automatically.
If the format is outdated, it is migrated automatically to the minimal runtime shape.

Override: `-config <path>` or `QFS_CONFIG_PATH`.

**Important:** `config.yaml` is a runtime user file — it is **not stored in the repository**.
`config.example.yaml` is the only config template in the repo.

### Local encryption key

Saved MariaDB credentials in SQLite are encrypted with:

1. `QUOTEFORGE_ENCRYPTION_KEY` if explicitly provided, otherwise
2. an application-managed random key file stored at `<state dir>/local_encryption.key`.

Rules:

- The key file is created automatically with mode `0600`.
- The key file is not committed and is not included in normal backups.
- Restoring `qfs.db` on another machine requires re-entering DB credentials unless the key file is migrated separately.
---

Runtime keeps `config.yaml` intentionally small:
```yaml
server:
  host: "127.0.0.1"   # must stay on loopback
  port: 8080
  mode: "release"     # release | debug
  read_timeout: 30s
  write_timeout: 30s

backup:
  time: "00:00"       # HH:MM in local time

logging:
  level: "info"       # debug | info | warn | error
  format: "json"      # json | text
  output: "stdout"    # stdout | stderr | /path/to/file
```

Rules:

- QuoteForge creates this file automatically if it does not exist;
- startup rewrites legacy config files into this minimal runtime shape;
- startup normalizes any `server.host` value to `127.0.0.1` before saving the runtime config;
- `server.host` must stay on loopback.

Saved MariaDB credentials do not live in `config.yaml`.
They are stored in SQLite and encrypted with `local_encryption.key`, unless `QUOTEFORGE_ENCRYPTION_KEY` overrides the key material.

---

## Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `QFS_STATE_DIR` | State directory (if `QFS_DB_PATH` is not set) | OS-specific user state dir |
| `QFS_DB_PATH` | Full path to the SQLite DB | OS-specific user state dir |
| `QFS_CONFIG_PATH` | Full path to `config.yaml` | OS-specific user state dir |
| `QFS_BACKUP_DIR` | Root directory for rotating backups | `<db dir>/backups` |
| `QFS_BACKUP_DISABLE` | Disable automatic backups (accepts `1`, `true`, or `yes`) | — |
| `QUOTEFORGE_ENCRYPTION_KEY` | Explicit override for the local credential encryption key | app-managed key file |
| `QF_SERVER_PORT` | HTTP server port | 8080 |

---
## CLI Flags

| Flag | Description |
|------|-------------|
| `-config <path>` | path to `config.yaml` |
| `-localdb <path>` | path to the SQLite DB |
| `-reset-localdb` | reset the local DB (destructive!) |
| `-migrate` | apply pending migrations and exit |
| `-version` | print version and exit |
---

## Installation and First Run

### Requirements

- Go 1.22 or higher
- MariaDB 11.x (or MySQL 8.x)
- ~50 MB of disk space

### Steps

```bash
# 1. Clone the repository
git clone <repo-url>
cd quoteforge

# 2. Apply migrations
go run ./cmd/qfs -migrate

# 3. Start
go run ./cmd/qfs
# or
make run
```

The application is then available at http://localhost:8080.
On first run, `/setup` opens for configuring the MariaDB connection.

### OPS Project Migrator

Migrates quotes whose names start with `OPS-xxxx` (where each `x` is a digit) into a project named `OPS-xxxx`.

```bash
# Preview first (always)
go run ./cmd/migrate_ops_projects

# Apply
go run ./cmd/migrate_ops_projects -apply

# Apply without interactive confirmation
go run ./cmd/migrate_ops_projects -apply -yes
```
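The name match can be sketched with a regular expression. The exact pattern is an assumption based on the description above (`OPS-` followed by four digits at the start of the quote name); the real migrator may match differently.

```go
package main

import (
	"fmt"
	"regexp"
)

// opsPrefix matches a leading "OPS-xxxx" project name, x = digit.
var opsPrefix = regexp.MustCompile(`^OPS-\d{4}`)

// opsProject returns the target project name for a quote, if any.
func opsProject(quoteName string) (string, bool) {
	m := opsPrefix.FindString(quoteName)
	return m, m != ""
}

func main() {
	fmt.Println(opsProject("OPS-1234 storage refresh"))
	fmt.Println(opsProject("misc quote"))
}
```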
---

## Docker

```bash
docker build -t quoteforge .
docker-compose up -d
```

## First run

1. runtime ensures `config.yaml` exists;
2. runtime opens the local SQLite database;
3. if no stored MariaDB credentials exist, `/setup` is served;
4. after setup, runtime works locally and sync uses the saved DB settings in the background.
# 06 — Backup
## Overview

QuoteForge creates automatic rotating local ZIP backups of local data.

**What is included in each archive:**

- a consistent SQLite snapshot saved as `qfs.db`
- `config.yaml` when present

The backup intentionally does not include `local_encryption.key`.

## Location and naming

Default root: `<db dir>/backups`

Subdirectories: `daily/`, `weekly/`, `monthly/`, `yearly/`

Archive name format: `qfs-backp-YYYY-MM-DD.zip`
## Retention

| Period | Keep |
|--------|------|
| Daily | 7 archives |
| Weekly | 4 archives |
| Monthly | 12 archives |
| Yearly | 10 archives |
---

## Configuration

```yaml
backup:
  time: "00:00" # trigger time in local time (HH:MM)
```

**Environment variables:**

- `QFS_BACKUP_DIR` — backup root directory (default: `<db dir>/backups`)
- `QFS_BACKUP_DISABLE` — disable backups (`1`/`true`/`yes`)

---
## Behavior

- at startup, QuoteForge immediately creates a backup if the current period has none yet;
- a daily scheduler creates the next backup at the configured `backup.time`;
- duplicate snapshots inside the same period are prevented by a `.period.json` marker file in each period directory;
- excess old archives are pruned automatically.
---

## Safety rules

- the backup root must resolve outside any git worktree;
- if `qfs.db` sits inside a repository checkout, default backups are rejected until `QFS_BACKUP_DIR` points outside the repo;
- the SQLite snapshot must be created from a consistent database copy, not by copying live WAL files directly;
- restore to another machine requires re-entering DB credentials unless the encryption key is migrated separately.

## Implementation

Module: `internal/appstate/backup.go`
Main function:

```go
func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error)
```

Scheduler (in `main.go`):

```go
func startBackupScheduler(ctx context.Context, cfg *config.Config, dbPath, configPath string)
```

### Config struct

```go
type BackupConfig struct {
	Time string `yaml:"time"` // default: "00:00"
}
```
---

## Implementation Notes

- `backup.time` is in **local time**, without timezone offset parsing
- `.period.json` is the marker that prevents duplicate backups within the same period
- archive filenames contain only the date; uniqueness is ensured by per-period directories plus the period marker
- when changing naming or retention, update both the filename logic and the prune logic together
- git worktree detection is path-based (a `.git` ancestor check) and blocks backup creation inside the repo tree
---

## Full Listing: `internal/appstate/backup.go`

```go
package appstate

import (
	"archive/zip"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"
)

type backupPeriod struct {
	name      string
	retention int
	key       func(time.Time) string
	date      func(time.Time) string
}

var backupPeriods = []backupPeriod{
	{
		name:      "daily",
		retention: 7,
		key:       func(t time.Time) string { return t.Format("2006-01-02") },
		date:      func(t time.Time) string { return t.Format("2006-01-02") },
	},
	{
		name:      "weekly",
		retention: 4,
		key: func(t time.Time) string {
			y, w := t.ISOWeek()
			return fmt.Sprintf("%04d-W%02d", y, w)
		},
		date: func(t time.Time) string { return t.Format("2006-01-02") },
	},
	{
		name:      "monthly",
		retention: 12,
		key:       func(t time.Time) string { return t.Format("2006-01") },
		date:      func(t time.Time) string { return t.Format("2006-01-02") },
	},
	{
		name:      "yearly",
		retention: 10,
		key:       func(t time.Time) string { return t.Format("2006") },
		date:      func(t time.Time) string { return t.Format("2006-01-02") },
	},
}

func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error) {
	if isBackupDisabled() || dbPath == "" {
		return nil, nil
	}
	if _, err := os.Stat(dbPath); os.IsNotExist(err) {
		return nil, nil
	}
	root := resolveBackupRoot(dbPath)
	now := time.Now()
	created := make([]string, 0)
	for _, period := range backupPeriods {
		newFiles, err := ensurePeriodBackup(root, period, now, dbPath, configPath)
		if err != nil {
			return created, err
		}
		created = append(created, newFiles...)
	}
	return created, nil
}
```
---

## Full Listing: Scheduler Hook (`main.go`)

```go
func startBackupScheduler(ctx context.Context, cfg *config.Config, dbPath, configPath string) {
	if cfg == nil {
		return
	}
	hour, minute, err := parseBackupTime(cfg.Backup.Time)
	if err != nil {
		slog.Warn("invalid backup time; using 00:00", "value", cfg.Backup.Time, "error", err)
		hour, minute = 0, 0
	}

	// Startup check: create backup immediately if none exists for current periods
	if created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath); backupErr != nil {
		slog.Error("local backup failed", "error", backupErr)
	} else {
		for _, path := range created {
			slog.Info("local backup completed", "archive", path)
		}
	}

	for {
		next := nextBackupTime(time.Now(), hour, minute)
		timer := time.NewTimer(time.Until(next))
		select {
		case <-ctx.Done():
			timer.Stop()
			return
		case <-timer.C:
			start := time.Now()
			created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath)
			duration := time.Since(start)
			if backupErr != nil {
				slog.Error("local backup failed", "error", backupErr, "duration", duration)
			} else {
				for _, path := range created {
					slog.Info("local backup completed", "archive", path, "duration", duration)
				}
			}
		}
	}
}

func parseBackupTime(value string) (int, int, error) {
	if strings.TrimSpace(value) == "" {
		return 0, 0, fmt.Errorf("empty backup time")
	}
	parsed, err := time.Parse("15:04", value)
	if err != nil {
		return 0, 0, err
	}
	return parsed.Hour(), parsed.Minute(), nil
}

func nextBackupTime(now time.Time, hour, minute int) time.Time {
	target := time.Date(now.Year(), now.Month(), now.Day(), hour, minute, 0, 0, now.Location())
	if !now.Before(target) {
		target = target.Add(24 * time.Hour)
	}
	return target
}
```
## Restore

1. stop QuoteForge;
2. unpack the chosen archive outside the repository;
3. replace `qfs.db`;
4. replace `config.yaml` if needed;
5. restart the app;
6. re-enter MariaDB credentials if the original encryption key is unavailable.

---
# 07 — Development
## Commands

```bash
# Run (dev)
go run ./cmd/qfs
make run

# Build
make build-release                      # optimized build with version info
CGO_ENABLED=0 go build -o bin/qfs ./cmd/qfs

# Cross-platform build
make build-all                          # Linux, macOS, Windows
make build-windows                      # Windows only

# Verification
go build ./cmd/qfs                      # must compile without errors
go vet ./...                            # linter

# Migrations
go run ./cmd/qfs -migrate
go run ./cmd/migrate_project_updated_at

# Tests
go test ./...
make test

# Utilities
make install-hooks                      # git hooks (block committing secrets)
make clean                              # clean bin/
make help                               # all available commands
```

---
## Code Style

- **Formatting:** `gofmt` (mandatory, run before commit)
- **Logging:** `slog` only (structured logging to the binary's stdout/stderr). No `console.log` or any other logging in browser-side JS — the browser console is never used for diagnostics.
- **Errors:** explicit wrapping with context (`fmt.Errorf("context: %w", err)`)
- **Style:** no unnecessary abstractions; the minimum code for the task

---

## Guardrails

- keep runtime business logic SQLite-only;
- limit MariaDB access to sync, setup, and migration tooling;
- keep `config.yaml` out of git and use `config.example.yaml` only as a template;
- update the architecture docs (`bible-local/`) in the same commit as architecture changes.
## Removed features that must not return

The following components were **intentionally removed** and must not be brought back:

- admin pricing UI/API;
- alerts and notification workflows;
- stock import tooling;
- cron jobs;
- the standalone importer utility.

### Configuration Files

- `config.yaml` — runtime user file, **not stored in the repository**
- `config.example.yaml` — the only config template in the repo
### Sync and Local-First

- Any sync change must preserve local-first behavior
- Local CRUD must not be blocked when MariaDB is unavailable
- Runtime business code must not query MariaDB directly; all normal reads/writes go through SQLite snapshots
- Direct MariaDB access is allowed only in `internal/services/sync/*` and dedicated setup/migration tools under `cmd/`
- `connMgr.GetDB()` in handlers/services outside sync is a code review failure unless the code is strictly setup or operator tooling
- Local SQLite migrations must be implemented in code; do not add a server-side registry of client SQLite SQL patches
- Read-only local cache tables may be reset during startup recovery if a migration fails; do not apply that strategy to user-authored tables such as configurations, projects, pending changes, or connection settings
### Formats and UI

- **CSV export:** the filename must use the **project code** (`project.Code`), not the project name.
  Format: `YYYY-MM-DD (ProjectCode) ConfigName Article.csv`
- **Breadcrumbs UI:** names longer than 16 characters must be truncated with an ellipsis
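The truncation rule can be sketched as a tiny helper. `truncateBreadcrumb` is an illustrative name, and counting runes rather than bytes is an assumption so that Cyrillic names truncate cleanly.

```go
package main

import "fmt"

// truncateBreadcrumb shortens names longer than 16 characters and
// appends an ellipsis, per the breadcrumbs rule above.
func truncateBreadcrumb(name string) string {
	r := []rune(name)
	if len(r) <= 16 {
		return name
	}
	return string(r[:16]) + "…"
}

func main() {
	fmt.Println(truncateBreadcrumb("Short name"))
	fmt.Println(truncateBreadcrumb("A very long configuration name"))
}
```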
### Architecture Documentation

- Every architectural decision must be recorded in `bible/`
- The corresponding Bible file must be updated **in the same commit** as the code change
- On every user-requested commit, review and update the Bible in that same commit
---

## Common Tasks

### Add a Field to Configuration

1. Add the field to the `LocalConfiguration` struct (`internal/models/`)
2. Add GORM tags for the DB column
3. Write a SQL migration (`migrations/`)
4. Update the `ConfigurationToLocal` / `LocalToConfiguration` converters
5. Update API handlers and services

### Add a Field to Component

1. Add the field to the `LocalComponent` struct (`internal/models/`)
2. Update the SQL query in `SyncComponents()`
3. Update the `componentRow` struct to match
4. Update the converter functions
### Add a Pricelist Price Lookup

```go
// Modern pattern: resolve a price by pricelist ID and LOT name
price, found := s.lookupPriceByPricelistID(pricelistID, lotName)
if found && price > 0 {
    // use price
}
```
---

## Known Gotchas

1. **`CurrentPrice` removed from components** — any code still using it will fail to compile
2. **`HasPrice` filter removed** — `ListComponents` in `component.go` no longer supports this filter
3. **Quote calculation** is always SQLite-only; do not add a live MariaDB fallback
4. **Items JSON:** prices are stored in the configuration's `items` field, not fetched from components
5. **Migrations are additive:** already-applied migrations are skipped (checked by `id` in `local_schema_migrations`)
6. **`SyncedAt` removed:** the last component sync time now lives in `app_settings` (key `last_component_sync`)
---
|
||||
|
||||
## Debugging Price Issues

**Problem: quote returns no prices**

1. Check that `pricelist_id` is set on the configuration
2. Check that pricelist items exist: `SELECT COUNT(*) FROM local_pricelist_items`
3. Check `lookupPriceByPricelistID()` in `quote.go`
4. Verify the correct source is used (estimate/warehouse/competitor)

**Problem: component sync not working**

1. Components sync as metadata only — no prices
2. Prices come via a separate pricelist sync
3. Check `SyncComponents()` and the MariaDB query

**Problem: configuration refresh does not update prices**

1. Refresh uses the latest estimate pricelist by default
2. Latest-pricelist resolution ignores pricelists without items (`EXISTS local_pricelist_items`)
3. Old prices in `config.items` are preserved if a line item is not found in the pricelist
4. To force a specific pricelist: set `configuration.pricelist_id`
5. In the configurator, `Авто` (auto) must remain auto mode: the runtime-resolved ID must not be persisted as an explicit selection

Release history belongs under `releases/<version>/RELEASE_NOTES.md`.
Do not keep temporary change summaries in the repository root.
@@ -1,568 +1,64 @@

# 09 — Vendor Spec (BOM Import)

## Overview

The vendor spec feature allows importing a vendor BOM (Bill of Materials) into a configuration. It maps vendor part numbers (PN) to internal LOT names using an active partnumber book (a snapshot pulled from PriceForge), then aggregates quantities to populate the Estimate (cart).

Vendor BOM is stored in `local_configurations.vendor_spec` and synced with `qt_configurations.vendor_spec`.

---

## Architecture

### Storage

| Data | Storage | Sync direction |
|------|---------|----------------|
| `vendor_spec` JSON | `local_configurations.vendor_spec` (TEXT, JSON-encoded) | Two-way via `pending_changes` |
| Partnumber book snapshots | `local_partnumber_books` + `local_partnumber_book_items` | Pull-only from PriceForge |

`vendor_spec` is a JSON array of `VendorSpecItem` objects stored inside the configuration row.
It syncs to MariaDB `qt_configurations.vendor_spec` via the existing `pending_changes` mechanism.

Legacy storage note:

- QuoteForge does not use `qt_bom`
- QuoteForge does not use `qt_lot_bundles`
- QuoteForge does not use `qt_lot_bundle_items`

The only canonical persisted BOM contract for QuoteForge is `qt_configurations.vendor_spec`.
### `vendor_spec` JSON Schema

```json
[
  {
    "sort_order": 10,
    "vendor_partnumber": "ABC-123",
    "quantity": 2,
    "description": "...",
    "unit_price": 4500.00,
    "total_price": 9000.00,
    "lot_mappings": [
      { "lot_name": "LOT_A", "quantity_per_pn": 1 },
      { "lot_name": "LOT_B", "quantity_per_pn": 2 }
    ]
  }
]
```

`lot_mappings[]` is the canonical persisted LOT mapping list for a BOM row.
Each mapping entry stores:

- `lot_name`
- `quantity_per_pn` (how many units of this LOT are included in **one vendor PN**)
### PN → LOT Mapping Contract (single LOT, multiplier, bundle)

QuoteForge expects the server to return/store BOM rows (`vendor_spec`) using a single canonical mapping list:

- `lot_mappings[]` contains **all** LOT mappings for the PN row (single-LOT and bundle cases alike)
- the list stores exactly what the user sees in BOM (LOT + "LOT в 1 PN")
- the DB contract does **not** split mappings into "base LOT" vs "bundle LOTs"

#### Final quantity contribution to Estimate

For one BOM row with vendor PN quantity `pn_qty`, each mapping contributes:

- `lot_qty = pn_qty * lot_mappings[i].quantity_per_pn`
#### Example: one PN maps to multiple LOTs

Each row uses this canonical shape:

```json
{
  "vendor_partnumber": "SYS-821GE-TNHR",
  "quantity": 3,
  "lot_mappings": [
    { "lot_name": "CHASSIS_X13_8GPU", "quantity_per_pn": 1 },
    { "lot_name": "PS_3000W_Titanium", "quantity_per_pn": 2 },
    { "lot_name": "RAILKIT_X13", "quantity_per_pn": 1 }
  ]
}
```

This row contributes to Estimate:

- `CHASSIS_X13_8GPU` → `3 * 1 = 3`
- `PS_3000W_Titanium` → `3 * 2 = 6`
- `RAILKIT_X13` → `3 * 1 = 3`

Rules:

- `lot_mappings[]` is the only persisted PN -> LOT mapping contract;
- QuoteForge does not use legacy BOM tables;
- the apply flow rebuilds cart rows from `lot_mappings[]`.
## Partnumber Books (Snapshots)

Partnumber books are immutable versioned snapshots of the global PN→LOT mapping table, analogous to pricelists. PriceForge creates new snapshots; QuoteForge only pulls and reads them (pull-only sync).

Local tables:

- `local_partnumber_books`
- `local_partnumber_book_items`

Server tables:

- `qt_partnumber_books`
- `qt_partnumber_book_items`

Resolution flow:

1. load the active local book;
2. find `vendor_partnumber`;
3. copy `lots_json` into `lot_mappings[]`;
4. keep unresolved rows editable in the UI.

### SQLite (local mirror)

```sql
CREATE TABLE local_partnumber_books (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER UNIQUE NOT NULL, -- id from qt_partnumber_books
    version TEXT NOT NULL,             -- format YYYY-MM-DD-NNN
    created_at DATETIME NOT NULL,
    is_active INTEGER NOT NULL DEFAULT 1
);

CREATE TABLE local_partnumber_book_items (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    partnumber TEXT NOT NULL,
    lots_json TEXT NOT NULL,
    description TEXT
);
CREATE UNIQUE INDEX idx_local_book_pn ON local_partnumber_book_items(partnumber);
```

**Active book query:** `WHERE is_active=1 ORDER BY created_at DESC, id DESC LIMIT 1`

**Schema creation:** GORM AutoMigrate (not `runLocalMigrations`).
### MariaDB (managed exclusively by PriceForge)

```sql
CREATE TABLE qt_partnumber_books (
    id INT AUTO_INCREMENT PRIMARY KEY,
    version VARCHAR(50) NOT NULL,
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    is_active TINYINT(1) NOT NULL DEFAULT 1,
    partnumbers_json LONGTEXT NOT NULL
);

CREATE TABLE qt_partnumber_book_items (
    id INT AUTO_INCREMENT PRIMARY KEY,
    partnumber VARCHAR(255) NOT NULL,
    lots_json LONGTEXT NOT NULL,
    description VARCHAR(10000) NULL,
    UNIQUE KEY uq_qt_partnumber_book_items_partnumber (partnumber)
);

ALTER TABLE qt_configurations ADD COLUMN vendor_spec JSON NULL;
```

QuoteForge has `SELECT` permission only on `qt_partnumber_books` and `qt_partnumber_book_items`. All writes are managed by PriceForge.

**Grant (add to existing user setup):**

```sql
GRANT SELECT ON RFQ_LOG.qt_partnumber_books TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_book_items TO '<DB_USER>'@'%';
```

---
## Resolution Algorithm (3-step)

For each `vendor_partnumber` in the BOM, QuoteForge builds/updates UI-visible LOT mappings:

1. **Active book lookup** — read the active `local_partnumber_books` row, verify PN membership in `partnumbers_json`, then query `local_partnumber_book_items WHERE partnumber = ?`.
2. **Populate BOM UI** — if a match exists, the BOM row gets `lot_mappings[]` from `lots_json` (the user can still edit it).
3. **Unresolved** — red row + inline LOT input with strict autocomplete.

Persistence note: the application stores the final user-visible mappings in `lot_mappings[]` (not separate "resolved"/"manual" persisted fields).
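The three steps reduce to a lookup loop; a minimal in-memory sketch (the `book` map stands in for the active snapshot, i.e. `partnumber` to mappings decoded from `lots_json` — the real code queries SQLite):

```go
package main

import "fmt"

type LotMapping struct {
	LotName       string
	QuantityPerPN int
}

type BOMRow struct {
	VendorPartnumber string
	LotMappings      []LotMapping
	Unresolved       bool // rendered as a red row with strict LOT autocomplete
}

// resolve applies the 3-step flow to each BOM row.
func resolve(rows []BOMRow, book map[string][]LotMapping) {
	for i := range rows {
		if m, ok := book[rows[i].VendorPartnumber]; ok { // step 1: book lookup
			rows[i].LotMappings = m // step 2: populate, still user-editable
			rows[i].Unresolved = false
			continue
		}
		rows[i].Unresolved = true // step 3: unresolved
	}
}

func main() {
	book := map[string][]LotMapping{"ABC-123": {{LotName: "LOT_A", QuantityPerPN: 1}}}
	rows := []BOMRow{{VendorPartnumber: "ABC-123"}, {VendorPartnumber: "ZZZ-999"}}
	resolve(rows, book)
	fmt.Println(rows[0].LotMappings[0].LotName, rows[1].Unresolved)
}
```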
---
## CFXML Workspace Import Contract

QuoteForge may import a vendor configurator workspace in `CFXML` format as an update path for an existing project.
This import path must convert one external workspace into one QuoteForge project containing multiple configurations.

`POST /api/projects/:uuid/vendor-import` imports one vendor workspace into an existing project.

### Import Unit Boundaries

- One `CFXML` workspace file = one QuoteForge project import session.
- One top-level configuration group inside the workspace = one QuoteForge configuration.
- Software rows are **not** imported as standalone configurations.
- All software rows must be attached to the configuration group they belong to.

### Configuration Grouping

Top-level `ProductLineItem` rows are grouped by:

- `ProprietaryGroupIdentifier`

This field is the canonical boundary of one imported configuration.
Rules:

1. Read all top-level `ProductLineItem` rows in document order.
2. Group them by `ProprietaryGroupIdentifier`.
3. Preserve document order of groups by the first encountered `ProductLineNumber`.
4. Import each group as exactly one QuoteForge configuration.

`ConfigurationGroupLineNumberReference` is not sufficient for grouping imported configurations because multiple independent configuration groups may share the same value in one workspace.
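Rules 1–3 can be sketched as order-preserving grouping (the `Row` type is a trimmed stand-in; field names follow the CFXML terms above):

```go
package main

import "fmt"

type Row struct {
	ProductLineNumber string
	GroupID           string // ProprietaryGroupIdentifier
}

// groupRows groups top-level rows by GroupID while preserving
// first-appearance order of the groups.
func groupRows(rows []Row) (order []string, groups map[string][]Row) {
	groups = make(map[string][]Row)
	for _, r := range rows {
		if _, seen := groups[r.GroupID]; !seen {
			order = append(order, r.GroupID) // first encounter fixes group order
		}
		groups[r.GroupID] = append(groups[r.GroupID], r)
	}
	return order, groups
}

func main() {
	rows := []Row{{"10", "G2"}, {"20", "G1"}, {"30", "G2"}}
	order, groups := groupRows(rows)
	fmt.Println(order, len(groups["G2"]))
}
```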
### Primary Row Selection (no SKU hardcode)

The importer must not hardcode vendor, model, or server SKU values.

Within each `ProprietaryGroupIdentifier` group, the importer selects one primary top-level row using structural rules only:

1. Prefer rows with `ProductTypeCode = Hardware`.
2. If multiple rows match, prefer the row with the largest number of `ProductSubLineItem` children.
3. If there is still a tie, prefer the first row by `ProductLineNumber`.

The primary row provides configuration-level metadata such as:

- configuration name
- server count
- server model / description
- article / support code candidate
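A minimal sketch of the three tie-break rules (the `TopRow` type is a stand-in; comparing `ProductLineNumber` as a string assumes equal-width line numbers):

```go
package main

import "fmt"

type TopRow struct {
	ProductLineNumber string
	ProductTypeCode   string
	SubItemCount      int // number of ProductSubLineItem children
}

// pickPrimary selects one primary row per group using structural rules only.
func pickPrimary(rows []TopRow) TopRow {
	best := rows[0]
	for _, r := range rows[1:] {
		switch {
		case (r.ProductTypeCode == "Hardware") != (best.ProductTypeCode == "Hardware"):
			if r.ProductTypeCode == "Hardware" { // rule 1: prefer Hardware
				best = r
			}
		case r.SubItemCount != best.SubItemCount:
			if r.SubItemCount > best.SubItemCount { // rule 2: most children
				best = r
			}
		case r.ProductLineNumber < best.ProductLineNumber: // rule 3: first line
			best = r
		}
	}
	return best
}

func main() {
	rows := []TopRow{
		{"10", "Software", 0},
		{"20", "Hardware", 5},
		{"30", "Hardware", 12},
	}
	fmt.Println(pickPrimary(rows).ProductLineNumber)
}
```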
### Software Inclusion Rule

All top-level rows belonging to the same `ProprietaryGroupIdentifier` must be imported into the same QuoteForge configuration, including:

- `Hardware`
- `Software`
- instruction / service rows represented as software-like items

Effects:

- a workspace never creates a separate configuration made only of software;
- `software1`, `software2`, license rows, and instruction rows stay inside the related configuration;
- the user sees one complete configuration instead of fragmented partial imports.
### Mapping to QuoteForge Project / Configuration

For one imported configuration group:

- QuoteForge configuration `name` <- primary row `ProductName`
- QuoteForge configuration `server_count` <- primary row `Quantity`
- QuoteForge configuration `server_model` <- primary row `ProductDescription`
- QuoteForge configuration `article` or `support_code` <- primary row `ProprietaryProductIdentifier`
- QuoteForge configuration `line` <- stable order by group appearance in the workspace

Project-level fields such as QuoteForge `code`, `name`, and `variant` are not reliably defined by `CFXML` itself and should come from the existing target project context or explicit user input.
### Mapping to `vendor_spec`

The importer must build one combined `vendor_spec` array per configuration group.

Source rows:

- all `ProductSubLineItem` rows from the primary top-level row;
- all `ProductSubLineItem` rows from every non-primary top-level row in the same group;
- if a top-level row has no `ProductSubLineItem`, the top-level row itself may be converted into one `vendor_spec` row so that software-only content is not lost.

Each imported row maps into one `VendorSpecItem`:

- `sort_order` <- stable sequence within the group
- `vendor_partnumber` <- `ProprietaryProductIdentifier`
- `quantity` <- `Quantity`
- `description` <- `ProductDescription`
- `unit_price` <- `UnitListPrice.FinancialAmount.MonetaryAmount` when present
- `total_price` <- `quantity * unit_price` when unit price is present
- `lot_mappings` <- resolved immediately from the active partnumber book using `lots_json`

The importer stores vendor-native rows in `vendor_spec`, then immediately runs the same logical flow as BOM Resolve + Apply:

- resolve vendor PN rows through the active partnumber book
- persist canonical `lot_mappings[]`
- build normalized configuration `items` from `row.quantity * quantity_per_pn`
- fill `items.unit_price` from the latest local `estimate` pricelist
- recalculate configuration `total_price`
### Import Pipeline

Recommended parser pipeline:

1. Parse XML into top-level `ProductLineItem` rows.
2. Group rows by `ProprietaryGroupIdentifier`.
3. Select one primary row per group using structural rules.
4. Build one QuoteForge configuration DTO per group.
5. Merge all hardware/software rows of the group into one `vendor_spec`.
6. Resolve imported PN rows into canonical `lot_mappings[]` using the active partnumber book.
7. Build configuration `items` from resolved `lot_mappings[]`.
8. Price those `items` from the latest local `estimate` pricelist.
9. Save or update the QuoteForge configuration inside the existing project.
### Recommended Internal DTO

```go
type ImportedProject struct {
	SourceFormat   string
	SourceFilePath string
	SourceDocID    string

	Code    string
	Name    string
	Variant string

	Configurations []ImportedConfiguration
}

type ImportedConfiguration struct {
	GroupID string

	Name        string
	Line        int
	ServerCount int

	ServerModel  string
	Article      string
	SupportCode  string
	CurrencyCode string

	TopLevelRows []ImportedTopLevelRow
	VendorSpec   []ImportedVendorRow
}

type ImportedTopLevelRow struct {
	ProductLineNumber string
	ItemNo            string
	GroupID           string

	ProductType string
	ProductCode string
	ProductName string
	Description string
	Quantity    int
	UnitPrice   *float64
	IsPrimary   bool

	SubRows []ImportedVendorRow
}

type ImportedVendorRow struct {
	SortOrder int

	SourceLineNumber  string
	SourceParentLine  string
	SourceProductType string

	VendorPartnumber string
	Description      string
	Quantity         int
	UnitPrice        *float64
	TotalPrice       *float64

	ProductCharacter string
	ProductCharPath  string
}
```
### Current Product Assumption

For QuoteForge product behavior, the correct user-facing interpretation is:

- one external project/workspace contains several configurations;
- each configuration contains both hardware and software rows that belong to it;
- the importer must preserve that grouping exactly.

---
## Qty Aggregation Logic

After resolution, qty per LOT is computed from the BOM row quantity multiplied by the matched `lots_json.qty`:

```
qty(lot) = SUM(quantity_of_pn_row * quantity_of_lot_inside_lots_json)
```

Examples (book: PN_X → `[{LOT_A, qty:2}, {LOT_B, qty:1}]`):

- BOM: PN_X ×3 → `LOT_A ×6`, `LOT_B ×3`
- BOM: PN_X ×1 and PN_X ×2 → `LOT_A ×6`, `LOT_B ×3`
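The aggregation formula above can be sketched directly (trimmed stand-in types; the second documented example is reproduced in `main`):

```go
package main

import "fmt"

type LotMapping struct {
	LotName       string
	QuantityPerPN int
}

type BOMRow struct {
	Quantity    int
	LotMappings []LotMapping
}

// aggregate implements qty(lot) = SUM(pn_row_quantity * quantity_per_pn).
func aggregate(rows []BOMRow) map[string]int {
	qty := make(map[string]int)
	for _, row := range rows {
		for _, m := range row.LotMappings {
			qty[m.LotName] += row.Quantity * m.QuantityPerPN
		}
	}
	return qty
}

func main() {
	// Book: PN_X -> [{LOT_A, qty:2}, {LOT_B, qty:1}]; BOM: PN_X ×1 and PN_X ×2.
	mapping := []LotMapping{{"LOT_A", 2}, {"LOT_B", 1}}
	rows := []BOMRow{{Quantity: 1, LotMappings: mapping}, {Quantity: 2, LotMappings: mapping}}
	q := aggregate(rows)
	fmt.Println(q["LOT_A"], q["LOT_B"]) // 6 3
}
```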
---
## UI: Three Top-Level Tabs

The configurator (`/configurator`) has three tabs:

1. **Estimate** — existing cart/component configurator (unchanged).
2. **BOM** — paste/import vendor BOM, manual column mapping, LOT matching, bundle decomposition (`1 PN -> multiple LOTs`), "Пересчитать эстимейт" (recalculate estimate), "Очистить" (clear).
3. **Ценообразование** (pricing) — pricing summary table + custom price input.

BOM data is shared between tabs 2 and 3.
### BOM Import UI (raw table, manual column mapping)

After paste (`Ctrl+V`), QuoteForge renders an editable raw table (no auto-detected parsing).

- The pasted rows are shown **as-is** (including header rows, if present).
- The user selects a type for each column manually:
  - `P/N`
  - `Кол-во` (quantity)
  - `Цена` (price)
  - `Описание` (description)
  - `Не использовать` (do not use)
- Required mapping:
  - exactly one `P/N`
  - exactly one `Кол-во`
- Optional mapping:
  - `Цена` (0..1)
  - `Описание` (0..1)
- Rows can be:
  - ignored (UI-only, excluded from `vendor_spec`)
  - deleted
- Raw cells are editable inline after paste.

Notes:

- There is **no auto column detection**.
- There is **no auto header-row skip**.
- The raw import layout itself is not stored on the server; only normalized `vendor_spec` is stored.
### LOT matching in BOM table

The BOM table adds service columns on the right:

- `LOT`
- `LOT в 1 PN` (LOTs per one PN)
- actions (`+`, ignore, delete)

`LOT` behavior:

- The first LOT row shown in the BOM UI is the primary LOT mapping for the PN row.
- Additional LOT rows are added via the `+` action.
- The inline LOT input is strict:
  - autocomplete source = full local components list (`/api/components?per_page=5000`)
  - free text that does not match an existing LOT is rejected

`LOT в 1 PN` behavior:

- quantity multiplier for each visible LOT row in BOM (`quantity_per_pn` in persisted `lot_mappings[]`)
- default = `1`
- editable inline
### Bundle mode (`1 PN -> multiple LOTs`)

The `+` action in BOM rows adds an extra LOT mapping row for the same vendor PN row.

- All visible LOT rows (the first one plus added rows) are persisted uniformly in `lot_mappings[]`.
- Each mapping row has:
  - LOT
  - qty (`LOT в 1 PN` = `quantity_per_pn`)
### BOM restore on config open

On config open, QuoteForge loads `vendor_spec` from the server and reconstructs the editable BOM table in normalized form:

- columns restored as: `Qty | P/N | Description | Price`
- column mapping restored as: `qty`, `pn`, `description`, `price`
- LOT / `LOT в 1 PN` rows are restored from `vendor_spec.lot_mappings[]`

This restores the BOM editing state, but not the original raw Excel layout (extra columns, ignored rows, original headers).
### Pricing Tab: column order

```
LOT | PN вендора | Описание | Кол-во | Estimate | Цена вендора | Склад | Конк.
```

(LOT | vendor PN | description | quantity | Estimate | vendor price | warehouse | competitor)

**If BOM is empty** — the pricing tab still renders, using cart items as rows (`PN вендора` = "—", `Цена вендора` = "—").

**Description source priority:** BOM row description → LOT description from `local_components`.
### Pricing Tab: BOM + Estimate merge behavior

When BOM exists, the pricing tab renders:

- BOM-based rows (including rows resolved via manual LOT and bundle mappings)
- plus **Estimate-only LOTs** (rows currently in the cart but not covered by BOM mappings)

Estimate-only rows are shown as separate rows with:

- `PN вендора` = `—`
- vendor price = `—`
- description from local components
### Pricing Tab: "Своя цена" (custom price) input

- Manual entry → proportionally redistributes the custom price into the "Цена вендора" (vendor price) cells, proportional to each row's Estimate share. The last row absorbs the rounding remainder.
- The "Проставить цены BOM" (apply BOM prices) button → restores per-row original BOM prices directly (no proportional redistribution) and sets "Своя цена" to their sum.
- Both paths show the "Скидка от Estimate: X%" (discount vs Estimate) info.
- The "Экспорт CSV" button → downloads `pricing_<uuid>.csv` with a UTF-8 BOM, the same column order as the table, plus an "Итого" (total) row.
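The redistribution rule can be sketched as follows (a minimal sketch with illustrative names, assuming two-decimal rounding; the real logic lives in the pricing tab JS):

```go
package main

import (
	"fmt"
	"math"
)

// redistribute splits a custom total across rows proportionally to each
// row's Estimate share; the last row absorbs the rounding remainder so the
// column sums exactly to the custom total.
func redistribute(estimates []float64, customTotal float64) []float64 {
	var estTotal float64
	for _, e := range estimates {
		estTotal += e
	}
	out := make([]float64, len(estimates))
	var assigned float64
	for i, e := range estimates {
		if i == len(estimates)-1 {
			out[i] = math.Round((customTotal-assigned)*100) / 100 // remainder
			break
		}
		out[i] = math.Round(customTotal*e/estTotal*100) / 100
		assigned += out[i]
	}
	return out
}

func main() {
	fmt.Println(redistribute([]float64{100, 100, 100}, 100))
}
```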
---
## API Endpoints

| Method | URL | Description |
|--------|-----|-------------|
| GET | `/api/configs/:uuid/vendor-spec` | Fetch stored BOM |
| PUT | `/api/configs/:uuid/vendor-spec` | Replace BOM (full update) |
| POST | `/api/configs/:uuid/vendor-spec/resolve` | Resolve PNs → LOTs (no cart mutation) |
| POST | `/api/configs/:uuid/vendor-spec/apply` | Apply resolved LOTs to cart |
| POST | `/api/projects/:uuid/vendor-import` | Import a `CFXML` workspace into an existing project and create grouped configurations |
| GET | `/api/partnumber-books` | List local book snapshots |
| GET | `/api/partnumber-books/:id` | Items for a book by `server_id` |
| POST | `/api/sync/partnumber-books` | Pull book snapshots from MariaDB |
| POST | `/api/sync/partnumber-seen` | Push unresolved PNs to `qt_vendor_partnumber_seen` on MariaDB |
## Unresolved PN Tracking (`qt_vendor_partnumber_seen`)

After each `resolveBOM()` call, QuoteForge pushes PN rows to `POST /api/sync/partnumber-seen` (fire-and-forget from JS — errors are silently ignored):

- unresolved BOM rows (`ignored = false`)
- raw BOM rows explicitly marked as ignored in the UI (`ignored = true`) — these rows are **not** saved to `vendor_spec`, but are reported for server-side tracking

The handler calls `sync.PushPartnumberSeen()`, which inserts into `qt_vendor_partnumber_seen`.
If a row with the same `partnumber` already exists, QuoteForge must leave it untouched:

- do not update `last_seen_at`
- do not update `is_ignored`
- do not update `description`

Canonical insert behavior:

```sql
INSERT INTO qt_vendor_partnumber_seen (source_type, vendor, partnumber, description, is_ignored, last_seen_at)
VALUES ('manual', '', ?, ?, ?, NOW())
ON DUPLICATE KEY UPDATE
    partnumber = partnumber
```

Uniqueness key: `partnumber` only (after PriceForge migration 025). PriceForge uses this table to populate the partnumber book.

Partnumber book sync contract:

- PriceForge writes membership snapshots to `qt_partnumber_books.partnumbers_json`.
- PriceForge writes canonical PN payloads to `qt_partnumber_book_items`.
- QuoteForge syncs book headers first, then pulls PN payloads with:
  `SELECT partnumber, lots_json, description FROM qt_partnumber_book_items WHERE partnumber IN (...)`
## BOM Persistence

- `vendor_spec` is saved to the server via `PUT /api/configs/:uuid/vendor-spec`.
- `GET` / `PUT` of `vendor_spec` must preserve the row-level mapping fields used by the UI:
  - `lot_mappings[]`
  - each item: `lot_name`, `quantity_per_pn`
- `description` is persisted in each BOM row and is used by the Pricing tab when available.
- Ignored raw rows are **not** persisted into `vendor_spec`.
- The PUT handler explicitly marshals `VendorSpec` to a JSON string before passing it to GORM `Update` (GORM does not reliably call `driver.Valuer` for custom types in `Update(column, value)`).
- BOM is autosaved (debounced) after BOM-changing actions, including:
  - `resolveBOM()`
  - LOT row qty (`LOT в 1 PN`) changes
  - LOT row add/remove (`+` / delete in bundle context)
- The "Сохранить BOM" (save BOM) button triggers an explicit save.
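The marshal-before-`Update` workaround noted above can be sketched like this (trimmed stand-in types; the helper name is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type LotMapping struct {
	LotName       string `json:"lot_name"`
	QuantityPerPN int    `json:"quantity_per_pn"`
}

type VendorSpecItem struct {
	VendorPartnumber string       `json:"vendor_partnumber"`
	Quantity         int          `json:"quantity"`
	LotMappings      []LotMapping `json:"lot_mappings"`
}

// marshalVendorSpec converts the slice to a JSON string so the handler can
// pass a plain string to GORM's Update(column, value) instead of relying on
// driver.Valuer being called for the custom slice type.
func marshalVendorSpec(spec []VendorSpecItem) (string, error) {
	b, err := json.Marshal(spec)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	s, _ := marshalVendorSpec([]VendorSpecItem{{
		VendorPartnumber: "ABC-123", Quantity: 2,
		LotMappings: []LotMapping{{"LOT_A", 1}},
	}})
	fmt.Println(s)
}
```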
## Pricing Tab: Estimate Price Source

`renderPricingTab()` is async. It calls `POST /api/quote/price-levels` with LOTs collected from:

- `lot_mappings[]` from BOM rows
- current Estimate/cart LOTs not covered by BOM mappings (to show estimate-only rows)

This ensures Estimate prices appear for:

- manually matched LOTs in the BOM tab
- bundle LOTs
- LOTs already present in Estimate but not mapped from BOM

### Apply to Estimate ("Пересчитать эстимейт")

When applying BOM to Estimate, QuoteForge builds cart rows from the explicit UI mappings stored in `lot_mappings[]`.

For a BOM row with PN qty = `Q`:

- each mapped LOT contributes `Q * quantity_per_pn`

Rows without any valid LOT mapping are skipped.
## Web Route

| Route | Page |
|-------|------|
| `/partnumber-books` | Partnumber books — active book summary (unique LOTs, total PN, primary PN count), searchable items table, collapsible snapshot history |
`POST /api/projects/:uuid/vendor-import` contract summary:

- the accepted file field is `file`;
- the maximum file size is `1 GiB`;
- one `ProprietaryGroupIdentifier` becomes one QuoteForge configuration;
- software rows stay inside their hardware group and never become standalone configurations;
- the primary group row is selected structurally, without vendor-specific SKU hardcoding;
- imported configuration order follows workspace order.

Imported configuration fields:

- `name` from primary row `ProductName`
- `server_count` from primary row `Quantity`
- `server_model` from primary row `ProductDescription`
- `article` or `support_code` from `ProprietaryProductIdentifier`

Imported BOM rows become `vendor_spec` rows and are resolved through the active local partnumber book when possible.
@@ -1,55 +1,30 @@

# QuoteForge Bible

Project-specific architecture and operational contracts.

## Files

| File | Scope |
| --- | --- |
| [01-overview.md](01-overview.md) | Product scope, runtime model, repository map |
| [02-architecture.md](02-architecture.md) | Local-first rules, sync, pricing, versioning |
| [03-database.md](03-database.md) | SQLite and MariaDB data model, permissions, migrations |
| [04-api.md](04-api.md) | HTTP routes and API contract |
| [05-config.md](05-config.md) | Runtime config, paths, env vars, startup behavior |
| [06-backup.md](06-backup.md) | Backup contract and restore workflow |
| [07-dev.md](07-dev.md) | Development commands and guardrails |
| [09-vendor-spec.md](09-vendor-spec.md) | Vendor BOM and CFXML import contract |

## Rules

- `bible-local/` is the source of truth for QuoteForge-specific behavior.
- Keep these files in English.
- Update the matching file in the same commit as any architectural change.
- Remove stale documentation instead of preserving history in place.

## Quick reference

- Local DB path: see [05-config.md](05-config.md)
- Runtime bind: loopback only
- Local backups: see [06-backup.md](06-backup.md)
- Release notes: `releases/<version>/RELEASE_NOTES.md`
- Pre-commit check: `go build ./cmd/qfs && go vet ./...`
- Line-item prices come from `local_pricelist_items` (by `pricelist_id` + `lot_name`), never from `local_components`
- Never restore intentionally removed features: cron jobs, admin pricing, alerts, stock import, the importer utility
173
cmd/migrate_project_updated_at/main.go
Normal file
@@ -0,0 +1,173 @@
package main

import (
	"flag"
	"fmt"
	"log"
	"sort"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/appstate"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"

	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

type projectTimestampRow struct {
	UUID      string
	UpdatedAt time.Time
}

type updatePlanRow struct {
	UUID            string
	Code            string
	Variant         string
	LocalUpdatedAt  time.Time
	ServerUpdatedAt time.Time
}

func main() {
	defaultLocalDBPath, err := appstate.ResolveDBPath("")
	if err != nil {
		log.Fatalf("failed to resolve default local SQLite path: %v", err)
	}

	localDBPath := flag.String("localdb", defaultLocalDBPath, "path to local SQLite database (default: user state dir or QFS_DB_PATH)")
	apply := flag.Bool("apply", false, "apply updates to local SQLite (default is preview only)")
	flag.Parse()

	local, err := localdb.New(*localDBPath)
	if err != nil {
		log.Fatalf("failed to initialize local database: %v", err)
	}
	defer local.Close()

	if !local.HasSettings() {
		log.Fatalf("MariaDB connection settings are not configured. Run qfs setup first.")
	}

	dsn, err := local.GetDSN()
	if err != nil {
		log.Fatalf("failed to build DSN from stored settings: %v", err)
	}

	db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Silent),
	})
	if err != nil {
		log.Fatalf("failed to connect to MariaDB: %v", err)
	}

	serverRows, err := loadServerProjects(db)
	if err != nil {
		log.Fatalf("failed to load server projects: %v", err)
	}

	localProjects, err := local.GetAllProjects(true)
	if err != nil {
		log.Fatalf("failed to load local projects: %v", err)
	}

	plan := buildUpdatePlan(localProjects, serverRows)
	printPlan(plan, *apply)

	if !*apply || len(plan) == 0 {
		return
	}

	updated := 0
	for i := range plan {
		project, err := local.GetProjectByUUID(plan[i].UUID)
		if err != nil {
			log.Printf("skip %s: load local project: %v", plan[i].UUID, err)
			continue
		}
		project.UpdatedAt = plan[i].ServerUpdatedAt
		if err := local.SaveProjectPreservingUpdatedAt(project); err != nil {
			log.Printf("skip %s: save local project: %v", plan[i].UUID, err)
			continue
		}
		updated++
	}

	log.Printf("updated %d local project timestamps", updated)
}

func loadServerProjects(db *gorm.DB) (map[string]time.Time, error) {
	var rows []projectTimestampRow
	if err := db.Model(&models.Project{}).
		Select("uuid, updated_at").
		Find(&rows).Error; err != nil {
		return nil, err
	}

	out := make(map[string]time.Time, len(rows))
	for _, row := range rows {
		if row.UUID == "" {
			continue
		}
		out[row.UUID] = row.UpdatedAt
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func buildUpdatePlan(localProjects []localdb.LocalProject, serverRows map[string]time.Time) []updatePlanRow {
|
||||
plan := make([]updatePlanRow, 0)
|
||||
for i := range localProjects {
|
||||
project := localProjects[i]
|
||||
serverUpdatedAt, ok := serverRows[project.UUID]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
if project.UpdatedAt.Equal(serverUpdatedAt) {
|
||||
continue
|
||||
}
|
||||
plan = append(plan, updatePlanRow{
|
||||
UUID: project.UUID,
|
||||
Code: project.Code,
|
||||
Variant: project.Variant,
|
||||
LocalUpdatedAt: project.UpdatedAt,
|
||||
ServerUpdatedAt: serverUpdatedAt,
|
||||
})
|
||||
}
|
||||
|
||||
sort.Slice(plan, func(i, j int) bool {
|
||||
if plan[i].Code != plan[j].Code {
|
||||
return plan[i].Code < plan[j].Code
|
||||
}
|
||||
return plan[i].Variant < plan[j].Variant
|
||||
})
|
||||
|
||||
return plan
|
||||
}
|
||||
|
||||
func printPlan(plan []updatePlanRow, apply bool) {
|
||||
mode := "preview"
|
||||
if apply {
|
||||
mode = "apply"
|
||||
}
|
||||
log.Printf("project updated_at resync mode=%s changes=%d", mode, len(plan))
|
||||
if len(plan) == 0 {
|
||||
log.Printf("no local project timestamps need resync")
|
||||
return
|
||||
}
|
||||
for _, row := range plan {
|
||||
variant := row.Variant
|
||||
if variant == "" {
|
||||
variant = "main"
|
||||
}
|
||||
log.Printf("%s [%s] local=%s server=%s", row.Code, variant, formatStamp(row.LocalUpdatedAt), formatStamp(row.ServerUpdatedAt))
|
||||
}
|
||||
if !apply {
|
||||
fmt.Println("Re-run with -apply to write server updated_at into local SQLite.")
|
||||
}
|
||||
}
|
||||
|
||||
func formatStamp(value time.Time) string {
|
||||
if value.IsZero() {
|
||||
return "zero"
|
||||
}
|
||||
return value.Format(time.RFC3339)
|
||||
}
|
||||
@@ -39,6 +39,10 @@ logging:
		t.Fatalf("load legacy config: %v", err)
	}
	setConfigDefaults(cfg)
	cfg.Server.Host, _, err = normalizeLoopbackServerHost(cfg.Server.Host)
	if err != nil {
		t.Fatalf("normalize server host: %v", err)
	}
	if err := migrateConfigFileToRuntimeShape(path, cfg); err != nil {
		t.Fatalf("migrate config: %v", err)
	}
@@ -60,7 +64,43 @@ logging:
	if !strings.Contains(text, "port: 9191") {
		t.Fatalf("migrated config did not preserve server port:\n%s", text)
	}
	if !strings.Contains(text, "host: 127.0.0.1") {
		t.Fatalf("migrated config did not normalize server host:\n%s", text)
	}
	if !strings.Contains(text, "level: debug") {
		t.Fatalf("migrated config did not preserve logging level:\n%s", text)
	}
}

func TestNormalizeLoopbackServerHost(t *testing.T) {
	t.Parallel()

	cases := []struct {
		host        string
		want        string
		wantChanged bool
		wantErr     bool
	}{
		{host: "127.0.0.1", want: "127.0.0.1", wantChanged: false, wantErr: false},
		{host: "localhost", want: "127.0.0.1", wantChanged: true, wantErr: false},
		{host: "::1", want: "127.0.0.1", wantChanged: true, wantErr: false},
		{host: "0.0.0.0", want: "127.0.0.1", wantChanged: true, wantErr: false},
		{host: "192.168.1.10", want: "127.0.0.1", wantChanged: true, wantErr: false},
	}

	for _, tc := range cases {
		got, changed, err := normalizeLoopbackServerHost(tc.host)
		if tc.wantErr && err == nil {
			t.Fatalf("expected error for host %q", tc.host)
		}
		if !tc.wantErr && err != nil {
			t.Fatalf("unexpected error for host %q: %v", tc.host, err)
		}
		if got != tc.want {
			t.Fatalf("unexpected normalized host for %q: got %q want %q", tc.host, got, tc.want)
		}
		if changed != tc.wantChanged {
			t.Fatalf("unexpected changed flag for %q: got %t want %t", tc.host, changed, tc.wantChanged)
		}
	}
}
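
The table above pins down a simple contract: an empty host is an error, and every non-empty host that is not already `127.0.0.1` is rewritten to `127.0.0.1` with `changed=true`. A standalone sketch of that rule (the function name here is illustrative, not the one in `cmd/qfs`):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// normalizeLoopback is a minimal re-statement of the contract exercised by
// the test table: the server must only ever bind to loopback, so every
// non-empty host collapses to 127.0.0.1.
func normalizeLoopback(host string) (string, bool, error) {
	trimmed := strings.TrimSpace(host)
	if trimmed == "" {
		return "", false, errors.New("server.host must not be empty")
	}
	const loopback = "127.0.0.1"
	if trimmed == loopback {
		return loopback, false, nil
	}
	// localhost, ::1, 0.0.0.0, and any LAN address all normalize the same way.
	return loopback, true, nil
}

func main() {
	for _, h := range []string{"127.0.0.1", "localhost", "::1", "0.0.0.0", "192.168.1.10"} {
		got, changed, _ := normalizeLoopback(h)
		fmt.Printf("%s -> %s (changed=%t)\n", h, got, changed)
	}
}
```

Because the output is always loopback, the `changed` flag exists only so the caller can log that a configured host was corrected.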
cmd/qfs/main.go (284 lines changed)
@@ -10,6 +10,7 @@ import (
	"io/fs"
	"log/slog"
	"math"
	"net"
	"net/http"
	"os"
	"os/exec"
@@ -43,11 +44,16 @@ import (

// Version is set via ldflags during build
var Version = "dev"
var errVendorImportTooLarge = errors.New("vendor workspace file exceeds 1 GiB limit")

const backgroundSyncInterval = 5 * time.Minute
const onDemandPullCooldown = 30 * time.Second
const startupConsoleWarning = "Не закрывайте консоль иначе приложение не будет работать"

var vendorImportMaxBytes int64 = 1 << 30

const vendorImportMultipartOverheadBytes int64 = 8 << 20

func main() {
	showStartupConsoleWarning()

@@ -142,6 +148,15 @@ func main() {
		}
	}
	setConfigDefaults(cfg)
	normalizedHost, changed, err := normalizeLoopbackServerHost(cfg.Server.Host)
	if err != nil {
		slog.Error("invalid server host", "host", cfg.Server.Host, "error", err)
		os.Exit(1)
	}
	if changed {
		slog.Warn("corrected server host to loopback", "from", cfg.Server.Host, "to", normalizedHost)
	}
	cfg.Server.Host = normalizedHost
	if err := migrateConfigFileToRuntimeShape(resolvedConfigPath, cfg); err != nil {
		slog.Error("failed to migrate config file format", "path", resolvedConfigPath, "error", err)
		os.Exit(1)
@@ -319,29 +334,47 @@ func setConfigDefaults(cfg *config.Config) {
	if cfg.Server.WriteTimeout == 0 {
		cfg.Server.WriteTimeout = 30 * time.Second
	}
	if cfg.Pricing.DefaultMethod == "" {
		cfg.Pricing.DefaultMethod = "weighted_median"
	}
	if cfg.Pricing.DefaultPeriodDays == 0 {
		cfg.Pricing.DefaultPeriodDays = 90
	}
	if cfg.Pricing.FreshnessGreenDays == 0 {
		cfg.Pricing.FreshnessGreenDays = 30
	}
	if cfg.Pricing.FreshnessYellowDays == 0 {
		cfg.Pricing.FreshnessYellowDays = 60
	}
	if cfg.Pricing.FreshnessRedDays == 0 {
		cfg.Pricing.FreshnessRedDays = 90
	}
	if cfg.Pricing.MinQuotesForMedian == 0 {
		cfg.Pricing.MinQuotesForMedian = 3
	}
	if cfg.Backup.Time == "" {
		cfg.Backup.Time = "00:00"
	}
}

func normalizeLoopbackServerHost(host string) (string, bool, error) {
	trimmed := strings.TrimSpace(host)
	if trimmed == "" {
		return "", false, fmt.Errorf("server.host must not be empty")
	}
	const loopbackHost = "127.0.0.1"
	if trimmed == loopbackHost {
		return loopbackHost, false, nil
	}
	if strings.EqualFold(trimmed, "localhost") {
		return loopbackHost, true, nil
	}

	ip := net.ParseIP(strings.Trim(trimmed, "[]"))
	if ip != nil {
		if ip.IsLoopback() || ip.IsUnspecified() {
			return loopbackHost, trimmed != loopbackHost, nil
		}
		return loopbackHost, true, nil
	}

	return loopbackHost, true, nil
}

func vendorImportBodyLimit() int64 {
	return vendorImportMaxBytes + vendorImportMultipartOverheadBytes
}

func isVendorImportTooLarge(fileSize int64, err error) bool {
	if fileSize > vendorImportMaxBytes {
		return true
	}
	var maxBytesErr *http.MaxBytesError
	return errors.As(err, &maxBytesErr)
}

func ensureDefaultConfigFile(configPath string) error {
	if strings.TrimSpace(configPath) == "" {
		return fmt.Errorf("config path is empty")
@@ -747,6 +780,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
	pricelistHandler := handlers.NewPricelistHandler(local)
	vendorSpecHandler := handlers.NewVendorSpecHandler(local)
	partnumberBooksHandler := handlers.NewPartnumberBooksHandler(local)
	respondError := handlers.RespondError
	syncHandler, err := handlers.NewSyncHandler(local, syncService, connMgr, templatesPath, backgroundSyncInterval)
	if err != nil {
		return nil, nil, fmt.Errorf("creating sync handler: %w", err)
@@ -766,6 +800,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

	// Router
	router := gin.New()
	router.MaxMultipartMemory = vendorImportBodyLimit()
	router.Use(gin.Recovery())
	router.Use(requestLogger())
	router.Use(middleware.CORS())
@@ -786,17 +821,17 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		})
	})

	// Restart endpoint (for development purposes)
	router.POST("/api/restart", func(c *gin.Context) {
		// This will cause the server to restart by exiting
		// The restartProcess function will be called to restart the process
		slog.Info("Restart requested via API")
		go func() {
			time.Sleep(100 * time.Millisecond)
			restartProcess()
		}()
		c.JSON(http.StatusOK, gin.H{"message": "restarting..."})
	})
	// Restart endpoint is intentionally debug-only.
	if cfg.Server.Mode == "debug" {
		router.POST("/api/restart", func(c *gin.Context) {
			slog.Info("Restart requested via API")
			go func() {
				time.Sleep(100 * time.Millisecond)
				restartProcess()
			}()
			c.JSON(http.StatusOK, gin.H{"message": "restarting..."})
		})
	}

	// DB status endpoint
	router.GET("/api/db-status", func(c *gin.Context) {
@@ -928,7 +963,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

		cfgs, total, err := configService.ListAllWithStatus(page, perPage, status, search)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}

@@ -949,7 +984,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
				c.JSON(http.StatusServiceUnavailable, gin.H{"error": "Database is offline"})
				return
			}
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}
		c.JSON(http.StatusOK, result)
@@ -958,13 +993,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
	configs.POST("", func(c *gin.Context) {
		var req services.CreateConfigRequest
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}

		config, err := configService.Create(dbUsername, &req)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}

@@ -974,12 +1009,12 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
	configs.POST("/preview-article", func(c *gin.Context) {
		var req services.ArticlePreviewRequest
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		result, err := configService.BuildArticlePreview(&req)
		if err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		c.JSON(http.StatusOK, gin.H{
@@ -1002,7 +1037,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		uuid := c.Param("uuid")
		var req services.CreateConfigRequest
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}

@@ -1010,13 +1045,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		if err != nil {
			switch {
			case errors.Is(err, services.ErrConfigNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectForbidden):
				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
				respondError(c, http.StatusForbidden, "access denied", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1027,7 +1062,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
	configs.DELETE("/:uuid", func(c *gin.Context) {
		uuid := c.Param("uuid")
		if err := configService.DeleteNoAuth(uuid); err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}
		c.JSON(http.StatusOK, gin.H{"message": "archived"})
@@ -1037,7 +1072,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		uuid := c.Param("uuid")
		config, err := configService.ReactivateNoAuth(uuid)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}
		c.JSON(http.StatusOK, gin.H{
@@ -1052,13 +1087,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			Name string `json:"name"`
		}
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}

		config, err := configService.RenameNoAuth(uuid, req.Name)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}

@@ -1072,7 +1107,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			FromVersion int `json:"from_version"`
		}
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}

@@ -1082,7 +1117,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
				c.JSON(http.StatusNotFound, gin.H{"error": "version not found"})
				return
			}
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}

@@ -1091,9 +1126,14 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

	configs.POST("/:uuid/refresh-prices", func(c *gin.Context) {
		uuid := c.Param("uuid")
		config, err := configService.RefreshPricesNoAuth(uuid)
		var req struct {
			PricelistID *uint `json:"pricelist_id"`
		}
		// Ignore bind error — pricelist_id is optional
		_ = c.ShouldBindJSON(&req)
		config, err := configService.RefreshPricesNoAuth(uuid, req.PricelistID)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}
		c.JSON(http.StatusOK, config)
@@ -1105,20 +1145,20 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			ProjectUUID string `json:"project_uuid"`
		}
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		updated, err := configService.SetProjectNoAuth(uuid, req.ProjectUUID)
		if err != nil {
			switch {
			case errors.Is(err, services.ErrConfigNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectForbidden):
				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
				respondError(c, http.StatusForbidden, "access denied", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1147,7 +1187,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			case errors.Is(err, services.ErrInvalidVersionNumber):
				c.JSON(http.StatusBadRequest, gin.H{"error": "invalid paging params"})
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1175,7 +1215,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			case errors.Is(err, services.ErrConfigVersionNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": "version not found"})
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1190,7 +1230,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			Note string `json:"note"`
		}
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		if req.TargetVersion <= 0 {
@@ -1208,7 +1248,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			case errors.Is(err, services.ErrVersionConflict):
				c.JSON(http.StatusConflict, gin.H{"error": "version conflict"})
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1243,12 +1283,12 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			ServerCount int `json:"server_count" binding:"required,min=1"`
		}
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		config, err := configService.UpdateServerCount(uuid, req.ServerCount)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}
		c.JSON(http.StatusOK, config)
@@ -1293,7 +1333,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

		allProjects, err := projectService.ListByUser(dbUsername, true)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}

@@ -1427,7 +1467,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
	projects.GET("/all", func(c *gin.Context) {
		allProjects, err := projectService.ListByUser(dbUsername, true)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}

@@ -1457,7 +1497,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
	projects.POST("", func(c *gin.Context) {
		var req services.CreateProjectRequest
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		if strings.TrimSpace(req.Code) == "" {
@@ -1467,10 +1507,12 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		project, err := projectService.Create(dbUsername, &req)
		if err != nil {
			switch {
			case errors.Is(err, services.ErrReservedMainVariant):
				respondError(c, http.StatusBadRequest, "invalid request", err)
			case errors.Is(err, services.ErrProjectCodeExists):
				c.JSON(http.StatusConflict, gin.H{"error": err.Error()})
				respondError(c, http.StatusConflict, "conflict detected", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1482,11 +1524,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		if err != nil {
			switch {
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectForbidden):
				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
				respondError(c, http.StatusForbidden, "access denied", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1496,20 +1538,23 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
	projects.PUT("/:uuid", func(c *gin.Context) {
		var req services.UpdateProjectRequest
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		project, err := projectService.Update(c.Param("uuid"), dbUsername, &req)
		if err != nil {
			switch {
			case errors.Is(err, services.ErrReservedMainVariant),
				errors.Is(err, services.ErrCannotRenameMainVariant):
				respondError(c, http.StatusBadRequest, "invalid request", err)
			case errors.Is(err, services.ErrProjectCodeExists):
				c.JSON(http.StatusConflict, gin.H{"error": err.Error()})
				respondError(c, http.StatusConflict, "conflict detected", err)
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectForbidden):
				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
				respondError(c, http.StatusForbidden, "access denied", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1520,11 +1565,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		if err := projectService.Archive(c.Param("uuid"), dbUsername); err != nil {
			switch {
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectForbidden):
				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
				respondError(c, http.StatusForbidden, "access denied", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1535,11 +1580,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		if err := projectService.Reactivate(c.Param("uuid"), dbUsername); err != nil {
			switch {
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectForbidden):
				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
				respondError(c, http.StatusForbidden, "access denied", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1550,13 +1595,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		if err := projectService.DeleteVariant(c.Param("uuid"), dbUsername); err != nil {
			switch {
			case errors.Is(err, services.ErrCannotDeleteMainVariant):
				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
				respondError(c, http.StatusBadRequest, "invalid request", err)
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectForbidden):
				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
				respondError(c, http.StatusForbidden, "access denied", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1576,11 +1621,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		if err != nil {
			switch {
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			case errors.Is(err, services.ErrProjectForbidden):
				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
				respondError(c, http.StatusForbidden, "access denied", err)
			default:
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
				respondError(c, http.StatusInternalServerError, "internal server error", err)
			}
			return
		}
@@ -1593,7 +1638,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			OrderedUUIDs []string `json:"ordered_uuids"`
		}
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		if len(req.OrderedUUIDs) == 0 {
@@ -1605,9 +1650,9 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		if err != nil {
			switch {
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			default:
				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
				respondError(c, http.StatusBadRequest, "invalid request", err)
			}
			return
		}
@@ -1628,7 +1673,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
	projects.POST("/:uuid/configs", func(c *gin.Context) {
		var req services.CreateConfigRequest
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
			return
		}
		projectUUID := c.Param("uuid")
@@ -1636,29 +1681,42 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

		config, err := configService.Create(dbUsername, &req)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			respondError(c, http.StatusInternalServerError, "internal server error", err)
			return
		}
		c.JSON(http.StatusCreated, config)
	})

	projects.POST("/:uuid/vendor-import", func(c *gin.Context) {
		c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, vendorImportBodyLimit())
		fileHeader, err := c.FormFile("file")
		if err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": "file is required"})
			if isVendorImportTooLarge(0, err) {
				respondError(c, http.StatusBadRequest, "vendor workspace file exceeds 1 GiB limit", errVendorImportTooLarge)
				return
			}
			respondError(c, http.StatusBadRequest, "file is required", err)
			return
		}
		if isVendorImportTooLarge(fileHeader.Size, nil) {
			respondError(c, http.StatusBadRequest, "vendor workspace file exceeds 1 GiB limit", errVendorImportTooLarge)
			return
		}

		file, err := fileHeader.Open()
		if err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": "failed to open uploaded file"})
			respondError(c, http.StatusBadRequest, "failed to open uploaded file", err)
			return
		}
		defer file.Close()

		data, err := io.ReadAll(file)
		data, err := io.ReadAll(io.LimitReader(file, vendorImportMaxBytes+1))
		if err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": "failed to read uploaded file"})
			respondError(c, http.StatusBadRequest, "failed to read uploaded file", err)
			return
		}
		if int64(len(data)) > vendorImportMaxBytes {
			respondError(c, http.StatusBadRequest, "vendor workspace file exceeds 1 GiB limit", errVendorImportTooLarge)
			return
		}
		if !services.IsCFXMLWorkspace(data) {
@@ -1670,9 +1728,9 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
		if err != nil {
			switch {
			case errors.Is(err, services.ErrProjectNotFound):
				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
				respondError(c, http.StatusNotFound, "resource not found", err)
			default:
				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
				respondError(c, http.StatusBadRequest, "invalid request", err)
			}
			return
		}
@@ -1688,14 +1746,14 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
			Name string `json:"name"`
		}
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			respondError(c, http.StatusBadRequest, "invalid request", err)
|
||||
return
|
||||
}
|
||||
|
||||
projectUUID := c.Param("uuid")
|
||||
config, err := configService.CloneNoAuthToProject(c.Param("config_uuid"), req.Name, dbUsername, &projectUUID)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
respondError(c, http.StatusInternalServerError, "internal server error", err)
|
||||
return
|
||||
}
|
||||
c.JSON(http.StatusCreated, config)
|
||||
@@ -1769,22 +1827,12 @@ func requestLogger() gin.HandlerFunc {
|
||||
path := c.Request.URL.Path
|
||||
query := c.Request.URL.RawQuery
|
||||
|
||||
blw := &captureResponseWriter{
|
||||
ResponseWriter: c.Writer,
|
||||
body: bytes.NewBuffer(nil),
|
||||
}
|
||||
c.Writer = blw
|
||||
|
||||
c.Next()
|
||||
|
||||
latency := time.Since(start)
|
||||
status := c.Writer.Status()
|
||||
|
||||
if status >= http.StatusBadRequest {
|
||||
responseBody := strings.TrimSpace(blw.body.String())
|
||||
if len(responseBody) > 2048 {
|
||||
responseBody = responseBody[:2048] + "...(truncated)"
|
||||
}
|
||||
errText := strings.TrimSpace(c.Errors.String())
|
||||
|
||||
slog.Error("request failed",
|
||||
@@ -1795,7 +1843,6 @@ func requestLogger() gin.HandlerFunc {
|
||||
"latency", latency,
|
||||
"ip", c.ClientIP(),
|
||||
"errors", errText,
|
||||
"response", responseBody,
|
||||
)
|
||||
return
|
||||
}
|
||||
@@ -1810,22 +1857,3 @@ func requestLogger() gin.HandlerFunc {
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
type captureResponseWriter struct {
|
||||
gin.ResponseWriter
|
||||
body *bytes.Buffer
|
||||
}
|
||||
|
||||
func (w *captureResponseWriter) Write(b []byte) (int, error) {
|
||||
if len(b) > 0 {
|
||||
_, _ = w.body.Write(b)
|
||||
}
|
||||
return w.ResponseWriter.Write(b)
|
||||
}
|
||||
|
||||
func (w *captureResponseWriter) WriteString(s string) (int, error) {
|
||||
if s != "" {
|
||||
_, _ = w.body.WriteString(s)
|
||||
}
|
||||
return w.ResponseWriter.WriteString(s)
|
||||
}
|
||||
|
||||
48
cmd/qfs/request_logger_test.go
Normal file
@@ -0,0 +1,48 @@
package main

import (
"bytes"
"errors"
"log/slog"
"net/http"
"net/http/httptest"
"strings"
"testing"

"github.com/gin-gonic/gin"
)

func TestRequestLoggerDoesNotLogResponseBody(t *testing.T) {
gin.SetMode(gin.TestMode)

var logBuffer bytes.Buffer
previousLogger := slog.Default()
slog.SetDefault(slog.New(slog.NewTextHandler(&logBuffer, &slog.HandlerOptions{})))
defer slog.SetDefault(previousLogger)

router := gin.New()
router.Use(requestLogger())
router.GET("/fail", func(c *gin.Context) {
_ = c.Error(errors.New("root cause"))
c.JSON(http.StatusBadRequest, gin.H{"error": "do not log this body"})
})

rec := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, "/fail?debug=1", nil)
router.ServeHTTP(rec, req)

if rec.Code != http.StatusBadRequest {
t.Fatalf("expected 400, got %d", rec.Code)
}

logOutput := logBuffer.String()
if !strings.Contains(logOutput, "request failed") {
t.Fatalf("expected request failure log, got %q", logOutput)
}
if strings.Contains(logOutput, "do not log this body") {
t.Fatalf("response body leaked into logs: %q", logOutput)
}
if !strings.Contains(logOutput, "root cause") {
t.Fatalf("expected error details in logs, got %q", logOutput)
}
}
@@ -3,10 +3,12 @@ package main
import (
"bytes"
"encoding/json"
"mime/multipart"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strings"
"testing"

"git.mchus.pro/mchus/quoteforge/internal/config"
@@ -290,6 +292,88 @@ func TestConfigMoveToProjectEndpoint(t *testing.T) {
}
}

func TestVendorImportRejectsOversizedUpload(t *testing.T) {
moveToRepoRoot(t)

prevLimit := vendorImportMaxBytes
vendorImportMaxBytes = 128
defer func() { vendorImportMaxBytes = prevLimit }()

local, connMgr, _ := newAPITestStack(t)
cfg := &config.Config{}
setConfigDefaults(cfg)
router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
if err != nil {
t.Fatalf("setup router: %v", err)
}

createProjectReq := httptest.NewRequest(http.MethodPost, "/api/projects", bytes.NewReader([]byte(`{"name":"Import Project","code":"IMP"}`)))
createProjectReq.Header.Set("Content-Type", "application/json")
createProjectRec := httptest.NewRecorder()
router.ServeHTTP(createProjectRec, createProjectReq)
if createProjectRec.Code != http.StatusCreated {
t.Fatalf("create project status=%d body=%s", createProjectRec.Code, createProjectRec.Body.String())
}

var project models.Project
if err := json.Unmarshal(createProjectRec.Body.Bytes(), &project); err != nil {
t.Fatalf("unmarshal project: %v", err)
}

var body bytes.Buffer
writer := multipart.NewWriter(&body)
part, err := writer.CreateFormFile("file", "huge.xml")
if err != nil {
t.Fatalf("create form file: %v", err)
}
payload := "<CFXML>" + strings.Repeat("A", int(vendorImportMaxBytes)+1) + "</CFXML>"
if _, err := part.Write([]byte(payload)); err != nil {
t.Fatalf("write multipart payload: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("close multipart writer: %v", err)
}

req := httptest.NewRequest(http.MethodPost, "/api/projects/"+project.UUID+"/vendor-import", &body)
req.Header.Set("Content-Type", writer.FormDataContentType())
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)

if rec.Code != http.StatusBadRequest {
t.Fatalf("expected 400 for oversized upload, got %d body=%s", rec.Code, rec.Body.String())
}
if !strings.Contains(rec.Body.String(), "1 GiB") {
t.Fatalf("expected size limit message, got %s", rec.Body.String())
}
}

func TestCreateConfigMalformedJSONReturnsGenericError(t *testing.T) {
moveToRepoRoot(t)

local, connMgr, _ := newAPITestStack(t)
cfg := &config.Config{}
setConfigDefaults(cfg)
router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
if err != nil {
t.Fatalf("setup router: %v", err)
}

req := httptest.NewRequest(http.MethodPost, "/api/configs", bytes.NewReader([]byte(`{"name":`)))
req.Header.Set("Content-Type", "application/json")
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)

if rec.Code != http.StatusBadRequest {
t.Fatalf("expected 400 for malformed json, got %d body=%s", rec.Code, rec.Body.String())
}
if strings.Contains(strings.ToLower(rec.Body.String()), "unexpected eof") {
t.Fatalf("expected sanitized error body, got %s", rec.Body.String())
}
if !strings.Contains(rec.Body.String(), "invalid request") {
t.Fatalf("expected generic invalid request message, got %s", rec.Body.String())
}
}

func newAPITestStack(t *testing.T) (*localdb.LocalDB, *db.ConnectionManager, *services.LocalConfigurationService) {
t.Helper()

@@ -1,56 +1,18 @@
# QuoteForge Configuration
# Copy this file to config.yaml and update values
# QuoteForge runtime config
# Runtime creates a minimal config automatically on first start.
# This file is only a reference template.

server:
host: "127.0.0.1" # Use 0.0.0.0 to listen on all interfaces
host: "127.0.0.1" # Loopback only; remote HTTP binding is unsupported
port: 8080
mode: "release" # debug | release
read_timeout: "30s"
write_timeout: "30s"

database:
host: "localhost"
port: 3306
name: "RFQ_LOG"
user: "quoteforge"
password: "CHANGE_ME"
max_open_conns: 25
max_idle_conns: 5
conn_max_lifetime: "5m"

pricing:
default_method: "weighted_median" # median | average | weighted_median
default_period_days: 90
freshness_green_days: 30
freshness_yellow_days: 60
freshness_red_days: 90
min_quotes_for_median: 3
popularity_decay_days: 180

export:
temp_dir: "/tmp/quoteforge-exports"
max_file_age: "1h"
company_name: "Your Company Name"

backup:
time: "00:00"

alerts:
enabled: true
check_interval: "1h"
high_demand_threshold: 5 # quotations in the last 30 days
trending_threshold_percent: 50 # growth % that triggers an alert

notifications:
email_enabled: false
smtp_host: "smtp.example.com"
smtp_port: 587
smtp_user: ""
smtp_password: ""
from_address: "quoteforge@example.com"

logging:
level: "info" # debug | info | warn | error
format: "json" # json | text
output: "stdout" # stdout | file
file_path: "/var/log/quoteforge/app.log"
format: "json" # json | text
output: "stdout" # stdout | stderr | /path/to/file
213
docs/storage-components-guide.md
Normal file
@@ -0,0 +1,213 @@
# Building the storage-array LOT catalog

## What a LOT is and why it exists

A LOT is QuoteForge's internal identifier for a component type.

Each LOT represents a single market position and stores a **weighted market price** computed from historical supplier data. This gives an up-to-date cost estimate independent of any particular supplier or pricelist.
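The weighted market price can be sketched as a weighted median over supplier quotes, which matches the `weighted_median` default in `config.yaml`. The `quote` shape and the weights below are illustrative assumptions, not QuoteForge's actual pricing model:

```go
package main

import (
	"fmt"
	"sort"
)

// quote is an illustrative shape: a supplier price plus a weight
// (e.g. recency or quantity). The real model lives in the pricelist sync.
type quote struct {
	price  float64
	weight float64
}

// weightedMedian returns the price at which the cumulative weight
// first reaches half of the total weight.
func weightedMedian(quotes []quote) float64 {
	if len(quotes) == 0 {
		return 0
	}
	sort.Slice(quotes, func(i, j int) bool { return quotes[i].price < quotes[j].price })
	var total float64
	for _, q := range quotes {
		total += q.weight
	}
	var acc float64
	for _, q := range quotes {
		acc += q.weight
		if acc >= total/2 {
			return q.price
		}
	}
	return quotes[len(quotes)-1].price
}

func main() {
	// Heavily weighted middle quote dominates the estimate.
	fmt.Println(weightedMedian([]quote{{100, 1}, {200, 3}, {300, 1}})) // 200
}
```

With equal weights this degrades to a plain median, which is why the config also offers `median` and `average` as alternatives.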
Vendor part numbers (Part Number, Feature Code) carry no price of their own in the system — they are **translated into LOTs** through the partnumber book. Configuration pricing always goes through LOTs.

**Example:** Feature Code `B4B9` and Part Number `4C57A14368` are two different designations of the same Lenovo HIC card. Both map to the single LOT `HIC_4pFC32`, which has a market price.

---
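The translation step can be sketched as a plain lookup. The in-memory map, the `resolveLot` helper, and the normalization are assumptions for illustration; only the `B4B9` / `4C57A14368` → `HIC_4pFC32` mapping comes from the example above:

```go
package main

import (
	"fmt"
	"strings"
)

// partBook is a minimal in-memory stand-in for the partnumber book:
// both Feature Codes and Part Numbers resolve to the same LOT.
var partBook = map[string]string{
	"B4B9":       "HIC_4pFC32",
	"4C57A14368": "HIC_4pFC32",
}

// resolveLot normalizes a vendor identifier and looks it up in the book.
func resolveLot(partnumber string) (string, bool) {
	lot, ok := partBook[strings.ToUpper(strings.TrimSpace(partnumber))]
	return lot, ok
}

func main() {
	for _, pn := range []string{"B4B9", "4c57a14368", "UNKNOWN"} {
		if lot, ok := resolveLot(pn); ok {
			fmt.Printf("%s -> %s\n", pn, lot)
		} else {
			fmt.Printf("%s -> (not mapped)\n", pn)
		}
	}
}
```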
## Categories and configurator tabs

A LOT's category determines which configurator tab it appears in.

| Category code | Name | Tab | What belongs here |
|---|---|---|---|
| `ENC` | Storage Enclosure | **Base** | Disk shelf without a controller |
| `DKC` | Disk/Controller Enclosure | **Base** | Controller shelf: array model + disk type + slot count + controller count |
| `CTL` | Storage Controller | **Base** | Array controller: cache size + built-in host ports |
| `HIC` | Host Interface Card | **PCI** | Array HIC cards: host connectivity (FC, iSCSI, SAS) |
| `HDD` | HDD | **Storage** | Hard disk drives (HDD) |
| `SSD` | SSD | **Storage** | Solid-state drives (SSD, NVMe) |
| `ACC` | Accessories | **Accessories** | Connection cables, power cables |
| `SW` | Software | **SW** | Software licenses |
| *(other)* | — | **Other** | Warranty options, installation |

---

## LOT naming rules

Format: `CATEGORY_ARRAYMODEL_SPECIFICS`

- Latin letters, digits, and `_` only
- UPPERCASE throughout
- no spaces, hyphens, or dots
- every LOT is unique: two different components cannot share a name
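The rules above can be checked mechanically. This validator is an illustrative sketch, not part of QuoteForge; note that some pre-existing server-catalog LOTs (e.g. `HDD_SAS_02.4TB`) contain dots, so the check applies to newly minted names:

```go
package main

import (
	"fmt"
	"regexp"
)

// lotNamePattern encodes the stated rules: uppercase Latin letters,
// digits, and underscores only.
var lotNamePattern = regexp.MustCompile(`^[A-Z0-9_]+$`)

func isValidLotName(name string) bool {
	return lotNamePattern.MatchString(name)
}

func main() {
	fmt.Println(isValidLotName("DKC_DE4000H_SFF_24_2CTRL")) // true
	fmt.Println(isValidLotName("hic 4pFC32"))               // false: lowercase and space
}
```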
### DKC — controller shelf

Specifics: `DISKTYPE_SLOTS_NCTRL`

| Example | Meaning |
|---|---|
| `DKC_DE4000H_SFF_24_2CTRL` | DE4000H, 24 SFF (2.5") slots, 2 controllers |
| `DKC_DE4000H_LFF_12_2CTRL` | DE4000H, 12 LFF (3.5") slots, 2 controllers |
| `DKC_DE4000H_SFF_24_1CTRL` | DE4000H, 24 SFF slots, 1 controller (simplex) |

Disk type designations: `SFF` — 2.5", `LFF` — 3.5", `NVMe` — U.2/U.3.

### CTL — controller

Specifics: `CACHEGB_PORTTYPE` (when built-in host ports exist) or `CACHEGB_BASE` (no built-in ports; connectivity is added via HIC)

| Example | Meaning |
|---|---|
| `CTL_DE4000H_32GB_BASE` | 32GB cache, no built-in host ports |
| `CTL_DE4000H_8GB_BASE` | 8GB cache, no built-in host ports |
| `CTL_MSA2060_8GB_ISCSI10G_4P` | 8GB cache, built-in 4× iSCSI 10GbE |

### HIC — HIC cards (host connectivity)

Specifics: `NpPROTOCOL` — not tied to an array model, by analogy with the server-side `HBA_2pFC16`, `HBA_4pFC32_Gen6`.

| Example | Meaning |
|---|---|
| `HIC_4pFC32` | 4× FC 32Gb ports |
| `HIC_4pFC16` | 4× FC 16G/10GbE ports |
| `HIC_4p25G_iSCSI` | 4× 25G iSCSI ports |
| `HIC_4p12G_SAS` | 4× SAS 12Gb ports |
| `HIC_2p10G_BaseT` | 2× 10G Base-T ports |

### HDD / SSD / NVMe — disks

Disks are **not tied to an array model** — the existing LOTs from the server catalog are reused (`HDD_...`, `SSD_...`, `NVME_...`). No new LOTs are created for array disks; disk partnumbers map onto the existing server LOTs.

### ACC — cables

Cables are **not tied to an array model**. Format: `ACC_CABLE_{TYPE}_{LENGTH}` — universal LOTs shared by servers and storage arrays.

| Example | Meaning |
|---|---|
| `ACC_CABLE_CAT6_10M` | CAT6 cable, 10 m |
| `ACC_CABLE_FC_OM4_3M` | FC LC-LC OM4 cable, up to 3 m |
| `ACC_CABLE_PWR_C13C14_15M` | C13–C14 power cable, 1.5 m |

### SW — software licenses

Specifics: a short name of the licensed function.

| Example | Meaning |
|---|---|
| `SW_DE4000H_ASYNC_MIRROR` | Async Mirroring |
| `SW_DE4000H_SNAPSHOT_512` | Snapshot 512 |

---

## Lot table: DE4000H (worked example)

### DKC — controller shelf

| lot_name | vendor | model | description | disk_slots | disk_type | controllers |
|---|---|---|---|---|---|---|
| `DKC_DE4000H_SFF_24_2CTRL` | Lenovo | DE4000H 2U24 | DE4000H, 24× SFF, 2 controllers | 24 | SFF | 2 |
| `DKC_DE4000H_LFF_12_2CTRL` | Lenovo | DE4000H 2U12 | DE4000H, 12× LFF, 2 controllers | 12 | LFF | 2 |

### CTL — controller

| lot_name | vendor | model | description | cache_gb | host_ports |
|---|---|---|---|---|---|
| `CTL_DE4000H_32GB_BASE` | Lenovo | DE4000 Controller 32GB Gen2 | DE4000 controller, 32GB cache, no built-in ports | 32 | — |
| `CTL_DE4000H_8GB_BASE` | Lenovo | DE4000 Controller 8GB Gen2 | DE4000 controller, 8GB cache, no built-in ports | 8 | — |

### HIC — HIC cards

| lot_name | vendor | model | description |
|---|---|---|---|
| `HIC_2p10G_BaseT` | Lenovo | HIC 10GBASE-T 2-Ports | HIC 10GBASE-T, 2 ports |
| `HIC_4p25G_iSCSI` | Lenovo | HIC 10/25GbE iSCSI 4-ports | HIC iSCSI 10/25GbE, 4 ports |
| `HIC_4p12G_SAS` | Lenovo | HIC 12Gb SAS 4-ports | HIC SAS 12Gb, 4 ports |
| `HIC_4pFC32` | Lenovo | HIC 32Gb FC 4-ports | HIC FC 32Gb, 4 ports |
| `HIC_4pFC16` | Lenovo | HIC 16G FC/10GbE 4-ports | HIC FC 16G/10GbE, 4 ports |

### HDD / SSD / NVMe / ACC — disks and cables

No new LOTs are created for disks and cables. Their partnumbers map onto the existing server LOTs in the catalog.

### SW — software licenses

| lot_name | vendor | model | description |
|---|---|---|---|
| `SW_DE4000H_ASYNC_MIRROR` | Lenovo | DE4000H Asynchronous Mirroring | Async Mirroring license |
| `SW_DE4000H_SNAPSHOT_512` | Lenovo | DE4000H Snapshot Upgrade 512 | Snapshot 512 license |
| `SW_DE4000H_SYNC_MIRROR` | Lenovo | DE4000 Synchronous Mirroring | Sync Mirroring license |

---

## Partnumber table: DE4000H (worked example)

Every Feature Code and Part Number must be tied to its LOT.
If a component has both, add two rows.

| partnumber | lot_name | description |
|---|---|---|
| `BEY7` | `ENC_2U24_CHASSIS` | Lenovo ThinkSystem Storage 2U24 Chassis |
| `BQA0` | `CTL_DE4000H_32GB_BASE` | DE4000 Controller 32GB Gen2 |
| `BQ9Z` | `CTL_DE4000H_8GB_BASE` | DE4000 Controller 8GB Gen2 |
| `B4B1` | `HIC_2p10G_BaseT` | HIC 10GBASE-T 2-Ports |
| `4C57A14376` | `HIC_2p10G_BaseT` | HIC 10GBASE-T 2-Ports |
| `B4BA` | `HIC_4p25G_iSCSI` | HIC 10/25GbE iSCSI 4-ports |
| `4C57A14369` | `HIC_4p25G_iSCSI` | HIC 10/25GbE iSCSI 4-ports |
| `B4B8` | `HIC_4p12G_SAS` | HIC 12Gb SAS 4-ports |
| `4C57A14367` | `HIC_4p12G_SAS` | HIC 12Gb SAS 4-ports |
| `B4B9` | `HIC_4pFC32` | HIC 32Gb FC 4-ports |
| `4C57A14368` | `HIC_4pFC32` | HIC 32Gb FC 4-ports |
| `B4B7` | `HIC_4pFC16` | HIC 16G FC/10GbE 4-ports |
| `4C57A14366` | `HIC_4pFC16` | HIC 16G FC/10GbE 4-ports |
| `BW12` | `HDD_SAS_02.4TB` | 2.4TB 10K 2.5" HDD 2U24 |
| `4XB7A88046` | `HDD_SAS_02.4TB` | 2.4TB 10K 2.5" HDD 2U24 |
| `B4C0` | `HDD_SAS_01.8TB` | 1.8TB 10K 2.5" HDD SED FIPS |
| `4XB7A14114` | `HDD_SAS_01.8TB` | 1.8TB 10K 2.5" HDD SED FIPS |
| `BW13` | `HDD_SAS_02.4TB` | 2.4TB 10K 2.5" HDD FIPS |
| `4XB7A88048` | `HDD_SAS_02.4TB` | 2.4TB 10K 2.5" HDD FIPS |
| `BKUQ` | `SSD_SAS_0.960T` | 960GB 1DWD 2.5" SSD |
| `4XB7A74948` | `SSD_SAS_0.960T` | 960GB 1DWD 2.5" SSD |
| `BKUT` | `SSD_SAS_01.92T` | 1.92TB 1DWD 2.5" SSD |
| `4XB7A74951` | `SSD_SAS_01.92T` | 1.92TB 1DWD 2.5" SSD |
| `BKUK` | `SSD_SAS_03.84T` | 3.84TB 1DWD 2.5" SSD |
| `4XB7A74955` | `SSD_SAS_03.84T` | 3.84TB 1DWD 2.5" SSD |
| `B4RY` | `SSD_SAS_07.68T` | 7.68TB 1DWD 2.5" SSD |
| `4XB7A14176` | `SSD_SAS_07.68T` | 7.68TB 1DWD 2.5" SSD |
| `B4CD` | `SSD_SAS_15.36T` | 15.36TB 1DWD 2.5" SSD |
| `4XB7A14110` | `SSD_SAS_15.36T` | 15.36TB 1DWD 2.5" SSD |
| `BWCJ` | `SSD_SAS_03.84T` | 3.84TB 1DWD 2.5" SSD FIPS |
| `4XB7A88469` | `SSD_SAS_03.84T` | 3.84TB 1DWD 2.5" SSD FIPS |
| `BW2B` | `SSD_SAS_15.36T` | 15.36TB 1DWD 2.5" SSD SED |
| `4XB7A88466` | `SSD_SAS_15.36T` | 15.36TB 1DWD 2.5" SSD SED |
| `AVFW` | `ACC_CABLE_CAT6_1M` | CAT6 0.75-1.5m |
| `A1MT` | `ACC_CABLE_CAT6_10M` | CAT6 10m |
| `90Y3718` | `ACC_CABLE_CAT6_10M` | CAT6 10m |
| `A1MW` | `ACC_CABLE_CAT6_25M` | CAT6 25m |
| `90Y3727` | `ACC_CABLE_CAT6_25M` | CAT6 25m |
| `39Y7937` | `ACC_CABLE_PWR_C13C14_15M` | C13–C14 1.5m |
| `39Y7938` | `ACC_CABLE_PWR_C13C20_28M` | C13–C20 2.8m |
| `4L67A08371` | `ACC_CABLE_PWR_C13C14_43M` | C13–C14 4.3m |
| `C932` | `SW_DE4000H_ASYNC_MIRROR` | DE4000H Asynchronous Mirroring |
| `00WE123` | `SW_DE4000H_ASYNC_MIRROR` | DE4000H Asynchronous Mirroring |
| `C930` | `SW_DE4000H_SNAPSHOT_512` | DE4000H Snapshot Upgrade 512 |
| `C931` | `SW_DE4000H_SYNC_MIRROR` | DE4000 Synchronous Mirroring |

---

## Template for new array models

```
DKC_MODEL_DISKTYPE_SLOTS_NCTRL — controller shelf
CTL_MODEL_CACHEGB_PORTS — controller
HIC_MODEL_PROTOCOL_SPEED_PORTS — HIC card (host connectivity)
SW_MODEL_FUNCTION — license
```

Disks (HDD/SSD/NVMe) and cables (ACC) map onto the existing server LOTs; no new ones are created.

Example for HPE MSA 2060:
```
DKC_MSA2060_SFF_24_2CTRL
CTL_MSA2060_8GB_ISCSI10G_4P
HIC_MSA2060_FC32G_2P
SW_MSA2060_REMOTE_SNAP
```
@@ -10,6 +10,10 @@ import (
"sort"
"strings"
"time"

"github.com/glebarez/sqlite"
"gorm.io/gorm"
"gorm.io/gorm/logger"
)

type backupPeriod struct {
@@ -250,6 +254,12 @@ func pruneOldBackups(periodDir string, keep int) error {
}

func createBackupArchive(destPath, dbPath, configPath string) error {
snapshotPath, cleanup, err := createSQLiteSnapshot(dbPath)
if err != nil {
return err
}
defer cleanup()

file, err := os.Create(destPath)
if err != nil {
return err
@@ -257,12 +267,10 @@ func createBackupArchive(destPath, dbPath, configPath string) error {
defer file.Close()

zipWriter := zip.NewWriter(file)
if err := addZipFile(zipWriter, dbPath); err != nil {
if err := addZipFileAs(zipWriter, snapshotPath, filepath.Base(dbPath)); err != nil {
_ = zipWriter.Close()
return err
}
_ = addZipOptionalFile(zipWriter, dbPath+"-wal")
_ = addZipOptionalFile(zipWriter, dbPath+"-shm")

if strings.TrimSpace(configPath) != "" {
_ = addZipOptionalFile(zipWriter, configPath)
@@ -274,6 +282,77 @@ func createBackupArchive(destPath, dbPath, configPath string) error {
return file.Sync()
}

func createSQLiteSnapshot(dbPath string) (string, func(), error) {
tempFile, err := os.CreateTemp("", "qfs-backup-*.db")
if err != nil {
return "", func() {}, err
}
tempPath := tempFile.Name()
if err := tempFile.Close(); err != nil {
_ = os.Remove(tempPath)
return "", func() {}, err
}
if err := os.Remove(tempPath); err != nil && !os.IsNotExist(err) {
return "", func() {}, err
}

cleanup := func() {
_ = os.Remove(tempPath)
}

db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent),
})
if err != nil {
cleanup()
return "", func() {}, err
}

sqlDB, err := db.DB()
if err != nil {
cleanup()
return "", func() {}, err
}
defer sqlDB.Close()

if err := db.Exec("PRAGMA busy_timeout = 5000").Error; err != nil {
cleanup()
return "", func() {}, fmt.Errorf("configure sqlite busy_timeout: %w", err)
}

literalPath := strings.ReplaceAll(tempPath, "'", "''")
if err := vacuumIntoWithRetry(db, literalPath); err != nil {
cleanup()
return "", func() {}, err
}

return tempPath, cleanup, nil
}

func vacuumIntoWithRetry(db *gorm.DB, literalPath string) error {
var lastErr error
for attempt := 0; attempt < 3; attempt++ {
if err := db.Exec("VACUUM INTO '" + literalPath + "'").Error; err != nil {
lastErr = err
if !isSQLiteBusyError(err) {
return fmt.Errorf("create sqlite snapshot: %w", err)
}
time.Sleep(time.Duration(attempt+1) * 250 * time.Millisecond)
continue
}
return nil
}
return fmt.Errorf("create sqlite snapshot after retries: %w", lastErr)
}

func isSQLiteBusyError(err error) bool {
if err == nil {
return false
}
lower := strings.ToLower(err.Error())
return strings.Contains(lower, "database is locked") || strings.Contains(lower, "database is busy")
}

func addZipOptionalFile(writer *zip.Writer, path string) error {
if _, err := os.Stat(path); err != nil {
return nil
@@ -282,6 +361,10 @@ func addZipOptionalFile(writer *zip.Writer, path string) error {
}

func addZipFile(writer *zip.Writer, path string) error {
return addZipFileAs(writer, path, filepath.Base(path))
}

func addZipFileAs(writer *zip.Writer, path string, archiveName string) error {
in, err := os.Open(path)
if err != nil {
return err
@@ -297,7 +380,7 @@ func addZipFile(writer *zip.Writer, path string) error {
if err != nil {
return err
}
header.Name = filepath.Base(path)
header.Name = archiveName
header.Method = zip.Deflate

out, err := writer.CreateHeader(header)
@@ -1,11 +1,15 @@
package appstate

import (
"archive/zip"
"os"
"path/filepath"
"strings"
"testing"
"time"

"github.com/glebarez/sqlite"
"gorm.io/gorm"
)

func TestEnsureRotatingLocalBackupCreatesAndRotates(t *testing.T) {
@@ -13,8 +17,8 @@ func TestEnsureRotatingLocalBackupCreatesAndRotates(t *testing.T) {
dbPath := filepath.Join(temp, "qfs.db")
cfgPath := filepath.Join(temp, "config.yaml")

if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
t.Fatalf("write db: %v", err)
if err := writeTestSQLiteDB(dbPath); err != nil {
t.Fatalf("write sqlite db: %v", err)
}
if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
t.Fatalf("write config: %v", err)
@@ -36,6 +40,7 @@ func TestEnsureRotatingLocalBackupCreatesAndRotates(t *testing.T) {
if _, err := os.Stat(dailyArchive); err != nil {
t.Fatalf("daily archive missing: %v", err)
}
assertZipContains(t, dailyArchive, "qfs.db", "config.yaml")

backupNow = func() time.Time { return time.Date(2026, 2, 12, 10, 0, 0, 0, time.UTC) }
created, err = EnsureRotatingLocalBackup(dbPath, cfgPath)
@@ -57,8 +62,8 @@ func TestEnsureRotatingLocalBackupEnvControls(t *testing.T) {
dbPath := filepath.Join(temp, "qfs.db")
cfgPath := filepath.Join(temp, "config.yaml")

if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
t.Fatalf("write db: %v", err)
if err := writeTestSQLiteDB(dbPath); err != nil {
t.Fatalf("write sqlite db: %v", err)
}
if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
t.Fatalf("write config: %v", err)
@@ -95,8 +100,8 @@ func TestEnsureRotatingLocalBackupRejectsGitWorktree(t *testing.T) {
if err := os.MkdirAll(filepath.Dir(dbPath), 0755); err != nil {
t.Fatalf("mkdir data dir: %v", err)
}
if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
t.Fatalf("write db: %v", err)
if err := writeTestSQLiteDB(dbPath); err != nil {
t.Fatalf("write sqlite db: %v", err)
}
if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
t.Fatalf("write cfg: %v", err)
@@ -110,3 +115,43 @@ func TestEnsureRotatingLocalBackupRejectsGitWorktree(t *testing.T) {
t.Fatalf("unexpected error: %v", err)
}
}

func writeTestSQLiteDB(path string) error {
db, err := gorm.Open(sqlite.Open(path), &gorm.Config{})
if err != nil {
return err
}
sqlDB, err := db.DB()
if err != nil {
return err
}
defer sqlDB.Close()

return db.Exec(`
CREATE TABLE sample_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL
);
INSERT INTO sample_items(name) VALUES ('backup');
`).Error
}

func assertZipContains(t *testing.T, archivePath string, expected ...string) {
t.Helper()

reader, err := zip.OpenReader(archivePath)
if err != nil {
t.Fatalf("open archive: %v", err)
}
defer reader.Close()

found := make(map[string]bool, len(reader.File))
for _, file := range reader.File {
found[file.Name] = true
}
for _, name := range expected {
if !found[name] {
t.Fatalf("archive %s missing %s", archivePath, name)
}
}
}
@@ -329,33 +329,60 @@ func parseGPUModel(lotName string) string {
}
parts := strings.Split(upper, "_")
model := ""
numSuffix := ""
mem := ""
for i, p := range parts {
if p == "" {
continue
}
switch p {
case "NV", "NVIDIA", "INTEL", "AMD", "RADEON", "PCIE", "PCI", "SXM", "SXMX":
case "NV", "NVIDIA", "INTEL", "AMD", "RADEON", "PCIE", "PCI", "SXM", "SXMX", "SFF", "LOVELACE":
continue
case "ADA", "AMPERE", "HOPPER", "BLACKWELL":
if model != "" {
archAbbr := map[string]string{
"ADA": "ADA", "AMPERE": "AMP", "HOPPER": "HOP", "BLACKWELL": "BWL",
}
numSuffix += archAbbr[p]
}
continue
default:
if strings.Contains(p, "GB") {
mem = p
continue
}
if model == "" && (i > 0) {
if model == "" && i > 0 {
model = p
} else if model != "" && numSuffix == "" && isNumeric(p) {
numSuffix = p
}
}
}
if model != "" && mem != "" {
return model + "_" + mem
full := model
if numSuffix != "" {
full = model + numSuffix
}
if model != "" {
return model
if full != "" && mem != "" {
return full + "_" + mem
}
if full != "" {
return full
}
return normalizeModelToken(lotName)
}

func isNumeric(s string) bool {
if s == "" {
return false
}
for _, r := range s {
if r < '0' || r > '9' {
return false
}
}
return true
}

func parseMemGiB(lotName string) int {
if m := reMemTiB.FindStringSubmatch(lotName); len(m) == 3 {
return atoi(m[1]) * 1024

@@ -7,19 +7,14 @@ import (
	"strconv"
	"time"

-	mysqlDriver "github.com/go-sql-driver/mysql"
	"gopkg.in/yaml.v3"
)

type Config struct {
-	Server        ServerConfig        `yaml:"server"`
-	Database      DatabaseConfig      `yaml:"database"`
-	Pricing       PricingConfig       `yaml:"pricing"`
-	Export        ExportConfig        `yaml:"export"`
-	Alerts        AlertsConfig        `yaml:"alerts"`
-	Notifications NotificationsConfig `yaml:"notifications"`
-	Logging       LoggingConfig       `yaml:"logging"`
-	Backup        BackupConfig        `yaml:"backup"`
+	Server  ServerConfig  `yaml:"server"`
+	Export  ExportConfig  `yaml:"export"`
+	Logging LoggingConfig `yaml:"logging"`
+	Backup  BackupConfig  `yaml:"backup"`
}

type ServerConfig struct {
@@ -30,64 +25,6 @@ type ServerConfig struct {
	WriteTimeout time.Duration `yaml:"write_timeout"`
}

-type DatabaseConfig struct {
-	Host            string        `yaml:"host"`
-	Port            int           `yaml:"port"`
-	Name            string        `yaml:"name"`
-	User            string        `yaml:"user"`
-	Password        string        `yaml:"password"`
-	MaxOpenConns    int           `yaml:"max_open_conns"`
-	MaxIdleConns    int           `yaml:"max_idle_conns"`
-	ConnMaxLifetime time.Duration `yaml:"conn_max_lifetime"`
-}
-
-func (d *DatabaseConfig) DSN() string {
-	cfg := mysqlDriver.NewConfig()
-	cfg.User = d.User
-	cfg.Passwd = d.Password
-	cfg.Net = "tcp"
-	cfg.Addr = net.JoinHostPort(d.Host, strconv.Itoa(d.Port))
-	cfg.DBName = d.Name
-	cfg.ParseTime = true
-	cfg.Loc = time.Local
-	cfg.Params = map[string]string{
-		"charset": "utf8mb4",
-	}
-	return cfg.FormatDSN()
-}
-
-type PricingConfig struct {
-	DefaultMethod       string `yaml:"default_method"`
-	DefaultPeriodDays   int    `yaml:"default_period_days"`
-	FreshnessGreenDays  int    `yaml:"freshness_green_days"`
-	FreshnessYellowDays int    `yaml:"freshness_yellow_days"`
-	FreshnessRedDays    int    `yaml:"freshness_red_days"`
-	MinQuotesForMedian  int    `yaml:"min_quotes_for_median"`
-	PopularityDecayDays int    `yaml:"popularity_decay_days"`
-}
-
-type ExportConfig struct {
-	TempDir     string        `yaml:"temp_dir"`
-	MaxFileAge  time.Duration `yaml:"max_file_age"`
-	CompanyName string        `yaml:"company_name"`
-}
-
-type AlertsConfig struct {
-	Enabled                  bool          `yaml:"enabled"`
-	CheckInterval            time.Duration `yaml:"check_interval"`
-	HighDemandThreshold      int           `yaml:"high_demand_threshold"`
-	TrendingThresholdPercent int           `yaml:"trending_threshold_percent"`
-}
-
-type NotificationsConfig struct {
-	EmailEnabled bool   `yaml:"email_enabled"`
-	SMTPHost     string `yaml:"smtp_host"`
-	SMTPPort     int    `yaml:"smtp_port"`
-	SMTPUser     string `yaml:"smtp_user"`
-	SMTPPassword string `yaml:"smtp_password"`
-	FromAddress  string `yaml:"from_address"`
-}
-
type LoggingConfig struct {
	Level  string `yaml:"level"`
	Format string `yaml:"format"`
@@ -95,6 +32,10 @@ type LoggingConfig struct {
	FilePath string `yaml:"file_path"`
}

+// ExportConfig is kept for constructor compatibility in export services.
+// Runtime no longer persists an export section in config.yaml.
+type ExportConfig struct{}
+
type BackupConfig struct {
	Time string `yaml:"time"`
}
@@ -132,38 +73,6 @@ func (c *Config) setDefaults() {
		c.Server.WriteTimeout = 30 * time.Second
	}

-	if c.Database.Port == 0 {
-		c.Database.Port = 3306
-	}
-	if c.Database.MaxOpenConns == 0 {
-		c.Database.MaxOpenConns = 25
-	}
-	if c.Database.MaxIdleConns == 0 {
-		c.Database.MaxIdleConns = 5
-	}
-	if c.Database.ConnMaxLifetime == 0 {
-		c.Database.ConnMaxLifetime = 5 * time.Minute
-	}
-
-	if c.Pricing.DefaultMethod == "" {
-		c.Pricing.DefaultMethod = "weighted_median"
-	}
-	if c.Pricing.DefaultPeriodDays == 0 {
-		c.Pricing.DefaultPeriodDays = 90
-	}
-	if c.Pricing.FreshnessGreenDays == 0 {
-		c.Pricing.FreshnessGreenDays = 30
-	}
-	if c.Pricing.FreshnessYellowDays == 0 {
-		c.Pricing.FreshnessYellowDays = 60
-	}
-	if c.Pricing.FreshnessRedDays == 0 {
-		c.Pricing.FreshnessRedDays = 90
-	}
-	if c.Pricing.MinQuotesForMedian == 0 {
-		c.Pricing.MinQuotesForMedian = 3
-	}
-
	if c.Logging.Level == "" {
		c.Logging.Level = "info"
	}
@@ -180,5 +89,5 @@ func (c *Config) setDefaults() {
}

func (c *Config) Address() string {
-	return fmt.Sprintf("%s:%d", c.Server.Host, c.Server.Port)
+	return net.JoinHostPort(c.Server.Host, strconv.Itoa(c.Server.Port))
}
@@ -238,6 +238,22 @@ func (cm *ConnectionManager) Disconnect() {
	cm.lastError = nil
}

+// MarkOffline closes the current connection and preserves the last observed error.
+func (cm *ConnectionManager) MarkOffline(err error) {
+	cm.mu.Lock()
+	defer cm.mu.Unlock()
+
+	if cm.db != nil {
+		sqlDB, dbErr := cm.db.DB()
+		if dbErr == nil {
+			sqlDB.Close()
+		}
+	}
+	cm.db = nil
+	cm.lastError = err
+	cm.lastCheck = time.Now()
+}
+
// GetLastError returns the last connection error (thread-safe)
func (cm *ConnectionManager) GetLastError() error {
	cm.mu.RLock()
@@ -49,7 +49,7 @@ func (h *ComponentHandler) List(c *gin.Context) {
	offset := (page - 1) * perPage
	localComps, total, err := h.localDB.ListComponents(localFilter, offset, perPage)
	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
@@ -48,17 +48,19 @@ type ExportRequest struct {
}

type ProjectExportOptionsRequest struct {
-	IncludeLOT        bool `json:"include_lot"`
-	IncludeBOM        bool `json:"include_bom"`
-	IncludeEstimate   bool `json:"include_estimate"`
-	IncludeStock      bool `json:"include_stock"`
-	IncludeCompetitor bool `json:"include_competitor"`
+	IncludeLOT        bool    `json:"include_lot"`
+	IncludeBOM        bool    `json:"include_bom"`
+	IncludeEstimate   bool    `json:"include_estimate"`
+	IncludeStock      bool    `json:"include_stock"`
+	IncludeCompetitor bool    `json:"include_competitor"`
+	Basis             string  `json:"basis"`       // "fob" or "ddp"
+	SaleMarkup        float64 `json:"sale_markup"` // DDP multiplier; 0 defaults to 1.3
}

func (h *ExportHandler) ExportCSV(c *gin.Context) {
	var req ExportRequest
	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

@@ -150,7 +152,7 @@ func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
	// Get config before streaming (can return JSON error)
	config, err := h.configService.GetByUUID(uuid, h.dbUsername)
	if err != nil {
-		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusNotFound, "resource not found", err)
		return
	}

@@ -193,13 +195,13 @@ func (h *ExportHandler) ExportProjectCSV(c *gin.Context) {

	project, err := h.projectService.GetByUUID(projectUUID, h.dbUsername)
	if err != nil {
-		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusNotFound, "resource not found", err)
		return
	}

	result, err := h.projectService.ListConfigurations(projectUUID, h.dbUsername, "active")
	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}

@@ -226,19 +228,19 @@ func (h *ExportHandler) ExportProjectPricingCSV(c *gin.Context) {

	var req ProjectExportOptionsRequest
	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

	project, err := h.projectService.GetByUUID(projectUUID, h.dbUsername)
	if err != nil {
-		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusNotFound, "resource not found", err)
		return
	}

	result, err := h.projectService.ListConfigurations(projectUUID, h.dbUsername, "active")
	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
	if len(result.Configs) == 0 {
@@ -252,15 +254,25 @@ func (h *ExportHandler) ExportProjectPricingCSV(c *gin.Context) {
		IncludeEstimate:   req.IncludeEstimate,
		IncludeStock:      req.IncludeStock,
		IncludeCompetitor: req.IncludeCompetitor,
+		Basis:             req.Basis,
+		SaleMarkup:        req.SaleMarkup,
	}

	data, err := h.exportService.ProjectToPricingExportData(result.Configs, opts)
	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}

-	filename := fmt.Sprintf("%s (%s) pricing.csv", time.Now().Format("2006-01-02"), project.Code)
+	basisLabel := "FOB"
+	if strings.EqualFold(strings.TrimSpace(req.Basis), "ddp") {
+		basisLabel = "DDP"
+	}
+	variantLabel := strings.TrimSpace(project.Variant)
+	if variantLabel == "" {
+		variantLabel = "main"
+	}
+	filename := fmt.Sprintf("%s (%s) %s %s.csv", time.Now().Format("2006-01-02"), project.Code, basisLabel, variantLabel)
	c.Header("Content-Type", "text/csv; charset=utf-8")
	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
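The filename logic introduced above (basis label defaulting to FOB, empty variant falling back to "main") can be sketched as a standalone helper. The function name `pricingFilename` and its parameter list are mine, extracted from the handler for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// pricingFilename mirrors the export filename construction from the diff:
// the basis label is "FOB" unless "ddp" is requested (case-insensitively,
// ignoring surrounding whitespace), and an empty project variant falls
// back to "main".
func pricingFilename(date, code, basis, variant string) string {
	basisLabel := "FOB"
	if strings.EqualFold(strings.TrimSpace(basis), "ddp") {
		basisLabel = "DDP"
	}
	variantLabel := strings.TrimSpace(variant)
	if variantLabel == "" {
		variantLabel = "main"
	}
	return fmt.Sprintf("%s (%s) %s %s.csv", date, code, basisLabel, variantLabel)
}

func main() {
	fmt.Println(pricingFilename("2025-01-15", "PRJ-42", "ddp", ""))
	// 2025-01-15 (PRJ-42) DDP main.csv
	fmt.Println(pricingFilename("2025-01-15", "PRJ-42", "", "v2"))
	// 2025-01-15 (PRJ-42) FOB v2.csv
}
```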
@@ -25,7 +25,7 @@ func (h *PartnumberBooksHandler) List(c *gin.Context) {
	bookRepo := repository.NewPartnumberBookRepository(h.localDB.DB())
	books, err := bookRepo.ListBooks()
	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}

@@ -86,7 +86,7 @@ func (h *PartnumberBooksHandler) GetItems(c *gin.Context) {

	items, total, err := bookRepo.GetBookItemsPage(book.ID, search, page, perPage)
	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
@@ -34,7 +34,7 @@ func (h *PricelistHandler) List(c *gin.Context) {

	localPLs, err := h.localDB.GetLocalPricelists()
	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
	if source != "" {
@@ -172,24 +172,48 @@ func (h *PricelistHandler) GetItems(c *gin.Context) {
	}
	var total int64
	if err := dbq.Count(&total).Error; err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
	offset := (page - 1) * perPage

	if err := dbq.Order("lot_name").Offset(offset).Limit(perPage).Find(&items).Error; err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
+	lotNames := make([]string, len(items))
+	for i, item := range items {
+		lotNames[i] = item.LotName
+	}
+	type compRow struct {
+		LotName        string
+		LotDescription string
+	}
+	var comps []compRow
+	if len(lotNames) > 0 {
+		h.localDB.DB().Table("local_components").
+			Select("lot_name, lot_description").
+			Where("lot_name IN ?", lotNames).
+			Scan(&comps)
+	}
+	descMap := make(map[string]string, len(comps))
+	for _, c := range comps {
+		descMap[c.LotName] = c.LotDescription
+	}

	resultItems := make([]gin.H, 0, len(items))
	for _, item := range items {
		resultItems = append(resultItems, gin.H{
-			"id":            item.ID,
-			"lot_name":      item.LotName,
-			"price":         item.Price,
-			"category":      item.LotCategory,
-			"available_qty": item.AvailableQty,
-			"partnumbers":   []string(item.Partnumbers),
+			"id":               item.ID,
+			"lot_name":         item.LotName,
+			"lot_description":  descMap[item.LotName],
+			"price":            item.Price,
+			"category":         item.LotCategory,
+			"available_qty":    item.AvailableQty,
+			"partnumbers":      []string(item.Partnumbers),
+			"partnumber_qtys":  map[string]interface{}{},
+			"competitor_names": []string{},
+			"price_spread_pct": nil,
		})
	}

@@ -217,7 +241,7 @@ func (h *PricelistHandler) GetLotNames(c *gin.Context) {
	}
	items, err := h.localDB.GetLocalPricelistItems(localPL.ID)
	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
	lotNames := make([]string, 0, len(items))
@@ -18,13 +18,13 @@ func NewQuoteHandler(quoteService *services.QuoteService) *QuoteHandler {
func (h *QuoteHandler) Validate(c *gin.Context) {
	var req services.QuoteRequest
	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

	result, err := h.quoteService.ValidateAndCalculate(&req)
	if err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

@@ -34,13 +34,13 @@ func (h *QuoteHandler) Validate(c *gin.Context) {
func (h *QuoteHandler) Calculate(c *gin.Context) {
	var req services.QuoteRequest
	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

	result, err := h.quoteService.ValidateAndCalculate(&req)
	if err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

@@ -53,13 +53,13 @@ func (h *QuoteHandler) Calculate(c *gin.Context) {
func (h *QuoteHandler) PriceLevels(c *gin.Context) {
	var req services.PriceLevelsRequest
	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

	result, err := h.quoteService.CalculatePriceLevels(&req)
	if err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}
73
internal/handlers/respond.go
Normal file
@@ -0,0 +1,73 @@
package handlers

import (
	"encoding/json"
	"errors"
	"io"
	"strings"

	"github.com/gin-gonic/gin"
)

func RespondError(c *gin.Context, status int, fallback string, err error) {
	if err != nil {
		_ = c.Error(err)
	}
	c.JSON(status, gin.H{"error": clientFacingErrorMessage(status, fallback, err)})
}

func clientFacingErrorMessage(status int, fallback string, err error) string {
	if err == nil {
		return fallback
	}
	if status >= 500 {
		return fallback
	}
	if isRequestDecodeError(err) {
		return fallback
	}

	message := strings.TrimSpace(err.Error())
	if message == "" {
		return fallback
	}
	if looksTechnicalError(message) {
		return fallback
	}
	return message
}

func isRequestDecodeError(err error) bool {
	var syntaxErr *json.SyntaxError
	if errors.As(err, &syntaxErr) {
		return true
	}

	var unmarshalTypeErr *json.UnmarshalTypeError
	if errors.As(err, &unmarshalTypeErr) {
		return true
	}

	return errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, io.EOF)
}

func looksTechnicalError(message string) bool {
	lower := strings.ToLower(strings.TrimSpace(message))
	needles := []string{
		"sql",
		"gorm",
		"driver",
		"constraint",
		"syntax error",
		"unexpected eof",
		"record not found",
		"no such table",
		"stack trace",
	}
	for _, needle := range needles {
		if strings.Contains(lower, needle) {
			return true
		}
	}
	return false
}
41
internal/handlers/respond_test.go
Normal file
@@ -0,0 +1,41 @@
package handlers

import (
	"encoding/json"
	"testing"
)

func TestClientFacingErrorMessageKeepsDomain4xx(t *testing.T) {
	t.Parallel()

	got := clientFacingErrorMessage(400, "invalid request", &json.SyntaxError{Offset: 1})
	if got != "invalid request" {
		t.Fatalf("expected fallback for decode error, got %q", got)
	}
}

func TestClientFacingErrorMessagePreservesBusinessMessage(t *testing.T) {
	t.Parallel()

	err := errString("main project variant cannot be deleted")
	got := clientFacingErrorMessage(400, "invalid request", err)
	if got != err.Error() {
		t.Fatalf("expected business message, got %q", got)
	}
}

func TestClientFacingErrorMessageHidesTechnical4xx(t *testing.T) {
	t.Parallel()

	err := errString("sql: no rows in result set")
	got := clientFacingErrorMessage(404, "resource not found", err)
	if got != "resource not found" {
		t.Fatalf("expected fallback for technical error, got %q", got)
	}
}

type errString string

func (e errString) Error() string {
	return string(e)
}
@@ -1,6 +1,7 @@
package handlers

import (
+	"errors"
	"fmt"
	"html/template"
	"log/slog"
@@ -12,8 +13,8 @@ import (
	qfassets "git.mchus.pro/mchus/quoteforge"
	"git.mchus.pro/mchus/quoteforge/internal/db"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
-	mysqlDriver "github.com/go-sql-driver/mysql"
	"github.com/gin-gonic/gin"
+	mysqlDriver "github.com/go-sql-driver/mysql"
	gormmysql "gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
@@ -26,6 +27,8 @@ type SetupHandler struct {
	restartSig chan struct{}
}

+var errPermissionProbeRollback = errors.New("permission probe rollback")
+
func NewSetupHandler(localDB *localdb.LocalDB, connMgr *db.ConnectionManager, _ string, restartSig chan struct{}) (*SetupHandler, error) {
	funcMap := template.FuncMap{
		"sub": func(a, b int) int { return a - b },
@@ -64,7 +67,8 @@ func (h *SetupHandler) ShowSetup(c *gin.Context) {

	tmpl := h.templates["setup.html"]
	if err := tmpl.ExecuteTemplate(c.Writer, "setup.html", data); err != nil {
-		c.String(http.StatusInternalServerError, "Template error: %v", err)
+		_ = c.Error(err)
+		c.String(http.StatusInternalServerError, "Template error")
	}
}

@@ -89,49 +93,16 @@ func (h *SetupHandler) TestConnection(c *gin.Context) {
	}

	dsn := buildMySQLDSN(host, port, database, user, password, 5*time.Second)

-	db, err := gorm.Open(gormmysql.Open(dsn), &gorm.Config{
-		Logger: logger.Default.LogMode(logger.Silent),
-	})
+	lotCount, canWrite, err := validateMariaDBConnection(dsn)
	if err != nil {
+		_ = c.Error(err)
		c.JSON(http.StatusOK, gin.H{
			"success": false,
-			"error":   fmt.Sprintf("Connection failed: %v", err),
+			"error":   "Connection check failed",
		})
		return
	}

-	sqlDB, err := db.DB()
-	if err != nil {
-		c.JSON(http.StatusOK, gin.H{
-			"success": false,
-			"error":   fmt.Sprintf("Failed to get database handle: %v", err),
-		})
-		return
-	}
-	defer sqlDB.Close()
-
-	if err := sqlDB.Ping(); err != nil {
-		c.JSON(http.StatusOK, gin.H{
-			"success": false,
-			"error":   fmt.Sprintf("Ping failed: %v", err),
-		})
-		return
-	}
-
-	// Check for required tables
-	var lotCount int64
-	if err := db.Table("lot").Count(&lotCount).Error; err != nil {
-		c.JSON(http.StatusOK, gin.H{
-			"success": false,
-			"error":   fmt.Sprintf("Table 'lot' not found or inaccessible: %v", err),
-		})
-		return
-	}
-
-	// Check write permission
-	canWrite := testWritePermission(db)
-
	c.JSON(http.StatusOK, gin.H{
		"success":   true,
		"lot_count": lotCount,
@@ -164,26 +135,21 @@ func (h *SetupHandler) SaveConnection(c *gin.Context) {

	// Test connection first
	dsn := buildMySQLDSN(host, port, database, user, password, 5*time.Second)

-	db, err := gorm.Open(gormmysql.Open(dsn), &gorm.Config{
-		Logger: logger.Default.LogMode(logger.Silent),
-	})
-	if err != nil {
+	if _, _, err := validateMariaDBConnection(dsn); err != nil {
+		_ = c.Error(err)
		c.JSON(http.StatusBadRequest, gin.H{
			"success": false,
-			"error":   fmt.Sprintf("Connection failed: %v", err),
+			"error":   "Connection check failed",
		})
		return
	}

-	sqlDB, _ := db.DB()
-	sqlDB.Close()
-
	// Save settings
	if err := h.localDB.SaveSettings(host, port, database, user, password); err != nil {
+		_ = c.Error(err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
-			"error":   fmt.Sprintf("Failed to save settings: %v", err),
+			"error":   "Failed to save settings",
		})
		return
	}
@@ -232,22 +198,6 @@ func (h *SetupHandler) GetStatus(c *gin.Context) {
	})
}

-func testWritePermission(db *gorm.DB) bool {
-	// Simple check: try to create a temporary table and drop it
-	testTable := fmt.Sprintf("qt_write_test_%d", time.Now().UnixNano())
-
-	// Try to create a test table
-	err := db.Exec(fmt.Sprintf("CREATE TABLE %s (id INT)", testTable)).Error
-	if err != nil {
-		return false
-	}
-
-	// Drop it immediately
-	db.Exec(fmt.Sprintf("DROP TABLE %s", testTable))
-
-	return true
-}
-
func buildMySQLDSN(host string, port int, database, user, password string, timeout time.Duration) string {
	cfg := mysqlDriver.NewConfig()
	cfg.User = user
@@ -263,3 +213,47 @@
	}
	return cfg.FormatDSN()
}
+
+func validateMariaDBConnection(dsn string) (int64, bool, error) {
+	db, err := gorm.Open(gormmysql.Open(dsn), &gorm.Config{
+		Logger: logger.Default.LogMode(logger.Silent),
+	})
+	if err != nil {
+		return 0, false, fmt.Errorf("open MariaDB connection: %w", err)
+	}
+
+	sqlDB, err := db.DB()
+	if err != nil {
+		return 0, false, fmt.Errorf("get database handle: %w", err)
+	}
+	defer sqlDB.Close()
+
+	if err := sqlDB.Ping(); err != nil {
+		return 0, false, fmt.Errorf("ping MariaDB: %w", err)
+	}
+
+	var lotCount int64
+	if err := db.Table("lot").Count(&lotCount).Error; err != nil {
+		return 0, false, fmt.Errorf("check required table lot: %w", err)
+	}
+
+	return lotCount, testSyncWritePermission(db), nil
+}
+
+func testSyncWritePermission(db *gorm.DB) bool {
+	sentinel := fmt.Sprintf("quoteforge-permission-check-%d", time.Now().UnixNano())
+	err := db.Transaction(func(tx *gorm.DB) error {
+		if err := tx.Exec(`
+			INSERT INTO qt_client_schema_state (username, hostname, last_checked_at, updated_at)
+			VALUES (?, ?, NOW(), NOW())
+			ON DUPLICATE KEY UPDATE
+				last_checked_at = VALUES(last_checked_at),
+				updated_at = VALUES(updated_at)
+		`, sentinel, "setup-check").Error; err != nil {
+			return err
+		}
+		return errPermissionProbeRollback
+	})

+	return errors.Is(err, errPermissionProbeRollback)
+}
@@ -6,6 +6,7 @@ import (
	"html/template"
	"log/slog"
	"net/http"
+	"strings"
	stdsync "sync"
	"time"

@@ -49,15 +50,20 @@ func NewSyncHandler(localDB *localdb.LocalDB, syncService *sync.Service, connMgr

// SyncStatusResponse represents the sync status
type SyncStatusResponse struct {
-	LastComponentSync *time.Time          `json:"last_component_sync"`
-	LastPricelistSync *time.Time          `json:"last_pricelist_sync"`
-	IsOnline          bool                `json:"is_online"`
-	ComponentsCount   int64               `json:"components_count"`
-	PricelistsCount   int64               `json:"pricelists_count"`
-	ServerPricelists  int                 `json:"server_pricelists"`
-	NeedComponentSync bool                `json:"need_component_sync"`
-	NeedPricelistSync bool                `json:"need_pricelist_sync"`
-	Readiness         *sync.SyncReadiness `json:"readiness,omitempty"`
+	LastComponentSync       *time.Time          `json:"last_component_sync"`
+	LastPricelistSync       *time.Time          `json:"last_pricelist_sync"`
+	LastPricelistAttemptAt  *time.Time          `json:"last_pricelist_attempt_at,omitempty"`
+	LastPricelistSyncStatus string              `json:"last_pricelist_sync_status,omitempty"`
+	LastPricelistSyncError  string              `json:"last_pricelist_sync_error,omitempty"`
+	HasIncompleteServerSync bool                `json:"has_incomplete_server_sync"`
+	KnownServerChangesMiss  bool                `json:"known_server_changes_missing"`
+	IsOnline                bool                `json:"is_online"`
+	ComponentsCount         int64               `json:"components_count"`
+	PricelistsCount         int64               `json:"pricelists_count"`
+	ServerPricelists        int                 `json:"server_pricelists"`
+	NeedComponentSync       bool                `json:"need_component_sync"`
+	NeedPricelistSync       bool                `json:"need_pricelist_sync"`
+	Readiness               *sync.SyncReadiness `json:"readiness,omitempty"`
}

type SyncReadinessResponse struct {
@@ -72,42 +78,34 @@ type SyncReadinessResponse struct {
// GetStatus returns current sync status
// GET /api/sync/status
func (h *SyncHandler) GetStatus(c *gin.Context) {
-	// Check online status by pinging MariaDB
-	isOnline := h.checkOnline()
-
-	// Get sync times
+	connStatus := h.connMgr.GetStatus()
+	isOnline := connStatus.IsConnected && strings.TrimSpace(connStatus.LastError) == ""
	lastComponentSync := h.localDB.GetComponentSyncTime()
	lastPricelistSync := h.localDB.GetLastSyncTime()

-	// Get counts
	componentsCount := h.localDB.CountLocalComponents()
	pricelistsCount := h.localDB.CountLocalPricelists()

-	// Get server pricelist count if online
-	serverPricelists := 0
-	needPricelistSync := false
-	if isOnline {
-		status, err := h.syncService.GetStatus()
-		if err == nil {
-			serverPricelists = status.ServerPricelists
-			needPricelistSync = status.NeedsSync
-		}
-	}
-
-	// Check if component sync is needed (older than 24 hours)
+	lastPricelistAttemptAt := h.localDB.GetLastPricelistSyncAttemptAt()
+	lastPricelistSyncStatus := h.localDB.GetLastPricelistSyncStatus()
+	lastPricelistSyncError := h.localDB.GetLastPricelistSyncError()
+	hasFailedSync := strings.EqualFold(lastPricelistSyncStatus, "failed")
	needComponentSync := h.localDB.NeedComponentSync(24)
-	readiness := h.getReadinessCached(10 * time.Second)
+	readiness := h.getReadinessLocal()

	c.JSON(http.StatusOK, SyncStatusResponse{
-		LastComponentSync: lastComponentSync,
-		LastPricelistSync: lastPricelistSync,
-		IsOnline:          isOnline,
-		ComponentsCount:   componentsCount,
-		PricelistsCount:   pricelistsCount,
-		ServerPricelists:  serverPricelists,
-		NeedComponentSync: needComponentSync,
-		NeedPricelistSync: needPricelistSync,
-		Readiness:         readiness,
+		LastComponentSync:       lastComponentSync,
+		LastPricelistSync:       lastPricelistSync,
+		LastPricelistAttemptAt:  lastPricelistAttemptAt,
+		LastPricelistSyncStatus: lastPricelistSyncStatus,
+		LastPricelistSyncError:  lastPricelistSyncError,
+		HasIncompleteServerSync: hasFailedSync,
+		KnownServerChangesMiss:  hasFailedSync,
+		IsOnline:                isOnline,
+		ComponentsCount:         componentsCount,
+		PricelistsCount:         pricelistsCount,
+		ServerPricelists:        0,
+		NeedComponentSync:       needComponentSync,
+		NeedPricelistSync:       lastPricelistSync == nil || hasFailedSync,
+		Readiness:               readiness,
	})
}

@@ -116,9 +114,7 @@ func (h *SyncHandler) GetStatus(c *gin.Context) {
func (h *SyncHandler) GetReadiness(c *gin.Context) {
	readiness, err := h.syncService.GetReadiness()
	if err != nil && readiness == nil {
-		c.JSON(http.StatusInternalServerError, gin.H{
-			"error": err.Error(),
-		})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
	if readiness == nil {
@@ -158,8 +154,9 @@ func (h *SyncHandler) ensureSyncReadiness(c *gin.Context) bool {

		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
-			"error":   err.Error(),
+			"error":   "internal server error",
		})
+		_ = c.Error(err)
		_ = readiness
		return false
	}
@@ -184,8 +181,9 @@ func (h *SyncHandler) SyncComponents(c *gin.Context) {
	if err != nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"success": false,
-			"error":   "Database connection failed: " + err.Error(),
+			"error":   "database connection failed",
		})
+		_ = c.Error(err)
		return
	}

@@ -194,8 +192,9 @@ func (h *SyncHandler) SyncComponents(c *gin.Context) {
		slog.Error("component sync failed", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
-			"error":   err.Error(),
+			"error":   "component sync failed",
		})
+		_ = c.Error(err)
		return
	}

@@ -220,8 +219,9 @@ func (h *SyncHandler) SyncPricelists(c *gin.Context) {
		slog.Error("pricelist sync failed", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
-			"error":   err.Error(),
+			"error":   "pricelist sync failed",
		})
+		_ = c.Error(err)
		return
	}

@@ -247,8 +247,9 @@ func (h *SyncHandler) SyncPartnumberBooks(c *gin.Context) {
		slog.Error("partnumber books pull failed", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
-			"error":   err.Error(),
+			"error":   "partnumber books sync failed",
		})
+		_ = c.Error(err)
		return
	}

@@ -295,8 +296,9 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
		slog.Error("pending push failed during full sync", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
-			"error":   "Pending changes push failed: " + err.Error(),
+			"error":   "pending changes push failed",
		})
+		_ = c.Error(err)
		return
	}

@@ -305,8 +307,9 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
	if err != nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"success": false,
-			"error":   "Database connection failed: " + err.Error(),
+			"error":   "database connection failed",
		})
+		_ = c.Error(err)
		return
	}

@@ -315,8 +318,9 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
		slog.Error("component sync failed during full sync", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
-			"error":   "Component sync failed: " + err.Error(),
+			"error":   "component sync failed",
		})
+		_ = c.Error(err)
		return
	}
	componentsSynced = compResult.TotalSynced
@@ -327,10 +331,11 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
		slog.Error("pricelist sync failed during full sync", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
-			"error":   "Pricelist sync failed: " + err.Error(),
+			"error":             "pricelist sync failed",
			"pending_pushed":    pendingPushed,
			"components_synced": componentsSynced,
		})
|
||||
_ = c.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -339,11 +344,12 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
||||
slog.Error("project import failed during full sync", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{
|
||||
"success": false,
|
||||
"error": "Project import failed: " + err.Error(),
|
||||
"error": "project import failed",
|
||||
"pending_pushed": pendingPushed,
|
||||
"components_synced": componentsSynced,
|
||||
"pricelists_synced": pricelistsSynced,
|
||||
})
|
||||
_ = c.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -352,7 +358,7 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
||||
slog.Error("configuration import failed during full sync", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{
|
||||
"success": false,
|
||||
"error": "Configuration import failed: " + err.Error(),
|
||||
"error": "configuration import failed",
|
||||
"pending_pushed": pendingPushed,
|
||||
"components_synced": componentsSynced,
|
||||
"pricelists_synced": pricelistsSynced,
|
||||
@@ -360,6 +366,7 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
||||
"projects_updated": projectsResult.Updated,
|
||||
"projects_skipped": projectsResult.Skipped,
|
||||
})
|
||||
_ = c.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -398,8 +405,9 @@ func (h *SyncHandler) PushPendingChanges(c *gin.Context) {
|
||||
slog.Error("push pending changes failed", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{
|
||||
"success": false,
|
||||
"error": err.Error(),
|
||||
"error": "pending changes push failed",
|
||||
})
|
||||
_ = c.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -426,9 +434,7 @@ func (h *SyncHandler) GetPendingCount(c *gin.Context) {
|
||||
func (h *SyncHandler) GetPendingChanges(c *gin.Context) {
|
||||
changes, err := h.localDB.GetPendingChanges()
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{
|
||||
"error": err.Error(),
|
||||
})
|
||||
RespondError(c, http.StatusInternalServerError, "internal server error", err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -445,8 +451,9 @@ func (h *SyncHandler) RepairPendingChanges(c *gin.Context) {
|
||||
slog.Error("repair pending changes failed", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{
|
||||
"success": false,
|
||||
"error": err.Error(),
|
||||
"error": "pending changes repair failed",
|
||||
})
|
||||
_ = c.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -465,8 +472,13 @@ type SyncInfoResponse struct {
|
||||
DBName string `json:"db_name"`
|
||||
|
||||
// Status
|
||||
IsOnline bool `json:"is_online"`
|
||||
LastSyncAt *time.Time `json:"last_sync_at"`
|
||||
IsOnline bool `json:"is_online"`
|
||||
LastSyncAt *time.Time `json:"last_sync_at"`
|
||||
LastPricelistAttemptAt *time.Time `json:"last_pricelist_attempt_at,omitempty"`
|
||||
LastPricelistSyncStatus string `json:"last_pricelist_sync_status,omitempty"`
|
||||
LastPricelistSyncError string `json:"last_pricelist_sync_error,omitempty"`
|
||||
NeedPricelistSync bool `json:"need_pricelist_sync"`
|
||||
HasIncompleteServerSync bool `json:"has_incomplete_server_sync"`
|
||||
|
||||
// Statistics
|
||||
LotCount int64 `json:"lot_count"`
|
||||
@@ -502,8 +514,8 @@ type SyncError struct {
|
||||
// GetInfo returns sync information for modal
|
||||
// GET /api/sync/info
|
||||
func (h *SyncHandler) GetInfo(c *gin.Context) {
|
||||
// Check online status by pinging MariaDB
|
||||
isOnline := h.checkOnline()
|
||||
connStatus := h.connMgr.GetStatus()
|
||||
isOnline := connStatus.IsConnected && strings.TrimSpace(connStatus.LastError) == ""
|
||||
|
||||
// Get DB connection info
|
||||
var dbHost, dbUser, dbName string
|
||||
@@ -515,6 +527,12 @@ func (h *SyncHandler) GetInfo(c *gin.Context) {
|
||||
|
||||
// Get sync times
|
||||
lastPricelistSync := h.localDB.GetLastSyncTime()
|
||||
lastPricelistAttemptAt := h.localDB.GetLastPricelistSyncAttemptAt()
|
||||
lastPricelistSyncStatus := h.localDB.GetLastPricelistSyncStatus()
|
||||
lastPricelistSyncError := h.localDB.GetLastPricelistSyncError()
|
||||
hasFailedSync := strings.EqualFold(lastPricelistSyncStatus, "failed")
|
||||
needPricelistSync := lastPricelistSync == nil || hasFailedSync
|
||||
hasIncompleteServerSync := hasFailedSync
|
||||
|
||||
// Get local counts
|
||||
configCount := h.localDB.CountConfigurations()
|
||||
@@ -547,22 +565,27 @@ func (h *SyncHandler) GetInfo(c *gin.Context) {
|
||||
syncErrors = syncErrors[:10]
|
||||
}
|
||||
|
||||
readiness := h.getReadinessCached(10 * time.Second)
|
||||
readiness := h.getReadinessLocal()
|
||||
|
||||
c.JSON(http.StatusOK, SyncInfoResponse{
|
||||
DBHost: dbHost,
|
||||
DBUser: dbUser,
|
||||
DBName: dbName,
|
||||
IsOnline: isOnline,
|
||||
LastSyncAt: lastPricelistSync,
|
||||
LotCount: componentCount,
|
||||
LotLogCount: pricelistCount,
|
||||
ConfigCount: configCount,
|
||||
ProjectCount: projectCount,
|
||||
PendingChanges: changes,
|
||||
ErrorCount: errorCount,
|
||||
Errors: syncErrors,
|
||||
Readiness: readiness,
|
||||
DBHost: dbHost,
|
||||
DBUser: dbUser,
|
||||
DBName: dbName,
|
||||
IsOnline: isOnline,
|
||||
LastSyncAt: lastPricelistSync,
|
||||
LastPricelistAttemptAt: lastPricelistAttemptAt,
|
||||
LastPricelistSyncStatus: lastPricelistSyncStatus,
|
||||
LastPricelistSyncError: lastPricelistSyncError,
|
||||
NeedPricelistSync: needPricelistSync,
|
||||
HasIncompleteServerSync: hasIncompleteServerSync,
|
||||
LotCount: componentCount,
|
||||
LotLogCount: pricelistCount,
|
||||
ConfigCount: configCount,
|
||||
ProjectCount: projectCount,
|
||||
PendingChanges: changes,
|
||||
ErrorCount: errorCount,
|
||||
Errors: syncErrors,
|
||||
Readiness: readiness,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -588,9 +611,7 @@ func (h *SyncHandler) GetUsersStatus(c *gin.Context) {
|
||||
|
||||
users, err := h.syncService.ListUserSyncStatuses(threshold)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{
|
||||
"error": err.Error(),
|
||||
})
|
||||
RespondError(c, http.StatusInternalServerError, "internal server error", err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -619,15 +640,33 @@ func (h *SyncHandler) SyncStatusPartial(c *gin.Context) {
|
||||
|
||||
// Get pending count
|
||||
pendingCount := h.localDB.GetPendingCount()
|
||||
readiness := h.getReadinessCached(10 * time.Second)
|
||||
readiness := h.getReadinessLocal()
|
||||
isBlocked := readiness != nil && readiness.Blocked
|
||||
lastPricelistSyncStatus := h.localDB.GetLastPricelistSyncStatus()
|
||||
lastPricelistSyncError := h.localDB.GetLastPricelistSyncError()
|
||||
hasFailedSync := strings.EqualFold(lastPricelistSyncStatus, "failed")
|
||||
hasIncompleteServerSync := hasFailedSync
|
||||
|
||||
slog.Debug("rendering sync status", "is_offline", isOffline, "pending_count", pendingCount, "sync_blocked", isBlocked)
|
||||
|
||||
data := gin.H{
|
||||
"IsOffline": isOffline,
|
||||
"PendingCount": pendingCount,
|
||||
"IsBlocked": isBlocked,
|
||||
"IsOffline": isOffline,
|
||||
"PendingCount": pendingCount,
|
||||
"IsBlocked": isBlocked,
|
||||
"HasFailedSync": hasFailedSync,
|
||||
"HasIncompleteServerSync": hasIncompleteServerSync,
|
||||
"SyncIssueTitle": func() string {
|
||||
if hasIncompleteServerSync {
|
||||
return "Последняя синхронизация прайслистов прервалась. На сервере есть изменения, которые не загружены локально."
|
||||
}
|
||||
if hasFailedSync {
|
||||
if lastPricelistSyncError != "" {
|
||||
return lastPricelistSyncError
|
||||
}
|
||||
return "Последняя синхронизация прайслистов завершилась ошибкой."
|
||||
}
|
||||
return ""
|
||||
}(),
|
||||
"BlockedReason": func() string {
|
||||
if readiness == nil {
|
||||
return ""
|
||||
@@ -639,24 +678,34 @@ func (h *SyncHandler) SyncStatusPartial(c *gin.Context) {
|
||||
c.Header("Content-Type", "text/html; charset=utf-8")
|
||||
if err := h.tmpl.ExecuteTemplate(c.Writer, "sync_status", data); err != nil {
|
||||
slog.Error("failed to render sync_status template", "error", err)
|
||||
c.String(http.StatusInternalServerError, "Template error: "+err.Error())
|
||||
_ = c.Error(err)
|
||||
c.String(http.StatusInternalServerError, "Template error")
|
||||
}
|
||||
}
|
||||
|
||||
func (h *SyncHandler) getReadinessCached(maxAge time.Duration) *sync.SyncReadiness {
|
||||
func (h *SyncHandler) getReadinessLocal() *sync.SyncReadiness {
|
||||
h.readinessMu.Lock()
|
||||
if h.readinessCached != nil && time.Since(h.readinessCachedAt) < maxAge {
|
||||
if h.readinessCached != nil && time.Since(h.readinessCachedAt) < 10*time.Second {
|
||||
cached := *h.readinessCached
|
||||
h.readinessMu.Unlock()
|
||||
return &cached
|
||||
}
|
||||
h.readinessMu.Unlock()
|
||||
|
||||
readiness, err := h.syncService.GetReadiness()
|
||||
if err != nil && readiness == nil {
|
||||
state, err := h.localDB.GetSyncGuardState()
|
||||
if err != nil || state == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
readiness := &sync.SyncReadiness{
|
||||
Status: state.Status,
|
||||
Blocked: state.Status == sync.ReadinessBlocked,
|
||||
ReasonCode: state.ReasonCode,
|
||||
ReasonText: state.ReasonText,
|
||||
RequiredMinAppVersion: state.RequiredMinAppVersion,
|
||||
LastCheckedAt: state.LastCheckedAt,
|
||||
}
|
||||
|
||||
h.readinessMu.Lock()
|
||||
h.readinessCached = readiness
|
||||
h.readinessCachedAt = time.Now()
|
||||
@@ -675,7 +724,7 @@ func (h *SyncHandler) ReportPartnumberSeen(c *gin.Context) {
|
||||
} `json:"items"`
|
||||
}
|
||||
if err := c.ShouldBindJSON(&body); err != nil {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
RespondError(c, http.StatusBadRequest, "invalid request", err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -691,7 +740,7 @@ func (h *SyncHandler) ReportPartnumberSeen(c *gin.Context) {
|
||||
}
|
||||
|
||||
if err := h.syncService.PushPartnumberSeen(items); err != nil {
|
||||
c.JSON(http.StatusServiceUnavailable, gin.H{"error": err.Error()})
|
||||
RespondError(c, http.StatusServiceUnavailable, "service unavailable", err)
|
||||
return
|
||||
}
|
||||
|
||||
|
||||
@@ -1,7 +1,6 @@
package handlers

import (
	"encoding/json"
	"errors"
	"net/http"
	"strings"

@@ -14,11 +13,15 @@ import (

// VendorSpecHandler handles vendor BOM spec operations for a configuration.
type VendorSpecHandler struct {
	localDB *localdb.LocalDB
	localDB       *localdb.LocalDB
	configService *services.LocalConfigurationService
}

func NewVendorSpecHandler(localDB *localdb.LocalDB) *VendorSpecHandler {
	return &VendorSpecHandler{localDB: localDB}
	return &VendorSpecHandler{
		localDB:       localDB,
		configService: services.NewLocalConfigurationService(localDB, nil, nil, func() bool { return false }),
	}
}

// lookupConfig finds an active configuration by UUID using the standard localDB method.

@@ -62,7 +65,7 @@ func (h *VendorSpecHandler) PutVendorSpec(c *gin.Context) {
		VendorSpec []localdb.VendorSpecItem `json:"vendor_spec"`
	}
	if err := c.ShouldBindJSON(&body); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

@@ -80,13 +83,8 @@ func (h *VendorSpecHandler) PutVendorSpec(c *gin.Context) {
	}

	spec := localdb.VendorSpec(body.VendorSpec)
	specJSON, err := json.Marshal(spec)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	if err := h.localDB.DB().Model(cfg).Update("vendor_spec", string(specJSON)).Error; err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
	if _, err := h.configService.UpdateVendorSpecNoAuth(cfg.UUID, spec); err != nil {
		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}

@@ -138,7 +136,7 @@ func (h *VendorSpecHandler) ResolveVendorSpec(c *gin.Context) {
		VendorSpec []localdb.VendorSpecItem `json:"vendor_spec"`
	}
	if err := c.ShouldBindJSON(&body); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

@@ -147,14 +145,14 @@ func (h *VendorSpecHandler) ResolveVendorSpec(c *gin.Context) {

	resolved, err := resolver.Resolve(body.VendorSpec)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}

	book, _ := bookRepo.GetActiveBook()
	aggregated, err := services.AggregateLOTs(resolved, book, bookRepo)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}

@@ -181,7 +179,7 @@ func (h *VendorSpecHandler) ApplyVendorSpec(c *gin.Context) {
		} `json:"items"`
	}
	if err := c.ShouldBindJSON(&body); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		RespondError(c, http.StatusBadRequest, "invalid request", err)
		return
	}

@@ -194,14 +192,8 @@ func (h *VendorSpecHandler) ApplyVendorSpec(c *gin.Context) {
		})
	}

	itemsJSON, err := json.Marshal(newItems)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	if err := h.localDB.DB().Model(cfg).Update("items", string(itemsJSON)).Error; err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
	if _, err := h.configService.ApplyVendorSpecItemsNoAuth(cfg.UUID, newItems); err != nil {
		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}
@@ -1,11 +1,13 @@
package handlers

import (
	"fmt"
	"html/template"
	"strconv"
	"strings"

	qfassets "git.mchus.pro/mchus/quoteforge"
	"git.mchus.pro/mchus/quoteforge/internal/appmeta"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"github.com/gin-gonic/gin"

@@ -111,15 +113,18 @@ func NewWebHandler(_ string, localDB *localdb.LocalDB) (*WebHandler, error) {
}

func (h *WebHandler) render(c *gin.Context, name string, data gin.H) {
	data["AppVersion"] = appmeta.Version()
	c.Header("Content-Type", "text/html; charset=utf-8")
	tmpl, ok := h.templates[name]
	if !ok {
		c.String(500, "Template not found: %s", name)
		_ = c.Error(fmt.Errorf("template %q not found", name))
		c.String(500, "Template error")
		return
	}
	// Execute the page template which will use base
	if err := tmpl.ExecuteTemplate(c.Writer, name, data); err != nil {
		c.String(500, "Template error: %v", err)
		_ = c.Error(err)
		c.String(500, "Template error")
	}
}
internal/handlers/web_test.go (new file, 47 lines)
@@ -0,0 +1,47 @@
package handlers

import (
	"errors"
	"html/template"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"

	"github.com/gin-gonic/gin"
)

func TestWebHandlerRenderHidesTemplateExecutionError(t *testing.T) {
	gin.SetMode(gin.TestMode)

	tmpl := template.Must(template.New("broken.html").Funcs(template.FuncMap{
		"boom": func() (string, error) {
			return "", errors.New("secret template failure")
		},
	}).Parse(`{{define "broken.html"}}{{boom}}{{end}}`))

	handler := &WebHandler{
		templates: map[string]*template.Template{
			"broken.html": tmpl,
		},
	}

	rec := httptest.NewRecorder()
	ctx, _ := gin.CreateTestContext(rec)
	ctx.Request = httptest.NewRequest(http.MethodGet, "/broken", nil)

	handler.render(ctx, "broken.html", gin.H{})

	if rec.Code != http.StatusInternalServerError {
		t.Fatalf("expected 500, got %d", rec.Code)
	}
	if body := strings.TrimSpace(rec.Body.String()); body != "Template error" {
		t.Fatalf("expected generic template error, got %q", body)
	}
	if len(ctx.Errors) != 1 {
		t.Fatalf("expected logged template error, got %d", len(ctx.Errors))
	}
	if !strings.Contains(ctx.Errors.String(), "secret template failure") {
		t.Fatalf("expected original error in gin context, got %q", ctx.Errors.String())
	}
}
@@ -28,8 +28,9 @@ type ComponentSyncResult struct {
func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error) {
	startTime := time.Now()

	// Query to join lot with qt_lot_metadata (metadata only, no pricing)
	// Use LEFT JOIN to include lots without metadata
	// Build the component catalog from every runtime source of LOT names.
	// Storage lots may exist in qt_lot_metadata / qt_pricelist_items before they appear in lot,
	// so the sync cannot start from lot alone.
	type componentRow struct {
		LotName        string
		LotDescription string

@@ -40,15 +41,29 @@ func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error)
	var rows []componentRow
	err := mariaDB.Raw(`
		SELECT
			l.lot_name,
			l.lot_description,
			COALESCE(c.code, SUBSTRING_INDEX(l.lot_name, '_', 1)) as category,
			m.model
		FROM lot l
		LEFT JOIN qt_lot_metadata m ON l.lot_name = m.lot_name
			src.lot_name,
			COALESCE(MAX(NULLIF(TRIM(l.lot_description), '')), '') AS lot_description,
			COALESCE(
				MAX(NULLIF(TRIM(c.code), '')),
				MAX(NULLIF(TRIM(l.lot_category), '')),
				SUBSTRING_INDEX(src.lot_name, '_', 1)
			) AS category,
			MAX(NULLIF(TRIM(m.model), '')) AS model
		FROM (
			SELECT lot_name FROM lot
			UNION
			SELECT lot_name FROM qt_lot_metadata
			WHERE is_hidden = FALSE OR is_hidden IS NULL
			UNION
			SELECT lot_name FROM qt_pricelist_items
		) src
		LEFT JOIN lot l ON l.lot_name = src.lot_name
		LEFT JOIN qt_lot_metadata m
			ON m.lot_name = src.lot_name
			AND (m.is_hidden = FALSE OR m.is_hidden IS NULL)
		LEFT JOIN qt_categories c ON m.category_id = c.id
		WHERE m.is_hidden = FALSE OR m.is_hidden IS NULL
		ORDER BY l.lot_name
		GROUP BY src.lot_name
		ORDER BY src.lot_name
	`).Scan(&rows).Error
	if err != nil {
		return nil, fmt.Errorf("querying components from MariaDB: %w", err)

@@ -71,18 +86,25 @@ func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error)
		existingMap[c.LotName] = true
	}

	// Prepare components for batch insert/update
	// Prepare components for batch insert/update.
	// Source joins may duplicate the same lot_name, so collapse them before insert.
	syncTime := time.Now()
	components := make([]LocalComponent, 0, len(rows))
	componentIndex := make(map[string]int, len(rows))
	newCount := 0

	for _, row := range rows {
		lotName := strings.TrimSpace(row.LotName)
		if lotName == "" {
			continue
		}

		category := ""
		if row.Category != nil {
			category = *row.Category
			category = strings.TrimSpace(*row.Category)
		} else {
			// Parse category from lot_name (e.g., "CPU_AMD_9654" -> "CPU")
			parts := strings.SplitN(row.LotName, "_", 2)
			parts := strings.SplitN(lotName, "_", 2)
			if len(parts) >= 1 {
				category = parts[0]
			}

@@ -90,18 +112,34 @@ func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error)

		model := ""
		if row.Model != nil {
			model = *row.Model
			model = strings.TrimSpace(*row.Model)
		}

		comp := LocalComponent{
			LotName:        row.LotName,
			LotDescription: row.LotDescription,
			LotName:        lotName,
			LotDescription: strings.TrimSpace(row.LotDescription),
			Category:       category,
			Model:          model,
		}

		if idx, exists := componentIndex[lotName]; exists {
			// Keep the first row, but fill any missing metadata from duplicates.
			if components[idx].LotDescription == "" && comp.LotDescription != "" {
				components[idx].LotDescription = comp.LotDescription
			}
			if components[idx].Category == "" && comp.Category != "" {
				components[idx].Category = comp.Category
			}
			if components[idx].Model == "" && comp.Model != "" {
				components[idx].Model = comp.Model
			}
			continue
		}

		componentIndex[lotName] = len(components)
		components = append(components, comp)

		if !existingMap[row.LotName] {
		if !existingMap[lotName] {
			newCount++
		}
	}
@@ -95,3 +95,60 @@ func TestConfigurationSnapshotPreservesBusinessFields(t *testing.T) {
		t.Fatalf("lot mappings lost in snapshot: %+v", decoded.VendorSpec)
	}
}

func TestConfigurationFingerprintIncludesPricingSelectorsAndVendorSpec(t *testing.T) {
	estimateID := uint(11)
	warehouseID := uint(22)
	competitorID := uint(33)

	base := &LocalConfiguration{
		UUID:                  "cfg-1",
		Name:                  "Config",
		ServerCount:           1,
		Items:                 LocalConfigItems{{LotName: "LOT_A", Quantity: 1, UnitPrice: 100}},
		PricelistID:           &estimateID,
		WarehousePricelistID:  &warehouseID,
		CompetitorPricelistID: &competitorID,
		DisablePriceRefresh:   true,
		OnlyInStock:           true,
		VendorSpec: VendorSpec{
			{
				SortOrder:        10,
				VendorPartnumber: "PN-1",
				Quantity:         1,
			},
		},
	}

	baseFingerprint, err := BuildConfigurationSpecPriceFingerprint(base)
	if err != nil {
		t.Fatalf("base fingerprint: %v", err)
	}

	changedPricelist := *base
	newEstimateID := uint(44)
	changedPricelist.PricelistID = &newEstimateID
	pricelistFingerprint, err := BuildConfigurationSpecPriceFingerprint(&changedPricelist)
	if err != nil {
		t.Fatalf("pricelist fingerprint: %v", err)
	}
	if pricelistFingerprint == baseFingerprint {
		t.Fatalf("expected pricelist selector to affect fingerprint")
	}

	changedVendorSpec := *base
	changedVendorSpec.VendorSpec = VendorSpec{
		{
			SortOrder:        10,
			VendorPartnumber: "PN-2",
			Quantity:         1,
		},
	}
	vendorFingerprint, err := BuildConfigurationSpecPriceFingerprint(&changedVendorSpec)
	if err != nil {
		t.Fatalf("vendor fingerprint: %v", err)
	}
	if vendorFingerprint == baseFingerprint {
		t.Fatalf("expected vendor spec to affect fingerprint")
	}
}
@@ -34,6 +34,7 @@ func ConfigurationToLocal(cfg *models.Configuration) *LocalConfiguration {
		PricelistID:           cfg.PricelistID,
		WarehousePricelistID:  cfg.WarehousePricelistID,
		CompetitorPricelistID: cfg.CompetitorPricelistID,
		ConfigType:            cfg.ConfigType,
		VendorSpec:            modelVendorSpecToLocal(cfg.VendorSpec),
		DisablePriceRefresh:   cfg.DisablePriceRefresh,
		OnlyInStock:           cfg.OnlyInStock,

@@ -82,6 +83,7 @@ func LocalToConfiguration(local *LocalConfiguration) *models.Configuration {
		PricelistID:           local.PricelistID,
		WarehousePricelistID:  local.WarehousePricelistID,
		CompetitorPricelistID: local.CompetitorPricelistID,
		ConfigType:            local.ConfigType,
		VendorSpec:            localVendorSpecToModel(local.VendorSpec),
		DisablePriceRefresh:   local.DisablePriceRefresh,
		OnlyInStock:           local.OnlyInStock,
@@ -116,6 +116,14 @@ func New(dbPath string) (*LocalDB, error) {
|
||||
return nil, fmt.Errorf("opening sqlite database: %w", err)
|
||||
}
|
||||
|
||||
// Enable WAL mode so background sync writes never block UI reads.
|
||||
if err := db.Exec("PRAGMA journal_mode=WAL").Error; err != nil {
|
||||
slog.Warn("failed to enable WAL mode", "error", err)
|
||||
}
|
||||
if err := db.Exec("PRAGMA synchronous=NORMAL").Error; err != nil {
|
||||
slog.Warn("failed to set synchronous=NORMAL", "error", err)
|
||||
}
|
||||
|
||||
if err := ensureLocalProjectsTable(db); err != nil {
|
||||
return nil, fmt.Errorf("ensure local_projects table: %w", err)
|
||||
}
|
||||
@@ -611,6 +619,46 @@ func (l *LocalDB) SaveProject(project *LocalProject) error {
|
||||
return l.db.Save(project).Error
|
||||
}
|
||||
|
||||
// SaveProjectPreservingUpdatedAt stores a project without replacing UpdatedAt
|
||||
// with the current local sync time.
|
||||
func (l *LocalDB) SaveProjectPreservingUpdatedAt(project *LocalProject) error {
|
||||
if project == nil {
|
||||
return fmt.Errorf("project is nil")
|
||||
}
|
||||
|
||||
if project.ID == 0 && strings.TrimSpace(project.UUID) != "" {
|
||||
var existing LocalProject
|
||||
err := l.db.Where("uuid = ?", project.UUID).First(&existing).Error
|
||||
if err == nil {
|
||||
project.ID = existing.ID
|
||||
} else if !errors.Is(err, gorm.ErrRecordNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if project.ID == 0 {
|
||||
return l.db.Create(project).Error
|
||||
}
|
||||
|
||||
return l.db.Model(&LocalProject{}).
|
||||
Where("id = ?", project.ID).
|
||||
UpdateColumns(map[string]interface{}{
|
||||
"uuid": project.UUID,
|
||||
"server_id": project.ServerID,
|
||||
"owner_username": project.OwnerUsername,
|
||||
"code": project.Code,
|
||||
"variant": project.Variant,
|
||||
"name": project.Name,
|
||||
"tracker_url": project.TrackerURL,
|
||||
"is_active": project.IsActive,
|
||||
"is_system": project.IsSystem,
|
||||
"created_at": project.CreatedAt,
|
||||
"updated_at": project.UpdatedAt,
|
||||
"synced_at": project.SyncedAt,
|
||||
"sync_status": project.SyncStatus,
|
||||
}).Error
|
||||
}
|
||||
|
||||
func (l *LocalDB) GetProjects(ownerUsername string, includeArchived bool) ([]LocalProject, error) {
|
||||
var projects []LocalProject
|
||||
query := l.db.Model(&LocalProject{}).Where("owner_username = ?", ownerUsername)
|
||||
@@ -1026,6 +1074,26 @@ func (l *LocalDB) GetLastSyncTime() *time.Time {
|
||||
return &t
|
||||
}
|
||||
|
||||
func (l *LocalDB) getAppSettingValue(key string) (string, bool) {
|
||||
var setting struct {
|
||||
Value string
|
||||
}
|
||||
if err := l.db.Table("app_settings").
|
||||
Where("key = ?", key).
|
||||
First(&setting).Error; err != nil {
|
||||
return "", false
|
||||
}
|
||||
return setting.Value, true
|
||||
}
|
||||
|
||||
func (l *LocalDB) upsertAppSetting(tx *gorm.DB, key, value string, updatedAt time.Time) error {
|
||||
return tx.Exec(`
|
||||
INSERT INTO app_settings (key, value, updated_at)
|
||||
VALUES (?, ?, ?)
|
||||
ON CONFLICT(key) DO UPDATE SET value = excluded.value, updated_at = excluded.updated_at
|
||||
`, key, value, updatedAt.Format(time.RFC3339)).Error
|
||||
}
|
||||
|
||||
// SetLastSyncTime sets the last sync timestamp
|
||||
func (l *LocalDB) SetLastSyncTime(t time.Time) error {
|
||||
// Using raw SQL for upsert since SQLite doesn't have native UPSERT in all versions
|
||||
@@ -1036,6 +1104,55 @@ func (l *LocalDB) SetLastSyncTime(t time.Time) error {
|
||||
`, "last_pricelist_sync", t.Format(time.RFC3339), time.Now().Format(time.RFC3339)).Error
|
||||
}
|
||||
|
||||
func (l *LocalDB) GetLastPricelistSyncAttemptAt() *time.Time {
|
||||
value, ok := l.getAppSettingValue("last_pricelist_sync_attempt_at")
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
t, err := time.Parse(time.RFC3339, value)
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
return &t
|
||||
}
|
||||
|
||||
func (l *LocalDB) GetLastPricelistSyncStatus() string {
|
||||
value, ok := l.getAppSettingValue("last_pricelist_sync_status")
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return strings.TrimSpace(value)
|
||||
}
|
||||
|
||||
func (l *LocalDB) GetLastPricelistSyncError() string {
|
||||
value, ok := l.getAppSettingValue("last_pricelist_sync_error")
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return strings.TrimSpace(value)
|
||||
}
|
||||
|
||||
func (l *LocalDB) SetPricelistSyncResult(status, errorText string, attemptedAt time.Time) error {
|
||||
status = strings.TrimSpace(status)
|
||||
errorText = strings.TrimSpace(errorText)
|
||||
if status == "" {
|
||||
status = "unknown"
|
||||
}
|
||||
|
||||
return l.db.Transaction(func(tx *gorm.DB) error {
|
||||
if err := l.upsertAppSetting(tx, "last_pricelist_sync_status", status, attemptedAt); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := l.upsertAppSetting(tx, "last_pricelist_sync_error", errorText, attemptedAt); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := l.upsertAppSetting(tx, "last_pricelist_sync_attempt_at", attemptedAt.Format(time.RFC3339), attemptedAt); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
// CountLocalPricelists returns the number of local pricelists
|
||||
func (l *LocalDB) CountLocalPricelists() int64 {
|
||||
var count int64
|
||||
@@ -1043,6 +1160,29 @@ func (l *LocalDB) CountLocalPricelists() int64 {
	return count
}

// CountAllPricelistItems returns total rows across all local_pricelist_items.
func (l *LocalDB) CountAllPricelistItems() int64 {
	var count int64
	l.db.Model(&LocalPricelistItem{}).Count(&count)
	return count
}

// CountComponents returns the number of rows in local_components.
func (l *LocalDB) CountComponents() int64 {
	var count int64
	l.db.Model(&LocalComponent{}).Count(&count)
	return count
}

// DBFileSizeBytes returns the size of the SQLite database file in bytes.
func (l *LocalDB) DBFileSizeBytes() int64 {
	info, err := os.Stat(l.path)
	if err != nil {
		return 0
	}
	return info.Size()
}

// GetLatestLocalPricelist returns the most recently synced pricelist
func (l *LocalDB) GetLatestLocalPricelist() (*LocalPricelist, error) {
	var pricelist LocalPricelist
@@ -1147,20 +1287,20 @@ func (l *LocalDB) CountLocalPricelistItemsWithEmptyCategory(pricelistID uint) (i
	return count, nil
}

-// SaveLocalPricelistItems saves pricelist items to local SQLite
+// SaveLocalPricelistItems saves pricelist items to local SQLite.
+// Duplicate (pricelist_id, lot_name) rows are silently ignored.
func (l *LocalDB) SaveLocalPricelistItems(items []LocalPricelistItem) error {
	if len(items) == 0 {
		return nil
	}

	// Batch insert
	batchSize := 500
	for i := 0; i < len(items); i += batchSize {
		end := i + batchSize
		if end > len(items) {
			end = len(items)
		}
-		if err := l.db.CreateInBatches(items[i:end], batchSize).Error; err != nil {
+		if err := l.db.Clauses(clause.OnConflict{DoNothing: true}).CreateInBatches(items[i:end], batchSize).Error; err != nil {
			return err
		}
	}
@@ -1210,6 +1350,32 @@ func (l *LocalDB) GetLocalPriceForLot(pricelistID uint, lotName string) (float64
	return item.Price, nil
}

// GetLocalPricesForLots returns prices for multiple lots from a local pricelist in a single query.
// Uses the composite index (pricelist_id, lot_name). Missing lots are omitted from the result.
func (l *LocalDB) GetLocalPricesForLots(pricelistID uint, lotNames []string) (map[string]float64, error) {
	result := make(map[string]float64, len(lotNames))
	if len(lotNames) == 0 {
		return result, nil
	}
	type row struct {
		LotName string  `gorm:"column:lot_name"`
		Price   float64 `gorm:"column:price"`
	}
	var rows []row
	if err := l.db.Model(&LocalPricelistItem{}).
		Select("lot_name, price").
		Where("pricelist_id = ? AND lot_name IN ?", pricelistID, lotNames).
		Find(&rows).Error; err != nil {
		return nil, err
	}
	for _, r := range rows {
		if r.Price > 0 {
			result[r.LotName] = r.Price
		}
	}
	return result, nil
}
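`GetLocalPricesForLots` folds the query rows into a map and drops non-positive prices, so callers never see zero-priced lots. A minimal standalone sketch of that folding step (the `row` type and sample data here are illustrative, not taken from the repository):

```go
package main

import "fmt"

// row mirrors the projected (lot_name, price) columns of the batch query.
type row struct {
	LotName string
	Price   float64
}

// pricesFromRows keeps only positive prices, as GetLocalPricesForLots does.
func pricesFromRows(rows []row) map[string]float64 {
	result := make(map[string]float64, len(rows))
	for _, r := range rows {
		if r.Price > 0 {
			result[r.LotName] = r.Price
		}
	}
	return result
}

func main() {
	rows := []row{{"CPU_A", 1000}, {"RAM_B", 0}, {"SSD_C", 250.5}}
	fmt.Println(len(pricesFromRows(rows))) // RAM_B is dropped: prints 2
}
```

Missing and zero-priced lots look the same to the caller, which is why the diff's quote-service fallback only copies what the batch returns.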
// GetLocalLotCategoriesByServerPricelistID returns lot_category for each lot_name from a local pricelist resolved by server ID.
// Missing lots are not included in the map; caller is responsible for strict validation.
func (l *LocalDB) GetLocalLotCategoriesByServerPricelistID(serverPricelistID uint, lotNames []string) (map[string]string, error) {
@@ -119,6 +119,11 @@ var localMigrations = []localMigration{
		name: "Convert local partnumber book cache to book membership + deduplicated PN catalog",
		run:  migrateLocalPartnumberBookCatalog,
	},
+	{
+		id:   "2026_03_13_pricelist_items_dedup_unique",
+		name: "Deduplicate local_pricelist_items and add unique index on (pricelist_id, lot_name)",
+		run:  deduplicatePricelistItemsAndAddUniqueIndex,
+	},
}

type localPartnumberCatalogRow struct {

@@ -1092,3 +1097,26 @@ func rebuildLocalPartnumberBookCatalog(tx *gorm.DB, catalog map[string]*localPar
	}
	return nil
}

func deduplicatePricelistItemsAndAddUniqueIndex(tx *gorm.DB) error {
	// Remove duplicate (pricelist_id, lot_name) rows keeping only the row with the lowest id.
	if err := tx.Exec(`
		DELETE FROM local_pricelist_items
		WHERE id NOT IN (
			SELECT MIN(id) FROM local_pricelist_items
			GROUP BY pricelist_id, lot_name
		)
	`).Error; err != nil {
		return fmt.Errorf("deduplicate local_pricelist_items: %w", err)
	}

	// Add unique index to prevent future duplicates.
	if err := tx.Exec(`
		CREATE UNIQUE INDEX IF NOT EXISTS idx_local_pricelist_items_pricelist_lot_unique
		ON local_pricelist_items(pricelist_id, lot_name)
	`).Error; err != nil {
		return fmt.Errorf("create unique index on local_pricelist_items: %w", err)
	}
	slog.Info("deduplicated local_pricelist_items and added unique index")
	return nil
}
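The migration's `DELETE ... WHERE id NOT IN (SELECT MIN(id) ... GROUP BY ...)` pattern keeps exactly one row per `(pricelist_id, lot_name)` key: the one with the lowest id. A small in-memory sketch of the same selection rule (types and sample data are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

type item struct {
	ID          int
	PricelistID uint
	LotName     string
}

type key struct {
	PricelistID uint
	LotName     string
}

// keepMinIDPerKey returns the sorted IDs that would survive the SQL dedup:
// for each (pricelist_id, lot_name) group, only the lowest id is kept.
func keepMinIDPerKey(items []item) []int {
	min := map[key]int{}
	for _, it := range items {
		k := key{it.PricelistID, it.LotName}
		if cur, ok := min[k]; !ok || it.ID < cur {
			min[k] = it.ID
		}
	}
	kept := make([]int, 0, len(min))
	for _, id := range min {
		kept = append(kept, id)
	}
	sort.Ints(kept)
	return kept
}

func main() {
	items := []item{{5, 1, "CPU_A"}, {2, 1, "CPU_A"}, {7, 1, "RAM_B"}}
	fmt.Println(keepMinIDPerKey(items)) // duplicates of (1, "CPU_A") collapse to id 2: prints [2 7]
}
```

Running the delete before creating the unique index is what makes the index creation safe; with duplicates still present the `CREATE UNIQUE INDEX` would fail.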
@@ -110,6 +110,7 @@ type LocalConfiguration struct {
	CreatedAt        time.Time  `json:"created_at"`
	UpdatedAt        time.Time  `json:"updated_at"`
	SyncedAt         *time.Time `json:"synced_at"`
+	ConfigType       string     `gorm:"default:server" json:"config_type"` // "server" | "storage"
	SyncStatus       string     `gorm:"default:'local'" json:"sync_status"` // 'local', 'synced', 'modified'
	OriginalUserID   uint       `json:"original_user_id"` // UserID from MariaDB for reference
	OriginalUsername string     `gorm:"not null;default:'';index" json:"original_username"`

internal/localdb/project_sync_timestamp_test.go (Normal file, 53 lines)
@@ -0,0 +1,53 @@
package localdb

import (
	"path/filepath"
	"testing"
	"time"
)

func TestSaveProjectPreservingUpdatedAtKeepsProvidedTimestamp(t *testing.T) {
	dbPath := filepath.Join(t.TempDir(), "project_sync_timestamp.db")

	local, err := New(dbPath)
	if err != nil {
		t.Fatalf("open localdb: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	createdAt := time.Date(2026, 2, 1, 10, 0, 0, 0, time.UTC)
	updatedAt := time.Date(2026, 2, 3, 12, 30, 0, 0, time.UTC)
	project := &LocalProject{
		UUID:          "project-1",
		OwnerUsername: "tester",
		Code:          "OPS-1",
		Variant:       "Lenovo",
		IsActive:      true,
		CreatedAt:     createdAt,
		UpdatedAt:     updatedAt,
		SyncStatus:    "synced",
	}

	if err := local.SaveProjectPreservingUpdatedAt(project); err != nil {
		t.Fatalf("save project: %v", err)
	}

	syncedAt := time.Date(2026, 3, 16, 8, 45, 0, 0, time.UTC)
	project.SyncedAt = &syncedAt
	project.SyncStatus = "synced"

	if err := local.SaveProjectPreservingUpdatedAt(project); err != nil {
		t.Fatalf("save project second time: %v", err)
	}

	stored, err := local.GetProjectByUUID(project.UUID)
	if err != nil {
		t.Fatalf("get project: %v", err)
	}
	if !stored.UpdatedAt.Equal(updatedAt) {
		t.Fatalf("updated_at changed during sync save: got %s want %s", stored.UpdatedAt, updatedAt)
	}
	if stored.SyncedAt == nil || !stored.SyncedAt.Equal(syncedAt) {
		t.Fatalf("synced_at not updated correctly: got %+v want %s", stored.SyncedAt, syncedAt)
	}
}
@@ -112,10 +112,16 @@ func DecodeConfigurationSnapshot(data string) (*LocalConfiguration, error) {
}

type configurationSpecPriceFingerprint struct {
-	Items       []configurationSpecPriceFingerprintItem `json:"items"`
-	ServerCount int                                     `json:"server_count"`
-	TotalPrice  *float64                                `json:"total_price,omitempty"`
-	CustomPrice *float64                                `json:"custom_price,omitempty"`
+	Items                 []configurationSpecPriceFingerprintItem `json:"items"`
+	ServerCount           int                                     `json:"server_count"`
+	TotalPrice            *float64                                `json:"total_price,omitempty"`
+	CustomPrice           *float64                                `json:"custom_price,omitempty"`
+	PricelistID           *uint                                   `json:"pricelist_id,omitempty"`
+	WarehousePricelistID  *uint                                   `json:"warehouse_pricelist_id,omitempty"`
+	CompetitorPricelistID *uint                                   `json:"competitor_pricelist_id,omitempty"`
+	DisablePriceRefresh   bool                                    `json:"disable_price_refresh"`
+	OnlyInStock           bool                                    `json:"only_in_stock"`
+	VendorSpec            VendorSpec                              `json:"vendor_spec,omitempty"`
}

type configurationSpecPriceFingerprintItem struct {

@@ -146,10 +152,16 @@ func BuildConfigurationSpecPriceFingerprint(localCfg *LocalConfiguration) (strin
	})

	payload := configurationSpecPriceFingerprint{
-		Items:       items,
-		ServerCount: localCfg.ServerCount,
-		TotalPrice:  localCfg.TotalPrice,
-		CustomPrice: localCfg.CustomPrice,
+		Items:                 items,
+		ServerCount:           localCfg.ServerCount,
+		TotalPrice:            localCfg.TotalPrice,
+		CustomPrice:           localCfg.CustomPrice,
+		PricelistID:           localCfg.PricelistID,
+		WarehousePricelistID:  localCfg.WarehousePricelistID,
+		CompetitorPricelistID: localCfg.CompetitorPricelistID,
+		DisablePriceRefresh:   localCfg.DisablePriceRefresh,
+		OnlyInStock:           localCfg.OnlyInStock,
+		VendorSpec:            localCfg.VendorSpec,
	}

	raw, err := json.Marshal(payload)
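The fingerprint is built by marshalling a stable payload struct to JSON, so any field added to the struct (as the pricelist IDs and flags are here) changes the fingerprint and therefore makes a config update count as a revision-worthy change. A reduced sketch of JSON-based fingerprinting; the payload fields are a stand-in for the real struct, and the SHA-256 step is an assumption (the diff does not show whether the app hashes the JSON or compares it directly):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// payload is a reduced stand-in for configurationSpecPriceFingerprint.
type payload struct {
	ServerCount         int   `json:"server_count"`
	PricelistID         *uint `json:"pricelist_id,omitempty"` // nil is omitted entirely
	DisablePriceRefresh bool  `json:"disable_price_refresh"`
}

// fingerprint hashes the canonical JSON encoding of the payload.
func fingerprint(p payload) string {
	raw, _ := json.Marshal(p) // marshalling a plain struct of these types cannot fail
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:])
}

func main() {
	id := uint(7)
	a := fingerprint(payload{ServerCount: 2})
	b := fingerprint(payload{ServerCount: 2, PricelistID: &id})
	fmt.Println(a != b) // setting a fingerprinted field changes the hash: prints true
}
```

Because `encoding/json` emits struct fields in declaration order, the encoding is deterministic for a fixed struct, which is what makes fingerprint comparison stable across runs.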
@@ -111,6 +111,7 @@ type Configuration struct {
	WarehousePricelistID  *uint      `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
	CompetitorPricelistID *uint      `gorm:"index" json:"competitor_pricelist_id,omitempty"`
	VendorSpec            VendorSpec `gorm:"type:json" json:"vendor_spec,omitempty"`
+	ConfigType            string     `gorm:"size:20;default:server" json:"config_type"` // "server" | "storage"
	DisablePriceRefresh   bool       `gorm:"default:false" json:"disable_price_refresh"`
	OnlyInStock           bool       `gorm:"default:false" json:"only_in_stock"`
	Line                  int        `gorm:"column:line_no;index" json:"line"`

@@ -31,7 +31,9 @@ func Migrate(db *gorm.DB) error {
			errStr := err.Error()
			if strings.Contains(errStr, "Can't DROP") ||
				strings.Contains(errStr, "Duplicate key name") ||
-				strings.Contains(errStr, "check that it exists") {
+				strings.Contains(errStr, "check that it exists") ||
+				strings.Contains(errStr, "Cannot change column") ||
+				strings.Contains(errStr, "used in a foreign key constraint") {
				slog.Warn("migration warning (skipped)", "model", model, "error", errStr)
				continue
			}

@@ -58,6 +58,7 @@ type CreateConfigRequest struct {
	PricelistID           *uint  `json:"pricelist_id,omitempty"`
	WarehousePricelistID  *uint  `json:"warehouse_pricelist_id,omitempty"`
	CompetitorPricelistID *uint  `json:"competitor_pricelist_id,omitempty"`
+	ConfigType            string `json:"config_type,omitempty"` // "server" | "storage"
	DisablePriceRefresh   bool   `json:"disable_price_refresh"`
	OnlyInStock           bool   `json:"only_in_stock"`
}

@@ -103,9 +104,13 @@ func (s *ConfigurationService) Create(ownerUsername string, req *CreateConfigReq
		PricelistID:           pricelistID,
		WarehousePricelistID:  req.WarehousePricelistID,
		CompetitorPricelistID: req.CompetitorPricelistID,
+		ConfigType:            req.ConfigType,
		DisablePriceRefresh:   req.DisablePriceRefresh,
		OnlyInStock:           req.OnlyInStock,
	}
+	if config.ConfigType == "" {
+		config.ConfigType = "server"
+	}

	if err := s.configRepo.Create(config); err != nil {
		return nil, err
@@ -56,11 +56,24 @@ type ProjectExportData struct {
}

type ProjectPricingExportOptions struct {
-	IncludeLOT        bool `json:"include_lot"`
-	IncludeBOM        bool `json:"include_bom"`
-	IncludeEstimate   bool `json:"include_estimate"`
-	IncludeStock      bool `json:"include_stock"`
-	IncludeCompetitor bool `json:"include_competitor"`
+	IncludeLOT        bool    `json:"include_lot"`
+	IncludeBOM        bool    `json:"include_bom"`
+	IncludeEstimate   bool    `json:"include_estimate"`
+	IncludeStock      bool    `json:"include_stock"`
+	IncludeCompetitor bool    `json:"include_competitor"`
+	Basis             string  `json:"basis"`       // "fob" or "ddp"; empty defaults to "fob"
+	SaleMarkup        float64 `json:"sale_markup"` // DDP multiplier; 0 defaults to 1.3
}

func (o ProjectPricingExportOptions) saleMarkupFactor() float64 {
	if o.SaleMarkup > 0 {
		return o.SaleMarkup
	}
	return 1.3
}

func (o ProjectPricingExportOptions) isDDP() bool {
	return strings.EqualFold(strings.TrimSpace(o.Basis), "ddp")
}
type ProjectPricingExportData struct {

@@ -251,18 +264,16 @@ func (s *ExportService) ToPricingCSV(w io.Writer, data *ProjectPricingExportData
		return fmt.Errorf("failed to write pricing header: %w", err)
	}

-	for idx, cfg := range data.Configs {
+	writeRows := opts.IncludeLOT || opts.IncludeBOM
+	for _, cfg := range data.Configs {
		if err := csvWriter.Write(pricingConfigSummaryRow(cfg, opts)); err != nil {
			return fmt.Errorf("failed to write config summary row: %w", err)
		}
-		for _, row := range cfg.Rows {
-			if err := csvWriter.Write(pricingCSVRow(row, opts)); err != nil {
-				return fmt.Errorf("failed to write pricing row: %w", err)
-			}
-		}
-		if idx < len(data.Configs)-1 {
-			if err := csvWriter.Write([]string{}); err != nil {
-				return fmt.Errorf("failed to write separator row: %w", err)
+		if writeRows {
+			for _, row := range cfg.Rows {
+				if err := csvWriter.Write(pricingCSVRow(row, opts)); err != nil {
+					return fmt.Errorf("failed to write pricing row: %w", err)
+				}
			}
		}
	}

@@ -424,6 +435,9 @@ func (s *ExportService) buildPricingExportBlock(cfg *models.Configuration, opts
			Competitor: totalForUnitPrice(priceMap[item.LotName].Competitor, item.Quantity),
		})
	}
+	if opts.isDDP() {
+		applyDDPMarkup(block.Rows, opts.saleMarkupFactor())
+	}
	return block, nil
}

@@ -443,9 +457,29 @@ func (s *ExportService) buildPricingExportBlock(cfg *models.Configuration, opts
		})
	}

+	if opts.isDDP() {
+		applyDDPMarkup(block.Rows, opts.saleMarkupFactor())
+	}
+
	return block, nil
}

func applyDDPMarkup(rows []ProjectPricingExportRow, factor float64) {
	for i := range rows {
		rows[i].Estimate = scaleFloatPtr(rows[i].Estimate, factor)
		rows[i].Stock = scaleFloatPtr(rows[i].Stock, factor)
		rows[i].Competitor = scaleFloatPtr(rows[i].Competitor, factor)
	}
}

func scaleFloatPtr(v *float64, factor float64) *float64 {
	if v == nil {
		return nil
	}
	result := *v * factor
	return &result
}
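`applyDDPMarkup` multiplies every optional price by the markup factor while preserving `nil`: a missing price stays missing rather than becoming zero, and a fresh pointer is returned instead of mutating the input. A standalone sketch of the pointer-preserving scaling, using the 1.3 default factor from `saleMarkupFactor`:

```go
package main

import "fmt"

// scaleFloatPtr multiplies an optional price by factor; nil stays nil,
// and the original value is never mutated in place.
func scaleFloatPtr(v *float64, factor float64) *float64 {
	if v == nil {
		return nil
	}
	result := *v * factor
	return &result
}

func main() {
	price := 100.0
	scaled := scaleFloatPtr(&price, 1.3) // default DDP markup factor
	fmt.Println(*scaled)                 // prints 130
	fmt.Println(scaleFloatPtr(nil, 1.3) == nil) // unknown price stays unknown: prints true
}
```

Returning a new pointer matters here because the export rows may still be referenced elsewhere; scaling through the old pointer would silently change the FOB figures too.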
// resolveCategories returns lot_name → category map.
// Primary source: pricelist items (lot_category). Fallback: local_components table.
func (s *ExportService) resolveCategories(pricelistID *uint, lotNames []string) map[string]string {

@@ -735,7 +769,7 @@ func pricingConfigSummaryRow(cfg ProjectPricingExportConfig, opts ProjectPricing
		record = append(record, "")
	}
	record = append(record,
		"",
		emptyDash(cfg.Article),
		emptyDash(cfg.Name),
		fmt.Sprintf("%d", exportPositiveInt(cfg.ServerCount, 1)),
	)

@@ -49,11 +49,13 @@ func NewLocalConfigurationService(

// Create creates a new configuration in local SQLite and queues it for sync
func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConfigRequest) (*models.Configuration, error) {
-	// If online, check for new pricelists first
+	// If online, trigger pricelist sync in the background — do not block config creation
	if s.isOnline() {
-		if err := s.syncService.SyncPricelistsIfNeeded(); err != nil {
-			// Log but don't fail - we can still use local pricelists
-		}
+		go func() {
+			if err := s.syncService.SyncPricelistsIfNeeded(); err != nil {
+				// Log but don't fail - we can still use local pricelists
+			}
+		}()
	}

	projectUUID, err := s.resolveProjectUUID(ownerUsername, req.ProjectUUID)
@@ -99,10 +101,14 @@ func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConf
		PricelistID:           pricelistID,
		WarehousePricelistID:  req.WarehousePricelistID,
		CompetitorPricelistID: req.CompetitorPricelistID,
+		ConfigType:            req.ConfigType,
		DisablePriceRefresh:   req.DisablePriceRefresh,
		OnlyInStock:           req.OnlyInStock,
		CreatedAt:             time.Now(),
	}
+	if cfg.ConfigType == "" {
+		cfg.ConfigType = "server"
+	}

	// Convert to local model
	localCfg := localdb.ConfigurationToLocal(cfg)
@@ -399,17 +405,29 @@ func (s *LocalConfigurationService) RefreshPrices(uuid string, ownerUsername str
		return nil, ErrConfigForbidden
	}

-	// Refresh local pricelists when online and use latest active/local pricelist for recalculation.
+	// Refresh local pricelists when online.
	if s.isOnline() {
		_ = s.syncService.SyncPricelistsIfNeeded()
	}
-	latestPricelist, latestErr := s.localDB.GetLatestLocalPricelist()
+
+	// Use the pricelist stored in the config; fall back to latest if unavailable.
+	var pricelist *localdb.LocalPricelist
+	if localCfg.PricelistID != nil && *localCfg.PricelistID > 0 {
+		if pl, err := s.localDB.GetLocalPricelistByServerID(*localCfg.PricelistID); err == nil {
+			pricelist = pl
+		}
+	}
+	if pricelist == nil {
+		if pl, err := s.localDB.GetLatestLocalPricelist(); err == nil {
+			pricelist = pl
+		}
+	}

	// Update prices for all items from pricelist
	updatedItems := make(localdb.LocalConfigItems, len(localCfg.Items))
	for i, item := range localCfg.Items {
-		if latestErr == nil && latestPricelist != nil {
-			price, err := s.localDB.GetLocalPriceForLot(latestPricelist.ID, item.LotName)
+		if pricelist != nil {
+			price, err := s.localDB.GetLocalPriceForLot(pricelist.ID, item.LotName)
			if err == nil && price > 0 {
				updatedItems[i] = localdb.LocalConfigItem{
					LotName: item.LotName,

@@ -434,8 +452,8 @@ func (s *LocalConfigurationService) RefreshPrices(uuid string, ownerUsername str
	}

	localCfg.TotalPrice = &total
-	if latestErr == nil && latestPricelist != nil {
-		localCfg.PricelistID = &latestPricelist.ServerID
+	if pricelist != nil {
+		localCfg.PricelistID = &pricelist.ServerID
	}

	// Set price update timestamp and mark for sync
@@ -762,8 +780,10 @@ func (s *LocalConfigurationService) ListTemplates(page, perPage int) ([]models.C
	return templates[start:end], total, nil
}

-// RefreshPricesNoAuth updates all component prices in the configuration without ownership check
-func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Configuration, error) {
+// RefreshPricesNoAuth updates all component prices in the configuration without ownership check.
+// pricelistServerID optionally specifies which pricelist to use; if nil, the config's stored
+// pricelist is used; if that is also absent, the latest local pricelist is used as a fallback.
+func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string, pricelistServerID *uint) (*models.Configuration, error) {
	// Get configuration from local SQLite
	localCfg, err := s.localDB.GetConfigurationByUUID(uuid)
	if err != nil {

@@ -773,13 +793,36 @@ func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Co
	if s.isOnline() {
		_ = s.syncService.SyncPricelistsIfNeeded()
	}
-	latestPricelist, latestErr := s.localDB.GetLatestLocalPricelist()
+
+	// Resolve which pricelist to use:
+	// 1. Explicitly requested pricelist (from UI selection)
+	// 2. Pricelist stored in the configuration
+	// 3. Latest local pricelist as last-resort fallback
+	var targetServerID *uint
+	if pricelistServerID != nil && *pricelistServerID > 0 {
+		targetServerID = pricelistServerID
+	} else if localCfg.PricelistID != nil && *localCfg.PricelistID > 0 {
+		targetServerID = localCfg.PricelistID
+	}
+
+	var pricelist *localdb.LocalPricelist
+	if targetServerID != nil {
+		if pl, err := s.localDB.GetLocalPricelistByServerID(*targetServerID); err == nil {
+			pricelist = pl
+		}
+	}
+	if pricelist == nil {
+		// Fallback: use latest local pricelist
+		if pl, err := s.localDB.GetLatestLocalPricelist(); err == nil {
+			pricelist = pl
+		}
+	}

	// Update prices for all items from pricelist
	updatedItems := make(localdb.LocalConfigItems, len(localCfg.Items))
	for i, item := range localCfg.Items {
-		if latestErr == nil && latestPricelist != nil {
-			price, err := s.localDB.GetLocalPriceForLot(latestPricelist.ID, item.LotName)
+		if pricelist != nil {
+			price, err := s.localDB.GetLocalPriceForLot(pricelist.ID, item.LotName)
			if err == nil && price > 0 {
				updatedItems[i] = localdb.LocalConfigItem{
					LotName: item.LotName,

@@ -804,8 +847,8 @@ func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Co
	}

	localCfg.TotalPrice = &total
-	if latestErr == nil && latestPricelist != nil {
-		localCfg.PricelistID = &latestPricelist.ServerID
+	if pricelist != nil {
+		localCfg.PricelistID = &pricelist.ServerID
	}

	// Set price update timestamp and mark for sync
@@ -1205,21 +1248,55 @@ func hasNonRevisionConfigurationChanges(current *localdb.LocalConfiguration, nex
		current.ServerModel != next.ServerModel ||
		current.SupportCode != next.SupportCode ||
		current.Article != next.Article ||
+		current.DisablePriceRefresh != next.DisablePriceRefresh ||
+		current.OnlyInStock != next.OnlyInStock ||
		current.IsActive != next.IsActive ||
		current.Line != next.Line {
		return true
	}
-	if !equalUintPtr(current.PricelistID, next.PricelistID) ||
-		!equalUintPtr(current.WarehousePricelistID, next.WarehousePricelistID) ||
-		!equalUintPtr(current.CompetitorPricelistID, next.CompetitorPricelistID) ||
-		!equalStringPtr(current.ProjectUUID, next.ProjectUUID) {
+	if !equalStringPtr(current.ProjectUUID, next.ProjectUUID) {
		return true
	}
	return false
}
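The comparisons above rely on nil-aware pointer-equality helpers; `equalStringPtr` is partially visible later in this diff, while `equalUintPtr` is referenced but its body is not shown. A sketch of the usual pattern such helpers follow (this is an assumption about the body, not code from the repository):

```go
package main

import "fmt"

// equalUintPtr reports whether two optional uints hold the same value:
// both nil compare equal, exactly one nil compares unequal,
// otherwise the pointed-to values are compared.
func equalUintPtr(a, b *uint) bool {
	if a == nil && b == nil {
		return true
	}
	if a == nil || b == nil {
		return false
	}
	return *a == *b
}

func main() {
	x, y := uint(5), uint(5)
	fmt.Println(equalUintPtr(&x, &y))   // same value, different pointers: prints true
	fmt.Println(equalUintPtr(&x, nil))  // prints false
	fmt.Println(equalUintPtr(nil, nil)) // prints true
}
```

Comparing pointees rather than pointers is the point: two configs loaded separately from SQLite never share pointers, so a raw `a == b` comparison would report every optional field as changed.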
func (s *LocalConfigurationService) UpdateVendorSpecNoAuth(uuid string, spec localdb.VendorSpec) (*models.Configuration, error) {
	localCfg, err := s.localDB.GetConfigurationByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	localCfg.VendorSpec = spec
	localCfg.UpdatedAt = time.Now()
	localCfg.SyncStatus = "pending"

	cfg, err := s.saveWithVersionAndPending(localCfg, "update", "")
	if err != nil {
		return nil, fmt.Errorf("update vendor spec without auth with version: %w", err)
	}
	return cfg, nil
}

func (s *LocalConfigurationService) ApplyVendorSpecItemsNoAuth(uuid string, items localdb.LocalConfigItems) (*models.Configuration, error) {
	localCfg, err := s.localDB.GetConfigurationByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	localCfg.Items = items
	total := items.Total()
	if localCfg.ServerCount > 1 {
		total *= float64(localCfg.ServerCount)
	}
	localCfg.TotalPrice = &total
	localCfg.UpdatedAt = time.Now()
	localCfg.SyncStatus = "pending"

	cfg, err := s.saveWithVersionAndPending(localCfg, "update", "")
	if err != nil {
		return nil, fmt.Errorf("apply vendor spec items without auth with version: %w", err)
	}
	return cfg, nil
}

func equalStringPtr(a, b *string) bool {
	if a == nil && b == nil {
		return true
@@ -137,6 +137,77 @@ func TestUpdateNoAuthSkipsRevisionWhenSpecAndPriceUnchanged(t *testing.T) {
	}
}

func TestUpdateNoAuthCreatesRevisionWhenPricingSettingsChanged(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "pricing",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	if _, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:                "pricing",
		Items:               models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
		ServerCount:         1,
		DisablePriceRefresh: true,
		OnlyInStock:         true,
	}); err != nil {
		t.Fatalf("update pricing settings: %v", err)
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 2 {
		t.Fatalf("expected 2 versions after pricing settings change, got %d", len(versions))
	}
	if versions[1].VersionNo != 2 {
		t.Fatalf("expected latest version_no=2, got %d", versions[1].VersionNo)
	}
}

func TestUpdateVendorSpecNoAuthCreatesRevision(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "bom",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	spec := localdb.VendorSpec{
		{
			VendorPartnumber: "PN-001",
			Quantity:         2,
			SortOrder:        10,
			LotMappings: []localdb.VendorSpecLotMapping{
				{LotName: "CPU_A", QuantityPerPN: 1},
			},
		},
	}
	if _, err := service.UpdateVendorSpecNoAuth(created.UUID, spec); err != nil {
		t.Fatalf("update vendor spec: %v", err)
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 2 {
		t.Fatalf("expected 2 versions after vendor spec change, got %d", len(versions))
	}

	cfg, err := local.GetConfigurationByUUID(created.UUID)
	if err != nil {
		t.Fatalf("load config after vendor spec update: %v", err)
	}
	if len(cfg.VendorSpec) != 1 || cfg.VendorSpec[0].VendorPartnumber != "PN-001" {
		t.Fatalf("expected saved vendor spec, got %+v", cfg.VendorSpec)
	}
}

func TestReorderProjectConfigurationsDoesNotCreateNewVersions(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)
@@ -20,6 +20,8 @@ var (
	ErrProjectForbidden        = errors.New("access to project forbidden")
	ErrProjectCodeExists       = errors.New("project code and variant already exist")
	ErrCannotDeleteMainVariant = errors.New("cannot delete main variant")
+	ErrReservedMainVariant     = errors.New("variant name 'main' is reserved")
+	ErrCannotRenameMainVariant = errors.New("cannot rename main variant")
)

type ProjectService struct {

@@ -63,6 +65,9 @@ func (s *ProjectService) Create(ownerUsername string, req *CreateProjectRequest)
		return nil, fmt.Errorf("project code is required")
	}
	variant := strings.TrimSpace(req.Variant)
+	if err := validateProjectVariantName(variant); err != nil {
+		return nil, err
+	}
	if err := s.ensureUniqueProjectCodeVariant("", code, variant); err != nil {
		return nil, err
	}

@@ -104,7 +109,15 @@ func (s *ProjectService) Update(projectUUID, ownerUsername string, req *UpdatePr
		localProject.Code = code
	}
	if req.Variant != nil {
-		localProject.Variant = strings.TrimSpace(*req.Variant)
+		newVariant := strings.TrimSpace(*req.Variant)
+		// Block renaming of the main variant (empty Variant) — there must always be a main.
+		if strings.TrimSpace(localProject.Variant) == "" && newVariant != "" {
+			return nil, ErrCannotRenameMainVariant
+		}
+		localProject.Variant = newVariant
+		if err := validateProjectVariantName(localProject.Variant); err != nil {
+			return nil, err
+		}
	}
	if err := s.ensureUniqueProjectCodeVariant(projectUUID, localProject.Code, localProject.Variant); err != nil {
		return nil, err

@@ -166,6 +179,13 @@ func normalizeProjectVariant(variant string) string {
	return strings.ToLower(strings.TrimSpace(variant))
}

func validateProjectVariantName(variant string) error {
	if normalizeProjectVariant(variant) == "main" {
		return ErrReservedMainVariant
	}
	return nil
}
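`validateProjectVariantName` normalizes with trim plus lowercase before comparing against the reserved name, so " MAIN " is rejected just like "main". A standalone sketch of the same check (the error variable here is a local stand-in for the service's exported `ErrReservedMainVariant`):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

var errReservedMainVariant = errors.New("variant name 'main' is reserved")

// validateVariantName mirrors the trim+lowercase normalization used
// by normalizeProjectVariant before the reserved-name comparison.
func validateVariantName(variant string) error {
	if strings.ToLower(strings.TrimSpace(variant)) == "main" {
		return errReservedMainVariant
	}
	return nil
}

func main() {
	fmt.Println(validateVariantName("Lenovo") == nil)  // prints true
	fmt.Println(validateVariantName("  MAIN ") != nil) // reserved after normalization: prints true
}
```

Normalizing before comparing keeps the reserved name unambiguous: the main variant is represented by an empty `Variant` field, so no user-supplied spelling of "main" may ever collide with it.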
func (s *ProjectService) Archive(projectUUID, ownerUsername string) error {
	return s.setProjectActive(projectUUID, ownerUsername, false)
}

internal/services/project_test.go (Normal file, 60 lines)

@@ -0,0 +1,60 @@
package services

import (
	"errors"
	"path/filepath"
	"testing"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
)

func TestProjectServiceCreateRejectsReservedMainVariant(t *testing.T) {
	local, err := newProjectTestLocalDB(t)
	if err != nil {
		t.Fatalf("open localdb: %v", err)
	}
	service := NewProjectService(local)

	_, err = service.Create("tester", &CreateProjectRequest{
		Code:    "OPS-1",
		Variant: "main",
	})
	if !errors.Is(err, ErrReservedMainVariant) {
		t.Fatalf("expected ErrReservedMainVariant, got %v", err)
	}
}

func TestProjectServiceUpdateRejectsReservedMainVariant(t *testing.T) {
	local, err := newProjectTestLocalDB(t)
	if err != nil {
		t.Fatalf("open localdb: %v", err)
	}
	service := NewProjectService(local)

	created, err := service.Create("tester", &CreateProjectRequest{
		Code:    "OPS-1",
		Variant: "Lenovo",
	})
	if err != nil {
		t.Fatalf("create project: %v", err)
	}

	mainName := "main"
	_, err = service.Update(created.UUID, "tester", &UpdateProjectRequest{
		Variant: &mainName,
	})
	if !errors.Is(err, ErrReservedMainVariant) {
		t.Fatalf("expected ErrReservedMainVariant, got %v", err)
	}
}

func newProjectTestLocalDB(t *testing.T) (*localdb.LocalDB, error) {
	t.Helper()
	dbPath := filepath.Join(t.TempDir(), "project_test.db")
	local, err := localdb.New(dbPath)
	if err != nil {
		return nil, err
	}
	t.Cleanup(func() { _ = local.Close() })
	return local, nil
}
@@ -388,13 +388,14 @@ func (s *QuoteService) lookupPricesByPricelistID(pricelistID uint, lotNames []st
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback path (usually offline): local per-lot lookup.
|
||||
// Fallback path (usually offline): batch local lookup (single query via index).
|
||||
if s.localDB != nil {
|
||||
for _, lotName := range missing {
|
||||
price, found := s.lookupPriceByPricelistID(pricelistID, lotName)
|
||||
if found && price > 0 {
|
||||
result[lotName] = price
|
||||
loaded[lotName] = price
|
||||
if localPL, err := s.localDB.GetLocalPricelistByServerID(pricelistID); err == nil {
|
||||
if batchPrices, err := s.localDB.GetLocalPricesForLots(localPL.ID, missing); err == nil {
|
||||
for lotName, price := range batchPrices {
|
||||
result[lotName] = price
|
||||
loaded[lotName] = price
|
||||
}
|
||||
}
|
||||
}
|
||||
s.updateCache(pricelistID, missing, loaded)

@@ -168,6 +168,10 @@ func ensureClientSchemaStateTable(db *gorm.DB) error {
 		"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS competitor_pricelist_version VARCHAR(128) NULL AFTER warehouse_pricelist_version",
 		"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_error_code VARCHAR(128) NULL AFTER competitor_pricelist_version",
 		"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_error_text TEXT NULL AFTER last_sync_error_code",
+		"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS local_pricelist_count INT NOT NULL DEFAULT 0 AFTER last_sync_error_text",
+		"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS pricelist_items_count INT NOT NULL DEFAULT 0 AFTER local_pricelist_count",
+		"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS components_count INT NOT NULL DEFAULT 0 AFTER pricelist_items_count",
+		"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS db_size_bytes BIGINT NOT NULL DEFAULT 0 AFTER components_count",
 	} {
 		if err := db.Exec(stmt).Error; err != nil {
 			return fmt.Errorf("expand qt_client_schema_state: %w", err)
@@ -215,6 +219,10 @@ func (s *Service) reportClientSchemaState(mariaDB *gorm.DB, checkedAt time.Time)
 	warehouseVersion := latestPricelistVersion(s.localDB, "warehouse")
 	competitorVersion := latestPricelistVersion(s.localDB, "competitor")
 	lastSyncErrorCode, lastSyncErrorText := latestSyncErrorState(s.localDB)
+	localPricelistCount := s.localDB.CountLocalPricelists()
+	pricelistItemsCount := s.localDB.CountAllPricelistItems()
+	componentsCount := s.localDB.CountComponents()
+	dbSizeBytes := s.localDB.DBFileSizeBytes()
 	return mariaDB.Exec(`
 		INSERT INTO qt_client_schema_state (
 			username, hostname, app_version,
@@ -222,9 +230,10 @@ func (s *Service) reportClientSchemaState(mariaDB *gorm.DB, checkedAt time.Time)
 			configurations_count, projects_count,
 			estimate_pricelist_version, warehouse_pricelist_version, competitor_pricelist_version,
 			last_sync_error_code, last_sync_error_text,
+			local_pricelist_count, pricelist_items_count, components_count, db_size_bytes,
 			last_checked_at, updated_at
 		)
-		VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+		VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
 		ON DUPLICATE KEY UPDATE
 			app_version = VALUES(app_version),
 			last_sync_at = VALUES(last_sync_at),
@@ -238,6 +247,10 @@ func (s *Service) reportClientSchemaState(mariaDB *gorm.DB, checkedAt time.Time)
 			competitor_pricelist_version = VALUES(competitor_pricelist_version),
 			last_sync_error_code = VALUES(last_sync_error_code),
 			last_sync_error_text = VALUES(last_sync_error_text),
+			local_pricelist_count = VALUES(local_pricelist_count),
+			pricelist_items_count = VALUES(pricelist_items_count),
+			components_count = VALUES(components_count),
+			db_size_bytes = VALUES(db_size_bytes),
 			last_checked_at = VALUES(last_checked_at),
 			updated_at = VALUES(updated_at)
 	`, username, hostname, appmeta.Version(),
@@ -245,6 +258,7 @@ func (s *Service) reportClientSchemaState(mariaDB *gorm.DB, checkedAt time.Time)
 		configurationsCount, projectsCount,
 		estimateVersion, warehouseVersion, competitorVersion,
 		lastSyncErrorCode, lastSyncErrorText,
+		localPricelistCount, pricelistItemsCount, componentsCount, dbSizeBytes,
 		checkedAt, checkedAt).Error
 }

@@ -7,6 +7,7 @@ import (
 	"log/slog"
 	"sort"
 	"strings"
+	"sync"
 	"time"
 
 	"git.mchus.pro/mchus/quoteforge/internal/appmeta"
@@ -22,9 +23,10 @@ var ErrOffline = errors.New("database is offline")
 
 // Service handles synchronization between MariaDB and local SQLite
 type Service struct {
-	connMgr  *db.ConnectionManager
-	localDB  *localdb.LocalDB
-	directDB *gorm.DB
+	connMgr     *db.ConnectionManager
+	localDB     *localdb.LocalDB
+	directDB    *gorm.DB
+	pricelistMu sync.Mutex // prevents concurrent pricelist syncs
 }
 
 // NewService creates a new sync service
@@ -45,10 +47,15 @@ func NewServiceWithDB(mariaDB *gorm.DB, localDB *localdb.LocalDB) *Service {
 
 // SyncStatus represents the current sync status
 type SyncStatus struct {
-	LastSyncAt       *time.Time `json:"last_sync_at"`
-	ServerPricelists int        `json:"server_pricelists"`
-	LocalPricelists  int        `json:"local_pricelists"`
-	NeedsSync        bool       `json:"needs_sync"`
+	LastSyncAt             *time.Time `json:"last_sync_at"`
+	LastAttemptAt          *time.Time `json:"last_attempt_at,omitempty"`
+	LastSyncStatus         string     `json:"last_sync_status,omitempty"`
+	LastSyncError          string     `json:"last_sync_error,omitempty"`
+	ServerPricelists       int        `json:"server_pricelists"`
+	LocalPricelists        int        `json:"local_pricelists"`
+	NeedsSync              bool       `json:"needs_sync"`
+	IncompleteServerSync   bool       `json:"incomplete_server_sync"`
+	KnownServerChangesMiss bool       `json:"known_server_changes_missing"`
 }
 
 type UserSyncStatus struct {
@@ -215,7 +222,7 @@ func (s *Service) ImportProjectsToLocal() (*ProjectImportResult, error) {
 			existing.SyncStatus = "synced"
 			existing.SyncedAt = &now
 
-			if err := s.localDB.SaveProject(existing); err != nil {
+			if err := s.localDB.SaveProjectPreservingUpdatedAt(existing); err != nil {
 				return nil, fmt.Errorf("saving local project %s: %w", project.UUID, err)
 			}
 			result.Updated++
@@ -225,7 +232,7 @@ func (s *Service) ImportProjectsToLocal() (*ProjectImportResult, error) {
 		localProject := localdb.ProjectToLocal(&project)
 		localProject.SyncStatus = "synced"
 		localProject.SyncedAt = &now
-		if err := s.localDB.SaveProject(localProject); err != nil {
+		if err := s.localDB.SaveProjectPreservingUpdatedAt(localProject); err != nil {
 			return nil, fmt.Errorf("saving local project %s: %w", project.UUID, err)
 		}
 		result.Imported++
@@ -240,30 +247,23 @@ func (s *Service) ImportProjectsToLocal() (*ProjectImportResult, error) {
 // GetStatus returns the current sync status
 func (s *Service) GetStatus() (*SyncStatus, error) {
 	lastSync := s.localDB.GetLastSyncTime()
 
-	// Count server pricelists (only if already connected, don't reconnect)
-	serverCount := 0
-	connStatus := s.getConnectionStatus()
-	if connStatus.IsConnected {
-		if mariaDB, err := s.getDB(); err == nil && mariaDB != nil {
-			pricelistRepo := repository.NewPricelistRepository(mariaDB)
-			activeCount, err := pricelistRepo.CountActive()
-			if err == nil {
-				serverCount = int(activeCount)
-			}
-		}
-	}
-
-	// Count local pricelists
+	lastAttempt := s.localDB.GetLastPricelistSyncAttemptAt()
+	lastSyncStatus := s.localDB.GetLastPricelistSyncStatus()
+	lastSyncError := s.localDB.GetLastPricelistSyncError()
 	localCount := s.localDB.CountLocalPricelists()
 
-	needsSync, _ := s.NeedSync()
+	hasFailedSync := strings.EqualFold(lastSyncStatus, "failed")
+	needsSync := lastSync == nil || hasFailedSync
 
 	return &SyncStatus{
-		LastSyncAt:       lastSync,
-		ServerPricelists: serverCount,
-		LocalPricelists:  int(localCount),
-		NeedsSync:        needsSync,
+		LastSyncAt:             lastSync,
+		LastAttemptAt:          lastAttempt,
+		LastSyncStatus:         lastSyncStatus,
+		LastSyncError:          lastSyncError,
+		ServerPricelists:       0,
+		LocalPricelists:        int(localCount),
+		NeedsSync:              needsSync,
+		IncompleteServerSync:   hasFailedSync,
+		KnownServerChangesMiss: hasFailedSync,
 	}, nil
 }
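For reviewers, this is the wire shape the widened `SyncStatus` serializes to. A standalone sketch that copies the json tags from the diff verbatim (the sample values and the `renderStatus` helper are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// SyncStatus copies the json tags from the updated struct in the diff.
type SyncStatus struct {
	LastSyncAt             *time.Time `json:"last_sync_at"`
	LastAttemptAt          *time.Time `json:"last_attempt_at,omitempty"`
	LastSyncStatus         string     `json:"last_sync_status,omitempty"`
	LastSyncError          string     `json:"last_sync_error,omitempty"`
	ServerPricelists       int        `json:"server_pricelists"`
	LocalPricelists        int        `json:"local_pricelists"`
	NeedsSync              bool       `json:"needs_sync"`
	IncompleteServerSync   bool       `json:"incomplete_server_sync"`
	KnownServerChangesMiss bool       `json:"known_server_changes_missing"`
}

// renderStatus marshals a status; omitempty drops the unset optional fields.
func renderStatus(s SyncStatus) string {
	b, _ := json.Marshal(s)
	return string(b)
}

func main() {
	out := renderStatus(SyncStatus{
		LastSyncStatus:         "failed",
		NeedsSync:              true,
		IncompleteServerSync:   true,
		KnownServerChangesMiss: true,
	})
	fmt.Println(out)
}
```

Note how `last_attempt_at` and `last_sync_error` disappear when empty, while `last_sync_at` is emitted as `null` because it has no `omitempty`.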

@@ -333,6 +333,7 @@ func (s *Service) SyncPricelists() (int, error) {
 	// Get database connection
 	mariaDB, err := s.getDB()
 	if err != nil {
+		s.recordPricelistSyncFailure(err)
 		return 0, fmt.Errorf("database not available: %w", err)
 	}

@@ -342,6 +343,7 @@ func (s *Service) SyncPricelists() (int, error) {
 	// Get active pricelists from server (up to 100)
 	serverPricelists, _, err := pricelistRepo.ListActive(0, 100)
 	if err != nil {
+		s.recordPricelistSyncFailure(err)
 		return 0, fmt.Errorf("getting active server pricelists: %w", err)
 	}
 	serverPricelistIDs := make([]uint, 0, len(serverPricelists))
@@ -350,6 +352,7 @@ func (s *Service) SyncPricelists() (int, error) {
 	}
 
 	synced := 0
+	var syncErr error
 	for _, pl := range serverPricelists {
 		// Check if pricelist already exists locally
 		existing, _ := s.localDB.GetLocalPricelistByServerID(pl.ID)
@@ -358,6 +361,9 @@ func (s *Service) SyncPricelists() (int, error) {
 			if s.localDB.CountLocalPricelistItems(existing.ID) == 0 {
 				itemCount, err := s.SyncPricelistItems(existing.ID)
 				if err != nil {
+					if syncErr == nil {
+						syncErr = fmt.Errorf("sync items for existing pricelist %s: %w", pl.Version, err)
+					}
 					slog.Warn("failed to sync missing pricelist items for existing local pricelist", "version", pl.Version, "error", err)
 				} else {
 					slog.Info("synced missing pricelist items for existing local pricelist", "version", pl.Version, "items", itemCount)
@@ -377,19 +383,15 @@ func (s *Service) SyncPricelists() (int, error) {
 			IsUsed: false,
 		}
 
-		if err := s.localDB.SaveLocalPricelist(localPL); err != nil {
-			slog.Warn("failed to save local pricelist", "version", pl.Version, "error", err)
+		itemCount, err := s.syncNewPricelistSnapshot(localPL)
+		if err != nil {
+			if syncErr == nil {
+				syncErr = fmt.Errorf("sync new pricelist %s: %w", pl.Version, err)
+			}
+			slog.Warn("failed to sync pricelist snapshot", "version", pl.Version, "error", err)
 			continue
 		}
 
-		// Sync items for the newly created pricelist
-		itemCount, err := s.SyncPricelistItems(localPL.ID)
-		if err != nil {
-			slog.Warn("failed to sync pricelist items", "version", pl.Version, "error", err)
-			// Continue even if items sync fails - we have the pricelist metadata
-		} else {
-			slog.Debug("synced pricelist with items", "version", pl.Version, "items", itemCount)
-		}
+		slog.Debug("synced pricelist with items", "version", pl.Version, "items", itemCount)
 
 		synced++
 	}
@@ -404,14 +406,96 @@ func (s *Service) SyncPricelists() (int, error) {
 	// Backfill lot_category for used pricelists (older local caches may miss the column values).
 	s.backfillUsedPricelistItemCategories(pricelistRepo, serverPricelistIDs)
 
+	if syncErr != nil {
+		s.recordPricelistSyncFailure(syncErr)
+		return synced, syncErr
+	}
+
 	// Update last sync time
-	s.localDB.SetLastSyncTime(time.Now())
+	now := time.Now()
+	s.localDB.SetLastSyncTime(now)
+	s.recordPricelistSyncSuccess(now)
 	s.RecordSyncHeartbeat()
 
 	slog.Info("pricelist sync completed", "synced", synced, "total", len(serverPricelists))
 	return synced, nil
 }
 
+func (s *Service) recordPricelistSyncSuccess(at time.Time) {
+	if s.localDB == nil {
+		return
+	}
+	if err := s.localDB.SetPricelistSyncResult("success", "", at); err != nil {
+		slog.Warn("failed to persist pricelist sync success state", "error", err)
+	}
+}
+
+func (s *Service) recordPricelistSyncFailure(syncErr error) {
+	if s.localDB == nil || syncErr == nil {
+		return
+	}
+	s.markConnectionBroken(syncErr)
+	if err := s.localDB.SetPricelistSyncResult("failed", syncErr.Error(), time.Now()); err != nil {
+		slog.Warn("failed to persist pricelist sync failure state", "error", err)
+	}
+}
+
+func (s *Service) markConnectionBroken(err error) {
+	if err == nil || s.connMgr == nil {
+		return
+	}
+
+	msg := strings.ToLower(err.Error())
+	switch {
+	case strings.Contains(msg, "i/o timeout"),
+		strings.Contains(msg, "invalid connection"),
+		strings.Contains(msg, "bad connection"),
+		strings.Contains(msg, "connection reset"),
+		strings.Contains(msg, "broken pipe"),
+		strings.Contains(msg, "unexpected eof"):
+		s.connMgr.MarkOffline(err)
+	}
+}

+func (s *Service) syncNewPricelistSnapshot(localPL *localdb.LocalPricelist) (int, error) {
+	if localPL == nil {
+		return 0, fmt.Errorf("local pricelist is nil")
+	}
+
+	localItems, err := s.fetchServerPricelistItems(localPL.ServerID)
+	if err != nil {
+		return 0, err
+	}
+
+	if err := s.localDB.DB().Transaction(func(tx *gorm.DB) error {
+		if err := tx.Create(localPL).Error; err != nil {
+			return fmt.Errorf("save local pricelist: %w", err)
+		}
+		if len(localItems) == 0 {
+			return nil
+		}
+		for i := range localItems {
+			localItems[i].PricelistID = localPL.ID
+		}
+		batchSize := 500
+		for i := 0; i < len(localItems); i += batchSize {
+			end := i + batchSize
+			if end > len(localItems) {
+				end = len(localItems)
+			}
+			if err := tx.CreateInBatches(localItems[i:end], batchSize).Error; err != nil {
+				return fmt.Errorf("save local pricelist items: %w", err)
+			}
+		}
+		return nil
+	}); err != nil {
+		return 0, err
+	}
+
+	slog.Info("synced pricelist items", "pricelist_id", localPL.ID, "items", len(localItems))
+	return len(localItems), nil
+}

 func (s *Service) backfillUsedPricelistItemCategories(pricelistRepo *repository.PricelistRepository, activeServerPricelistIDs []uint) {
 	if s.localDB == nil || pricelistRepo == nil {
 		return
@@ -670,30 +754,13 @@ func (s *Service) SyncPricelistItems(localPricelistID uint) (int, error) {
 		return int(existingCount), nil
 	}
 
-	// Get database connection
-	mariaDB, err := s.getDB()
+	localItems, err := s.fetchServerPricelistItems(localPL.ServerID)
 	if err != nil {
-		return 0, fmt.Errorf("database not available: %w", err)
+		return 0, err
 	}
 
-	// Create repository
-	pricelistRepo := repository.NewPricelistRepository(mariaDB)
-
-	// Get items from server
-	serverItems, _, err := pricelistRepo.GetItems(localPL.ServerID, 0, 10000, "")
-	if err != nil {
-		return 0, fmt.Errorf("getting server pricelist items: %w", err)
+	for i := range localItems {
+		localItems[i].PricelistID = localPricelistID
 	}
 
-	// Convert and save locally
-	localItems := make([]localdb.LocalPricelistItem, len(serverItems))
-	for i, item := range serverItems {
-		localItems[i] = *localdb.PricelistItemToLocal(&item, localPricelistID)
-	}
-	if err := s.enrichLocalPricelistItemsWithStock(mariaDB, localItems); err != nil {
-		slog.Warn("pricelist stock enrichment skipped", "pricelist_id", localPricelistID, "error", err)
-	}
-
 	if err := s.localDB.SaveLocalPricelistItems(localItems); err != nil {
 		return 0, fmt.Errorf("saving local pricelist items: %w", err)
 	}
@@ -702,6 +769,30 @@ func (s *Service) SyncPricelistItems(localPricelistID uint) (int, error) {
 	return len(localItems), nil
 }
 
+func (s *Service) fetchServerPricelistItems(serverPricelistID uint) ([]localdb.LocalPricelistItem, error) {
+	// Get database connection
+	mariaDB, err := s.getDB()
+	if err != nil {
+		return nil, fmt.Errorf("database not available: %w", err)
+	}
+
+	// Create repository
+	pricelistRepo := repository.NewPricelistRepository(mariaDB)
+
+	// Get items from server
+	serverItems, _, err := pricelistRepo.GetItems(serverPricelistID, 0, 10000, "")
+	if err != nil {
+		return nil, fmt.Errorf("getting server pricelist items: %w", err)
+	}
+
+	localItems := make([]localdb.LocalPricelistItem, len(serverItems))
+	for i, item := range serverItems {
+		localItems[i] = *localdb.PricelistItemToLocal(&item, 0)
+	}
+
+	return localItems, nil
+}
 // SyncPricelistItemsByServerID syncs items for a pricelist by its server ID
 func (s *Service) SyncPricelistItemsByServerID(serverPricelistID uint) (int, error) {
 	localPL, err := s.localDB.GetLocalPricelistByServerID(serverPricelistID)
@@ -711,111 +802,6 @@ func (s *Service) SyncPricelistItemsByServerID(serverPricelistID uint) (int, err
 	return s.SyncPricelistItems(localPL.ID)
 }
 
-func (s *Service) enrichLocalPricelistItemsWithStock(mariaDB *gorm.DB, items []localdb.LocalPricelistItem) error {
-	if len(items) == 0 {
-		return nil
-	}
-
-	bookRepo := repository.NewPartnumberBookRepository(s.localDB.DB())
-	book, err := bookRepo.GetActiveBook()
-	if err != nil || book == nil {
-		return nil
-	}
-
-	bookItems, err := bookRepo.GetBookItems(book.ID)
-	if err != nil {
-		return err
-	}
-	if len(bookItems) == 0 {
-		return nil
-	}
-
-	partnumberToLots := make(map[string][]string, len(bookItems))
-	for _, item := range bookItems {
-		pn := strings.TrimSpace(item.Partnumber)
-		if pn == "" {
-			continue
-		}
-		seenLots := make(map[string]struct{}, len(item.LotsJSON))
-		for _, lot := range item.LotsJSON {
-			lotName := strings.TrimSpace(lot.LotName)
-			if lotName == "" {
-				continue
-			}
-			key := strings.ToLower(lotName)
-			if _, exists := seenLots[key]; exists {
-				continue
-			}
-			seenLots[key] = struct{}{}
-			partnumberToLots[pn] = append(partnumberToLots[pn], lotName)
-		}
-	}
-	if len(partnumberToLots) == 0 {
-		return nil
-	}
-
-	type stockRow struct {
-		Partnumber string   `gorm:"column:partnumber"`
-		Qty        *float64 `gorm:"column:qty"`
-	}
-	rows := make([]stockRow, 0)
-	if err := mariaDB.Raw(`
-		SELECT s.partnumber, s.qty
-		FROM stock_log s
-		INNER JOIN (
-			SELECT partnumber, MAX(date) AS max_date
-			FROM stock_log
-			GROUP BY partnumber
-		) latest ON latest.partnumber = s.partnumber AND latest.max_date = s.date
-		WHERE s.qty IS NOT NULL
-	`).Scan(&rows).Error; err != nil {
-		return err
-	}
-
-	lotTotals := make(map[string]float64, len(items))
-	lotPartnumbers := make(map[string][]string, len(items))
-	seenPartnumbers := make(map[string]map[string]struct{}, len(items))
-
-	for _, row := range rows {
-		pn := strings.TrimSpace(row.Partnumber)
-		if pn == "" || row.Qty == nil {
-			continue
-		}
-		lots := partnumberToLots[pn]
-		if len(lots) == 0 {
-			continue
-		}
-		for _, lotName := range lots {
-			lotTotals[lotName] += *row.Qty
-			if _, ok := seenPartnumbers[lotName]; !ok {
-				seenPartnumbers[lotName] = make(map[string]struct{}, 4)
-			}
-			key := strings.ToLower(pn)
-			if _, exists := seenPartnumbers[lotName][key]; exists {
-				continue
-			}
-			seenPartnumbers[lotName][key] = struct{}{}
-			lotPartnumbers[lotName] = append(lotPartnumbers[lotName], pn)
-		}
-	}
-
-	for i := range items {
-		lotName := strings.TrimSpace(items[i].LotName)
-		if qty, ok := lotTotals[lotName]; ok {
-			qtyCopy := qty
-			items[i].AvailableQty = &qtyCopy
-		}
-		if partnumbers := lotPartnumbers[lotName]; len(partnumbers) > 0 {
-			sort.Slice(partnumbers, func(a, b int) bool {
-				return strings.ToLower(partnumbers[a]) < strings.ToLower(partnumbers[b])
-			})
-			items[i].Partnumbers = append(localdb.LocalStringList{}, partnumbers...)
-		}
-	}
-
-	return nil
-}
 
 // GetLocalPriceForLot returns the price for a lot from a local pricelist
 func (s *Service) GetLocalPriceForLot(localPricelistID uint, lotName string) (float64, error) {
 	return s.localDB.GetLocalPriceForLot(localPricelistID, lotName)
@@ -847,9 +833,15 @@ func (s *Service) GetPricelistForOffline(serverPricelistID uint) (*localdb.Local
 	return localPL, nil
 }
 
-// SyncPricelistsIfNeeded checks for new pricelists and syncs if needed
-// This should be called before creating a new configuration when online
+// SyncPricelistsIfNeeded checks for new pricelists and syncs if needed.
+// If a sync is already in progress, returns immediately without blocking.
 func (s *Service) SyncPricelistsIfNeeded() error {
+	if !s.pricelistMu.TryLock() {
+		slog.Debug("pricelist sync already in progress, skipping")
+		return nil
+	}
+	defer s.pricelistMu.Unlock()
+
 	needSync, err := s.NeedSync()
 	if err != nil {
 		slog.Warn("failed to check if sync needed", "error", err)
@@ -901,6 +893,7 @@ func (s *Service) PushPendingChanges() (int, error) {
 	for _, change := range sortedChanges {
 		err := s.pushSingleChange(&change)
 		if err != nil {
+			s.markConnectionBroken(err)
 			slog.Warn("failed to push change", "id", change.ID, "type", change.EntityType, "operation", change.Operation, "error", err)
 			// Increment attempts
 			s.localDB.IncrementPendingChangeAttempts(change.ID, err.Error())
@@ -1008,7 +1001,7 @@ func (s *Service) pushProjectChange(change *localdb.PendingChange) error {
 		localProject.SyncStatus = "synced"
 		now := time.Now()
 		localProject.SyncedAt = &now
-		_ = s.localDB.SaveProject(localProject)
+		_ = s.localDB.SaveProjectPreservingUpdatedAt(localProject)
 	}
 
 	return nil
@@ -1278,7 +1271,7 @@ func (s *Service) ensureConfigurationProject(mariaDB *gorm.DB, cfg *models.Confi
 		localProject.SyncStatus = "synced"
 		now := time.Now()
 		localProject.SyncedAt = &now
-		_ = s.localDB.SaveProject(localProject)
+		_ = s.localDB.SaveProjectPreservingUpdatedAt(localProject)
 	}
 	return nil
 }

@@ -17,7 +17,6 @@ func TestSyncPricelists_BackfillsLotCategoryForUsedPricelistItems(t *testing.T)
 		&models.Pricelist{},
 		&models.PricelistItem{},
 		&models.Lot{},
-		&models.StockLog{},
 	); err != nil {
 		t.Fatalf("migrate server tables: %v", err)
 	}
@@ -103,103 +102,3 @@ func TestSyncPricelists_BackfillsLotCategoryForUsedPricelistItems(t *testing.T)
 		t.Fatalf("expected lot_category backfilled to CPU, got %q", items[0].LotCategory)
 	}
 }
-
-func TestSyncPricelistItems_EnrichesStockFromLocalPartnumberBook(t *testing.T) {
-	local := newLocalDBForSyncTest(t)
-	serverDB := newServerDBForSyncTest(t)
-
-	if err := serverDB.AutoMigrate(
-		&models.Pricelist{},
-		&models.PricelistItem{},
-		&models.Lot{},
-		&models.StockLog{},
-	); err != nil {
-		t.Fatalf("migrate server tables: %v", err)
-	}
-
-	serverPL := models.Pricelist{
-		Source:       "warehouse",
-		Version:      "2026-03-07-001",
-		Notification: "server",
-		CreatedBy:    "tester",
-		IsActive:     true,
-		CreatedAt:    time.Now().Add(-1 * time.Hour),
-	}
-	if err := serverDB.Create(&serverPL).Error; err != nil {
-		t.Fatalf("create server pricelist: %v", err)
-	}
-	if err := serverDB.Create(&models.PricelistItem{
-		PricelistID: serverPL.ID,
-		LotName:     "CPU_A",
-		LotCategory: "CPU",
-		Price:       10,
-	}).Error; err != nil {
-		t.Fatalf("create server pricelist item: %v", err)
-	}
-	qty := 7.0
-	if err := serverDB.Create(&models.StockLog{
-		Partnumber: "CPU-PN-1",
-		Date:       time.Now(),
-		Price:      100,
-		Qty:        &qty,
-	}).Error; err != nil {
-		t.Fatalf("create stock log: %v", err)
-	}
-
-	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
-		ServerID:  serverPL.ID,
-		Source:    serverPL.Source,
-		Version:   serverPL.Version,
-		Name:      serverPL.Notification,
-		CreatedAt: serverPL.CreatedAt,
-		SyncedAt:  time.Now(),
-		IsUsed:    false,
-	}); err != nil {
-		t.Fatalf("seed local pricelist: %v", err)
-	}
-	localPL, err := local.GetLocalPricelistByServerID(serverPL.ID)
-	if err != nil {
-		t.Fatalf("get local pricelist: %v", err)
-	}
-
-	if err := local.DB().Create(&localdb.LocalPartnumberBook{
-		ServerID:        1,
-		Version:         "2026-03-07-001",
-		CreatedAt:       time.Now(),
-		IsActive:        true,
-		PartnumbersJSON: localdb.LocalStringList{"CPU-PN-1"},
-	}).Error; err != nil {
-		t.Fatalf("create local partnumber book: %v", err)
-	}
-	if err := local.DB().Create(&localdb.LocalPartnumberBookItem{
-		Partnumber: "CPU-PN-1",
-		LotsJSON: localdb.LocalPartnumberBookLots{
-			{LotName: "CPU_A", Qty: 1},
-		},
-		Description: "CPU PN",
-	}).Error; err != nil {
-		t.Fatalf("create local partnumber book item: %v", err)
-	}
-
-	svc := syncsvc.NewServiceWithDB(serverDB, local)
-	if _, err := svc.SyncPricelistItems(localPL.ID); err != nil {
-		t.Fatalf("sync pricelist items: %v", err)
-	}
-
-	items, err := local.GetLocalPricelistItems(localPL.ID)
-	if err != nil {
-		t.Fatalf("load local items: %v", err)
-	}
-	if len(items) != 1 {
-		t.Fatalf("expected 1 local item, got %d", len(items))
-	}
-	if items[0].AvailableQty == nil {
-		t.Fatalf("expected available_qty to be set")
-	}
-	if *items[0].AvailableQty != 7 {
-		t.Fatalf("expected available_qty=7, got %v", *items[0].AvailableQty)
-	}
-	if len(items[0].Partnumbers) != 1 || items[0].Partnumbers[0] != "CPU-PN-1" {
-		t.Fatalf("expected partnumbers [CPU-PN-1], got %v", items[0].Partnumbers)
-	}
-}

@@ -1,12 +1,15 @@
 package sync_test
 
 import (
+	"errors"
+	"strings"
 	"testing"
 	"time"
 
 	"git.mchus.pro/mchus/quoteforge/internal/localdb"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
+	"gorm.io/gorm"
 )
 
 func TestSyncPricelistsDeletesMissingUnusedLocalPricelists(t *testing.T) {
@@ -83,3 +86,58 @@ func TestSyncPricelistsDeletesMissingUnusedLocalPricelists(t *testing.T) {
 		t.Fatalf("expected server pricelist to be synced locally: %v", err)
 	}
 }
+
+func TestSyncPricelistsDoesNotPersistHeaderWithoutItems(t *testing.T) {
+	local := newLocalDBForSyncTest(t)
+	serverDB := newServerDBForSyncTest(t)
+	if err := serverDB.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}); err != nil {
+		t.Fatalf("migrate server pricelist tables: %v", err)
+	}
+
+	serverPL := models.Pricelist{
+		Source:       "estimate",
+		Version:      "2026-03-17-001",
+		Notification: "server",
+		CreatedBy:    "tester",
+		IsActive:     true,
+		CreatedAt:    time.Now().Add(-1 * time.Hour),
+	}
+	if err := serverDB.Create(&serverPL).Error; err != nil {
+		t.Fatalf("create server pricelist: %v", err)
+	}
+	if err := serverDB.Create(&models.PricelistItem{PricelistID: serverPL.ID, LotName: "CPU_A", Price: 10}).Error; err != nil {
+		t.Fatalf("create server pricelist item: %v", err)
+	}
+
+	const callbackName = "test:fail_qt_pricelist_items_query"
+	if err := serverDB.Callback().Query().Before("gorm:query").Register(callbackName, func(db *gorm.DB) {
+		if db.Statement != nil && db.Statement.Table == "qt_pricelist_items" {
+			_ = db.AddError(errors.New("forced pricelist item fetch failure"))
+		}
+	}); err != nil {
+		t.Fatalf("register query callback: %v", err)
+	}
+	defer serverDB.Callback().Query().Remove(callbackName)
+
+	svc := syncsvc.NewServiceWithDB(serverDB, local)
+	synced, err := svc.SyncPricelists()
+	if err == nil {
+		t.Fatalf("expected sync error when item fetch fails")
+	}
+	if synced != 0 {
+		t.Fatalf("expected synced=0 on incomplete sync, got %d", synced)
+	}
+	if !strings.Contains(err.Error(), "forced pricelist item fetch failure") {
+		t.Fatalf("expected item fetch error, got %v", err)
+	}
+
+	if _, err := local.GetLocalPricelistByServerID(serverPL.ID); err == nil {
+		t.Fatalf("expected pricelist header not to be persisted without items")
+	}
+	if got := local.CountLocalPricelists(); got != 0 {
+		t.Fatalf("expected no local pricelists after failed sync, got %d", got)
+	}
+	if ts := local.GetLastSyncTime(); ts != nil {
+		t.Fatalf("expected last_pricelist_sync to stay unset on incomplete sync, got %v", ts)
+	}
+}

41
memory.md
41
memory.md
@@ -1,41 +0,0 @@
|
||||
# Changes summary (2026-02-11)
|
||||
|
||||
Implemented strict `lot_category` flow using `pricelist_items.lot_category` only (no parsing from `lot_name`), plus local caching and backfill:
|
||||
|
||||
1. Local DB schema + migrations
|
||||
- Added `lot_category` column to `local_pricelist_items` via `LocalPricelistItem` model.
|
||||
- Added local migration `2026_02_11_local_pricelist_item_category` to add the column if missing and create indexes:
|
||||
- `idx_local_pricelist_items_pricelist_lot (pricelist_id, lot_name)`
|
||||
- `idx_local_pricelist_items_lot_category (lot_category)`
|
||||
|
||||
2. Server model/repository
|
||||
- Added `LotCategory` field to `models.PricelistItem`.
|
||||
- `PricelistRepository.GetItems` now sets `Category` from `LotCategory` (no parsing from `lot_name`).
|
||||
|
||||
3. Sync + local DB helpers
|
||||
- `SyncPricelistItems` now saves `lot_category` into local cache via `PricelistItemToLocal`.
|
||||
- Added `LocalDB.CountLocalPricelistItemsWithEmptyCategory` and `LocalDB.ReplaceLocalPricelistItems`.
|
||||
- Added `LocalDB.GetLocalLotCategoriesByServerPricelistID` for strict category lookup.
|
||||
- Added `SyncPricelists` backfill step: for used active pricelists with empty categories, force refresh items from server.
|
||||
|
||||
4. API handler
|
||||
- `GET /api/pricelists/:id/items` returns `category` from `local_pricelist_items.lot_category` (no parsing from `lot_name`).
|
||||
|
||||
5. Article category foundation
|
||||
- New package `internal/article`:
|
||||
- `ResolveLotCategoriesStrict` pulls categories from local pricelist items and errors on missing category.
|
||||
- `GroupForLotCategory` maps only allowed codes (CPU/MEM/GPU/M2/SSD/HDD/EDSFF/HHHL/NIC/HCA/DPU/PSU/PS) to article groups; excludes `SFP`.
|
||||
- Error type `MissingCategoryForLotError` with base `ErrMissingCategoryForLot`.
|
||||
|
||||
6. Tests
|
||||
- Added unit tests for converters and article category resolver.
|
||||
- Added handler test to ensure `/api/pricelists/:id/items` returns `lot_category`.
|
||||
- Added sync test for category backfill on used pricelist items.
|
||||
- `go test ./...` passed.
|
||||
|
||||
Additional fixes (2026-02-11):
|
||||
- Fixed article parsing bug: CPU/GPU parsers were swapped in `internal/article/generator.go`. CPU now uses last token from CPU lot; GPU uses model+memory from `GPU_vendor_model_mem_iface`.
|
||||
- Adjusted configurator base tab layout to align labels on the same row (separate label row + input row grid).
|
||||
|
||||
UI rule (2026-02-19):

- In all breadcrumbs, truncate long specification/configuration names to 16 characters with an ellipsis.
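The breadcrumb rule above can be sketched as a rune-safe helper. Only the 16-character limit comes from the document; the function name and the ellipsis character are assumptions. Counting runes (not bytes) matters because specification names here are often Cyrillic.

```go
package main

import "fmt"

// truncateName shortens a breadcrumb label to at most max runes, appending
// an ellipsis when it had to cut. Rune-based slicing keeps multi-byte
// characters (e.g. Cyrillic) intact.
func truncateName(s string, max int) string {
	r := []rune(s)
	if len(r) <= max {
		return s
	}
	return string(r[:max]) + "…"
}

func main() {
	fmt.Println(truncateName("Short name", 16))
	fmt.Println(truncateName("A very long specification name", 16))
}
```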
2
migrations/029_add_config_type.sql
Normal file
@@ -0,0 +1,2 @@
ALTER TABLE qt_configurations
ADD COLUMN config_type VARCHAR(20) NOT NULL DEFAULT 'server';
18
releases/README.md
Normal file
@@ -0,0 +1,18 @@
# Releases

This directory stores packaged release artifacts and per-release notes.

Rules:

- keep release notes next to the matching release directory as `RELEASE_NOTES.md`;
- do not keep duplicate changelog memory files elsewhere in the repository;
- if a release directory has no notes yet, add them there instead of creating side documents.

Current convention:

```text
releases/
  v1.5.0/
    RELEASE_NOTES.md
    SHA256SUMS.txt
    qfs-...
```
@@ -1,72 +0,0 @@
|
||||
# v1.2.1 Release Notes
|
||||
|
||||
**Date:** 2026-02-09
|
||||
**Changes since v1.2.0:** 2 commits
|
||||
|
||||
## Summary
|
||||
Fixed configurator component substitution by updating to work with new pricelist-based pricing model. Addresses regression from v1.2.0 refactor that removed `CurrentPrice` field from components.
|
||||
|
||||
## Commits
|
||||
|
||||
### 1. Refactor: Remove CurrentPrice from local_components (5984a57)
|
||||
**Type:** Refactor
|
||||
**Files Changed:** 11 files, +167 insertions, -194 deletions
|
||||
|
||||
#### Overview
|
||||
Transitioned from component-based pricing to pricelist-based pricing model:
|
||||
- Removed `CurrentPrice` and `SyncedAt` from LocalComponent (metadata-only now)
|
||||
- Added `WarehousePricelistID` and `CompetitorPricelistID` to LocalConfiguration
|
||||
- Removed 2 unused methods: UpdateComponentPricesFromPricelist, EnsureComponentPricesFromPricelists
|
||||
|
||||
#### Key Changes
|
||||
- **Data Model:**
|
||||
- LocalComponent: now stores only metadata (LotName, LotDescription, Category, Model)
|
||||
- LocalConfiguration: added warehouse and competitor pricelist references
|
||||
|
||||
- **Migrations:**
|
||||
- drop_component_unused_fields - removes CurrentPrice, SyncedAt columns
|
||||
- add_warehouse_competitor_pricelists - adds new pricelist fields
|
||||
|
||||
- **Quote Calculation:**
|
||||
- Updated to use pricelist_items instead of component.CurrentPrice
|
||||
- Added PricelistID field to QuoteRequest
|
||||
- Maintains offline-first behavior
|
||||
|
||||
- **API:**
|
||||
- Removed CurrentPrice from ComponentView
|
||||
- Components API no longer returns pricing
|
||||
|
||||
### 2. Fix: Load component prices via API (acf7c8a)
|
||||
**Type:** Bug Fix
|
||||
**Files Changed:** 1 file (web/templates/index.html), +66 insertions, -12 deletions
|
||||
|
||||
#### Problem
|
||||
After v1.2.0 refactor, the configurator's autocomplete was filtering out all components because it checked for the removed `current_price` field on component objects.
|
||||
|
||||
#### Solution
|
||||
Implemented on-demand price loading via API:
|
||||
- Added `ensurePricesLoaded()` function to fetch prices from `/api/quote/price-levels`
|
||||
- Added `componentPricesCache` to cache loaded prices in memory
|
||||
- Updated all 3 autocomplete modes (single, multi, section) to load prices when input is focused
|
||||
- Changed price validation from `c.current_price` to `hasComponentPrice(lot_name)`
|
||||
- Updated cart item creation to use cached API prices
|
||||
|
||||
#### Impact
|
||||
- Components without prices are still filtered out (as required)
|
||||
- Price checks now use API data instead of removed database field
|
||||
- Frontend loads prices on-demand for better performance
|
||||
|
||||
## Testing Notes
|
||||
- ✅ Configurator component substitution now works
|
||||
- ✅ Prices load correctly from pricelist
|
||||
- ✅ Offline mode still supported (prices cached after initial load)
|
||||
- ✅ Multi-pricelist support functional (estimate/warehouse/competitor)
|
||||
|
||||
## Known Issues
|
||||
None
|
||||
|
||||
## Migration Path
|
||||
No database migration needed from v1.2.0 - migrations were applied in v1.2.0 release.
|
||||
|
||||
## Breaking Changes
|
||||
None for end users. Internal: `ComponentView` no longer includes `CurrentPrice` in API responses.
|
||||
@@ -1,59 +0,0 @@
# Release v1.2.2 (2026-02-09)

## Summary

Fixed a CSV export filename inconsistency where project names weren't being resolved correctly. Standardized the export format across both manual exports and project configuration exports to use `YYYY-MM-DD (project_name) config_name BOM.csv`.

## Commits

- `8f596ce` fix: standardize CSV export filename format to use project name

## Changes

### CSV Export Filename Standardization

**Problem:**
- ExportCSV and ExportConfigCSV had inconsistent filename formats
- Project names sometimes fell back to config names when not explicitly provided
- Export timestamps didn't reflect the actual price update time

**Solution:**
- Unified format: `YYYY-MM-DD (project_name) config_name BOM.csv`
- Both export paths now use PriceUpdatedAt if available, otherwise CreatedAt
- Project name resolved from ProjectUUID via ProjectService for both paths
- Frontend passes project_uuid context when exporting

**Technical Details:**

Backend:
- Added a `ProjectUUID` field to the `ExportRequest` struct in handlers/export.go
- Updated ExportCSV to look up the project name from ProjectUUID using ProjectService
- Ensured ExportConfigCSV gets the project name from the config's ProjectUUID
- Both use CreatedAt (for ExportCSV) or PriceUpdatedAt/CreatedAt (for ExportConfigCSV)

Frontend:
- Added `projectUUID` and `projectName` state variables in index.html
- Load and store projectUUID when a configuration is loaded
- Pass `project_uuid` in the JSON body for both export requests
## Files Modified

- `internal/handlers/export.go` - Project name resolution and ExportRequest update
- `internal/handlers/export_test.go` - Updated mock initialization with projectService param
- `cmd/qfs/main.go` - Pass projectService to ExportHandler constructor
- `web/templates/index.html` - Add projectUUID tracking and export payload updates

## Testing Notes

✅ All existing tests updated and passing
✅ Code builds without errors
✅ Export filename now includes the correct project name
✅ Works for both form-based and project-based exports

## Breaking Changes

None - API response format unchanged, only filename generation updated.

## Known Issues

None identified.
@@ -1,95 +0,0 @@
# Release v1.2.3 (2026-02-10)

## Summary

Unified synchronization functionality with event-driven UI updates. Resolved user confusion about duplicate sync buttons by implementing a single sync source with automatic page refreshes.

## Changes

### Main Feature: Sync Event System

- **Added a `sync-completed` event** in base.html's `syncAction()` function
  - Dispatched after a successful `/api/sync/all` or `/api/sync/push`
  - Includes the endpoint and response data in the event detail
  - Enables pages to react automatically to sync completion

### Configs Page (`configs.html`)

- **Removed the "Импорт с сервера" (import from server) button** - duplicate functionality no longer needed
- **Updated layout** - changed from a 2-column grid to a single-button layout
- **Removed the `importConfigsFromServer()` function** - functionality now handled by the navbar sync
- **Added a sync-completed event listener**:
  - Automatically reloads the configurations list after sync
  - Resets pagination to the first page
  - New configurations appear immediately without a manual refresh

### Projects Page (`projects.html`)

- **Wrapped initialization in DOMContentLoaded**:
  - Moved `loadProjects()` and all event listeners inside the handler
  - Ensures the DOM is fully loaded before accessing elements
- **Added a sync-completed event listener**:
  - Automatically reloads the projects list after sync
  - New projects appear immediately without a manual refresh

### Pricelists Page (`pricelists.html`)

- **Added a sync-completed event listener** to the existing DOMContentLoaded handler:
  - Automatically reloads pricelists when a sync completes
  - Maintains existing permissions and modal functionality

## Benefits

### User Experience
- ✅ Single "Синхронизация" (sync) button in the navbar - no confusion about sync sources
- ✅ Automatic list updates after sync - no need for a manual F5 refresh
- ✅ Consistent behavior across all pages (configs, projects, pricelists)
- ✅ Better feedback: toast notification + automatic UI refresh

### Architecture
- ✅ Event-driven loose coupling between navbar and pages
- ✅ Easy to extend to other pages (just add an event listener)
- ✅ No backend changes needed
- ✅ Production-ready

## Breaking Changes

- **The `/api/configs/import` endpoint** still works, but its UI button was removed
  - Users should use the navbar "Синхронизация" button instead
  - Backend API remains unchanged for backward compatibility

## Files Modified

1. `web/templates/base.html` - Added sync-completed event dispatch
2. `web/templates/configs.html` - Event listener + removed duplicate UI
3. `web/templates/projects.html` - DOMContentLoaded wrapper + event listener
4. `web/templates/pricelists.html` - Event listener for auto-refresh

**Stats:** 4 files changed, 59 insertions(+), 65 deletions(-)

## Commits

- `99fd80b` - feat: unify sync functionality with event-driven UI updates

## Testing Checklist

- [x] Configs page: New configurations appear after navbar sync
- [x] Projects page: New projects appear after navbar sync
- [x] Pricelists page: Pricelists refresh after navbar sync
- [x] Both `/api/sync/all` and `/api/sync/push` trigger updates
- [x] Toast notifications still show correctly
- [x] Sync status indicator updates
- [x] Error handling (423, network errors) still works
- [x] Mode switching (Active/Archive) works correctly
- [x] Backward compatibility maintained

## Known Issues

None - the implementation is production-ready

## Migration Notes

No migration needed. Changes are frontend-only and backward compatible:
- The old `/api/configs/import` endpoint is still functional
- No database schema changes
- No configuration changes needed
@@ -1,68 +0,0 @@
# Release v1.3.0 (2026-02-11)

## Summary

Introduced article generation with pricelist categories, added local configuration storage, and expanded sync/export capabilities. Simplified article generator compression and loosened project update constraints.

## Changes

### Main Features: Articles + Pricelist Categories

- **Article generation pipeline**
  - New generator and tests under `internal/article/`
  - Category support with test coverage
- **Pricelist category integration**
  - Handler and repository updates
  - Sync backfill test for category propagation

### Local Configuration Storage

- **Local DB support**
  - New localdb models, converters, snapshots, and migrations
  - Local configuration service for cached configurations

### Export & UI

- **Export handler updates** for article data output
- **Configs and index templates** adjusted for new article-related fields

### Behavior Changes

- **Cross-user project updates allowed**
  - Removed the restriction in the project service
- **Article compression refinement**
  - Generator logic simplified to reduce complexity

## Breaking Changes

None identified. Existing APIs remain intact.

## Files Modified

1. `internal/article/*` - Article generator + categories + tests
2. `internal/localdb/*` - Local DB models, migrations, snapshots
3. `internal/handlers/export.go` - Export updates
4. `internal/handlers/pricelist.go` - Category handling
5. `internal/services/sync/service.go` - Category backfill logic
6. `web/templates/configs.html` - Article field updates
7. `web/templates/index.html` - Article field updates

**Stats:** 33 files changed, 2059 insertions(+), 329 deletions(-)

## Commits

- `5edffe8` - Add article generation and pricelist categories
- `e355903` - Allow cross-user project updates
- `e58fd35` - Refine article compression and simplify generator

## Testing Checklist

- [ ] Tests not run (not requested)

## Migration Notes

- New migrations:
  - `022_add_article_to_configurations.sql`
  - `023_add_server_model_to_configurations.sql`
  - `024_add_support_code_to_configurations.sql`
- Ensure migrations are applied before running v1.3.0
@@ -1,66 +0,0 @@
# Release v1.3.2 (2026-02-19)

## Summary

This release focuses on stability and data integrity for local configurations. Added configuration revision history, stronger recovery for broken local sync/version states, improved sync self-healing, and clearer API error logging.

## Changes

### Configuration Revisions

- Added a full local configuration revision flow with storage and UI support.
- Introduced a revisions page/template and backend plumbing for browsing revisions.
- Prevented duplicate revisions when content did not actually change.

### Local Data Integrity and Recovery

- Added migration and snapshot support for local configuration version data.
- Hardened updates for legacy/orphaned configuration rows:
  - allow the update when the project UUID is unchanged, even if the referenced project is missing locally;
  - recover gracefully when `current_version_id` is stale or version rows are missing.
- Added regression tests for orphan-project and missing-current-version scenarios.

### Sync Reliability

- Added a smart self-healing path for sync errors.
- Fixed duplicate-project sync edge cases.

### API and Logging

- Improved HTTP error mapping for configuration updates (`404/403` instead of a generic `500` in known cases).
- Enhanced the request logger to capture error responses (status, response body snippet, gin errors) for failed requests.

### UI and Export

- Updated project detail and index templates for revisions and related UX improvements.
- Updated the export pipeline and tests to align with revisions/project behavior changes.

## Breaking Changes

None identified.

## Files Changed

- 24 files changed, 2394 insertions(+), 482 deletions(-)
- Main touched areas:
  - `internal/services/local_configuration.go`
  - `internal/services/local_configuration_versioning_test.go`
  - `internal/localdb/{localdb.go,migrations.go,snapshots.go,local_migrations_test.go}`
  - `internal/services/export.go`
  - `cmd/qfs/main.go`
  - `web/templates/{config_revisions.html,project_detail.html,index.html,base.html}`

## Commits Included (`v1.3.1..v1.3.2`)

- `b153afb` - Add smart self-healing for sync errors
- `8508ee2` - Fix sync errors for duplicate projects and add modal scrolling
- `2e973b6` - Add configuration revisions system and project variant deletion
- `71f73e2` - chore: save current changes
- `cbaeafa` - Deduplicate configuration revisions and update revisions UI
- `075fc70` - Harden local config updates and error logging

## Testing

- [x] Targeted tests for local configuration update/version recovery:
  - `go test ./internal/services -run 'TestUpdateNoAuth(AllowsOrphanProjectWhenUUIDUnchanged|RecoversWhenCurrentVersionMissing|KeepsProjectWhenProjectUUIDOmitted)$'`
- [ ] Full regression suite not run in this release step.
@@ -1,89 +1,20 @@
# QuoteForge v1.2.1

**Release date:** 2026-02-09
**Tag:** `v1.2.1`
**GitHub:** https://git.mchus.pro/mchus/QuoteForge/releases/tag/v1.2.1
Release date: 2026-02-09
Tag: `v1.2.1`

## Summary
## Key changes

A quick patch release fixing a configurator regression after the v1.2.0 refactor. After the `CurrentPrice` field was removed from components, the autocomplete stopped showing components. Prices are now loaded on demand via the API.
- fixed the autocomplete regression after dropping `CurrentPrice` from components;
- component prices are now loaded via `/api/quote/price-levels`;
- accompanying release documentation prepared.

## What was fixed
## Release commits

### 🐛 Configurator Component Substitution (acf7c8a)
- **Problem:** after the v1.2.0 refactor, the autocomplete filtered out ALL components because it checked the removed `current_price` field
- **Solution:** load prices on demand via `/api/quote/price-levels`
  - Added `componentPricesCache` for in-memory price caching
  - The `ensurePricesLoaded()` function loads prices when the search field gains focus
  - All 3 autocomplete modes (single, multi, section) updated
  - Components without prices are still filtered out (as required), but the check uses the API
- **Affected files:** `web/templates/index.html` (+66 lines, -12 lines)
- `acf7c8a` fix: load component prices via API instead of removed current_price field
- `5984a57` refactor: remove CurrentPrice from local_components and transition to pricelist-based pricing
- `8fd27d1` docs: update v1.2.1 release notes with full changelog

## History v1.2.0 → v1.2.1
## Compatibility

Total commits: **2**

| Hash | Author | Message |
|------|--------|---------|
| `acf7c8a` | Claude | fix: load component prices via API instead of removed current_price field |
| `5984a57` | Claude | refactor: remove CurrentPrice from local_components and transition to pricelist-based pricing |

## Testing

✅ Configurator component substitution works
✅ Prices load correctly from the pricelist
✅ Offline mode is supported (prices are cached after the first load)
✅ Multi-pricelist support is functional (estimate/warehouse/competitor)

## Breaking Changes

No breaking changes for end users.

⚠️ **For developers:** the `ComponentView` API no longer returns `CurrentPrice`.

## Migration

No DB migration required - all migrations were applied in v1.2.0.

## Installation

### macOS

```bash
# Download and unpack
tar xzf qfs-v1.2.1-darwin-arm64.tar.gz  # for Apple Silicon
# or
tar xzf qfs-v1.2.1-darwin-amd64.tar.gz  # for Intel Macs

# Remove the Gatekeeper quarantine attribute (if needed)
xattr -d com.apple.quarantine ./qfs

# Run
./qfs
```

### Linux

```bash
tar xzf qfs-v1.2.1-linux-amd64.tar.gz
./qfs
```

### Windows

```bash
# Unpack qfs-v1.2.1-windows-amd64.zip
# Run qfs.exe
```

## Known issues

No known issues at release time.

## Support

For questions, contact [@mchus](https://git.mchus.pro/mchus)

---

*Sent with ❤️ via Claude Code*
- no additional migrations on top of `v1.2.0` are required.
25
releases/v1.5.3/RELEASE_NOTES.md
Normal file
@@ -0,0 +1,25 @@
# QuoteForge v1.5.3

Release date: 2026-03-15
Tag: `v1.5.3`

## Key changes

- the project documentation was cleaned up and brought to a single format;
- `bible-local/` was reduced to the current architectural contracts, without historical noise;
- temporary notes and the duplicate changelog in `releases/memory` were removed;
- the runtime config was simplified: dead sections were dropped from the active schema, keeping only the parts in use.

## Affected areas

- the root `README.md`;
- all of `bible-local/`;
- `config.example.yaml`;
- `internal/config/config.go`;
- release notes and the rules for storing them in `releases/`.

## Compatibility

- the release does not change the user data model;
- no local or server migrations are required;
- the main change concerns documentation and the shape of the runtime config.
67
releases/v1.5.4/RELEASE_NOTES.md
Normal file
@@ -0,0 +1,67 @@
# QuoteForge v1.5.4

Release date: 2026-03-16
Tag: `v1.5.4`

Previous release: `v1.5.0`

## Key changes

- the pricing tab was reworked: purchase and sale are split into separate tables with per-unit prices;
- the pricelist screen was reworked for different source types; the misleading `Поставщик` (supplier) and `partnumbers` columns were removed;
- runtime and startup were tightened: the local client is forced to run on loopback only, and the config is normalized automatically;
- variant actions were added, and the `_копия` naming rules were unified for variants and configurations;
- the CSV export of pricing tables in the configurator was fixed to an Excel-friendly format;
- the projects table was reworked: last-edit date, a tooltip with details, a separate author column, compact actions, and a tracker link;
- sync no longer overwrites project `updated_at` with the sync time;
- a one-off utility `cmd/migrate_project_updated_at` was added to re-sync project `updated_at` from MariaDB into the local SQLite;
- the runtime config, release notes, and `bible-local/` were cleaned up and aligned with the current architecture;
- `scripts/release.sh` no longer overwrites an existing `RELEASE_NOTES.md`.

## Summary

### UI and UX

- the pricing tab is now split into separate purchase and sale tables;
- the project list was reworked: a new date column, a separate author column, a tooltip with details, compact actions, a tracker link;
- rename, move, and copy actions were added for variants;
- copies of variants and configurations are now named uniformly: `_копия`, `_копия2`, `_копия3`.
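The uniform copy-naming rule above can be sketched as follows. The suffix sequence `_копия`, `_копия2`, `_копия3` comes from these notes; the function name and the injected existence check are assumptions (in the app the check would query the local SQLite).

```go
package main

import "fmt"

// nextCopyName returns the first free copy name for base, following the
// convention described above: "_копия", then "_копия2", "_копия3", ...
// The exists callback stands in for a database lookup.
func nextCopyName(base string, exists func(string) bool) string {
	name := base + "_копия"
	for n := 2; exists(name); n++ {
		name = fmt.Sprintf("%s_копия%d", base, n)
	}
	return name
}

func main() {
	taken := map[string]bool{"cfg_копия": true, "cfg_копия2": true}
	// Both plain and "2" copies exist, so the next free name is "cfg_копия3".
	fmt.Println(nextCopyName("cfg", func(s string) bool { return taken[s] }))
}
```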
### Pricelists and export

- the pricelist screen was reworked for different source types;
- the misleading `Поставщик` (supplier) and `partnumbers` columns were removed from pricelists;
- the CSV export of pricing tables in the configurator was brought to an Excel-compatible format.

### Runtime and sync

- the local runtime normalizes `server.host` to `127.0.0.1` and rewrites an invalid runtime config;
- sync no longer overwrites project `updated_at` with the local sync time;
- the `cmd/migrate_project_updated_at` utility restores local project dates from the server.
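The loopback enforcement above can be sketched as a normalization helper: any host that is not a loopback address is rewritten to `127.0.0.1` before the config is persisted. Only the loopback-only policy comes from the notes; the function name and the `localhost` special case are assumptions.

```go
package main

import (
	"fmt"
	"net"
)

// normalizeHost forces the local client to bind loopback only: any
// non-loopback or unparsable host is rewritten to 127.0.0.1, which is
// the behavior the release notes describe for the runtime config.
func normalizeHost(host string) string {
	if host == "localhost" {
		return host
	}
	if ip := net.ParseIP(host); ip != nil && ip.IsLoopback() {
		return host
	}
	return "127.0.0.1"
}

func main() {
	fmt.Println(normalizeHost("0.0.0.0"))   // rewritten to 127.0.0.1
	fmt.Println(normalizeHost("127.0.0.1")) // already loopback, kept
}
```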
### Documentation and release tooling

- `bible-local/` was reduced to the current architectural contracts;
- release notes and the release structure were brought to a single format;
- `scripts/release.sh` now preserves an existing `RELEASE_NOTES.md` instead of overwriting it with the template.

## Affected areas

- `cmd/qfs/`;
- `cmd/migrate_project_updated_at/`;
- `internal/localdb/`;
- `internal/services/project.go`;
- `internal/services/sync/service.go`;
- `internal/handlers/pricelist.go`;
- `web/templates/pricelist_detail.html`;
- `web/templates/index.html`;
- `web/templates/project_detail.html`;
- `web/templates/projects.html`;
- `web/templates/configs.html`;
- `bible-local/`.

## Compatibility

- the data schema does not change;
- no server SQL migrations are required;
- for already-corrupted local project dates, run `go run ./cmd/migrate_project_updated_at -apply` once.
@@ -21,13 +21,14 @@ fi
echo -e "${GREEN}Building QuoteForge version: ${VERSION}${NC}"
echo ""

# Create release directory
RELEASE_DIR="releases/${VERSION}"
mkdir -p "${RELEASE_DIR}"
ensure_release_notes() {
    local notes_path="$1"
    if [ -f "${notes_path}" ]; then
        echo -e "${GREEN}  ✓ Preserving existing RELEASE_NOTES.md${NC}"
        return
    fi

# Create release notes template (always include macOS Gatekeeper note)
if [ ! -f "${RELEASE_DIR}/RELEASE_NOTES.md" ]; then
cat > "${RELEASE_DIR}/RELEASE_NOTES.md" <<EOF
    cat > "${notes_path}" <<EOF
# QuoteForge ${VERSION}

Дата релиза: $(date +%Y-%m-%d)
@@ -42,7 +43,15 @@ cat > "${RELEASE_DIR}/RELEASE_NOTES.md" <<EOF
Снимите карантинный атрибут через терминал: \`xattr -d com.apple.quarantine /path/to/qfs-darwin-arm64\`
После этого бинарник запустится без предупреждения Gatekeeper.
EOF
fi
    echo -e "${GREEN}  ✓ Created RELEASE_NOTES.md template${NC}"
}

# Create release directory
RELEASE_DIR="releases/${VERSION}"
mkdir -p "${RELEASE_DIR}"

# Create release notes template only when missing.
ensure_release_notes "${RELEASE_DIR}/RELEASE_NOTES.md"

# Build for all platforms
echo -e "${YELLOW}→ Building binaries...${NC}"
78
todo.md
@@ -1,78 +0,0 @@
# QuoteForge - Cleanup plan (removing admin pricing)

Goal: remove everything related to price administration, warehouse stock reports, and alerts.
Keep: the configurator, projects, read-only pricelist viewing, sync, offline-first.

---

## 1. Delete files

- [x] `internal/handlers/pricing.go` (40.6KB) - the entire admin pricing UI
- [x] `internal/services/pricing/` - the entire price calculation package
- [x] `internal/services/pricelist/` - the entire pricelist management package
- [x] `internal/services/stock_import.go` - warehouse stock report import
- [x] `internal/services/alerts/` - the entire alerts package
- [x] `internal/warehouse/` - warehouse-based price calculation algorithms
- [x] `web/templates/admin_pricing.html` (109KB) - the admin pricing page
- [x] `cmd/cron/` - cron jobs (cleanup-pricelists, update-prices, update-popularity)
- [x] `cmd/importer/` - the data import utility

## 2. Simplify `internal/handlers/pricelist.go` (read-only)

The read-only methods (List, Get, GetItems, GetLotNames, GetLatest) already work
through `h.localDB` (SQLite) alone, without `pricelist.Service`.

- [x] Remove the `service *pricelist.Service` field from the `PricelistHandler` struct
- [x] Change the constructor: `NewPricelistHandler(localDB *localdb.LocalDB)`
- [x] Delete the write methods: `Create()`, `CreateWithProgress()`, `Delete()`, `SetActive()`, `CanWrite()`
- [x] Delete the `refreshLocalPricelistCacheFromServer()` method (depends on the service)
- [x] Delete the `pricelist` package import
- [x] Keep: `List()`, `Get()`, `GetItems()`, `GetLotNames()`, `GetLatest()`

## 3. Simplify `cmd/qfs/main.go`

- [x] Remove service creation: `pricingService`, `alertService`, `pricelistService`, `stockImportService`
- [x] Remove the handler: `pricingHandler`
- [x] Change `pricelistHandler` creation: `NewPricelistHandler(local)` (without the service)
- [x] Remove repositories: `priceRepo`, `alertRepo` (keep statsRepo - it is nil-safe)
- [x] Remove all `/api/admin/pricing/*` routes (lines ~1407-1430)
- [x] In `/api/pricelists/*`, keep only read-only routes:
  - `GET ""` (List), `GET "/latest"`, `GET "/:id"`, `GET "/:id/items"`, `GET "/:id/lots"`
- [x] Remove the write routes: `POST ""`, `POST "/create-with-progress"`, `PATCH "/:id/active"`, `DELETE "/:id"`, `GET "/can-write"`
- [x] Remove the `/admin/pricing` web page
- [x] Fix `/pricelists` - make it a page instead of a redirect to admin/pricing
- [x] In the `QuoteService` constructor: pass `nil` for `pricingService`
- [x] Remove imports: the `pricing`, `pricelist`, and `alerts` packages

## 4. Simplify `handlers/web.go`

- [x] Remove from `simplePages`: `admin_pricing.html`
- [x] Remove the method: `AdminPricing()`
- [x] Keep all other methods, including `Pricelists()` and `PricelistDetail()`

## 5. Simplify `base.html` (navigation)

- [x] Remove the "Администратор цен" (price admin) link
- [x] Add a "Прайслисты" (pricelists) link (to `/pricelists`)
- [x] Keep: "Мои проекты" (my projects), "Прайслисты", the sync indicator

## 6. Sync - keep in full

- Background worker: pull components + pricelists, push configurations
- All `/api/sync/*` endpoints stay
- This is the core of the offline-first architecture

## 7. Verification

- [x] `go build ./cmd/qfs` - compiles
- [x] `go vet ./...` - no errors
- [ ] Start the app → `/configs` works
- [ ] `/pricelists` - the read-only list works
- [ ] `/pricelists/:id` - details work
- [ ] Sync with the server works
- [ ] No admin pricing links left in the UI

## 8. Update CLAUDE.md

- [x] Remove the sections about admin pricing, stock import, alerts, and cron
- [x] Update the API endpoint list
- [x] Update the application description
1
web/static/vendor/htmx-1.9.10.min.js
vendored
Normal file
File diff suppressed because one or more lines are too long
83
web/static/vendor/tailwindcss.browser.js
vendored
Normal file
File diff suppressed because one or more lines are too long
@@ -5,8 +5,9 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{template "title" .}}</title>
<script src="https://cdn.tailwindcss.com"></script>
<script src="https://unpkg.com/htmx.org@1.9.10"></script>
<link rel="stylesheet" href="/static/app.css">
<script src="/static/vendor/tailwindcss.browser.js"></script>
<script src="/static/vendor/htmx-1.9.10.min.js"></script>
<style>
.htmx-request { opacity: 0.5; }
.line-clamp-2 { display: -webkit-box; -webkit-line-clamp: 2; -webkit-box-orient: vertical; overflow: hidden; }
@@ -43,6 +44,10 @@
{{template "content" .}}
</main>

<footer class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-3 text-right">
<span class="text-xs text-gray-400">v{{.AppVersion}}</span>
</footer>

<div id="toast" class="fixed bottom-4 right-4 z-50"></div>

<!-- Sync Info Modal -->
@@ -92,6 +97,15 @@
</div>
</div>

<div id="modal-pricelist-sync-issue" class="hidden">
<h4 class="font-medium text-red-700 mb-2">Состояние прайслистов</h4>
<div class="bg-red-50 border border-red-200 rounded px-3 py-2 text-sm space-y-1">
<div id="modal-pricelist-sync-summary" class="text-red-700">—</div>
<div id="modal-pricelist-sync-attempt" class="text-red-600 text-xs hidden"></div>
<div id="modal-pricelist-sync-error" class="text-red-600 text-xs hidden whitespace-pre-wrap"></div>
</div>
</div>

<!-- Section 2: Statistics -->
<div>
<h4 class="font-medium text-gray-900 mb-2">Статистика</h4>
@@ -229,6 +243,43 @@
readinessMinVersion.textContent = '';
}

const syncIssueSection = document.getElementById('modal-pricelist-sync-issue');
const syncIssueSummary = document.getElementById('modal-pricelist-sync-summary');
const syncIssueAttempt = document.getElementById('modal-pricelist-sync-attempt');
const syncIssueError = document.getElementById('modal-pricelist-sync-error');
const hasSyncFailure = data.last_pricelist_sync_status === 'failed';
if (data.has_incomplete_server_sync) {
syncIssueSection.classList.remove('hidden');
syncIssueSummary.textContent = 'Последняя синхронизация прайслистов прервалась. На сервере есть изменения, которые еще не загружены локально.';
} else if (hasSyncFailure) {
syncIssueSection.classList.remove('hidden');
syncIssueSummary.textContent = 'Последняя синхронизация прайслистов завершилась ошибкой.';
} else {
syncIssueSection.classList.add('hidden');
syncIssueSummary.textContent = '';
}
if (syncIssueSection.classList.contains('hidden')) {
syncIssueAttempt.classList.add('hidden');
syncIssueAttempt.textContent = '';
syncIssueError.classList.add('hidden');
syncIssueError.textContent = '';
} else {
if (data.last_pricelist_attempt_at) {
syncIssueAttempt.classList.remove('hidden');
syncIssueAttempt.textContent = 'Последняя попытка: ' + new Date(data.last_pricelist_attempt_at).toLocaleString('ru-RU');
} else {
syncIssueAttempt.classList.add('hidden');
syncIssueAttempt.textContent = '';
}
if (data.last_pricelist_sync_error) {
syncIssueError.classList.remove('hidden');
syncIssueError.textContent = data.last_pricelist_sync_error;
} else {
syncIssueError.classList.add('hidden');
syncIssueError.textContent = '';
}
}

// Section 2: Statistics
document.getElementById('modal-lot-count').textContent = data.is_online ? data.lot_count.toLocaleString() : '—';
document.getElementById('modal-lotlog-count').textContent = data.is_online ? data.lot_log_count.toLocaleString() : '—';
@@ -1,4 +1,4 @@
{{define "title"}}Ревизии - QuoteForge{{end}}
{{define "title"}}Ревизии - OFS{{end}}

{{define "content"}}
<div class="space-y-4">
@@ -135,15 +135,18 @@ async function loadVersions() {
}

function renderVersions(versions) {
if (versions.length === 0) {
const currentVersionNo = configData && configData.current_version_no ? Number(configData.current_version_no) : null;
const snapshots = versions.filter(v => Number(v.version_no) !== currentVersionNo);

if (snapshots.length === 0) {
document.getElementById('revisions-list').innerHTML =
'<div class="bg-white rounded-lg shadow p-8 text-center text-gray-500">Нет ревизий</div>';
'<div class="bg-white rounded-lg shadow p-8 text-center text-gray-500">Нет прошлых снимков. Рабочая версия остается main.</div>';
return;
}

let html = '<div class="bg-white rounded-lg shadow overflow-hidden"><table class="w-full">';
html += '<thead class="bg-gray-50"><tr>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Версия</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Снимок</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Дата</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Автор</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Артикул</th>';
@@ -152,16 +155,14 @@ function renderVersions(versions) {
html += '<th class="px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase">Действия</th>';
html += '</tr></thead><tbody class="divide-y">';

versions.forEach((v, idx) => {
snapshots.forEach((v) => {
const date = new Date(v.created_at).toLocaleString('ru-RU');
const author = v.created_by || '—';
const snapshot = parseVersionSnapshot(v);
const isCurrent = idx === 0;

html += '<tr class="hover:bg-gray-50' + (isCurrent ? ' bg-blue-50' : '') + '">';
html += '<tr class="hover:bg-gray-50">';
html += '<td class="px-4 py-3 text-sm font-medium">';
html += 'v' + v.version_no;
if (isCurrent) html += ' <span class="text-xs text-blue-600 font-normal">(текущая)</span>';
html += '</td>';
html += '<td class="px-4 py-3 text-sm text-gray-500">' + escapeHtml(date) + '</td>';
html += '<td class="px-4 py-3 text-sm text-gray-500">' + escapeHtml(author) + '</td>';
@@ -178,11 +179,8 @@ function renderVersions(versions) {
html += '<button onclick="cloneFromVersion(' + v.version_no + ')" class="text-green-600 hover:text-green-800" title="Скопировать из этой ревизии">';
html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M8 16H6a2 2 0 01-2-2V6a2 2 0 012-2h8a2 2 0 012 2v2m-6 12h8a2 2 0 002-2v-8a2 2 0 00-2-2h-8a2 2 0 00-2 2v8a2 2 0 002 2z"></path></svg></button>';

// Rollback (not for current version)
if (!isCurrent) {
html += '<button onclick="rollbackToVersion(' + v.version_no + ')" class="text-orange-600 hover:text-orange-800" title="Восстановить эту ревизию">';
html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M3 10h10a8 8 0 018 8v2M3 10l6 6m-6-6l6-6"></path></svg></button>';
}
html += '<button onclick="rollbackToVersion(' + v.version_no + ')" class="text-orange-600 hover:text-orange-800" title="Восстановить этот снимок как новую main-версию">';
html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M3 10h10a8 8 0 018 8v2M3 10l6 6m-6-6l6-6"></path></svg></button>';

html += '</td></tr>';
});
@@ -212,7 +210,7 @@ async function cloneFromVersion(versionNo) {
}

async function rollbackToVersion(versionNo) {
if (!confirm('Восстановить конфигурацию до ревизии v' + versionNo + '?')) return;
if (!confirm('Восстановить снимок v' + versionNo + ' как новую рабочую версию main?')) return;
const resp = await fetch('/api/configs/' + configUUID + '/rollback', {
method: 'POST',
headers: {'Content-Type': 'application/json'},
@@ -1,4 +1,4 @@
{{define "title"}}Мои конфигурации - QuoteForge{{end}}
{{define "title"}}Мои конфигурации - OFS{{end}}

{{define "content"}}
<div class="space-y-4">
@@ -53,6 +53,19 @@
<h2 class="text-xl font-semibold mb-4">Новая конфигурация</h2>

<div class="space-y-4">
<div>
<label class="block text-sm font-medium text-gray-700 mb-1">Тип оборудования</label>
<div class="inline-flex rounded-lg border border-gray-200 overflow-hidden w-full">
<button type="button" id="type-server-btn" onclick="setCreateType('server')"
class="flex-1 py-2 text-sm font-medium bg-blue-600 text-white">
Сервер
</button>
<button type="button" id="type-storage-btn" onclick="setCreateType('storage')"
class="flex-1 py-2 text-sm font-medium bg-white text-gray-700 hover:bg-gray-50 border-l border-gray-200">
СХД
</button>
</div>
</div>
<div>
<label class="block text-sm font-medium text-gray-700 mb-1">Название конфигурации</label>
<input type="text" id="opportunity-number" placeholder="Например: Сервер для проекта X"
@@ -203,6 +216,8 @@ let projectsCache = [];
let projectNameByUUID = {};
let projectCodeByUUID = {};
let projectVariantByUUID = {};
let configProjectUUIDByUUID = {};
let configNameByUUID = {};
let pendingMoveConfigUUID = '';
let pendingMoveProjectCode = '';
let pendingCreateConfigName = '';
@@ -343,6 +358,45 @@ function findProjectByInput(input) {
return null;
}

async function resolveUniqueConfigName(baseName, projectUUID, excludeUUID) {
const cleanedBase = (baseName || '').trim();
if (!cleanedBase) {
return {error: 'Введите название'};
}

let configs = [];
if (projectUUID) {
const resp = await fetch('/api/projects/' + projectUUID + '/configs?status=all');
if (!resp.ok) {
return {error: 'Не удалось проверить конфигурации проекта'};
}
const data = await resp.json().catch(() => ({}));
configs = Array.isArray(data.configurations) ? data.configurations : [];
} else {
configs = Object.keys(configProjectUUIDByUUID)
.filter(uuid => !configProjectUUIDByUUID[uuid])
.map(uuid => ({uuid: uuid, name: configNameByUUID[uuid] || ''}));
}

const used = new Set(
configs
.filter(cfg => !excludeUUID || cfg.uuid !== excludeUUID)
.map(cfg => (cfg.name || '').trim().toLowerCase())
);

if (!used.has(cleanedBase.toLowerCase())) {
return {name: cleanedBase, changed: false};
}

let candidate = cleanedBase + '_копия';
let suffix = 2;
while (used.has(candidate.toLowerCase())) {
candidate = cleanedBase + '_копия' + suffix;
suffix++;
}
return {name: candidate, changed: true};
}

function escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
@@ -385,14 +439,23 @@ function closeRenameModal() {

async function renameConfig() {
const uuid = document.getElementById('rename-uuid').value;
const name = document.getElementById('rename-input').value.trim();
const rawName = document.getElementById('rename-input').value.trim();

if (!name) {
if (!rawName) {
alert('Введите название');
return;
}

try {
const result = await resolveUniqueConfigName(rawName, configProjectUUIDByUUID[uuid] || '', uuid);
if (result.error) {
alert(result.error);
return;
}
const name = result.name;
if (result.changed) {
document.getElementById('rename-input').value = name;
}
const resp = await fetch('/api/configs/' + uuid + '/rename', {
method: 'PATCH',
headers: {
@@ -416,7 +479,7 @@ async function renameConfig() {

function openCloneModal(uuid, currentName) {
document.getElementById('clone-uuid').value = uuid;
document.getElementById('clone-input').value = currentName + ' (копия)';
document.getElementById('clone-input').value = currentName + '_копия';
document.getElementById('clone-modal').classList.remove('hidden');
document.getElementById('clone-modal').classList.add('flex');
document.getElementById('clone-input').focus();
@@ -430,14 +493,23 @@ function closeCloneModal() {

async function cloneConfig() {
const uuid = document.getElementById('clone-uuid').value;
const name = document.getElementById('clone-input').value.trim();
const rawName = document.getElementById('clone-input').value.trim();

if (!name) {
if (!rawName) {
alert('Введите название');
return;
}

try {
const result = await resolveUniqueConfigName(rawName, configProjectUUIDByUUID[uuid] || '', uuid);
if (result.error) {
alert(result.error);
return;
}
const name = result.name;
if (result.changed) {
document.getElementById('clone-input').value = name;
}
const resp = await fetch('/api/configs/' + uuid + '/clone', {
method: 'POST',
headers: {
@@ -459,7 +531,19 @@ async function cloneConfig() {
}
}

let createConfigType = 'server';

function setCreateType(type) {
createConfigType = type;
document.getElementById('type-server-btn').className = 'flex-1 py-2 text-sm font-medium ' +
(type === 'server' ? 'bg-blue-600 text-white' : 'bg-white text-gray-700 hover:bg-gray-50 border-l border-gray-200');
document.getElementById('type-storage-btn').className = 'flex-1 py-2 text-sm font-medium border-l border-gray-200 ' +
(type === 'storage' ? 'bg-blue-600 text-white' : 'bg-white text-gray-700 hover:bg-gray-50');
}

function openCreateModal() {
createConfigType = 'server';
setCreateType('server');
document.getElementById('opportunity-number').value = '';
document.getElementById('create-project-input').value = '';
document.getElementById('create-modal').classList.remove('hidden');
@@ -514,7 +598,8 @@ async function createConfigWithProject(name, projectUUID) {
items: [],
notes: '',
server_count: 1,
project_uuid: projectUUID || null
project_uuid: projectUUID || null,
config_type: createConfigType
})
});
@@ -851,6 +936,12 @@ async function loadConfigs() {
}

const data = await resp.json();
configProjectUUIDByUUID = {};
configNameByUUID = {};
(data.configurations || []).forEach(cfg => {
configProjectUUIDByUUID[cfg.uuid] = cfg.project_uuid || '';
configNameByUUID[cfg.uuid] = cfg.name || '';
});
renderConfigs(data.configurations || []);
updatePagination(data.total);
} catch(e) {
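The `resolveUniqueConfigName` helper introduced in this diff derives a free name by appending `_копия`, then `_копия2`, `_копия3`, and so on, comparing names case-insensitively. A standalone Go sketch of the same suffix rule (the `uniqueName` function is hypothetical, written only to illustrate the algorithm, not part of qfs):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// uniqueName mirrors the template's resolveUniqueConfigName suffix rule:
// if base is already taken, try base+"_копия", then base+"_копия2",
// "_копия3", ... Comparison is trimmed and case-insensitive, as in the JS.
func uniqueName(base string, used []string) string {
	taken := make(map[string]bool, len(used))
	for _, n := range used {
		taken[strings.ToLower(strings.TrimSpace(n))] = true
	}
	base = strings.TrimSpace(base)
	if !taken[strings.ToLower(base)] {
		return base
	}
	candidate := base + "_копия"
	for suffix := 2; taken[strings.ToLower(candidate)]; suffix++ {
		candidate = base + "_копия" + strconv.Itoa(suffix)
	}
	return candidate
}

func main() {
	used := []string{"Server X", "Server X_копия"}
	fmt.Println(uniqueName("Server X", used)) // prints Server X_копия2
}
```

Note the first collision gets a bare `_копия` suffix and numbering only starts at 2, matching the clone-modal default `currentName + '_копия'` set elsewhere in this diff.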
File diff suppressed because it is too large
@@ -14,7 +14,21 @@
</span>
{{end}}

{{if .IsBlocked}}
{{if .HasIncompleteServerSync}}
<span class="bg-red-100 text-red-800 px-2 py-0.5 rounded-full text-xs font-medium flex items-center gap-1 cursor-pointer" title="{{.SyncIssueTitle}}" onclick="openSyncModal()">
<svg class="w-3 h-3" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 9v2m0 4h.01m-7.938 4h15.876c1.54 0 2.502-1.667 1.732-3L13.732 4c-.77-1.333-2.694-1.333-3.464 0L2.33 16c-.77 1.333.192 3 1.732 3z"></path>
</svg>
Не докачано
</span>
{{else if .HasFailedSync}}
<span class="bg-orange-100 text-orange-800 px-2 py-0.5 rounded-full text-xs font-medium flex items-center gap-1 cursor-pointer" title="{{.SyncIssueTitle}}" onclick="openSyncModal()">
<svg class="w-3 h-3" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 8v4m0 4h.01M4.93 19h14.14c1.54 0 2.502-1.667 1.732-3L13.732 4c-.77-1.333-2.694-1.333-3.464 0L3.2 16c-.77 1.333.192 3 1.732 3z"></path>
</svg>
Sync error
</span>
{{else if .IsBlocked}}
<span class="bg-red-100 text-red-800 px-2 py-0.5 rounded-full text-xs font-medium flex items-center gap-1 cursor-pointer" title="{{.BlockedReason}}" onclick="openSyncModal()">
<svg class="w-3 h-3" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 9v2m0 4h.01m-7.938 4h15.876c1.54 0 2.502-1.667 1.732-3L13.732 4c-.77-1.333-2.694-1.333-3.464 0L2.33 16c-.77 1.333.192 3 1.732 3z"></path>
@@ -1,4 +1,4 @@
{{define "title"}}QuoteForge - Партномера{{end}}
{{define "title"}}OFS - Партномера{{end}}

{{define "content"}}
<div class="space-y-4">
@@ -1,4 +1,4 @@
{{define "title"}}Прайслист - QuoteForge{{end}}
{{define "title"}}Прайслист - OFS{{end}}

{{define "content"}}
<div class="space-y-6">
@@ -59,9 +59,8 @@
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Категория</th>
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Описание</th>
<th id="th-qty" class="hidden px-6 py-3 text-right text-xs font-medium text-gray-500 uppercase">Доступно</th>
<th id="th-partnumbers" class="hidden px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Partnumbers</th>
<th class="px-6 py-3 text-right text-xs font-medium text-gray-500 uppercase">Цена, $</th>
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Настройки</th>
<th id="th-settings" class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Настройки</th>
</tr>
</thead>
<tbody id="items-body" class="bg-white divide-y divide-gray-200">
@@ -150,18 +149,23 @@
}
}

function isStockSource() {
const src = (currentSource || '').toLowerCase();
return src === 'warehouse' || src === 'competitor';
}

function isWarehouseSource() {
return (currentSource || '').toLowerCase() === 'warehouse';
}

function itemsColspan() {
return isWarehouseSource() ? 7 : 5;
return isStockSource() ? 4 : 5;
}

function toggleWarehouseColumns() {
const visible = isWarehouseSource();
document.getElementById('th-qty').classList.toggle('hidden', !visible);
document.getElementById('th-partnumbers').classList.toggle('hidden', !visible);
const stock = isStockSource();
document.getElementById('th-qty').classList.toggle('hidden', true);
document.getElementById('th-settings').classList.toggle('hidden', stock);
}

function formatQty(qty) {
@@ -234,27 +238,34 @@
return;
}

const showWarehouse = isWarehouseSource();
const stock = isStockSource();
const p = stock ? 'px-3 py-2' : 'px-6 py-3';
const descMax = stock ? 30 : 60;

const html = items.map(item => {
const price = item.price.toLocaleString('en-US', { minimumFractionDigits: 2, maximumFractionDigits: 2 });
const description = item.lot_description || '-';
const truncatedDesc = description.length > 60 ? description.substring(0, 60) + '...' : description;
const qty = formatQty(item.available_qty);
const partnumbers = Array.isArray(item.partnumbers) && item.partnumbers.length > 0 ? item.partnumbers.join(', ') : '—';
const truncatedDesc = description.length > descMax ? description.substring(0, descMax) + '...' : description;

// Price cell — add spread badge for competitor
let priceHtml = price;
if (!isWarehouseSource() && item.price_spread_pct > 0) {
priceHtml += ` <span class="text-xs text-amber-600 font-medium" title="Разброс цен конкурентов">±${item.price_spread_pct.toFixed(0)}%</span>`;
}

return `
<tr class="hover:bg-gray-50">
<td class="px-6 py-3 whitespace-nowrap">
<span class="font-mono text-sm">${item.lot_name}</span>
<td class="${p} max-w-[160px]">
<span class="font-mono text-sm break-all">${escapeHtml(item.lot_name)}</span>
</td>
<td class="px-6 py-3 whitespace-nowrap">
<span class="px-2 py-1 text-xs bg-gray-100 rounded">${item.category || '-'}</span>
<td class="${p} whitespace-nowrap">
<span class="px-2 py-1 text-xs bg-gray-100 rounded">${escapeHtml(item.category || '-')}</span>
</td>
<td class="px-6 py-3 text-sm text-gray-500" title="${description}">${truncatedDesc}</td>
${showWarehouse ? `<td class="px-6 py-3 whitespace-nowrap text-right font-mono">${qty}</td>` : ''}
${showWarehouse ? `<td class="px-6 py-3 text-sm text-gray-600" title="${escapeHtml(partnumbers)}">${escapeHtml(partnumbers)}</td>` : ''}
<td class="px-6 py-3 whitespace-nowrap text-right font-mono">${price}</td>
<td class="px-6 py-3 whitespace-nowrap text-sm"><span class="text-xs bg-gray-100 px-2 py-1 rounded">${formatPriceSettings(item)}</span></td>
<td class="${p} text-sm text-gray-500" title="${escapeHtml(description)}">${escapeHtml(truncatedDesc)}</td>
<td class="${p} whitespace-nowrap text-right font-mono">${priceHtml}</td>
${!stock ? `<td class="${p} whitespace-nowrap text-sm"><span class="text-xs bg-gray-100 px-2 py-1 rounded">${formatPriceSettings(item)}</span></td>` : ''}
</tr>
`;
}).join('');
@@ -1,4 +1,4 @@
{{define "title"}}Прайслисты - QuoteForge{{end}}
{{define "title"}}Прайслисты - OFS{{end}}

{{define "content"}}
<div class="space-y-6">
@@ -1,4 +1,4 @@
{{define "title"}}Проект - QuoteForge{{end}}
{{define "title"}}Проект - OFS{{end}}

{{define "content"}}
<div class="space-y-4">
@@ -29,23 +29,26 @@
<button onclick="openNewVariantModal()" class="inline-flex w-full sm:w-auto justify-center items-center px-3 py-1.5 text-sm font-medium bg-purple-600 text-white rounded-lg hover:bg-purple-700">
+ Вариант
</button>
<button onclick="openVariantActionModal()" class="inline-flex w-full sm:w-auto justify-center items-center px-3 py-1.5 text-sm font-medium bg-indigo-600 text-white rounded-lg hover:bg-indigo-700">
Действия с вариантом
</button>
</div>
</div>
</div>

<div id="action-buttons" class="mt-4 grid grid-cols-1 sm:grid-cols-6 gap-3">
<button onclick="openCreateModal()" class="py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 font-medium">
Новая конфигурация
+ Конфигурация
</button>
<button onclick="openVendorImportModal()" class="py-2 bg-amber-600 text-white rounded-lg hover:bg-amber-700 font-medium">
Импорт выгрузки вендора
</button>
<button onclick="openProjectSettingsModal()" class="py-2 bg-gray-700 text-white rounded-lg hover:bg-gray-800 font-medium">
Параметры
Импорт
</button>
<button onclick="openExportModal()" class="py-2 bg-green-600 text-white rounded-lg hover:bg-green-700 font-medium">
Экспорт CSV
</button>
<button onclick="openProjectSettingsModal()" class="py-2 bg-gray-700 text-white rounded-lg hover:bg-gray-800 font-medium">
Параметры
</button>
<button id="delete-variant-btn" onclick="deleteVariant()" class="py-2 bg-red-600 text-white rounded-lg hover:bg-red-700 font-medium hidden">
Удалить вариант
</button>
@@ -74,6 +77,19 @@
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
<h2 class="text-xl font-semibold mb-4">Новая конфигурация в проекте</h2>
<div class="space-y-4">
<div>
<label class="block text-sm font-medium text-gray-700 mb-1">Тип оборудования</label>
<div class="inline-flex rounded-lg border border-gray-200 overflow-hidden w-full">
<button type="button" id="pd-type-server-btn" onclick="pdSetCreateType('server')"
class="flex-1 py-2 text-sm font-medium bg-blue-600 text-white">
Сервер
</button>
<button type="button" id="pd-type-storage-btn" onclick="pdSetCreateType('storage')"
class="flex-1 py-2 text-sm font-medium bg-white text-gray-700 hover:bg-gray-50 border-l border-gray-200">
СХД
</button>
</div>
</div>
<div>
<label class="block text-sm font-medium text-gray-700 mb-1">Название конфигурации</label>
<input type="text" id="create-name" placeholder="Например: OPP-2026-001"
@@ -110,33 +126,60 @@

<div id="project-export-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
<h2 class="text-xl font-semibold mb-4">Экспорт CSV</h2>
<div class="space-y-4">
<div class="text-sm text-gray-600">
Экспортирует проект в формате вкладки ценообразования. Если включён `BOM`, строки строятся по BOM; иначе по текущему Estimate.
<h2 class="text-xl font-semibold mb-5">Экспорт CSV</h2>
<div class="space-y-5">

<!-- Section 1: Артикул -->
<div>
<p class="text-xs font-semibold uppercase tracking-wide text-gray-500 mb-2">Артикул</p>
<div class="space-y-2">
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-lot" type="checkbox" class="rounded border-gray-300" checked>
<span>LOT</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-bom" type="checkbox" class="rounded border-gray-300">
<span>BOM <span class="text-gray-400 font-normal">(строки по BOM, иначе по Estimate)</span></span>
</label>
</div>
</div>
<div class="space-y-3">
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-lot" type="checkbox" class="rounded border-gray-300" checked>
<span>LOT</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-bom" type="checkbox" class="rounded border-gray-300">
<span>BOM</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-estimate" type="checkbox" class="rounded border-gray-300" checked>
<span>Estimate</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-stock" type="checkbox" class="rounded border-gray-300">
<span>Stock</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-competitor" type="checkbox" class="rounded border-gray-300">
<span>Конкуренты</span>
</label>

<!-- Section 2: Цены -->
<div>
<p class="text-xs font-semibold uppercase tracking-wide text-gray-500 mb-2">Цены</p>
<div class="space-y-2">
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-estimate" type="checkbox" class="rounded border-gray-300" checked>
<span>Est</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-stock" type="checkbox" class="rounded border-gray-300">
<span>Stock</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-competitor" type="checkbox" class="rounded border-gray-300">
<span>Конкуренты</span>
</label>
</div>
</div>

<!-- Section 3: Базис поставки -->
<div>
<p class="text-xs font-semibold uppercase tracking-wide text-gray-500 mb-2">Базис поставки</p>
<div class="flex gap-6">
<label class="flex items-center gap-2 text-sm text-gray-700 cursor-pointer">
<input type="radio" name="export-basis" value="fob" class="border-gray-300" checked>
<span class="font-medium">FOB</span>
<span class="text-gray-400">— Цена покупки</span>
</label>
<label class="flex items-center gap-2 text-sm text-gray-700 cursor-pointer">
<input type="radio" name="export-basis" value="ddp" class="border-gray-300">
<span class="font-medium">DDP</span>
<span class="text-gray-400">— Цена продажи ×1,3</span>
</label>
</div>
</div>

<div id="project-export-status" class="hidden text-sm rounded border px-3 py-2"></div>
</div>
<div class="flex justify-end space-x-3 mt-6">
@@ -173,6 +216,34 @@
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div id="variant-action-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
|
||||
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
|
||||
<h2 class="text-xl font-semibold mb-4">Действия с вариантом</h2>
|
||||
<div class="space-y-4">
|
||||
<div>
|
||||
<label class="block text-sm font-medium text-gray-700 mb-1">Название</label>
|
||||
<input type="text" id="variant-action-name"
|
||||
class="w-full px-3 py-2 border rounded focus:ring-2 focus:ring-indigo-500 focus:border-indigo-500">
|
||||
</div>
|
||||
<label class="flex items-center gap-2 text-sm text-gray-700">
|
||||
<input type="checkbox" id="variant-action-copy" class="rounded border-gray-300">
|
||||
Создать копию
|
||||
</label>
|
||||
<div>
|
||||
<label class="block text-sm font-medium text-gray-700 mb-1">Код проекта</label>
|
||||
<input type="text" id="variant-action-code"
|
||||
class="w-full px-3 py-2 border rounded focus:ring-2 focus:ring-indigo-500 focus:border-indigo-500">
|
||||
</div>
|
||||
<input type="hidden" id="variant-action-current-name">
|
||||
<input type="hidden" id="variant-action-current-code">
|
||||
</div>
|
||||
<div class="flex justify-end space-x-3 mt-6">
|
||||
<button onclick="closeVariantActionModal()" class="px-4 py-2 text-gray-600 hover:text-gray-800">Отмена</button>
|
||||
<button onclick="saveVariantAction()" class="px-4 py-2 bg-indigo-600 text-white rounded hover:bg-indigo-700">Сохранить</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div id="config-action-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
|
||||
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
|
||||
<h2 class="text-xl font-semibold mb-4">Действия с конфигурацией</h2>
|
||||
@@ -332,6 +403,7 @@ function renderVariantSelect() {
        if (item.uuid === projectUUID) {
            option.className += ' font-semibold text-gray-900';
            label.textContent = variantLabel;
            document.title = (project && project.code ? project.code : '—') + ' / ' + variantLabel + ' — OFS';
        }
        option.textContent = variantLabel;
        option.onclick = function() {
@@ -441,8 +513,7 @@ function renderConfigs(configs) {
        html += '<td class="px-4 py-3 text-sm text-gray-500"><input type="number" min="1" value="' + serverCount + '" class="w-16 px-1 py-0.5 border rounded text-center text-sm" data-uuid="' + c.uuid + '" data-prev="' + serverCount + '" onchange="updateConfigServerCount(this)"></td>';
    }
    html += '<td class="px-4 py-3 text-sm text-right" data-total-uuid="' + c.uuid + '">' + formatMoneyNoDecimals(total) + '</td>';
    const versionNo = c.current_version_no || 1;
    html += '<td class="px-2 py-3 text-sm text-center text-gray-500 w-12">v' + versionNo + '</td>';
    html += '<td class="px-2 py-3 text-sm text-center text-gray-500 w-12">main</td>';
    html += '<td class="px-4 py-3 text-sm text-right whitespace-nowrap"><div class="inline-flex items-center justify-end gap-2">';
    if (configStatusMode === 'archived') {
        html += '<button onclick="reactivateConfig(\'' + c.uuid + '\')" class="text-emerald-600 hover:text-emerald-800" title="Восстановить">';
@@ -518,7 +589,19 @@ async function loadConfigs() {
    }
}

let pdCreateConfigType = 'server';

function pdSetCreateType(type) {
    pdCreateConfigType = type;
    document.getElementById('pd-type-server-btn').className = 'flex-1 py-2 text-sm font-medium ' +
        (type === 'server' ? 'bg-blue-600 text-white' : 'bg-white text-gray-700 hover:bg-gray-50');
    document.getElementById('pd-type-storage-btn').className = 'flex-1 py-2 text-sm font-medium border-l border-gray-200 ' +
        (type === 'storage' ? 'bg-blue-600 text-white' : 'bg-white text-gray-700 hover:bg-gray-50');
}

function openCreateModal() {
    pdCreateConfigType = 'server';
    pdSetCreateType('server');
    document.getElementById('create-name').value = '';
    document.getElementById('create-modal').classList.remove('hidden');
    document.getElementById('create-modal').classList.add('flex');
@@ -540,6 +623,213 @@ function closeNewVariantModal() {
    document.getElementById('new-variant-modal').classList.remove('flex');
}

function openVariantActionModal() {
    if (!project) return;
    const currentName = (project.variant || '').trim();
    const currentCode = (project.code || '').trim();
    document.getElementById('variant-action-current-name').value = currentName;
    document.getElementById('variant-action-current-code').value = currentCode;
    document.getElementById('variant-action-name').value = currentName;
    document.getElementById('variant-action-code').value = currentCode;
    document.getElementById('variant-action-copy').checked = false;
    document.getElementById('variant-action-modal').classList.remove('hidden');
    document.getElementById('variant-action-modal').classList.add('flex');
    const nameInput = document.getElementById('variant-action-name');
    nameInput.focus();
    nameInput.select();
}

function closeVariantActionModal() {
    document.getElementById('variant-action-modal').classList.add('hidden');
    document.getElementById('variant-action-modal').classList.remove('flex');
}

function findUniqueVariantActionName(baseName, targetCode, excludeProjectUUID) {
    const cleanedBase = (baseName || '').trim();
    if (!cleanedBase || normalizeVariantLabel(cleanedBase).toLowerCase() === 'main') {
        return {error: 'Имя варианта не должно быть пустым и не может быть main'};
    }

    const code = (targetCode || '').trim();
    const used = new Set(
        projectsCatalog
            .filter(p => (p.code || '').trim().toLowerCase() === code.toLowerCase())
            .filter(p => !excludeProjectUUID || p.uuid !== excludeProjectUUID)
            .map(p => ((p.variant || '').trim()).toLowerCase())
    );

    if (!used.has(cleanedBase.toLowerCase())) {
        return {name: cleanedBase, changed: false};
    }

    let candidate = cleanedBase + '_копия';
    let suffix = 2;
    while (used.has(candidate.toLowerCase())) {
        candidate = cleanedBase + '_копия' + suffix;
        suffix++;
    }
    return {name: candidate, changed: true};
}
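The "_копия" suffixing used by the variant and config name resolvers above can be sketched standalone. This is a minimal illustration, not the app's code: `uniqueName`, its parameters, and the sample names are invented here; the app's versions additionally pull the candidate list from `projectsCatalog` or the configs API.

```javascript
// Sketch of the case-insensitive dedup: if the base name is taken,
// try "<base>_копия", then "<base>_копия2", "<base>_копия3", …
// until a free name is found.
function uniqueName(baseName, existingNames) {
    const base = (baseName || '').trim();
    const used = new Set(existingNames.map(n => n.trim().toLowerCase()));
    if (!used.has(base.toLowerCase())) {
        return {name: base, changed: false};
    }
    let candidate = base + '_копия';
    let suffix = 2;
    while (used.has(candidate.toLowerCase())) {
        candidate = base + '_копия' + suffix;
        suffix++;
    }
    return {name: candidate, changed: true};
}

console.log(uniqueName('web', ['db', 'cache']).name);      // "web"
console.log(uniqueName('web', ['Web']).name);              // "web_копия"
console.log(uniqueName('web', ['web', 'web_копия']).name); // "web_копия2"
```

Note the suffix numbering starts at 2, so the first collision after "_копия" produces "_копия2", never "_копия1".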
async function resolveUniqueConfigActionName(baseName, targetProjectUUID, excludeConfigUUID) {
    const cleanedBase = (baseName || '').trim();
    if (!cleanedBase) {
        return {error: 'Введите название'};
    }

    let configs = [];
    if (targetProjectUUID === projectUUID) {
        configs = Array.isArray(allConfigs) ? allConfigs : [];
    } else {
        const resp = await fetch('/api/projects/' + targetProjectUUID + '/configs?status=all');
        if (!resp.ok) {
            return {error: 'Не удалось проверить конфигурации целевого проекта'};
        }
        const data = await resp.json().catch(() => ({}));
        configs = Array.isArray(data.configurations) ? data.configurations : [];
    }

    const used = new Set(
        configs
            .filter(cfg => !excludeConfigUUID || cfg.uuid !== excludeConfigUUID)
            .map(cfg => (cfg.name || '').trim().toLowerCase())
    );

    if (!used.has(cleanedBase.toLowerCase())) {
        return {name: cleanedBase, changed: false};
    }

    let candidate = cleanedBase + '_копия';
    let suffix = 2;
    while (used.has(candidate.toLowerCase())) {
        candidate = cleanedBase + '_копия' + suffix;
        suffix++;
    }
    return {name: candidate, changed: true};
}
async function cloneVariantConfigurations(targetProjectUUID) {
    const listResp = await fetch('/api/projects/' + projectUUID + '/configs');
    if (!listResp.ok) {
        throw new Error('Не удалось загрузить конфигурации варианта');
    }
    const listData = await listResp.json().catch(() => ({}));
    const configs = Array.isArray(listData.configurations) ? listData.configurations : [];
    for (const cfg of configs) {
        const cloneResp = await fetch('/api/projects/' + targetProjectUUID + '/configs/' + cfg.uuid + '/clone', {
            method: 'POST',
            headers: {'Content-Type': 'application/json'},
            body: JSON.stringify({name: cfg.name})
        });
        if (!cloneResp.ok) {
            throw new Error('Не удалось скопировать конфигурацию «' + (cfg.name || 'без названия') + '»');
        }
    }
}
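The clone loop above is sequential and fail-fast: each configuration is copied with an awaited POST, and the first failed response aborts the rest. A minimal sketch of that control flow, with `cloneAll` and `fetchFn` invented here as stand-ins for the real function and `fetch`:

```javascript
// Fail-fast sequential cloning: await each request in order;
// the first non-ok response throws and stops the loop.
async function cloneAll(configs, fetchFn) {
    const cloned = [];
    for (const cfg of configs) {
        const resp = await fetchFn(cfg); // stands in for fetch(...)
        if (!resp.ok) {
            throw new Error('failed on ' + cfg.name);
        }
        cloned.push(cfg.name);
    }
    return cloned;
}
```

Because the loop awaits each request before starting the next, a failure leaves earlier clones in place but copies nothing after the failing one; the caller in `saveVariantAction` surfaces this as a partial-copy warning.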
async function saveVariantAction() {
    if (!project) return;
    const notify = (message, type) => {
        if (typeof showToast === 'function') {
            showToast(message, type || 'success');
        } else {
            alert(message);
        }
    };

    const currentName = document.getElementById('variant-action-current-name').value.trim();
    const currentCode = document.getElementById('variant-action-current-code').value.trim();
    const rawName = document.getElementById('variant-action-name').value.trim();
    const code = document.getElementById('variant-action-code').value.trim();
    const copy = document.getElementById('variant-action-copy').checked;

    if (!code) {
        notify('Введите код проекта', 'error');
        return;
    }
    const uniqueNameResult = findUniqueVariantActionName(rawName, code, copy ? '' : projectUUID);
    if (uniqueNameResult.error) {
        notify(uniqueNameResult.error, 'error');
        return;
    }
    const name = uniqueNameResult.name;
    if (uniqueNameResult.changed) {
        document.getElementById('variant-action-name').value = name;
        notify('Имя варианта занято, использовано ' + name, 'success');
    }

    if (copy) {
        const createResp = await fetch('/api/projects', {
            method: 'POST',
            headers: {'Content-Type': 'application/json'},
            body: JSON.stringify({
                code: code,
                variant: name,
                name: project.name || null,
                tracker_url: (project.tracker_url || '').trim()
            })
        });
        if (!createResp.ok) {
            if (createResp.status === 400) {
                notify('Имя варианта не может быть main', 'error');
                return;
            }
            if (createResp.status === 409) {
                notify('Вариант с таким кодом и значением уже существует', 'error');
                return;
            }
            notify('Не удалось создать копию варианта', 'error');
            return;
        }
        const created = await createResp.json().catch(() => null);
        if (!created || !created.uuid) {
            notify('Не удалось создать копию варианта', 'error');
            return;
        }
        try {
            await cloneVariantConfigurations(created.uuid);
        } catch (err) {
            notify(err.message || 'Вариант создан, но конфигурации не скопированы полностью', 'error');
            window.location.href = '/projects/' + created.uuid;
            return;
        }
        closeVariantActionModal();
        notify('Копия варианта создана', 'success');
        window.location.href = '/projects/' + created.uuid;
        return;
    }

    const changed = name !== currentName || code !== currentCode;
    if (!changed) {
        closeVariantActionModal();
        return;
    }

    const updateResp = await fetch('/api/projects/' + projectUUID, {
        method: 'PUT',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({code: code, variant: name})
    });
    if (!updateResp.ok) {
        if (updateResp.status === 400) {
            notify('Имя варианта не может быть main', 'error');
            return;
        }
        if (updateResp.status === 409) {
            notify('Вариант с таким кодом и значением уже существует', 'error');
            return;
        }
        notify('Не удалось сохранить вариант', 'error');
        return;
    }

    closeVariantActionModal();
    await loadProject();
    await loadConfigs();
    updateDeleteVariantButton();
    notify('Вариант обновлён', 'success');
}
async function createNewVariant() {
    if (!project) return;
    const code = (project.code || '').trim();
@@ -669,7 +959,7 @@ async function createConfig() {
    const resp = await fetch('/api/projects/' + projectUUID + '/configs', {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({name: name, items: [], notes: '', server_count: 1})
        body: JSON.stringify({name: name, items: [], notes: '', server_count: 1, config_type: pdCreateConfigType})
    });
    if (!resp.ok) {
        alert('Не удалось создать конфигурацию');
@@ -864,12 +1154,22 @@ async function saveConfigAction() {
        notify('Введите название', 'error');
        return;
    }
    const uniqueNameResult = await resolveUniqueConfigActionName(name, targetProjectUUID, copy ? '' : uuid);
    if (uniqueNameResult.error) {
        notify(uniqueNameResult.error, 'error');
        return;
    }
    const resolvedName = uniqueNameResult.name;
    if (uniqueNameResult.changed) {
        document.getElementById('config-action-name').value = resolvedName;
        notify('Имя занято, использовано ' + resolvedName, 'success');
    }

    if (copy) {
        const cloneResp = await fetch('/api/projects/' + targetProjectUUID + '/configs/' + uuid + '/clone', {
            method: 'POST',
            headers: {'Content-Type': 'application/json'},
            body: JSON.stringify({name: name})
            body: JSON.stringify({name: resolvedName})
        });
        if (!cloneResp.ok) {
            notify('Не удалось скопировать конфигурацию', 'error');
@@ -886,11 +1186,11 @@ async function saveConfigAction() {
    }

    let changed = false;
    if (name !== currentName) {
    if (resolvedName !== currentName) {
        const renameResp = await fetch('/api/configs/' + uuid + '/rename', {
            method: 'PATCH',
            headers: {'Content-Type': 'application/json'},
            body: JSON.stringify({name: name})
            body: JSON.stringify({name: resolvedName})
        });
        if (!renameResp.ok) {
            notify('Не удалось переименовать конфигурацию', 'error');
@@ -1016,6 +1316,7 @@ function updateDeleteVariantButton() {
document.getElementById('create-modal').addEventListener('click', function(e) { if (e.target === this) closeCreateModal(); });
document.getElementById('vendor-import-modal').addEventListener('click', function(e) { if (e.target === this) closeVendorImportModal(); });
document.getElementById('new-variant-modal').addEventListener('click', function(e) { if (e.target === this) closeNewVariantModal(); });
document.getElementById('variant-action-modal').addEventListener('click', function(e) { if (e.target === this) closeVariantActionModal(); });
document.getElementById('config-action-modal').addEventListener('click', function(e) { if (e.target === this) closeConfigActionModal(); });
document.getElementById('project-settings-modal').addEventListener('click', function(e) { if (e.target === this) closeProjectSettingsModal(); });
document.getElementById('config-action-project-input').addEventListener('input', function(e) {
@@ -1026,7 +1327,7 @@ document.getElementById('config-action-copy').addEventListener('change', functio
    const currentName = document.getElementById('config-action-current-name').value;
    const nameInput = document.getElementById('config-action-name');
    if (e.target.checked && nameInput.value.trim() === currentName.trim()) {
        nameInput.value = currentName + ' (копия)';
        nameInput.value = currentName + '_копия';
    }
    syncActionModalMode();
});
@@ -1034,6 +1335,7 @@ document.addEventListener('keydown', function(e) {
    if (e.key === 'Escape') {
        closeCreateModal();
        closeVendorImportModal();
        closeVariantActionModal();
        closeConfigActionModal();
        closeProjectSettingsModal();
    }
@@ -1225,7 +1527,8 @@ async function exportProject() {
        include_bom: !!document.getElementById('export-col-bom')?.checked,
        include_estimate: !!document.getElementById('export-col-estimate')?.checked,
        include_stock: !!document.getElementById('export-col-stock')?.checked,
        include_competitor: !!document.getElementById('export-col-competitor')?.checked
        include_competitor: !!document.getElementById('export-col-competitor')?.checked,
        basis: document.querySelector('input[name="export-basis"]:checked')?.value || 'fob'
    };

    if (submitBtn) submitBtn.disabled = true;
@@ -1,4 +1,4 @@
{{define "title"}}Мои проекты - QuoteForge{{end}}
{{define "title"}}Мои проекты - OFS{{end}}

{{define "content"}}
<div class="space-y-4">
@@ -64,7 +64,7 @@ let status = 'active';
let projectsSearch = '';
let authorSearch = '';
let currentPage = 1;
let perPage = 10;
let perPage = 33;
let sortField = 'created_at';
let sortDir = 'desc';
let createProjectTrackerManuallyEdited = false;
@@ -114,21 +114,21 @@ function formatDateParts(value) {
    };
}

function renderAuditCell(value, user) {
    const parts = formatDateParts(value);
    const safeUser = escapeHtml((user || '—').trim() || '—');
    if (!parts) {
        return '<div class="leading-tight">' +
            '<div class="text-gray-400">—</div>' +
            '<div class="text-gray-400">—</div>' +
            '<div class="text-gray-500">@ ' + safeUser + '</div>' +
            '</div>';
    }
    return '<div class="leading-tight whitespace-nowrap">' +
        '<div>' + escapeHtml(parts.date) + '</div>' +
        '<div class="text-gray-500">' + escapeHtml(parts.time) + '</div>' +
        '<div class="text-gray-600">@ ' + safeUser + '</div>' +
        '</div>';
function formatISODate(value) {
    if (!value) return '—';
    const date = new Date(value);
    if (Number.isNaN(date.getTime())) return '—';
    return date.toISOString().slice(0, 10);
}

function renderProjectDateCell(project) {
    const updatedDate = formatISODate(project && project.updated_at);
    const tooltip = [
        'Создан: ' + formatDateTime(project && project.created_at),
        'Изменен: ' + formatDateTime(project && project.updated_at),
        'Автор: ' + ((project && project.owner_username) || '—')
    ].join('\n');
    return '<div class="whitespace-nowrap text-gray-600 cursor-help" title="' + escapeHtml(tooltip) + '">' + escapeHtml(updatedDate) + '</div>';
}
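The new `formatISODate` in this hunk renders any missing or unparseable timestamp as a dash and valid ones as a UTC `YYYY-MM-DD` string. A standalone copy of the function as it appears in the diff, for illustration:

```javascript
// Copy of formatISODate from the diff above: empty or invalid input
// yields the "—" placeholder, valid dates yield YYYY-MM-DD (UTC).
function formatISODate(value) {
    if (!value) return '—';
    const date = new Date(value);
    if (Number.isNaN(date.getTime())) return '—';
    return date.toISOString().slice(0, 10);
}

console.log(formatISODate('2024-03-05T10:00:00Z')); // "2024-03-05"
console.log(formatISODate(''));                     // "—"
console.log(formatISODate('not a date'));           // "—"
```

One caveat of `toISOString().slice(0, 10)`: the date is taken in UTC, so a timestamp late in the evening local time can render as the next calendar day.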
function normalizeVariant(variant) {
@@ -141,11 +141,11 @@ function renderVariantChips(code, fallbackVariant, fallbackUUID) {
    if (!variants.length) {
        const single = normalizeVariant(fallbackVariant);
        const href = fallbackUUID ? ('/projects/' + fallbackUUID) : '/projects';
        return '<a href="' + href + '" class="inline-flex items-center px-2 py-0.5 text-xs rounded-full bg-gray-100 text-gray-600 hover:bg-gray-200 hover:text-gray-900">' + escapeHtml(single) + '</a>';
        return '<a href="' + href + '" class="inline-flex items-center px-1.5 py-px text-xs leading-5 rounded-full bg-gray-100 text-gray-600 hover:bg-gray-200 hover:text-gray-900">' + escapeHtml(single) + '</a>';
    }
    return variants.map(v => {
        const href = v.uuid ? ('/projects/' + v.uuid) : '/projects';
        return '<a href="' + href + '" class="inline-flex items-center px-2 py-0.5 text-xs rounded-full bg-gray-100 text-gray-700 hover:bg-gray-200 hover:text-gray-900">' + escapeHtml(v.label) + '</a>';
        return '<a href="' + href + '" class="inline-flex items-center px-1.5 py-px text-xs leading-5 rounded-full bg-gray-100 text-gray-700 hover:bg-gray-200 hover:text-gray-900">' + escapeHtml(v.label) + '</a>';
    }).join(' ');
}

@@ -262,25 +262,25 @@ async function loadProjects() {
    let html = '<div class="overflow-x-auto"><table class="w-full table-fixed min-w-[980px]">';
    html += '<thead class="bg-gray-50">';
    html += '<tr>';
    html += '<th class="w-28 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Код</th>';
    html += '<th class="w-28 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Дата</th>';
    html += '<th class="w-32 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Код</th>';
    html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">';
    html += '<button type="button" onclick="toggleSort(\'name\')" class="inline-flex items-center gap-1 hover:text-gray-700">Название';
    if (sortField === 'name') {
        html += sortDir === 'asc' ? ' <span>↑</span>' : ' <span>↓</span>';
    }
    html += '</button></th>';
    html += '<th class="w-44 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Создан</th>';
    html += '<th class="w-44 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Изменен</th>';
    html += '<th class="w-36 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Варианты</th>';
    html += '<th class="w-36 px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase">Действия</th>';
    html += '<th class="w-24 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Автор</th>';
    html += '<th class="w-56 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Варианты</th>';
    html += '<th class="w-14 px-2 py-3 text-right text-xs font-medium text-gray-500 uppercase"></th>';
    html += '</tr>';
    html += '<tr>';
    html += '<th class="px-4 py-2"></th>';
    html += '<th class="px-2 py-2"></th>';
    html += '<th class="px-4 py-2"></th>';
    html += '<th class="px-4 py-2"><input id="projects-author-filter" type="text" value="' + escapeHtml(authorSearch) + '" placeholder="Фильтр автора" class="w-full px-2 py-1 border rounded text-xs focus:ring-1 focus:ring-blue-500 focus:border-blue-500"></th>';
    html += '<th class="px-4 py-2"></th>';
    html += '<th class="px-4 py-2"></th>';
    html += '<th class="px-4 py-2"></th>';
    html += '<th class="px-2 py-2"></th>';
    html += '</tr>';
    html += '</thead><tbody class="divide-y">';
@@ -292,36 +292,21 @@ async function loadProjects() {
        html += '<tr class="hover:bg-gray-50">';
        const displayName = p.name || '';
        const createdBy = p.owner_username || '—';
        const updatedBy = '—';
        const variantChips = renderVariantChips(p.code, p.variant, p.uuid);
        html += '<td class="px-4 py-3 text-sm font-medium align-top"><a class="inline-block max-w-full text-blue-600 hover:underline whitespace-nowrap" href="/projects/' + p.uuid + '">' + escapeHtml(p.code || '—') + '</a></td>';
        html += '<td class="px-4 py-3 text-sm text-gray-700 align-top"><div class="truncate" title="' + escapeHtml(displayName) + '">' + escapeHtml(displayName || '—') + '</div></td>';
        html += '<td class="px-4 py-3 text-sm text-gray-600 align-top">' + renderAuditCell(p.created_at, createdBy) + '</td>';
        html += '<td class="px-4 py-3 text-sm text-gray-600 align-top">' + renderAuditCell(p.updated_at, updatedBy) + '</td>';
        html += '<td class="px-4 py-3 text-sm align-top">' + renderProjectDateCell(p) + '</td>';
        html += '<td class="px-4 py-3 text-sm font-medium align-top break-words"><a class="inline text-blue-600 hover:underline break-all whitespace-normal" href="/projects/' + p.uuid + '">' + escapeHtml(p.code || '—') + '</a></td>';
        html += '<td class="px-4 py-3 text-sm text-gray-700 align-top break-words"><div class="whitespace-normal break-words" title="' + escapeHtml(displayName) + '">' + escapeHtml(displayName || '—') + '</div></td>';
        html += '<td class="px-4 py-3 text-sm text-gray-600 align-top whitespace-nowrap">' + escapeHtml(createdBy) + '</td>';
        html += '<td class="px-4 py-3 text-sm align-top"><div class="flex flex-wrap gap-1">' + variantChips + '</div></td>';
        html += '<td class="px-4 py-3 text-sm text-right"><div class="inline-flex items-center gap-2">';
        html += '<td class="px-2 py-3 text-sm text-right"><div class="inline-flex items-center justify-end gap-2">';

        if (p.is_active) {
            const safeName = escapeHtml(displayName).replace(/'/g, "\\'");
            html += '<button onclick="copyProject(' + JSON.stringify(p.uuid) + ', ' + JSON.stringify(displayName) + ')" class="text-green-700 hover:text-green-900" title="Копировать">';
            html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M8 16H6a2 2 0 01-2-2V6a2 2 0 012-2h8a2 2 0 012 2v2m-6 12h8a2 2 0 002-2v-8a2 2 0 00-2-2h-8a2 2 0 00-2 2v8a2 2 0 002 2z"></path></svg>';
            html += '</button>';

            html += '<button onclick="renameProject(' + JSON.stringify(p.uuid) + ', ' + JSON.stringify(displayName) + ')" class="text-blue-700 hover:text-blue-900" title="Переименовать">';
            html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M11 5H6a2 2 0 00-2 2v11a2 2 0 002 2h11a2 2 0 002-2v-5m-1.414-9.414a2 2 0 112.828 2.828L11.828 15H9v-2.828l8.586-8.586z"></path></svg>';
            html += '</button>';

            if ((p.tracker_url || '').trim() !== '') {
                html += '<a href="' + escapeHtml(p.tracker_url) + '" target="_blank" rel="noopener noreferrer" class="inline-flex items-center justify-center w-5 h-5 text-sky-700 hover:text-sky-900 font-semibold" title="Открыть в трекере">T</a>';
            }
            html += '<button onclick="archiveProject(\'' + p.uuid + '\')" class="text-red-700 hover:text-red-900" title="Удалить (в архив)">';
            html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M19 7l-.867 12.142A2 2 0 0116.138 21H7.862a2 2 0 01-1.995-1.858L5 7m5 4v6m4-6v6m1-10V4a1 1 0 00-1-1h-4a1 1 0 00-1 1v3M4 7h16"></path></svg>';
            html += '</button>';

            html += '<button onclick="addConfigToProject(\'' + p.uuid + '\')" class="text-indigo-700 hover:text-indigo-900" title="Добавить конфигурацию">';
            html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 4v16m8-8H4"></path></svg>';
            html += '</button>';
        } else {
            html += '<button onclick="reactivateProject(\'' + p.uuid + '\')" class="text-emerald-700 hover:text-emerald-900" title="Восстановить">';
            html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path></svg>';
            html += '</button>';
        }
        html += '</div></td>';
        html += '</tr>';

@@ -4,8 +4,9 @@
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>QuoteForge - Настройка подключения</title>
    <script src="https://cdn.tailwindcss.com"></script>
    <title>OFS - Настройка подключения</title>
    <link rel="stylesheet" href="/static/app.css">
    <script src="/static/vendor/tailwindcss.browser.js"></script>
</head>
<body class="bg-gray-100 min-h-screen flex items-center justify-center">
    <div class="max-w-md w-full mx-4">