Compare commits
65 Commits

7e1e2ac18d aea6bf91ab d58d52c5e7 7a628deb8a 7f6be786a8 a360992a01 1ea21ece33 7ae804d2d3
da5414c708 7a69c1513d f448111e77 a5dafd37d3 3661e345b1 f915866f83 c34a42aaf5 7de0f359b6
a8d8d7dfa9 20ce0124be b0a106415f a054fc7564 68cd087356 579ff46a7f 35c5600b36 c599897142
c964d66e64 f0e6bba7e9 61d7e493bd f930c79b34 a0a57e0969 b3003c4858 e2da8b4253 06397a6bd1
4e977737ee 7c3752f110 08ecfd0826 42458455f7 8663a87d28 2f0957ae4e 65db9b37ea ed0ef04d10
2e0faf4aec 4b0879779a 2b175a3d1e 5732c75b85 eb7c3739ce 6e0335af7c a42a80beb8 586114c79c
e9230c0e58 aa65fc8156 b22e961656 af83818564 8a138327a3 d1f65f6684 7b371add10 8d7fab39b4
d0400b18a3 d3f1a838eb c6086ac03a a127ebea82 347599e06b 4a44d48366 23882637b5 5e56f386cc
e5b6902c9e
.gitignore (vendored) — 7 changes

```diff
@@ -75,7 +75,12 @@ Network Trash Folder
 Temporary Items
 .apdisk
 
-# Release artifacts (binaries, archives, checksums), but DO track releases/memory/ for changelog
+# Release artifacts (binaries, archives, checksums), but keep markdown notes tracked
 releases/*
+!releases/README.md
 !releases/memory/
 !releases/memory/**
+!releases/**/
+releases/**/*
+!releases/README.md
+!releases/*/RELEASE_NOTES.md
```
.gitmodules (vendored) — new file (3 lines)

```ini
[submodule "bible"]
	path = bible
	url = https://git.mchus.pro/mchus/bible.git
```
AGENTS.md — new file (11 lines)

```markdown
# QuoteForge — Instructions for Codex

## Shared Engineering Rules
Read `bible/` — shared rules for all projects (CSV, logging, DB, tables, background tasks, code style).
Start with `bible/rules/patterns/` for specific contracts.

## Project Architecture
Read `bible-local/` — QuoteForge specific architecture.
Read order: `bible-local/README.md` → relevant files for the task.

Every architectural decision specific to this project must be recorded in `bible-local/`.
```
CLAUDE.md — 29 changes

````diff
@@ -1,24 +1,17 @@
-# QuoteForge - Claude Code Instructions
+# QuoteForge — Instructions for Claude
 
-## Bible
+## Shared Engineering Rules
+Read `bible/` — shared rules for all projects (CSV, logging, DB, tables, background tasks, code style).
+Start with `bible/rules/patterns/` for specific contracts.
 
-The **[bible/](bible/README.md)** is the single source of truth for this project's architecture, schemas, patterns, and rules. Read it before making any changes.
+## Project Architecture
+Read `bible-local/` — QuoteForge specific architecture.
+Read order: `bible-local/README.md` → relevant files for the task.
 
-**Rules:**
-- Every architectural decision must be recorded in `bible/` in the same commit as the code.
-- Bible files are written and updated in **English only**.
-- Before working on the codebase, check `releases/memory/` for the latest release notes.
+Every architectural decision specific to this project must be recorded in `bible-local/`.
 
-## Quick Reference
-
 ```bash
-# Verify build
-go build ./cmd/qfs && go vet ./...
-
-# Run
-go run ./cmd/qfs
-make run
-
-# Build
-make build-release
+go build ./cmd/qfs && go vet ./... # verify
+go run ./cmd/qfs # run
+make build-release # release build
 ```
````
README.md — 77 changes (removed lines translated from Russian)

````diff
@@ -1,66 +1,53 @@
 # QuoteForge
 
-**Corporate server configurator and quote calculation**
+Local-first desktop web app for server configuration, quotation, and project work.
 
-Offline-first architecture: user operations go through local SQLite; MariaDB is used only for synchronization.
+Runtime model:
+- user work is stored in local SQLite;
+- MariaDB is used only for setup checks and background sync;
+- HTTP server binds to loopback only.
 
-
-
-
-
----
+## What the app does
+
+- configuration editor with price refresh from synced pricelists;
+- projects with variants and ordered configurations;
+- vendor BOM import and PN -> LOT resolution;
+- revision history with rollback;
+- rotating local backups.
 
-## Documentation
+## Run
 
-Full architecture documentation lives in **[bible/](bible/README.md)**:
-
-| File | Topic |
-|------|------|
-| [bible/01-overview.md](bible/01-overview.md) | Product, features, tech stack, repo structure |
-| [bible/02-architecture.md](bible/02-architecture.md) | Local-first, sync, pricing, versioning |
-| [bible/03-database.md](bible/03-database.md) | SQLite and MariaDB schemas, permissions, migrations |
-| [bible/04-api.md](bible/04-api.md) | All API endpoints and web routes |
-| [bible/05-config.md](bible/05-config.md) | Configuration, env vars, installation |
-| [bible/06-backup.md](bible/06-backup.md) | Backups |
-| [bible/07-dev.md](bible/07-dev.md) | Development commands, code style, guardrails |
-
----
-
-## Quick start
-
 ```bash
-# Apply migrations
-go run ./cmd/qfs -migrate
-
-# Start
 go run ./cmd/qfs
-# or
-make run
 ```
 
-The app runs at http://localhost:8080 → `/setup` opens to configure the MariaDB connection.
+Useful commands:
 
 ```bash
-# Build
+go run ./cmd/qfs -migrate
+go test ./...
+go vet ./...
 make build-release
-
-# Verify
-go build ./cmd/qfs && go vet ./...
 ```
 
----
+On first run the app creates a minimal `config.yaml`, starts on `http://127.0.0.1:8080`, and opens `/setup` if DB credentials were not saved yet.
 
-## Releases & Changelog
+## Documentation
 
-Changelog between versions: `releases/memory/v{major}.{minor}.{patch}.md`
+- Shared engineering rules: [bible/README.md](bible/README.md)
+- Project architecture: [bible-local/README.md](bible-local/README.md)
+- Release notes: `releases/<version>/RELEASE_NOTES.md`
 
----
+`bible-local/` is the source of truth for QuoteForge-specific architecture. If code changes behavior, update the matching file there in the same commit.
 
-## Support
+## Repository map
 
-- Email: mike@mchus.pro
-- Internal: @mchus
-
-## License
-
-Company property, for internal use only. See [LICENSE](LICENSE).
+```text
+cmd/                 entry points and migration tools
+internal/            application code
+web/                 templates and static assets
+bible/               shared engineering rules
+bible-local/         project architecture and contracts
+releases/            packaged release artifacts and release notes
+config.example.yaml  runtime config reference
+```
````
acc_lot_log_import.sql — new file (33 lines)

```sql
-- Generated from /Users/mchusavitin/Downloads/acc.csv
-- Unambiguous rows only. Rows from headers without a date were skipped.
INSERT INTO lot_log (`lot`, `supplier`, `date`, `price`, `quality`, `comments`) VALUES
('ACC_RMK_L_Type', '', '2024-04-01', 19, NULL, 'header supplier missing in source (45383)'),
('ACC_RMK_SLIDE', '', '2024-04-01', 31, NULL, 'header supplier missing in source (45383)'),
('NVLINK_2S_Bridge', '', '2023-01-01', 431, NULL, 'header supplier missing in source (44927)'),
('NVLINK_2S_Bridge', 'Jevy Yang', '2025-01-15', 139, NULL, NULL),
('NVLINK_2S_Bridge', 'Wendy', '2025-01-15', 143, NULL, NULL),
('NVLINK_2S_Bridge', 'HONCH (Darian)', '2025-05-06', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'HONCH (Sunny)', '2025-06-17', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'Wendy', '2025-07-02', 145, NULL, NULL),
('NVLINK_2S_Bridge', 'Honch (Sunny)', '2025-07-10', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'Honch (Yan)', '2025-08-07', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'Jevy', '2025-09-09', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'Honch (Darian)', '2025-11-17', 102, NULL, NULL),
('NVLINK_2W_Bridge(H200)', '', '2023-01-01', 405, NULL, 'header supplier missing in source (44927)'),
('NVLINK_2W_Bridge(H200)', 'network logic / Stephen', '2025-02-10', 305, NULL, NULL),
('NVLINK_2W_Bridge(H200)', 'JEVY', '2025-02-18', 411, NULL, NULL),
('NVLINK_4W_Bridge(H200)', '', '2023-01-01', 820, NULL, 'header supplier missing in source (44927)'),
('NVLINK_4W_Bridge(H200)', 'network logic / Stephen', '2025-02-10', 610, NULL, NULL),
('NVLINK_4W_Bridge(H200)', 'JEVY', '2025-02-18', 754, NULL, NULL),
('25G_SFP28_MMA2P00-AS', 'HONCH (Doris)', '2025-02-19', 65, NULL, NULL),
('ACC_SuperCap', '', '2024-04-01', 59, NULL, 'header supplier missing in source (45383)'),
('ACC_SuperCap', 'Chiphome', '2025-02-28', 48, NULL, NULL);

-- Skipped source values due to missing date in header:
-- lot=ACC_RMK_L_Type; header=FOB; price=19; reason=header has supplier but no date
-- lot=ACC_RMK_SLIDE; header=FOB; price=31; reason=header has supplier but no date
-- lot=NVLINK_2S_Bridge; header=FOB; price=155; reason=header has supplier but no date
-- lot=NVLINK_2W_Bridge(H200); header=FOB; price=405; reason=header has supplier but no date
-- lot=NVLINK_4W_Bridge(H200); header=FOB; price=754; reason=header has supplier but no date
-- lot=25G_SFP28_MMA2P00-AS; header=FOB; price=65; reason=header has supplier but no date
-- lot=ACC_SuperCap; header=FOB; price=48; reason=header has supplier but no date
```
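The generator's skip rule from the comments above can be sketched in Go. This is an illustration only: `row` and `keep` are hypothetical names, not code from the actual import tool, and the real parser handles header inheritance that is elided here.

```go
package main

import "fmt"

// row is a simplified view of one parsed price entry from acc.csv.
// Field names are illustrative; the real generator's structures are unknown.
type row struct {
	Lot, Supplier, Header, Date string
	Price                       float64
}

// keep reports whether a parsed row is unambiguous enough to emit as an
// INSERT value: a row whose header carried no date is skipped, matching
// the "-- Skipped source values" comments in acc_lot_log_import.sql.
func keep(r row) bool {
	return r.Date != ""
}

func main() {
	rows := []row{
		{Lot: "ACC_RMK_L_Type", Header: "FOB", Price: 19},                          // no date: skipped
		{Lot: "ACC_SuperCap", Supplier: "Chiphome", Date: "2025-02-28", Price: 48}, // kept
	}
	for _, r := range rows {
		fmt.Println(r.Lot, keep(r)) // ACC_RMK_L_Type false, ACC_SuperCap true
	}
}
```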
bible — new submodule, added at 52444350c1
bible-local/01-overview.md — new file (70 lines)

# 01 - Overview

## Product

QuoteForge is a local-first tool for server configuration, quotation, and project tracking.

Core user flows:
- create and edit configurations locally;
- calculate prices from synced pricelists;
- group configurations into projects and variants;
- import vendor workspaces and map vendor PNs to internal LOTs;
- review revision history and roll back safely.

## Runtime model

QuoteForge is a single-user thick client.

Rules:
- runtime HTTP binds to loopback only;
- browser requests are treated as part of the same local user session;
- MariaDB is not a live dependency for normal CRUD;
- if non-loopback deployment is ever introduced, auth/RBAC must be added first.
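The loopback-only rule can be enforced with a small startup guard. A minimal sketch, assuming a configurable listen address; `loopbackOnly` is an illustrative helper, not code from the repo:

```go
package main

import (
	"fmt"
	"net"
)

// loopbackOnly reports whether a listen address resolves to a loopback IP.
// A guard like this can refuse to start the HTTP server if configuration
// ever asks for a non-loopback bind before auth/RBAC exists.
func loopbackOnly(addr string) bool {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return false
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	fmt.Println(loopbackOnly("127.0.0.1:8080")) // true
	fmt.Println(loopbackOnly("0.0.0.0:8080"))   // false: all interfaces
}
```

Note that `":8080"` (no host) listens on all interfaces in Go, so the address must name `127.0.0.1` or `::1` explicitly.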
## Product scope

In scope:
- configurator and quote calculation;
- projects, variants, and configuration ordering;
- local revision history;
- read-only pricelist browsing from SQLite cache;
- background sync with MariaDB;
- rotating local backups.

Out of scope and intentionally removed:
- admin pricing UI/API;
- alerts and notification workflows;
- stock import tooling;
- cron jobs and importer utilities.

## Tech stack

| Layer | Stack |
| --- | --- |
| Backend | Go, Gin, GORM |
| Frontend | HTML templates, htmx, Tailwind CSS |
| Local storage | SQLite |
| Sync transport | MariaDB |
| Export | CSV and XLSX generation |

## Repository map

```text
cmd/
  qfs/                   main HTTP runtime
  migrate/               server migration tool
  migrate_ops_projects/  OPS project migration helper
internal/
  appstate/    backup and runtime state
  config/      runtime config parsing
  handlers/    HTTP handlers
  localdb/     SQLite models and migrations
  repository/  repositories
  services/    business logic and sync
web/
  templates/   HTML templates
  static/      static assets
bible/         shared engineering rules
bible-local/   project-specific architecture
releases/      release artifacts and notes
```
bible-local/02-architecture.md — new file (127 lines)

# 02 - Architecture

## Local-first rule

SQLite is the runtime source of truth.
MariaDB is sync transport plus setup and migration tooling.

```text
browser -> Gin handlers -> SQLite
                        -> pending_changes
background sync <------> MariaDB
```

Rules:
- user CRUD must continue when MariaDB is offline;
- runtime handlers and pages must read and write SQLite only;
- MariaDB access in runtime code is allowed only inside sync and setup flows;
- no live MariaDB fallback for reads that already exist in local cache.
## Sync contract

Bidirectional:
- projects;
- configurations;
- `vendor_spec`;
- pending change metadata.

Pull-only:
- components;
- pricelists and pricelist items;
- partnumber books and partnumber book items.

Readiness guard:
- every sync push/pull runs a preflight check;
- blocked sync returns `423 Locked` with a machine-readable reason;
- local work continues even when sync is blocked;
- sync metadata updates must preserve project `updated_at`; sync time belongs in `synced_at`, not in the user-facing last-modified timestamp;
- pricelist pull must persist a new local snapshot atomically: header and items appear together, and `last_pricelist_sync` advances only after item download succeeds;
- UI sync status must distinguish "last sync failed" from "up to date"; if the app can prove newer server pricelist data exists, the indicator must say the local cache is incomplete.
## Pricing contract

Prices come only from `local_pricelist_items`.

Rules:
- `local_components` is metadata-only;
- quote calculation must not read prices from components;
- latest pricelist selection ignores snapshots without items;
- auto pricelist mode stays auto and must not be persisted as an explicit resolved ID.
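The "ignore snapshots without items" rule might look like this in code. A sketch under assumptions: `pricelist` and `latestUsable` are illustrative names, and in practice the selection likely happens in a SQL query rather than in Go:

```go
package main

import "fmt"

// pricelist mirrors a local_pricelists header plus its item count
// (field names are illustrative).
type pricelist struct {
	ID        int64
	Version   string
	ItemCount int
}

// latestUsable picks the newest snapshot that actually has items, so an
// interrupted pull (header row without item rows) is never chosen as
// "latest". Assumes the slice is ordered oldest -> newest.
func latestUsable(lists []pricelist) (pricelist, bool) {
	for i := len(lists) - 1; i >= 0; i-- {
		if lists[i].ItemCount > 0 {
			return lists[i], true
		}
	}
	return pricelist{}, false
}

func main() {
	lists := []pricelist{
		{ID: 1, Version: "2026-03", ItemCount: 1200},
		{ID: 2, Version: "2026-04", ItemCount: 0}, // interrupted pull
	}
	got, _ := latestUsable(lists)
	fmt.Println(got.Version) // 2026-03
}
```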
## Pricing tab layout

The Pricing tab (Ценообразование) has two tables: Buy (Цена покупки) and Sale (Цена продажи).

Column order (both tables):

```text
PN вендора | Описание | LOT | Кол-во | Estimate | Склад | Конкуренты | Ручная цена
```

Per-LOT row expansion rules:
- each `lot_mappings` entry in a BOM row becomes its own table row with its own quantity and prices;
- `baseLot` (resolved LOT without an explicit mapping) is treated as the first sub-row with `quantity_per_pn` from `_getRowLotQtyPerPN`;
- when one vendor PN expands into N LOT sub-rows, the PN вендора and Описание cells use `rowspan="N"` and appear only on the first sub-row;
- a visual top border (`border-t border-gray-200`) separates each vendor PN group.

Vendor price attachment:
- `vendorOrig` and `vendorOrigUnit` (BOM unit/total price) are attached to the first LOT sub-row only;
- subsequent sub-rows carry an empty `data-vendor-orig` so `setPricingCustomPriceFromVendor` counts each vendor PN exactly once.

Controls terminology:
- the custom price input is labeled **Ручная цена** (not "Своя цена");
- the button that fills custom price from BOM totals is labeled **BOM Цена** (not "Проставить цены BOM").

CSV export reads PN вендора, Описание, and LOT from the `data-vendor-pn`, `data-desc`, and `data-lot` row attributes to bypass the rowspan cell-offset problem.
## Configuration versioning

Configuration revisions are append-only snapshots stored in `local_configuration_versions`.

Rules:
- the editable working configuration is always the implicit head named `main`; the UI must not switch the user to a numbered revision after save;
- create a new revision when spec, BOM, or pricing content changes;
- revision history is retrospective: the revisions page shows past snapshots, not the current `main` state;
- rollback creates a new head revision from an old snapshot;
- rename, reorder, project move, and similar operational edits do not create a new revision snapshot;
- revision deduplication covers `items`, `server_count`, `total_price`, `custom_price`, `vendor_spec`, pricelist selectors, `disable_price_refresh`, and `only_in_stock`;
- BOM updates must use the version-aware save flow, not a direct SQL field update;
- the current revision pointer must be recoverable if legacy or damaged rows are found locally.
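One way to implement the deduplication rule is a content hash over exactly the listed fields: two saves with an identical key produce no second snapshot. The struct and function names below are hypothetical, not the app's actual model types:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// revisionContent carries only the fields the dedup rule lists
// (illustrative names; the real persisted shape may differ).
type revisionContent struct {
	Items               json.RawMessage `json:"items"`
	ServerCount         int             `json:"server_count"`
	TotalPrice          string          `json:"total_price"`
	CustomPrice         string          `json:"custom_price"`
	VendorSpec          json.RawMessage `json:"vendor_spec"`
	PricelistID         int64           `json:"pricelist_id"`
	WarehousePricelist  int64           `json:"warehouse_pricelist_id"`
	CompetitorPricelist int64           `json:"competitor_pricelist_id"`
	DisablePriceRefresh bool            `json:"disable_price_refresh"`
	OnlyInStock         bool            `json:"only_in_stock"`
}

// revisionKey is deterministic because struct field order is fixed at
// compile time, so equal content always yields equal JSON and hash.
func revisionKey(c revisionContent) string {
	b, _ := json.Marshal(c)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

func main() {
	a := revisionContent{Items: json.RawMessage(`[]`), ServerCount: 1}
	b := revisionContent{Items: json.RawMessage(`[]`), ServerCount: 1}
	fmt.Println(revisionKey(a) == revisionKey(b)) // true: second save is deduplicated
}
```

Operational edits (rename, reorder, project move) stay out of the struct, so by construction they can never change the key.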
## Sync UX

UI-facing sync status must never block on live MariaDB calls.

Rules:
- the navbar sync indicator and sync info modal read only locally cached state from SQLite/app settings;
- background/manual sync may talk to MariaDB, but polling endpoints must stay fast even on slow or broken connections;
- any MariaDB timeout or invalid-connection error during sync must invalidate the cached remote handle immediately, so the UI stops treating the connection as healthy.
## Naming collisions

UI-driven rename and copy flows use one suffix convention for conflicts.

Rules:
- configuration and variant names must auto-resolve collisions with `_копия`, then `_копия2`, `_копия3`, and so on;
- copy checkboxes and copy modals must prefill `_копия`, not ` (копия)`;
- the literal variant name `main` is reserved and must not be allowed for non-main variants.
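The suffix convention fits in one helper. `resolveCopyName` is an illustrative name, not the app's actual function:

```go
package main

import "fmt"

// resolveCopyName applies the collision convention:
// base, base_копия, base_копия2, base_копия3, ...
func resolveCopyName(base string, taken map[string]bool) string {
	if !taken[base] {
		return base
	}
	if name := base + "_копия"; !taken[name] {
		return name
	}
	for n := 2; ; n++ {
		if name := fmt.Sprintf("%s_копия%d", base, n); !taken[name] {
			return name
		}
	}
}

func main() {
	taken := map[string]bool{"R760": true, "R760_копия": true}
	fmt.Println(resolveCopyName("R760", taken)) // R760_копия2
}
```

Note the first suffix carries no number, so the counter starts at 2, matching the `_копия`, `_копия2`, `_копия3` sequence above.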
## Configuration types

Configurations have a `config_type` field: `"server"` (default) or `"storage"`.

Rules:
- `config_type` defaults to `"server"` for all existing and new configurations unless explicitly set;
- the configurator page is shared for both types; the SW tab is always visible regardless of type;
- storage configurations use the same vendor_spec + PN→LOT + pricing flow as server configurations;
- storage component categories map to existing tabs: `ENC`/`DKC`/`CTL` → Base; `HIC` → PCI (storage HIC cards only; `HBA`/`NIC` are server-side, do not mix them in); `SSD`/`HDD` → Storage (reuse the existing server LOTs); `ACC` → Accessories (reuse the existing server LOTs); `SW` → SW;
- `DKC` = controller shelf (storage model + disk type + slot count + controller count); `CTL` = controller (cache + built-in ports); `ENC` = disk shelf without a controller.
## Vendor BOM contract

Vendor BOM is stored in `vendor_spec` on the configuration row.

Rules:
- PN to LOT resolution uses the active local partnumber book;
- the canonical persisted mapping is `lot_mappings[]`;
- QuoteForge does not use legacy BOM tables such as `qt_bom`, `qt_lot_bundles`, or `qt_lot_bundle_items`.
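PN → LOT resolution against a book shaped like `qt_partnumber_book_items` (one `partnumber` mapping to a `lots_json` array) can be sketched as follows. The PN shown and the helper name are hypothetical; the real code resolves against the active local book in SQLite:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// resolveLots looks a vendor partnumber up in the active book and decodes
// its lots_json payload into LOT names. A PN may map to several LOTs,
// which is why lot_mappings[] is an array per BOM row.
func resolveLots(pn string, book map[string]string) ([]string, bool) {
	raw, ok := book[pn]
	if !ok {
		return nil, false
	}
	var lots []string
	if err := json.Unmarshal([]byte(raw), &lots); err != nil {
		return nil, false
	}
	return lots, true
}

func main() {
	// Hypothetical book content keyed by partnumber.
	book := map[string]string{
		"540-BCOC": `["NIC_25G_SFP28_2P"]`,
	}
	lots, ok := resolveLots("540-BCOC", book)
	fmt.Println(ok, lots[0]) // true NIC_25G_SFP28_2P
}
```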
bible-local/03-database.md — new file (405 lines)

# 03 - Database

## SQLite

SQLite is the local runtime database.

Main tables:

| Table | Purpose |
| --- | --- |
| `local_components` | synced component metadata |
| `local_pricelists` | local pricelist headers |
| `local_pricelist_items` | local pricelist rows, the only runtime price source |
| `local_projects` | user projects |
| `local_configurations` | user configurations |
| `local_configuration_versions` | immutable revision snapshots |
| `local_partnumber_books` | partnumber book headers |
| `local_partnumber_book_items` | PN -> LOT catalog payload |
| `pending_changes` | sync queue |
| `connection_settings` | encrypted MariaDB connection settings |
| `app_settings` | local app state |
| `local_schema_migrations` | applied local migration markers |

Rules:
- cache tables may be rebuilt if local migration recovery requires it;
- user-authored tables must not be dropped as a recovery shortcut;
- `local_pricelist_items` is the only valid runtime source of prices;
- configuration `items` and `vendor_spec` are stored as JSON payloads inside configuration rows.
## MariaDB

MariaDB is the central sync database (`RFQ_LOG`). Final schema as of 2026-04-15.

### QuoteForge tables (qt_*)

Runtime read:
- `qt_categories` — pricelist categories
- `qt_lot_metadata` — component metadata, price settings
- `qt_pricelists` — pricelist headers (source: estimate / warehouse / competitor)
- `qt_pricelist_items` — pricelist rows
- `qt_partnumber_books` — partnumber book headers
- `qt_partnumber_book_items` — PN → LOT catalog payload

Runtime read/write:
- `qt_projects` — projects
- `qt_configurations` — configurations
- `qt_client_schema_state` — per-client sync status and version tracking
- `qt_pricelist_sync_status` — pricelist sync timestamps per user

Insert-only tracking:
- `qt_vendor_partnumber_seen` — vendor partnumbers encountered during sync

Server-side only (not queried by client runtime):
- `qt_component_usage_stats` — aggregated component popularity stats (written by server jobs)
- `qt_pricing_alerts` — price anomaly alerts (models exist in Go; feature disabled in runtime)
- `qt_schema_migrations` — server migration history (applied via `go run ./cmd/qfs -migrate`)
- `qt_scheduler_runs` — server background job tracking (no Go code references it in this repo)

### Competitor subsystem (server-side only, not used by QuoteForge Go code)

- `qt_competitors` — competitor registry
- `partnumber_log_competitors` — competitor price log (FK → qt_competitors)

These tables exist in the schema and are maintained by another tool or workflow.
QuoteForge references competitor pricelists only via `qt_pricelists` (source='competitor').

### Legacy RFQ tables (pre-QuoteForge, no Go code references)

- `lot` — original component registry (data preserved; superseded by `qt_lot_metadata`)
- `lot_log` — original supplier price log
- `supplier` — supplier registry (FK target for lot_log and machine_log)
- `machine` — device model registry
- `machine_log` — device price/quote log
- `parts_log` — supplier partnumber log used by server-side import/pricing workflows, not by QuoteForge runtime

These tables are retained for historical data. QuoteForge does not read or write them at runtime.

Rules:
- QuoteForge runtime must not depend on any legacy RFQ tables;
- QuoteForge sync reads prices and categories from `qt_pricelists` / `qt_pricelist_items` only;
- QuoteForge does not enrich local pricelist rows from `parts_log` or any other raw supplier log table;
- normal UI requests must not query MariaDB tables directly;
- `qt_client_local_migrations` exists in the 2026-04-15 schema dump, but runtime sync does not depend on it.

## MariaDB Table Structures

Full column reference as of 2026-04-15 (`RFQ_LOG` final schema).

### qt_categories

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| code | varchar(20) UNIQUE NOT NULL | |
| name | varchar(100) NOT NULL | |
| name_ru | varchar(100) | |
| display_order | bigint DEFAULT 0 | |
| is_required | tinyint(1) DEFAULT 0 | |

### qt_client_schema_state

PK: (username, hostname)

| Column | Type | Notes |
|--------|------|-------|
| username | varchar(100) | |
| hostname | varchar(255) DEFAULT '' | |
| last_applied_migration_id | varchar(128) | |
| app_version | varchar(64) | |
| last_sync_at | datetime | |
| last_sync_status | varchar(32) | |
| pending_changes_count | int DEFAULT 0 | |
| pending_errors_count | int DEFAULT 0 | |
| configurations_count | int DEFAULT 0 | |
| projects_count | int DEFAULT 0 | |
| estimate_pricelist_version | varchar(128) | |
| warehouse_pricelist_version | varchar(128) | |
| competitor_pricelist_version | varchar(128) | |
| last_sync_error_code | varchar(128) | |
| last_sync_error_text | text | |
| last_checked_at | datetime NOT NULL | |
| updated_at | datetime NOT NULL | |

### qt_component_usage_stats

PK: lot_name

| Column | Type | Notes |
|--------|------|-------|
| lot_name | varchar(255) | |
| quotes_total | bigint DEFAULT 0 | |
| quotes_last30d | bigint DEFAULT 0 | |
| quotes_last7d | bigint DEFAULT 0 | |
| total_quantity | bigint DEFAULT 0 | |
| total_revenue | decimal(14,2) DEFAULT 0 | |
| trend_direction | enum('up','stable','down') DEFAULT 'stable' | |
| trend_percent | decimal(5,2) DEFAULT 0 | |
| last_used_at | datetime(3) | |

### qt_competitors

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| name | varchar(255) NOT NULL | |
| code | varchar(100) UNIQUE NOT NULL | |
| delivery_basis | varchar(50) DEFAULT 'DDP' | |
| currency | varchar(10) DEFAULT 'USD' | |
| column_mapping | longtext JSON | |
| is_active | tinyint(1) DEFAULT 1 | |
| created_at | timestamp | |
| updated_at | timestamp ON UPDATE | |
| price_uplift | decimal(8,4) DEFAULT 1.3 | effective_price = price / price_uplift |

### qt_configurations

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| uuid | varchar(36) UNIQUE NOT NULL | |
| user_id | bigint UNSIGNED | |
| owner_username | varchar(100) NOT NULL | |
| app_version | varchar(64) | |
| project_uuid | char(36) | FK → qt_projects.uuid ON DELETE SET NULL |
| name | varchar(200) NOT NULL | |
| items | longtext JSON NOT NULL | component list |
| total_price | decimal(12,2) | |
| notes | text | |
| is_template | tinyint(1) DEFAULT 0 | |
| created_at | datetime(3) | |
| custom_price | decimal(12,2) | |
| server_count | bigint DEFAULT 1 | |
| server_model | varchar(100) | |
| support_code | varchar(20) | |
| article | varchar(80) | |
| pricelist_id | bigint UNSIGNED | FK → qt_pricelists.id |
| warehouse_pricelist_id | bigint UNSIGNED | FK → qt_pricelists.id |
| competitor_pricelist_id | bigint UNSIGNED | FK → qt_pricelists.id |
| disable_price_refresh | tinyint(1) DEFAULT 0 | |
| only_in_stock | tinyint(1) DEFAULT 0 | |
| line_no | int | position within project |
| price_updated_at | timestamp | |
| vendor_spec | longtext JSON | |

### qt_lot_metadata

PK: lot_name

| Column | Type | Notes |
|--------|------|-------|
| lot_name | varchar(255) | |
| category_id | bigint UNSIGNED | FK → qt_categories.id |
| vendor | varchar(50) | |
| model | varchar(100) | |
| specs | longtext JSON | |
| current_price | decimal(12,2) | cached computed price |
| price_method | enum('manual','median','average','weighted_median') DEFAULT 'median' | |
| price_period_days | bigint DEFAULT 90 | |
| price_updated_at | datetime(3) | |
| request_count | bigint DEFAULT 0 | |
| last_request_date | date | |
| popularity_score | decimal(10,4) DEFAULT 0 | |
| price_coefficient | decimal(5,2) DEFAULT 0 | markup % |
| manual_price | decimal(12,2) | |
| meta_prices | varchar(1000) | raw price samples JSON |
| meta_method | varchar(20) | method used for last compute |
| meta_period_days | bigint DEFAULT 90 | |
| is_hidden | tinyint(1) DEFAULT 0 | |

### qt_partnumber_books

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| version | varchar(30) UNIQUE NOT NULL | |
| created_at | timestamp | |
| created_by | varchar(100) | |
| is_active | tinyint(1) DEFAULT 0 | only one active at a time |
| partnumbers_json | longtext DEFAULT '[]' | flat list of partnumbers |

### qt_partnumber_book_items

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| partnumber | varchar(255) UNIQUE NOT NULL | |
| lots_json | longtext NOT NULL | JSON array of lot_names |
| description | varchar(10000) | |

### qt_pricelists

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| source | varchar(20) DEFAULT 'estimate' | 'estimate' / 'warehouse' / 'competitor' |
| version | varchar(20) NOT NULL | UNIQUE with source |
| created_at | datetime(3) | |
| created_by | varchar(100) | |
| is_active | tinyint(1) DEFAULT 1 | |
| usage_count | bigint DEFAULT 0 | |
| expires_at | datetime(3) | |
| notification | varchar(500) | shown to clients on sync |

### qt_pricelist_items

| Column | Type | Notes |
|--------|------|-------|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
| pricelist_id | bigint UNSIGNED NOT NULL | FK → qt_pricelists.id |
| lot_name | varchar(255) NOT NULL | INDEX with pricelist_id |
| lot_category | varchar(50) | |
| price | decimal(12,2) NOT NULL | |
| price_method | varchar(20) | |
| price_period_days | bigint DEFAULT 90 | |
| price_coefficient | decimal(5,2) DEFAULT 0 | |
| manual_price | decimal(12,2) | |
| meta_prices | varchar(1000) | |

### qt_pricelist_sync_status

PK: username

| Column | Type | Notes |
|--------|------|-------|
| username | varchar(100) | |
|
||||||
|
| last_sync_at | datetime NOT NULL | |
|
||||||
|
| updated_at | datetime NOT NULL | |
|
||||||
|
| app_version | varchar(64) | |
|
||||||
|
|
||||||
|
### qt_pricing_alerts
|
||||||
|
| Column | Type | Notes |
|
||||||
|
|--------|------|-------|
|
||||||
|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
|
||||||
|
| lot_name | varchar(255) NOT NULL | |
|
||||||
|
| alert_type | enum('high_demand_stale_price','price_spike','price_drop','no_recent_quotes','trending_no_price') | |
|
||||||
|
| severity | enum('low','medium','high','critical') DEFAULT 'medium' | |
|
||||||
|
| message | text NOT NULL | |
|
||||||
|
| details | longtext JSON | |
|
||||||
|
| status | enum('new','acknowledged','resolved','ignored') DEFAULT 'new' | |
|
||||||
|
| created_at | datetime(3) | |
|
||||||
|
|
||||||
|
### qt_projects
|
||||||
|
| Column | Type | Notes |
|
||||||
|
|--------|------|-------|
|
||||||
|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
|
||||||
|
| uuid | char(36) UNIQUE NOT NULL | |
|
||||||
|
| owner_username | varchar(100) NOT NULL | |
|
||||||
|
| code | varchar(100) NOT NULL | UNIQUE with variant |
|
||||||
|
| variant | varchar(100) DEFAULT '' | UNIQUE with code |
|
||||||
|
| name | varchar(200) | |
|
||||||
|
| tracker_url | varchar(500) | |
|
||||||
|
| is_active | tinyint(1) DEFAULT 1 | |
|
||||||
|
| is_system | tinyint(1) DEFAULT 0 | |
|
||||||
|
| created_at | timestamp | |
|
||||||
|
| updated_at | timestamp ON UPDATE | |
|
||||||
|
|
||||||
|
### qt_schema_migrations
|
||||||
|
| Column | Type | Notes |
|
||||||
|
|--------|------|-------|
|
||||||
|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
|
||||||
|
| filename | varchar(255) UNIQUE NOT NULL | |
|
||||||
|
| applied_at | datetime(3) | |
|
||||||
|
|
||||||
|
### qt_scheduler_runs
|
||||||
|
PK: job_name
|
||||||
|
| Column | Type | Notes |
|
||||||
|
|--------|------|-------|
|
||||||
|
| job_name | varchar(100) | |
|
||||||
|
| last_started_at | datetime | |
|
||||||
|
| last_finished_at | datetime | |
|
||||||
|
| last_status | varchar(20) DEFAULT 'idle' | |
|
||||||
|
| last_error | text | |
|
||||||
|
| updated_at | timestamp ON UPDATE | |
|
||||||
|
|
||||||
|
### qt_vendor_partnumber_seen
|
||||||
|
| Column | Type | Notes |
|
||||||
|
|--------|------|-------|
|
||||||
|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
|
||||||
|
| source_type | varchar(32) NOT NULL | |
|
||||||
|
| vendor | varchar(255) DEFAULT '' | |
|
||||||
|
| partnumber | varchar(255) UNIQUE NOT NULL | |
|
||||||
|
| description | varchar(10000) | |
|
||||||
|
| last_seen_at | datetime(3) NOT NULL | |
|
||||||
|
| is_ignored | tinyint(1) DEFAULT 0 | |
|
||||||
|
| is_pattern | tinyint(1) DEFAULT 0 | |
|
||||||
|
| ignored_at | datetime(3) | |
|
||||||
|
| ignored_by | varchar(100) | |
|
||||||
|
| created_at | datetime(3) | |
|
||||||
|
| updated_at | datetime(3) | |
|
||||||
|
|
||||||
|
### stock_ignore_rules
|
||||||
|
| Column | Type | Notes |
|
||||||
|
|--------|------|-------|
|
||||||
|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
|
||||||
|
| target | varchar(20) NOT NULL | UNIQUE with match_type+pattern |
|
||||||
|
| match_type | varchar(20) NOT NULL | |
|
||||||
|
| pattern | varchar(500) NOT NULL | |
|
||||||
|
| created_at | timestamp | |
|
||||||
|
|
||||||
|
### stock_log
|
||||||
|
| Column | Type | Notes |
|
||||||
|
|--------|------|-------|
|
||||||
|
| stock_log_id | bigint UNSIGNED PK AUTO_INCREMENT | |
|
||||||
|
| partnumber | varchar(255) NOT NULL | INDEX with date |
|
||||||
|
| supplier | varchar(255) | |
|
||||||
|
| date | date NOT NULL | |
|
||||||
|
| price | decimal(12,2) NOT NULL | |
|
||||||
|
| quality | varchar(255) | |
|
||||||
|
| comments | text | |
|
||||||
|
| vendor | varchar(255) | INDEX |
|
||||||
|
| qty | decimal(14,3) | |
|
||||||
|
|
||||||
|
### partnumber_log_competitors
|
||||||
|
| Column | Type | Notes |
|
||||||
|
|--------|------|-------|
|
||||||
|
| id | bigint UNSIGNED PK AUTO_INCREMENT | |
|
||||||
|
| competitor_id | bigint UNSIGNED NOT NULL | FK → qt_competitors.id |
|
||||||
|
| partnumber | varchar(255) NOT NULL | |
|
||||||
|
| description | varchar(500) | |
|
||||||
|
| vendor | varchar(255) | |
|
||||||
|
| price | decimal(12,2) NOT NULL | |
|
||||||
|
| price_loccur | decimal(12,2) | local currency price |
|
||||||
|
| currency | varchar(10) | |
|
||||||
|
| qty | decimal(12,4) DEFAULT 1 | |
|
||||||
|
| date | date NOT NULL | |
|
||||||
|
| created_at | timestamp | |
|
||||||
|
|
||||||
|
### Legacy tables (lot / lot_log / machine / machine_log / supplier)
|
||||||
|
|
||||||
|
Retained for historical data only. Not queried by QuoteForge.
|
||||||
|
|
||||||
|
**lot**: lot_name (PK, char 255), lot_category, lot_description
|
||||||
|
**lot_log**: lot_log_id AUTO_INCREMENT, lot (FK→lot), supplier (FK→supplier), date, price double, quality, comments
|
||||||
|
**supplier**: supplier_name (PK, char 255), supplier_comment
|
||||||
|
**machine**: machine_name (PK, char 255), machine_description
|
||||||
|
**machine_log**: machine_log_id AUTO_INCREMENT, date, supplier (FK→supplier), country, opty, type, machine (FK→machine), customer_requirement, variant, price_gpl, price_estimate, qty, quality, carepack, lead_time_weeks, prepayment_percent, price_got, Comment
|
||||||
|
|
||||||
|
## MariaDB User Permissions

The application user needs read-only access to reference tables and read/write access to runtime tables.

```sql
-- Read-only: reference and pricing data
GRANT SELECT ON RFQ_LOG.qt_categories TO 'qfs_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO 'qfs_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO 'qfs_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO 'qfs_user'@'%';
GRANT SELECT ON RFQ_LOG.stock_log TO 'qfs_user'@'%';
GRANT SELECT ON RFQ_LOG.stock_ignore_rules TO 'qfs_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_books TO 'qfs_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_book_items TO 'qfs_user'@'%';
GRANT SELECT ON RFQ_LOG.lot TO 'qfs_user'@'%';

-- Read/write: runtime sync and user data
GRANT SELECT, INSERT, UPDATE, DELETE ON RFQ_LOG.qt_projects TO 'qfs_user'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON RFQ_LOG.qt_configurations TO 'qfs_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO 'qfs_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO 'qfs_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_vendor_partnumber_seen TO 'qfs_user'@'%';

FLUSH PRIVILEGES;
```

Rules:
- `qt_client_schema_state` requires INSERT + UPDATE for sync status tracking (uses `ON DUPLICATE KEY UPDATE`);
- `qt_vendor_partnumber_seen` requires INSERT + UPDATE (vendor PN discovery during sync);
- no DELETE is needed on sync/tracking tables — rows are never removed by the client;
- `lot` SELECT is required for the connection validation probe in `/setup`;
- the setup page shows `can_write: true` only when `qt_client_schema_state` INSERT succeeds.
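
The upsert shape that motivates the INSERT + UPDATE grant can be sketched as follows. Only the table name and the `ON DUPLICATE KEY UPDATE` clause come from the rules above; the column names (`username`, `migration`, `applied_at`) are illustrative, not the real schema.

```go
package main

import "fmt"

// schemaStateUpsertSQL returns the kind of statement the client issues
// against qt_client_schema_state. An upsert of this shape needs only the
// INSERT and UPDATE privileges granted above (never DELETE).
// The column names here are hypothetical placeholders.
func schemaStateUpsertSQL() string {
	return `INSERT INTO qt_client_schema_state (username, migration, applied_at)
VALUES (?, ?, NOW())
ON DUPLICATE KEY UPDATE applied_at = NOW()`
}

func main() {
	fmt.Println(schemaStateUpsertSQL())
}
```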

## Migrations

SQLite:
- schema creation and additive changes go through GORM `AutoMigrate`;
- data fixes, index repair, and one-off rewrites go through `runLocalMigrations`;
- local migration state is tracked in `local_schema_migrations`.

MariaDB:
- SQL files live in `migrations/`;
- they are applied by `go run ./cmd/qfs -migrate`.
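
The tracking idea behind `local_schema_migrations` can be sketched as a set difference: migrations whose filenames are not yet recorded as applied are the ones to run. The function and variable names below are illustrative, not the real `runLocalMigrations` code.

```go
package main

import (
	"fmt"
	"sort"
)

// pendingMigrations returns the migration files not yet recorded as
// applied, in deterministic filename order. "applied" models the rows
// of a tracking table such as local_schema_migrations.
func pendingMigrations(available []string, applied map[string]bool) []string {
	var pending []string
	for _, f := range available {
		if !applied[f] {
			pending = append(pending, f)
		}
	}
	sort.Strings(pending)
	return pending
}

func main() {
	applied := map[string]bool{"001_init.sql": true}
	fmt.Println(pendingMigrations([]string{"002_fix_index.sql", "001_init.sql"}, applied))
}
```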

125
bible-local/04-api.md
Normal file
@@ -0,0 +1,125 @@
# 04 - API

## Public web routes

| Route | Purpose |
| --- | --- |
| `/` | configurator |
| `/configs` | configuration list |
| `/configs/:uuid/revisions` | revision history page |
| `/projects` | project list |
| `/projects/:uuid` | project detail |
| `/pricelists` | pricelist list |
| `/pricelists/:id` | pricelist detail |
| `/partnumber-books` | partnumber book page |
| `/setup` | DB setup page |

## Setup and health

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/health` | process health |
| `GET` | `/setup` | setup page |
| `POST` | `/setup` | save tested DB settings |
| `POST` | `/setup/test` | test DB connection |
| `GET` | `/setup/status` | setup status |
| `GET` | `/api/db-status` | current DB/sync status |
| `GET` | `/api/current-user` | local user identity |
| `GET` | `/api/ping` | lightweight API ping |

`POST /api/restart` exists only in `debug` mode.

## Reference data

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/api/components` | list component metadata |
| `GET` | `/api/components/:lot_name` | one component |
| `GET` | `/api/categories` | list categories |
| `GET` | `/api/pricelists` | list local pricelists |
| `GET` | `/api/pricelists/latest` | latest pricelist by source |
| `GET` | `/api/pricelists/:id` | pricelist header |
| `GET` | `/api/pricelists/:id/items` | pricelist rows |
| `GET` | `/api/pricelists/:id/lots` | lot names in a pricelist |
| `GET` | `/api/partnumber-books` | local partnumber books |
| `GET` | `/api/partnumber-books/:id` | book items by `server_id` |

## Quote and export

| Method | Path | Purpose |
| --- | --- | --- |
| `POST` | `/api/quote/validate` | validate config items |
| `POST` | `/api/quote/calculate` | calculate quote totals |
| `POST` | `/api/quote/price-levels` | resolve estimate/warehouse/competitor prices |
| `POST` | `/api/export/csv` | export a single configuration |
| `GET` | `/api/configs/:uuid/export` | export a stored configuration |
| `GET` | `/api/projects/:uuid/export` | legacy project BOM export |
| `POST` | `/api/projects/:uuid/export` | pricing-tab project export |

## Configurations

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/api/configs` | list configurations |
| `POST` | `/api/configs/import` | import configurations from server |
| `POST` | `/api/configs` | create configuration |
| `POST` | `/api/configs/preview-article` | preview article |
| `GET` | `/api/configs/:uuid` | get configuration |
| `PUT` | `/api/configs/:uuid` | update configuration |
| `DELETE` | `/api/configs/:uuid` | archive configuration |
| `POST` | `/api/configs/:uuid/reactivate` | reactivate configuration |
| `PATCH` | `/api/configs/:uuid/rename` | rename configuration |
| `POST` | `/api/configs/:uuid/clone` | clone configuration |
| `POST` | `/api/configs/:uuid/refresh-prices` | refresh prices |
| `PATCH` | `/api/configs/:uuid/project` | move configuration to project |
| `GET` | `/api/configs/:uuid/versions` | list revisions |
| `GET` | `/api/configs/:uuid/versions/:version` | get one revision |
| `POST` | `/api/configs/:uuid/rollback` | rollback by creating a new head revision |
| `PATCH` | `/api/configs/:uuid/server-count` | update server count |
| `GET` | `/api/configs/:uuid/vendor-spec` | read vendor BOM |
| `PUT` | `/api/configs/:uuid/vendor-spec` | replace vendor BOM |
| `POST` | `/api/configs/:uuid/vendor-spec/resolve` | resolve PN -> LOT |
| `POST` | `/api/configs/:uuid/vendor-spec/apply` | apply BOM to cart |

## Projects

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/api/projects` | paginated project list |
| `GET` | `/api/projects/all` | lightweight list for dropdowns |
| `POST` | `/api/projects` | create project |
| `GET` | `/api/projects/:uuid` | get project |
| `PUT` | `/api/projects/:uuid` | update project |
| `POST` | `/api/projects/:uuid/archive` | archive project |
| `POST` | `/api/projects/:uuid/reactivate` | reactivate project |
| `DELETE` | `/api/projects/:uuid` | delete project variant only |
| `GET` | `/api/projects/:uuid/configs` | list project configurations |
| `PATCH` | `/api/projects/:uuid/configs/reorder` | persist line order |
| `POST` | `/api/projects/:uuid/configs` | create configuration inside project |
| `POST` | `/api/projects/:uuid/configs/:config_uuid/clone` | clone config into project |
| `POST` | `/api/projects/:uuid/vendor-import` | import CFXML workspace into project |

Vendor import contract:
- multipart field name is `file`;
- file limit is `1 GiB`;
- oversized payloads are rejected before XML parsing.

## Sync

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/api/sync/status` | sync status |
| `GET` | `/api/sync/readiness` | sync readiness |
| `GET` | `/api/sync/info` | sync modal data |
| `GET` | `/api/sync/users-status` | remote user status |
| `GET` | `/api/sync/pending/count` | pending queue count |
| `GET` | `/api/sync/pending` | pending queue rows |
| `POST` | `/api/sync/components` | pull components |
| `POST` | `/api/sync/pricelists` | pull pricelists |
| `POST` | `/api/sync/partnumber-books` | pull partnumber books |
| `POST` | `/api/sync/partnumber-seen` | report unresolved vendor PN |
| `POST` | `/api/sync/all` | push and pull full sync |
| `POST` | `/api/sync/push` | push pending changes |
| `POST` | `/api/sync/repair` | repair broken pending rows |

When readiness is blocked, sync write endpoints return `423 Locked`.

74
bible-local/05-config.md
Normal file
@@ -0,0 +1,74 @@
# 05 - Config

## Runtime files

| Artifact | Default location |
| --- | --- |
| `qfs.db` | OS-specific user state directory |
| `config.yaml` | same state directory as `qfs.db` |
| `local_encryption.key` | same state directory as `qfs.db` |
| `backups/` | next to `qfs.db` unless overridden |

The runtime state directory can be overridden with `QFS_STATE_DIR`.
Direct paths can be overridden with `QFS_DB_PATH` and `QFS_CONFIG_PATH`.

## Runtime config shape

Runtime keeps `config.yaml` intentionally small:

```yaml
server:
  host: "127.0.0.1"
  port: 8080
  mode: "release"
  read_timeout: 30s
  write_timeout: 30s

backup:
  time: "00:00"

logging:
  level: "info"
  format: "json"
  output: "stdout"
```

Rules:
- QuoteForge creates this file automatically if it does not exist;
- startup rewrites legacy config files into this minimal runtime shape;
- startup normalizes any `server.host` value to `127.0.0.1` before saving the runtime config;
- `server.host` must stay on loopback.

Saved MariaDB credentials do not live in `config.yaml`.
They are stored in SQLite and encrypted with `local_encryption.key` unless `QUOTEFORGE_ENCRYPTION_KEY` overrides the key material.

## Environment variables

| Variable | Purpose |
| --- | --- |
| `QFS_STATE_DIR` | override runtime state directory |
| `QFS_DB_PATH` | explicit SQLite path |
| `QFS_CONFIG_PATH` | explicit config path |
| `QFS_BACKUP_DIR` | explicit backup root |
| `QFS_BACKUP_DISABLE` | disable rotating backups |
| `QUOTEFORGE_ENCRYPTION_KEY` | override encryption key |
| `QF_SERVER_PORT` | override HTTP port |

`QFS_BACKUP_DISABLE` accepts `1`, `true`, or `yes`.

## CLI flags

| Flag | Purpose |
| --- | --- |
| `-config <path>` | config file path |
| `-localdb <path>` | SQLite path |
| `-reset-localdb` | destructive local DB reset |
| `-migrate` | apply server migrations and exit |
| `-version` | print app version and exit |

## First run

1. runtime ensures `config.yaml` exists;
2. runtime opens the local SQLite database;
3. if no stored MariaDB credentials exist, `/setup` is served;
4. after setup, runtime works locally and sync uses saved DB settings in the background.

55
bible-local/06-backup.md
Normal file
@@ -0,0 +1,55 @@
# 06 - Backup

## Scope

QuoteForge creates rotating local ZIP backups of:
- a consistent SQLite snapshot saved as `qfs.db`;
- `config.yaml` when present.

The backup intentionally does not include `local_encryption.key`.

## Location and naming

Default root:
- `<db dir>/backups`

Subdirectories:
- `daily/`
- `weekly/`
- `monthly/`
- `yearly/`

Archive name:
- `qfs-backp-YYYY-MM-DD.zip`

## Retention

| Period | Keep |
| --- | --- |
| Daily | 7 |
| Weekly | 4 |
| Monthly | 12 |
| Yearly | 10 |

## Behavior

- on startup, QuoteForge creates a backup if the current period has none yet;
- a daily scheduler creates the next backup at `backup.time`;
- duplicate snapshots inside the same period are prevented by a period marker file;
- old archives are pruned automatically.

## Safety rules

- backup root must be outside the git worktree;
- backup creation is blocked if the resolved backup root sits inside the repository;
- SQLite snapshot must be created from a consistent database copy, not by copying live WAL files directly;
- restore to another machine requires re-entering DB credentials unless the encryption key is migrated separately.

## Restore

1. stop QuoteForge;
2. unpack the chosen archive outside the repository;
3. replace `qfs.db`;
4. replace `config.yaml` if needed;
5. restart the app;
6. re-enter MariaDB credentials if the original encryption key is unavailable.

35
bible-local/07-dev.md
Normal file
@@ -0,0 +1,35 @@
# 07 - Development

## Common commands

```bash
go run ./cmd/qfs
go run ./cmd/qfs -migrate
go run ./cmd/migrate_project_updated_at
go test ./...
go vet ./...
make build-release
make install-hooks
```

## Guardrails

- run `gofmt` before commit;
- use `slog` for server logging;
- keep runtime business logic SQLite-only;
- limit MariaDB access to sync, setup, and migration tooling;
- keep `config.yaml` out of git and use `config.example.yaml` only as a template;
- update `bible-local/` in the same commit as architecture changes.

## Removed features that must not return

- admin pricing UI/API;
- alerts and notification workflows;
- stock import tooling;
- cron jobs;
- standalone importer utility.

## Release notes

Release history belongs under `releases/<version>/RELEASE_NOTES.md`.
Do not keep temporary change summaries in the repository root.

64
bible-local/09-vendor-spec.md
Normal file
@@ -0,0 +1,64 @@
# 09 - Vendor BOM

## Storage contract

Vendor BOM is stored in `local_configurations.vendor_spec` and synced with `qt_configurations.vendor_spec`.

Each row uses this canonical shape:

```json
{
  "sort_order": 10,
  "vendor_partnumber": "ABC-123",
  "quantity": 2,
  "description": "row description",
  "unit_price": 4500.0,
  "total_price": 9000.0,
  "lot_mappings": [
    { "lot_name": "LOT_A", "quantity_per_pn": 1 }
  ]
}
```

Rules:
- `lot_mappings[]` is the only persisted PN -> LOT mapping contract;
- QuoteForge does not use legacy BOM tables;
- apply flow rebuilds cart rows from `lot_mappings[]`.

## Partnumber books

Partnumber books are pull-only snapshots from PriceForge.

Local tables:
- `local_partnumber_books`
- `local_partnumber_book_items`

Server tables:
- `qt_partnumber_books`
- `qt_partnumber_book_items`

Resolution flow:
1. load the active local book;
2. find `vendor_partnumber`;
3. copy `lots_json` into `lot_mappings[]`;
4. keep unresolved rows editable in the UI.
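
The flow above can be sketched against an in-memory book, modeling the book as a map from partnumber to the lot names decoded from `lots_json`. Defaulting `quantity_per_pn` to 1 is an assumption; the real book items may carry per-lot quantities.

```go
package main

import "fmt"

// lotMapping is one entry destined for lot_mappings[].
type lotMapping struct {
	LotName       string
	QuantityPerPN float64
}

// resolve looks a vendor partnumber up in the active book and returns
// its lot mappings. A nil result means the row is unresolved and stays
// editable in the UI.
func resolve(book map[string][]string, vendorPN string) []lotMapping {
	lots, ok := book[vendorPN]
	if !ok {
		return nil
	}
	out := make([]lotMapping, 0, len(lots))
	for _, l := range lots {
		// quantity_per_pn defaults to 1 here; illustrative assumption
		out = append(out, lotMapping{LotName: l, QuantityPerPN: 1})
	}
	return out
}

func main() {
	book := map[string][]string{"ABC-123": {"LOT_A"}}
	fmt.Println(resolve(book, "ABC-123"), resolve(book, "ZZZ") == nil)
}
```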

## CFXML import

`POST /api/projects/:uuid/vendor-import` imports one vendor workspace into an existing project.

Rules:
- accepted file field is `file`;
- maximum file size is `1 GiB`;
- one `ProprietaryGroupIdentifier` becomes one QuoteForge configuration;
- software rows stay inside their hardware group and never become standalone configurations;
- primary group row is selected structurally, without vendor-specific SKU hardcoding;
- imported configuration order follows workspace order.

Imported configuration fields:
- `name` from primary row `ProductName`
- `server_count` from primary row `Quantity`
- `server_model` from primary row `ProductDescription`
- `article` or `support_code` from `ProprietaryProductIdentifier`

Imported BOM rows become `vendor_spec` rows and are resolved through the active local partnumber book when possible.

30
bible-local/README.md
Normal file
@@ -0,0 +1,30 @@
# QuoteForge Bible

Project-specific architecture and operational contracts.

## Files

| File | Scope |
| --- | --- |
| [01-overview.md](01-overview.md) | Product scope, runtime model, repository map |
| [02-architecture.md](02-architecture.md) | Local-first rules, sync, pricing, versioning |
| [03-database.md](03-database.md) | SQLite and MariaDB data model, permissions, migrations |
| [04-api.md](04-api.md) | HTTP routes and API contract |
| [05-config.md](05-config.md) | Runtime config, paths, env vars, startup behavior |
| [06-backup.md](06-backup.md) | Backup contract and restore workflow |
| [07-dev.md](07-dev.md) | Development commands and guardrails |
| [09-vendor-spec.md](09-vendor-spec.md) | Vendor BOM and CFXML import contract |

## Rules

- `bible-local/` is the source of truth for QuoteForge-specific behavior.
- Keep these files in English.
- Update the matching file in the same commit as any architectural change.
- Remove stale documentation instead of preserving history in place.

## Quick reference

- Local DB path: see [05-config.md](05-config.md)
- Runtime bind: loopback only
- Local backups: see [06-backup.md](06-backup.md)
- Release notes: `releases/<version>/RELEASE_NOTES.md`

@@ -1,119 +0,0 @@
# 01 — Product Overview

## What is QuoteForge

A corporate server configuration and quotation tool.
Operates in **strict local-first** mode: all user operations go through local SQLite; MariaDB is used only for synchronization.

---

## Features

### For Users

- Mobile-first interface — works comfortably on phones and tablets
- Server configurator — step-by-step component selection
- Automatic price calculation — based on pricelists from local cache
- CSV export — ready-to-use specifications for clients
- Configuration history — versioned snapshots with rollback support
- Full offline operation — continue working without network, sync later
- Guarded synchronization — sync is blocked by preflight check if local schema is not ready

### User Roles

| Role | Permissions |
|------|-------------|
| `viewer` | View, create quotes, export |
| `editor` | + save configurations |
| `pricing_admin` | + manage prices and alerts |
| `admin` | Full access, user management |

### Price Freshness Indicators

| Color | Status | Condition |
|-------|--------|-----------|
| Green | Fresh | < 30 days, ≥ 3 sources |
| Yellow | Normal | 30–60 days |
| Orange | Aging | 60–90 days |
| Red | Stale | > 90 days or no data |
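
The indicator table can be sketched as a classifier over price age and source count. The table only constrains the source count for Green, so treating a fresh-but-thin sample as Yellow is an assumption.

```go
package main

import "fmt"

// freshness maps price age (days) and source count to the indicator
// color from the table above. Boundary handling at exactly 30/60/90
// days follows the table's ranges; the sources < 3 fallback is assumed.
func freshness(ageDays, sources int) string {
	switch {
	case ageDays < 0 || sources == 0:
		return "red" // no data
	case ageDays < 30 && sources >= 3:
		return "green"
	case ageDays <= 60:
		return "yellow"
	case ageDays <= 90:
		return "orange"
	default:
		return "red"
	}
}

func main() {
	fmt.Println(freshness(10, 5), freshness(45, 3), freshness(75, 3), freshness(120, 3))
}
```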

---

## Tech Stack

| Layer | Stack |
|-------|-------|
| Backend | Go 1.22+, Gin, GORM |
| Frontend | HTML, Tailwind CSS, htmx |
| Local DB | SQLite (`qfs.db`) |
| Server DB | MariaDB 11+ (sync + server admin) |
| Export | encoding/csv, excelize (XLSX) |

---

## Product Scope

**In scope:**
- Component configurator and quotation calculation
- Projects and configurations
- Read-only pricelist viewing from local cache
- Sync (pull components/pricelists, push local changes)

**Out of scope (removed intentionally — do not restore):**
- Admin pricing UI/API
- Stock import
- Alerts
- Cron/importer utilities

---

## Repository Structure

```
quoteforge/
├── cmd/
│   ├── qfs/main.go              # HTTP server entry point
│   ├── migrate/                 # Migration tool
│   └── migrate_ops_projects/    # OPS project migrator
├── internal/
│   ├── appmeta/                 # App version metadata
│   ├── appstate/                # State management, backup
│   ├── article/                 # Article generation
│   ├── config/                  # Config parsing
│   ├── db/                      # DB initialization
│   ├── handlers/                # HTTP handlers
│   ├── localdb/                 # SQLite layer
│   ├── lotmatch/                # Lot matching logic
│   ├── middleware/              # Auth, CORS, etc.
│   ├── models/                  # GORM models
│   ├── repository/              # Repository layer
│   └── services/                # Business logic
├── web/
│   ├── templates/               # HTML templates + partials
│   └── static/                  # CSS, JS, assets
├── migrations/                  # SQL migration files (30+)
├── bible/                       # Architectural documentation (this section)
├── releases/memory/             # Per-version changelogs
├── config.example.yaml          # Config template (the only one in repo)
└── go.mod
```

---

## Integration with Existing DB

QuoteForge integrates with the existing `RFQ_LOG` database:

**Read-only:**
- `lot` — component catalog
- `qt_lot_metadata` — extended component data
- `qt_categories` — categories
- `qt_pricelists`, `qt_pricelist_items` — pricelists

**Read + Write:**
- `qt_configurations` — configurations
- `qt_projects` — projects

**Sync service tables:**
- `qt_client_local_migrations` — migration catalog (SELECT only)
- `qt_client_schema_state` — applied migrations state
- `qt_pricelist_sync_status` — pricelist sync status
@@ -1,205 +0,0 @@
# 02 — Architecture
|
|
||||||
|
|
||||||
## Local-First Principle
|
|
||||||
|
|
||||||
**SQLite** is the single source of truth for the user.
|
|
||||||
**MariaDB** is a sync server only — it never blocks local operations.
|
|
||||||
|
|
||||||
```
|
|
||||||
User
|
|
||||||
│
|
|
||||||
▼
|
|
||||||
SQLite (qfs.db) ← all CRUD operations go here
|
|
||||||
│
|
|
||||||
│ background sync (every 5 min)
|
|
||||||
▼
|
|
||||||
MariaDB (RFQ_LOG) ← pull/push only
|
|
||||||
```
|
|
||||||
|
|
||||||
**Rules:**
|
|
||||||
- All CRUD operations go through SQLite only
|
|
||||||
- If MariaDB is unavailable → local work continues without restrictions
|
|
||||||
- Changes are queued in `pending_changes` and pushed on next sync
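
The write path implied by these rules can be sketched in a few lines. This is a hypothetical in-memory stand-in (the `store` type and `saveConfiguration` name are not from the codebase): the save touches only local state and appends to a queue, so an unreachable MariaDB can never block it.

```go
package main

import "fmt"

// store is a toy stand-in for the local SQLite layer: configs mimics
// local_configurations, pending mimics the pending_changes queue.
type store struct {
	configs map[string]string // uuid -> configuration JSON
	pending []string          // uuids awaiting the next background push
}

// saveConfiguration is the local-first write path: 1) local CRUD against
// SQLite only, 2) enqueue for the sync worker. No network call happens here.
func (s *store) saveConfiguration(uuid, data string) {
	s.configs[uuid] = data
	s.pending = append(s.pending, uuid)
}

func main() {
	s := &store{configs: map[string]string{}}
	s.saveConfiguration("demo-uuid", `{"items":[]}`)
	fmt.Println(len(s.configs), len(s.pending)) // 1 1
}
```

The background sync worker later drains `pending` toward MariaDB; failures there only delay the push, never the save.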

---

## Synchronization

### Data Flow Diagram

```
            [ SERVER / MariaDB ]
          ┌───────────────────────────┐
          │ qt_projects               │
          │ qt_configurations         │
          │ qt_pricelists             │
          │ qt_pricelist_items        │
          │ qt_pricelist_sync_status  │
          └─────────────┬─────────────┘
                        │
      pull (projects/configs/pricelists)
                        │
   ┌────────────────────┴────────────────────┐
   │                                         │
[ CLIENT A / SQLite ]              [ CLIENT B / SQLite ]
local_projects                     local_projects
local_configurations               local_configurations
local_pricelists                   local_pricelists
local_pricelist_items              local_pricelist_items
pending_changes                    pending_changes
   │                                         │
   └────── push (projects/configs only) ─────┘
                        │
            [ SERVER / MariaDB ]
```

### Sync Direction by Entity

| Entity | Direction |
|--------|-----------|
| Configurations | Client ↔ Server ↔ Other Clients |
| Projects | Client ↔ Server ↔ Other Clients |
| Pricelists | Server → Clients only (no push) |
| Components | Server → Clients only |

Local pricelists that are not present on the server and not referenced by active configurations are deleted automatically on sync.

### Soft Deletes (Archive Pattern)

Configurations and projects are **never hard-deleted**. Deletion is an archive operation: `is_active = false`.

- `DELETE /api/configs/:uuid` → sets `is_active = false` (archived); can be restored via `reactivate`
- `DELETE /api/projects/:uuid` → archives a project **variant** only (the `variant` field must be non-empty); main projects cannot be deleted via this endpoint
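
The variant-only guard can be sketched as a pure function. This is an illustrative in-memory version (the `Project` struct and `archiveProjectVariant` name are assumptions; the real handler runs an UPDATE against SQLite), but the rule is the same: a main project with an empty `variant` is rejected, and nothing is ever hard-deleted.

```go
package main

import (
	"errors"
	"fmt"
)

// Project is a minimal stand-in for a local_projects row.
type Project struct {
	UUID     string
	Variant  string // empty for a main project
	IsActive bool
}

// archiveProjectVariant mirrors the DELETE /api/projects/:uuid rule:
// only variants may be archived; main projects are rejected.
func archiveProjectVariant(p *Project) error {
	if p.Variant == "" {
		return errors.New("main project cannot be deleted via this endpoint")
	}
	p.IsActive = false // soft delete: archive, never a hard DELETE
	return nil
}

func main() {
	v := Project{UUID: "u1", Variant: "rev-B", IsActive: true}
	fmt.Println(archiveProjectVariant(&v), v.IsActive) // <nil> false

	m := Project{UUID: "u2", IsActive: true}
	fmt.Println(archiveProjectVariant(&m) != nil, m.IsActive) // true true
}
```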

## Sync Readiness Guard

Before every push/pull, a preflight check runs:
1. Is the server (MariaDB) reachable?
2. Can centralized local DB migrations be applied?
3. Does the application version satisfy the `min_app_version` of pending migrations?

**If the check fails:**
- Local CRUD continues without restriction
- The sync API returns `423 Locked` with `reason_code` and `reason_text`
- The UI shows a red indicator with the block reason

---

## Pricing

### Principle

**Prices come only from `local_pricelist_items`.**
Components (`local_components`) are metadata-only — they contain no pricing information.

### Lookup Pattern

```go
// Look up a price for a line item
price, found := s.lookupPriceByPricelistID(pricelistID, lotName)
if found && price > 0 {
    // use price
}

// Inside lookupPriceByPricelistID:
localPL, err := s.localDB.GetLocalPricelistByServerID(pricelistID)
price, err := s.localDB.GetLocalPriceForLot(localPL.ID, lotName)
```

### Multi-Level Pricelists

A configuration can reference up to three pricelists simultaneously:

| Field | Purpose |
|-------|---------|
| `pricelist_id` | Primary (estimate) |
| `warehouse_pricelist_id` | Warehouse pricing |
| `competitor_pricelist_id` | Competitor pricing |

Pricelist sources: `estimate` | `warehouse` | `competitor`

### "Auto" Pricelist Selection

The configurator supports explicit and automatic selection per source (`estimate`, `warehouse`, `competitor`):

- **Explicit mode:** a concrete `pricelist_id` is set by the user in settings.
- **Auto mode:** the client sends no explicit ID for that source; the backend resolves the current latest active pricelist.

`auto` must stay `auto` after a price-level refresh and after a manual "refresh prices":
- resolved IDs are runtime-only and must not overwrite the user's mode;
- switching to explicit selection must clear the runtime auto resolution for that source.

### Latest Pricelist Resolution Rules

For both the server (`qt_pricelists`) and the local cache (`local_pricelists`), "latest by source" is resolved with:

1. only pricelists that have at least one item (`EXISTS ...pricelist_items`);
2. a deterministic sort: `created_at DESC, id DESC`.

This prevents selecting empty/incomplete snapshots and removes nondeterministic ties.
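
The two rules can be demonstrated with an in-memory sketch (the `pricelist` struct and `latestBySource` helper are illustrative names, not the project's API): empty snapshots are filtered out first, then the `created_at DESC, id DESC` order breaks ties deterministically.

```go
package main

import (
	"fmt"
	"sort"
)

// pricelist is a minimal stand-in for a local_pricelists row.
type pricelist struct {
	ID        int
	Source    string
	CreatedAt string // ISO date, so lexicographic order matches chronology
	ItemCount int
}

// latestBySource applies the resolution rules from this section.
func latestBySource(rows []pricelist, source string) (pricelist, bool) {
	var candidates []pricelist
	for _, r := range rows {
		if r.Source == source && r.ItemCount > 0 { // rule 1: must have items
			candidates = append(candidates, r)
		}
	}
	if len(candidates) == 0 {
		return pricelist{}, false
	}
	sort.Slice(candidates, func(i, j int) bool { // rule 2: created_at DESC, id DESC
		if candidates[i].CreatedAt != candidates[j].CreatedAt {
			return candidates[i].CreatedAt > candidates[j].CreatedAt
		}
		return candidates[i].ID > candidates[j].ID
	})
	return candidates[0], true
}

func main() {
	rows := []pricelist{
		{ID: 10, Source: "estimate", CreatedAt: "2024-05-01", ItemCount: 120},
		{ID: 11, Source: "estimate", CreatedAt: "2024-05-01", ItemCount: 0},   // empty snapshot: skipped
		{ID: 12, Source: "estimate", CreatedAt: "2024-05-01", ItemCount: 130}, // same date: higher id wins
	}
	latest, ok := latestBySource(rows, "estimate")
	fmt.Println(ok, latest.ID) // true 12
}
```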

---

## Configuration Versioning

### Principle

Append-only: every save creates an immutable snapshot in `local_configuration_versions`.

```
local_configurations
  └── current_version_id ──► local_configuration_versions (v3)  ← active
                             local_configuration_versions (v2)
                             local_configuration_versions (v1)
```

- `version_no = max + 1` on every save
- Old versions are never modified or deleted in normal flow
- Rollback does **not** rewind history — it creates a **new** version from the snapshot

### Rollback

```http
POST /api/configs/:uuid/rollback
{
  "target_version": 3,
  "note": "optional comment"
}
```

Result:
- A new version `vN` is created with `data` from the target version
- `change_note = "rollback to v{target_version}"` (+ note if provided)
- `current_version_id` is switched to the new version
- The configuration moves to `sync_status = pending`

### Sync Status Flow

```
local → pending → synced
```

---

## Sync Payload for Versioning

Events in `pending_changes` for configurations contain:

| Field | Description |
|-------|-------------|
| `configuration_uuid` | Identifier |
| `operation` | `create` / `update` / `rollback` |
| `current_version_id` | Active version ID |
| `current_version_no` | Version number |
| `snapshot` | Current configuration state |
| `idempotency_key` | For idempotent push |
| `conflict_policy` | `last_write_wins` |

---

## Background Processes

| Process | Interval | What it does |
|---------|----------|--------------|
| Sync worker | 5 min | push pending + pull all |
| Backup scheduler | configurable (`backup.time`) | creates ZIP archives |

---

# 03 — Database

## SQLite (local, client-side)

File: `qfs.db` in the user-state directory (see [05-config.md](05-config.md)).

### Tables

#### Components and Reference Data

| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_components` | Component metadata (NO prices) | `lot_name` (PK), `lot_description`, `category`, `model` |
| `connection_settings` | MariaDB connection settings | key-value store |
| `app_settings` | Application settings | `key` (PK), `value`, `updated_at` |

#### Pricelists

| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_pricelists` | Pricelist headers | `id`, `server_id` (unique), `source`, `version`, `created_at` |
| `local_pricelist_items` | Pricelist line items ← **sole source of prices** | `id`, `pricelist_id` (FK), `lot_name`, `price`, `lot_category` |

#### Configurations and Projects

| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_configurations` | Saved configurations | `id`, `uuid` (unique), `items` (JSON), `pricelist_id`, `warehouse_pricelist_id`, `competitor_pricelist_id`, `current_version_id`, `sync_status` |
| `local_configuration_versions` | Immutable snapshots (revisions) | `id`, `configuration_id` (FK), `version_no`, `data` (JSON), `change_note`, `created_at` |
| `local_projects` | Projects | `id`, `uuid` (unique), `name`, `code`, `sync_status` |

#### Sync

| Table | Purpose |
|-------|---------|
| `pending_changes` | Queue of changes to push to MariaDB |
| `local_schema_migrations` | Applied migrations (idempotency guard) |

---

### Key SQLite Indexes

```sql
-- Pricelists
INDEX local_pricelist_items(pricelist_id)
UNIQUE INDEX local_pricelists(server_id)
INDEX local_pricelists(source, created_at)  -- used for "latest by source" queries
-- the latest-by-source runtime query also applies a deterministic tie-break by id DESC

-- Configurations
INDEX local_configurations(pricelist_id)
INDEX local_configurations(warehouse_pricelist_id)
INDEX local_configurations(competitor_pricelist_id)
UNIQUE INDEX local_configurations(uuid)
```

---

### `items` JSON Structure in Configurations

```json
{
  "items": [
    {
      "lot_name": "CPU_AMD_9654",
      "quantity": 2,
      "unit_price": 123456.78,
      "section": "Processors"
    }
  ]
}
```

Prices are stored inside the `items` JSON field and refreshed from the pricelist on configuration refresh.
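
In Go this JSON maps naturally onto a small struct. The `configItem` type and `refreshPrices` helper below are a sketch (the project's real type names may differ); the map stands in for a `local_pricelist_items` lookup during a configuration refresh.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// configItem matches one element of the items JSON shown above.
type configItem struct {
	LotName   string  `json:"lot_name"`
	Quantity  int     `json:"quantity"`
	UnitPrice float64 `json:"unit_price"`
	Section   string  `json:"section"`
}

// refreshPrices overwrites the stored unit_price from a pricelist lookup,
// mirroring what happens on configuration refresh.
func refreshPrices(items []configItem, prices map[string]float64) {
	for i := range items {
		if p, ok := prices[items[i].LotName]; ok {
			items[i].UnitPrice = p
		}
	}
}

func main() {
	raw := `{"items":[{"lot_name":"CPU_AMD_9654","quantity":2,"unit_price":123456.78,"section":"Processors"}]}`
	var cfg struct {
		Items []configItem `json:"items"`
	}
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	refreshPrices(cfg.Items, map[string]float64{"CPU_AMD_9654": 130000})
	fmt.Println(cfg.Items[0].Quantity, cfg.Items[0].UnitPrice) // 2 130000
}
```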

---

## MariaDB (server-side, sync-only)

Database: `RFQ_LOG`

### Tables and Permissions

| Table | Purpose | Permissions |
|-------|---------|-------------|
| `lot` | Component catalog | SELECT |
| `qt_lot_metadata` | Extended component data | SELECT |
| `qt_categories` | Component categories | SELECT |
| `qt_pricelists` | Pricelists | SELECT |
| `qt_pricelist_items` | Pricelist line items | SELECT |
| `qt_configurations` | Saved configurations | SELECT, INSERT, UPDATE |
| `qt_projects` | Projects | SELECT, INSERT, UPDATE |
| `qt_client_local_migrations` | Migration catalog | SELECT only |
| `qt_client_schema_state` | Applied migrations state | SELECT, INSERT, UPDATE |
| `qt_pricelist_sync_status` | Pricelist sync status | SELECT, INSERT, UPDATE |

### Grant Permissions to Existing User

```sql
GRANT SELECT ON RFQ_LOG.lot TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO '<DB_USER>'@'%';

GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO '<DB_USER>'@'%';

GRANT SELECT ON RFQ_LOG.qt_client_local_migrations TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO '<DB_USER>'@'%';

FLUSH PRIVILEGES;
```

### Create a New User

```sql
CREATE USER IF NOT EXISTS 'quote_user'@'%' IDENTIFIED BY '<DB_PASSWORD>';

GRANT SELECT ON RFQ_LOG.lot TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_client_local_migrations TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO 'quote_user'@'%';

FLUSH PRIVILEGES;
SHOW GRANTS FOR 'quote_user'@'%';
```

**Note:** If you see `Access denied for user ...@'<ip>'`, check for conflicting user entries (`user@localhost` vs `user@'%'`).

---

## Migrations

### SQLite Migrations (local)

- Stored in `migrations/` (SQL files)
- Applied via the `-migrate` flag or automatically on first run
- Idempotent: checked by `id` in `local_schema_migrations`
- Already-applied migrations are skipped

```bash
go run ./cmd/qfs -migrate
```

### Centralized Migrations (server-side)

- Stored in `qt_client_local_migrations` (MariaDB)
- Applied automatically during the sync readiness check
- `min_app_version` — the minimum app version required for the migration

---

## DB Debugging

```bash
# Inspect schema
sqlite3 ~/.local/state/quoteforge/qfs.db ".schema local_components"
sqlite3 ~/.local/state/quoteforge/qfs.db ".schema local_configurations"

# Check pricelist item count
sqlite3 ~/.local/state/quoteforge/qfs.db "SELECT COUNT(*) FROM local_pricelist_items"

# Check pending sync queue
sqlite3 ~/.local/state/quoteforge/qfs.db "SELECT COUNT(*) FROM pending_changes"
```

---

# 04 — API and Web Routes

## API Endpoints

### Setup

| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/setup` | Initial setup page |
| POST | `/setup` | Save connection settings |
| POST | `/setup/test` | Test MariaDB connection |
| GET | `/setup/status` | Setup status |

### Components

| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/api/components` | List components (metadata only) |
| GET | `/api/components/:lot_name` | Component by `lot_name` |
| GET | `/api/categories` | List categories |

### Quote

| Method | Endpoint | Purpose |
|--------|----------|---------|
| POST | `/api/quote/validate` | Validate line items |
| POST | `/api/quote/calculate` | Calculate quote (prices from pricelist) |
| POST | `/api/quote/price-levels` | Prices by level (estimate/warehouse/competitor) |

### Pricelists (read-only)

| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/api/pricelists` | List pricelists (`source`, `active_only`, pagination) |
| GET | `/api/pricelists/latest` | Latest pricelist by source |
| GET | `/api/pricelists/:id` | Pricelist by ID |
| GET | `/api/pricelists/:id/items` | Pricelist line items |
| GET | `/api/pricelists/:id/lots` | Lot names in pricelist |

`GET /api/pricelists?active_only=true` returns only pricelists that have synced items (`item_count > 0`).

### Configurations

| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/api/configs` | List configurations |
| POST | `/api/configs` | Create configuration |
| GET | `/api/configs/:uuid` | Get configuration |
| PUT | `/api/configs/:uuid` | Update configuration |
| DELETE | `/api/configs/:uuid` | Archive configuration |
| POST | `/api/configs/:uuid/refresh-prices` | Refresh prices from pricelist |
| POST | `/api/configs/:uuid/clone` | Clone configuration |
| POST | `/api/configs/:uuid/reactivate` | Restore archived configuration |
| POST | `/api/configs/:uuid/rename` | Rename configuration |
| POST | `/api/configs/preview-article` | Preview generated article for a configuration |
| POST | `/api/configs/:uuid/rollback` | Roll back to a version |
| GET | `/api/configs/:uuid/versions` | List versions |
| GET | `/api/configs/:uuid/versions/:version` | Get specific version |

### Projects

| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/api/projects` | List projects |
| POST | `/api/projects` | Create project |
| GET | `/api/projects/:uuid` | Get project |
| PUT | `/api/projects/:uuid` | Update project |
| DELETE | `/api/projects/:uuid` | Archive project variant (soft-delete via `is_active=false`; fails if the project has no `variant` set — main projects cannot be deleted this way) |
| GET | `/api/projects/:uuid/configs` | Project configurations |

### Sync

| Method | Endpoint | Purpose | Flow |
|--------|----------|---------|------|
| GET | `/api/sync/status` | Overall sync status | read-only |
| GET | `/api/sync/readiness` | Preflight status (ready/blocked/unknown) | read-only |
| GET | `/api/sync/info` | Data for sync modal | read-only |
| GET | `/api/sync/users-status` | Users status | read-only |
| GET | `/api/sync/pending` | List pending changes | read-only |
| GET | `/api/sync/pending/count` | Count of pending changes | read-only |
| POST | `/api/sync/push` | Push pending → MariaDB | SQLite → MariaDB |
| POST | `/api/sync/components` | Pull components | MariaDB → SQLite |
| POST | `/api/sync/pricelists` | Pull pricelists | MariaDB → SQLite |
| POST | `/api/sync/all` | Full sync: push + pull + import | bidirectional |
| POST | `/api/sync/repair` | Repair broken entries in `pending_changes` | SQLite |

**If sync is blocked by the readiness guard:** all POST sync methods return `423 Locked` with `reason_code` and `reason_text`.

### Export

| Method | Endpoint | Purpose |
|--------|----------|---------|
| POST | `/api/export/csv` | Export configuration to CSV |

**Export filename format:** `YYYY-MM-DD (ProjectCode) ConfigName Article.csv`
(uses `project.Code`, not `project.Name`)

---

## Web Routes

| Route | Page |
|-------|------|
| `/configs` | Configuration list |
| `/configurator` | Configurator |
| `/configs/:uuid/revisions` | Configuration revision history |
| `/projects` | Project list |
| `/projects/:uuid` | Project details |
| `/pricelists` | Pricelist list |
| `/pricelists/:id` | Pricelist details |
| `/setup` | Connection settings |

---

## Rollback API (details)

```http
POST /api/configs/:uuid/rollback
Content-Type: application/json

{
  "target_version": 3,
  "note": "optional comment"
}
```

Response: the updated configuration with the new version.

---

# 05 — Configuration and Environment

## File Paths

### SQLite database (`qfs.db`)

| OS | Default path |
|----|--------------|
| macOS | `~/Library/Application Support/QuoteForge/qfs.db` |
| Linux | `$XDG_STATE_HOME/quoteforge/qfs.db` or `~/.local/state/quoteforge/qfs.db` |
| Windows | `%LOCALAPPDATA%\QuoteForge\qfs.db` |

Override: `-localdb <path>` or `QFS_DB_PATH`.

### config.yaml

Searched for in the same user-state directory as `qfs.db` by default.
If the file does not exist, it is created automatically.
If the format is outdated, it is automatically migrated to the runtime format (`server` + `logging` sections only).

Override: `-config <path>` or `QFS_CONFIG_PATH`.

**Important:** `config.yaml` is a runtime user file — it is **not stored in the repository**.
`config.example.yaml` is the only config template in the repo.

---

## config.yaml Structure

```yaml
server:
  host: "0.0.0.0"
  port: 8080
  mode: "release"    # release | debug

logging:
  level: "info"      # debug | info | warn | error
  format: "json"     # json | text
  output: "stdout"   # stdout | stderr | /path/to/file

backup:
  time: "00:00"      # HH:MM in local time
```

---

## Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `QFS_DB_PATH` | Full path to SQLite DB | OS-specific user state dir |
| `QFS_STATE_DIR` | State directory (if `QFS_DB_PATH` is not set) | OS-specific user state dir |
| `QFS_CONFIG_PATH` | Full path to `config.yaml` | OS-specific user state dir |
| `QFS_BACKUP_DIR` | Root directory for rotating backups | `<db dir>/backups` |
| `QFS_BACKUP_DISABLE` | Disable automatic backups | — |
| `QF_DB_HOST` | MariaDB host | localhost |
| `QF_DB_PORT` | MariaDB port | 3306 |
| `QF_DB_NAME` | Database name | RFQ_LOG |
| `QF_DB_USER` | DB user | — |
| `QF_DB_PASSWORD` | DB password | — |
| `QF_JWT_SECRET` | JWT secret | — |
| `QF_SERVER_PORT` | HTTP server port | 8080 |

`QFS_BACKUP_DISABLE` accepts: `1`, `true`, `yes`.

---

## CLI Flags

| Flag | Description |
|------|-------------|
| `-config <path>` | Path to config.yaml |
| `-localdb <path>` | Path to SQLite DB |
| `-reset-localdb` | Reset the local DB (destructive!) |
| `-migrate` | Apply pending migrations and exit |
| `-version` | Print version and exit |

---

## Installation and First Run

### Requirements
- Go 1.22 or higher
- MariaDB 11.x (or MySQL 8.x)
- ~50 MB disk space

### Steps

```bash
# 1. Clone the repository
git clone <repo-url>
cd quoteforge

# 2. Apply migrations
go run ./cmd/qfs -migrate

# 3. Start
go run ./cmd/qfs
# or
make run
```

The application is available at: http://localhost:8080

On first run, `/setup` opens for configuring the MariaDB connection.

### OPS Project Migrator

Migrates quotes whose names start with `OPS-xxxx` (where `x` is a digit) into a project named `OPS-xxxx`.

```bash
# Preview first (always)
go run ./cmd/migrate_ops_projects

# Apply
go run ./cmd/migrate_ops_projects -apply

# Apply without interactive confirmation
go run ./cmd/migrate_ops_projects -apply -yes
```
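
The name-matching rule can be sketched with a regular expression. This is illustrative only (the `opsProject` helper is not the migrator's real code, and "exactly four digits after `OPS-`" is an assumption read from the `OPS-xxxx` wording above):

```go
package main

import (
	"fmt"
	"regexp"
)

// opsPrefix assumes the OPS-xxxx pattern means exactly four digits.
var opsPrefix = regexp.MustCompile(`^(OPS-\d{4})`)

// opsProject extracts the target project name from a quote name,
// or reports false when the quote would be left untouched.
func opsProject(quoteName string) (string, bool) {
	m := opsPrefix.FindStringSubmatch(quoteName)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	name, ok := opsProject("OPS-1234 rack refresh")
	fmt.Println(ok, name) // true OPS-1234
	_, ok = opsProject("Quote 77")
	fmt.Println(ok) // false
}
```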
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Docker
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker build -t quoteforge .
|
|
||||||
docker-compose up -d
|
|
||||||
```
|
|
||||||
@@ -1,221 +0,0 @@
|
|||||||
# 06 — Backup
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
Automatic rotating ZIP backup system for local data.
|
|
||||||
|
|
||||||
**What is included in each archive:**
|
|
||||||
- SQLite DB (`qfs.db`)
|
|
||||||
- SQLite sidecars (`qfs.db-wal`, `qfs.db-shm`) if present
|
|
||||||
- `config.yaml` if present
|
|
||||||
|
|
||||||
**Archive name format:** `qfs-backp-YYYY-MM-DD.zip`
|
|
||||||
|
|
||||||
**Retention policy:**
|
|
||||||
| Period | Keep |
|
|
||||||
|--------|------|
|
|
||||||
| Daily | 7 archives |
|
|
||||||
| Weekly | 4 archives |
|
|
||||||
| Monthly | 12 archives |
|
|
||||||
| Yearly | 10 archives |
|
|
||||||
|
|
||||||
**Directories:** `<backup root>/daily`, `/weekly`, `/monthly`, `/yearly`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Configuration
|
|
||||||
|
|
||||||
```yaml
|
|
||||||
backup:
|
|
||||||
time: "00:00" # Trigger time in local time (HH:MM format)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Environment variables:**
|
|
||||||
- `QFS_BACKUP_DIR` — backup root directory (default: `<db dir>/backups`)
|
|
||||||
- `QFS_BACKUP_DISABLE` — disable backups (`1/true/yes`)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Behavior
|
|
||||||
|
|
||||||
- **At startup:** if no backup exists for the current period, one is created immediately
|
|
||||||
- **Daily:** at the configured time, a new backup is created
|
|
||||||
- **Deduplication:** prevented via a `.period.json` marker file in each period directory
|
|
||||||
- **Rotation:** excess old archives are deleted automatically
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Implementation
|
|
||||||
|
|
||||||
Module: `internal/appstate/backup.go`
|
|
||||||
|
|
||||||
Main function:
|
|
||||||
```go
|
|
||||||
func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error)
|
|
||||||
```
|
|
||||||
|
|
||||||
Scheduler (in `main.go`):
|
|
||||||
```go
|
|
||||||
func startBackupScheduler(ctx context.Context, cfg *config.Config, dbPath, configPath string)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Config struct
|
|
||||||
|
|
||||||
```go
|
|
||||||
type BackupConfig struct {
|
|
||||||
Time string `yaml:"time"`
|
|
||||||
}
|
|
||||||
// Default: "00:00"
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Implementation Notes
|
|
||||||
|
|
||||||
- `backup.time` is in **local time** without timezone offset parsing
|
|
||||||
- `.period.json` is the marker that prevents duplicate backups within the same period
|
|
||||||
- Archive filenames contain only the date; uniqueness is ensured by per-period directories + the period marker
|
|
||||||
- When changing naming or retention: update both the filename logic and the prune logic together
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Full Listing: `internal/appstate/backup.go`
|
|
||||||
|
|
||||||
```go
|
|
||||||
package appstate
|
|
||||||
|
|
||||||
import (
|
|
||||||
"archive/zip"
|
|
||||||
"encoding/json"
|
|
||||||
"fmt"
|
|
||||||
"io"
|
|
||||||
"os"
|
|
||||||
"path/filepath"
|
|
||||||
"sort"
|
|
||||||
"strings"
|
|
||||||
"time"
|
|
||||||
)
|
|
||||||
|
|
||||||
type backupPeriod struct {
|
|
||||||
name string
|
|
||||||
retention int
|
|
||||||
key func(time.Time) string
|
|
||||||
date func(time.Time) string
|
|
||||||
}
|
|
||||||
|
|
||||||
var backupPeriods = []backupPeriod{
|
|
||||||
{
|
|
||||||
name: "daily",
|
|
||||||
retention: 7,
|
|
||||||
key: func(t time.Time) string { return t.Format("2006-01-02") },
|
|
||||||
date: func(t time.Time) string { return t.Format("2006-01-02") },
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "weekly",
|
|
||||||
retention: 4,
|
|
||||||
key: func(t time.Time) string {
|
|
||||||
y, w := t.ISOWeek()
|
|
||||||
return fmt.Sprintf("%04d-W%02d", y, w)
|
|
||||||
},
|
|
||||||
date: func(t time.Time) string { return t.Format("2006-01-02") },
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "monthly",
|
|
||||||
retention: 12,
|
|
||||||
key: func(t time.Time) string { return t.Format("2006-01") },
|
|
||||||
date: func(t time.Time) string { return t.Format("2006-01-02") },
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "yearly",
|
|
||||||
retention: 10,
|
|
||||||
key: func(t time.Time) string { return t.Format("2006") },
|
|
||||||
date: func(t time.Time) string { return t.Format("2006-01-02") },
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error) {
|
|
||||||
if isBackupDisabled() || dbPath == "" {
|
|
||||||
return nil, nil
|
|
||||||
}
|
|
||||||
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
|
|
||||||
return nil, nil
|
|
||||||
}
|
|
||||||
root := resolveBackupRoot(dbPath)
|
|
||||||
now := time.Now()
|
|
||||||
created := make([]string, 0)
|
|
||||||
for _, period := range backupPeriods {
|
|
||||||
newFiles, err := ensurePeriodBackup(root, period, now, dbPath, configPath)
|
|
||||||
if err != nil {
|
|
||||||
return created, err
|
|
||||||
}
|
|
||||||
created = append(created, newFiles...)
|
|
||||||
}
|
|
||||||
return created, nil
|
|
||||||
}
|
|
||||||
```

---

## Full Listing: Scheduler Hook (`main.go`)

```go
func startBackupScheduler(ctx context.Context, cfg *config.Config, dbPath, configPath string) {
	if cfg == nil {
		return
	}
	hour, minute, err := parseBackupTime(cfg.Backup.Time)
	if err != nil {
		slog.Warn("invalid backup time; using 00:00", "value", cfg.Backup.Time, "error", err)
		hour, minute = 0, 0
	}

	// Startup check: create backup immediately if none exists for current periods
	if created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath); backupErr != nil {
		slog.Error("local backup failed", "error", backupErr)
	} else {
		for _, path := range created {
			slog.Info("local backup completed", "archive", path)
		}
	}

	for {
		next := nextBackupTime(time.Now(), hour, minute)
		timer := time.NewTimer(time.Until(next))
		select {
		case <-ctx.Done():
			timer.Stop()
			return
		case <-timer.C:
			start := time.Now()
			created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath)
			duration := time.Since(start)
			if backupErr != nil {
				slog.Error("local backup failed", "error", backupErr, "duration", duration)
			} else {
				for _, path := range created {
					slog.Info("local backup completed", "archive", path, "duration", duration)
				}
			}
		}
	}
}

func parseBackupTime(value string) (int, int, error) {
	if strings.TrimSpace(value) == "" {
		return 0, 0, fmt.Errorf("empty backup time")
	}
	parsed, err := time.Parse("15:04", value)
	if err != nil {
		return 0, 0, err
	}
	return parsed.Hour(), parsed.Minute(), nil
}

func nextBackupTime(now time.Time, hour, minute int) time.Time {
	target := time.Date(now.Year(), now.Month(), now.Day(), hour, minute, 0, 0, now.Location())
	if !now.Before(target) {
		target = target.Add(24 * time.Hour)
	}
	return target
}
```

bible/07-dev.md (deleted, 136 lines)
@@ -1,136 +0,0 @@
# 07 — Development

## Commands

```bash
# Run (dev)
go run ./cmd/qfs
make run

# Build
make build-release                   # Optimized build with version info
CGO_ENABLED=0 go build -o bin/qfs ./cmd/qfs

# Cross-platform build
make build-all                       # Linux, macOS, Windows
make build-windows                   # Windows only

# Verification
go build ./cmd/qfs                   # Must compile without errors
go vet ./...                         # Linter

# Tests
go test ./...
make test

# Utilities
make install-hooks                   # Git hooks (block committing secrets)
make clean                           # Clean bin/
make help                            # All available commands
```

---

## Code Style

- **Formatting:** `gofmt` (mandatory)
- **Logging:** `slog` only (structured logging)
- **Errors:** explicit wrapping with context (`fmt.Errorf("context: %w", err)`)
- **Style:** no unnecessary abstractions; minimum code for the task

---

## Guardrails

### What Must Never Be Restored

The following components were **intentionally removed** and must not be brought back:
- cron jobs
- importer utility
- admin pricing UI/API
- alerts
- stock import

### Configuration Files

- `config.yaml` — runtime user file, **not stored in the repository**
- `config.example.yaml` — the only config template in the repo

### Sync and Local-First

- Any sync changes must preserve local-first behavior
- Local CRUD must not be blocked when MariaDB is unavailable

### Formats and UI

- **CSV export:** the filename must use the **project code** (`project.Code`), not the project name.
  Format: `YYYY-MM-DD (ProjectCode) ConfigName Article.csv`
- **Breadcrumbs UI:** names longer than 16 characters must be truncated with an ellipsis

### Architecture Documentation

- **Every architectural decision must be recorded in `bible/`**
- The corresponding Bible file must be updated **in the same commit** as the code change
- On every user-requested commit, review and update the Bible in that same commit

---

## Common Tasks

### Add a Field to Configuration

1. Add the field to the `LocalConfiguration` struct (`internal/models/`)
2. Add GORM tags for the DB column
3. Write a SQL migration (`migrations/`)
4. Update the `ConfigurationToLocal` / `LocalToConfiguration` converters
5. Update API handlers and services
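Steps 1–2 can be sketched as follows. This is a hypothetical illustration: the `Notes` field, its tag values, and the table name below are invented for the example and do not reflect the real QuoteForge structs.

```go
package main

import "fmt"

// Hypothetical sketch of steps 1-2: add the field to the struct with a GORM
// tag naming its DB column. Field and tag values here are illustrative only.
type LocalConfiguration struct {
	ID    uint   `gorm:"primaryKey"`
	Name  string `gorm:"column:name"`
	Notes string `gorm:"column:notes;size:1024"` // new field with its column tag
}

// Step 3 would pair this with an additive SQL migration, e.g.:
//   ALTER TABLE local_configurations ADD COLUMN notes VARCHAR(1024) NOT NULL DEFAULT '';

func main() {
	cfg := LocalConfiguration{ID: 1, Name: "demo", Notes: "added field"}
	fmt.Println(cfg.Notes) // the converters (step 4) would copy this field both ways
}
```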

### Add a Field to Component

1. Add the field to the `LocalComponent` struct (`internal/models/`)
2. Update the SQL query in `SyncComponents()`
3. Update the `componentRow` struct to match
4. Update converter functions

### Add a Pricelist Price Lookup

```go
// Modern pattern
price, found := s.lookupPriceByPricelistID(pricelistID, lotName)
if found && price > 0 {
	// use price
}
```
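The lookup above resolves a price by the configuration's `pricelist_id` plus the line item's `lot_name`. A self-contained sketch of that contract, with an in-memory map standing in for the `local_pricelist_items` table (the wrapper function and sample data are hypothetical; the real code runs a SQLite query):

```go
package main

import "fmt"

// priceKey mirrors the documented lookup key: pricelist_id + lot_name.
type priceKey struct {
	PricelistID int64
	LotName     string
}

// items stands in for local_pricelist_items; illustrative data only.
var items = map[priceKey]float64{
	{PricelistID: 7, LotName: "PSU-450"}: 129.90,
}

// lookupPriceByPricelistID follows the documented (price, found) contract.
func lookupPriceByPricelistID(pricelistID int64, lotName string) (float64, bool) {
	price, ok := items[priceKey{PricelistID: pricelistID, LotName: lotName}]
	return price, ok
}

func main() {
	if price, found := lookupPriceByPricelistID(7, "PSU-450"); found && price > 0 {
		fmt.Println(price) // 129.9
	}
	_, found := lookupPriceByPricelistID(7, "missing-lot")
	fmt.Println(found) // false: caller falls back to the old price in config.items
}
```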

---

## Known Gotchas

1. **`CurrentPrice` removed from components** — any code using it will fail to compile
2. **`HasPrice` filter removed** — `ListComponents` in `component.go` no longer supports this filter
3. **Quote calculation:** always offline-first (SQLite); the online path is separate
4. **Items JSON:** prices are stored in the configuration's `items` field, not fetched from components
5. **Migrations are additive:** already-applied migrations are skipped (checked by `id` in `local_schema_migrations`)
6. **`SyncedAt` removed:** the last component sync time now lives in `app_settings` (key=`last_component_sync`)

---

## Debugging Price Issues

**Problem: quote returns no prices**
1. Check that `pricelist_id` is set on the configuration
2. Check that pricelist items exist: `SELECT COUNT(*) FROM local_pricelist_items`
3. Check `lookupPriceByPricelistID()` in `quote.go`
4. Verify the correct source is used (estimate/warehouse/competitor)

**Problem: component sync not working**
1. Components sync as metadata only — no prices
2. Prices come via a separate pricelist sync
3. Check `SyncComponents()` and the MariaDB query

**Problem: configuration refresh does not update prices**
1. Refresh uses the latest estimate pricelist by default
2. Latest-pricelist resolution ignores pricelists without items (`EXISTS local_pricelist_items`)
3. Old prices in `config.items` are preserved if a line item is not found in the pricelist
4. To force a specific pricelist: set `configuration.pricelist_id`
5. In the configurator, `Авто` ("Auto") must remain auto mode: the runtime-resolved ID must not be persisted as an explicit selection
@@ -1,55 +0,0 @@
# QuoteForge Bible — Architectural Documentation

The single source of truth for architecture, schemas, and patterns.

---

## Table of Contents

| File | Topic |
|------|-------|
| [01-overview.md](01-overview.md) | Product: purpose, features, tech stack, repository structure |
| [02-architecture.md](02-architecture.md) | Architecture: local-first, sync, pricing, versioning |
| [03-database.md](03-database.md) | DB schemas: SQLite + MariaDB, permissions, indexes |
| [04-api.md](04-api.md) | API endpoints and web routes |
| [05-config.md](05-config.md) | Configuration, environment variables, paths, installation |
| [06-backup.md](06-backup.md) | Backup: implementation, rotation policy |
| [07-dev.md](07-dev.md) | Development: commands, code style, guardrails |

---

## Bible Rules

> **Every architectural decision must be recorded in the Bible.**
>
> For any change to the DB schema, data access patterns, sync behavior, API contracts,
> configuration format, or any other system-level aspect, the corresponding `bible/` file
> **must be updated in the same commit** as the code.
>
> On every user-requested commit, the Bible must be reviewed and updated in that commit.
>
> The Bible is the single source of truth for architecture. Outdated documentation is worse than none.

> **Documentation language: English.**
>
> All files in `bible/` are written and updated **in English only**.
> Mixing languages is not allowed.

---

## Quick Reference

**Where is user data stored?**
SQLite → `~/Library/Application Support/QuoteForge/qfs.db` (macOS). MariaDB is sync-only.

**How to look up a price for a line item?**
`local_pricelist_items` → by the configuration's `pricelist_id` plus `lot_name`. Prices are **never** taken from `local_components`.

**Pre-commit check?**
`go build ./cmd/qfs && go vet ./...`

**What must never be restored?**
cron jobs, admin pricing, alerts, stock import, importer utility — all removed intentionally.

**Where is the release changelog?**
`releases/memory/v{major}.{minor}.{patch}.md`
cmd/migrate_project_updated_at/main.go (new file, 173 lines)
@@ -0,0 +1,173 @@
```go
package main

import (
	"flag"
	"fmt"
	"log"
	"sort"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/appstate"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

type projectTimestampRow struct {
	UUID      string
	UpdatedAt time.Time
}

type updatePlanRow struct {
	UUID            string
	Code            string
	Variant         string
	LocalUpdatedAt  time.Time
	ServerUpdatedAt time.Time
}

func main() {
	defaultLocalDBPath, err := appstate.ResolveDBPath("")
	if err != nil {
		log.Fatalf("failed to resolve default local SQLite path: %v", err)
	}

	localDBPath := flag.String("localdb", defaultLocalDBPath, "path to local SQLite database (default: user state dir or QFS_DB_PATH)")
	apply := flag.Bool("apply", false, "apply updates to local SQLite (default is preview only)")
	flag.Parse()

	local, err := localdb.New(*localDBPath)
	if err != nil {
		log.Fatalf("failed to initialize local database: %v", err)
	}
	defer local.Close()

	if !local.HasSettings() {
		log.Fatalf("SQLite connection settings are not configured. Run qfs setup first.")
	}

	dsn, err := local.GetDSN()
	if err != nil {
		log.Fatalf("failed to build DSN from SQLite settings: %v", err)
	}

	db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Silent),
	})
	if err != nil {
		log.Fatalf("failed to connect to MariaDB: %v", err)
	}

	serverRows, err := loadServerProjects(db)
	if err != nil {
		log.Fatalf("failed to load server projects: %v", err)
	}

	localProjects, err := local.GetAllProjects(true)
	if err != nil {
		log.Fatalf("failed to load local projects: %v", err)
	}

	plan := buildUpdatePlan(localProjects, serverRows)
	printPlan(plan, *apply)

	if !*apply || len(plan) == 0 {
		return
	}

	updated := 0
	for i := range plan {
		project, err := local.GetProjectByUUID(plan[i].UUID)
		if err != nil {
			log.Printf("skip %s: load local project: %v", plan[i].UUID, err)
			continue
		}
		project.UpdatedAt = plan[i].ServerUpdatedAt
		if err := local.SaveProjectPreservingUpdatedAt(project); err != nil {
			log.Printf("skip %s: save local project: %v", plan[i].UUID, err)
			continue
		}
		updated++
	}

	log.Printf("updated %d local project timestamps", updated)
}

func loadServerProjects(db *gorm.DB) (map[string]time.Time, error) {
	var rows []projectTimestampRow
	if err := db.Model(&models.Project{}).
		Select("uuid, updated_at").
		Find(&rows).Error; err != nil {
		return nil, err
	}

	out := make(map[string]time.Time, len(rows))
	for _, row := range rows {
		if row.UUID == "" {
			continue
		}
		out[row.UUID] = row.UpdatedAt
	}
	return out, nil
}

func buildUpdatePlan(localProjects []localdb.LocalProject, serverRows map[string]time.Time) []updatePlanRow {
	plan := make([]updatePlanRow, 0)
	for i := range localProjects {
		project := localProjects[i]
		serverUpdatedAt, ok := serverRows[project.UUID]
		if !ok {
			continue
		}
		if project.UpdatedAt.Equal(serverUpdatedAt) {
			continue
		}
		plan = append(plan, updatePlanRow{
			UUID:            project.UUID,
			Code:            project.Code,
			Variant:         project.Variant,
			LocalUpdatedAt:  project.UpdatedAt,
			ServerUpdatedAt: serverUpdatedAt,
		})
	}

	sort.Slice(plan, func(i, j int) bool {
		if plan[i].Code != plan[j].Code {
			return plan[i].Code < plan[j].Code
		}
		return plan[i].Variant < plan[j].Variant
	})

	return plan
}

func printPlan(plan []updatePlanRow, apply bool) {
	mode := "preview"
	if apply {
		mode = "apply"
	}
	log.Printf("project updated_at resync mode=%s changes=%d", mode, len(plan))
	if len(plan) == 0 {
		log.Printf("no local project timestamps need resync")
		return
	}
	for _, row := range plan {
		variant := row.Variant
		if variant == "" {
			variant = "main"
		}
		log.Printf("%s [%s] local=%s server=%s", row.Code, variant, formatStamp(row.LocalUpdatedAt), formatStamp(row.ServerUpdatedAt))
	}
	if !apply {
		fmt.Println("Re-run with -apply to write server updated_at into local SQLite.")
	}
}

func formatStamp(value time.Time) string {
	if value.IsZero() {
		return "zero"
	}
	return value.Format(time.RFC3339)
}
```
```diff
@@ -39,6 +39,10 @@ logging:
 		t.Fatalf("load legacy config: %v", err)
 	}
 	setConfigDefaults(cfg)
+	cfg.Server.Host, _, err = normalizeLoopbackServerHost(cfg.Server.Host)
+	if err != nil {
+		t.Fatalf("normalize server host: %v", err)
+	}
 	if err := migrateConfigFileToRuntimeShape(path, cfg); err != nil {
 		t.Fatalf("migrate config: %v", err)
 	}
@@ -60,7 +64,43 @@ logging:
 	if !strings.Contains(text, "port: 9191") {
 		t.Fatalf("migrated config did not preserve server port:\n%s", text)
 	}
+	if !strings.Contains(text, "host: 127.0.0.1") {
+		t.Fatalf("migrated config did not normalize server host:\n%s", text)
+	}
 	if !strings.Contains(text, "level: debug") {
 		t.Fatalf("migrated config did not preserve logging level:\n%s", text)
 	}
 }
+
+func TestNormalizeLoopbackServerHost(t *testing.T) {
+	t.Parallel()
+
+	cases := []struct {
+		host        string
+		want        string
+		wantChanged bool
+		wantErr     bool
+	}{
+		{host: "127.0.0.1", want: "127.0.0.1", wantChanged: false, wantErr: false},
+		{host: "localhost", want: "127.0.0.1", wantChanged: true, wantErr: false},
+		{host: "::1", want: "127.0.0.1", wantChanged: true, wantErr: false},
+		{host: "0.0.0.0", want: "127.0.0.1", wantChanged: true, wantErr: false},
+		{host: "192.168.1.10", want: "127.0.0.1", wantChanged: true, wantErr: false},
+	}
+
+	for _, tc := range cases {
+		got, changed, err := normalizeLoopbackServerHost(tc.host)
+		if tc.wantErr && err == nil {
+			t.Fatalf("expected error for host %q", tc.host)
+		}
+		if !tc.wantErr && err != nil {
+			t.Fatalf("unexpected error for host %q: %v", tc.host, err)
+		}
+		if got != tc.want {
+			t.Fatalf("unexpected normalized host for %q: got %q want %q", tc.host, got, tc.want)
+		}
+		if changed != tc.wantChanged {
+			t.Fatalf("unexpected changed flag for %q: got %t want %t", tc.host, changed, tc.wantChanged)
+		}
+	}
+}
```
cmd/qfs/main.go (501 lines)
@@ -6,9 +6,11 @@ import (
|
|||||||
"errors"
|
"errors"
|
||||||
"flag"
|
"flag"
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"io"
|
||||||
"io/fs"
|
"io/fs"
|
||||||
"log/slog"
|
"log/slog"
|
||||||
"math"
|
"math"
|
||||||
|
"net"
|
||||||
"net/http"
|
"net/http"
|
||||||
"os"
|
"os"
|
||||||
"os/exec"
|
"os/exec"
|
||||||
@@ -31,7 +33,6 @@ import (
|
|||||||
"git.mchus.pro/mchus/quoteforge/internal/localdb"
|
"git.mchus.pro/mchus/quoteforge/internal/localdb"
|
||||||
"git.mchus.pro/mchus/quoteforge/internal/middleware"
|
"git.mchus.pro/mchus/quoteforge/internal/middleware"
|
||||||
"git.mchus.pro/mchus/quoteforge/internal/models"
|
"git.mchus.pro/mchus/quoteforge/internal/models"
|
||||||
"git.mchus.pro/mchus/quoteforge/internal/repository"
|
|
||||||
"git.mchus.pro/mchus/quoteforge/internal/services"
|
"git.mchus.pro/mchus/quoteforge/internal/services"
|
||||||
"git.mchus.pro/mchus/quoteforge/internal/services/sync"
|
"git.mchus.pro/mchus/quoteforge/internal/services/sync"
|
||||||
"github.com/gin-gonic/gin"
|
"github.com/gin-gonic/gin"
|
||||||
@@ -43,11 +44,16 @@ import (
|
|||||||
|
|
||||||
// Version is set via ldflags during build
|
// Version is set via ldflags during build
|
||||||
var Version = "dev"
|
var Version = "dev"
|
||||||
|
var errVendorImportTooLarge = errors.New("vendor workspace file exceeds 1 GiB limit")
|
||||||
|
|
||||||
const backgroundSyncInterval = 5 * time.Minute
|
const backgroundSyncInterval = 5 * time.Minute
|
||||||
const onDemandPullCooldown = 30 * time.Second
|
const onDemandPullCooldown = 30 * time.Second
|
||||||
const startupConsoleWarning = "Не закрывайте консоль иначе приложение не будет работать"
|
const startupConsoleWarning = "Не закрывайте консоль иначе приложение не будет работать"
|
||||||
|
|
||||||
|
var vendorImportMaxBytes int64 = 1 << 30
|
||||||
|
|
||||||
|
const vendorImportMultipartOverheadBytes int64 = 8 << 20
|
||||||
|
|
||||||
func main() {
|
func main() {
|
||||||
showStartupConsoleWarning()
|
showStartupConsoleWarning()
|
||||||
|
|
||||||
@@ -142,6 +148,15 @@ func main() {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
setConfigDefaults(cfg)
|
setConfigDefaults(cfg)
|
||||||
|
normalizedHost, changed, err := normalizeLoopbackServerHost(cfg.Server.Host)
|
||||||
|
if err != nil {
|
||||||
|
slog.Error("invalid server host", "host", cfg.Server.Host, "error", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
if changed {
|
||||||
|
slog.Warn("corrected server host to loopback", "from", cfg.Server.Host, "to", normalizedHost)
|
||||||
|
}
|
||||||
|
cfg.Server.Host = normalizedHost
|
||||||
if err := migrateConfigFileToRuntimeShape(resolvedConfigPath, cfg); err != nil {
|
if err := migrateConfigFileToRuntimeShape(resolvedConfigPath, cfg); err != nil {
|
||||||
slog.Error("failed to migrate config file format", "path", resolvedConfigPath, "error", err)
|
slog.Error("failed to migrate config file format", "path", resolvedConfigPath, "error", err)
|
||||||
os.Exit(1)
|
os.Exit(1)
|
||||||
@@ -150,29 +165,25 @@ func main() {
|
|||||||
|
|
||||||
setupLogger(cfg.Logging)
|
setupLogger(cfg.Logging)
|
||||||
|
|
||||||
// Create connection manager and try to connect immediately if settings exist
|
// Create connection manager. Runtime stays local-first; MariaDB is used on demand by sync/setup only.
|
||||||
connMgr := db.NewConnectionManager(local)
|
connMgr := db.NewConnectionManager(local)
|
||||||
|
|
||||||
dbUser := local.GetDBUser()
|
dbUser := local.GetDBUser()
|
||||||
|
|
||||||
// Try to connect to MariaDB on startup
|
|
||||||
mariaDB, err := connMgr.GetDB()
|
|
||||||
if err != nil {
|
|
||||||
slog.Warn("failed to connect to MariaDB on startup, starting in offline mode", "error", err)
|
|
||||||
mariaDB = nil
|
|
||||||
} else {
|
|
||||||
slog.Info("successfully connected to MariaDB on startup")
|
|
||||||
}
|
|
||||||
|
|
||||||
slog.Info("starting QuoteForge server",
|
slog.Info("starting QuoteForge server",
|
||||||
"version", Version,
|
"version", Version,
|
||||||
"host", cfg.Server.Host,
|
"host", cfg.Server.Host,
|
||||||
"port", cfg.Server.Port,
|
"port", cfg.Server.Port,
|
||||||
"db_user", dbUser,
|
"db_user", dbUser,
|
||||||
"online", mariaDB != nil,
|
"online", false,
|
||||||
)
|
)
|
||||||
|
|
||||||
if *migrate {
|
if *migrate {
|
||||||
|
mariaDB, err := connMgr.GetDB()
|
||||||
|
if err != nil {
|
||||||
|
slog.Error("cannot run migrations: database not available", "error", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
if mariaDB == nil {
|
if mariaDB == nil {
|
||||||
slog.Error("cannot run migrations: database not available")
|
slog.Error("cannot run migrations: database not available")
|
||||||
os.Exit(1)
|
os.Exit(1)
|
||||||
@@ -189,39 +200,10 @@ func main() {
|
|||||||
slog.Info("migrations completed")
|
slog.Info("migrations completed")
|
||||||
}
|
}
|
||||||
|
|
||||||
// Always apply SQL migrations on startup when database is available.
|
|
||||||
// This keeps schema in sync for long-running installations without manual steps.
|
|
||||||
// If current DB user does not have enough privileges, continue startup in normal mode.
|
|
||||||
if mariaDB != nil {
|
|
||||||
sqlMigrationsPath := filepath.Join("migrations")
|
|
||||||
needsMigrations, err := models.NeedsSQLMigrations(mariaDB, sqlMigrationsPath)
|
|
||||||
if err != nil {
|
|
||||||
if models.IsMigrationPermissionError(err) {
|
|
||||||
slog.Info("startup SQL migrations skipped: insufficient database privileges", "path", sqlMigrationsPath, "error", err)
|
|
||||||
} else {
|
|
||||||
slog.Error("startup SQL migrations check failed", "path", sqlMigrationsPath, "error", err)
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
} else if needsMigrations {
|
|
||||||
if err := models.RunSQLMigrations(mariaDB, sqlMigrationsPath); err != nil {
|
|
||||||
if models.IsMigrationPermissionError(err) {
|
|
||||||
slog.Info("startup SQL migrations skipped: insufficient database privileges", "path", sqlMigrationsPath, "error", err)
|
|
||||||
} else {
|
|
||||||
slog.Error("startup SQL migrations failed", "path", sqlMigrationsPath, "error", err)
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
slog.Info("startup SQL migrations applied", "path", sqlMigrationsPath)
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
slog.Debug("startup SQL migrations not needed", "path", sqlMigrationsPath)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
gin.SetMode(cfg.Server.Mode)
|
gin.SetMode(cfg.Server.Mode)
|
||||||
restartSig := make(chan struct{}, 1)
|
restartSig := make(chan struct{}, 1)
|
||||||
|
|
||||||
router, syncService, err := setupRouter(cfg, local, connMgr, mariaDB, dbUser, restartSig)
|
router, syncService, err := setupRouter(cfg, local, connMgr, dbUser, restartSig)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
slog.Error("failed to setup router", "error", err)
|
slog.Error("failed to setup router", "error", err)
|
||||||
os.Exit(1)
|
os.Exit(1)
|
||||||
@@ -336,8 +318,6 @@ func derefString(value *string) string {
|
|||||||
return *value
|
return *value
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
func setConfigDefaults(cfg *config.Config) {
|
func setConfigDefaults(cfg *config.Config) {
|
||||||
if cfg.Server.Host == "" {
|
if cfg.Server.Host == "" {
|
||||||
cfg.Server.Host = "127.0.0.1"
|
cfg.Server.Host = "127.0.0.1"
|
||||||
@@ -354,29 +334,47 @@ func setConfigDefaults(cfg *config.Config) {
|
|||||||
if cfg.Server.WriteTimeout == 0 {
|
if cfg.Server.WriteTimeout == 0 {
|
||||||
cfg.Server.WriteTimeout = 30 * time.Second
|
cfg.Server.WriteTimeout = 30 * time.Second
|
||||||
}
|
}
|
||||||
if cfg.Pricing.DefaultMethod == "" {
|
|
||||||
cfg.Pricing.DefaultMethod = "weighted_median"
|
|
||||||
}
|
|
||||||
if cfg.Pricing.DefaultPeriodDays == 0 {
|
|
||||||
cfg.Pricing.DefaultPeriodDays = 90
|
|
||||||
}
|
|
||||||
if cfg.Pricing.FreshnessGreenDays == 0 {
|
|
||||||
cfg.Pricing.FreshnessGreenDays = 30
|
|
||||||
}
|
|
||||||
if cfg.Pricing.FreshnessYellowDays == 0 {
|
|
||||||
cfg.Pricing.FreshnessYellowDays = 60
|
|
||||||
}
|
|
||||||
if cfg.Pricing.FreshnessRedDays == 0 {
|
|
||||||
cfg.Pricing.FreshnessRedDays = 90
|
|
||||||
}
|
|
||||||
if cfg.Pricing.MinQuotesForMedian == 0 {
|
|
||||||
cfg.Pricing.MinQuotesForMedian = 3
|
|
||||||
}
|
|
||||||
if cfg.Backup.Time == "" {
|
if cfg.Backup.Time == "" {
|
||||||
cfg.Backup.Time = "00:00"
|
cfg.Backup.Time = "00:00"
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func normalizeLoopbackServerHost(host string) (string, bool, error) {
|
||||||
|
trimmed := strings.TrimSpace(host)
|
||||||
|
if trimmed == "" {
|
||||||
|
return "", false, fmt.Errorf("server.host must not be empty")
|
||||||
|
}
|
||||||
|
const loopbackHost = "127.0.0.1"
|
||||||
|
if trimmed == loopbackHost {
|
||||||
|
return loopbackHost, false, nil
|
||||||
|
}
|
||||||
|
if strings.EqualFold(trimmed, "localhost") {
|
||||||
|
return loopbackHost, true, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
ip := net.ParseIP(strings.Trim(trimmed, "[]"))
|
||||||
|
if ip != nil {
|
||||||
|
if ip.IsLoopback() || ip.IsUnspecified() {
|
||||||
|
return loopbackHost, trimmed != loopbackHost, nil
|
||||||
|
}
|
||||||
|
return loopbackHost, true, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
return loopbackHost, true, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func vendorImportBodyLimit() int64 {
|
||||||
|
return vendorImportMaxBytes + vendorImportMultipartOverheadBytes
|
||||||
|
}
|
||||||
|
|
||||||
|
func isVendorImportTooLarge(fileSize int64, err error) bool {
|
||||||
|
if fileSize > vendorImportMaxBytes {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
var maxBytesErr *http.MaxBytesError
|
||||||
|
return errors.As(err, &maxBytesErr)
|
||||||
|
}
|
||||||
|
|
||||||
func ensureDefaultConfigFile(configPath string) error {
|
func ensureDefaultConfigFile(configPath string) error {
|
||||||
if strings.TrimSpace(configPath) == "" {
|
if strings.TrimSpace(configPath) == "" {
|
||||||
return fmt.Errorf("config path is empty")
|
return fmt.Errorf("config path is empty")
|
||||||
@@ -557,11 +555,11 @@ func runSetupMode(local *localdb.LocalDB) {
|
|||||||
router := gin.New()
|
router := gin.New()
|
||||||
router.Use(gin.Recovery())
|
router.Use(gin.Recovery())
|
||||||
|
|
||||||
staticPath := filepath.Join("web", "static")
|
if staticFS, err := qfassets.StaticFS(); err == nil {
|
||||||
if stat, err := os.Stat(staticPath); err == nil && stat.IsDir() {
|
|
||||||
router.Static("/static", staticPath)
|
|
||||||
} else if staticFS, err := qfassets.StaticFS(); err == nil {
|
|
||||||
router.StaticFS("/static", http.FS(staticFS))
|
router.StaticFS("/static", http.FS(staticFS))
|
||||||
|
} else {
|
||||||
|
slog.Error("failed to load embedded static assets", "error", err)
|
||||||
|
os.Exit(1)
|
||||||
}
|
}
|
||||||
|
|
||||||
// Setup routes only
|
// Setup routes only
|
||||||
@@ -673,46 +671,14 @@ func setupDatabaseFromDSN(dsn string) (*gorm.DB, error) {
|
|||||||
return db, nil
|
return db, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.ConnectionManager, mariaDB *gorm.DB, dbUsername string, restartSig chan struct{}) (*gin.Engine, *sync.Service, error) {
|
func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.ConnectionManager, dbUsername string, restartSig chan struct{}) (*gin.Engine, *sync.Service, error) {
|
||||||
-	// mariaDB may be nil if we're in offline mode
-
-	// Repositories
-	var componentRepo *repository.ComponentRepository
-	var categoryRepo *repository.CategoryRepository
-	var statsRepo *repository.StatsRepository
-	var pricelistRepo *repository.PricelistRepository
-
-	// Only initialize repositories if we have a database connection
-	if mariaDB != nil {
-		componentRepo = repository.NewComponentRepository(mariaDB)
-		categoryRepo = repository.NewCategoryRepository(mariaDB)
-		statsRepo = repository.NewStatsRepository(mariaDB)
-		pricelistRepo = repository.NewPricelistRepository(mariaDB)
-	} else {
-		// In offline mode, we'll use nil repositories or handle them differently
-		// This is handled in the sync service and other components
-	}
-
-	// Services
-	var componentService *services.ComponentService
-	var quoteService *services.QuoteService
-	var exportService *services.ExportService
 	var syncService *sync.Service
 	var projectService *services.ProjectService

-	// Sync service always uses ConnectionManager (works offline and online)
 	syncService = sync.NewService(connMgr, local)
-	if mariaDB != nil {
-		componentService = services.NewComponentService(componentRepo, categoryRepo, statsRepo)
-		quoteService = services.NewQuoteService(componentRepo, statsRepo, pricelistRepo, local, nil)
-		exportService = services.NewExportService(cfg.Export, categoryRepo, local)
-	} else {
-		// In offline mode, we still need to create services that don't require DB.
-		componentService = services.NewComponentService(nil, nil, nil)
-		quoteService = services.NewQuoteService(nil, nil, nil, local, nil)
-		exportService = services.NewExportService(cfg.Export, nil, local)
-	}
+	componentService := services.NewComponentService(nil, nil, nil)
+	quoteService := services.NewQuoteService(nil, nil, nil, local, nil)
+	exportService := services.NewExportService(cfg.Export, nil, local)

 	// isOnline function for local-first architecture
 	isOnline := func() bool {
@@ -733,16 +699,6 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	if err := local.BackfillConfigurationProjects(dbUsername); err != nil {
 		slog.Warn("failed to backfill local configuration projects", "error", err)
 	}
-	if mariaDB != nil {
-		serverProjectRepo := repository.NewProjectRepository(mariaDB)
-		if removed, err := serverProjectRepo.PurgeEmptyNamelessProjects(); err == nil && removed > 0 {
-			slog.Info("purged empty nameless server projects", "removed", removed)
-		}
-		if err := serverProjectRepo.EnsureSystemProjectsAndBackfillConfigurations(); err != nil {
-			slog.Warn("failed to backfill server configuration projects", "error", err)
-		}
-	}
-
 	type pullState struct {
 		mu syncpkg.Mutex
 		running bool
@@ -820,8 +776,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	// Handlers
 	componentHandler := handlers.NewComponentHandler(componentService, local)
 	quoteHandler := handlers.NewQuoteHandler(quoteService)
-	exportHandler := handlers.NewExportHandler(exportService, configService, projectService)
+	exportHandler := handlers.NewExportHandler(exportService, configService, projectService, dbUsername)
 	pricelistHandler := handlers.NewPricelistHandler(local)
+	vendorSpecHandler := handlers.NewVendorSpecHandler(local)
+	partnumberBooksHandler := handlers.NewPartnumberBooksHandler(local)
+	respondError := handlers.RespondError
 	syncHandler, err := handlers.NewSyncHandler(local, syncService, connMgr, templatesPath, backgroundSyncInterval)
 	if err != nil {
 		return nil, nil, fmt.Errorf("creating sync handler: %w", err)
@@ -834,24 +793,24 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	}

 	// Web handler (templates)
-	webHandler, err := handlers.NewWebHandler(templatesPath, componentService)
+	webHandler, err := handlers.NewWebHandler(templatesPath, local)
 	if err != nil {
 		return nil, nil, err
 	}

 	// Router
 	router := gin.New()
+	router.MaxMultipartMemory = vendorImportBodyLimit()
 	router.Use(gin.Recovery())
 	router.Use(requestLogger())
 	router.Use(middleware.CORS())
 	router.Use(middleware.OfflineDetector(connMgr, local))

 	// Static files (use filepath.Join for Windows compatibility)
-	staticPath := filepath.Join("web", "static")
-	if stat, err := os.Stat(staticPath); err == nil && stat.IsDir() {
-		router.Static("/static", staticPath)
-	} else if staticFS, err := qfassets.StaticFS(); err == nil {
+	if staticFS, err := qfassets.StaticFS(); err == nil {
 		router.StaticFS("/static", http.FS(staticFS))
+	} else {
+		return nil, nil, fmt.Errorf("load embedded static assets: %w", err)
 	}

 	// Health check
@@ -862,17 +821,17 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		})
 	})

-	// Restart endpoint (for development purposes)
-	router.POST("/api/restart", func(c *gin.Context) {
-		// This will cause the server to restart by exiting
-		// The restartProcess function will be called to restart the process
-		slog.Info("Restart requested via API")
-		go func() {
-			time.Sleep(100 * time.Millisecond)
-			restartProcess()
-		}()
-		c.JSON(http.StatusOK, gin.H{"message": "restarting..."})
-	})
+	// Restart endpoint is intentionally debug-only.
+	if cfg.Server.Mode == "debug" {
+		router.POST("/api/restart", func(c *gin.Context) {
+			slog.Info("Restart requested via API")
+			go func() {
+				time.Sleep(100 * time.Millisecond)
+				restartProcess()
+			}()
+			c.JSON(http.StatusOK, gin.H{"message": "restarting..."})
+		})
+	}

 	// DB status endpoint
 	router.GET("/api/db-status", func(c *gin.Context) {
@@ -891,20 +850,8 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			}
 		}

-		// Optional diagnostics mode with server table counts.
-		if includeCounts && status.IsConnected {
-			if db, err := connMgr.GetDB(); err == nil && db != nil {
-				_ = db.Table("lot").Count(&lotCount)
-				_ = db.Table("lot_log").Count(&lotLogCount)
-				_ = db.Table("qt_lot_metadata").Count(&metadataCount)
-			} else if err != nil {
-				dbOK = false
-				dbError = err.Error()
-			} else {
-				dbOK = false
-				dbError = "Database not connected (offline mode)"
-			}
-		} else {
+		// Runtime diagnostics stay local-only. Server table counts are intentionally unavailable here.
+		if !includeCounts || !status.IsConnected {
 			lotCount = 0
 			lotLogCount = 0
 			metadataCount = 0
@@ -920,11 +867,10 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		})
 	})

-	// Current user info (DB user, not app user)
+	// Current user info (local DB username)
 	router.GET("/api/current-user", func(c *gin.Context) {
 		c.JSON(http.StatusOK, gin.H{
 			"username": local.GetDBUser(),
-			"role": "db_user",
 		})
 	})

@@ -943,6 +889,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	router.GET("/configs/:uuid/revisions", webHandler.ConfigRevisions)
 	router.GET("/pricelists", webHandler.Pricelists)
 	router.GET("/pricelists/:id", webHandler.PricelistDetail)
+	router.GET("/partnumber-books", webHandler.PartnumberBooks)

 	// htmx partials
 	partials := router.Group("/partials")
@@ -992,6 +939,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			pricelists.GET("/:id/lots", pricelistHandler.GetLotNames)
 		}

+		// Partnumber books (read-only)
+		pnBooks := api.Group("/partnumber-books")
+		{
+			pnBooks.GET("", partnumberBooksHandler.List)
+			pnBooks.GET("/:id", partnumberBooksHandler.GetItems)
+		}
+
 		// Configurations (public - RBAC disabled)
 		configs := api.Group("/configs")
 		{
@@ -1009,7 +963,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

 			cfgs, total, err := configService.ListAllWithStatus(page, perPage, status, search)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}

@@ -1030,7 +984,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 					c.JSON(http.StatusServiceUnavailable, gin.H{"error": "Database is offline"})
 					return
 				}
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}
 			c.JSON(http.StatusOK, result)
@@ -1039,13 +993,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		configs.POST("", func(c *gin.Context) {
 			var req services.CreateConfigRequest
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}

 			config, err := configService.Create(dbUsername, &req)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}

@@ -1055,12 +1009,12 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		configs.POST("/preview-article", func(c *gin.Context) {
 			var req services.ArticlePreviewRequest
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}
 			result, err := configService.BuildArticlePreview(&req)
 			if err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}
 			c.JSON(http.StatusOK, gin.H{
@@ -1083,7 +1037,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			uuid := c.Param("uuid")
 			var req services.CreateConfigRequest
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}

@@ -1091,13 +1045,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			if err != nil {
 				switch {
 				case errors.Is(err, services.ErrConfigNotFound):
-					c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+					respondError(c, http.StatusNotFound, "resource not found", err)
 				case errors.Is(err, services.ErrProjectNotFound):
-					c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+					respondError(c, http.StatusNotFound, "resource not found", err)
 				case errors.Is(err, services.ErrProjectForbidden):
-					c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
+					respondError(c, http.StatusForbidden, "access denied", err)
 				default:
-					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+					respondError(c, http.StatusInternalServerError, "internal server error", err)
 				}
 				return
 			}
@@ -1108,7 +1062,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		configs.DELETE("/:uuid", func(c *gin.Context) {
 			uuid := c.Param("uuid")
 			if err := configService.DeleteNoAuth(uuid); err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}
 			c.JSON(http.StatusOK, gin.H{"message": "archived"})
@@ -1118,7 +1072,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			uuid := c.Param("uuid")
 			config, err := configService.ReactivateNoAuth(uuid)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}
 			c.JSON(http.StatusOK, gin.H{
@@ -1133,13 +1087,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				Name string `json:"name"`
 			}
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}

 			config, err := configService.RenameNoAuth(uuid, req.Name)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}

@@ -1153,7 +1107,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				FromVersion int `json:"from_version"`
 			}
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}

@@ -1163,7 +1117,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 					c.JSON(http.StatusNotFound, gin.H{"error": "version not found"})
 					return
 				}
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}

@@ -1172,9 +1126,14 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

 		configs.POST("/:uuid/refresh-prices", func(c *gin.Context) {
 			uuid := c.Param("uuid")
-			config, err := configService.RefreshPricesNoAuth(uuid)
+			var req struct {
+				PricelistID *uint `json:"pricelist_id"`
+			}
+			// Ignore bind error — pricelist_id is optional
+			_ = c.ShouldBindJSON(&req)
+			config, err := configService.RefreshPricesNoAuth(uuid, req.PricelistID)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}
 			c.JSON(http.StatusOK, config)
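The refresh-prices hunk above deliberately discards the JSON bind error (`_ = c.ShouldBindJSON(&req)`) so that a missing or empty request body leaves `PricelistID` as nil, meaning "use the default pricelist". The same optional-body pattern with plain `encoding/json` (`parseRefreshReq` is an illustrative helper, not project code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// refreshReq mirrors the anonymous struct in the handler: a pointer field
// distinguishes "not sent" (nil) from "sent as zero".
type refreshReq struct {
	PricelistID *uint `json:"pricelist_id"`
}

func parseRefreshReq(body string) refreshReq {
	var req refreshReq
	// Decode errors are ignored on purpose: an empty body is valid and
	// simply keeps PricelistID nil.
	_ = json.NewDecoder(strings.NewReader(body)).Decode(&req)
	return req
}

func main() {
	fmt.Println(parseRefreshReq("").PricelistID == nil) // prints: true
	if id := parseRefreshReq(`{"pricelist_id": 7}`).PricelistID; id != nil {
		fmt.Println(*id) // prints: 7
	}
}
```

The pointer type is the important part: with a plain `uint`, an omitted field and an explicit `0` would be indistinguishable.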
@@ -1186,20 +1145,20 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				ProjectUUID string `json:"project_uuid"`
 			}
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}
 			updated, err := configService.SetProjectNoAuth(uuid, req.ProjectUUID)
 			if err != nil {
 				switch {
 				case errors.Is(err, services.ErrConfigNotFound):
-					c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+					respondError(c, http.StatusNotFound, "resource not found", err)
 				case errors.Is(err, services.ErrProjectNotFound):
-					c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+					respondError(c, http.StatusNotFound, "resource not found", err)
 				case errors.Is(err, services.ErrProjectForbidden):
-					c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
+					respondError(c, http.StatusForbidden, "access denied", err)
 				default:
-					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+					respondError(c, http.StatusInternalServerError, "internal server error", err)
 				}
 				return
 			}
@@ -1228,7 +1187,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				case errors.Is(err, services.ErrInvalidVersionNumber):
 					c.JSON(http.StatusBadRequest, gin.H{"error": "invalid paging params"})
 				default:
-					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+					respondError(c, http.StatusInternalServerError, "internal server error", err)
 				}
 				return
 			}
@@ -1256,7 +1215,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				case errors.Is(err, services.ErrConfigVersionNotFound):
 					c.JSON(http.StatusNotFound, gin.H{"error": "version not found"})
 				default:
-					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+					respondError(c, http.StatusInternalServerError, "internal server error", err)
 				}
 				return
 			}
@@ -1271,7 +1230,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				Note string `json:"note"`
 			}
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}
 			if req.TargetVersion <= 0 {
@@ -1289,7 +1248,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				case errors.Is(err, services.ErrVersionConflict):
 					c.JSON(http.StatusConflict, gin.H{"error": "version conflict"})
 				default:
-					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+					respondError(c, http.StatusInternalServerError, "internal server error", err)
 				}
 				return
 			}
@@ -1312,18 +1271,24 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

 		configs.GET("/:uuid/export", exportHandler.ExportConfigCSV)

+		// Vendor spec (BOM) endpoints
+		configs.GET("/:uuid/vendor-spec", vendorSpecHandler.GetVendorSpec)
+		configs.PUT("/:uuid/vendor-spec", vendorSpecHandler.PutVendorSpec)
+		configs.POST("/:uuid/vendor-spec/resolve", vendorSpecHandler.ResolveVendorSpec)
+		configs.POST("/:uuid/vendor-spec/apply", vendorSpecHandler.ApplyVendorSpec)
+
 		configs.PATCH("/:uuid/server-count", func(c *gin.Context) {
 			uuid := c.Param("uuid")
 			var req struct {
 				ServerCount int `json:"server_count" binding:"required,min=1"`
 			}
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}
 			config, err := configService.UpdateServerCount(uuid, req.ServerCount)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}
 			c.JSON(http.StatusOK, config)
@@ -1368,7 +1333,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect

 			allProjects, err := projectService.ListByUser(dbUsername, true)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}

@@ -1502,7 +1467,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		projects.GET("/all", func(c *gin.Context) {
 			allProjects, err := projectService.ListByUser(dbUsername, true)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 				return
 			}

@@ -1532,7 +1497,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		projects.POST("", func(c *gin.Context) {
 			var req services.CreateProjectRequest
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}
 			if strings.TrimSpace(req.Code) == "" {
@@ -1542,10 +1507,12 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			project, err := projectService.Create(dbUsername, &req)
 			if err != nil {
 				switch {
+				case errors.Is(err, services.ErrReservedMainVariant):
+					respondError(c, http.StatusBadRequest, "invalid request", err)
 				case errors.Is(err, services.ErrProjectCodeExists):
-					c.JSON(http.StatusConflict, gin.H{"error": err.Error()})
+					respondError(c, http.StatusConflict, "conflict detected", err)
 				default:
-					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+					respondError(c, http.StatusInternalServerError, "internal server error", err)
 				}
 				return
 			}
@@ -1557,11 +1524,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			if err != nil {
 				switch {
 				case errors.Is(err, services.ErrProjectNotFound):
-					c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+					respondError(c, http.StatusNotFound, "resource not found", err)
 				case errors.Is(err, services.ErrProjectForbidden):
-					c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
+					respondError(c, http.StatusForbidden, "access denied", err)
 				default:
-					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+					respondError(c, http.StatusInternalServerError, "internal server error", err)
 				}
 				return
 			}
@@ -1571,20 +1538,23 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		projects.PUT("/:uuid", func(c *gin.Context) {
 			var req services.UpdateProjectRequest
 			if err := c.ShouldBindJSON(&req); err != nil {
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 				return
 			}
 			project, err := projectService.Update(c.Param("uuid"), dbUsername, &req)
 			if err != nil {
 				switch {
+				case errors.Is(err, services.ErrReservedMainVariant),
+					errors.Is(err, services.ErrCannotRenameMainVariant):
+					respondError(c, http.StatusBadRequest, "invalid request", err)
 				case errors.Is(err, services.ErrProjectCodeExists):
-					c.JSON(http.StatusConflict, gin.H{"error": err.Error()})
+					respondError(c, http.StatusConflict, "conflict detected", err)
 				case errors.Is(err, services.ErrProjectNotFound):
-					c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+					respondError(c, http.StatusNotFound, "resource not found", err)
 				case errors.Is(err, services.ErrProjectForbidden):
-					c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
+					respondError(c, http.StatusForbidden, "access denied", err)
 				default:
-					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+					respondError(c, http.StatusInternalServerError, "internal server error", err)
 				}
 				return
 			}
@@ -1595,11 +1565,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
|
|||||||
if err := projectService.Archive(c.Param("uuid"), dbUsername); err != nil {
|
if err := projectService.Archive(c.Param("uuid"), dbUsername); err != nil {
|
||||||
switch {
|
switch {
|
||||||
case errors.Is(err, services.ErrProjectNotFound):
|
case errors.Is(err, services.ErrProjectNotFound):
|
||||||
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
|
respondError(c, http.StatusNotFound, "resource not found", err)
|
||||||
case errors.Is(err, services.ErrProjectForbidden):
|
case errors.Is(err, services.ErrProjectForbidden):
|
||||||
c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
|
respondError(c, http.StatusForbidden, "access denied", err)
|
||||||
default:
|
default:
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
respondError(c, http.StatusInternalServerError, "internal server error", err)
|
||||||
}
|
}
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -1610,11 +1580,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
|
|||||||
if err := projectService.Reactivate(c.Param("uuid"), dbUsername); err != nil {
|
if err := projectService.Reactivate(c.Param("uuid"), dbUsername); err != nil {
|
||||||
switch {
|
switch {
|
||||||
case errors.Is(err, services.ErrProjectNotFound):
|
case errors.Is(err, services.ErrProjectNotFound):
|
||||||
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
|
respondError(c, http.StatusNotFound, "resource not found", err)
|
||||||
case errors.Is(err, services.ErrProjectForbidden):
|
case errors.Is(err, services.ErrProjectForbidden):
|
||||||
c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
|
respondError(c, http.StatusForbidden, "access denied", err)
|
||||||
default:
|
default:
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
respondError(c, http.StatusInternalServerError, "internal server error", err)
|
||||||
}
|
}
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -1625,13 +1595,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		if err := projectService.DeleteVariant(c.Param("uuid"), dbUsername); err != nil {
 			switch {
 			case errors.Is(err, services.ErrCannotDeleteMainVariant):
-				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				respondError(c, http.StatusBadRequest, "invalid request", err)
 			case errors.Is(err, services.ErrProjectNotFound):
-				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+				respondError(c, http.StatusNotFound, "resource not found", err)
 			case errors.Is(err, services.ErrProjectForbidden):
-				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
+				respondError(c, http.StatusForbidden, "access denied", err)
 			default:
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 			}
 			return
 		}
@@ -1651,11 +1621,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		if err != nil {
 			switch {
 			case errors.Is(err, services.ErrProjectNotFound):
-				c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+				respondError(c, http.StatusNotFound, "resource not found", err)
 			case errors.Is(err, services.ErrProjectForbidden):
-				c.JSON(http.StatusForbidden, gin.H{"error": err.Error()})
+				respondError(c, http.StatusForbidden, "access denied", err)
 			default:
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				respondError(c, http.StatusInternalServerError, "internal server error", err)
 			}
 			return
 		}
@@ -1663,10 +1633,47 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		c.JSON(http.StatusOK, result)
 	})
 
+	projects.PATCH("/:uuid/configs/reorder", func(c *gin.Context) {
+		var req struct {
+			OrderedUUIDs []string `json:"ordered_uuids"`
+		}
+		if err := c.ShouldBindJSON(&req); err != nil {
+			respondError(c, http.StatusBadRequest, "invalid request", err)
+			return
+		}
+		if len(req.OrderedUUIDs) == 0 {
+			c.JSON(http.StatusBadRequest, gin.H{"error": "ordered_uuids is required"})
+			return
+		}
+
+		configs, err := configService.ReorderProjectConfigurationsNoAuth(c.Param("uuid"), req.OrderedUUIDs)
+		if err != nil {
+			switch {
+			case errors.Is(err, services.ErrProjectNotFound):
+				respondError(c, http.StatusNotFound, "resource not found", err)
+			default:
+				respondError(c, http.StatusBadRequest, "invalid request", err)
+			}
+			return
+		}
+
+		total := 0.0
+		for i := range configs {
+			if configs[i].TotalPrice != nil {
+				total += *configs[i].TotalPrice
+			}
+		}
+		c.JSON(http.StatusOK, gin.H{
+			"project_uuid":   c.Param("uuid"),
+			"configurations": configs,
+			"total":          total,
+		})
+	})
+
 	projects.POST("/:uuid/configs", func(c *gin.Context) {
 		var req services.CreateConfigRequest
 		if err := c.ShouldBindJSON(&req); err != nil {
-			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+			respondError(c, http.StatusBadRequest, "invalid request", err)
 			return
 		}
 		projectUUID := c.Param("uuid")
@@ -1674,27 +1681,79 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 
 		config, err := configService.Create(dbUsername, &req)
 		if err != nil {
-			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+			respondError(c, http.StatusInternalServerError, "internal server error", err)
 			return
 		}
 		c.JSON(http.StatusCreated, config)
 	})
 
+	projects.POST("/:uuid/vendor-import", func(c *gin.Context) {
+		c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, vendorImportBodyLimit())
+		fileHeader, err := c.FormFile("file")
+		if err != nil {
+			if isVendorImportTooLarge(0, err) {
+				respondError(c, http.StatusBadRequest, "vendor workspace file exceeds 1 GiB limit", errVendorImportTooLarge)
+				return
+			}
+			respondError(c, http.StatusBadRequest, "file is required", err)
+			return
+		}
+		if isVendorImportTooLarge(fileHeader.Size, nil) {
+			respondError(c, http.StatusBadRequest, "vendor workspace file exceeds 1 GiB limit", errVendorImportTooLarge)
+			return
+		}
+
+		file, err := fileHeader.Open()
+		if err != nil {
+			respondError(c, http.StatusBadRequest, "failed to open uploaded file", err)
+			return
+		}
+		defer file.Close()
+
+		data, err := io.ReadAll(io.LimitReader(file, vendorImportMaxBytes+1))
+		if err != nil {
+			respondError(c, http.StatusBadRequest, "failed to read uploaded file", err)
+			return
+		}
+		if int64(len(data)) > vendorImportMaxBytes {
+			respondError(c, http.StatusBadRequest, "vendor workspace file exceeds 1 GiB limit", errVendorImportTooLarge)
+			return
+		}
+		if !services.IsCFXMLWorkspace(data) {
+			c.JSON(http.StatusBadRequest, gin.H{"error": "unsupported vendor export format"})
+			return
+		}
+
+		result, err := configService.ImportVendorWorkspaceToProject(c.Param("uuid"), fileHeader.Filename, data, dbUsername)
+		if err != nil {
+			switch {
+			case errors.Is(err, services.ErrProjectNotFound):
+				respondError(c, http.StatusNotFound, "resource not found", err)
+			default:
+				respondError(c, http.StatusBadRequest, "invalid request", err)
+			}
+			return
+		}
+
+		c.JSON(http.StatusCreated, result)
+	})
+
 	projects.GET("/:uuid/export", exportHandler.ExportProjectCSV)
+	projects.POST("/:uuid/export", exportHandler.ExportProjectPricingCSV)
+
 	projects.POST("/:uuid/configs/:config_uuid/clone", func(c *gin.Context) {
 		var req struct {
 			Name string `json:"name"`
 		}
 		if err := c.ShouldBindJSON(&req); err != nil {
-			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+			respondError(c, http.StatusBadRequest, "invalid request", err)
 			return
 		}
 
 		projectUUID := c.Param("uuid")
 		config, err := configService.CloneNoAuthToProject(c.Param("config_uuid"), req.Name, dbUsername, &projectUUID)
 		if err != nil {
-			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+			respondError(c, http.StatusInternalServerError, "internal server error", err)
 			return
 		}
 		c.JSON(http.StatusCreated, config)
@@ -1710,6 +1769,8 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	syncAPI.GET("/users-status", syncHandler.GetUsersStatus)
 	syncAPI.POST("/components", syncHandler.SyncComponents)
 	syncAPI.POST("/pricelists", syncHandler.SyncPricelists)
+	syncAPI.POST("/partnumber-books", syncHandler.SyncPartnumberBooks)
+	syncAPI.POST("/partnumber-seen", syncHandler.ReportPartnumberSeen)
 	syncAPI.POST("/all", syncHandler.SyncAll)
 	syncAPI.POST("/push", syncHandler.PushPendingChanges)
 	syncAPI.GET("/pending/count", syncHandler.GetPendingCount)
@@ -1766,22 +1827,12 @@ func requestLogger() gin.HandlerFunc {
 		path := c.Request.URL.Path
 		query := c.Request.URL.RawQuery
 
-		blw := &captureResponseWriter{
-			ResponseWriter: c.Writer,
-			body:           bytes.NewBuffer(nil),
-		}
-		c.Writer = blw
-
 		c.Next()
 
 		latency := time.Since(start)
 		status := c.Writer.Status()
 
 		if status >= http.StatusBadRequest {
-			responseBody := strings.TrimSpace(blw.body.String())
-			if len(responseBody) > 2048 {
-				responseBody = responseBody[:2048] + "...(truncated)"
-			}
 			errText := strings.TrimSpace(c.Errors.String())
 
 			slog.Error("request failed",
@@ -1792,7 +1843,6 @@ func requestLogger() gin.HandlerFunc {
 				"latency", latency,
 				"ip", c.ClientIP(),
 				"errors", errText,
-				"response", responseBody,
 			)
 			return
 		}
@@ -1807,22 +1857,3 @@ func requestLogger() gin.HandlerFunc {
 		)
 	}
 }
-
-type captureResponseWriter struct {
-	gin.ResponseWriter
-	body *bytes.Buffer
-}
-
-func (w *captureResponseWriter) Write(b []byte) (int, error) {
-	if len(b) > 0 {
-		_, _ = w.body.Write(b)
-	}
-	return w.ResponseWriter.Write(b)
-}
-
-func (w *captureResponseWriter) WriteString(s string) (int, error) {
-	if s != "" {
-		_, _ = w.body.WriteString(s)
-	}
-	return w.ResponseWriter.WriteString(s)
-}
48  cmd/qfs/request_logger_test.go  (new file)
@@ -0,0 +1,48 @@
+package main
+
+import (
+	"bytes"
+	"errors"
+	"log/slog"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"testing"
+
+	"github.com/gin-gonic/gin"
+)
+
+func TestRequestLoggerDoesNotLogResponseBody(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	var logBuffer bytes.Buffer
+	previousLogger := slog.Default()
+	slog.SetDefault(slog.New(slog.NewTextHandler(&logBuffer, &slog.HandlerOptions{})))
+	defer slog.SetDefault(previousLogger)
+
+	router := gin.New()
+	router.Use(requestLogger())
+	router.GET("/fail", func(c *gin.Context) {
+		_ = c.Error(errors.New("root cause"))
+		c.JSON(http.StatusBadRequest, gin.H{"error": "do not log this body"})
+	})
+
+	rec := httptest.NewRecorder()
+	req := httptest.NewRequest(http.MethodGet, "/fail?debug=1", nil)
+	router.ServeHTTP(rec, req)
+
+	if rec.Code != http.StatusBadRequest {
+		t.Fatalf("expected 400, got %d", rec.Code)
+	}
+
+	logOutput := logBuffer.String()
+	if !strings.Contains(logOutput, "request failed") {
+		t.Fatalf("expected request failure log, got %q", logOutput)
+	}
+	if strings.Contains(logOutput, "do not log this body") {
+		t.Fatalf("response body leaked into logs: %q", logOutput)
+	}
+	if !strings.Contains(logOutput, "root cause") {
+		t.Fatalf("expected error details in logs, got %q", logOutput)
+	}
+}
@@ -3,10 +3,12 @@ package main
 import (
 	"bytes"
 	"encoding/json"
+	"mime/multipart"
 	"net/http"
 	"net/http/httptest"
 	"os"
 	"path/filepath"
+	"strings"
 	"testing"
 
 	"git.mchus.pro/mchus/quoteforge/internal/config"
@@ -37,7 +39,7 @@ func TestConfigurationVersioningAPI(t *testing.T) {
 
 	cfg := &config.Config{}
 	setConfigDefaults(cfg)
-	router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
+	router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
 	if err != nil {
 		t.Fatalf("setup router: %v", err)
 	}
@@ -77,7 +79,7 @@ func TestConfigurationVersioningAPI(t *testing.T) {
 	if err := json.Unmarshal(rbRec.Body.Bytes(), &rbResp); err != nil {
 		t.Fatalf("unmarshal rollback response: %v", err)
 	}
-	if rbResp.Message == "" || rbResp.CurrentVersion.VersionNo != 3 {
+	if rbResp.Message == "" || rbResp.CurrentVersion.VersionNo != 2 {
 		t.Fatalf("unexpected rollback response: %+v", rbResp)
 	}
 
@@ -144,7 +146,7 @@ func TestProjectArchiveHidesConfigsAndCloneIntoProject(t *testing.T) {
 
 	cfg := &config.Config{}
 	setConfigDefaults(cfg)
-	router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
+	router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
 	if err != nil {
 		t.Fatalf("setup router: %v", err)
 	}
@@ -238,7 +240,7 @@ func TestConfigMoveToProjectEndpoint(t *testing.T) {
 	local, connMgr, _ := newAPITestStack(t)
 	cfg := &config.Config{}
 	setConfigDefaults(cfg)
-	router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
+	router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
 	if err != nil {
 		t.Fatalf("setup router: %v", err)
 	}
@@ -290,6 +292,88 @@ func TestConfigMoveToProjectEndpoint(t *testing.T) {
 	}
 }
+
+func TestVendorImportRejectsOversizedUpload(t *testing.T) {
+	moveToRepoRoot(t)
+
+	prevLimit := vendorImportMaxBytes
+	vendorImportMaxBytes = 128
+	defer func() { vendorImportMaxBytes = prevLimit }()
+
+	local, connMgr, _ := newAPITestStack(t)
+	cfg := &config.Config{}
+	setConfigDefaults(cfg)
+	router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
+	if err != nil {
+		t.Fatalf("setup router: %v", err)
+	}
+
+	createProjectReq := httptest.NewRequest(http.MethodPost, "/api/projects", bytes.NewReader([]byte(`{"name":"Import Project","code":"IMP"}`)))
+	createProjectReq.Header.Set("Content-Type", "application/json")
+	createProjectRec := httptest.NewRecorder()
+	router.ServeHTTP(createProjectRec, createProjectReq)
+	if createProjectRec.Code != http.StatusCreated {
+		t.Fatalf("create project status=%d body=%s", createProjectRec.Code, createProjectRec.Body.String())
+	}
+
+	var project models.Project
+	if err := json.Unmarshal(createProjectRec.Body.Bytes(), &project); err != nil {
+		t.Fatalf("unmarshal project: %v", err)
+	}
+
+	var body bytes.Buffer
+	writer := multipart.NewWriter(&body)
+	part, err := writer.CreateFormFile("file", "huge.xml")
+	if err != nil {
+		t.Fatalf("create form file: %v", err)
+	}
+	payload := "<CFXML>" + strings.Repeat("A", int(vendorImportMaxBytes)+1) + "</CFXML>"
+	if _, err := part.Write([]byte(payload)); err != nil {
+		t.Fatalf("write multipart payload: %v", err)
+	}
+	if err := writer.Close(); err != nil {
+		t.Fatalf("close multipart writer: %v", err)
+	}
+
+	req := httptest.NewRequest(http.MethodPost, "/api/projects/"+project.UUID+"/vendor-import", &body)
+	req.Header.Set("Content-Type", writer.FormDataContentType())
+	rec := httptest.NewRecorder()
+	router.ServeHTTP(rec, req)
+
+	if rec.Code != http.StatusBadRequest {
+		t.Fatalf("expected 400 for oversized upload, got %d body=%s", rec.Code, rec.Body.String())
+	}
+	if !strings.Contains(rec.Body.String(), "1 GiB") {
+		t.Fatalf("expected size limit message, got %s", rec.Body.String())
+	}
+}
+
+func TestCreateConfigMalformedJSONReturnsGenericError(t *testing.T) {
+	moveToRepoRoot(t)
+
+	local, connMgr, _ := newAPITestStack(t)
+	cfg := &config.Config{}
+	setConfigDefaults(cfg)
+	router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
+	if err != nil {
+		t.Fatalf("setup router: %v", err)
+	}
+
+	req := httptest.NewRequest(http.MethodPost, "/api/configs", bytes.NewReader([]byte(`{"name":`)))
+	req.Header.Set("Content-Type", "application/json")
+	rec := httptest.NewRecorder()
+	router.ServeHTTP(rec, req)
+
+	if rec.Code != http.StatusBadRequest {
+		t.Fatalf("expected 400 for malformed json, got %d body=%s", rec.Code, rec.Body.String())
+	}
+	if strings.Contains(strings.ToLower(rec.Body.String()), "unexpected eof") {
+		t.Fatalf("expected sanitized error body, got %s", rec.Body.String())
+	}
+	if !strings.Contains(rec.Body.String(), "invalid request") {
+		t.Fatalf("expected generic invalid request message, got %s", rec.Body.String())
+	}
+}
 
 func newAPITestStack(t *testing.T) (*localdb.LocalDB, *db.ConnectionManager, *services.LocalConfigurationService) {
 	t.Helper()
 
@@ -1,61 +1,18 @@
-# QuoteForge Configuration
-# Copy this file to config.yaml and update values
+# QuoteForge runtime config
+# Runtime creates a minimal config automatically on first start.
+# This file is only a reference template.
 
 server:
-  host: "127.0.0.1" # Use 0.0.0.0 to listen on all interfaces
+  host: "127.0.0.1" # Loopback only; remote HTTP binding is unsupported
   port: 8080
   mode: "release" # debug | release
   read_timeout: "30s"
   write_timeout: "30s"
 
-database:
-  host: "localhost"
-  port: 3306
-  name: "RFQ_LOG"
-  user: "quoteforge"
-  password: "CHANGE_ME"
-  max_open_conns: 25
-  max_idle_conns: 5
-  conn_max_lifetime: "5m"
-
-auth:
-  jwt_secret: "CHANGE_ME_MIN_32_CHARACTERS_LONG"
-  token_expiry: "24h"
-  refresh_expiry: "168h" # 7 days
-
-pricing:
-  default_method: "weighted_median" # median | average | weighted_median
-  default_period_days: 90
-  freshness_green_days: 30
-  freshness_yellow_days: 60
-  freshness_red_days: 90
-  min_quotes_for_median: 3
-  popularity_decay_days: 180
-
-export:
-  temp_dir: "/tmp/quoteforge-exports"
-  max_file_age: "1h"
-  company_name: "Your Company Name"
-
 backup:
   time: "00:00"
 
-alerts:
-  enabled: true
-  check_interval: "1h"
-  high_demand_threshold: 5 # quotes over the last 30 days
-  trending_threshold_percent: 50 # growth % that triggers an alert
-
-notifications:
-  email_enabled: false
-  smtp_host: "smtp.example.com"
-  smtp_port: 587
-  smtp_user: ""
-  smtp_password: ""
-  from_address: "quoteforge@example.com"
-
 logging:
   level: "info" # debug | info | warn | error
   format: "json" # json | text
-  output: "stdout" # stdout | file
-  file_path: "/var/log/quoteforge/app.log"
+  output: "stdout" # stdout | stderr | /path/to/file
BIN
docs/storage-components-guide.docx
Normal file
BIN
docs/storage-components-guide.docx
Normal file
Binary file not shown.
213
docs/storage-components-guide.md
Normal file
213
docs/storage-components-guide.md
Normal file
@@ -0,0 +1,213 @@
|
|||||||
|
# Руководство по составлению каталога лотов СХД
|
||||||
|
|
||||||
|
## Что такое LOT и зачем он нужен
|
||||||
|
|
||||||
|
LOT — это внутренний идентификатор типа компонента в системе QuoteForge.
|
||||||
|
|
||||||
|
Каждый LOT представляет одну рыночную позицию и хранит **средневзвешенную рыночную цену**, рассчитанную по историческим данным от поставщиков. Это позволяет получать актуальную оценку стоимости независимо от конкретного поставщика или прайс-листа.
|
||||||
|
|
||||||
|
Партномера вендора (Part Number, Feature Code) сами по себе не имеют цены в системе — они **переводятся в LOT** через книгу партномеров. Именно через LOT происходит расценка конфигурации.
|
||||||
|
|
||||||
|
**Пример:** Feature Code `B4B9` и Part Number `4C57A14368` — это два разных обозначения одной и той же HIC-карты от Lenovo. Оба маппируются на один LOT `HIC_4pFC32`, у которого есть рыночная цена.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Категории и вкладки конфигуратора
|
||||||
|
|
||||||
|
Категория LOT определяет, в какой вкладке конфигуратора он появится.
|
||||||
|
|
||||||
|
| Код категории | Название | Вкладка | Что сюда относится |
|
||||||
|
|---|---|---|---|
|
||||||
|
| `ENC` | Storage Enclosure | **Base** | Дисковая полка без контроллера |
|
||||||
|
| `DKC` | Disk/Controller Enclosure | **Base** | Контроллерная полка: модель СХД + тип дисков + кол-во слотов + кол-во контроллеров |
|
||||||
|
| `CTL` | Storage Controller | **Base** | Контроллер СХД: объём кэша + встроенные хост-порты |
|
||||||
|
| `HIC` | Host Interface Card | **PCI** | HIC-карты СХД: интерфейсы подключения (FC, iSCSI, SAS) |
|
||||||
|
| `HDD` | HDD | **Storage** | Жёсткие диски (HDD) |
|
||||||
|
| `SSD` | SSD | **Storage** | Твердотельные диски (SSD, NVMe) |
|
||||||
|
| `ACC` | Accessories | **Accessories** | Кабели подключения, кабели питания |
|
||||||
|
| `SW` | Software | **SW** | Программные лицензии |
|
||||||
|
| *(прочее)* | — | **Other** | Гарантийные опции, инсталляция |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Правила именования LOT
|
||||||
|
|
||||||
|
Формат: `КАТЕГОРИЯ_МОДЕЛЬСХД_СПЕЦИФИКА`
|
||||||
|
|
||||||
|
- только латиница, цифры и знак `_`
|
||||||
|
- регистр — ВЕРХНИЙ
|
||||||
|
- без пробелов, дефисов, точек
|
||||||
|
- каждый LOT уникален — два разных компонента не могут иметь одинаковое имя
|
||||||
|
|
||||||
|
### DKC — контроллерная полка
|
||||||
|
|
||||||
|
Специфика: `ТИПДИСКА_СЛОТЫ_NCTRL`
|
||||||
|
|
||||||
|
| Пример | Расшифровка |
|
||||||
|
|---|---|
|
||||||
|
| `DKC_DE4000H_SFF_24_2CTRL` | DE4000H, 24 слота SFF (2.5"), 2 контроллера |
|
||||||
|
| `DKC_DE4000H_LFF_12_2CTRL` | DE4000H, 12 слотов LFF (3.5"), 2 контроллера |
|
||||||
|
| `DKC_DE4000H_SFF_24_1CTRL` | DE4000H, 24 слота SFF, 1 контроллер (симплекс) |
|
||||||
|
|
||||||
|
Обозначения типа диска: `SFF` — 2.5", `LFF` — 3.5", `NVMe` — U.2/U.3.
|
||||||
|
|
||||||
|
### CTL — контроллер
|
||||||
|
|
||||||
|
Специфика: `КЭШГБ_ПОРТЫТИП` (если встроенные порты есть) или `КЭШГБ_BASE` (если без портов, добавляются через HIC)
|
||||||
|
|
||||||
|
| Пример | Расшифровка |
|
||||||
|
|---|---|
|
||||||
|
| `CTL_DE4000H_32GB_BASE` | 32GB кэш, без встроенных хост-портов |
|
||||||
|
| `CTL_DE4000H_8GB_BASE` | 8GB кэш, без встроенных хост-портов |
|
||||||
|
| `CTL_MSA2060_8GB_ISCSI10G_4P` | 8GB кэш, встроенные 4× iSCSI 10GbE |
|
||||||
|
|
||||||
|
### HIC — HIC-карты (интерфейс подключения)
|
||||||
|
|
||||||
|
Специфика: `NpПРОТОКОЛ` — без привязки к модели СХД, по аналогии с серверными `HBA_2pFC16`, `HBA_4pFC32_Gen6`.
|
||||||
|
|
||||||
|
| Пример | Расшифровка |
|
||||||
|
|---|---|
|
||||||
|
| `HIC_4pFC32` | 4 порта FC 32Gb |
|
||||||
|
| `HIC_4pFC16` | 4 порта FC 16G/10GbE |
|
||||||
|
| `HIC_4p25G_iSCSI` | 4 порта 25G iSCSI |
|
||||||
|
| `HIC_4p12G_SAS` | 4 порта SAS 12Gb |
|
||||||
|
| `HIC_2p10G_BaseT` | 2 порта 10G Base-T |
|
||||||
|
|
||||||
|
### HDD / SSD / NVMe — диски
|
||||||
|
|
||||||
|
Диски **не привязываются к модели СХД** — используются существующие LOT из серверного каталога (`HDD_...`, `SSD_...`, `NVME_...`). Новые LOT для дисков СХД не создаются; партномера дисков маппируются на уже существующие серверные LOT.
|
||||||
|
|
||||||
|
### ACC — кабели
|
||||||
|
|
||||||
|
Кабели **не привязываются к модели СХД**. Формат: `ACC_CABLE_{ТИП}_{ДЛИНА}` — универсальные LOT, одинаковые для серверов и СХД.
|
||||||
|
|
||||||
|
| Пример | Расшифровка |
|
||||||
|
|---|---|
|
||||||
|
| `ACC_CABLE_CAT6_10M` | Кабель CAT6 10м |
|
||||||
|
| `ACC_CABLE_FC_OM4_3M` | Кабель FC LC-LC OM4 до 3м |
|
||||||
|
| `ACC_CABLE_PWR_C13C14_15M` | Кабель питания C13–C14 1.5м |
|
||||||
|
|
||||||
|
### SW — программные лицензии
|
||||||
|
|
||||||
|
Специфика: краткое название функции.
|
||||||
|
|
||||||
|
| Пример | Расшифровка |
|
||||||
|
|---|---|
|
||||||
|
| `SW_DE4000H_ASYNC_MIRROR` | Async Mirroring |
|
||||||
|
| `SW_DE4000H_SNAPSHOT_512` | Snapshot 512 |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Таблица лотов: DE4000H (пример заполнения)
|
||||||
|
|
||||||
|
### DKC — контроллерная полка
|
||||||
|
|
||||||
|
| lot_name | vendor | model | description | disk_slots | disk_type | controllers |
|
||||||
|
|---|---|---|---|---|---|---|
|
||||||
|
| `DKC_DE4000H_SFF_24_2CTRL` | Lenovo | DE4000H 2U24 | DE4000H, 24× SFF, 2 контроллера | 24 | SFF | 2 |
|
||||||
|
| `DKC_DE4000H_LFF_12_2CTRL` | Lenovo | DE4000H 2U12 | DE4000H, 12× LFF, 2 контроллера | 12 | LFF | 2 |
|
||||||
|
|
||||||
|
### CTL — контроллер
|
||||||
|
|
||||||
|
| lot_name | vendor | model | description | cache_gb | host_ports |
|
||||||
|
|---|---|---|---|---|---|
|
||||||
|
| `CTL_DE4000H_32GB_BASE` | Lenovo | DE4000 Controller 32GB Gen2 | Контроллер DE4000, 32GB кэш, без встроенных портов | 32 | — |
|
||||||
|
| `CTL_DE4000H_8GB_BASE` | Lenovo | DE4000 Controller 8GB Gen2 | Контроллер DE4000, 8GB кэш, без встроенных портов | 8 | — |
|
||||||
|
|
||||||
|
### HIC — HIC-карты
|
||||||
|
|
||||||
|
| lot_name | vendor | model | description |
|
||||||
|
|---|---|---|---|
|
||||||
|
| `HIC_2p10G_BaseT` | Lenovo | HIC 10GBASE-T 2-Ports | HIC 10GBASE-T, 2 порта |
|
||||||
|
| `HIC_4p25G_iSCSI` | Lenovo | HIC 10/25GbE iSCSI 4-ports | HIC iSCSI 10/25GbE, 4 порта |
|
||||||
|
| `HIC_4p12G_SAS` | Lenovo | HIC 12Gb SAS 4-ports | HIC SAS 12Gb, 4 порта |
|
||||||
|
| `HIC_4pFC32` | Lenovo | HIC 32Gb FC 4-ports | HIC FC 32Gb, 4 порта |
|
||||||
|
| `HIC_4pFC16` | Lenovo | HIC 16G FC/10GbE 4-ports | HIC FC 16G/10GbE, 4 порта |
|
||||||
|
|
||||||
|
### HDD / SSD / NVMe / ACC — диски и кабели
|
||||||
|
|
||||||
|
Для дисков и кабелей новые LOT не создаются. Партномера маппируются на существующие серверные LOT из каталога.
|
||||||
|
|
||||||
|
### SW — программные лицензии
|
||||||
|
|
||||||
|
| lot_name | vendor | model | description |
|
||||||
|
|---|---|---|---|
|
||||||
|
| `SW_DE4000H_ASYNC_MIRROR` | Lenovo | DE4000H Asynchronous Mirroring | Лицензия Async Mirroring |
|
||||||
|
| `SW_DE4000H_SNAPSHOT_512` | Lenovo | DE4000H Snapshot Upgrade 512 | Лицензия Snapshot 512 |
|
||||||
|
| `SW_DE4000H_SYNC_MIRROR` | Lenovo | DE4000 Synchronous Mirroring | Лицензия Sync Mirroring |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Таблица партномеров: DE4000H (пример заполнения)
|
||||||
|
|
||||||
|
Каждый Feature Code и Part Number должен быть привязан к своему LOT.
|
||||||
|
Если у компонента есть оба — добавить две строки.
|
||||||
|
|
||||||
|
| partnumber | lot_name | описание |
|
||||||
|
|---|---|---|
|
||||||
|
| `BEY7` | `ENC_2U24_CHASSIS` | Lenovo ThinkSystem Storage 2U24 Chassis |
|
||||||
|
| `BQA0` | `CTL_DE4000H_32GB_BASE` | DE4000 Controller 32GB Gen2 |
|
||||||
|
| `BQ9Z` | `CTL_DE4000H_8GB_BASE` | DE4000 Controller 8GB Gen2 |
|
||||||
|
| `B4B1` | `HIC_2p10G_BaseT` | HIC 10GBASE-T 2-Ports |
|
||||||
|
| `4C57A14376` | `HIC_2p10G_BaseT` | HIC 10GBASE-T 2-Ports |
| `B4BA` | `HIC_4p25G_iSCSI` | HIC 10/25GbE iSCSI 4-ports |
| `4C57A14369` | `HIC_4p25G_iSCSI` | HIC 10/25GbE iSCSI 4-ports |
| `B4B8` | `HIC_4p12G_SAS` | HIC 12Gb SAS 4-ports |
| `4C57A14367` | `HIC_4p12G_SAS` | HIC 12Gb SAS 4-ports |
| `B4B9` | `HIC_4pFC32` | HIC 32Gb FC 4-ports |
| `4C57A14368` | `HIC_4pFC32` | HIC 32Gb FC 4-ports |
| `B4B7` | `HIC_4pFC16` | HIC 16G FC/10GbE 4-ports |
| `4C57A14366` | `HIC_4pFC16` | HIC 16G FC/10GbE 4-ports |
| `BW12` | `HDD_SAS_02.4TB` | 2.4TB 10K 2.5" HDD 2U24 |
| `4XB7A88046` | `HDD_SAS_02.4TB` | 2.4TB 10K 2.5" HDD 2U24 |
| `B4C0` | `HDD_SAS_01.8TB` | 1.8TB 10K 2.5" HDD SED FIPS |
| `4XB7A14114` | `HDD_SAS_01.8TB` | 1.8TB 10K 2.5" HDD SED FIPS |
| `BW13` | `HDD_SAS_02.4TB` | 2.4TB 10K 2.5" HDD FIPS |
| `4XB7A88048` | `HDD_SAS_02.4TB` | 2.4TB 10K 2.5" HDD FIPS |
| `BKUQ` | `SSD_SAS_0.960T` | 960GB 1DWD 2.5" SSD |
| `4XB7A74948` | `SSD_SAS_0.960T` | 960GB 1DWD 2.5" SSD |
| `BKUT` | `SSD_SAS_01.92T` | 1.92TB 1DWD 2.5" SSD |
| `4XB7A74951` | `SSD_SAS_01.92T` | 1.92TB 1DWD 2.5" SSD |
| `BKUK` | `SSD_SAS_03.84T` | 3.84TB 1DWD 2.5" SSD |
| `4XB7A74955` | `SSD_SAS_03.84T` | 3.84TB 1DWD 2.5" SSD |
| `B4RY` | `SSD_SAS_07.68T` | 7.68TB 1DWD 2.5" SSD |
| `4XB7A14176` | `SSD_SAS_07.68T` | 7.68TB 1DWD 2.5" SSD |
| `B4CD` | `SSD_SAS_15.36T` | 15.36TB 1DWD 2.5" SSD |
| `4XB7A14110` | `SSD_SAS_15.36T` | 15.36TB 1DWD 2.5" SSD |
| `BWCJ` | `SSD_SAS_03.84T` | 3.84TB 1DWD 2.5" SSD FIPS |
| `4XB7A88469` | `SSD_SAS_03.84T` | 3.84TB 1DWD 2.5" SSD FIPS |
| `BW2B` | `SSD_SAS_15.36T` | 15.36TB 1DWD 2.5" SSD SED |
| `4XB7A88466` | `SSD_SAS_15.36T` | 15.36TB 1DWD 2.5" SSD SED |
| `AVFW` | `ACC_CABLE_CAT6_1M` | CAT6 0.75-1.5m |
| `A1MT` | `ACC_CABLE_CAT6_10M` | CAT6 10m |
| `90Y3718` | `ACC_CABLE_CAT6_10M` | CAT6 10m |
| `A1MW` | `ACC_CABLE_CAT6_25M` | CAT6 25m |
| `90Y3727` | `ACC_CABLE_CAT6_25M` | CAT6 25m |
| `39Y7937` | `ACC_CABLE_PWR_C13C14_15M` | C13–C14 1.5m |
| `39Y7938` | `ACC_CABLE_PWR_C13C20_28M` | C13–C20 2.8m |
| `4L67A08371` | `ACC_CABLE_PWR_C13C14_43M` | C13–C14 4.3m |
| `C932` | `SW_DE4000H_ASYNC_MIRROR` | DE4000H Asynchronous Mirroring |
| `00WE123` | `SW_DE4000H_ASYNC_MIRROR` | DE4000H Asynchronous Mirroring |
| `C930` | `SW_DE4000H_SNAPSHOT_512` | DE4000H Snapshot Upgrade 512 |
| `C931` | `SW_DE4000H_SYNC_MIRROR` | DE4000 Synchronous Mirroring |
---
## Template for new storage (SAN) models

```
DKC_MODEL_DISKTYPE_SLOTS_NCTRL — controller shelf
CTL_MODEL_CACHEGB_PORTS — controller
HIC_MODEL_PROTOCOL_SPEED_PORTS — HIC card (host interface)
SW_MODEL_FEATURE — license
```

Drives (HDD/SSD/NVMe) and cables (ACC) map onto the existing server LOTs; no new ones are created.

Example for the HPE MSA 2060:

```
DKC_MSA2060_SFF_24_2CTRL
CTL_MSA2060_8GB_ISCSI10G_4P
HIC_MSA2060_FC32G_2P
SW_MSA2060_REMOTE_SNAP
```
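For quick sanity checks of candidate names against this template, a small validator can be sketched in Go. The regular expressions below are an assumption inferred only from the patterns and examples shown here, not the project's actual validation rules:

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical patterns for the storage LOT-name template above,
// derived from the examples on this page only.
var lotPatterns = map[string]*regexp.Regexp{
	"DKC": regexp.MustCompile(`^DKC_[A-Z0-9]+_[A-Z0-9]+_\d+_\d+CTRL$`),
	"CTL": regexp.MustCompile(`^CTL_[A-Z0-9]+_\d+GB_[A-Z0-9]+_\d+P$`),
	"HIC": regexp.MustCompile(`^HIC_[A-Z0-9]+_[A-Z0-9]+_\d+P$`),
	"SW":  regexp.MustCompile(`^SW_[A-Z0-9]+_[A-Z0-9_]+$`),
}

// validLOT reports whether name matches any of the template patterns.
func validLOT(name string) bool {
	for _, re := range lotPatterns {
		if re.MatchString(name) {
			return true
		}
	}
	return false
}

func main() {
	for _, name := range []string{
		"DKC_MSA2060_SFF_24_2CTRL",
		"HIC_MSA2060_FC32G_2P",
		"HDD_SAS_02.4TB", // drives map to server LOTs, not this template
	} {
		fmt.Println(name, validLOT(name))
	}
}
```

Note that the HIC example collapses protocol and speed into one token (`FC32G`), so the sketch matches one fewer segment than the template line suggests.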
`go.mod`:

```diff
@@ -5,9 +5,8 @@ go 1.24.0
 require (
 	github.com/gin-gonic/gin v1.9.1
 	github.com/glebarez/sqlite v1.11.0
-	github.com/golang-jwt/jwt/v5 v5.3.0
+	github.com/go-sql-driver/mysql v1.7.1
 	github.com/google/uuid v1.6.0
-	golang.org/x/crypto v0.43.0
 	gopkg.in/yaml.v3 v3.0.1
 	gorm.io/driver/mysql v1.5.2
 	gorm.io/gorm v1.25.7
@@ -23,7 +22,6 @@ require (
 	github.com/go-playground/locales v0.14.1 // indirect
 	github.com/go-playground/universal-translator v0.18.1 // indirect
 	github.com/go-playground/validator/v10 v10.14.0 // indirect
-	github.com/go-sql-driver/mysql v1.7.1 // indirect
 	github.com/goccy/go-json v0.10.2 // indirect
 	github.com/jinzhu/inflection v1.0.0 // indirect
 	github.com/jinzhu/now v1.1.5 // indirect
@@ -39,6 +37,7 @@ require (
 	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
 	github.com/ugorji/go/codec v1.2.11 // indirect
 	golang.org/x/arch v0.3.0 // indirect
+	golang.org/x/crypto v0.43.0 // indirect
 	golang.org/x/net v0.46.0 // indirect
 	golang.org/x/sys v0.37.0 // indirect
 	golang.org/x/text v0.30.0 // indirect
```
`go.sum`:

```diff
@@ -32,8 +32,6 @@ github.com/go-sql-driver/mysql v1.7.1 h1:lUIinVbN1DY0xBg0eMOzmmtGoHwWBbvnWubQUrt
 github.com/go-sql-driver/mysql v1.7.1/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
 github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
 github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
-github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
-github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
 github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
 github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
 github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
```
Backup logic (package `appstate`):

```diff
@@ -10,6 +10,10 @@ import (
 	"sort"
 	"strings"
 	"time"
+
+	"github.com/glebarez/sqlite"
+	"gorm.io/gorm"
+	"gorm.io/gorm/logger"
 )

 type backupPeriod struct {
@@ -88,6 +92,9 @@ func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error) {
 	}

 	root := resolveBackupRoot(dbPath)
+	if err := validateBackupRoot(root); err != nil {
+		return nil, err
+	}
 	now := backupNow()

 	created := make([]string, 0)
@@ -111,6 +118,40 @@ func resolveBackupRoot(dbPath string) string {
 	return filepath.Join(filepath.Dir(dbPath), "backups")
 }

+func validateBackupRoot(root string) error {
+	absRoot, err := filepath.Abs(root)
+	if err != nil {
+		return fmt.Errorf("resolve backup root: %w", err)
+	}
+
+	if gitRoot, ok := findGitWorktreeRoot(absRoot); ok {
+		return fmt.Errorf("backup root must stay outside git worktree: %s is inside %s", absRoot, gitRoot)
+	}
+
+	return nil
+}
+
+func findGitWorktreeRoot(path string) (string, bool) {
+	current := filepath.Clean(path)
+	info, err := os.Stat(current)
+	if err == nil && !info.IsDir() {
+		current = filepath.Dir(current)
+	}
+
+	for {
+		gitPath := filepath.Join(current, ".git")
+		if _, err := os.Stat(gitPath); err == nil {
+			return current, true
+		}
+
+		parent := filepath.Dir(current)
+		if parent == current {
+			return "", false
+		}
+		current = parent
+	}
+}
+
 func isBackupDisabled() bool {
 	val := strings.ToLower(strings.TrimSpace(os.Getenv(envBackupDisable)))
 	return val == "1" || val == "true" || val == "yes"
@@ -213,6 +254,12 @@ func pruneOldBackups(periodDir string, keep int) error {
 }

 func createBackupArchive(destPath, dbPath, configPath string) error {
+	snapshotPath, cleanup, err := createSQLiteSnapshot(dbPath)
+	if err != nil {
+		return err
+	}
+	defer cleanup()
+
 	file, err := os.Create(destPath)
 	if err != nil {
 		return err
@@ -220,12 +267,10 @@ func createBackupArchive(destPath, dbPath, configPath string) error {
 	defer file.Close()

 	zipWriter := zip.NewWriter(file)
-	if err := addZipFile(zipWriter, dbPath); err != nil {
+	if err := addZipFileAs(zipWriter, snapshotPath, filepath.Base(dbPath)); err != nil {
 		_ = zipWriter.Close()
 		return err
 	}
-	_ = addZipOptionalFile(zipWriter, dbPath+"-wal")
-	_ = addZipOptionalFile(zipWriter, dbPath+"-shm")

 	if strings.TrimSpace(configPath) != "" {
 		_ = addZipOptionalFile(zipWriter, configPath)
@@ -237,6 +282,77 @@ func createBackupArchive(destPath, dbPath, configPath string) error {
 	return file.Sync()
 }

+func createSQLiteSnapshot(dbPath string) (string, func(), error) {
+	tempFile, err := os.CreateTemp("", "qfs-backup-*.db")
+	if err != nil {
+		return "", func() {}, err
+	}
+	tempPath := tempFile.Name()
+	if err := tempFile.Close(); err != nil {
+		_ = os.Remove(tempPath)
+		return "", func() {}, err
+	}
+	if err := os.Remove(tempPath); err != nil && !os.IsNotExist(err) {
+		return "", func() {}, err
+	}
+
+	cleanup := func() {
+		_ = os.Remove(tempPath)
+	}
+
+	db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
+		Logger: logger.Default.LogMode(logger.Silent),
+	})
+	if err != nil {
+		cleanup()
+		return "", func() {}, err
+	}
+
+	sqlDB, err := db.DB()
+	if err != nil {
+		cleanup()
+		return "", func() {}, err
+	}
+	defer sqlDB.Close()
+
+	if err := db.Exec("PRAGMA busy_timeout = 5000").Error; err != nil {
+		cleanup()
+		return "", func() {}, fmt.Errorf("configure sqlite busy_timeout: %w", err)
+	}
+
+	literalPath := strings.ReplaceAll(tempPath, "'", "''")
+	if err := vacuumIntoWithRetry(db, literalPath); err != nil {
+		cleanup()
+		return "", func() {}, err
+	}
+
+	return tempPath, cleanup, nil
+}
+
+func vacuumIntoWithRetry(db *gorm.DB, literalPath string) error {
+	var lastErr error
+	for attempt := 0; attempt < 3; attempt++ {
+		if err := db.Exec("VACUUM INTO '" + literalPath + "'").Error; err != nil {
+			lastErr = err
+			if !isSQLiteBusyError(err) {
+				return fmt.Errorf("create sqlite snapshot: %w", err)
+			}
+			time.Sleep(time.Duration(attempt+1) * 250 * time.Millisecond)
+			continue
+		}
+		return nil
+	}
+	return fmt.Errorf("create sqlite snapshot after retries: %w", lastErr)
+}
+
+func isSQLiteBusyError(err error) bool {
+	if err == nil {
+		return false
+	}
+	lower := strings.ToLower(err.Error())
+	return strings.Contains(lower, "database is locked") || strings.Contains(lower, "database is busy")
+}
+
 func addZipOptionalFile(writer *zip.Writer, path string) error {
 	if _, err := os.Stat(path); err != nil {
 		return nil
@@ -245,6 +361,10 @@ func addZipOptionalFile(writer *zip.Writer, path string) error {
 }

 func addZipFile(writer *zip.Writer, path string) error {
+	return addZipFileAs(writer, path, filepath.Base(path))
+}
+
+func addZipFileAs(writer *zip.Writer, path string, archiveName string) error {
 	in, err := os.Open(path)
 	if err != nil {
 		return err
@@ -260,7 +380,7 @@ func addZipFile(writer *zip.Writer, path string) error {
 	if err != nil {
 		return err
 	}
-	header.Name = filepath.Base(path)
+	header.Name = archiveName
 	header.Method = zip.Deflate

 	out, err := writer.CreateHeader(header)
```
Backup tests (package `appstate`):

```diff
@@ -1,10 +1,15 @@
 package appstate

 import (
+	"archive/zip"
 	"os"
 	"path/filepath"
+	"strings"
 	"testing"
 	"time"
+
+	"github.com/glebarez/sqlite"
+	"gorm.io/gorm"
 )

 func TestEnsureRotatingLocalBackupCreatesAndRotates(t *testing.T) {
@@ -12,8 +17,8 @@ func TestEnsureRotatingLocalBackupCreatesAndRotates(t *testing.T) {
 	dbPath := filepath.Join(temp, "qfs.db")
 	cfgPath := filepath.Join(temp, "config.yaml")

-	if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
-		t.Fatalf("write db: %v", err)
+	if err := writeTestSQLiteDB(dbPath); err != nil {
+		t.Fatalf("write sqlite db: %v", err)
 	}
 	if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
 		t.Fatalf("write config: %v", err)
@@ -35,6 +40,7 @@ func TestEnsureRotatingLocalBackupCreatesAndRotates(t *testing.T) {
 	if _, err := os.Stat(dailyArchive); err != nil {
 		t.Fatalf("daily archive missing: %v", err)
 	}
+	assertZipContains(t, dailyArchive, "qfs.db", "config.yaml")

 	backupNow = func() time.Time { return time.Date(2026, 2, 12, 10, 0, 0, 0, time.UTC) }
 	created, err = EnsureRotatingLocalBackup(dbPath, cfgPath)
@@ -56,8 +62,8 @@ func TestEnsureRotatingLocalBackupEnvControls(t *testing.T) {
 	dbPath := filepath.Join(temp, "qfs.db")
 	cfgPath := filepath.Join(temp, "config.yaml")

-	if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
-		t.Fatalf("write db: %v", err)
+	if err := writeTestSQLiteDB(dbPath); err != nil {
+		t.Fatalf("write sqlite db: %v", err)
 	}
 	if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
 		t.Fatalf("write config: %v", err)
@@ -69,7 +75,7 @@ func TestEnsureRotatingLocalBackupEnvControls(t *testing.T) {
 	if _, err := EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
 		t.Fatalf("backup with env: %v", err)
 	}
-	if _, err := os.Stat(filepath.Join(backupRoot, "daily", "meta.json")); err != nil {
+	if _, err := os.Stat(filepath.Join(backupRoot, "daily", ".period.json")); err != nil {
 		t.Fatalf("expected backup in custom dir: %v", err)
 	}

@@ -77,7 +83,75 @@ func TestEnsureRotatingLocalBackupEnvControls(t *testing.T) {
 	if _, err := EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
 		t.Fatalf("backup disabled: %v", err)
 	}
-	if _, err := os.Stat(filepath.Join(backupRoot, "daily", "meta.json")); err != nil {
+	if _, err := os.Stat(filepath.Join(backupRoot, "daily", ".period.json")); err != nil {
 		t.Fatalf("backup should remain from previous run: %v", err)
 	}
 }
+
+func TestEnsureRotatingLocalBackupRejectsGitWorktree(t *testing.T) {
+	temp := t.TempDir()
+	repoRoot := filepath.Join(temp, "repo")
+	if err := os.MkdirAll(filepath.Join(repoRoot, ".git"), 0755); err != nil {
+		t.Fatalf("mkdir git dir: %v", err)
+	}
+
+	dbPath := filepath.Join(repoRoot, "data", "qfs.db")
+	cfgPath := filepath.Join(repoRoot, "data", "config.yaml")
+	if err := os.MkdirAll(filepath.Dir(dbPath), 0755); err != nil {
+		t.Fatalf("mkdir data dir: %v", err)
+	}
+	if err := writeTestSQLiteDB(dbPath); err != nil {
+		t.Fatalf("write sqlite db: %v", err)
+	}
+	if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
+		t.Fatalf("write cfg: %v", err)
+	}
+
+	_, err := EnsureRotatingLocalBackup(dbPath, cfgPath)
+	if err == nil {
+		t.Fatal("expected git worktree backup root to be rejected")
+	}
+	if !strings.Contains(err.Error(), "outside git worktree") {
+		t.Fatalf("unexpected error: %v", err)
+	}
+}
+
+func writeTestSQLiteDB(path string) error {
+	db, err := gorm.Open(sqlite.Open(path), &gorm.Config{})
+	if err != nil {
+		return err
+	}
+	sqlDB, err := db.DB()
+	if err != nil {
+		return err
+	}
+	defer sqlDB.Close()
+
+	return db.Exec(`
+CREATE TABLE sample_items (
+    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    name TEXT NOT NULL
+);
+INSERT INTO sample_items(name) VALUES ('backup');
+`).Error
+}
+
+func assertZipContains(t *testing.T, archivePath string, expected ...string) {
+	t.Helper()
+
+	reader, err := zip.OpenReader(archivePath)
+	if err != nil {
+		t.Fatalf("open archive: %v", err)
+	}
+	defer reader.Close()
+
+	found := make(map[string]bool, len(reader.File))
+	for _, file := range reader.File {
+		found[file.Name] = true
+	}
+	for _, name := range expected {
+		if !found[name] {
+			t.Fatalf("archive %s missing %s", archivePath, name)
+		}
+	}
+}
```
GPU segment of generated LOT names:

```diff
@@ -195,6 +195,9 @@ func buildGPUSegment(items []models.ConfigItem, cats map[string]string) string {
 		if !ok || group != GroupGPU {
 			continue
 		}
+		if strings.HasPrefix(strings.ToUpper(it.LotName), "MB_") {
+			continue
+		}
 		model := parseGPUModel(it.LotName)
 		if model == "" {
 			model = "UNK"
@@ -326,33 +329,60 @@ func parseGPUModel(lotName string) string {
 	}
 	parts := strings.Split(upper, "_")
 	model := ""
+	numSuffix := ""
 	mem := ""
 	for i, p := range parts {
 		if p == "" {
 			continue
 		}
 		switch p {
-		case "NV", "NVIDIA", "AMD", "RADEON", "PCIE", "PCI", "SXM", "SXMX":
+		case "NV", "NVIDIA", "INTEL", "AMD", "RADEON", "PCIE", "PCI", "SXM", "SXMX", "SFF", "LOVELACE":
+			continue
+		case "ADA", "AMPERE", "HOPPER", "BLACKWELL":
+			if model != "" {
+				archAbbr := map[string]string{
+					"ADA": "ADA", "AMPERE": "AMP", "HOPPER": "HOP", "BLACKWELL": "BWL",
+				}
+				numSuffix += archAbbr[p]
+			}
 			continue
 		default:
 			if strings.Contains(p, "GB") {
 				mem = p
 				continue
 			}
-			if model == "" && (i > 0) {
+			if model == "" && i > 0 {
 				model = p
+			} else if model != "" && numSuffix == "" && isNumeric(p) {
+				numSuffix = p
 			}
 		}
 	}
-	if model != "" && mem != "" {
-		return model + "_" + mem
+	full := model
+	if numSuffix != "" {
+		full = model + numSuffix
 	}
-	if model != "" {
-		return model
+	if full != "" && mem != "" {
+		return full + "_" + mem
+	}
+	if full != "" {
+		return full
 	}
 	return normalizeModelToken(lotName)
 }
+
+func isNumeric(s string) bool {
+	if s == "" {
+		return false
+	}
+	for _, r := range s {
+		if r < '0' || r > '9' {
+			return false
+		}
+	}
+	return true
+}

 func parseMemGiB(lotName string) int {
 	if m := reMemTiB.FindStringSubmatch(lotName); len(m) == 3 {
 		return atoi(m[1]) * 1024
```
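The suffix handling added to `parseGPUModel` hinges on two small pieces: the architecture abbreviation table and the new `isNumeric` helper. Both are copied here so the behavior can be tried in isolation (a sketch for experimentation, not the full parser):

```go
package main

import "fmt"

// archAbbr mirrors the abbreviation table the diff adds to parseGPUModel.
var archAbbr = map[string]string{
	"ADA": "ADA", "AMPERE": "AMP", "HOPPER": "HOP", "BLACKWELL": "BWL",
}

// isNumeric mirrors the new helper: true only for non-empty,
// all-ASCII-digit strings.
func isNumeric(s string) bool {
	if s == "" {
		return false
	}
	for _, r := range s {
		if r < '0' || r > '9' {
			return false
		}
	}
	return true
}

func main() {
	// An architecture token following a model collapses into a suffix,
	// so model "L40" plus token "ADA" yields "L40ADA".
	fmt.Println("L40" + archAbbr["ADA"])

	// Pure digit tokens count as numeric suffixes; mixed tokens do not.
	fmt.Println(isNumeric("100"), isNumeric("40S"))
}
```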
Configuration types and defaults:

```diff
@@ -7,20 +7,14 @@ import (
 	"strconv"
 	"time"

-	mysqlDriver "github.com/go-sql-driver/mysql"
 	"gopkg.in/yaml.v3"
 )

 type Config struct {
 	Server  ServerConfig  `yaml:"server"`
-	Database DatabaseConfig `yaml:"database"`
-	Auth    AuthConfig    `yaml:"auth"`
-	Pricing PricingConfig `yaml:"pricing"`
-	Export  ExportConfig  `yaml:"export"`
-	Alerts  AlertsConfig  `yaml:"alerts"`
-	Notifications NotificationsConfig `yaml:"notifications"`
-	Logging LoggingConfig `yaml:"logging"`
-	Backup  BackupConfig  `yaml:"backup"`
+	Export  ExportConfig  `yaml:"export"`
+	Logging LoggingConfig `yaml:"logging"`
+	Backup  BackupConfig  `yaml:"backup"`
 }

 type ServerConfig struct {
@@ -31,70 +25,6 @@ type ServerConfig struct {
 	WriteTimeout time.Duration `yaml:"write_timeout"`
 }

-type DatabaseConfig struct {
-	Host            string        `yaml:"host"`
-	Port            int           `yaml:"port"`
-	Name            string        `yaml:"name"`
-	User            string        `yaml:"user"`
-	Password        string        `yaml:"password"`
-	MaxOpenConns    int           `yaml:"max_open_conns"`
-	MaxIdleConns    int           `yaml:"max_idle_conns"`
-	ConnMaxLifetime time.Duration `yaml:"conn_max_lifetime"`
-}
-
-func (d *DatabaseConfig) DSN() string {
-	cfg := mysqlDriver.NewConfig()
-	cfg.User = d.User
-	cfg.Passwd = d.Password
-	cfg.Net = "tcp"
-	cfg.Addr = net.JoinHostPort(d.Host, strconv.Itoa(d.Port))
-	cfg.DBName = d.Name
-	cfg.ParseTime = true
-	cfg.Loc = time.Local
-	cfg.Params = map[string]string{
-		"charset": "utf8mb4",
-	}
-	return cfg.FormatDSN()
-}
-
-type AuthConfig struct {
-	JWTSecret     string        `yaml:"jwt_secret"`
-	TokenExpiry   time.Duration `yaml:"token_expiry"`
-	RefreshExpiry time.Duration `yaml:"refresh_expiry"`
-}
-
-type PricingConfig struct {
-	DefaultMethod       string `yaml:"default_method"`
-	DefaultPeriodDays   int    `yaml:"default_period_days"`
-	FreshnessGreenDays  int    `yaml:"freshness_green_days"`
-	FreshnessYellowDays int    `yaml:"freshness_yellow_days"`
-	FreshnessRedDays    int    `yaml:"freshness_red_days"`
-	MinQuotesForMedian  int    `yaml:"min_quotes_for_median"`
-	PopularityDecayDays int    `yaml:"popularity_decay_days"`
-}
-
-type ExportConfig struct {
-	TempDir     string        `yaml:"temp_dir"`
-	MaxFileAge  time.Duration `yaml:"max_file_age"`
-	CompanyName string        `yaml:"company_name"`
-}
-
-type AlertsConfig struct {
-	Enabled                  bool          `yaml:"enabled"`
-	CheckInterval            time.Duration `yaml:"check_interval"`
-	HighDemandThreshold      int           `yaml:"high_demand_threshold"`
-	TrendingThresholdPercent int           `yaml:"trending_threshold_percent"`
-}
-
-type NotificationsConfig struct {
-	EmailEnabled bool   `yaml:"email_enabled"`
-	SMTPHost     string `yaml:"smtp_host"`
-	SMTPPort     int    `yaml:"smtp_port"`
-	SMTPUser     string `yaml:"smtp_user"`
-	SMTPPassword string `yaml:"smtp_password"`
-	FromAddress  string `yaml:"from_address"`
-}
-
 type LoggingConfig struct {
 	Level  string `yaml:"level"`
 	Format string `yaml:"format"`
@@ -102,6 +32,10 @@ type LoggingConfig struct {
 	FilePath string `yaml:"file_path"`
 }

+// ExportConfig is kept for constructor compatibility in export services.
+// Runtime no longer persists an export section in config.yaml.
+type ExportConfig struct{}
+
 type BackupConfig struct {
 	Time string `yaml:"time"`
 }
@@ -139,45 +73,6 @@ func (c *Config) setDefaults() {
 		c.Server.WriteTimeout = 30 * time.Second
 	}

-	if c.Database.Port == 0 {
-		c.Database.Port = 3306
-	}
-	if c.Database.MaxOpenConns == 0 {
-		c.Database.MaxOpenConns = 25
-	}
-	if c.Database.MaxIdleConns == 0 {
-		c.Database.MaxIdleConns = 5
-	}
-	if c.Database.ConnMaxLifetime == 0 {
-		c.Database.ConnMaxLifetime = 5 * time.Minute
-	}
-
-	if c.Auth.TokenExpiry == 0 {
-		c.Auth.TokenExpiry = 24 * time.Hour
-	}
-	if c.Auth.RefreshExpiry == 0 {
-		c.Auth.RefreshExpiry = 7 * 24 * time.Hour
-	}
-
-	if c.Pricing.DefaultMethod == "" {
-		c.Pricing.DefaultMethod = "weighted_median"
-	}
-	if c.Pricing.DefaultPeriodDays == 0 {
-		c.Pricing.DefaultPeriodDays = 90
-	}
-	if c.Pricing.FreshnessGreenDays == 0 {
-		c.Pricing.FreshnessGreenDays = 30
-	}
-	if c.Pricing.FreshnessYellowDays == 0 {
-		c.Pricing.FreshnessYellowDays = 60
-	}
-	if c.Pricing.FreshnessRedDays == 0 {
-		c.Pricing.FreshnessRedDays = 90
-	}
-	if c.Pricing.MinQuotesForMedian == 0 {
-		c.Pricing.MinQuotesForMedian = 3
-	}
-
 	if c.Logging.Level == "" {
 		c.Logging.Level = "info"
 	}
@@ -194,5 +89,5 @@ func (c *Config) setDefaults() {
 }

 func (c *Config) Address() string {
-	return fmt.Sprintf("%s:%d", c.Server.Host, c.Server.Port)
+	return net.JoinHostPort(c.Server.Host, strconv.Itoa(c.Server.Port))
 }
```
Connection manager:

```diff
@@ -238,6 +238,22 @@ func (cm *ConnectionManager) Disconnect() {
 	cm.lastError = nil
 }

+// MarkOffline closes the current connection and preserves the last observed error.
+func (cm *ConnectionManager) MarkOffline(err error) {
+	cm.mu.Lock()
+	defer cm.mu.Unlock()
+
+	if cm.db != nil {
+		sqlDB, dbErr := cm.db.DB()
+		if dbErr == nil {
+			sqlDB.Close()
+		}
+	}
+	cm.db = nil
+	cm.lastError = err
+	cm.lastCheck = time.Now()
+}
+
 // GetLastError returns the last connection error (thread-safe)
 func (cm *ConnectionManager) GetLastError() error {
 	cm.mu.RLock()
```
Deleted authentication handler (package `handlers`):

```diff
@@ -1,113 +0,0 @@
-package handlers
-
-import (
-	"net/http"
-
-	"github.com/gin-gonic/gin"
-	"git.mchus.pro/mchus/quoteforge/internal/middleware"
-	"git.mchus.pro/mchus/quoteforge/internal/repository"
-	"git.mchus.pro/mchus/quoteforge/internal/services"
-)
-
-type AuthHandler struct {
-	authService *services.AuthService
-	userRepo    *repository.UserRepository
-}
-
-func NewAuthHandler(authService *services.AuthService, userRepo *repository.UserRepository) *AuthHandler {
-	return &AuthHandler{
-		authService: authService,
-		userRepo:    userRepo,
-	}
-}
-
-type LoginRequest struct {
-	Username string `json:"username" binding:"required"`
-	Password string `json:"password" binding:"required"`
-}
-
-type LoginResponse struct {
-	AccessToken  string       `json:"access_token"`
-	RefreshToken string       `json:"refresh_token"`
-	ExpiresAt    int64        `json:"expires_at"`
-	User         UserResponse `json:"user"`
-}
-
-type UserResponse struct {
-	ID       uint   `json:"id"`
-	Username string `json:"username"`
-	Email    string `json:"email"`
-	Role     string `json:"role"`
-}
-
-func (h *AuthHandler) Login(c *gin.Context) {
-	var req LoginRequest
-	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-		return
-	}
-
-	tokens, user, err := h.authService.Login(req.Username, req.Password)
-	if err != nil {
-		c.JSON(http.StatusUnauthorized, gin.H{"error": err.Error()})
-		return
-	}
-
-	c.JSON(http.StatusOK, LoginResponse{
-		AccessToken:  tokens.AccessToken,
-		RefreshToken: tokens.RefreshToken,
-		ExpiresAt:    tokens.ExpiresAt,
-		User: UserResponse{
-			ID:       user.ID,
-			Username: user.Username,
-			Email:    user.Email,
-			Role:     string(user.Role),
-		},
-	})
-}
-
-type RefreshRequest struct {
-	RefreshToken string `json:"refresh_token" binding:"required"`
-}
-
-func (h *AuthHandler) Refresh(c *gin.Context) {
-	var req RefreshRequest
-	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-		return
-	}
-
-	tokens, err := h.authService.RefreshTokens(req.RefreshToken)
-	if err != nil {
-		c.JSON(http.StatusUnauthorized, gin.H{"error": err.Error()})
-		return
-	}
-
-	c.JSON(http.StatusOK, tokens)
-}
-
-func (h *AuthHandler) Me(c *gin.Context) {
-	claims := middleware.GetClaims(c)
-	if claims == nil {
-		c.JSON(http.StatusUnauthorized, gin.H{"error": "not authenticated"})
-		return
-	}
-
-	user, err := h.userRepo.GetByID(claims.UserID)
-	if err != nil {
-		c.JSON(http.StatusNotFound, gin.H{"error": "user not found"})
-		return
-	}
-
-	c.JSON(http.StatusOK, UserResponse{
-		ID:       user.ID,
-		Username: user.Username,
-		Email:    user.Email,
-		Role:     string(user.Role),
-	})
-}
-
-func (h *AuthHandler) Logout(c *gin.Context) {
-	// JWT is stateless, logout is handled on client by discarding tokens
-	c.JSON(http.StatusOK, gin.H{"message": "logged out"})
-}
```
@@ -49,7 +49,7 @@ func (h *ComponentHandler) List(c *gin.Context) {
 	offset := (page - 1) * perPage
 	localComps, total, err := h.localDB.ListComponents(localFilter, offset, perPage)
 	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
 		return
 	}
@@ -1,239 +0,0 @@
package handlers

import (
	"net/http"
	"strconv"

	"git.mchus.pro/mchus/quoteforge/internal/middleware"
	"git.mchus.pro/mchus/quoteforge/internal/services"
	"github.com/gin-gonic/gin"
)

type ConfigurationHandler struct {
	configService *services.ConfigurationService
	exportService *services.ExportService
}

func NewConfigurationHandler(
	configService *services.ConfigurationService,
	exportService *services.ExportService,
) *ConfigurationHandler {
	return &ConfigurationHandler{
		configService: configService,
		exportService: exportService,
	}
}

func (h *ConfigurationHandler) List(c *gin.Context) {
	username := middleware.GetUsername(c)
	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))

	configs, total, err := h.configService.ListByUser(username, page, perPage)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"configurations": configs,
		"total":          total,
		"page":           page,
		"per_page":       perPage,
	})
}

func (h *ConfigurationHandler) Create(c *gin.Context) {
	username := middleware.GetUsername(c)

	var req services.CreateConfigRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	config, err := h.configService.Create(username, &req)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, config)
}

func (h *ConfigurationHandler) Get(c *gin.Context) {
	username := middleware.GetUsername(c)
	uuid := c.Param("uuid")

	config, err := h.configService.GetByUUID(uuid, username)
	if err != nil {
		status := http.StatusNotFound
		if err == services.ErrConfigForbidden {
			status = http.StatusForbidden
		}
		c.JSON(status, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, config)
}

func (h *ConfigurationHandler) Update(c *gin.Context) {
	username := middleware.GetUsername(c)
	uuid := c.Param("uuid")

	var req services.CreateConfigRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	config, err := h.configService.Update(uuid, username, &req)
	if err != nil {
		status := http.StatusInternalServerError
		if err == services.ErrConfigNotFound {
			status = http.StatusNotFound
		} else if err == services.ErrConfigForbidden {
			status = http.StatusForbidden
		}
		c.JSON(status, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, config)
}

func (h *ConfigurationHandler) Delete(c *gin.Context) {
	username := middleware.GetUsername(c)
	uuid := c.Param("uuid")

	err := h.configService.Delete(uuid, username)
	if err != nil {
		status := http.StatusInternalServerError
		if err == services.ErrConfigNotFound {
			status = http.StatusNotFound
		} else if err == services.ErrConfigForbidden {
			status = http.StatusForbidden
		}
		c.JSON(status, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "deleted"})
}

type RenameConfigRequest struct {
	Name string `json:"name" binding:"required"`
}

func (h *ConfigurationHandler) Rename(c *gin.Context) {
	username := middleware.GetUsername(c)
	uuid := c.Param("uuid")

	var req RenameConfigRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	config, err := h.configService.Rename(uuid, username, req.Name)
	if err != nil {
		status := http.StatusInternalServerError
		if err == services.ErrConfigNotFound {
			status = http.StatusNotFound
		} else if err == services.ErrConfigForbidden {
			status = http.StatusForbidden
		}
		c.JSON(status, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, config)
}

type CloneConfigRequest struct {
	Name string `json:"name" binding:"required"`
}

func (h *ConfigurationHandler) Clone(c *gin.Context) {
	username := middleware.GetUsername(c)
	uuid := c.Param("uuid")

	var req CloneConfigRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	config, err := h.configService.Clone(uuid, username, req.Name)
	if err != nil {
		status := http.StatusInternalServerError
		if err == services.ErrConfigNotFound {
			status = http.StatusNotFound
		} else if err == services.ErrConfigForbidden {
			status = http.StatusForbidden
		}
		c.JSON(status, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, config)
}

func (h *ConfigurationHandler) RefreshPrices(c *gin.Context) {
	username := middleware.GetUsername(c)
	uuid := c.Param("uuid")

	config, err := h.configService.RefreshPrices(uuid, username)
	if err != nil {
		status := http.StatusInternalServerError
		if err == services.ErrConfigNotFound {
			status = http.StatusNotFound
		} else if err == services.ErrConfigForbidden {
			status = http.StatusForbidden
		}
		c.JSON(status, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, config)
}

// func (h *ConfigurationHandler) ExportJSON(c *gin.Context) {
// 	userID := middleware.GetUserID(c)
// 	uuid := c.Param("uuid")
//
// 	config, err := h.configService.GetByUUID(uuid, userID)
// 	if err != nil {
// 		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
// 		return
// 	}
//
// 	data, err := h.configService.ExportJSON(uuid, userID)
// 	if err != nil {
// 		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
// 		return
// 	}
//
// 	filename := fmt.Sprintf("%s %s SPEC.json", config.CreatedAt.Format("2006-01-02"), config.Name)
// 	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
// 	c.Data(http.StatusOK, "application/json", data)
// }

// func (h *ConfigurationHandler) ImportJSON(c *gin.Context) {
// 	userID := middleware.GetUserID(c)
//
// 	data, err := io.ReadAll(c.Request.Body)
// 	if err != nil {
// 		c.JSON(http.StatusBadRequest, gin.H{"error": "failed to read body"})
// 		return
// 	}
//
// 	config, err := h.configService.ImportJSON(userID, data)
// 	if err != nil {
// 		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
// 		return
// 	}
//
// 	c.JSON(http.StatusCreated, config)
// }
@@ -6,7 +6,6 @@ import (
 	"strings"
 	"time"

-	"git.mchus.pro/mchus/quoteforge/internal/middleware"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"git.mchus.pro/mchus/quoteforge/internal/services"
 	"github.com/gin-gonic/gin"
@@ -16,17 +15,20 @@ type ExportHandler struct {
 	exportService  *services.ExportService
 	configService  services.ConfigurationGetter
 	projectService *services.ProjectService
+	dbUsername     string
 }

 func NewExportHandler(
 	exportService *services.ExportService,
 	configService services.ConfigurationGetter,
 	projectService *services.ProjectService,
+	dbUsername string,
 ) *ExportHandler {
 	return &ExportHandler{
 		exportService:  exportService,
 		configService:  configService,
 		projectService: projectService,
+		dbUsername:     dbUsername,
 	}
 }

@@ -45,10 +47,20 @@ type ExportRequest struct {
 	Notes string `json:"notes"`
 }

+type ProjectExportOptionsRequest struct {
+	IncludeLOT        bool    `json:"include_lot"`
+	IncludeBOM        bool    `json:"include_bom"`
+	IncludeEstimate   bool    `json:"include_estimate"`
+	IncludeStock      bool    `json:"include_stock"`
+	IncludeCompetitor bool    `json:"include_competitor"`
+	Basis             string  `json:"basis"`       // "fob" or "ddp"
+	SaleMarkup        float64 `json:"sale_markup"` // DDP multiplier; 0 defaults to 1.3
+}
+
 func (h *ExportHandler) ExportCSV(c *gin.Context) {
 	var req ExportRequest
 	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
 		return
 	}

@@ -63,8 +75,7 @@ func (h *ExportHandler) ExportCSV(c *gin.Context) {
 	// Get project code for filename
 	projectCode := req.ProjectName // legacy field: may contain code from frontend
 	if projectCode == "" && req.ProjectUUID != "" {
-		username := middleware.GetUsername(c)
-		if project, err := h.projectService.GetByUUID(req.ProjectUUID, username); err == nil && project != nil {
+		if project, err := h.projectService.GetByUUID(req.ProjectUUID, h.dbUsername); err == nil && project != nil {
 			projectCode = project.Code
 		}
 	}
@@ -136,13 +147,12 @@ func sanitizeFilenameSegment(value string) string {
 }

 func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
-	username := middleware.GetUsername(c)
 	uuid := c.Param("uuid")

 	// Get config before streaming (can return JSON error)
-	config, err := h.configService.GetByUUID(uuid, username)
+	config, err := h.configService.GetByUUID(uuid, h.dbUsername)
 	if err != nil {
-		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusNotFound, "resource not found", err)
 		return
 	}

@@ -157,7 +167,7 @@ func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
 	// Get project code for filename
 	projectCode := config.Name // fallback: use config name if no project
 	if config.ProjectUUID != nil && *config.ProjectUUID != "" {
-		if project, err := h.projectService.GetByUUID(*config.ProjectUUID, username); err == nil && project != nil {
+		if project, err := h.projectService.GetByUUID(*config.ProjectUUID, h.dbUsername); err == nil && project != nil {
 			projectCode = project.Code
 		}
 	}
@@ -181,18 +191,17 @@ func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {

 // ExportProjectCSV exports all active configurations of a project as a single CSV file.
 func (h *ExportHandler) ExportProjectCSV(c *gin.Context) {
-	username := middleware.GetUsername(c)
 	projectUUID := c.Param("uuid")

-	project, err := h.projectService.GetByUUID(projectUUID, username)
+	project, err := h.projectService.GetByUUID(projectUUID, h.dbUsername)
 	if err != nil {
-		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusNotFound, "resource not found", err)
 		return
 	}

-	result, err := h.projectService.ListConfigurations(projectUUID, username, "active")
+	result, err := h.projectService.ListConfigurations(projectUUID, h.dbUsername, "active")
 	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
 		return
 	}

@@ -213,3 +222,62 @@ func (h *ExportHandler) ExportProjectCSV(c *gin.Context) {
 		return
 	}
 }
+
+func (h *ExportHandler) ExportProjectPricingCSV(c *gin.Context) {
+	projectUUID := c.Param("uuid")
+
+	var req ProjectExportOptionsRequest
+	if err := c.ShouldBindJSON(&req); err != nil {
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
+		return
+	}
+
+	project, err := h.projectService.GetByUUID(projectUUID, h.dbUsername)
+	if err != nil {
+		RespondError(c, http.StatusNotFound, "resource not found", err)
+		return
+	}
+
+	result, err := h.projectService.ListConfigurations(projectUUID, h.dbUsername, "active")
+	if err != nil {
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
+		return
+	}
+	if len(result.Configs) == 0 {
+		c.JSON(http.StatusBadRequest, gin.H{"error": "no configurations to export"})
+		return
+	}
+
+	opts := services.ProjectPricingExportOptions{
+		IncludeLOT:        req.IncludeLOT,
+		IncludeBOM:        req.IncludeBOM,
+		IncludeEstimate:   req.IncludeEstimate,
+		IncludeStock:      req.IncludeStock,
+		IncludeCompetitor: req.IncludeCompetitor,
+		Basis:             req.Basis,
+		SaleMarkup:        req.SaleMarkup,
+	}
+
+	data, err := h.exportService.ProjectToPricingExportData(result.Configs, opts)
+	if err != nil {
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
+		return
+	}
+
+	basisLabel := "FOB"
+	if strings.EqualFold(strings.TrimSpace(req.Basis), "ddp") {
+		basisLabel = "DDP"
+	}
+	variantLabel := strings.TrimSpace(project.Variant)
+	if variantLabel == "" {
+		variantLabel = "main"
+	}
+	filename := fmt.Sprintf("%s (%s) %s %s.csv", time.Now().Format("2006-01-02"), project.Code, basisLabel, variantLabel)
+	c.Header("Content-Type", "text/csv; charset=utf-8")
+	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
+
+	if err := h.exportService.ToPricingCSV(c.Writer, data, opts); err != nil {
+		c.Error(err)
+		return
+	}
+}
@@ -26,7 +26,6 @@ func (m *mockConfigService) GetByUUID(uuid string, ownerUsername string) (*model
 	return m.config, m.err
 }

-
 func TestExportCSV_Success(t *testing.T) {
 	gin.SetMode(gin.TestMode)

@@ -36,6 +35,7 @@ func TestExportCSV_Success(t *testing.T) {
 		exportSvc,
 		&mockConfigService{},
 		nil,
+		"testuser",
 	)

 	// Create JSON request body
@@ -110,6 +110,7 @@ func TestExportCSV_InvalidRequest(t *testing.T) {
 		exportSvc,
 		&mockConfigService{},
 		nil,
+		"testuser",
 	)

 	// Create invalid request (missing required field)
@@ -143,6 +144,7 @@ func TestExportCSV_EmptyItems(t *testing.T) {
 		exportSvc,
 		&mockConfigService{},
 		nil,
+		"testuser",
 	)

 	// Create request with empty items array - should fail binding validation
@@ -184,6 +186,7 @@ func TestExportConfigCSV_Success(t *testing.T) {
 		exportSvc,
 		&mockConfigService{config: mockConfig},
 		nil,
+		"testuser",
 	)

 	// Create HTTP request
@@ -196,9 +199,6 @@ func TestExportConfigCSV_Success(t *testing.T) {
 		{Key: "uuid", Value: "test-uuid"},
 	}

-	// Mock middleware.GetUsername
-	c.Set("username", "testuser")
-
 	handler.ExportConfigCSV(c)

 	// Check status code
@@ -233,6 +233,7 @@ func TestExportConfigCSV_NotFound(t *testing.T) {
 		exportSvc,
 		&mockConfigService{err: errors.New("config not found")},
 		nil,
+		"testuser",
 	)

 	req, _ := http.NewRequest("GET", "/api/configs/nonexistent-uuid/export", nil)
@@ -243,8 +244,6 @@ func TestExportConfigCSV_NotFound(t *testing.T) {
 	c.Params = gin.Params{
 		{Key: "uuid", Value: "nonexistent-uuid"},
 	}
-	c.Set("username", "testuser")
-
 	handler.ExportConfigCSV(c)

 	// Should return 404 Not Found
@@ -277,6 +276,7 @@ func TestExportConfigCSV_EmptyItems(t *testing.T) {
 		exportSvc,
 		&mockConfigService{config: mockConfig},
 		nil,
+		"testuser",
 	)

 	req, _ := http.NewRequest("GET", "/api/configs/test-uuid/export", nil)
@@ -287,8 +287,6 @@ func TestExportConfigCSV_EmptyItems(t *testing.T) {
 	c.Params = gin.Params{
 		{Key: "uuid", Value: "test-uuid"},
 	}
-	c.Set("username", "testuser")
-
 	handler.ExportConfigCSV(c)

 	// Should return 400 Bad Request
106	internal/handlers/partnumber_books.go	Normal file
@@ -0,0 +1,106 @@
package handlers

import (
	"net/http"
	"strconv"
	"strings"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/repository"
	"github.com/gin-gonic/gin"
)

// PartnumberBooksHandler provides read-only access to local partnumber book snapshots.
type PartnumberBooksHandler struct {
	localDB *localdb.LocalDB
}

func NewPartnumberBooksHandler(localDB *localdb.LocalDB) *PartnumberBooksHandler {
	return &PartnumberBooksHandler{localDB: localDB}
}

// List returns all local partnumber book snapshots.
// GET /api/partnumber-books
func (h *PartnumberBooksHandler) List(c *gin.Context) {
	bookRepo := repository.NewPartnumberBookRepository(h.localDB.DB())
	books, err := bookRepo.ListBooks()
	if err != nil {
		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}

	type bookSummary struct {
		ID        uint   `json:"id"`
		ServerID  int    `json:"server_id"`
		Version   string `json:"version"`
		CreatedAt string `json:"created_at"`
		IsActive  bool   `json:"is_active"`
		ItemCount int64  `json:"item_count"`
	}

	summaries := make([]bookSummary, 0, len(books))
	for _, b := range books {
		summaries = append(summaries, bookSummary{
			ID:        b.ID,
			ServerID:  b.ServerID,
			Version:   b.Version,
			CreatedAt: b.CreatedAt.Format("2006-01-02"),
			IsActive:  b.IsActive,
			ItemCount: bookRepo.CountBookItems(b.ID),
		})
	}

	c.JSON(http.StatusOK, gin.H{
		"books": summaries,
		"total": len(summaries),
	})
}

// GetItems returns items for a partnumber book by server ID.
// GET /api/partnumber-books/:id
func (h *PartnumberBooksHandler) GetItems(c *gin.Context) {
	idStr := c.Param("id")
	id, err := strconv.ParseUint(idStr, 10, 64)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid book ID"})
		return
	}

	bookRepo := repository.NewPartnumberBookRepository(h.localDB.DB())
	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "100"))
	search := strings.TrimSpace(c.Query("search"))
	if page < 1 {
		page = 1
	}
	if perPage < 1 || perPage > 500 {
		perPage = 100
	}

	// Find local book by server_id
	var book localdb.LocalPartnumberBook
	if err := h.localDB.DB().Where("server_id = ?", id).First(&book).Error; err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "partnumber book not found"})
		return
	}

	items, total, err := bookRepo.GetBookItemsPage(book.ID, search, page, perPage)
	if err != nil {
		RespondError(c, http.StatusInternalServerError, "internal server error", err)
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"book_id":     book.ServerID,
		"version":     book.Version,
		"is_active":   book.IsActive,
		"partnumbers": book.PartnumbersJSON,
		"items":       items,
		"total":       total,
		"page":        page,
		"per_page":    perPage,
		"search":      search,
		"book_total":  bookRepo.CountBookItems(book.ID),
		"lot_count":   bookRepo.CountDistinctLots(book.ID),
	})
}
@@ -34,7 +34,7 @@ func (h *PricelistHandler) List(c *gin.Context) {

 	localPLs, err := h.localDB.GetLocalPricelists()
 	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
 		return
 	}
 	if source != "" {
@@ -172,24 +172,48 @@ func (h *PricelistHandler) GetItems(c *gin.Context) {
 	}
 	var total int64
 	if err := dbq.Count(&total).Error; err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
 		return
 	}
 	offset := (page - 1) * perPage

 	if err := dbq.Order("lot_name").Offset(offset).Limit(perPage).Find(&items).Error; err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
 		return
 	}
+	lotNames := make([]string, len(items))
+	for i, item := range items {
+		lotNames[i] = item.LotName
+	}
+	type compRow struct {
+		LotName        string
+		LotDescription string
+	}
+	var comps []compRow
+	if len(lotNames) > 0 {
+		h.localDB.DB().Table("local_components").
+			Select("lot_name, lot_description").
+			Where("lot_name IN ?", lotNames).
+			Scan(&comps)
+	}
+	descMap := make(map[string]string, len(comps))
+	for _, c := range comps {
+		descMap[c.LotName] = c.LotDescription
+	}
+
 	resultItems := make([]gin.H, 0, len(items))
 	for _, item := range items {
 		resultItems = append(resultItems, gin.H{
 			"id":       item.ID,
 			"lot_name": item.LotName,
-			"price":         item.Price,
-			"category":      item.LotCategory,
-			"available_qty": item.AvailableQty,
-			"partnumbers":   []string(item.Partnumbers),
+			"lot_description":  descMap[item.LotName],
+			"price":            item.Price,
+			"category":         item.LotCategory,
+			"available_qty":    item.AvailableQty,
+			"partnumbers":      []string(item.Partnumbers),
+			"partnumber_qtys":  map[string]interface{}{},
+			"competitor_names": []string{},
+			"price_spread_pct": nil,
 		})
 	}

@@ -217,7 +241,7 @@ func (h *PricelistHandler) GetLotNames(c *gin.Context) {
 	}
 	items, err := h.localDB.GetLocalPricelistItems(localPL.ID)
 	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
 		return
 	}
 	lotNames := make([]string, 0, len(items))
@@ -18,13 +18,13 @@ func NewQuoteHandler(quoteService *services.QuoteService) *QuoteHandler {
 func (h *QuoteHandler) Validate(c *gin.Context) {
 	var req services.QuoteRequest
 	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
 		return
 	}

 	result, err := h.quoteService.ValidateAndCalculate(&req)
 	if err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
 		return
 	}

@@ -34,13 +34,13 @@ func (h *QuoteHandler) Validate(c *gin.Context) {
 func (h *QuoteHandler) Calculate(c *gin.Context) {
 	var req services.QuoteRequest
 	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
 		return
 	}

 	result, err := h.quoteService.ValidateAndCalculate(&req)
 	if err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
 		return
 	}

@@ -53,13 +53,13 @@ func (h *QuoteHandler) Calculate(c *gin.Context) {
|
|||||||
func (h *QuoteHandler) PriceLevels(c *gin.Context) {
|
func (h *QuoteHandler) PriceLevels(c *gin.Context) {
|
||||||
var req services.PriceLevelsRequest
|
var req services.PriceLevelsRequest
|
||||||
if err := c.ShouldBindJSON(&req); err != nil {
|
if err := c.ShouldBindJSON(&req); err != nil {
|
||||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
RespondError(c, http.StatusBadRequest, "invalid request", err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
result, err := h.quoteService.CalculatePriceLevels(&req)
|
result, err := h.quoteService.CalculatePriceLevels(&req)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
RespondError(c, http.StatusBadRequest, "invalid request", err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
internal/handlers/respond.go (new file, +73)
@@ -0,0 +1,73 @@
+package handlers
+
+import (
+	"encoding/json"
+	"errors"
+	"io"
+	"strings"
+
+	"github.com/gin-gonic/gin"
+)
+
+func RespondError(c *gin.Context, status int, fallback string, err error) {
+	if err != nil {
+		_ = c.Error(err)
+	}
+	c.JSON(status, gin.H{"error": clientFacingErrorMessage(status, fallback, err)})
+}
+
+func clientFacingErrorMessage(status int, fallback string, err error) string {
+	if err == nil {
+		return fallback
+	}
+	if status >= 500 {
+		return fallback
+	}
+	if isRequestDecodeError(err) {
+		return fallback
+	}
+
+	message := strings.TrimSpace(err.Error())
+	if message == "" {
+		return fallback
+	}
+	if looksTechnicalError(message) {
+		return fallback
+	}
+	return message
+}
+
+func isRequestDecodeError(err error) bool {
+	var syntaxErr *json.SyntaxError
+	if errors.As(err, &syntaxErr) {
+		return true
+	}
+
+	var unmarshalTypeErr *json.UnmarshalTypeError
+	if errors.As(err, &unmarshalTypeErr) {
+		return true
+	}
+
+	return errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, io.EOF)
+}
+
+func looksTechnicalError(message string) bool {
+	lower := strings.ToLower(strings.TrimSpace(message))
+	needles := []string{
+		"sql",
+		"gorm",
+		"driver",
+		"constraint",
+		"syntax error",
+		"unexpected eof",
+		"record not found",
+		"no such table",
+		"stack trace",
+	}
+	for _, needle := range needles {
+		if strings.Contains(lower, needle) {
+			return true
+		}
+	}
+	return false
+}
internal/handlers/respond_test.go (new file, +41)
@@ -0,0 +1,41 @@
+package handlers
+
+import (
+	"encoding/json"
+	"testing"
+)
+
+func TestClientFacingErrorMessageKeepsDomain4xx(t *testing.T) {
+	t.Parallel()
+
+	got := clientFacingErrorMessage(400, "invalid request", &json.SyntaxError{Offset: 1})
+	if got != "invalid request" {
+		t.Fatalf("expected fallback for decode error, got %q", got)
+	}
+}
+
+func TestClientFacingErrorMessagePreservesBusinessMessage(t *testing.T) {
+	t.Parallel()
+
+	err := errString("main project variant cannot be deleted")
+	got := clientFacingErrorMessage(400, "invalid request", err)
+	if got != err.Error() {
+		t.Fatalf("expected business message, got %q", got)
+	}
+}
+
+func TestClientFacingErrorMessageHidesTechnical4xx(t *testing.T) {
+	t.Parallel()
+
+	err := errString("sql: no rows in result set")
+	got := clientFacingErrorMessage(404, "resource not found", err)
+	if got != "resource not found" {
+		t.Fatalf("expected fallback for technical error, got %q", got)
+	}
+}
+
+type errString string
+
+func (e errString) Error() string {
+	return string(e)
+}
@@ -1,21 +1,20 @@
 package handlers

 import (
+	"errors"
 	"fmt"
 	"html/template"
 	"log/slog"
 	"net"
 	"net/http"
-	"os"
-	"path/filepath"
 	"strconv"
 	"time"

 	qfassets "git.mchus.pro/mchus/quoteforge"
 	"git.mchus.pro/mchus/quoteforge/internal/db"
 	"git.mchus.pro/mchus/quoteforge/internal/localdb"
-	mysqlDriver "github.com/go-sql-driver/mysql"
 	"github.com/gin-gonic/gin"
+	mysqlDriver "github.com/go-sql-driver/mysql"
 	gormmysql "gorm.io/driver/mysql"
 	"gorm.io/gorm"
 	"gorm.io/gorm/logger"

@@ -28,7 +27,9 @@ type SetupHandler struct {
 	restartSig chan struct{}
 }

-func NewSetupHandler(localDB *localdb.LocalDB, connMgr *db.ConnectionManager, templatesPath string, restartSig chan struct{}) (*SetupHandler, error) {
+var errPermissionProbeRollback = errors.New("permission probe rollback")
+
+func NewSetupHandler(localDB *localdb.LocalDB, connMgr *db.ConnectionManager, _ string, restartSig chan struct{}) (*SetupHandler, error) {
 	funcMap := template.FuncMap{
 		"sub": func(a, b int) int { return a - b },
 		"add": func(a, b int) int { return a + b },

@@ -37,14 +38,9 @@ func NewSetupHandler(localDB *localdb.LocalDB, connMgr *db.ConnectionManager, te
 	templates := make(map[string]*template.Template)

 	// Load setup template (standalone, no base needed)
-	setupPath := filepath.Join(templatesPath, "setup.html")
 	var tmpl *template.Template
 	var err error
-	if stat, statErr := os.Stat(templatesPath); statErr == nil && stat.IsDir() {
-		tmpl, err = template.New("").Funcs(funcMap).ParseFiles(setupPath)
-	} else {
-		tmpl, err = template.New("").Funcs(funcMap).ParseFS(qfassets.TemplatesFS, "web/templates/setup.html")
-	}
+	tmpl, err = template.New("").Funcs(funcMap).ParseFS(qfassets.TemplatesFS, "web/templates/setup.html")
 	if err != nil {
 		return nil, fmt.Errorf("parsing setup template: %w", err)
 	}

@@ -71,7 +67,8 @@ func (h *SetupHandler) ShowSetup(c *gin.Context) {

 	tmpl := h.templates["setup.html"]
 	if err := tmpl.ExecuteTemplate(c.Writer, "setup.html", data); err != nil {
-		c.String(http.StatusInternalServerError, "Template error: %v", err)
+		_ = c.Error(err)
+		c.String(http.StatusInternalServerError, "Template error")
 	}
 }

@@ -96,49 +93,16 @@ func (h *SetupHandler) TestConnection(c *gin.Context) {
 	}

 	dsn := buildMySQLDSN(host, port, database, user, password, 5*time.Second)
-	db, err := gorm.Open(gormmysql.Open(dsn), &gorm.Config{
-		Logger: logger.Default.LogMode(logger.Silent),
-	})
+	lotCount, canWrite, err := validateMariaDBConnection(dsn)
 	if err != nil {
+		_ = c.Error(err)
 		c.JSON(http.StatusOK, gin.H{
 			"success": false,
-			"error":   fmt.Sprintf("Connection failed: %v", err),
+			"error":   "Connection check failed",
 		})
 		return
 	}

-	sqlDB, err := db.DB()
-	if err != nil {
-		c.JSON(http.StatusOK, gin.H{
-			"success": false,
-			"error":   fmt.Sprintf("Failed to get database handle: %v", err),
-		})
-		return
-	}
-	defer sqlDB.Close()
-
-	if err := sqlDB.Ping(); err != nil {
-		c.JSON(http.StatusOK, gin.H{
-			"success": false,
-			"error":   fmt.Sprintf("Ping failed: %v", err),
-		})
-		return
-	}
-
-	// Check for required tables
-	var lotCount int64
-	if err := db.Table("lot").Count(&lotCount).Error; err != nil {
-		c.JSON(http.StatusOK, gin.H{
-			"success": false,
-			"error":   fmt.Sprintf("Table 'lot' not found or inaccessible: %v", err),
-		})
-		return
-	}
-
-	// Check write permission
-	canWrite := testWritePermission(db)
-
 	c.JSON(http.StatusOK, gin.H{
 		"success":   true,
 		"lot_count": lotCount,

@@ -171,26 +135,21 @@ func (h *SetupHandler) SaveConnection(c *gin.Context) {

 	// Test connection first
 	dsn := buildMySQLDSN(host, port, database, user, password, 5*time.Second)
-	db, err := gorm.Open(gormmysql.Open(dsn), &gorm.Config{
-		Logger: logger.Default.LogMode(logger.Silent),
-	})
-	if err != nil {
+	if _, _, err := validateMariaDBConnection(dsn); err != nil {
+		_ = c.Error(err)
 		c.JSON(http.StatusBadRequest, gin.H{
 			"success": false,
-			"error":   fmt.Sprintf("Connection failed: %v", err),
+			"error":   "Connection check failed",
 		})
 		return
 	}

-	sqlDB, _ := db.DB()
-	sqlDB.Close()
-
 	// Save settings
 	if err := h.localDB.SaveSettings(host, port, database, user, password); err != nil {
+		_ = c.Error(err)
 		c.JSON(http.StatusInternalServerError, gin.H{
 			"success": false,
-			"error":   fmt.Sprintf("Failed to save settings: %v", err),
+			"error":   "Failed to save settings",
 		})
 		return
 	}

@@ -239,22 +198,6 @@ func (h *SetupHandler) GetStatus(c *gin.Context) {
 	})
 }

-func testWritePermission(db *gorm.DB) bool {
-	// Simple check: try to create a temporary table and drop it
-	testTable := fmt.Sprintf("qt_write_test_%d", time.Now().UnixNano())
-
-	// Try to create a test table
-	err := db.Exec(fmt.Sprintf("CREATE TABLE %s (id INT)", testTable)).Error
-	if err != nil {
-		return false
-	}
-
-	// Drop it immediately
-	db.Exec(fmt.Sprintf("DROP TABLE %s", testTable))
-
-	return true
-}
-
 func buildMySQLDSN(host string, port int, database, user, password string, timeout time.Duration) string {
 	cfg := mysqlDriver.NewConfig()
 	cfg.User = user

@@ -270,3 +213,47 @@ func buildMySQLDSN(host string, port int, database, user, password string, timeo
 	}
 	return cfg.FormatDSN()
 }
+
+func validateMariaDBConnection(dsn string) (int64, bool, error) {
+	db, err := gorm.Open(gormmysql.Open(dsn), &gorm.Config{
+		Logger: logger.Default.LogMode(logger.Silent),
+	})
+	if err != nil {
+		return 0, false, fmt.Errorf("open MariaDB connection: %w", err)
+	}
+
+	sqlDB, err := db.DB()
+	if err != nil {
+		return 0, false, fmt.Errorf("get database handle: %w", err)
+	}
+	defer sqlDB.Close()
+
+	if err := sqlDB.Ping(); err != nil {
+		return 0, false, fmt.Errorf("ping MariaDB: %w", err)
+	}
+
+	var lotCount int64
+	if err := db.Table("lot").Count(&lotCount).Error; err != nil {
+		return 0, false, fmt.Errorf("check required table lot: %w", err)
+	}
+
+	return lotCount, testSyncWritePermission(db), nil
+}
+
+func testSyncWritePermission(db *gorm.DB) bool {
+	sentinel := fmt.Sprintf("quoteforge-permission-check-%d", time.Now().UnixNano())
+	err := db.Transaction(func(tx *gorm.DB) error {
+		if err := tx.Exec(`
+			INSERT INTO qt_client_schema_state (username, hostname, last_checked_at, updated_at)
+			VALUES (?, ?, NOW(), NOW())
+			ON DUPLICATE KEY UPDATE
+				last_checked_at = VALUES(last_checked_at),
+				updated_at = VALUES(updated_at)
+		`, sentinel, "setup-check").Error; err != nil {
+			return err
+		}
+		return errPermissionProbeRollback
+	})
+
+	return errors.Is(err, errPermissionProbeRollback)
+}
@@ -6,8 +6,7 @@ import (
|
|||||||
"html/template"
|
"html/template"
|
||||||
"log/slog"
|
"log/slog"
|
||||||
"net/http"
|
"net/http"
|
||||||
"os"
|
"strings"
|
||||||
"path/filepath"
|
|
||||||
stdsync "sync"
|
stdsync "sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
@@ -32,16 +31,9 @@ type SyncHandler struct {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// NewSyncHandler creates a new sync handler
|
// NewSyncHandler creates a new sync handler
|
||||||
func NewSyncHandler(localDB *localdb.LocalDB, syncService *sync.Service, connMgr *db.ConnectionManager, templatesPath string, autoSyncInterval time.Duration) (*SyncHandler, error) {
|
func NewSyncHandler(localDB *localdb.LocalDB, syncService *sync.Service, connMgr *db.ConnectionManager, _ string, autoSyncInterval time.Duration) (*SyncHandler, error) {
|
||||||
// Load sync_status partial template
|
// Load sync_status partial template
|
||||||
partialPath := filepath.Join(templatesPath, "partials", "sync_status.html")
|
tmpl, err := template.ParseFS(qfassets.TemplatesFS, "web/templates/partials/sync_status.html")
|
||||||
var tmpl *template.Template
|
|
||||||
var err error
|
|
||||||
if stat, statErr := os.Stat(templatesPath); statErr == nil && stat.IsDir() {
|
|
||||||
tmpl, err = template.ParseFiles(partialPath)
|
|
||||||
} else {
|
|
||||||
tmpl, err = template.ParseFS(qfassets.TemplatesFS, "web/templates/partials/sync_status.html")
|
|
||||||
}
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
@@ -58,15 +50,20 @@ func NewSyncHandler(localDB *localdb.LocalDB, syncService *sync.Service, connMgr
|
|||||||
|
|
||||||
// SyncStatusResponse represents the sync status
|
// SyncStatusResponse represents the sync status
|
||||||
type SyncStatusResponse struct {
|
type SyncStatusResponse struct {
|
||||||
LastComponentSync *time.Time `json:"last_component_sync"`
|
LastComponentSync *time.Time `json:"last_component_sync"`
|
||||||
LastPricelistSync *time.Time `json:"last_pricelist_sync"`
|
LastPricelistSync *time.Time `json:"last_pricelist_sync"`
|
||||||
IsOnline bool `json:"is_online"`
|
LastPricelistAttemptAt *time.Time `json:"last_pricelist_attempt_at,omitempty"`
|
||||||
ComponentsCount int64 `json:"components_count"`
|
LastPricelistSyncStatus string `json:"last_pricelist_sync_status,omitempty"`
|
||||||
PricelistsCount int64 `json:"pricelists_count"`
|
LastPricelistSyncError string `json:"last_pricelist_sync_error,omitempty"`
|
||||||
ServerPricelists int `json:"server_pricelists"`
|
HasIncompleteServerSync bool `json:"has_incomplete_server_sync"`
|
||||||
NeedComponentSync bool `json:"need_component_sync"`
|
KnownServerChangesMiss bool `json:"known_server_changes_missing"`
|
||||||
NeedPricelistSync bool `json:"need_pricelist_sync"`
|
IsOnline bool `json:"is_online"`
|
||||||
Readiness *sync.SyncReadiness `json:"readiness,omitempty"`
|
ComponentsCount int64 `json:"components_count"`
|
||||||
|
PricelistsCount int64 `json:"pricelists_count"`
|
||||||
|
ServerPricelists int `json:"server_pricelists"`
|
||||||
|
NeedComponentSync bool `json:"need_component_sync"`
|
||||||
|
NeedPricelistSync bool `json:"need_pricelist_sync"`
|
||||||
|
Readiness *sync.SyncReadiness `json:"readiness,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type SyncReadinessResponse struct {
|
type SyncReadinessResponse struct {
|
||||||
@@ -81,42 +78,34 @@ type SyncReadinessResponse struct {
|
|||||||
// GetStatus returns current sync status
|
// GetStatus returns current sync status
|
||||||
// GET /api/sync/status
|
// GET /api/sync/status
|
||||||
func (h *SyncHandler) GetStatus(c *gin.Context) {
|
func (h *SyncHandler) GetStatus(c *gin.Context) {
|
||||||
// Check online status by pinging MariaDB
|
connStatus := h.connMgr.GetStatus()
|
||||||
isOnline := h.checkOnline()
|
isOnline := connStatus.IsConnected && strings.TrimSpace(connStatus.LastError) == ""
|
||||||
|
|
||||||
// Get sync times
|
|
||||||
lastComponentSync := h.localDB.GetComponentSyncTime()
|
lastComponentSync := h.localDB.GetComponentSyncTime()
|
||||||
lastPricelistSync := h.localDB.GetLastSyncTime()
|
lastPricelistSync := h.localDB.GetLastSyncTime()
|
||||||
|
|
||||||
// Get counts
|
|
||||||
componentsCount := h.localDB.CountLocalComponents()
|
componentsCount := h.localDB.CountLocalComponents()
|
||||||
pricelistsCount := h.localDB.CountLocalPricelists()
|
pricelistsCount := h.localDB.CountLocalPricelists()
|
||||||
|
lastPricelistAttemptAt := h.localDB.GetLastPricelistSyncAttemptAt()
|
||||||
// Get server pricelist count if online
|
lastPricelistSyncStatus := h.localDB.GetLastPricelistSyncStatus()
|
||||||
serverPricelists := 0
|
lastPricelistSyncError := h.localDB.GetLastPricelistSyncError()
|
||||||
needPricelistSync := false
|
hasFailedSync := strings.EqualFold(lastPricelistSyncStatus, "failed")
|
||||||
if isOnline {
|
|
||||||
status, err := h.syncService.GetStatus()
|
|
||||||
if err == nil {
|
|
||||||
serverPricelists = status.ServerPricelists
|
|
||||||
needPricelistSync = status.NeedsSync
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check if component sync is needed (older than 24 hours)
|
|
||||||
needComponentSync := h.localDB.NeedComponentSync(24)
|
needComponentSync := h.localDB.NeedComponentSync(24)
|
||||||
readiness := h.getReadinessCached(10 * time.Second)
|
readiness := h.getReadinessLocal()
|
||||||
|
|
||||||
c.JSON(http.StatusOK, SyncStatusResponse{
|
c.JSON(http.StatusOK, SyncStatusResponse{
|
||||||
LastComponentSync: lastComponentSync,
|
LastComponentSync: lastComponentSync,
|
||||||
LastPricelistSync: lastPricelistSync,
|
LastPricelistSync: lastPricelistSync,
|
||||||
IsOnline: isOnline,
|
LastPricelistAttemptAt: lastPricelistAttemptAt,
|
||||||
ComponentsCount: componentsCount,
|
LastPricelistSyncStatus: lastPricelistSyncStatus,
|
||||||
PricelistsCount: pricelistsCount,
|
LastPricelistSyncError: lastPricelistSyncError,
|
||||||
ServerPricelists: serverPricelists,
|
HasIncompleteServerSync: hasFailedSync,
|
||||||
NeedComponentSync: needComponentSync,
|
KnownServerChangesMiss: hasFailedSync,
|
||||||
NeedPricelistSync: needPricelistSync,
|
IsOnline: isOnline,
|
||||||
Readiness: readiness,
|
ComponentsCount: componentsCount,
|
||||||
|
PricelistsCount: pricelistsCount,
|
||||||
|
ServerPricelists: 0,
|
||||||
|
NeedComponentSync: needComponentSync,
|
||||||
|
NeedPricelistSync: lastPricelistSync == nil || hasFailedSync,
|
||||||
|
Readiness: readiness,
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -125,9 +114,7 @@ func (h *SyncHandler) GetStatus(c *gin.Context) {
|
|||||||
func (h *SyncHandler) GetReadiness(c *gin.Context) {
|
func (h *SyncHandler) GetReadiness(c *gin.Context) {
|
||||||
readiness, err := h.syncService.GetReadiness()
|
readiness, err := h.syncService.GetReadiness()
|
||||||
if err != nil && readiness == nil {
|
if err != nil && readiness == nil {
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
RespondError(c, http.StatusInternalServerError, "internal server error", err)
|
||||||
"error": err.Error(),
|
|
||||||
})
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
if readiness == nil {
|
if readiness == nil {
|
||||||
@@ -167,8 +154,9 @@ func (h *SyncHandler) ensureSyncReadiness(c *gin.Context) bool {
|
|||||||
|
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": err.Error(),
|
"error": "internal server error",
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
_ = readiness
|
_ = readiness
|
||||||
return false
|
return false
|
||||||
}
|
}
|
||||||
@@ -193,8 +181,9 @@ func (h *SyncHandler) SyncComponents(c *gin.Context) {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
c.JSON(http.StatusServiceUnavailable, gin.H{
|
c.JSON(http.StatusServiceUnavailable, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": "Database connection failed: " + err.Error(),
|
"error": "database connection failed",
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -203,8 +192,9 @@ func (h *SyncHandler) SyncComponents(c *gin.Context) {
|
|||||||
slog.Error("component sync failed", "error", err)
|
slog.Error("component sync failed", "error", err)
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": err.Error(),
|
"error": "component sync failed",
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -229,8 +219,9 @@ func (h *SyncHandler) SyncPricelists(c *gin.Context) {
|
|||||||
slog.Error("pricelist sync failed", "error", err)
|
slog.Error("pricelist sync failed", "error", err)
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": err.Error(),
|
"error": "pricelist sync failed",
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -243,6 +234,34 @@ func (h *SyncHandler) SyncPricelists(c *gin.Context) {
|
|||||||
h.syncService.RecordSyncHeartbeat()
|
h.syncService.RecordSyncHeartbeat()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// SyncPartnumberBooks syncs partnumber book snapshots from MariaDB to local SQLite.
|
||||||
|
// POST /api/sync/partnumber-books
|
||||||
|
func (h *SyncHandler) SyncPartnumberBooks(c *gin.Context) {
|
||||||
|
if !h.ensureSyncReadiness(c) {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
startTime := time.Now()
|
||||||
|
pulled, err := h.syncService.PullPartnumberBooks()
|
||||||
|
if err != nil {
|
||||||
|
slog.Error("partnumber books pull failed", "error", err)
|
||||||
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
|
"success": false,
|
||||||
|
"error": "partnumber books sync failed",
|
||||||
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
c.JSON(http.StatusOK, SyncResultResponse{
|
||||||
|
Success: true,
|
||||||
|
Message: "Partnumber books synced successfully",
|
||||||
|
Synced: pulled,
|
||||||
|
Duration: time.Since(startTime).String(),
|
||||||
|
})
|
||||||
|
h.syncService.RecordSyncHeartbeat()
|
||||||
|
}
|
||||||
|
|
||||||
// SyncAllResponse represents result of full sync
|
// SyncAllResponse represents result of full sync
|
||||||
type SyncAllResponse struct {
|
type SyncAllResponse struct {
|
||||||
Success bool `json:"success"`
|
Success bool `json:"success"`
|
||||||
@@ -277,8 +296,9 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
|||||||
slog.Error("pending push failed during full sync", "error", err)
|
slog.Error("pending push failed during full sync", "error", err)
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": "Pending changes push failed: " + err.Error(),
|
"error": "pending changes push failed",
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -287,8 +307,9 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
c.JSON(http.StatusServiceUnavailable, gin.H{
|
c.JSON(http.StatusServiceUnavailable, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": "Database connection failed: " + err.Error(),
|
"error": "database connection failed",
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -297,8 +318,9 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
|||||||
slog.Error("component sync failed during full sync", "error", err)
|
slog.Error("component sync failed during full sync", "error", err)
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": "Component sync failed: " + err.Error(),
|
"error": "component sync failed",
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
componentsSynced = compResult.TotalSynced
|
componentsSynced = compResult.TotalSynced
|
||||||
@@ -309,10 +331,11 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
|||||||
slog.Error("pricelist sync failed during full sync", "error", err)
|
slog.Error("pricelist sync failed during full sync", "error", err)
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": "Pricelist sync failed: " + err.Error(),
|
"error": "pricelist sync failed",
|
||||||
"pending_pushed": pendingPushed,
|
"pending_pushed": pendingPushed,
|
||||||
"components_synced": componentsSynced,
|
"components_synced": componentsSynced,
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -321,11 +344,12 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
|||||||
slog.Error("project import failed during full sync", "error", err)
|
slog.Error("project import failed during full sync", "error", err)
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": "Project import failed: " + err.Error(),
|
"error": "project import failed",
|
||||||
"pending_pushed": pendingPushed,
|
"pending_pushed": pendingPushed,
|
||||||
"components_synced": componentsSynced,
|
"components_synced": componentsSynced,
|
||||||
"pricelists_synced": pricelistsSynced,
|
"pricelists_synced": pricelistsSynced,
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -334,7 +358,7 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
|||||||
slog.Error("configuration import failed during full sync", "error", err)
|
slog.Error("configuration import failed during full sync", "error", err)
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": "Configuration import failed: " + err.Error(),
|
"error": "configuration import failed",
|
||||||
"pending_pushed": pendingPushed,
|
"pending_pushed": pendingPushed,
|
||||||
"components_synced": componentsSynced,
|
"components_synced": componentsSynced,
|
||||||
"pricelists_synced": pricelistsSynced,
|
"pricelists_synced": pricelistsSynced,
|
||||||
@@ -342,6 +366,7 @@ func (h *SyncHandler) SyncAll(c *gin.Context) {
|
|||||||
"projects_updated": projectsResult.Updated,
|
"projects_updated": projectsResult.Updated,
|
||||||
"projects_skipped": projectsResult.Skipped,
|
"projects_skipped": projectsResult.Skipped,
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -380,8 +405,9 @@ func (h *SyncHandler) PushPendingChanges(c *gin.Context) {
|
|||||||
slog.Error("push pending changes failed", "error", err)
|
slog.Error("push pending changes failed", "error", err)
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
"success": false,
|
"success": false,
|
||||||
"error": err.Error(),
|
"error": "pending changes push failed",
|
||||||
})
|
})
|
||||||
|
_ = c.Error(err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -408,9 +434,7 @@ func (h *SyncHandler) GetPendingCount(c *gin.Context) {
|
|||||||
func (h *SyncHandler) GetPendingChanges(c *gin.Context) {
|
func (h *SyncHandler) GetPendingChanges(c *gin.Context) {
|
||||||
changes, err := h.localDB.GetPendingChanges()
|
changes, err := h.localDB.GetPendingChanges()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
c.JSON(http.StatusInternalServerError, gin.H{
|
RespondError(c, http.StatusInternalServerError, "internal server error", err)
|
||||||
"error": err.Error(),
|
|
||||||
})
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -427,8 +451,9 @@ func (h *SyncHandler) RepairPendingChanges(c *gin.Context) {
 		slog.Error("repair pending changes failed", "error", err)
 		c.JSON(http.StatusInternalServerError, gin.H{
 			"success": false,
-			"error":   err.Error(),
+			"error":   "pending changes repair failed",
 		})
+		_ = c.Error(err)
 		return
 	}
 
@@ -447,8 +472,13 @@ type SyncInfoResponse struct {
 	DBName string `json:"db_name"`
 
 	// Status
 	IsOnline   bool       `json:"is_online"`
 	LastSyncAt *time.Time `json:"last_sync_at"`
+	LastPricelistAttemptAt  *time.Time `json:"last_pricelist_attempt_at,omitempty"`
+	LastPricelistSyncStatus string     `json:"last_pricelist_sync_status,omitempty"`
+	LastPricelistSyncError  string     `json:"last_pricelist_sync_error,omitempty"`
+	NeedPricelistSync       bool       `json:"need_pricelist_sync"`
+	HasIncompleteServerSync bool       `json:"has_incomplete_server_sync"`
 
 	// Statistics
 	LotCount int64 `json:"lot_count"`
@@ -484,8 +514,8 @@ type SyncError struct {
 // GetInfo returns sync information for modal
 // GET /api/sync/info
 func (h *SyncHandler) GetInfo(c *gin.Context) {
-	// Check online status by pinging MariaDB
-	isOnline := h.checkOnline()
+	connStatus := h.connMgr.GetStatus()
+	isOnline := connStatus.IsConnected && strings.TrimSpace(connStatus.LastError) == ""
 
 	// Get DB connection info
 	var dbHost, dbUser, dbName string
@@ -497,19 +527,18 @@ func (h *SyncHandler) GetInfo(c *gin.Context) {
 
 	// Get sync times
 	lastPricelistSync := h.localDB.GetLastSyncTime()
-	// Get MariaDB counts (if online)
-	var lotCount, lotLogCount int64
-	if isOnline {
-		if mariaDB, err := h.connMgr.GetDB(); err == nil {
-			mariaDB.Table("lot").Count(&lotCount)
-			mariaDB.Table("lot_log").Count(&lotLogCount)
-		}
-	}
+	lastPricelistAttemptAt := h.localDB.GetLastPricelistSyncAttemptAt()
+	lastPricelistSyncStatus := h.localDB.GetLastPricelistSyncStatus()
+	lastPricelistSyncError := h.localDB.GetLastPricelistSyncError()
+	hasFailedSync := strings.EqualFold(lastPricelistSyncStatus, "failed")
+	needPricelistSync := lastPricelistSync == nil || hasFailedSync
+	hasIncompleteServerSync := hasFailedSync
 
 	// Get local counts
 	configCount := h.localDB.CountConfigurations()
 	projectCount := h.localDB.CountProjects()
+	componentCount := h.localDB.CountLocalComponents()
+	pricelistCount := h.localDB.CountLocalPricelists()
 
 	// Get error count (only changes with LastError != "")
 	errorCount := int(h.localDB.CountErroredChanges())
@@ -536,22 +565,27 @@ func (h *SyncHandler) GetInfo(c *gin.Context) {
 		syncErrors = syncErrors[:10]
 	}
 
-	readiness := h.getReadinessCached(10 * time.Second)
+	readiness := h.getReadinessLocal()
 
 	c.JSON(http.StatusOK, SyncInfoResponse{
 		DBHost:     dbHost,
 		DBUser:     dbUser,
 		DBName:     dbName,
 		IsOnline:   isOnline,
 		LastSyncAt: lastPricelistSync,
-		LotCount:       lotCount,
-		LotLogCount:    lotLogCount,
-		ConfigCount:    configCount,
-		ProjectCount:   projectCount,
-		PendingChanges: changes,
-		ErrorCount:     errorCount,
-		Errors:         syncErrors,
-		Readiness:      readiness,
+		LastPricelistAttemptAt:  lastPricelistAttemptAt,
+		LastPricelistSyncStatus: lastPricelistSyncStatus,
+		LastPricelistSyncError:  lastPricelistSyncError,
+		NeedPricelistSync:       needPricelistSync,
+		HasIncompleteServerSync: hasIncompleteServerSync,
+		LotCount:       componentCount,
+		LotLogCount:    pricelistCount,
+		ConfigCount:    configCount,
+		ProjectCount:   projectCount,
+		PendingChanges: changes,
+		ErrorCount:     errorCount,
+		Errors:         syncErrors,
+		Readiness:      readiness,
 	})
 }
 
@@ -577,9 +611,7 @@ func (h *SyncHandler) GetUsersStatus(c *gin.Context) {
 
 	users, err := h.syncService.ListUserSyncStatuses(threshold)
 	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{
-			"error": err.Error(),
-		})
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
 		return
 	}
 
@@ -608,15 +640,33 @@ func (h *SyncHandler) SyncStatusPartial(c *gin.Context) {
 
 	// Get pending count
 	pendingCount := h.localDB.GetPendingCount()
-	readiness := h.getReadinessCached(10 * time.Second)
+	readiness := h.getReadinessLocal()
 	isBlocked := readiness != nil && readiness.Blocked
+	lastPricelistSyncStatus := h.localDB.GetLastPricelistSyncStatus()
+	lastPricelistSyncError := h.localDB.GetLastPricelistSyncError()
+	hasFailedSync := strings.EqualFold(lastPricelistSyncStatus, "failed")
+	hasIncompleteServerSync := hasFailedSync
 
 	slog.Debug("rendering sync status", "is_offline", isOffline, "pending_count", pendingCount, "sync_blocked", isBlocked)
 
 	data := gin.H{
 		"IsOffline":    isOffline,
 		"PendingCount": pendingCount,
 		"IsBlocked":    isBlocked,
+		"HasFailedSync":           hasFailedSync,
+		"HasIncompleteServerSync": hasIncompleteServerSync,
+		"SyncIssueTitle": func() string {
+			if hasIncompleteServerSync {
+				return "The last pricelist sync was interrupted. The server has changes that are not yet downloaded locally."
+			}
+			if hasFailedSync {
+				if lastPricelistSyncError != "" {
+					return lastPricelistSyncError
+				}
+				return "The last pricelist sync finished with an error."
+			}
+			return ""
+		}(),
 		"BlockedReason": func() string {
 			if readiness == nil {
 				return ""
@@ -628,27 +678,71 @@ func (h *SyncHandler) SyncStatusPartial(c *gin.Context) {
 	c.Header("Content-Type", "text/html; charset=utf-8")
 	if err := h.tmpl.ExecuteTemplate(c.Writer, "sync_status", data); err != nil {
 		slog.Error("failed to render sync_status template", "error", err)
-		c.String(http.StatusInternalServerError, "Template error: "+err.Error())
+		_ = c.Error(err)
+		c.String(http.StatusInternalServerError, "Template error")
 	}
 }
 
-func (h *SyncHandler) getReadinessCached(maxAge time.Duration) *sync.SyncReadiness {
+func (h *SyncHandler) getReadinessLocal() *sync.SyncReadiness {
 	h.readinessMu.Lock()
-	if h.readinessCached != nil && time.Since(h.readinessCachedAt) < maxAge {
+	if h.readinessCached != nil && time.Since(h.readinessCachedAt) < 10*time.Second {
 		cached := *h.readinessCached
 		h.readinessMu.Unlock()
 		return &cached
 	}
 	h.readinessMu.Unlock()
 
-	readiness, err := h.syncService.GetReadiness()
-	if err != nil && readiness == nil {
+	state, err := h.localDB.GetSyncGuardState()
+	if err != nil || state == nil {
 		return nil
 	}
 
+	readiness := &sync.SyncReadiness{
+		Status:                state.Status,
+		Blocked:               state.Status == sync.ReadinessBlocked,
+		ReasonCode:            state.ReasonCode,
+		ReasonText:            state.ReasonText,
+		RequiredMinAppVersion: state.RequiredMinAppVersion,
+		LastCheckedAt:         state.LastCheckedAt,
+	}
+
 	h.readinessMu.Lock()
 	h.readinessCached = readiness
 	h.readinessCachedAt = time.Now()
 	h.readinessMu.Unlock()
 	return readiness
 }
+
+// ReportPartnumberSeen pushes unresolved vendor partnumbers to qt_vendor_partnumber_seen on MariaDB.
+// POST /api/sync/partnumber-seen
+func (h *SyncHandler) ReportPartnumberSeen(c *gin.Context) {
+	var body struct {
+		Items []struct {
+			Partnumber  string `json:"partnumber"`
+			Description string `json:"description"`
+			Ignored     bool   `json:"ignored"`
+		} `json:"items"`
+	}
+	if err := c.ShouldBindJSON(&body); err != nil {
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
+		return
+	}
+
+	items := make([]sync.SeenPartnumber, 0, len(body.Items))
+	for _, it := range body.Items {
+		if it.Partnumber != "" {
+			items = append(items, sync.SeenPartnumber{
+				Partnumber:  it.Partnumber,
+				Description: it.Description,
+				Ignored:     it.Ignored,
+			})
+		}
+	}
+
+	if err := h.syncService.PushPartnumberSeen(items); err != nil {
+		RespondError(c, http.StatusServiceUnavailable, "service unavailable", err)
+		return
+	}
+
+	c.JSON(http.StatusOK, gin.H{"reported": len(items)})
+}
internal/handlers/vendor_spec.go (new file, 201 lines)
@@ -0,0 +1,201 @@
+package handlers
+
+import (
+	"errors"
+	"net/http"
+	"strings"
+
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"git.mchus.pro/mchus/quoteforge/internal/repository"
+	"git.mchus.pro/mchus/quoteforge/internal/services"
+	"github.com/gin-gonic/gin"
+)
+
+// VendorSpecHandler handles vendor BOM spec operations for a configuration.
+type VendorSpecHandler struct {
+	localDB       *localdb.LocalDB
+	configService *services.LocalConfigurationService
+}
+
+func NewVendorSpecHandler(localDB *localdb.LocalDB) *VendorSpecHandler {
+	return &VendorSpecHandler{
+		localDB:       localDB,
+		configService: services.NewLocalConfigurationService(localDB, nil, nil, func() bool { return false }),
+	}
+}
+
+// lookupConfig finds an active configuration by UUID using the standard localDB method.
+func (h *VendorSpecHandler) lookupConfig(uuid string) (*localdb.LocalConfiguration, error) {
+	cfg, err := h.localDB.GetConfigurationByUUID(uuid)
+	if err != nil {
+		return nil, err
+	}
+	if !cfg.IsActive {
+		return nil, errors.New("not active")
+	}
+	return cfg, nil
+}
+
+// GetVendorSpec returns the vendor spec (BOM) for a configuration.
+// GET /api/configs/:uuid/vendor-spec
+func (h *VendorSpecHandler) GetVendorSpec(c *gin.Context) {
+	cfg, err := h.lookupConfig(c.Param("uuid"))
+	if err != nil {
+		c.JSON(http.StatusNotFound, gin.H{"error": "configuration not found"})
+		return
+	}
+
+	spec := cfg.VendorSpec
+	if spec == nil {
+		spec = localdb.VendorSpec{}
+	}
+	c.JSON(http.StatusOK, gin.H{"vendor_spec": spec})
+}
+
+// PutVendorSpec saves (replaces) the vendor spec for a configuration.
+// PUT /api/configs/:uuid/vendor-spec
+func (h *VendorSpecHandler) PutVendorSpec(c *gin.Context) {
+	cfg, err := h.lookupConfig(c.Param("uuid"))
+	if err != nil {
+		c.JSON(http.StatusNotFound, gin.H{"error": "configuration not found"})
+		return
+	}
+
+	var body struct {
+		VendorSpec []localdb.VendorSpecItem `json:"vendor_spec"`
+	}
+	if err := c.ShouldBindJSON(&body); err != nil {
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
+		return
+	}
+
+	for i := range body.VendorSpec {
+		if body.VendorSpec[i].SortOrder == 0 {
+			body.VendorSpec[i].SortOrder = (i + 1) * 10
+		}
+		// Persist canonical LOT mapping only.
+		body.VendorSpec[i].LotMappings = normalizeLotMappings(body.VendorSpec[i].LotMappings)
+		body.VendorSpec[i].ResolvedLotName = ""
+		body.VendorSpec[i].ResolutionSource = ""
+		body.VendorSpec[i].ManualLotSuggestion = ""
+		body.VendorSpec[i].LotQtyPerPN = 0
+		body.VendorSpec[i].LotAllocations = nil
+	}
+
+	spec := localdb.VendorSpec(body.VendorSpec)
+	if _, err := h.configService.UpdateVendorSpecNoAuth(cfg.UUID, spec); err != nil {
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
+		return
+	}
+
+	c.JSON(http.StatusOK, gin.H{"vendor_spec": spec})
+}
+
+func normalizeLotMappings(in []localdb.VendorSpecLotMapping) []localdb.VendorSpecLotMapping {
+	if len(in) == 0 {
+		return nil
+	}
+	merged := make(map[string]int, len(in))
+	order := make([]string, 0, len(in))
+	for _, m := range in {
+		lot := strings.TrimSpace(m.LotName)
+		if lot == "" {
+			continue
+		}
+		qty := m.QuantityPerPN
+		if qty < 1 {
+			qty = 1
+		}
+		if _, exists := merged[lot]; !exists {
+			order = append(order, lot)
+		}
+		merged[lot] += qty
+	}
+	out := make([]localdb.VendorSpecLotMapping, 0, len(order))
+	for _, lot := range order {
+		out = append(out, localdb.VendorSpecLotMapping{
+			LotName:       lot,
+			QuantityPerPN: merged[lot],
+		})
+	}
+	if len(out) == 0 {
+		return nil
+	}
+	return out
+}
+
+// ResolveVendorSpec resolves vendor PN → LOT without modifying the cart.
+// POST /api/configs/:uuid/vendor-spec/resolve
+func (h *VendorSpecHandler) ResolveVendorSpec(c *gin.Context) {
+	if _, err := h.lookupConfig(c.Param("uuid")); err != nil {
+		c.JSON(http.StatusNotFound, gin.H{"error": "configuration not found"})
+		return
+	}
+
+	var body struct {
+		VendorSpec []localdb.VendorSpecItem `json:"vendor_spec"`
+	}
+	if err := c.ShouldBindJSON(&body); err != nil {
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
+		return
+	}
+
+	bookRepo := repository.NewPartnumberBookRepository(h.localDB.DB())
+	resolver := services.NewVendorSpecResolver(bookRepo)
+
+	resolved, err := resolver.Resolve(body.VendorSpec)
+	if err != nil {
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
+		return
+	}
+
+	book, _ := bookRepo.GetActiveBook()
+	aggregated, err := services.AggregateLOTs(resolved, book, bookRepo)
+	if err != nil {
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
+		return
+	}
+
+	c.JSON(http.StatusOK, gin.H{
+		"resolved":   resolved,
+		"aggregated": aggregated,
+	})
+}
+
+// ApplyVendorSpec applies the resolved BOM to the cart (Estimate items).
+// POST /api/configs/:uuid/vendor-spec/apply
+func (h *VendorSpecHandler) ApplyVendorSpec(c *gin.Context) {
+	cfg, err := h.lookupConfig(c.Param("uuid"))
+	if err != nil {
+		c.JSON(http.StatusNotFound, gin.H{"error": "configuration not found"})
+		return
+	}
+
+	var body struct {
+		Items []struct {
+			LotName   string  `json:"lot_name"`
+			Quantity  int     `json:"quantity"`
+			UnitPrice float64 `json:"unit_price"`
+		} `json:"items"`
+	}
+	if err := c.ShouldBindJSON(&body); err != nil {
+		RespondError(c, http.StatusBadRequest, "invalid request", err)
+		return
+	}
+
+	newItems := make(localdb.LocalConfigItems, 0, len(body.Items))
+	for _, it := range body.Items {
+		newItems = append(newItems, localdb.LocalConfigItem{
+			LotName:   it.LotName,
+			Quantity:  it.Quantity,
+			UnitPrice: it.UnitPrice,
+		})
+	}
+
+	if _, err := h.configService.ApplyVendorSpecItemsNoAuth(cfg.UUID, newItems); err != nil {
+		RespondError(c, http.StatusInternalServerError, "internal server error", err)
+		return
+	}
+
+	c.JSON(http.StatusOK, gin.H{"items": newItems})
+}
@@ -1,23 +1,24 @@
 package handlers
 
 import (
+	"fmt"
 	"html/template"
-	"os"
-	"path/filepath"
 	"strconv"
+	"strings"
 
 	qfassets "git.mchus.pro/mchus/quoteforge"
-	"git.mchus.pro/mchus/quoteforge/internal/repository"
-	"git.mchus.pro/mchus/quoteforge/internal/services"
+	"git.mchus.pro/mchus/quoteforge/internal/appmeta"
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"github.com/gin-gonic/gin"
 )
 
 type WebHandler struct {
 	templates map[string]*template.Template
-	componentService *services.ComponentService
+	localDB   *localdb.LocalDB
 }
 
-func NewWebHandler(templatesPath string, componentService *services.ComponentService) (*WebHandler, error) {
+func NewWebHandler(_ string, localDB *localdb.LocalDB) (*WebHandler, error) {
 	funcMap := template.FuncMap{
 		"sub": func(a, b int) int { return a - b },
 		"add": func(a, b int) int { return a + b },
@@ -60,27 +61,16 @@ func NewWebHandler(templatesPath string, componentService *services.ComponentSer
 	}
 
 	templates := make(map[string]*template.Template)
-	basePath := filepath.Join(templatesPath, "base.html")
-	useDisk := false
-	if stat, statErr := os.Stat(templatesPath); statErr == nil && stat.IsDir() {
-		useDisk = true
-	}
 
 	// Load each page template with base
-	simplePages := []string{"login.html", "configs.html", "projects.html", "project_detail.html", "pricelists.html", "pricelist_detail.html", "config_revisions.html"}
+	simplePages := []string{"configs.html", "projects.html", "project_detail.html", "pricelists.html", "pricelist_detail.html", "config_revisions.html", "partnumber_books.html"}
 	for _, page := range simplePages {
-		pagePath := filepath.Join(templatesPath, page)
 		var tmpl *template.Template
 		var err error
-		if useDisk {
-			tmpl, err = template.New("").Funcs(funcMap).ParseFiles(basePath, pagePath)
-		} else {
-			tmpl, err = template.New("").Funcs(funcMap).ParseFS(
-				qfassets.TemplatesFS,
-				"web/templates/base.html",
-				"web/templates/"+page,
-			)
-		}
+		tmpl, err = template.New("").Funcs(funcMap).ParseFS(
+			qfassets.TemplatesFS,
+			"web/templates/base.html",
+			"web/templates/"+page,
+		)
 		if err != nil {
 			return nil, err
 		}
@@ -88,20 +78,14 @@ func NewWebHandler(templatesPath string, componentService *services.ComponentSer
 	}
 
 	// Index page needs components_list.html as well
-	indexPath := filepath.Join(templatesPath, "index.html")
-	componentsListPath := filepath.Join(templatesPath, "components_list.html")
 	var indexTmpl *template.Template
 	var err error
-	if useDisk {
-		indexTmpl, err = template.New("").Funcs(funcMap).ParseFiles(basePath, indexPath, componentsListPath)
-	} else {
-		indexTmpl, err = template.New("").Funcs(funcMap).ParseFS(
-			qfassets.TemplatesFS,
-			"web/templates/base.html",
-			"web/templates/index.html",
-			"web/templates/components_list.html",
-		)
-	}
+	indexTmpl, err = template.New("").Funcs(funcMap).ParseFS(
+		qfassets.TemplatesFS,
+		"web/templates/base.html",
+		"web/templates/index.html",
+		"web/templates/components_list.html",
+	)
 	if err != nil {
 		return nil, err
 	}
@@ -110,17 +94,12 @@ func NewWebHandler(templatesPath string, componentService *services.ComponentSer
 	// Load partial templates (no base needed)
 	partials := []string{"components_list.html"}
 	for _, partial := range partials {
-		partialPath := filepath.Join(templatesPath, partial)
 		var tmpl *template.Template
 		var err error
-		if useDisk {
-			tmpl, err = template.New("").Funcs(funcMap).ParseFiles(partialPath)
-		} else {
-			tmpl, err = template.New("").Funcs(funcMap).ParseFS(
-				qfassets.TemplatesFS,
-				"web/templates/"+partial,
-			)
-		}
+		tmpl, err = template.New("").Funcs(funcMap).ParseFS(
+			qfassets.TemplatesFS,
+			"web/templates/"+partial,
+		)
 		if err != nil {
 			return nil, err
 		}
@@ -128,21 +107,24 @@ func NewWebHandler(templatesPath string, componentService *services.ComponentSer
 	}
 
 	return &WebHandler{
 		templates: templates,
-		componentService: componentService,
+		localDB:   localDB,
 	}, nil
 }
 
 func (h *WebHandler) render(c *gin.Context, name string, data gin.H) {
+	data["AppVersion"] = appmeta.Version()
 	c.Header("Content-Type", "text/html; charset=utf-8")
 	tmpl, ok := h.templates[name]
 	if !ok {
-		c.String(500, "Template not found: %s", name)
+		_ = c.Error(fmt.Errorf("template %q not found", name))
+		c.String(500, "Template error")
 		return
 	}
 	// Execute the page template which will use base
 	if err := tmpl.ExecuteTemplate(c.Writer, name, data); err != nil {
-		c.String(500, "Template error: %v", err)
+		_ = c.Error(err)
+		c.String(500, "Template error")
 	}
 }
 
@@ -152,36 +134,28 @@ func (h *WebHandler) Index(c *gin.Context) {
 }
 
 func (h *WebHandler) Configurator(c *gin.Context) {
-	categories, _ := h.componentService.GetCategories()
 	uuid := c.Query("uuid")
+	categories, _ := h.localCategories()
-	filter := repository.ComponentFilter{}
-	result, err := h.componentService.List(filter, 1, 20)
+	components, total, err := h.localDB.ListComponents(localdb.ComponentFilter{}, 0, 20)
 
 	data := gin.H{
 		"ActivePage": "configurator",
 		"Categories": categories,
-		"Components": []interface{}{},
+		"Components": []localComponentView{},
 		"Total":      int64(0),
 		"Page":       1,
 		"PerPage":    20,
 		"ConfigUUID": uuid,
 	}
 
-	if err == nil && result != nil {
-		data["Components"] = result.Components
-		data["Total"] = result.Total
-		data["Page"] = result.Page
-		data["PerPage"] = result.PerPage
+	if err == nil {
+		data["Components"] = toLocalComponentViews(components)
+		data["Total"] = total
 	}
 
 	h.render(c, "index.html", data)
 }
 
-func (h *WebHandler) Login(c *gin.Context) {
-	h.render(c, "login.html", nil)
-}
-
 func (h *WebHandler) Configs(c *gin.Context) {
 	h.render(c, "configs.html", gin.H{"ActivePage": "configs"})
 }
@@ -212,29 +186,38 @@ func (h *WebHandler) PricelistDetail(c *gin.Context) {
 	h.render(c, "pricelist_detail.html", gin.H{"ActivePage": "pricelists"})
 }
 
+func (h *WebHandler) PartnumberBooks(c *gin.Context) {
+	h.render(c, "partnumber_books.html", gin.H{"ActivePage": "partnumber-books"})
+}
+
 // Partials for htmx
 
 func (h *WebHandler) ComponentsPartial(c *gin.Context) {
 	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
+	if page < 1 {
+		page = 1
+	}
+
-	filter := repository.ComponentFilter{
+	filter := localdb.ComponentFilter{
 		Category: c.Query("category"),
 		Search:   c.Query("search"),
 	}
+	if c.Query("has_price") == "true" {
+		filter.HasPrice = true
+	}
+	offset := (page - 1) * 20
 
 	data := gin.H{
-		"Components": []interface{}{},
+		"Components": []localComponentView{},
 		"Total":      int64(0),
 		"Page":       page,
 		"PerPage":    20,
 	}
 
-	result, err := h.componentService.List(filter, page, 20)
-	if err == nil && result != nil {
-		data["Components"] = result.Components
-		data["Total"] = result.Total
-		data["Page"] = result.Page
-		data["PerPage"] = result.PerPage
+	components, total, err := h.localDB.ListComponents(filter, offset, 20)
+	if err == nil {
+		data["Components"] = toLocalComponentViews(components)
+		data["Total"] = total
 	}
 
 	c.Header("Content-Type", "text/html; charset=utf-8")
@@ -242,3 +225,46 @@ func (h *WebHandler) ComponentsPartial(c *gin.Context) {
 		tmpl.ExecuteTemplate(c.Writer, "components_list.html", data)
 	}
 }
+
+type localComponentView struct {
+	LotName      string
+	Description  string
+	Category     string
+	CategoryName string
+	Model        string
+	CurrentPrice *float64
+}
+
+func toLocalComponentViews(items []localdb.LocalComponent) []localComponentView {
+	result := make([]localComponentView, 0, len(items))
+	for _, item := range items {
+		result = append(result, localComponentView{
+			LotName:      item.LotName,
+			Description:  item.LotDescription,
+			Category:     item.Category,
+			CategoryName: item.Category,
+			Model:        item.Model,
+		})
+	}
+	return result
+}
+
+func (h *WebHandler) localCategories() ([]models.Category, error) {
+	codes, err := h.localDB.GetLocalComponentCategories()
+	if err != nil || len(codes) == 0 {
+		return []models.Category{}, err
+	}
+
+	categories := make([]models.Category, 0, len(codes))
+	for _, code := range codes {
+		trimmed := strings.TrimSpace(code)
+		if trimmed == "" {
+			continue
+		}
+		categories = append(categories, models.Category{
+			Code: trimmed,
+			Name: trimmed,
+		})
+	}
+	return categories, nil
+}
internal/handlers/web_test.go (new file, 47 lines)
@@ -0,0 +1,47 @@
+package handlers
+
+import (
+	"errors"
+	"html/template"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"testing"
+
+	"github.com/gin-gonic/gin"
+)
+
+func TestWebHandlerRenderHidesTemplateExecutionError(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	tmpl := template.Must(template.New("broken.html").Funcs(template.FuncMap{
+		"boom": func() (string, error) {
+			return "", errors.New("secret template failure")
+		},
+	}).Parse(`{{define "broken.html"}}{{boom}}{{end}}`))
+
+	handler := &WebHandler{
+		templates: map[string]*template.Template{
+			"broken.html": tmpl,
+		},
+	}
+
+	rec := httptest.NewRecorder()
+	ctx, _ := gin.CreateTestContext(rec)
+	ctx.Request = httptest.NewRequest(http.MethodGet, "/broken", nil)
+
+	handler.render(ctx, "broken.html", gin.H{})
+
+	if rec.Code != http.StatusInternalServerError {
+		t.Fatalf("expected 500, got %d", rec.Code)
+	}
+	if body := strings.TrimSpace(rec.Body.String()); body != "Template error" {
+		t.Fatalf("expected generic template error, got %q", body)
+	}
+	if len(ctx.Errors) != 1 {
+		t.Fatalf("expected logged template error, got %d", len(ctx.Errors))
+	}
+	if !strings.Contains(ctx.Errors.String(), "secret template failure") {
+		t.Fatalf("expected original error in gin context, got %q", ctx.Errors.String())
+	}
+}
@@ -71,18 +71,25 @@ func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error)
 		existingMap[c.LotName] = true
 	}

-	// Prepare components for batch insert/update
+	// Prepare components for batch insert/update.
+	// Source joins may duplicate the same lot_name, so collapse them before insert.
 	syncTime := time.Now()
 	components := make([]LocalComponent, 0, len(rows))
+	componentIndex := make(map[string]int, len(rows))
 	newCount := 0

 	for _, row := range rows {
+		lotName := strings.TrimSpace(row.LotName)
+		if lotName == "" {
+			continue
+		}
+
 		category := ""
 		if row.Category != nil {
-			category = *row.Category
+			category = strings.TrimSpace(*row.Category)
 		} else {
 			// Parse category from lot_name (e.g., "CPU_AMD_9654" -> "CPU")
-			parts := strings.SplitN(row.LotName, "_", 2)
+			parts := strings.SplitN(lotName, "_", 2)
 			if len(parts) >= 1 {
 				category = parts[0]
 			}
@@ -90,18 +97,34 @@ func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error)

 		model := ""
 		if row.Model != nil {
-			model = *row.Model
+			model = strings.TrimSpace(*row.Model)
 		}

 		comp := LocalComponent{
-			LotName:        row.LotName,
-			LotDescription: row.LotDescription,
+			LotName:        lotName,
+			LotDescription: strings.TrimSpace(row.LotDescription),
 			Category:       category,
 			Model:          model,
 		}

+		if idx, exists := componentIndex[lotName]; exists {
+			// Keep the first row, but fill any missing metadata from duplicates.
+			if components[idx].LotDescription == "" && comp.LotDescription != "" {
+				components[idx].LotDescription = comp.LotDescription
+			}
+			if components[idx].Category == "" && comp.Category != "" {
+				components[idx].Category = comp.Category
+			}
+			if components[idx].Model == "" && comp.Model != "" {
+				components[idx].Model = comp.Model
+			}
+			continue
+		}
+
+		componentIndex[lotName] = len(components)
 		components = append(components, comp)

-		if !existingMap[row.LotName] {
+		if !existingMap[lotName] {
 			newCount++
 		}
 	}
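The dedupe block added in the hunk above follows a common first-occurrence-wins pattern: an index map points at the first row for each key, and later duplicates only fill in metadata the first row was missing. A standalone sketch of that pattern, using a hypothetical `component` type rather than the repository's actual `LocalComponent`:

```go
// Sketch of the collapse-duplicates strategy: first row per key wins,
// later duplicates only backfill empty fields.
package main

import "fmt"

type component struct {
	LotName     string
	Description string
	Category    string
}

func collapseDuplicates(rows []component) []component {
	out := make([]component, 0, len(rows))
	index := make(map[string]int, len(rows)) // LotName -> position in out
	for _, row := range rows {
		if idx, ok := index[row.LotName]; ok {
			// Duplicate key: keep the first row, fill its gaps.
			if out[idx].Description == "" && row.Description != "" {
				out[idx].Description = row.Description
			}
			if out[idx].Category == "" && row.Category != "" {
				out[idx].Category = row.Category
			}
			continue
		}
		index[row.LotName] = len(out)
		out = append(out, row)
	}
	return out
}

func main() {
	rows := []component{
		{LotName: "LOT_A", Category: "CPU"},
		{LotName: "LOT_A", Description: "filled from duplicate"},
		{LotName: "LOT_B"},
	}
	fmt.Println(collapseDuplicates(rows))
}
```

The single pass plus index map keeps the operation O(n) and preserves the source order of first occurrences, which matters when the slice is later batch-inserted.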
154  internal/localdb/configuration_business_fields_test.go  Normal file
@@ -0,0 +1,154 @@
+package localdb
+
+import (
+	"testing"
+
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+)
+
+func TestConfigurationConvertersPreserveBusinessFields(t *testing.T) {
+	estimateID := uint(11)
+	warehouseID := uint(22)
+	competitorID := uint(33)
+
+	cfg := &models.Configuration{
+		UUID:                  "cfg-1",
+		OwnerUsername:         "tester",
+		Name:                  "Config",
+		PricelistID:           &estimateID,
+		WarehousePricelistID:  &warehouseID,
+		CompetitorPricelistID: &competitorID,
+		DisablePriceRefresh:   true,
+		OnlyInStock:           true,
+	}
+
+	local := ConfigurationToLocal(cfg)
+	if local.WarehousePricelistID == nil || *local.WarehousePricelistID != warehouseID {
+		t.Fatalf("warehouse pricelist lost in ConfigurationToLocal: %+v", local.WarehousePricelistID)
+	}
+	if local.CompetitorPricelistID == nil || *local.CompetitorPricelistID != competitorID {
+		t.Fatalf("competitor pricelist lost in ConfigurationToLocal: %+v", local.CompetitorPricelistID)
+	}
+	if !local.DisablePriceRefresh {
+		t.Fatalf("disable_price_refresh lost in ConfigurationToLocal")
+	}
+
+	back := LocalToConfiguration(local)
+	if back.WarehousePricelistID == nil || *back.WarehousePricelistID != warehouseID {
+		t.Fatalf("warehouse pricelist lost in LocalToConfiguration: %+v", back.WarehousePricelistID)
+	}
+	if back.CompetitorPricelistID == nil || *back.CompetitorPricelistID != competitorID {
+		t.Fatalf("competitor pricelist lost in LocalToConfiguration: %+v", back.CompetitorPricelistID)
+	}
+	if !back.DisablePriceRefresh {
+		t.Fatalf("disable_price_refresh lost in LocalToConfiguration")
+	}
+}
+
+func TestConfigurationSnapshotPreservesBusinessFields(t *testing.T) {
+	estimateID := uint(11)
+	warehouseID := uint(22)
+	competitorID := uint(33)
+
+	cfg := &LocalConfiguration{
+		UUID:                  "cfg-1",
+		Name:                  "Config",
+		PricelistID:           &estimateID,
+		WarehousePricelistID:  &warehouseID,
+		CompetitorPricelistID: &competitorID,
+		DisablePriceRefresh:   true,
+		OnlyInStock:           true,
+		VendorSpec: VendorSpec{
+			{
+				SortOrder:        10,
+				VendorPartnumber: "PN-1",
+				Quantity:         1,
+				LotMappings: []VendorSpecLotMapping{
+					{LotName: "LOT_A", QuantityPerPN: 2},
+				},
+			},
+		},
+	}
+
+	raw, err := BuildConfigurationSnapshot(cfg)
+	if err != nil {
+		t.Fatalf("BuildConfigurationSnapshot: %v", err)
+	}
+
+	decoded, err := DecodeConfigurationSnapshot(raw)
+	if err != nil {
+		t.Fatalf("DecodeConfigurationSnapshot: %v", err)
+	}
+	if decoded.WarehousePricelistID == nil || *decoded.WarehousePricelistID != warehouseID {
+		t.Fatalf("warehouse pricelist lost in snapshot: %+v", decoded.WarehousePricelistID)
+	}
+	if decoded.CompetitorPricelistID == nil || *decoded.CompetitorPricelistID != competitorID {
+		t.Fatalf("competitor pricelist lost in snapshot: %+v", decoded.CompetitorPricelistID)
+	}
+	if !decoded.DisablePriceRefresh {
+		t.Fatalf("disable_price_refresh lost in snapshot")
+	}
+	if len(decoded.VendorSpec) != 1 || decoded.VendorSpec[0].VendorPartnumber != "PN-1" {
+		t.Fatalf("vendor_spec lost in snapshot: %+v", decoded.VendorSpec)
+	}
+	if len(decoded.VendorSpec[0].LotMappings) != 1 || decoded.VendorSpec[0].LotMappings[0].LotName != "LOT_A" {
+		t.Fatalf("lot mappings lost in snapshot: %+v", decoded.VendorSpec)
+	}
+}
+
+func TestConfigurationFingerprintIncludesPricingSelectorsAndVendorSpec(t *testing.T) {
+	estimateID := uint(11)
+	warehouseID := uint(22)
+	competitorID := uint(33)
+
+	base := &LocalConfiguration{
+		UUID:                  "cfg-1",
+		Name:                  "Config",
+		ServerCount:           1,
+		Items:                 LocalConfigItems{{LotName: "LOT_A", Quantity: 1, UnitPrice: 100}},
+		PricelistID:           &estimateID,
+		WarehousePricelistID:  &warehouseID,
+		CompetitorPricelistID: &competitorID,
+		DisablePriceRefresh:   true,
+		OnlyInStock:           true,
+		VendorSpec: VendorSpec{
+			{
+				SortOrder:        10,
+				VendorPartnumber: "PN-1",
+				Quantity:         1,
+			},
+		},
+	}
+
+	baseFingerprint, err := BuildConfigurationSpecPriceFingerprint(base)
+	if err != nil {
+		t.Fatalf("base fingerprint: %v", err)
+	}
+
+	changedPricelist := *base
+	newEstimateID := uint(44)
+	changedPricelist.PricelistID = &newEstimateID
+	pricelistFingerprint, err := BuildConfigurationSpecPriceFingerprint(&changedPricelist)
+	if err != nil {
+		t.Fatalf("pricelist fingerprint: %v", err)
+	}
+	if pricelistFingerprint == baseFingerprint {
+		t.Fatalf("expected pricelist selector to affect fingerprint")
+	}
+
+	changedVendorSpec := *base
+	changedVendorSpec.VendorSpec = VendorSpec{
+		{
+			SortOrder:        10,
+			VendorPartnumber: "PN-2",
+			Quantity:         1,
+		},
+	}
+	vendorFingerprint, err := BuildConfigurationSpecPriceFingerprint(&changedVendorSpec)
+	if err != nil {
+		t.Fatalf("vendor fingerprint: %v", err)
+	}
+	if vendorFingerprint == baseFingerprint {
+		t.Fatalf("expected vendor spec to affect fingerprint")
+	}
+}
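The fingerprint tests above rely on `BuildConfigurationSpecPriceFingerprint`, whose implementation is not part of this diff. A minimal sketch of the general idea they verify — hash a canonical serialization of the price-relevant fields, so any change to a pricing selector or the vendor spec changes the fingerprint — with a hypothetical `specSnapshot` type standing in for the real configuration:

```go
// Sketch of a spec/price fingerprint: canonical JSON of the
// price-relevant fields, hashed with SHA-256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

type specSnapshot struct {
	PricelistID           *uint    `json:"pricelist_id"`
	WarehousePricelistID  *uint    `json:"warehouse_pricelist_id"`
	CompetitorPricelistID *uint    `json:"competitor_pricelist_id"`
	VendorPartnumbers     []string `json:"vendor_partnumbers"`
}

func fingerprint(s specSnapshot) (string, error) {
	// json.Marshal emits struct fields in declaration order, which keeps
	// the encoding canonical for a fixed struct definition.
	raw, err := json.Marshal(s)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	id := uint(11)
	base := specSnapshot{PricelistID: &id, VendorPartnumbers: []string{"PN-1"}}
	changed := base
	changed.VendorPartnumbers = []string{"PN-2"} // different vendor spec

	a, _ := fingerprint(base)
	b, _ := fingerprint(changed)
	fmt.Println(a != b)
}
```

Including the selector IDs in the hashed payload is what makes "same items, different pricelist" produce a distinct fingerprint, which is exactly what the dedup-by-fingerprint migration tests depend on.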
@@ -18,31 +18,33 @@ func ConfigurationToLocal(cfg *models.Configuration) *LocalConfiguration {
 	}

 	local := &LocalConfiguration{
 		UUID:        cfg.UUID,
 		ProjectUUID: cfg.ProjectUUID,
 		IsActive:    true,
 		Name:        cfg.Name,
 		Items:       items,
 		TotalPrice:  cfg.TotalPrice,
 		CustomPrice: cfg.CustomPrice,
 		Notes:       cfg.Notes,
 		IsTemplate:  cfg.IsTemplate,
 		ServerCount: cfg.ServerCount,
 		ServerModel: cfg.ServerModel,
 		SupportCode: cfg.SupportCode,
 		Article:     cfg.Article,
 		PricelistID: cfg.PricelistID,
-		OnlyInStock:      cfg.OnlyInStock,
-		PriceUpdatedAt:   cfg.PriceUpdatedAt,
-		CreatedAt:        cfg.CreatedAt,
-		UpdatedAt:        time.Now(),
-		SyncStatus:       "pending",
-		OriginalUserID:   derefUint(cfg.UserID),
-		OriginalUsername: cfg.OwnerUsername,
-	}
-
-	if local.OriginalUsername == "" && cfg.User != nil {
-		local.OriginalUsername = cfg.User.Username
+		WarehousePricelistID:  cfg.WarehousePricelistID,
+		CompetitorPricelistID: cfg.CompetitorPricelistID,
+		ConfigType:            cfg.ConfigType,
+		VendorSpec:            modelVendorSpecToLocal(cfg.VendorSpec),
+		DisablePriceRefresh:   cfg.DisablePriceRefresh,
+		OnlyInStock:           cfg.OnlyInStock,
+		Line:                  cfg.Line,
+		PriceUpdatedAt:        cfg.PriceUpdatedAt,
+		CreatedAt:             cfg.CreatedAt,
+		UpdatedAt:             time.Now(),
+		SyncStatus:            "pending",
+		OriginalUserID:        derefUint(cfg.UserID),
+		OriginalUsername:      cfg.OwnerUsername,
 	}

 	if cfg.ID > 0 {
@@ -65,23 +67,29 @@ func LocalToConfiguration(local *LocalConfiguration) *models.Configuration {
 	}

 	cfg := &models.Configuration{
 		UUID:          local.UUID,
 		OwnerUsername: local.OriginalUsername,
 		ProjectUUID:   local.ProjectUUID,
 		Name:          local.Name,
 		Items:         items,
 		TotalPrice:    local.TotalPrice,
 		CustomPrice:   local.CustomPrice,
 		Notes:         local.Notes,
 		IsTemplate:    local.IsTemplate,
 		ServerCount:   local.ServerCount,
 		ServerModel:   local.ServerModel,
 		SupportCode:   local.SupportCode,
 		Article:       local.Article,
 		PricelistID:   local.PricelistID,
-		OnlyInStock:    local.OnlyInStock,
-		PriceUpdatedAt: local.PriceUpdatedAt,
-		CreatedAt:      local.CreatedAt,
+		WarehousePricelistID:  local.WarehousePricelistID,
+		CompetitorPricelistID: local.CompetitorPricelistID,
+		ConfigType:            local.ConfigType,
+		VendorSpec:            localVendorSpecToModel(local.VendorSpec),
+		DisablePriceRefresh:   local.DisablePriceRefresh,
+		OnlyInStock:           local.OnlyInStock,
+		Line:                  local.Line,
+		PriceUpdatedAt:        local.PriceUpdatedAt,
+		CreatedAt:             local.CreatedAt,
 	}

 	if local.ServerID != nil {
@@ -105,6 +113,88 @@ func derefUint(v *uint) uint {
 	return *v
 }
+
+func modelVendorSpecToLocal(spec models.VendorSpec) VendorSpec {
+	if len(spec) == 0 {
+		return nil
+	}
+	out := make(VendorSpec, 0, len(spec))
+	for _, item := range spec {
+		row := VendorSpecItem{
+			SortOrder:           item.SortOrder,
+			VendorPartnumber:    item.VendorPartnumber,
+			Quantity:            item.Quantity,
+			Description:         item.Description,
+			UnitPrice:           item.UnitPrice,
+			TotalPrice:          item.TotalPrice,
+			ResolvedLotName:     item.ResolvedLotName,
+			ResolutionSource:    item.ResolutionSource,
+			ManualLotSuggestion: item.ManualLotSuggestion,
+			LotQtyPerPN:         item.LotQtyPerPN,
+		}
+		if len(item.LotAllocations) > 0 {
+			row.LotAllocations = make([]VendorSpecLotAllocation, 0, len(item.LotAllocations))
+			for _, alloc := range item.LotAllocations {
+				row.LotAllocations = append(row.LotAllocations, VendorSpecLotAllocation{
+					LotName:  alloc.LotName,
+					Quantity: alloc.Quantity,
+				})
+			}
+		}
+		if len(item.LotMappings) > 0 {
+			row.LotMappings = make([]VendorSpecLotMapping, 0, len(item.LotMappings))
+			for _, mapping := range item.LotMappings {
+				row.LotMappings = append(row.LotMappings, VendorSpecLotMapping{
+					LotName:       mapping.LotName,
+					QuantityPerPN: mapping.QuantityPerPN,
+				})
+			}
+		}
+		out = append(out, row)
+	}
+	return out
+}
+
+func localVendorSpecToModel(spec VendorSpec) models.VendorSpec {
+	if len(spec) == 0 {
+		return nil
+	}
+	out := make(models.VendorSpec, 0, len(spec))
+	for _, item := range spec {
+		row := models.VendorSpecItem{
+			SortOrder:           item.SortOrder,
+			VendorPartnumber:    item.VendorPartnumber,
+			Quantity:            item.Quantity,
+			Description:         item.Description,
+			UnitPrice:           item.UnitPrice,
+			TotalPrice:          item.TotalPrice,
+			ResolvedLotName:     item.ResolvedLotName,
+			ResolutionSource:    item.ResolutionSource,
+			ManualLotSuggestion: item.ManualLotSuggestion,
+			LotQtyPerPN:         item.LotQtyPerPN,
+		}
+		if len(item.LotAllocations) > 0 {
+			row.LotAllocations = make([]models.VendorSpecLotAllocation, 0, len(item.LotAllocations))
+			for _, alloc := range item.LotAllocations {
+				row.LotAllocations = append(row.LotAllocations, models.VendorSpecLotAllocation{
+					LotName:  alloc.LotName,
+					Quantity: alloc.Quantity,
+				})
+			}
+		}
+		if len(item.LotMappings) > 0 {
+			row.LotMappings = make([]models.VendorSpecLotMapping, 0, len(item.LotMappings))
+			for _, mapping := range item.LotMappings {
+				row.LotMappings = append(row.LotMappings, models.VendorSpecLotMapping{
+					LotName:       mapping.LotName,
+					QuantityPerPN: mapping.QuantityPerPN,
+				})
+			}
+		}
+		out = append(out, row)
+	}
+	return out
+}

 func ProjectToLocal(project *models.Project) *LocalProject {
 	local := &LocalProject{
 		UUID: project.UUID,
@@ -7,19 +7,104 @@ import (
 	"crypto/sha256"
 	"encoding/base64"
 	"errors"
+	"fmt"
 	"io"
 	"os"
+	"path/filepath"
+	"strings"
+
+	"git.mchus.pro/mchus/quoteforge/internal/appstate"
 )

-// getEncryptionKey derives a 32-byte key from environment variable or machine ID
-func getEncryptionKey() []byte {
+const encryptionKeyFileName = "local_encryption.key"
+
+// getEncryptionKey resolves the active encryption key.
+// Preference order:
+// 1. QUOTEFORGE_ENCRYPTION_KEY env var
+// 2. application-managed random key file in the user state directory
+func getEncryptionKey() ([]byte, error) {
 	key := os.Getenv("QUOTEFORGE_ENCRYPTION_KEY")
-	if key == "" {
-		// Fallback to a machine-based key (hostname + fixed salt)
-		hostname, _ := os.Hostname()
-		key = hostname + "quoteforge-salt-2024"
+	if key != "" {
+		hash := sha256.Sum256([]byte(key))
+		return hash[:], nil
 	}
-	// Hash to get exactly 32 bytes for AES-256
+
+	stateDir, err := resolveEncryptionStateDir()
+	if err != nil {
+		return nil, fmt.Errorf("resolve encryption state dir: %w", err)
+	}
+
+	return loadOrCreateEncryptionKey(filepath.Join(stateDir, encryptionKeyFileName))
+}
+
+func resolveEncryptionStateDir() (string, error) {
+	configPath, err := appstate.ResolveConfigPath("")
+	if err != nil {
+		return "", err
+	}
+	return filepath.Dir(configPath), nil
+}
+
+func loadOrCreateEncryptionKey(path string) ([]byte, error) {
+	if data, err := os.ReadFile(path); err == nil {
+		return parseEncryptionKeyFile(data)
+	} else if !errors.Is(err, os.ErrNotExist) {
+		return nil, fmt.Errorf("read encryption key: %w", err)
+	}
+
+	if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil {
+		return nil, fmt.Errorf("create encryption key dir: %w", err)
+	}
+
+	raw := make([]byte, 32)
+	if _, err := io.ReadFull(rand.Reader, raw); err != nil {
+		return nil, fmt.Errorf("generate encryption key: %w", err)
+	}
+
+	encoded := base64.StdEncoding.EncodeToString(raw)
+	if err := writeKeyFile(path, []byte(encoded+"\n")); err != nil {
+		if errors.Is(err, os.ErrExist) {
+			data, readErr := os.ReadFile(path)
+			if readErr != nil {
+				return nil, fmt.Errorf("read concurrent encryption key: %w", readErr)
+			}
+			return parseEncryptionKeyFile(data)
+		}
+		return nil, err
+	}
+
+	return raw, nil
+}
+
+func writeKeyFile(path string, data []byte) error {
+	file, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
+	if err != nil {
+		return err
+	}
+	defer file.Close()
+
+	if _, err := file.Write(data); err != nil {
+		return err
+	}
+
+	return file.Sync()
+}
+
+func parseEncryptionKeyFile(data []byte) ([]byte, error) {
+	trimmed := strings.TrimSpace(string(data))
+	decoded, err := base64.StdEncoding.DecodeString(trimmed)
+	if err != nil {
+		return nil, fmt.Errorf("decode encryption key file: %w", err)
+	}
+	if len(decoded) != 32 {
+		return nil, fmt.Errorf("invalid encryption key length: %d", len(decoded))
+	}
+	return decoded, nil
+}
+
+func getLegacyEncryptionKey() []byte {
+	hostname, _ := os.Hostname()
+	key := hostname + "quoteforge-salt-2024"
 	hash := sha256.Sum256([]byte(key))
 	return hash[:]
 }
@@ -30,7 +115,10 @@ func Encrypt(plaintext string) (string, error) {
 		return "", nil
 	}

-	key := getEncryptionKey()
+	key, err := getEncryptionKey()
+	if err != nil {
+		return "", err
+	}
 	block, err := aes.NewCipher(key)
 	if err != nil {
 		return "", err
@@ -56,12 +144,50 @@ func Decrypt(ciphertext string) (string, error) {
 		return "", nil
 	}

-	key := getEncryptionKey()
-	data, err := base64.StdEncoding.DecodeString(ciphertext)
+	key, err := getEncryptionKey()
 	if err != nil {
 		return "", err
 	}
+	plaintext, legacy, err := decryptWithKeys(ciphertext, key, getLegacyEncryptionKey())
+	if err != nil {
+		return "", err
+	}
+	_ = legacy
+	return plaintext, nil
+}
+
+func DecryptWithMetadata(ciphertext string) (string, bool, error) {
+	if ciphertext == "" {
+		return "", false, nil
+	}
+
+	key, err := getEncryptionKey()
+	if err != nil {
+		return "", false, err
+	}
+	return decryptWithKeys(ciphertext, key, getLegacyEncryptionKey())
+}
+
+func decryptWithKeys(ciphertext string, primaryKey, legacyKey []byte) (string, bool, error) {
+	data, err := base64.StdEncoding.DecodeString(ciphertext)
+	if err != nil {
+		return "", false, err
+	}
+
+	plaintext, err := decryptWithKey(data, primaryKey)
+	if err == nil {
+		return plaintext, false, nil
+	}
+
+	legacyPlaintext, legacyErr := decryptWithKey(data, legacyKey)
+	if legacyErr == nil {
+		return legacyPlaintext, true, nil
+	}
+
+	return "", false, err
+}
+
+func decryptWithKey(data, key []byte) (string, error) {
 	block, err := aes.NewCipher(key)
 	if err != nil {
 		return "", err
97  internal/localdb/encryption_test.go  Normal file
@@ -0,0 +1,97 @@
+package localdb
+
+import (
+	"crypto/aes"
+	"crypto/cipher"
+	"crypto/rand"
+	"crypto/sha256"
+	"encoding/base64"
+	"io"
+	"os"
+	"path/filepath"
+	"testing"
+)
+
+func TestEncryptCreatesPersistentKeyFile(t *testing.T) {
+	stateDir := t.TempDir()
+	t.Setenv("QFS_STATE_DIR", stateDir)
+	t.Setenv("QUOTEFORGE_ENCRYPTION_KEY", "")
+
+	ciphertext, err := Encrypt("secret-password")
+	if err != nil {
+		t.Fatalf("encrypt: %v", err)
+	}
+	if ciphertext == "" {
+		t.Fatal("expected ciphertext")
+	}
+
+	keyPath := filepath.Join(stateDir, encryptionKeyFileName)
+	info, err := os.Stat(keyPath)
+	if err != nil {
+		t.Fatalf("stat key file: %v", err)
+	}
+	if info.Mode().Perm() != 0600 {
+		t.Fatalf("expected 0600 key file, got %v", info.Mode().Perm())
+	}
+}
+
+func TestDecryptMigratesLegacyCiphertext(t *testing.T) {
+	stateDir := t.TempDir()
+	t.Setenv("QFS_STATE_DIR", stateDir)
+	t.Setenv("QUOTEFORGE_ENCRYPTION_KEY", "")
+
+	legacyCiphertext := encryptWithKeyForTest(t, getLegacyEncryptionKey(), "legacy-password")
+
+	plaintext, migrated, err := DecryptWithMetadata(legacyCiphertext)
+	if err != nil {
+		t.Fatalf("decrypt legacy: %v", err)
+	}
+	if plaintext != "legacy-password" {
+		t.Fatalf("unexpected plaintext: %q", plaintext)
+	}
+	if !migrated {
+		t.Fatal("expected legacy ciphertext to require migration")
+	}
+
+	currentCiphertext, err := Encrypt("legacy-password")
+	if err != nil {
+		t.Fatalf("encrypt current: %v", err)
+	}
+	plaintext, migrated, err = DecryptWithMetadata(currentCiphertext)
+	if err != nil {
+		t.Fatalf("decrypt current: %v", err)
+	}
+	if migrated {
+		t.Fatal("did not expect current ciphertext to require migration")
+	}
+}
+
+func encryptWithKeyForTest(t *testing.T, key []byte, plaintext string) string {
+	t.Helper()
+
+	block, err := aes.NewCipher(key)
+	if err != nil {
+		t.Fatalf("new cipher: %v", err)
+	}
+	gcm, err := cipher.NewGCM(block)
+	if err != nil {
+		t.Fatalf("new gcm: %v", err)
+	}
+
+	nonce := make([]byte, gcm.NonceSize())
+	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
+		t.Fatalf("read nonce: %v", err)
+	}
+
+	ciphertext := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
+	return base64.StdEncoding.EncodeToString(ciphertext)
+}
+
+func TestLegacyEncryptionKeyRemainsDeterministic(t *testing.T) {
+	hostname, _ := os.Hostname()
+	expected := sha256.Sum256([]byte(hostname + "quoteforge-salt-2024"))
+	actual := getLegacyEncryptionKey()
+	if string(actual) != string(expected[:]) {
+		t.Fatal("legacy key derivation changed")
+	}
+}
@@ -5,7 +5,10 @@ import (
 	"testing"
 	"time"

+	"github.com/glebarez/sqlite"
 	"github.com/google/uuid"
+	"gorm.io/gorm"
+	"gorm.io/gorm/logger"
 )

 func TestRunLocalMigrationsBackfillsExistingConfigurations(t *testing.T) {
@@ -253,3 +256,340 @@ func TestRunLocalMigrationsDeduplicatesConfigurationVersionsBySpecAndPrice(t *te
 		t.Fatalf("expected current_version_id to point to kept latest version v3")
 	}
 }
+
+func TestRunLocalMigrationsBackfillsConfigurationLineNo(t *testing.T) {
+	dbPath := filepath.Join(t.TempDir(), "line_no_backfill.db")
+
+	local, err := New(dbPath)
+	if err != nil {
+		t.Fatalf("open localdb: %v", err)
+	}
+	t.Cleanup(func() { _ = local.Close() })
+
+	projectUUID := "project-line"
+	cfg1 := &LocalConfiguration{
+		UUID:             "line-cfg-1",
+		ProjectUUID:      &projectUUID,
+		Name:             "Cfg 1",
+		Items:            LocalConfigItems{},
+		SyncStatus:       "pending",
+		OriginalUsername: "tester",
+		IsActive:         true,
+		CreatedAt:        time.Now().Add(-2 * time.Hour),
+	}
+	cfg2 := &LocalConfiguration{
+		UUID:             "line-cfg-2",
+		ProjectUUID:      &projectUUID,
+		Name:             "Cfg 2",
+		Items:            LocalConfigItems{},
+		SyncStatus:       "pending",
+		OriginalUsername: "tester",
+		IsActive:         true,
+		CreatedAt:        time.Now().Add(-1 * time.Hour),
+	}
+	if err := local.SaveConfiguration(cfg1); err != nil {
+		t.Fatalf("save cfg1: %v", err)
+	}
+	if err := local.SaveConfiguration(cfg2); err != nil {
+		t.Fatalf("save cfg2: %v", err)
+	}
+
+	if err := local.DB().Model(&LocalConfiguration{}).Where("uuid IN ?", []string{cfg1.UUID, cfg2.UUID}).Update("line_no", 0).Error; err != nil {
+		t.Fatalf("reset line_no: %v", err)
+	}
+	if err := local.DB().Where("id = ?", "2026_02_19_local_config_line_no").Delete(&LocalSchemaMigration{}).Error; err != nil {
+		t.Fatalf("delete migration record: %v", err)
+	}
+
+	if err := runLocalMigrations(local.DB()); err != nil {
+		t.Fatalf("rerun local migrations: %v", err)
+	}
+
+	var rows []LocalConfiguration
+	if err := local.DB().Where("uuid IN ?", []string{cfg1.UUID, cfg2.UUID}).Order("created_at ASC").Find(&rows).Error; err != nil {
+		t.Fatalf("load configurations: %v", err)
+	}
+	if len(rows) != 2 {
+		t.Fatalf("expected 2 configurations, got %d", len(rows))
+	}
+	if rows[0].Line != 10 || rows[1].Line != 20 {
+		t.Fatalf("expected line_no [10,20], got [%d,%d]", rows[0].Line, rows[1].Line)
+	}
+}
+
+func TestRunLocalMigrationsDeduplicatesCanonicalPartnumberCatalog(t *testing.T) {
+	dbPath := filepath.Join(t.TempDir(), "partnumber_catalog_dedup.db")
+	db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
+		Logger: logger.Default.LogMode(logger.Silent),
+	})
+	if err != nil {
+		t.Fatalf("open sqlite: %v", err)
+	}
+
+	firstLots := LocalPartnumberBookLots{
+		{LotName: "LOT-A", Qty: 1},
+	}
+	secondLots := LocalPartnumberBookLots{
+		{LotName: "LOT-B", Qty: 2},
+	}
+
+	if err := db.Exec(`
+		CREATE TABLE local_partnumber_book_items (
+			id INTEGER PRIMARY KEY AUTOINCREMENT,
+			partnumber TEXT NOT NULL,
+			lots_json TEXT NOT NULL,
+			description TEXT
+		)
+	`).Error; err != nil {
+		t.Fatalf("create dirty local_partnumber_book_items: %v", err)
+	}
+
+	if err := db.Create(&LocalPartnumberBookItem{
+		Partnumber:  "PN-001",
+		LotsJSON:    firstLots,
+		Description: "",
+	}).Error; err != nil {
+		t.Fatalf("insert first duplicate row: %v", err)
+	}
+	if err := db.Create(&LocalPartnumberBookItem{
+		Partnumber:  "PN-001",
+		LotsJSON:    secondLots,
+		Description: "Canonical description",
+	}).Error; err != nil {
+		t.Fatalf("insert second duplicate row: %v", err)
+	}
+
+	if err := migrateLocalPartnumberBookCatalog(db); err != nil {
+		t.Fatalf("migrate local partnumber catalog: %v", err)
+	}
+
+	var items []LocalPartnumberBookItem
+	if err := db.Order("partnumber ASC").Find(&items).Error; err != nil {
+		t.Fatalf("load migrated partnumber items: %v", err)
+	}
+	if len(items) != 1 {
+		t.Fatalf("expected 1 deduplicated item, got %d", len(items))
+	}
+	if items[0].Partnumber != "PN-001" {
+		t.Fatalf("unexpected partnumber: %s", items[0].Partnumber)
+	}
+	if items[0].Description != "Canonical description" {
+		t.Fatalf("expected merged description, got %q", items[0].Description)
+	}
+	if len(items[0].LotsJSON) != 2 {
+		t.Fatalf("expected merged lots from duplicates, got %d", len(items[0].LotsJSON))
+	}
+
+	var duplicateCount int64
+	if err := db.Model(&LocalPartnumberBookItem{}).
+		Where("partnumber = ?", "PN-001").
+		Count(&duplicateCount).Error; err != nil {
+		t.Fatalf("count deduplicated partnumber: %v", err)
+	}
+	if duplicateCount != 1 {
+		t.Fatalf("expected unique partnumber row after migration, got %d", duplicateCount)
+	}
+}
+
+func TestSanitizeLocalPartnumberBookCatalogRemovesRowsWithoutPartnumber(t *testing.T) {
+	dbPath := filepath.Join(t.TempDir(), "sanitize_partnumber_catalog.db")
+	db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
+		Logger: logger.Default.LogMode(logger.Silent),
+	})
+	if err != nil {
+		t.Fatalf("open sqlite: %v", err)
+	}
+
+	if err := db.Exec(`
+		CREATE TABLE local_partnumber_book_items (
+			id INTEGER PRIMARY KEY AUTOINCREMENT,
+			partnumber TEXT NULL,
+			lots_json TEXT NOT NULL,
description TEXT
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
t.Fatalf("create local_partnumber_book_items: %v", err)
|
||||||
|
}
|
||||||
|
if err := db.Exec(`
|
||||||
|
INSERT INTO local_partnumber_book_items (partnumber, lots_json, description) VALUES
|
||||||
|
(NULL, '[]', 'null pn'),
|
||||||
|
('', '[]', 'empty pn'),
|
||||||
|
('PN-OK', '[]', 'valid pn')
|
||||||
|
`).Error; err != nil {
|
||||||
|
t.Fatalf("seed local_partnumber_book_items: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := sanitizeLocalPartnumberBookCatalog(db); err != nil {
|
||||||
|
t.Fatalf("sanitize local partnumber catalog: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
var items []LocalPartnumberBookItem
|
||||||
|
if err := db.Order("id ASC").Find(&items).Error; err != nil {
|
||||||
|
t.Fatalf("load sanitized items: %v", err)
|
||||||
|
}
|
||||||
|
if len(items) != 1 {
|
||||||
|
t.Fatalf("expected 1 valid item after sanitize, got %d", len(items))
|
||||||
|
}
|
||||||
|
if items[0].Partnumber != "PN-OK" {
|
||||||
|
t.Fatalf("expected remaining partnumber PN-OK, got %q", items[0].Partnumber)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestNewMigratesLegacyPartnumberBookCatalogBeforeAutoMigrate(t *testing.T) {
|
||||||
|
dbPath := filepath.Join(t.TempDir(), "legacy_partnumber_catalog.db")
|
||||||
|
db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
|
||||||
|
Logger: logger.Default.LogMode(logger.Silent),
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("open sqlite: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := db.Exec(`
|
||||||
|
CREATE TABLE local_partnumber_book_items (
|
||||||
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
|
partnumber TEXT NOT NULL UNIQUE,
|
||||||
|
lots_json TEXT NOT NULL,
|
||||||
|
is_primary_pn INTEGER NOT NULL DEFAULT 0,
|
||||||
|
description TEXT
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
t.Fatalf("create legacy local_partnumber_book_items: %v", err)
|
||||||
|
}
|
||||||
|
if err := db.Exec(`
|
||||||
|
INSERT INTO local_partnumber_book_items (partnumber, lots_json, is_primary_pn, description)
|
||||||
|
VALUES ('PN-001', '[{"lot_name":"CPU_A","qty":1}]', 0, 'Legacy row')
|
||||||
|
`).Error; err != nil {
|
||||||
|
t.Fatalf("seed legacy local_partnumber_book_items: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
local, err := New(dbPath)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("open localdb with legacy catalog: %v", err)
|
||||||
|
}
|
||||||
|
t.Cleanup(func() { _ = local.Close() })
|
||||||
|
|
||||||
|
var columns []struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
if err := local.DB().Raw(`SELECT name FROM pragma_table_info('local_partnumber_book_items')`).Scan(&columns).Error; err != nil {
|
||||||
|
t.Fatalf("load local_partnumber_book_items columns: %v", err)
|
||||||
|
}
|
||||||
|
for _, column := range columns {
|
||||||
|
if column.Name == "is_primary_pn" {
|
||||||
|
t.Fatalf("expected legacy is_primary_pn column to be removed before automigrate")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var items []LocalPartnumberBookItem
|
||||||
|
if err := local.DB().Find(&items).Error; err != nil {
|
||||||
|
t.Fatalf("load migrated local_partnumber_book_items: %v", err)
|
||||||
|
}
|
||||||
|
if len(items) != 1 || items[0].Partnumber != "PN-001" {
|
||||||
|
t.Fatalf("unexpected migrated rows: %#v", items)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestNewRecoversBrokenPartnumberBookCatalogCache(t *testing.T) {
|
||||||
|
dbPath := filepath.Join(t.TempDir(), "broken_partnumber_catalog.db")
|
||||||
|
db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
|
||||||
|
Logger: logger.Default.LogMode(logger.Silent),
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("open sqlite: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := db.Exec(`
|
||||||
|
CREATE TABLE local_partnumber_book_items (
|
||||||
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
|
partnumber TEXT NOT NULL UNIQUE,
|
||||||
|
lots_json TEXT NOT NULL,
|
||||||
|
description TEXT
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
t.Fatalf("create broken local_partnumber_book_items: %v", err)
|
||||||
|
}
|
||||||
|
if err := db.Exec(`
|
||||||
|
INSERT INTO local_partnumber_book_items (partnumber, lots_json, description)
|
||||||
|
VALUES ('PN-001', '{not-json}', 'Broken cache row')
|
||||||
|
`).Error; err != nil {
|
||||||
|
t.Fatalf("seed broken local_partnumber_book_items: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
local, err := New(dbPath)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("open localdb with broken catalog cache: %v", err)
|
||||||
|
}
|
||||||
|
t.Cleanup(func() { _ = local.Close() })
|
||||||
|
|
||||||
|
var count int64
|
||||||
|
if err := local.DB().Model(&LocalPartnumberBookItem{}).Count(&count).Error; err != nil {
|
||||||
|
t.Fatalf("count recovered local_partnumber_book_items: %v", err)
|
||||||
|
}
|
||||||
|
if count != 0 {
|
||||||
|
t.Fatalf("expected empty recovered local_partnumber_book_items, got %d rows", count)
|
||||||
|
}
|
||||||
|
|
||||||
|
var quarantineTables []struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
if err := local.DB().Raw(`
|
||||||
|
SELECT name
|
||||||
|
FROM sqlite_master
|
||||||
|
WHERE type = 'table' AND name LIKE 'local_partnumber_book_items_broken_%'
|
||||||
|
`).Scan(&quarantineTables).Error; err != nil {
|
||||||
|
t.Fatalf("load quarantine tables: %v", err)
|
||||||
|
}
|
||||||
|
if len(quarantineTables) != 1 {
|
||||||
|
t.Fatalf("expected one quarantined broken catalog table, got %d", len(quarantineTables))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCleanupStaleReadOnlyCacheTempTablesDropsShadowTempWhenBaseExists(t *testing.T) {
|
||||||
|
dbPath := filepath.Join(t.TempDir(), "stale_cache_temp.db")
|
||||||
|
db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
|
||||||
|
Logger: logger.Default.LogMode(logger.Silent),
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("open sqlite: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := db.Exec(`
|
||||||
|
CREATE TABLE local_pricelist_items (
|
||||||
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
|
pricelist_id INTEGER NOT NULL,
|
||||||
|
partnumber TEXT,
|
||||||
|
brand TEXT NOT NULL DEFAULT '',
|
||||||
|
lot_name TEXT NOT NULL,
|
||||||
|
description TEXT,
|
||||||
|
price REAL NOT NULL DEFAULT 0,
|
||||||
|
quantity INTEGER NOT NULL DEFAULT 0,
|
||||||
|
reserve INTEGER NOT NULL DEFAULT 0,
|
||||||
|
available_qty REAL,
|
||||||
|
partnumbers TEXT,
|
||||||
|
lot_category TEXT,
|
||||||
|
created_at DATETIME,
|
||||||
|
updated_at DATETIME
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
t.Fatalf("create local_pricelist_items: %v", err)
|
||||||
|
}
|
||||||
|
if err := db.Exec(`
|
||||||
|
CREATE TABLE local_pricelist_items__temp (
|
||||||
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
|
legacy TEXT
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
t.Fatalf("create local_pricelist_items__temp: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := cleanupStaleReadOnlyCacheTempTables(db); err != nil {
|
||||||
|
t.Fatalf("cleanup stale read-only cache temp tables: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if db.Migrator().HasTable("local_pricelist_items__temp") {
|
||||||
|
t.Fatalf("expected stale temp table to be dropped")
|
||||||
|
}
|
||||||
|
if !db.Migrator().HasTable("local_pricelist_items") {
|
||||||
|
t.Fatalf("expected base local_pricelist_items table to remain")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|||||||
@@ -1,6 +1,7 @@
 package localdb
 
 import (
+	"encoding/json"
 	"errors"
 	"fmt"
 	"log/slog"
@@ -42,6 +43,14 @@ type LocalDB struct {
 	path string
 }
 
+var localReadOnlyCacheTables = []string{
+	"local_pricelist_items",
+	"local_pricelists",
+	"local_components",
+	"local_partnumber_book_items",
+	"local_partnumber_books",
+}
+
 // ResetData clears local data tables while keeping connection settings.
 // It does not drop schema or connection_settings.
 func ResetData(dbPath string) error {
@@ -70,7 +79,6 @@ func ResetData(dbPath string) error {
 		"local_pricelists",
 		"local_pricelist_items",
 		"local_components",
-		"local_remote_migrations_applied",
 		"local_sync_guard_state",
 		"pending_changes",
 		"app_settings",
@@ -108,9 +116,23 @@ func New(dbPath string) (*LocalDB, error) {
 		return nil, fmt.Errorf("opening sqlite database: %w", err)
 	}
 
+	// Enable WAL mode so background sync writes never block UI reads.
+	if err := db.Exec("PRAGMA journal_mode=WAL").Error; err != nil {
+		slog.Warn("failed to enable WAL mode", "error", err)
+	}
+	if err := db.Exec("PRAGMA synchronous=NORMAL").Error; err != nil {
+		slog.Warn("failed to set synchronous=NORMAL", "error", err)
+	}
+
 	if err := ensureLocalProjectsTable(db); err != nil {
 		return nil, fmt.Errorf("ensure local_projects table: %w", err)
 	}
+	if err := prepareLocalPartnumberBookCatalog(db); err != nil {
+		return nil, fmt.Errorf("prepare local partnumber book catalog: %w", err)
+	}
+	if err := cleanupStaleReadOnlyCacheTempTables(db); err != nil {
+		return nil, fmt.Errorf("cleanup stale read-only cache temp tables: %w", err)
+	}
 
 	// Preflight: ensure local_projects has non-null UUIDs before AutoMigrate rebuilds tables.
 	if db.Migrator().HasTable(&LocalProject{}) {
@@ -131,22 +153,28 @@ func New(dbPath string) (*LocalDB, error) {
 	}
 
 	// Auto-migrate all local tables
-	if err := db.AutoMigrate(
-		&ConnectionSettings{},
-		&LocalConfiguration{},
-		&LocalConfigurationVersion{},
-		&LocalPricelist{},
-		&LocalPricelistItem{},
-		&LocalComponent{},
-		&AppSetting{},
-		&LocalRemoteMigrationApplied{},
-		&LocalSyncGuardState{},
-		&PendingChange{},
-	); err != nil {
-		return nil, fmt.Errorf("migrating sqlite database: %w", err)
+	if err := autoMigrateLocalSchema(db); err != nil {
+		if recovered, recoveryErr := recoverFromReadOnlyCacheInitFailure(db, err); recoveryErr != nil {
+			return nil, fmt.Errorf("migrating sqlite database: %w (recovery failed: %v)", err, recoveryErr)
+		} else if !recovered {
+			return nil, fmt.Errorf("migrating sqlite database: %w", err)
+		}
+		if err := autoMigrateLocalSchema(db); err != nil {
+			return nil, fmt.Errorf("migrating sqlite database after cache recovery: %w", err)
+		}
+	}
+	if err := ensureLocalPartnumberBookItemTable(db); err != nil {
+		return nil, fmt.Errorf("ensure local partnumber book item table: %w", err)
 	}
 	if err := runLocalMigrations(db); err != nil {
-		return nil, fmt.Errorf("running local sqlite migrations: %w", err)
+		if recovered, recoveryErr := recoverFromReadOnlyCacheInitFailure(db, err); recoveryErr != nil {
+			return nil, fmt.Errorf("running local sqlite migrations: %w (recovery failed: %v)", err, recoveryErr)
+		} else if !recovered {
+			return nil, fmt.Errorf("running local sqlite migrations: %w", err)
+		}
+		if err := runLocalMigrations(db); err != nil {
+			return nil, fmt.Errorf("running local sqlite migrations after cache recovery: %w", err)
+		}
 	}
 
 	slog.Info("local SQLite database initialized", "path", dbPath)
@@ -189,6 +217,282 @@ CREATE TABLE local_projects (
 	return nil
 }
 
+func autoMigrateLocalSchema(db *gorm.DB) error {
+	return db.AutoMigrate(
+		&ConnectionSettings{},
+		&LocalConfiguration{},
+		&LocalConfigurationVersion{},
+		&LocalPricelist{},
+		&LocalPricelistItem{},
+		&LocalComponent{},
+		&AppSetting{},
+		&LocalSyncGuardState{},
+		&PendingChange{},
+		&LocalPartnumberBook{},
+	)
+}
+
+func sanitizeLocalPartnumberBookCatalog(db *gorm.DB) error {
+	if !db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
+		return nil
+	}
+
+	// Old local databases may contain partially migrated catalog rows with NULL/empty
+	// partnumber values. SQLite table rebuild during AutoMigrate fails on such rows once
+	// the schema enforces NOT NULL, so remove them before AutoMigrate touches the table.
+	if err := db.Exec(`
+		DELETE FROM local_partnumber_book_items
+		WHERE partnumber IS NULL OR TRIM(partnumber) = ''
+	`).Error; err != nil {
+		return err
+	}
+	return nil
+}
+
+func prepareLocalPartnumberBookCatalog(db *gorm.DB) error {
+	if err := cleanupStaleLocalPartnumberBookCatalogTempTable(db); err != nil {
+		if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("cleanup stale temp table: %w", err)); recoveryErr != nil {
+			return recoveryErr
+		}
+		return nil
+	}
+	if err := sanitizeLocalPartnumberBookCatalog(db); err != nil {
+		if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("sanitize catalog: %w", err)); recoveryErr != nil {
+			return recoveryErr
+		}
+		return nil
+	}
+	if err := migrateLegacyPartnumberBookCatalogBeforeAutoMigrate(db); err != nil {
+		if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("migrate legacy catalog: %w", err)); recoveryErr != nil {
+			return recoveryErr
+		}
+		return nil
+	}
+	if err := ensureLocalPartnumberBookItemTable(db); err != nil {
+		if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("ensure canonical catalog table: %w", err)); recoveryErr != nil {
+			return recoveryErr
+		}
+		return nil
+	}
+	if err := validateLocalPartnumberBookCatalog(db); err != nil {
+		if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("validate canonical catalog: %w", err)); recoveryErr != nil {
+			return recoveryErr
+		}
+	}
+	return nil
+}
+
+func cleanupStaleReadOnlyCacheTempTables(db *gorm.DB) error {
+	for _, tableName := range localReadOnlyCacheTables {
+		tempName := tableName + "__temp"
+		if !db.Migrator().HasTable(tempName) {
+			continue
+		}
+		if db.Migrator().HasTable(tableName) {
+			if err := db.Exec(`DROP TABLE ` + tempName).Error; err != nil {
+				return err
+			}
+			continue
+		}
+		if err := quarantineSQLiteTable(db, tempName, localReadOnlyCacheQuarantineTableName(tableName, "temp")); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func cleanupStaleLocalPartnumberBookCatalogTempTable(db *gorm.DB) error {
+	if !db.Migrator().HasTable("local_partnumber_book_items__temp") {
+		return nil
+	}
+	if db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
+		return db.Exec(`DROP TABLE local_partnumber_book_items__temp`).Error
+	}
+	return quarantineSQLiteTable(db, "local_partnumber_book_items__temp", localPartnumberBookCatalogQuarantineTableName("temp"))
+}
+
+func migrateLegacyPartnumberBookCatalogBeforeAutoMigrate(db *gorm.DB) error {
+	if !db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
+		return nil
+	}
+
+	// Legacy databases may still have the pre-catalog shape (`book_id`/`lot_name`) or the
+	// intermediate canonical shape with obsolete columns like `is_primary_pn`. Let the
+	// explicit rebuild logic normalize this table before GORM AutoMigrate attempts a
+	// table-copy migration on its own.
+	return migrateLocalPartnumberBookCatalog(db)
+}
+
+func ensureLocalPartnumberBookItemTable(db *gorm.DB) error {
+	if db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
+		return nil
+	}
+	if err := db.Exec(`
+		CREATE TABLE local_partnumber_book_items (
+			id INTEGER PRIMARY KEY AUTOINCREMENT,
+			partnumber TEXT NOT NULL UNIQUE,
+			lots_json TEXT NOT NULL,
+			description TEXT
+		)
+	`).Error; err != nil {
+		return err
+	}
+	return db.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_partnumber_book_items_partnumber ON local_partnumber_book_items(partnumber)`).Error
+}
+
+func validateLocalPartnumberBookCatalog(db *gorm.DB) error {
+	if !db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
+		return nil
+	}
+
+	type rawCatalogRow struct {
+		Partnumber  string `gorm:"column:partnumber"`
+		LotsJSON    string `gorm:"column:lots_json"`
+		Description string `gorm:"column:description"`
+	}
+
+	var rows []rawCatalogRow
+	if err := db.Raw(`
+		SELECT partnumber, lots_json, COALESCE(description, '') AS description
+		FROM local_partnumber_book_items
+		ORDER BY id ASC
+	`).Scan(&rows).Error; err != nil {
+		return fmt.Errorf("load canonical catalog rows: %w", err)
+	}
+
+	seen := make(map[string]struct{}, len(rows))
+	for _, row := range rows {
+		partnumber := strings.TrimSpace(row.Partnumber)
+		if partnumber == "" {
+			return errors.New("catalog contains empty partnumber")
+		}
+		if _, exists := seen[partnumber]; exists {
+			return fmt.Errorf("catalog contains duplicate partnumber %q", partnumber)
+		}
+		seen[partnumber] = struct{}{}
+		if strings.TrimSpace(row.LotsJSON) == "" {
+			return fmt.Errorf("catalog row %q has empty lots_json", partnumber)
+		}
+		var lots LocalPartnumberBookLots
+		if err := json.Unmarshal([]byte(row.LotsJSON), &lots); err != nil {
+			return fmt.Errorf("catalog row %q has invalid lots_json: %w", partnumber, err)
+		}
+	}
+
+	return nil
+}
+
+func recoverLocalPartnumberBookCatalog(db *gorm.DB, cause error) error {
+	slog.Warn("recovering broken local partnumber book catalog", "error", cause.Error())
+
+	if err := ensureLocalPartnumberBooksCatalogColumn(db); err != nil {
+		return fmt.Errorf("ensure local_partnumber_books.partnumbers_json during recovery: %w", err)
+	}
+
+	if db.Migrator().HasTable("local_partnumber_book_items__temp") {
+		if err := quarantineSQLiteTable(db, "local_partnumber_book_items__temp", localPartnumberBookCatalogQuarantineTableName("temp")); err != nil {
+			return fmt.Errorf("quarantine local_partnumber_book_items__temp: %w", err)
+		}
+	}
+	if db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
+		if err := quarantineSQLiteTable(db, "local_partnumber_book_items", localPartnumberBookCatalogQuarantineTableName("broken")); err != nil {
+			return fmt.Errorf("quarantine local_partnumber_book_items: %w", err)
+		}
+	}
+	if err := ensureLocalPartnumberBookItemTable(db); err != nil {
+		return fmt.Errorf("recreate local_partnumber_book_items after recovery: %w", err)
+	}
+
+	slog.Warn("local partnumber book catalog reset to empty cache; next sync will rebuild it")
+	return nil
+}
+
+func recoverFromReadOnlyCacheInitFailure(db *gorm.DB, cause error) (bool, error) {
+	lowerCause := strings.ToLower(cause.Error())
+	recoveredAny := false
+
+	for _, tableName := range affectedReadOnlyCacheTables(lowerCause) {
+		if err := resetReadOnlyCacheTable(db, tableName); err != nil {
+			return recoveredAny, err
+		}
+		recoveredAny = true
+	}
+
+	if strings.Contains(lowerCause, "local_partnumber_book_items") || strings.Contains(lowerCause, "local_partnumber_books") {
+		if err := recoverLocalPartnumberBookCatalog(db, cause); err != nil {
+			return recoveredAny, err
+		}
+		recoveredAny = true
+	}
+
+	if recoveredAny {
+		slog.Warn("recovered read-only local cache tables after startup failure", "error", cause.Error())
+	}
+	return recoveredAny, nil
+}
+
+func affectedReadOnlyCacheTables(lowerCause string) []string {
+	affected := make([]string, 0, len(localReadOnlyCacheTables))
+	for _, tableName := range localReadOnlyCacheTables {
+		if tableName == "local_partnumber_book_items" || tableName == "local_partnumber_books" {
+			continue
+		}
+		if strings.Contains(lowerCause, tableName) {
+			affected = append(affected, tableName)
+		}
+	}
+	return affected
+}
+
+func resetReadOnlyCacheTable(db *gorm.DB, tableName string) error {
+	slog.Warn("resetting read-only local cache table", "table", tableName)
+	tempName := tableName + "__temp"
+	if db.Migrator().HasTable(tempName) {
+		if err := quarantineSQLiteTable(db, tempName, localReadOnlyCacheQuarantineTableName(tableName, "temp")); err != nil {
+			return fmt.Errorf("quarantine temp table %s: %w", tempName, err)
+		}
+	}
+	if db.Migrator().HasTable(tableName) {
+		if err := quarantineSQLiteTable(db, tableName, localReadOnlyCacheQuarantineTableName(tableName, "broken")); err != nil {
+			return fmt.Errorf("quarantine table %s: %w", tableName, err)
+		}
+	}
+	return nil
+}
+
+func ensureLocalPartnumberBooksCatalogColumn(db *gorm.DB) error {
+	if !db.Migrator().HasTable(&LocalPartnumberBook{}) {
+		return nil
+	}
+	if db.Migrator().HasColumn(&LocalPartnumberBook{}, "partnumbers_json") {
+		return nil
+	}
+	return db.Exec(`ALTER TABLE local_partnumber_books ADD COLUMN partnumbers_json TEXT NOT NULL DEFAULT '[]'`).Error
+}
+
+func quarantineSQLiteTable(db *gorm.DB, tableName string, quarantineName string) error {
+	if !db.Migrator().HasTable(tableName) {
+		return nil
+	}
+	if tableName == quarantineName {
+		return nil
+	}
+	if db.Migrator().HasTable(quarantineName) {
+		if err := db.Exec(`DROP TABLE ` + quarantineName).Error; err != nil {
+			return err
+		}
+	}
+	return db.Exec(`ALTER TABLE ` + tableName + ` RENAME TO ` + quarantineName).Error
+}
+
+func localPartnumberBookCatalogQuarantineTableName(kind string) string {
+	return "local_partnumber_book_items_" + kind + "_" + time.Now().UTC().Format("20060102150405")
+}
+
+func localReadOnlyCacheQuarantineTableName(tableName string, kind string) string {
+	return tableName + "_" + kind + "_" + time.Now().UTC().Format("20060102150405")
+}
+
 // HasSettings returns true if connection settings exist
 func (l *LocalDB) HasSettings() bool {
 	var count int64
@@ -204,10 +508,23 @@ func (l *LocalDB) GetSettings() (*ConnectionSettings, error) {
 	}
 
 	// Decrypt password
-	password, err := Decrypt(settings.PasswordEncrypted)
+	password, migrated, err := DecryptWithMetadata(settings.PasswordEncrypted)
 	if err != nil {
 		return nil, fmt.Errorf("decrypting password: %w", err)
 	}
+
+	if migrated {
+		encrypted, encryptErr := Encrypt(password)
+		if encryptErr != nil {
+			return nil, fmt.Errorf("re-encrypting legacy password: %w", encryptErr)
+		}
+		if err := l.db.Model(&ConnectionSettings{}).
+			Where("id = ?", settings.ID).
+			Update("password_encrypted", encrypted).Error; err != nil {
+			return nil, fmt.Errorf("upgrading legacy password encryption: %w", err)
+		}
+	}
+
 	settings.PasswordEncrypted = password // Return decrypted password in this field
 
 	return &settings, nil
@@ -302,6 +619,46 @@ func (l *LocalDB) SaveProject(project *LocalProject) error {
 	return l.db.Save(project).Error
 }
 
+// SaveProjectPreservingUpdatedAt stores a project without replacing UpdatedAt
+// with the current local sync time.
+func (l *LocalDB) SaveProjectPreservingUpdatedAt(project *LocalProject) error {
+	if project == nil {
+		return fmt.Errorf("project is nil")
+	}
+
+	if project.ID == 0 && strings.TrimSpace(project.UUID) != "" {
+		var existing LocalProject
+		err := l.db.Where("uuid = ?", project.UUID).First(&existing).Error
+		if err == nil {
+			project.ID = existing.ID
+		} else if !errors.Is(err, gorm.ErrRecordNotFound) {
+			return err
+		}
+	}
+
+	if project.ID == 0 {
+		return l.db.Create(project).Error
+	}
+
+	return l.db.Model(&LocalProject{}).
+		Where("id = ?", project.ID).
+		UpdateColumns(map[string]interface{}{
+			"uuid":           project.UUID,
+			"server_id":      project.ServerID,
+			"owner_username": project.OwnerUsername,
+			"code":           project.Code,
+			"variant":        project.Variant,
+			"name":           project.Name,
+			"tracker_url":    project.TrackerURL,
+			"is_active":      project.IsActive,
+			"is_system":      project.IsSystem,
+			"created_at":     project.CreatedAt,
+			"updated_at":     project.UpdatedAt,
+			"synced_at":      project.SyncedAt,
+			"sync_status":    project.SyncStatus,
+		}).Error
+}
+
 func (l *LocalDB) GetProjects(ownerUsername string, includeArchived bool) ([]LocalProject, error) {
 	var projects []LocalProject
 	query := l.db.Model(&LocalProject{}).Where("owner_username = ?", ownerUsername)
@@ -341,7 +698,7 @@ func (l *LocalDB) GetProjectByName(ownerUsername, name string) (*LocalProject, e
 func (l *LocalDB) GetProjectConfigurations(projectUUID string) ([]LocalConfiguration, error) {
 	var configs []LocalConfiguration
 	err := l.db.Where("project_uuid = ? AND is_active = ?", projectUUID, true).
-		Order("created_at DESC").
+		Order(configurationLineOrderClause()).
 		Find(&configs).Error
 	return configs, err
 }
@@ -514,9 +871,54 @@ func (l *LocalDB) BackfillConfigurationProjects(defaultOwner string) error {
|
|||||||
|
|
||||||
// SaveConfiguration saves a configuration to local SQLite
|
// SaveConfiguration saves a configuration to local SQLite
|
||||||
func (l *LocalDB) SaveConfiguration(config *LocalConfiguration) error {
|
func (l *LocalDB) SaveConfiguration(config *LocalConfiguration) error {
|
||||||
|
if config != nil && config.IsActive && config.Line <= 0 {
|
||||||
|
line, err := l.NextConfigurationLine(config.ProjectUUID, config.UUID)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
config.Line = line
|
||||||
|
}
|
||||||
return l.db.Save(config).Error
|
return l.db.Save(config).Error
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (l *LocalDB) NextConfigurationLine(projectUUID *string, excludeUUID string) (int, error) {
|
||||||
|
return NextConfigurationLineTx(l.db, projectUUID, excludeUUID)
|
||||||
|
}
|
||||||
|
|
||||||
|
func NextConfigurationLineTx(tx *gorm.DB, projectUUID *string, excludeUUID string) (int, error) {
|
||||||
|
query := tx.Model(&LocalConfiguration{}).
|
||||||
|
Where("is_active = ?", true)
|
||||||
|
|
||||||
|
trimmedExclude := strings.TrimSpace(excludeUUID)
|
||||||
|
if trimmedExclude != "" {
|
||||||
|
query = query.Where("uuid <> ?", trimmedExclude)
|
||||||
|
}
|
||||||
|
|
||||||
|
if projectUUID != nil && strings.TrimSpace(*projectUUID) != "" {
|
||||||
|
query = query.Where("project_uuid = ?", strings.TrimSpace(*projectUUID))
|
||||||
|
} else {
|
||||||
|
query = query.Where("project_uuid IS NULL OR TRIM(project_uuid) = ''")
|
||||||
|
}
|
||||||
|
|
||||||
|
var maxLine int
|
||||||
|
if err := query.Select("COALESCE(MAX(line_no), 0)").Scan(&maxLine).Error; err != nil {
|
||||||
|
return 0, fmt.Errorf("read max line_no: %w", err)
|
||||||
|
}
|
||||||
|
if maxLine < 0 {
|
||||||
|
maxLine = 0
|
||||||
|
}
|
||||||
|
|
||||||
|
next := ((maxLine / 10) + 1) * 10
|
||||||
|
if next < 10 {
|
||||||
|
next = 10
|
||||||
|
}
|
||||||
|
return next, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func configurationLineOrderClause() string {
|
||||||
|
return "CASE WHEN COALESCE(local_configurations.line_no, 0) <= 0 THEN 2147483647 ELSE local_configurations.line_no END ASC, local_configurations.created_at DESC, local_configurations.id DESC"
|
||||||
|
}
|
||||||
|
|
||||||
// GetConfigurations returns all local configurations
|
// GetConfigurations returns all local configurations
|
||||||
func (l *LocalDB) GetConfigurations() ([]LocalConfiguration, error) {
|
func (l *LocalDB) GetConfigurations() ([]LocalConfiguration, error) {
|
||||||
var configs []LocalConfiguration
|
var configs []LocalConfiguration
|
||||||
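The line-number allocation introduced by this diff (`NextConfigurationLineTx`) reserves the next multiple of ten above the current maximum, BASIC-style, so rows can later be reordered into the gaps. A minimal standalone sketch of just that arithmetic (the helper name `nextLine` is illustrative, not part of the diff):

```go
package main

import "fmt"

// nextLine mirrors the allocation in NextConfigurationLineTx: given the
// highest line_no currently in use, reserve the next multiple of 10.
func nextLine(maxLine int) int {
	if maxLine < 0 {
		maxLine = 0
	}
	next := ((maxLine / 10) + 1) * 10
	if next < 10 {
		next = 10
	}
	return next
}

func main() {
	for _, max := range []int{0, 10, 35} {
		fmt.Println(max, "->", nextLine(max)) // 0 -> 10, 10 -> 20, 35 -> 40
	}
}
```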
@@ -672,6 +1074,26 @@ func (l *LocalDB) GetLastSyncTime() *time.Time {
 	return &t
 }
+
+func (l *LocalDB) getAppSettingValue(key string) (string, bool) {
+	var setting struct {
+		Value string
+	}
+	if err := l.db.Table("app_settings").
+		Where("key = ?", key).
+		First(&setting).Error; err != nil {
+		return "", false
+	}
+	return setting.Value, true
+}
+
+func (l *LocalDB) upsertAppSetting(tx *gorm.DB, key, value string, updatedAt time.Time) error {
+	return tx.Exec(`
+		INSERT INTO app_settings (key, value, updated_at)
+		VALUES (?, ?, ?)
+		ON CONFLICT(key) DO UPDATE SET value = excluded.value, updated_at = excluded.updated_at
+	`, key, value, updatedAt.Format(time.RFC3339)).Error
+}
 
 // SetLastSyncTime sets the last sync timestamp
 func (l *LocalDB) SetLastSyncTime(t time.Time) error {
 	// Using raw SQL for upsert since SQLite doesn't have native UPSERT in all versions
@@ -682,6 +1104,55 @@ func (l *LocalDB) SetLastSyncTime(t time.Time) error {
 	`, "last_pricelist_sync", t.Format(time.RFC3339), time.Now().Format(time.RFC3339)).Error
 }
+
+func (l *LocalDB) GetLastPricelistSyncAttemptAt() *time.Time {
+	value, ok := l.getAppSettingValue("last_pricelist_sync_attempt_at")
+	if !ok {
+		return nil
+	}
+	t, err := time.Parse(time.RFC3339, value)
+	if err != nil {
+		return nil
+	}
+	return &t
+}
+
+func (l *LocalDB) GetLastPricelistSyncStatus() string {
+	value, ok := l.getAppSettingValue("last_pricelist_sync_status")
+	if !ok {
+		return ""
+	}
+	return strings.TrimSpace(value)
+}
+
+func (l *LocalDB) GetLastPricelistSyncError() string {
+	value, ok := l.getAppSettingValue("last_pricelist_sync_error")
+	if !ok {
+		return ""
+	}
+	return strings.TrimSpace(value)
+}
+
+func (l *LocalDB) SetPricelistSyncResult(status, errorText string, attemptedAt time.Time) error {
+	status = strings.TrimSpace(status)
+	errorText = strings.TrimSpace(errorText)
+	if status == "" {
+		status = "unknown"
+	}
+
+	return l.db.Transaction(func(tx *gorm.DB) error {
+		if err := l.upsertAppSetting(tx, "last_pricelist_sync_status", status, attemptedAt); err != nil {
+			return err
+		}
+		if err := l.upsertAppSetting(tx, "last_pricelist_sync_error", errorText, attemptedAt); err != nil {
+			return err
+		}
+		if err := l.upsertAppSetting(tx, "last_pricelist_sync_attempt_at", attemptedAt.Format(time.RFC3339), attemptedAt); err != nil {
+			return err
+		}
+		return nil
+	})
+}
 
 // CountLocalPricelists returns the number of local pricelists
 func (l *LocalDB) CountLocalPricelists() int64 {
 	var count int64
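`SetPricelistSyncResult` normalizes its inputs before writing: statuses are trimmed and a blank status is recorded as `"unknown"`. The normalization step in isolation (helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeSyncStatus mirrors the trimming/defaulting at the top of
// SetPricelistSyncResult: blank statuses are recorded as "unknown".
func normalizeSyncStatus(status string) string {
	status = strings.TrimSpace(status)
	if status == "" {
		status = "unknown"
	}
	return status
}

func main() {
	fmt.Println(normalizeSyncStatus("  ok ")) // "ok"
	fmt.Println(normalizeSyncStatus(""))      // "unknown"
}
```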
@@ -689,6 +1160,29 @@ func (l *LocalDB) CountLocalPricelists() int64 {
 	return count
 }
+
+// CountAllPricelistItems returns total rows across all local_pricelist_items.
+func (l *LocalDB) CountAllPricelistItems() int64 {
+	var count int64
+	l.db.Model(&LocalPricelistItem{}).Count(&count)
+	return count
+}
+
+// CountComponents returns the number of rows in local_components.
+func (l *LocalDB) CountComponents() int64 {
+	var count int64
+	l.db.Model(&LocalComponent{}).Count(&count)
+	return count
+}
+
+// DBFileSizeBytes returns the size of the SQLite database file in bytes.
+func (l *LocalDB) DBFileSizeBytes() int64 {
+	info, err := os.Stat(l.path)
+	if err != nil {
+		return 0
+	}
+	return info.Size()
+}
 
 // GetLatestLocalPricelist returns the most recently synced pricelist
 func (l *LocalDB) GetLatestLocalPricelist() (*LocalPricelist, error) {
 	var pricelist LocalPricelist

@@ -793,20 +1287,20 @@ func (l *LocalDB) CountLocalPricelistItemsWithEmptyCategory(pricelistID uint) (i
 	return count, nil
 }
 
-// SaveLocalPricelistItems saves pricelist items to local SQLite
+// SaveLocalPricelistItems saves pricelist items to local SQLite.
+// Duplicate (pricelist_id, lot_name) rows are silently ignored.
 func (l *LocalDB) SaveLocalPricelistItems(items []LocalPricelistItem) error {
 	if len(items) == 0 {
 		return nil
 	}
 
-	// Batch insert
 	batchSize := 500
 	for i := 0; i < len(items); i += batchSize {
 		end := i + batchSize
 		if end > len(items) {
 			end = len(items)
 		}
-		if err := l.db.CreateInBatches(items[i:end], batchSize).Error; err != nil {
+		if err := l.db.Clauses(clause.OnConflict{DoNothing: true}).CreateInBatches(items[i:end], batchSize).Error; err != nil {
 			return err
 		}
 	}
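The batching loop in `SaveLocalPricelistItems` walks the slice in windows of at most 500 items, clamping the final window. The slicing arithmetic can be sketched on its own (the `batchBounds` helper is illustrative):

```go
package main

import "fmt"

// batchBounds yields the [start, end) windows that SaveLocalPricelistItems'
// loop would pass to CreateInBatches for a given item count and batch size.
func batchBounds(n, batchSize int) [][2]int {
	var bounds [][2]int
	for i := 0; i < n; i += batchSize {
		end := i + batchSize
		if end > n {
			end = n // clamp the final, partial batch
		}
		bounds = append(bounds, [2]int{i, end})
	}
	return bounds
}

func main() {
	fmt.Println(batchBounds(1201, 500)) // [[0 500] [500 1000] [1000 1201]]
}
```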
@@ -856,6 +1350,32 @@ func (l *LocalDB) GetLocalPriceForLot(pricelistID uint, lotName string) (float64
 	return item.Price, nil
 }
+
+// GetLocalPricesForLots returns prices for multiple lots from a local pricelist in a single query.
+// Uses the composite index (pricelist_id, lot_name). Missing lots are omitted from the result.
+func (l *LocalDB) GetLocalPricesForLots(pricelistID uint, lotNames []string) (map[string]float64, error) {
+	result := make(map[string]float64, len(lotNames))
+	if len(lotNames) == 0 {
+		return result, nil
+	}
+	type row struct {
+		LotName string  `gorm:"column:lot_name"`
+		Price   float64 `gorm:"column:price"`
+	}
+	var rows []row
+	if err := l.db.Model(&LocalPricelistItem{}).
+		Select("lot_name, price").
+		Where("pricelist_id = ? AND lot_name IN ?", pricelistID, lotNames).
+		Find(&rows).Error; err != nil {
+		return nil, err
+	}
+	for _, r := range rows {
+		if r.Price > 0 {
+			result[r.LotName] = r.Price
+		}
+	}
+	return result, nil
+}
 
 // GetLocalLotCategoriesByServerPricelistID returns lot_category for each lot_name from a local pricelist resolved by server ID.
 // Missing lots are not included in the map; caller is responsible for strict validation.
 func (l *LocalDB) GetLocalLotCategoriesByServerPricelistID(serverPricelistID uint, lotNames []string) (map[string]string, error) {
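After its single `IN` query, `GetLocalPricesForLots` keeps only rows with a positive price, so both missing lots and zero-priced lots are absent from the returned map. The post-query filtering step in isolation (the `priceRow`/`pricesToMap` names are illustrative):

```go
package main

import "fmt"

type priceRow struct {
	LotName string
	Price   float64
}

// pricesToMap mirrors the post-query step of GetLocalPricesForLots:
// rows with a non-positive price are treated as "no price" and omitted.
func pricesToMap(rows []priceRow) map[string]float64 {
	result := make(map[string]float64, len(rows))
	for _, r := range rows {
		if r.Price > 0 {
			result[r.LotName] = r.Price
		}
	}
	return result
}

func main() {
	rows := []priceRow{{"CPU-1", 120.5}, {"RAM-8G", 0}, {"SSD-1T", 80}}
	fmt.Println(len(pricesToMap(rows))) // 2: the zero-priced lot is dropped
}
```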
@@ -1188,42 +1708,6 @@ func (l *LocalDB) repairConfigurationChange(change *PendingChange) error {
 	return nil
 }
-
-// GetRemoteMigrationApplied returns a locally applied remote migration by ID.
-func (l *LocalDB) GetRemoteMigrationApplied(id string) (*LocalRemoteMigrationApplied, error) {
-	var migration LocalRemoteMigrationApplied
-	if err := l.db.Where("id = ?", id).First(&migration).Error; err != nil {
-		return nil, err
-	}
-	return &migration, nil
-}
-
-// UpsertRemoteMigrationApplied writes applied migration metadata.
-func (l *LocalDB) UpsertRemoteMigrationApplied(id, checksum, appVersion string, appliedAt time.Time) error {
-	record := &LocalRemoteMigrationApplied{
-		ID:         id,
-		Checksum:   checksum,
-		AppVersion: appVersion,
-		AppliedAt:  appliedAt,
-	}
-	return l.db.Clauses(clause.OnConflict{
-		Columns: []clause.Column{{Name: "id"}},
-		DoUpdates: clause.Assignments(map[string]interface{}{
-			"checksum":    checksum,
-			"app_version": appVersion,
-			"applied_at":  appliedAt,
-		}),
-	}).Create(record).Error
-}
-
-// GetLatestAppliedRemoteMigrationID returns last applied remote migration id.
-func (l *LocalDB) GetLatestAppliedRemoteMigrationID() (string, error) {
-	var record LocalRemoteMigrationApplied
-	if err := l.db.Order("applied_at DESC").First(&record).Error; err != nil {
-		return "", err
-	}
-	return record.ID, nil
-}
-
 // GetSyncGuardState returns the latest readiness guard state.
 func (l *LocalDB) GetSyncGuardState() (*LocalSyncGuardState, error) {
 	var state LocalSyncGuardState
@@ -4,6 +4,7 @@ import (
 	"errors"
 	"fmt"
 	"log/slog"
+	"sort"
 	"strings"
 	"time"
 

@@ -108,6 +109,29 @@ var localMigrations = []localMigration{
 		name: "Deduplicate configuration revisions by spec+price",
 		run:  deduplicateConfigurationVersionsBySpecAndPrice,
 	},
+	{
+		id:   "2026_02_19_local_config_line_no",
+		name: "Add line_no to local_configurations and backfill ordering",
+		run:  addLocalConfigurationLineNo,
+	},
+	{
+		id:   "2026_03_07_local_partnumber_book_catalog",
+		name: "Convert local partnumber book cache to book membership + deduplicated PN catalog",
+		run:  migrateLocalPartnumberBookCatalog,
+	},
+	{
+		id:   "2026_03_13_pricelist_items_dedup_unique",
+		name: "Deduplicate local_pricelist_items and add unique index on (pricelist_id, lot_name)",
+		run:  deduplicatePricelistItemsAndAddUniqueIndex,
+	},
+}
+
+type localPartnumberCatalogRow struct {
+	Partnumber  string
+	LotsJSON    LocalPartnumberBookLots
+	Description string
+	CreatedAt   time.Time
+	ServerID    int
 }
 
 func runLocalMigrations(db *gorm.DB) error {
@@ -806,3 +830,293 @@ func addLocalConfigurationSupportCode(tx *gorm.DB) error {
 	}
 	return nil
 }
+
+func addLocalConfigurationLineNo(tx *gorm.DB) error {
+	type columnInfo struct {
+		Name string `gorm:"column:name"`
+	}
+	var columns []columnInfo
+	if err := tx.Raw(`
+		SELECT name FROM pragma_table_info('local_configurations')
+		WHERE name IN ('line_no')
+	`).Scan(&columns).Error; err != nil {
+		return fmt.Errorf("check local_configurations(line_no) existence: %w", err)
+	}
+	if len(columns) == 0 {
+		if err := tx.Exec(`
+			ALTER TABLE local_configurations
+			ADD COLUMN line_no INTEGER
+		`).Error; err != nil {
+			return fmt.Errorf("add local_configurations.line_no: %w", err)
+		}
+		slog.Info("added line_no to local_configurations")
+	}
+
+	if err := tx.Exec(`
+		WITH ranked AS (
+			SELECT
+				id,
+				ROW_NUMBER() OVER (
+					PARTITION BY COALESCE(NULLIF(TRIM(project_uuid), ''), '__NO_PROJECT__')
+					ORDER BY created_at ASC, id ASC
+				) AS rn
+			FROM local_configurations
+			WHERE line_no IS NULL OR line_no <= 0
+		)
+		UPDATE local_configurations
+		SET line_no = (
+			SELECT rn * 10
+			FROM ranked
+			WHERE ranked.id = local_configurations.id
+		)
+		WHERE id IN (SELECT id FROM ranked)
+	`).Error; err != nil {
+		return fmt.Errorf("backfill local_configurations.line_no: %w", err)
+	}
+
+	if err := tx.Exec(`
+		CREATE INDEX IF NOT EXISTS idx_local_configurations_project_line_no
+		ON local_configurations(project_uuid, line_no)
+	`).Error; err != nil {
+		return fmt.Errorf("ensure idx_local_configurations_project_line_no: %w", err)
+	}
+
+	return nil
+}
+
+func migrateLocalPartnumberBookCatalog(tx *gorm.DB) error {
+	type columnInfo struct {
+		Name string `gorm:"column:name"`
+	}
+
+	hasBooksTable := tx.Migrator().HasTable(&LocalPartnumberBook{})
+	hasItemsTable := tx.Migrator().HasTable(&LocalPartnumberBookItem{})
+	if !hasItemsTable {
+		return nil
+	}
+
+	if hasBooksTable {
+		var bookCols []columnInfo
+		if err := tx.Raw(`SELECT name FROM pragma_table_info('local_partnumber_books')`).Scan(&bookCols).Error; err != nil {
+			return fmt.Errorf("load local_partnumber_books columns: %w", err)
+		}
+		hasPartnumbersJSON := false
+		for _, c := range bookCols {
+			if c.Name == "partnumbers_json" {
+				hasPartnumbersJSON = true
+				break
+			}
+		}
+		if !hasPartnumbersJSON {
+			if err := tx.Exec(`ALTER TABLE local_partnumber_books ADD COLUMN partnumbers_json TEXT NOT NULL DEFAULT '[]'`).Error; err != nil {
+				return fmt.Errorf("add local_partnumber_books.partnumbers_json: %w", err)
+			}
+		}
+	}
+
+	var itemCols []columnInfo
+	if err := tx.Raw(`SELECT name FROM pragma_table_info('local_partnumber_book_items')`).Scan(&itemCols).Error; err != nil {
+		return fmt.Errorf("load local_partnumber_book_items columns: %w", err)
+	}
+	hasBookID := false
+	hasLotName := false
+	hasLotsJSON := false
+	for _, c := range itemCols {
+		if c.Name == "book_id" {
+			hasBookID = true
+		}
+		if c.Name == "lot_name" {
+			hasLotName = true
+		}
+		if c.Name == "lots_json" {
+			hasLotsJSON = true
+		}
+	}
+	if !hasBookID && !hasLotName && !hasLotsJSON {
+		return nil
+	}
+
+	type legacyRow struct {
+		BookID      uint
+		Partnumber  string
+		LotName     string
+		Description string
+		CreatedAt   time.Time
+		ServerID    int
+	}
+	bookPNs := make(map[uint]map[string]struct{})
+	catalog := make(map[string]*localPartnumberCatalogRow)
+
+	if hasBookID || hasLotName {
+		var rows []legacyRow
+		if err := tx.Raw(`
+			SELECT
+				i.book_id,
+				i.partnumber,
+				i.lot_name,
+				COALESCE(i.description, '') AS description,
+				b.created_at,
+				b.server_id
+			FROM local_partnumber_book_items i
+			INNER JOIN local_partnumber_books b ON b.id = i.book_id
+			ORDER BY b.created_at DESC, b.id DESC, i.partnumber ASC, i.id ASC
+		`).Scan(&rows).Error; err != nil {
+			return fmt.Errorf("load legacy local partnumber book items: %w", err)
+		}
+
+		for _, row := range rows {
+			if _, ok := bookPNs[row.BookID]; !ok {
+				bookPNs[row.BookID] = make(map[string]struct{})
+			}
+			bookPNs[row.BookID][row.Partnumber] = struct{}{}
+
+			entry, ok := catalog[row.Partnumber]
+			if !ok {
+				entry = &localPartnumberCatalogRow{
+					Partnumber:  row.Partnumber,
+					Description: row.Description,
+					CreatedAt:   row.CreatedAt,
+					ServerID:    row.ServerID,
+				}
+				catalog[row.Partnumber] = entry
+			}
+			if row.CreatedAt.After(entry.CreatedAt) || (row.CreatedAt.Equal(entry.CreatedAt) && row.ServerID >= entry.ServerID) {
+				entry.Description = row.Description
+				entry.CreatedAt = row.CreatedAt
+				entry.ServerID = row.ServerID
+			}
+			found := false
+			for i := range entry.LotsJSON {
+				if entry.LotsJSON[i].LotName == row.LotName {
+					entry.LotsJSON[i].Qty += 1
+					found = true
+					break
+				}
+			}
+			if !found && row.LotName != "" {
+				entry.LotsJSON = append(entry.LotsJSON, LocalPartnumberBookLot{LotName: row.LotName, Qty: 1})
+			}
+		}
+
+		var books []LocalPartnumberBook
+		if err := tx.Find(&books).Error; err != nil {
+			return fmt.Errorf("load local partnumber books: %w", err)
+		}
+		for _, book := range books {
+			pnSet := bookPNs[book.ID]
+			partnumbers := make([]string, 0, len(pnSet))
+			for pn := range pnSet {
+				partnumbers = append(partnumbers, pn)
+			}
+			sort.Strings(partnumbers)
+			if err := tx.Model(&LocalPartnumberBook{}).
+				Where("id = ?", book.ID).
+				Update("partnumbers_json", LocalStringList(partnumbers)).Error; err != nil {
+				return fmt.Errorf("update partnumbers_json for local book %d: %w", book.ID, err)
+			}
+		}
+	} else {
+		var items []LocalPartnumberBookItem
+		if err := tx.Order("id DESC").Find(&items).Error; err != nil {
+			return fmt.Errorf("load canonical local partnumber book items: %w", err)
+		}
+		for _, item := range items {
+			entry, ok := catalog[item.Partnumber]
+			if !ok {
+				copiedLots := append(LocalPartnumberBookLots(nil), item.LotsJSON...)
+				catalog[item.Partnumber] = &localPartnumberCatalogRow{
+					Partnumber:  item.Partnumber,
+					LotsJSON:    copiedLots,
+					Description: item.Description,
+				}
+				continue
+			}
+			if entry.Description == "" && item.Description != "" {
+				entry.Description = item.Description
+			}
+			for _, lot := range item.LotsJSON {
+				merged := false
+				for i := range entry.LotsJSON {
+					if entry.LotsJSON[i].LotName == lot.LotName {
+						if lot.Qty > entry.LotsJSON[i].Qty {
+							entry.LotsJSON[i].Qty = lot.Qty
+						}
+						merged = true
+						break
+					}
+				}
+				if !merged {
+					entry.LotsJSON = append(entry.LotsJSON, lot)
+				}
+			}
+		}
+	}
+
+	return rebuildLocalPartnumberBookCatalog(tx, catalog)
+}
+
+func rebuildLocalPartnumberBookCatalog(tx *gorm.DB, catalog map[string]*localPartnumberCatalogRow) error {
+	if err := tx.Exec(`
+		CREATE TABLE local_partnumber_book_items_new (
+			id INTEGER PRIMARY KEY AUTOINCREMENT,
+			partnumber TEXT NOT NULL UNIQUE,
+			lots_json TEXT NOT NULL,
+			description TEXT
+		)
+	`).Error; err != nil {
+		return fmt.Errorf("create new local_partnumber_book_items table: %w", err)
+	}
+
+	orderedPartnumbers := make([]string, 0, len(catalog))
+	for pn := range catalog {
+		orderedPartnumbers = append(orderedPartnumbers, pn)
+	}
+	sort.Strings(orderedPartnumbers)
+	for _, pn := range orderedPartnumbers {
+		row := catalog[pn]
+		sort.Slice(row.LotsJSON, func(i, j int) bool {
+			return row.LotsJSON[i].LotName < row.LotsJSON[j].LotName
+		})
+		if err := tx.Table("local_partnumber_book_items_new").Create(&LocalPartnumberBookItem{
+			Partnumber:  row.Partnumber,
+			LotsJSON:    row.LotsJSON,
+			Description: row.Description,
+		}).Error; err != nil {
+			return fmt.Errorf("insert new local_partnumber_book_items row for %s: %w", pn, err)
+		}
+	}
+
+	if err := tx.Exec(`DROP TABLE local_partnumber_book_items`).Error; err != nil {
+		return fmt.Errorf("drop legacy local_partnumber_book_items: %w", err)
+	}
+	if err := tx.Exec(`ALTER TABLE local_partnumber_book_items_new RENAME TO local_partnumber_book_items`).Error; err != nil {
+		return fmt.Errorf("rename new local_partnumber_book_items table: %w", err)
+	}
+	if err := tx.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_partnumber_book_items_partnumber ON local_partnumber_book_items(partnumber)`).Error; err != nil {
+		return fmt.Errorf("create local_partnumber_book_items partnumber index: %w", err)
+	}
+	return nil
+}
+
+func deduplicatePricelistItemsAndAddUniqueIndex(tx *gorm.DB) error {
+	// Remove duplicate (pricelist_id, lot_name) rows keeping only the row with the lowest id.
+	if err := tx.Exec(`
+		DELETE FROM local_pricelist_items
+		WHERE id NOT IN (
+			SELECT MIN(id) FROM local_pricelist_items
+			GROUP BY pricelist_id, lot_name
+		)
+	`).Error; err != nil {
+		return fmt.Errorf("deduplicate local_pricelist_items: %w", err)
+	}
+
+	// Add unique index to prevent future duplicates.
+	if err := tx.Exec(`
+		CREATE UNIQUE INDEX IF NOT EXISTS idx_local_pricelist_items_pricelist_lot_unique
+		ON local_pricelist_items(pricelist_id, lot_name)
+	`).Error; err != nil {
+		return fmt.Errorf("create unique index on local_pricelist_items: %w", err)
+	}
+	slog.Info("deduplicated local_pricelist_items and added unique index")
+	return nil
+}
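The `addLocalConfigurationLineNo` backfill numbers unnumbered rows per project bucket in creation order, in steps of ten (`rn * 10`), leaving already-numbered rows untouched. The window-function logic can be simulated in plain Go (the `cfg` struct and `backfillLineNo` helper are illustrative, with `ID` standing in for the `created_at ASC, id ASC` ordering):

```go
package main

import (
	"fmt"
	"sort"
)

type cfg struct {
	ID      int
	Project string // "" behaves like the '__NO_PROJECT__' bucket
	LineNo  int
}

// backfillLineNo assigns line_no = ROW_NUMBER()*10 per project bucket,
// touching only rows with line_no <= 0, as the SQL migration does.
func backfillLineNo(cfgs []cfg) {
	buckets := map[string][]int{} // project -> indexes of unnumbered rows
	for i, c := range cfgs {
		if c.LineNo <= 0 {
			buckets[c.Project] = append(buckets[c.Project], i)
		}
	}
	for _, idxs := range buckets {
		sort.Slice(idxs, func(a, b int) bool { return cfgs[idxs[a]].ID < cfgs[idxs[b]].ID })
		for rn, i := range idxs {
			cfgs[i].LineNo = (rn + 1) * 10
		}
	}
}

func main() {
	cfgs := []cfg{{ID: 1, Project: "p1"}, {ID: 2, Project: "p1"}, {ID: 3}, {ID: 4, Project: "p1", LineNo: 70}}
	backfillLineNo(cfgs)
	fmt.Println(cfgs) // p1 rows get 10 and 20; the no-project row gets 10; row 4 keeps 70
}
```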
@@ -83,35 +83,39 @@ func (s *LocalStringList) Scan(value interface{}) error {
 
 // LocalConfiguration stores configurations in local SQLite
 type LocalConfiguration struct {
 	ID                    uint             `gorm:"primaryKey;autoIncrement" json:"id"`
 	UUID                  string           `gorm:"uniqueIndex;not null" json:"uuid"`
 	ServerID              *uint            `json:"server_id"` // ID on MariaDB server, NULL if local only
 	ProjectUUID           *string          `gorm:"index" json:"project_uuid,omitempty"`
 	CurrentVersionID      *string          `gorm:"index" json:"current_version_id,omitempty"`
 	IsActive              bool             `gorm:"default:true;index" json:"is_active"`
 	Name                  string           `gorm:"not null" json:"name"`
 	Items                 LocalConfigItems `gorm:"type:text" json:"items"` // JSON stored as text in SQLite
 	TotalPrice            *float64         `json:"total_price"`
 	CustomPrice           *float64         `json:"custom_price"`
 	Notes                 string           `json:"notes"`
 	IsTemplate            bool             `gorm:"default:false" json:"is_template"`
 	ServerCount           int              `gorm:"default:1" json:"server_count"`
 	ServerModel           string           `gorm:"size:100" json:"server_model,omitempty"`
 	SupportCode           string           `gorm:"size:20" json:"support_code,omitempty"`
 	Article               string           `gorm:"size:80" json:"article,omitempty"`
 	PricelistID           *uint            `gorm:"index" json:"pricelist_id,omitempty"`
 	WarehousePricelistID  *uint            `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
 	CompetitorPricelistID *uint            `gorm:"index" json:"competitor_pricelist_id,omitempty"`
+	DisablePriceRefresh   bool             `gorm:"default:false" json:"disable_price_refresh"`
 	OnlyInStock           bool             `gorm:"default:false" json:"only_in_stock"`
+	VendorSpec            VendorSpec       `gorm:"type:text" json:"vendor_spec,omitempty"`
+	Line                  int              `gorm:"column:line_no;index" json:"line"`
 	PriceUpdatedAt        *time.Time       `gorm:"type:timestamp" json:"price_updated_at,omitempty"`
 	CreatedAt             time.Time        `json:"created_at"`
 	UpdatedAt             time.Time        `json:"updated_at"`
 	SyncedAt              *time.Time       `json:"synced_at"`
+	ConfigType            string           `gorm:"default:server" json:"config_type"` // "server" | "storage"
 	SyncStatus            string           `gorm:"default:'local'" json:"sync_status"` // 'local', 'synced', 'modified'
 	OriginalUserID        uint             `json:"original_user_id"` // UserID from MariaDB for reference
 	OriginalUsername      string           `gorm:"not null;default:'';index" json:"original_username"`
 	CurrentVersion        *LocalConfigurationVersion  `gorm:"foreignKey:CurrentVersionID;references:ID" json:"current_version,omitempty"`
 	Versions              []LocalConfigurationVersion `gorm:"foreignKey:ConfigurationUUID;references:UUID" json:"versions,omitempty"`
 }
 
 func (LocalConfiguration) TableName() string {

@@ -200,18 +204,6 @@ func (LocalComponent) TableName() string {
 	return "local_components"
 }
-
-// LocalRemoteMigrationApplied tracks remote SQLite migrations received from server and applied locally.
-type LocalRemoteMigrationApplied struct {
-	ID         string    `gorm:"primaryKey;size:128" json:"id"`
-	Checksum   string    `gorm:"size:128;not null" json:"checksum"`
-	AppVersion string    `gorm:"size:64" json:"app_version,omitempty"`
-	AppliedAt  time.Time `gorm:"not null" json:"applied_at"`
-}
-
-func (LocalRemoteMigrationApplied) TableName() string {
-	return "local_remote_migrations_applied"
-}
 
 // LocalSyncGuardState stores latest sync readiness decision for UI and preflight checks.
 type LocalSyncGuardState struct {
 	ID uint `gorm:"primaryKey;autoIncrement" json:"id"`
@@ -242,3 +234,112 @@ type PendingChange struct {
 func (PendingChange) TableName() string {
 	return "pending_changes"
 }
 
+// LocalPartnumberBook stores a version snapshot of the PN→LOT mapping book (pull-only from PriceForge)
+type LocalPartnumberBook struct {
+	ID              uint            `gorm:"primaryKey;autoIncrement" json:"id"`
+	ServerID        int             `gorm:"uniqueIndex;not null" json:"server_id"`
+	Version         string          `gorm:"not null" json:"version"`
+	CreatedAt       time.Time       `gorm:"not null" json:"created_at"`
+	IsActive        bool            `gorm:"not null;default:true" json:"is_active"`
+	PartnumbersJSON LocalStringList `gorm:"column:partnumbers_json;type:text" json:"partnumbers_json"`
+}
+
+func (LocalPartnumberBook) TableName() string {
+	return "local_partnumber_books"
+}
+
+type LocalPartnumberBookLot struct {
+	LotName string  `json:"lot_name"`
+	Qty     float64 `json:"qty"`
+}
+
+type LocalPartnumberBookLots []LocalPartnumberBookLot
+
+func (l LocalPartnumberBookLots) Value() (driver.Value, error) {
+	return json.Marshal(l)
+}
+
+func (l *LocalPartnumberBookLots) Scan(value interface{}) error {
+	if value == nil {
+		*l = make(LocalPartnumberBookLots, 0)
+		return nil
+	}
+	var bytes []byte
+	switch v := value.(type) {
+	case []byte:
+		bytes = v
+	case string:
+		bytes = []byte(v)
+	default:
+		return errors.New("type assertion failed for LocalPartnumberBookLots")
+	}
+	return json.Unmarshal(bytes, l)
+}
+
+// LocalPartnumberBookItem stores the canonical PN composition pulled from PriceForge.
+type LocalPartnumberBookItem struct {
+	ID          uint                    `gorm:"primaryKey;autoIncrement" json:"id"`
+	Partnumber  string                  `gorm:"not null" json:"partnumber"`
+	LotsJSON    LocalPartnumberBookLots `gorm:"column:lots_json;type:text" json:"lots_json"`
+	Description string                  `json:"description,omitempty"`
+}
+
+func (LocalPartnumberBookItem) TableName() string {
+	return "local_partnumber_book_items"
+}
+
+// VendorSpecItem represents a single row in a vendor BOM specification
+type VendorSpecItem struct {
+	SortOrder           int                       `json:"sort_order"`
+	VendorPartnumber    string                    `json:"vendor_partnumber"`
+	Quantity            int                       `json:"quantity"`
+	Description         string                    `json:"description,omitempty"`
+	UnitPrice           *float64                  `json:"unit_price,omitempty"`
+	TotalPrice          *float64                  `json:"total_price,omitempty"`
+	ResolvedLotName     string                    `json:"resolved_lot_name,omitempty"`
+	ResolutionSource    string                    `json:"resolution_source,omitempty"` // "book", "manual", "unresolved"
+	ManualLotSuggestion string                    `json:"manual_lot_suggestion,omitempty"`
+	LotQtyPerPN         int                       `json:"lot_qty_per_pn,omitempty"`
+	LotAllocations      []VendorSpecLotAllocation `json:"lot_allocations,omitempty"`
+	LotMappings         []VendorSpecLotMapping    `json:"lot_mappings,omitempty"`
+}
+
+type VendorSpecLotAllocation struct {
+	LotName  string `json:"lot_name"`
+	Quantity int    `json:"quantity"` // quantity of LOT per 1 vendor PN
+}
+
+// VendorSpecLotMapping is the canonical persisted LOT mapping for a vendor PN row.
+// It stores all mapped LOTs (base + bundle) uniformly.
+type VendorSpecLotMapping struct {
+	LotName       string `json:"lot_name"`
+	QuantityPerPN int    `json:"quantity_per_pn"`
+}
+
+// VendorSpec is a JSON-encodable slice of VendorSpecItem
+type VendorSpec []VendorSpecItem
+
+func (v VendorSpec) Value() (driver.Value, error) {
+	if v == nil {
+		return nil, nil
+	}
+	return json.Marshal(v)
+}
+
+func (v *VendorSpec) Scan(value interface{}) error {
+	if value == nil {
+		*v = nil
+		return nil
+	}
+	var bytes []byte
+	switch val := value.(type) {
+	case []byte:
+		bytes = val
+	case string:
+		bytes = []byte(val)
+	default:
+		return errors.New("type assertion failed for VendorSpec")
+	}
+	return json.Unmarshal(bytes, v)
+}
internal/localdb/project_sync_timestamp_test.go (new file, 53 lines)
@@ -0,0 +1,53 @@
+package localdb
+
+import (
+	"path/filepath"
+	"testing"
+	"time"
+)
+
+func TestSaveProjectPreservingUpdatedAtKeepsProvidedTimestamp(t *testing.T) {
+	dbPath := filepath.Join(t.TempDir(), "project_sync_timestamp.db")
+
+	local, err := New(dbPath)
+	if err != nil {
+		t.Fatalf("open localdb: %v", err)
+	}
+	t.Cleanup(func() { _ = local.Close() })
+
+	createdAt := time.Date(2026, 2, 1, 10, 0, 0, 0, time.UTC)
+	updatedAt := time.Date(2026, 2, 3, 12, 30, 0, 0, time.UTC)
+	project := &LocalProject{
+		UUID:          "project-1",
+		OwnerUsername: "tester",
+		Code:          "OPS-1",
+		Variant:       "Lenovo",
+		IsActive:      true,
+		CreatedAt:     createdAt,
+		UpdatedAt:     updatedAt,
+		SyncStatus:    "synced",
+	}
+
+	if err := local.SaveProjectPreservingUpdatedAt(project); err != nil {
+		t.Fatalf("save project: %v", err)
+	}
+
+	syncedAt := time.Date(2026, 3, 16, 8, 45, 0, 0, time.UTC)
+	project.SyncedAt = &syncedAt
+	project.SyncStatus = "synced"
+
+	if err := local.SaveProjectPreservingUpdatedAt(project); err != nil {
+		t.Fatalf("save project second time: %v", err)
+	}
+
+	stored, err := local.GetProjectByUUID(project.UUID)
+	if err != nil {
+		t.Fatalf("get project: %v", err)
+	}
+	if !stored.UpdatedAt.Equal(updatedAt) {
+		t.Fatalf("updated_at changed during sync save: got %s want %s", stored.UpdatedAt, updatedAt)
+	}
+	if stored.SyncedAt == nil || !stored.SyncedAt.Equal(syncedAt) {
+		t.Fatalf("synced_at not updated correctly: got %+v want %s", stored.SyncedAt, syncedAt)
+	}
+}
@@ -10,31 +10,36 @@ import (
 // BuildConfigurationSnapshot serializes the full local configuration state.
 func BuildConfigurationSnapshot(localCfg *LocalConfiguration) (string, error) {
 	snapshot := map[string]interface{}{
 		"id":                 localCfg.ID,
 		"uuid":               localCfg.UUID,
 		"server_id":          localCfg.ServerID,
 		"project_uuid":       localCfg.ProjectUUID,
 		"current_version_id": localCfg.CurrentVersionID,
 		"is_active":          localCfg.IsActive,
 		"name":               localCfg.Name,
 		"items":              localCfg.Items,
 		"total_price":        localCfg.TotalPrice,
 		"custom_price":       localCfg.CustomPrice,
 		"notes":              localCfg.Notes,
 		"is_template":        localCfg.IsTemplate,
 		"server_count":       localCfg.ServerCount,
 		"server_model":       localCfg.ServerModel,
 		"support_code":       localCfg.SupportCode,
 		"article":            localCfg.Article,
 		"pricelist_id":       localCfg.PricelistID,
-		"only_in_stock":      localCfg.OnlyInStock,
-		"price_updated_at":   localCfg.PriceUpdatedAt,
-		"created_at":         localCfg.CreatedAt,
-		"updated_at":         localCfg.UpdatedAt,
-		"synced_at":          localCfg.SyncedAt,
-		"sync_status":        localCfg.SyncStatus,
-		"original_user_id":   localCfg.OriginalUserID,
-		"original_username":  localCfg.OriginalUsername,
+		"warehouse_pricelist_id":  localCfg.WarehousePricelistID,
+		"competitor_pricelist_id": localCfg.CompetitorPricelistID,
+		"disable_price_refresh":   localCfg.DisablePriceRefresh,
+		"only_in_stock":           localCfg.OnlyInStock,
+		"vendor_spec":             localCfg.VendorSpec,
+		"line":                    localCfg.Line,
+		"price_updated_at":        localCfg.PriceUpdatedAt,
+		"created_at":              localCfg.CreatedAt,
+		"updated_at":              localCfg.UpdatedAt,
+		"synced_at":               localCfg.SyncedAt,
+		"sync_status":             localCfg.SyncStatus,
+		"original_user_id":        localCfg.OriginalUserID,
+		"original_username":       localCfg.OriginalUsername,
 	}
 
 	data, err := json.Marshal(snapshot)
@@ -47,23 +52,28 @@ func BuildConfigurationSnapshot(localCfg *LocalConfiguration) (string, error) {
 // DecodeConfigurationSnapshot returns editable fields from one saved snapshot.
 func DecodeConfigurationSnapshot(data string) (*LocalConfiguration, error) {
 	var snapshot struct {
 		ProjectUUID *string          `json:"project_uuid"`
 		IsActive    *bool            `json:"is_active"`
 		Name        string           `json:"name"`
 		Items       LocalConfigItems `json:"items"`
 		TotalPrice  *float64         `json:"total_price"`
 		CustomPrice *float64         `json:"custom_price"`
 		Notes       string           `json:"notes"`
 		IsTemplate  bool             `json:"is_template"`
 		ServerCount int              `json:"server_count"`
 		ServerModel string           `json:"server_model"`
 		SupportCode string           `json:"support_code"`
 		Article     string           `json:"article"`
 		PricelistID *uint            `json:"pricelist_id"`
-		OnlyInStock      bool       `json:"only_in_stock"`
-		PriceUpdatedAt   *time.Time `json:"price_updated_at"`
-		OriginalUserID   uint       `json:"original_user_id"`
-		OriginalUsername string     `json:"original_username"`
+		WarehousePricelistID  *uint      `json:"warehouse_pricelist_id"`
+		CompetitorPricelistID *uint      `json:"competitor_pricelist_id"`
+		DisablePriceRefresh   bool       `json:"disable_price_refresh"`
+		OnlyInStock           bool       `json:"only_in_stock"`
+		VendorSpec            VendorSpec `json:"vendor_spec"`
+		Line                  int        `json:"line"`
+		PriceUpdatedAt        *time.Time `json:"price_updated_at"`
+		OriginalUserID        uint       `json:"original_user_id"`
+		OriginalUsername      string     `json:"original_username"`
 	}
 
 	if err := json.Unmarshal([]byte(data), &snapshot); err != nil {
@@ -76,31 +86,42 @@ func DecodeConfigurationSnapshot(data string) (*LocalConfiguration, error) {
 	}
 
 	return &LocalConfiguration{
 		IsActive:    isActive,
 		ProjectUUID: snapshot.ProjectUUID,
 		Name:        snapshot.Name,
 		Items:       snapshot.Items,
 		TotalPrice:  snapshot.TotalPrice,
 		CustomPrice: snapshot.CustomPrice,
 		Notes:       snapshot.Notes,
 		IsTemplate:  snapshot.IsTemplate,
 		ServerCount: snapshot.ServerCount,
 		ServerModel: snapshot.ServerModel,
 		SupportCode: snapshot.SupportCode,
 		Article:     snapshot.Article,
 		PricelistID: snapshot.PricelistID,
-		OnlyInStock:      snapshot.OnlyInStock,
-		PriceUpdatedAt:   snapshot.PriceUpdatedAt,
-		OriginalUserID:   snapshot.OriginalUserID,
-		OriginalUsername: snapshot.OriginalUsername,
+		WarehousePricelistID:  snapshot.WarehousePricelistID,
+		CompetitorPricelistID: snapshot.CompetitorPricelistID,
+		DisablePriceRefresh:   snapshot.DisablePriceRefresh,
+		OnlyInStock:           snapshot.OnlyInStock,
+		VendorSpec:            snapshot.VendorSpec,
+		Line:                  snapshot.Line,
+		PriceUpdatedAt:        snapshot.PriceUpdatedAt,
+		OriginalUserID:        snapshot.OriginalUserID,
+		OriginalUsername:      snapshot.OriginalUsername,
 	}, nil
 }
 
 type configurationSpecPriceFingerprint struct {
 	Items       []configurationSpecPriceFingerprintItem `json:"items"`
 	ServerCount int                                     `json:"server_count"`
 	TotalPrice  *float64                                `json:"total_price,omitempty"`
 	CustomPrice *float64                                `json:"custom_price,omitempty"`
+	PricelistID           *uint      `json:"pricelist_id,omitempty"`
+	WarehousePricelistID  *uint      `json:"warehouse_pricelist_id,omitempty"`
+	CompetitorPricelistID *uint      `json:"competitor_pricelist_id,omitempty"`
+	DisablePriceRefresh   bool       `json:"disable_price_refresh"`
+	OnlyInStock           bool       `json:"only_in_stock"`
+	VendorSpec            VendorSpec `json:"vendor_spec,omitempty"`
 }
 
 type configurationSpecPriceFingerprintItem struct {
@@ -131,10 +152,16 @@ func BuildConfigurationSpecPriceFingerprint(localCfg *LocalConfiguration) (strin
 	})
 
 	payload := configurationSpecPriceFingerprint{
 		Items:       items,
 		ServerCount: localCfg.ServerCount,
 		TotalPrice:  localCfg.TotalPrice,
 		CustomPrice: localCfg.CustomPrice,
+		PricelistID:           localCfg.PricelistID,
+		WarehousePricelistID:  localCfg.WarehousePricelistID,
+		CompetitorPricelistID: localCfg.CompetitorPricelistID,
+		DisablePriceRefresh:   localCfg.DisablePriceRefresh,
+		OnlyInStock:           localCfg.OnlyInStock,
+		VendorSpec:            localCfg.VendorSpec,
 	}
 
 	raw, err := json.Marshal(payload)
@@ -1,238 +0,0 @@
-package lotmatch
-
-import (
-	"errors"
-	"regexp"
-	"sort"
-	"strings"
-
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"gorm.io/gorm"
-)
-
-var (
-	ErrResolveConflict = errors.New("multiple lot matches")
-	ErrResolveNotFound = errors.New("lot not found")
-)
-
-type LotResolver struct {
-	partnumberToLots map[string][]string
-	exactLots        map[string]string
-	allLots          []string
-}
-
-type MappingMatcher struct {
-	exact    map[string][]string
-	exactLot map[string]string
-	wildcard []wildcardMapping
-}
-
-type wildcardMapping struct {
-	lotName string
-	re      *regexp.Regexp
-}
-
-func NewLotResolverFromDB(db *gorm.DB) (*LotResolver, error) {
-	mappings, lots, err := loadMappingsAndLots(db)
-	if err != nil {
-		return nil, err
-	}
-	return NewLotResolver(mappings, lots), nil
-}
-
-func NewMappingMatcherFromDB(db *gorm.DB) (*MappingMatcher, error) {
-	mappings, lots, err := loadMappingsAndLots(db)
-	if err != nil {
-		return nil, err
-	}
-	return NewMappingMatcher(mappings, lots), nil
-}
-
-func NewLotResolver(mappings []models.LotPartnumber, lots []models.Lot) *LotResolver {
-	partnumberToLots := make(map[string][]string, len(mappings))
-	for _, m := range mappings {
-		pn := NormalizeKey(m.Partnumber)
-		lot := strings.TrimSpace(m.LotName)
-		if pn == "" || lot == "" {
-			continue
-		}
-		partnumberToLots[pn] = append(partnumberToLots[pn], lot)
-	}
-	for key := range partnumberToLots {
-		partnumberToLots[key] = uniqueCaseInsensitive(partnumberToLots[key])
-	}
-
-	exactLots := make(map[string]string, len(lots))
-	allLots := make([]string, 0, len(lots))
-	for _, l := range lots {
-		name := strings.TrimSpace(l.LotName)
-		if name == "" {
-			continue
-		}
-		exactLots[NormalizeKey(name)] = name
-		allLots = append(allLots, name)
-	}
-	sort.Slice(allLots, func(i, j int) bool {
-		li := len([]rune(allLots[i]))
-		lj := len([]rune(allLots[j]))
-		if li == lj {
-			return allLots[i] < allLots[j]
-		}
-		return li > lj
-	})
-
-	return &LotResolver{
-		partnumberToLots: partnumberToLots,
-		exactLots:        exactLots,
-		allLots:          allLots,
-	}
-}
-
-func NewMappingMatcher(mappings []models.LotPartnumber, lots []models.Lot) *MappingMatcher {
-	exact := make(map[string][]string, len(mappings))
-	wildcards := make([]wildcardMapping, 0, len(mappings))
-	for _, m := range mappings {
-		pn := NormalizeKey(m.Partnumber)
-		lot := strings.TrimSpace(m.LotName)
-		if pn == "" || lot == "" {
-			continue
-		}
-		if strings.Contains(pn, "*") {
-			pattern := "^" + regexp.QuoteMeta(pn) + "$"
-			pattern = strings.ReplaceAll(pattern, "\\*", ".*")
-			re, err := regexp.Compile(pattern)
-			if err != nil {
-				continue
-			}
-			wildcards = append(wildcards, wildcardMapping{lotName: lot, re: re})
-			continue
-		}
-		exact[pn] = append(exact[pn], lot)
-	}
-	for key := range exact {
-		exact[key] = uniqueCaseInsensitive(exact[key])
-	}
-
-	exactLot := make(map[string]string, len(lots))
-	for _, l := range lots {
-		name := strings.TrimSpace(l.LotName)
-		if name == "" {
-			continue
-		}
-		exactLot[NormalizeKey(name)] = name
-	}
-
-	return &MappingMatcher{
-		exact:    exact,
-		exactLot: exactLot,
-		wildcard: wildcards,
-	}
-}
-
-func (r *LotResolver) Resolve(partnumber string) (string, string, error) {
-	key := NormalizeKey(partnumber)
-	if key == "" {
-		return "", "", ErrResolveNotFound
-	}
-
-	if mapped := r.partnumberToLots[key]; len(mapped) > 0 {
-		if len(mapped) == 1 {
-			return mapped[0], "mapping_table", nil
-		}
-		return "", "", ErrResolveConflict
-	}
-	if exact, ok := r.exactLots[key]; ok {
-		return exact, "article_exact", nil
-	}
-
-	best := ""
-	bestLen := -1
-	tie := false
-	for _, lot := range r.allLots {
-		lotKey := NormalizeKey(lot)
-		if lotKey == "" {
-			continue
-		}
-		if strings.HasPrefix(key, lotKey) {
-			l := len([]rune(lotKey))
-			if l > bestLen {
-				best = lot
-				bestLen = l
-				tie = false
-			} else if l == bestLen && !strings.EqualFold(best, lot) {
-				tie = true
-			}
-		}
-	}
-	if best == "" {
-		return "", "", ErrResolveNotFound
-	}
-	if tie {
-		return "", "", ErrResolveConflict
-	}
-	return best, "prefix", nil
-}
-
-func (m *MappingMatcher) MatchLots(partnumber string) []string {
-	if m == nil {
-		return nil
-	}
-	key := NormalizeKey(partnumber)
-	if key == "" {
-		return nil
-	}
-
-	lots := make([]string, 0, 2)
-	if exact := m.exact[key]; len(exact) > 0 {
-		lots = append(lots, exact...)
-	}
-	for _, wc := range m.wildcard {
-		if wc.re == nil || !wc.re.MatchString(key) {
-			continue
-		}
-		lots = append(lots, wc.lotName)
-	}
-	if lot, ok := m.exactLot[key]; ok && strings.TrimSpace(lot) != "" {
-		lots = append(lots, lot)
-	}
-	return uniqueCaseInsensitive(lots)
-}
-
-func NormalizeKey(v string) string {
-	s := strings.ToLower(strings.TrimSpace(v))
-	replacer := strings.NewReplacer(" ", "", "-", "", "_", "", ".", "", "/", "", "\\", "", "\"", "", "'", "", "(", "", ")", "")
-	return replacer.Replace(s)
-}
-
-func loadMappingsAndLots(db *gorm.DB) ([]models.LotPartnumber, []models.Lot, error) {
-	var mappings []models.LotPartnumber
-	if err := db.Find(&mappings).Error; err != nil {
-		return nil, nil, err
-	}
-	var lots []models.Lot
-	if err := db.Select("lot_name").Find(&lots).Error; err != nil {
-		return nil, nil, err
-	}
-	return mappings, lots, nil
-}
-
-func uniqueCaseInsensitive(values []string) []string {
-	seen := make(map[string]struct{}, len(values))
-	out := make([]string, 0, len(values))
-	for _, v := range values {
-		trimmed := strings.TrimSpace(v)
-		if trimmed == "" {
-			continue
-		}
-		key := strings.ToLower(trimmed)
-		if _, ok := seen[key]; ok {
-			continue
-		}
-		seen[key] = struct{}{}
-		out = append(out, trimmed)
-	}
-	sort.Slice(out, func(i, j int) bool {
-		return strings.ToLower(out[i]) < strings.ToLower(out[j])
-	})
-	return out
-}
@@ -1,62 +0,0 @@
-package lotmatch
-
-import (
-	"testing"
-
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-)
-
-func TestLotResolverPrecedence(t *testing.T) {
-	resolver := NewLotResolver(
-		[]models.LotPartnumber{
-			{Partnumber: "PN-1", LotName: "LOT_A"},
-		},
-		[]models.Lot{
-			{LotName: "CPU_X_LONG"},
-			{LotName: "CPU_X"},
-		},
-	)
-
-	lot, by, err := resolver.Resolve("PN-1")
-	if err != nil || lot != "LOT_A" || by != "mapping_table" {
-		t.Fatalf("expected mapping_table LOT_A, got lot=%s by=%s err=%v", lot, by, err)
-	}
-
-	lot, by, err = resolver.Resolve("CPU_X")
-	if err != nil || lot != "CPU_X" || by != "article_exact" {
-		t.Fatalf("expected article_exact CPU_X, got lot=%s by=%s err=%v", lot, by, err)
-	}
-
-	lot, by, err = resolver.Resolve("CPU_X_LONG_001")
-	if err != nil || lot != "CPU_X_LONG" || by != "prefix" {
-		t.Fatalf("expected prefix CPU_X_LONG, got lot=%s by=%s err=%v", lot, by, err)
-	}
-}
-
-func TestMappingMatcherWildcardAndExactLot(t *testing.T) {
-	matcher := NewMappingMatcher(
-		[]models.LotPartnumber{
-			{Partnumber: "R750*", LotName: "SERVER_R750"},
-			{Partnumber: "HDD-01", LotName: "HDD_01"},
-		},
-		[]models.Lot{
-			{LotName: "MEM_DDR5_16G_4800"},
-		},
-	)
-
-	check := func(partnumber string, want string) {
-		t.Helper()
-		got := matcher.MatchLots(partnumber)
-		if len(got) != 1 || got[0] != want {
-			t.Fatalf("partnumber %s: expected [%s], got %#v", partnumber, want, got)
-		}
-	}
-
-	check("R750XD", "SERVER_R750")
-	check("HDD-01", "HDD_01")
-	check("MEM_DDR5_16G_4800", "MEM_DDR5_16G_4800")
-
-	if got := matcher.MatchLots("UNKNOWN"); len(got) != 0 {
-		t.Fatalf("expected no matches for UNKNOWN, got %#v", got)
-	}
-}
@@ -1,110 +0,0 @@
-package middleware
-
-import (
-	"net/http"
-	"strings"
-
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"git.mchus.pro/mchus/quoteforge/internal/services"
-	"github.com/gin-gonic/gin"
-)
-
-const (
-	AuthUserKey   = "auth_user"
-	AuthClaimsKey = "auth_claims"
-)
-
-func Auth(authService *services.AuthService) gin.HandlerFunc {
-	return func(c *gin.Context) {
-		authHeader := c.GetHeader("Authorization")
-		if authHeader == "" {
-			c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
-				"error": "authorization header required",
-			})
-			return
-		}
-
-		parts := strings.SplitN(authHeader, " ", 2)
-		if len(parts) != 2 || parts[0] != "Bearer" {
-			c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
-				"error": "invalid authorization header format",
-			})
-			return
-		}
-
-		claims, err := authService.ValidateToken(parts[1])
-		if err != nil {
-			c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
-				"error": err.Error(),
-			})
-			return
-		}
-
-		c.Set(AuthClaimsKey, claims)
-		c.Next()
-	}
-}
-
-func RequireRole(roles ...models.UserRole) gin.HandlerFunc {
-	return func(c *gin.Context) {
-		claims, exists := c.Get(AuthClaimsKey)
-		if !exists {
-			c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
-				"error": "authentication required",
-			})
-			return
-		}
-
-		authClaims := claims.(*services.Claims)
-
-		for _, role := range roles {
-			if authClaims.Role == role {
-				c.Next()
-				return
-			}
-		}
-
-		c.AbortWithStatusJSON(http.StatusForbidden, gin.H{
-			"error": "insufficient permissions",
-		})
-	}
-}
-
-func RequireEditor() gin.HandlerFunc {
-	return RequireRole(models.RoleEditor, models.RolePricingAdmin, models.RoleAdmin)
-}
-
-func RequirePricingAdmin() gin.HandlerFunc {
-	return RequireRole(models.RolePricingAdmin, models.RoleAdmin)
-}
-
-func RequireAdmin() gin.HandlerFunc {
-	return RequireRole(models.RoleAdmin)
-}
-
-// GetClaims extracts auth claims from context
-func GetClaims(c *gin.Context) *services.Claims {
-	claims, exists := c.Get(AuthClaimsKey)
-	if !exists {
-		return nil
-	}
-	return claims.(*services.Claims)
-}
-
-// GetUserID extracts user ID from context
-func GetUserID(c *gin.Context) uint {
-	claims := GetClaims(c)
-	if claims == nil {
-		return 0
-	}
-	return claims.UserID
-}
-
-// GetUsername extracts username from context
-func GetUsername(c *gin.Context) string {
-	claims := GetClaims(c)
-	if claims == nil {
-		return ""
-	}
-	return claims.Username
-}
@@ -39,6 +39,57 @@ func (c ConfigItems) Total() float64 {
 	return total
 }
 
+type VendorSpecLotAllocation struct {
+	LotName  string `json:"lot_name"`
+	Quantity int    `json:"quantity"`
+}
+
+type VendorSpecLotMapping struct {
+	LotName       string `json:"lot_name"`
+	QuantityPerPN int    `json:"quantity_per_pn"`
+}
+
+type VendorSpecItem struct {
+	SortOrder           int                       `json:"sort_order"`
+	VendorPartnumber    string                    `json:"vendor_partnumber"`
+	Quantity            int                       `json:"quantity"`
+	Description         string                    `json:"description,omitempty"`
+	UnitPrice           *float64                  `json:"unit_price,omitempty"`
+	TotalPrice          *float64                  `json:"total_price,omitempty"`
+	ResolvedLotName     string                    `json:"resolved_lot_name,omitempty"`
+	ResolutionSource    string                    `json:"resolution_source,omitempty"`
+	ManualLotSuggestion string                    `json:"manual_lot_suggestion,omitempty"`
+	LotQtyPerPN         int                       `json:"lot_qty_per_pn,omitempty"`
+	LotAllocations      []VendorSpecLotAllocation `json:"lot_allocations,omitempty"`
+	LotMappings         []VendorSpecLotMapping    `json:"lot_mappings,omitempty"`
+}
+
+type VendorSpec []VendorSpecItem
+
+func (v VendorSpec) Value() (driver.Value, error) {
+	if v == nil {
+		return nil, nil
+	}
+	return json.Marshal(v)
+}
+
+func (v *VendorSpec) Scan(value interface{}) error {
+	if value == nil {
+		*v = nil
+		return nil
+	}
+	var bytes []byte
+	switch val := value.(type) {
+	case []byte:
+		bytes = val
+	case string:
+		bytes = []byte(val)
+	default:
+		return errors.New("type assertion failed for VendorSpec")
+	}
+	return json.Unmarshal(bytes, v)
+}
+
 type Configuration struct {
 	ID   uint   `gorm:"primaryKey;autoIncrement" json:"id"`
 	UUID string `gorm:"size:36;uniqueIndex;not null" json:"uuid"`
@@ -59,13 +110,14 @@ type Configuration struct {
 	PricelistID           *uint `gorm:"index" json:"pricelist_id,omitempty"`
 	WarehousePricelistID  *uint `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
 	CompetitorPricelistID *uint `gorm:"index" json:"competitor_pricelist_id,omitempty"`
+	VendorSpec            VendorSpec `gorm:"type:json" json:"vendor_spec,omitempty"`
+	ConfigType            string     `gorm:"size:20;default:server" json:"config_type"` // "server" | "storage"
 	DisablePriceRefresh   bool `gorm:"default:false" json:"disable_price_refresh"`
 	OnlyInStock           bool `gorm:"default:false" json:"only_in_stock"`
+	Line                  int `gorm:"column:line_no;index" json:"line"`
 	PriceUpdatedAt        *time.Time `gorm:"type:timestamp" json:"price_updated_at,omitempty"`
 	CreatedAt             time.Time  `gorm:"autoCreateTime" json:"created_at"`
 	CurrentVersionNo      int        `gorm:"-" json:"current_version_no,omitempty"`
-
-	User *User `gorm:"foreignKey:UserID" json:"user,omitempty"`
 }
 
 func (Configuration) TableName() string {
@@ -80,8 +132,6 @@ type PriceOverride struct {
	ValidUntil *time.Time `gorm:"type:date" json:"valid_until"`
	Reason     string     `gorm:"type:text" json:"reason"`
	CreatedBy  uint       `gorm:"not null" json:"created_by"`
-
-	Creator *User `gorm:"foreignKey:CreatedBy" json:"creator,omitempty"`
}

func (PriceOverride) TableName() string {
@@ -55,17 +55,6 @@ func (StockLog) TableName() string {
	return "stock_log"
}

-// LotPartnumber maps external part numbers to internal lots.
-type LotPartnumber struct {
-	Partnumber  string  `gorm:"column:partnumber;size:255;primaryKey" json:"partnumber"`
-	LotName     string  `gorm:"column:lot_name;size:255;primaryKey" json:"lot_name"`
-	Description *string `gorm:"column:description;size:10000" json:"description,omitempty"`
-}
-
-func (LotPartnumber) TableName() string {
-	return "lot_partnumbers"
-}
-
// StockIgnoreRule contains import ignore pattern rules.
type StockIgnoreRule struct {
	ID uint `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
@@ -10,7 +10,6 @@ import (
// AllModels returns all models for auto-migration
func AllModels() []interface{} {
	return []interface{}{
-		&User{},
		&Category{},
		&LotMetadata{},
		&Project{},
@@ -32,7 +31,9 @@ func Migrate(db *gorm.DB) error {
		errStr := err.Error()
		if strings.Contains(errStr, "Can't DROP") ||
			strings.Contains(errStr, "Duplicate key name") ||
-			strings.Contains(errStr, "check that it exists") {
+			strings.Contains(errStr, "check that it exists") ||
+			strings.Contains(errStr, "Cannot change column") ||
+			strings.Contains(errStr, "used in a foreign key constraint") {
			slog.Warn("migration warning (skipped)", "model", model, "error", errStr)
			continue
		}
@@ -52,54 +53,3 @@ func SeedCategories(db *gorm.DB) error {
	}
	return nil
}
-
-// SeedAdminUser creates default admin user if not exists
-// Default credentials: admin / admin123
-func SeedAdminUser(db *gorm.DB, passwordHash string) error {
-	var count int64
-	db.Model(&User{}).Where("username = ?", "admin").Count(&count)
-	if count > 0 {
-		return nil
-	}
-
-	admin := &User{
-		Username:     "admin",
-		Email:        "admin@example.com",
-		PasswordHash: passwordHash,
-		Role:         RoleAdmin,
-		IsActive:     true,
-	}
-	return db.Create(admin).Error
-}
-
-// EnsureDBUser creates or returns the user corresponding to the database connection username.
-// This is used when RBAC is disabled - configurations are owned by the DB user.
-// Returns the user ID that should be used for all operations.
-func EnsureDBUser(db *gorm.DB, dbUsername string) (uint, error) {
-	if dbUsername == "" {
-		return 0, nil
-	}
-
-	var user User
-	err := db.Where("username = ?", dbUsername).First(&user).Error
-	if err == nil {
-		return user.ID, nil
-	}
-
-	// User doesn't exist, create it
-	user = User{
-		Username:     dbUsername,
-		Email:        dbUsername + "@db.local",
-		PasswordHash: "-", // No password - this is a DB user, not an app user
-		Role:         RoleAdmin,
-		IsActive:     true,
-	}
-
-	if err := db.Create(&user).Error; err != nil {
-		slog.Error("failed to create DB user", "username", dbUsername, "error", err)
-		return 0, err
-	}
-
-	slog.Info("created DB user for configurations", "username", dbUsername, "user_id", user.ID)
-	return user.ID, nil
-}
@@ -1,39 +0,0 @@
-package models
-
-import "time"
-
-type UserRole string
-
-const (
-	RoleViewer       UserRole = "viewer"
-	RoleEditor       UserRole = "editor"
-	RolePricingAdmin UserRole = "pricing_admin"
-	RoleAdmin        UserRole = "admin"
-)
-
-type User struct {
-	ID           uint      `gorm:"primaryKey;autoIncrement" json:"id"`
-	Username     string    `gorm:"size:100;uniqueIndex;not null" json:"username"`
-	Email        string    `gorm:"size:255;uniqueIndex;not null" json:"email"`
-	PasswordHash string    `gorm:"size:255;not null" json:"-"`
-	Role         UserRole  `gorm:"type:enum('viewer','editor','pricing_admin','admin');default:'viewer'" json:"role"`
-	IsActive     bool      `gorm:"default:true" json:"is_active"`
-	CreatedAt    time.Time `gorm:"autoCreateTime" json:"created_at"`
-	UpdatedAt    time.Time `gorm:"autoUpdateTime" json:"updated_at"`
-}
-
-func (User) TableName() string {
-	return "qt_users"
-}
-
-func (u *User) CanEdit() bool {
-	return u.Role == RoleEditor || u.Role == RolePricingAdmin || u.Role == RoleAdmin
-}
-
-func (u *User) CanManagePricing() bool {
-	return u.Role == RolePricingAdmin || u.Role == RoleAdmin
-}
-
-func (u *User) CanManageUsers() bool {
-	return u.Role == RoleAdmin
-}
@@ -1,6 +1,8 @@
package repository

import (
+	"strings"
+
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/gorm"
)
@@ -14,7 +16,13 @@ func NewConfigurationRepository(db *gorm.DB) *ConfigurationRepository {
}

func (r *ConfigurationRepository) Create(config *models.Configuration) error {
-	return r.db.Create(config).Error
+	if err := r.db.Create(config).Error; err != nil {
+		if isUnknownLineNoColumnError(err) {
+			return r.db.Omit("line_no").Create(config).Error
+		}
+		return err
+	}
+	return nil
}

func (r *ConfigurationRepository) GetByID(id uint) (*models.Configuration, error) {
@@ -36,7 +44,21 @@ func (r *ConfigurationRepository) GetByUUID(uuid string) (*models.Configuration,
}

func (r *ConfigurationRepository) Update(config *models.Configuration) error {
-	return r.db.Save(config).Error
+	if err := r.db.Save(config).Error; err != nil {
+		if isUnknownLineNoColumnError(err) {
+			return r.db.Omit("line_no").Save(config).Error
+		}
+		return err
+	}
+	return nil
+}
+
+func isUnknownLineNoColumnError(err error) bool {
+	if err == nil {
+		return false
+	}
+	msg := strings.ToLower(err.Error())
+	return strings.Contains(msg, "unknown column 'line_no'") || strings.Contains(msg, "no column named line_no")
}

func (r *ConfigurationRepository) Delete(id uint) error {
internal/repository/partnumber_book.go (new file, 174 lines)
@@ -0,0 +1,174 @@
+package repository
+
+import (
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"gorm.io/gorm"
+	"gorm.io/gorm/clause"
+)
+
+// PartnumberBookRepository provides read-only access to local partnumber book snapshots.
+type PartnumberBookRepository struct {
+	db *gorm.DB
+}
+
+func NewPartnumberBookRepository(db *gorm.DB) *PartnumberBookRepository {
+	return &PartnumberBookRepository{db: db}
+}
+
+// GetActiveBook returns the most recently active local partnumber book.
+func (r *PartnumberBookRepository) GetActiveBook() (*localdb.LocalPartnumberBook, error) {
+	var book localdb.LocalPartnumberBook
+	err := r.db.Where("is_active = 1").Order("created_at DESC, id DESC").First(&book).Error
+	if err != nil {
+		return nil, err
+	}
+	return &book, nil
+}
+
+// GetBookItems returns all items for the given local book ID.
+func (r *PartnumberBookRepository) GetBookItems(bookID uint) ([]localdb.LocalPartnumberBookItem, error) {
+	book, err := r.getBook(bookID)
+	if err != nil {
+		return nil, err
+	}
+	items, _, err := r.listCatalogItems(book.PartnumbersJSON, "", 0, 0)
+	return items, err
+}
+
+// GetBookItemsPage returns items for the given local book ID with optional search and pagination.
+func (r *PartnumberBookRepository) GetBookItemsPage(bookID uint, search string, page, perPage int) ([]localdb.LocalPartnumberBookItem, int64, error) {
+	if page < 1 {
+		page = 1
+	}
+	if perPage < 1 {
+		perPage = 100
+	}
+
+	book, err := r.getBook(bookID)
+	if err != nil {
+		return nil, 0, err
+	}
+	return r.listCatalogItems(book.PartnumbersJSON, search, page, perPage)
+}
+
+// FindLotByPartnumber looks up a partnumber in the active book and returns the matching items.
+func (r *PartnumberBookRepository) FindLotByPartnumber(bookID uint, partnumber string) ([]localdb.LocalPartnumberBookItem, error) {
+	book, err := r.getBook(bookID)
+	if err != nil {
+		return nil, err
+	}
+	found := false
+	for _, pn := range book.PartnumbersJSON {
+		if pn == partnumber {
+			found = true
+			break
+		}
+	}
+	if !found {
+		return nil, nil
+	}
+	var items []localdb.LocalPartnumberBookItem
+	err = r.db.Where("partnumber = ?", partnumber).Find(&items).Error
+	return items, err
+}
+
+// ListBooks returns all local partnumber books ordered newest first.
+func (r *PartnumberBookRepository) ListBooks() ([]localdb.LocalPartnumberBook, error) {
+	var books []localdb.LocalPartnumberBook
+	err := r.db.Order("created_at DESC, id DESC").Find(&books).Error
+	return books, err
+}
+
+// SaveBook saves a new partnumber book snapshot.
+func (r *PartnumberBookRepository) SaveBook(book *localdb.LocalPartnumberBook) error {
+	return r.db.Save(book).Error
+}
+
+// SaveBookItems upserts canonical PN catalog rows.
+func (r *PartnumberBookRepository) SaveBookItems(items []localdb.LocalPartnumberBookItem) error {
+	if len(items) == 0 {
+		return nil
+	}
+	return r.db.Clauses(clause.OnConflict{
+		Columns: []clause.Column{{Name: "partnumber"}},
+		DoUpdates: clause.AssignmentColumns([]string{
+			"lots_json",
+			"description",
+		}),
+	}).CreateInBatches(items, 500).Error
+}
+
+// CountBookItems returns the number of items for a given local book ID.
+func (r *PartnumberBookRepository) CountBookItems(bookID uint) int64 {
+	book, err := r.getBook(bookID)
+	if err != nil {
+		return 0
+	}
+	return int64(len(book.PartnumbersJSON))
+}
+
+func (r *PartnumberBookRepository) CountDistinctLots(bookID uint) int64 {
+	items, err := r.GetBookItems(bookID)
+	if err != nil {
+		return 0
+	}
+	seen := make(map[string]struct{})
+	for _, item := range items {
+		for _, lot := range item.LotsJSON {
+			if lot.LotName == "" {
+				continue
+			}
+			seen[lot.LotName] = struct{}{}
+		}
+	}
+	return int64(len(seen))
+}
+
+func (r *PartnumberBookRepository) HasAllBookItems(bookID uint) bool {
+	book, err := r.getBook(bookID)
+	if err != nil {
+		return false
+	}
+	if len(book.PartnumbersJSON) == 0 {
+		return true
+	}
+	var count int64
+	if err := r.db.Model(&localdb.LocalPartnumberBookItem{}).
+		Where("partnumber IN ?", []string(book.PartnumbersJSON)).
+		Count(&count).Error; err != nil {
+		return false
+	}
+	return count == int64(len(book.PartnumbersJSON))
+}
+
+func (r *PartnumberBookRepository) getBook(bookID uint) (*localdb.LocalPartnumberBook, error) {
+	var book localdb.LocalPartnumberBook
+	if err := r.db.First(&book, bookID).Error; err != nil {
+		return nil, err
+	}
+	return &book, nil
+}
+
+func (r *PartnumberBookRepository) listCatalogItems(partnumbers localdb.LocalStringList, search string, page, perPage int) ([]localdb.LocalPartnumberBookItem, int64, error) {
+	if len(partnumbers) == 0 {
+		return []localdb.LocalPartnumberBookItem{}, 0, nil
+	}
+
+	query := r.db.Model(&localdb.LocalPartnumberBookItem{}).Where("partnumber IN ?", []string(partnumbers))
+	if search != "" {
+		trimmedSearch := "%" + search + "%"
+		query = query.Where("partnumber LIKE ? OR lots_json LIKE ? OR description LIKE ?", trimmedSearch, trimmedSearch, trimmedSearch)
+	}
+
+	var total int64
+	if err := query.Count(&total).Error; err != nil {
+		return nil, 0, err
+	}
+
+	var items []localdb.LocalPartnumberBookItem
+	if page > 0 && perPage > 0 {
+		query = query.Offset((page - 1) * perPage).Limit(perPage)
+	}
+	err := query.Order("partnumber ASC, id ASC").Find(&items).Error
+	return items, total, err
+}
@@ -3,12 +3,10 @@ package repository
import (
	"errors"
	"fmt"
-	"sort"
	"strconv"
	"strings"
	"time"

-	"git.mchus.pro/mchus/quoteforge/internal/lotmatch"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/gorm"
)
@@ -245,91 +243,9 @@ func (r *PricelistRepository) GetItems(pricelistID uint, offset, limit int, sear
		items[i].Category = strings.TrimSpace(items[i].LotCategory)
	}
-
-	if err := r.enrichItemsWithStock(items); err != nil {
-		return nil, 0, fmt.Errorf("enriching pricelist items with stock: %w", err)
-	}
-
	return items, total, nil
}
-
-func (r *PricelistRepository) enrichItemsWithStock(items []models.PricelistItem) error {
-	if len(items) == 0 {
-		return nil
-	}
-
-	resolver, err := lotmatch.NewLotResolverFromDB(r.db)
-	if err != nil {
-		return err
-	}
-
-	type stockRow struct {
-		Partnumber string   `gorm:"column:partnumber"`
-		Qty        *float64 `gorm:"column:qty"`
-	}
-	rows := make([]stockRow, 0)
-	if err := r.db.Raw(`
-		SELECT s.partnumber, s.qty
-		FROM stock_log s
-		INNER JOIN (
-			SELECT partnumber, MAX(date) AS max_date
-			FROM stock_log
-			GROUP BY partnumber
-		) latest ON latest.partnumber = s.partnumber AND latest.max_date = s.date
-		WHERE s.qty IS NOT NULL
-	`).Scan(&rows).Error; err != nil {
-		return err
-	}
-
-	lotTotals := make(map[string]float64, len(items))
-	lotPartnumbers := make(map[string][]string, len(items))
-	seenPartnumbers := make(map[string]map[string]struct{}, len(items))
-
-	for i := range rows {
-		row := rows[i]
-		if strings.TrimSpace(row.Partnumber) == "" {
-			continue
-		}
-		lotName, _, resolveErr := resolver.Resolve(row.Partnumber)
-		if resolveErr != nil || strings.TrimSpace(lotName) == "" {
-			continue
-		}
-
-		if row.Qty != nil {
-			lotTotals[lotName] += *row.Qty
-		}
-
-		pn := strings.TrimSpace(row.Partnumber)
-		if pn == "" {
-			continue
-		}
-		if _, ok := seenPartnumbers[lotName]; !ok {
-			seenPartnumbers[lotName] = make(map[string]struct{}, 4)
-		}
-		key := strings.ToLower(pn)
-		if _, exists := seenPartnumbers[lotName][key]; exists {
-			continue
-		}
-		seenPartnumbers[lotName][key] = struct{}{}
-		lotPartnumbers[lotName] = append(lotPartnumbers[lotName], pn)
-	}
-
-	for i := range items {
-		lotName := items[i].LotName
-		if qty, ok := lotTotals[lotName]; ok {
-			qtyCopy := qty
-			items[i].AvailableQty = &qtyCopy
-		}
-		if partnumbers := lotPartnumbers[lotName]; len(partnumbers) > 0 {
-			sort.Slice(partnumbers, func(a, b int) bool {
-				return strings.ToLower(partnumbers[a]) < strings.ToLower(partnumbers[b])
-			})
-			items[i].Partnumbers = partnumbers
-		}
-	}
-
-	return nil
-}

// GetLotNames returns distinct lot names from pricelist items.
func (r *PricelistRepository) GetLotNames(pricelistID uint) ([]string, error) {
	var lotNames []string
@@ -75,57 +75,6 @@ func TestGenerateVersion_IsolatedBySource(t *testing.T) {
	}
}
-
-func TestGetItems_WarehouseAvailableQtyUsesPrefixResolver(t *testing.T) {
-	repo := newTestPricelistRepository(t)
-	db := repo.db
-
-	warehouse := models.Pricelist{
-		Source:    string(models.PricelistSourceWarehouse),
-		Version:   "S-2026-02-07-001",
-		CreatedBy: "test",
-		IsActive:  true,
-	}
-	if err := db.Create(&warehouse).Error; err != nil {
-		t.Fatalf("create pricelist: %v", err)
-	}
-	if err := db.Create(&models.PricelistItem{
-		PricelistID: warehouse.ID,
-		LotName:     "SSD_NVME_03.2T",
-		Price:       100,
-	}).Error; err != nil {
-		t.Fatalf("create pricelist item: %v", err)
-	}
-	if err := db.Create(&models.Lot{LotName: "SSD_NVME_03.2T"}).Error; err != nil {
-		t.Fatalf("create lot: %v", err)
-	}
-	qty := 5.0
-	if err := db.Create(&models.StockLog{
-		Partnumber: "SSD_NVME_03.2T_GEN3_P4610",
-		Date:       time.Now(),
-		Price:      200,
-		Qty:        &qty,
-	}).Error; err != nil {
-		t.Fatalf("create stock log: %v", err)
-	}
-
-	items, total, err := repo.GetItems(warehouse.ID, 0, 20, "")
-	if err != nil {
-		t.Fatalf("GetItems: %v", err)
-	}
-	if total != 1 {
-		t.Fatalf("expected total=1, got %d", total)
-	}
-	if len(items) != 1 {
-		t.Fatalf("expected 1 item, got %d", len(items))
-	}
-	if items[0].AvailableQty == nil {
-		t.Fatalf("expected available qty to be set")
-	}
-	if *items[0].AvailableQty != 5 {
-		t.Fatalf("expected available qty=5, got %v", *items[0].AvailableQty)
-	}
-}
-
func TestGetLatestActiveBySource_SkipsPricelistsWithoutItems(t *testing.T) {
	repo := newTestPricelistRepository(t)
	db := repo.db
@@ -228,7 +177,7 @@ func newTestPricelistRepository(t *testing.T) *PricelistRepository {
	if err != nil {
		t.Fatalf("open sqlite: %v", err)
	}
-	if err := db.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}, &models.Lot{}, &models.LotPartnumber{}, &models.StockLog{}); err != nil {
+	if err := db.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}, &models.Lot{}, &models.StockLog{}); err != nil {
		t.Fatalf("migrate: %v", err)
	}
	return NewPricelistRepository(db)
@@ -1,62 +0,0 @@
-package repository
-
-import (
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"gorm.io/gorm"
-)
-
-type UserRepository struct {
-	db *gorm.DB
-}
-
-func NewUserRepository(db *gorm.DB) *UserRepository {
-	return &UserRepository{db: db}
-}
-
-func (r *UserRepository) Create(user *models.User) error {
-	return r.db.Create(user).Error
-}
-
-func (r *UserRepository) GetByID(id uint) (*models.User, error) {
-	var user models.User
-	err := r.db.First(&user, id).Error
-	if err != nil {
-		return nil, err
-	}
-	return &user, nil
-}
-
-func (r *UserRepository) GetByUsername(username string) (*models.User, error) {
-	var user models.User
-	err := r.db.Where("username = ?", username).First(&user).Error
-	if err != nil {
-		return nil, err
-	}
-	return &user, nil
-}
-
-func (r *UserRepository) GetByEmail(email string) (*models.User, error) {
-	var user models.User
-	err := r.db.Where("email = ?", email).First(&user).Error
-	if err != nil {
-		return nil, err
-	}
-	return &user, nil
-}
-
-func (r *UserRepository) Update(user *models.User) error {
-	return r.db.Save(user).Error
-}
-
-func (r *UserRepository) Delete(id uint) error {
-	return r.db.Delete(&models.User{}, id).Error
-}
-
-func (r *UserRepository) List(offset, limit int) ([]models.User, int64, error) {
-	var users []models.User
-	var total int64
-
-	r.db.Model(&models.User{}).Count(&total)
-	err := r.db.Offset(offset).Limit(limit).Find(&users).Error
-	return users, total, err
-}
@@ -1,180 +0,0 @@
-package services
-
-import (
-	"errors"
-	"time"
-
-	"github.com/golang-jwt/jwt/v5"
-	"git.mchus.pro/mchus/quoteforge/internal/config"
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"git.mchus.pro/mchus/quoteforge/internal/repository"
-	"golang.org/x/crypto/bcrypt"
-)
-
-var (
-	ErrInvalidCredentials = errors.New("invalid username or password")
-	ErrUserNotFound       = errors.New("user not found")
-	ErrUserInactive       = errors.New("user account is inactive")
-	ErrInvalidToken       = errors.New("invalid token")
-	ErrTokenExpired       = errors.New("token expired")
-)
-
-type AuthService struct {
-	userRepo *repository.UserRepository
-	config   config.AuthConfig
-}
-
-func NewAuthService(userRepo *repository.UserRepository, cfg config.AuthConfig) *AuthService {
-	return &AuthService{
-		userRepo: userRepo,
-		config:   cfg,
-	}
-}
-
-type TokenPair struct {
-	AccessToken  string `json:"access_token"`
-	RefreshToken string `json:"refresh_token"`
-	ExpiresAt    int64  `json:"expires_at"`
-}
-
-type Claims struct {
-	UserID   uint            `json:"user_id"`
-	Username string          `json:"username"`
-	Role     models.UserRole `json:"role"`
-	jwt.RegisteredClaims
-}
-
-func (s *AuthService) Login(username, password string) (*TokenPair, *models.User, error) {
-	user, err := s.userRepo.GetByUsername(username)
-	if err != nil {
-		return nil, nil, ErrInvalidCredentials
-	}
-
-	if !user.IsActive {
-		return nil, nil, ErrUserInactive
-	}
-
-	if err := bcrypt.CompareHashAndPassword([]byte(user.PasswordHash), []byte(password)); err != nil {
-		return nil, nil, ErrInvalidCredentials
-	}
-
-	tokens, err := s.generateTokenPair(user)
-	if err != nil {
-		return nil, nil, err
-	}
-
-	return tokens, user, nil
-}
-
-func (s *AuthService) RefreshTokens(refreshToken string) (*TokenPair, error) {
-	claims, err := s.ValidateToken(refreshToken)
-	if err != nil {
-		return nil, err
-	}
-
-	user, err := s.userRepo.GetByID(claims.UserID)
-	if err != nil {
-		return nil, ErrUserNotFound
-	}
-
-	if !user.IsActive {
-		return nil, ErrUserInactive
-	}
-
-	return s.generateTokenPair(user)
-}
-
-func (s *AuthService) ValidateToken(tokenString string) (*Claims, error) {
-	token, err := jwt.ParseWithClaims(tokenString, &Claims{}, func(token *jwt.Token) (interface{}, error) {
-		return []byte(s.config.JWTSecret), nil
-	})
-
-	if err != nil {
-		if errors.Is(err, jwt.ErrTokenExpired) {
-			return nil, ErrTokenExpired
-		}
-		return nil, ErrInvalidToken
-	}
-
-	claims, ok := token.Claims.(*Claims)
-	if !ok || !token.Valid {
-		return nil, ErrInvalidToken
-	}
-
-	return claims, nil
-}
-
-func (s *AuthService) generateTokenPair(user *models.User) (*TokenPair, error) {
-	now := time.Now()
-	accessExpiry := now.Add(s.config.TokenExpiry)
-	refreshExpiry := now.Add(s.config.RefreshExpiry)
-
-	accessClaims := &Claims{
-		UserID:   user.ID,
-		Username: user.Username,
-		Role:     user.Role,
-		RegisteredClaims: jwt.RegisteredClaims{
-			ExpiresAt: jwt.NewNumericDate(accessExpiry),
-			IssuedAt:  jwt.NewNumericDate(now),
-			Subject:   user.Username,
-		},
-	}
-
-	accessToken := jwt.NewWithClaims(jwt.SigningMethodHS256, accessClaims)
-	accessTokenString, err := accessToken.SignedString([]byte(s.config.JWTSecret))
-	if err != nil {
-		return nil, err
-	}
-
-	refreshClaims := &Claims{
-		UserID:   user.ID,
-		Username: user.Username,
-		Role:     user.Role,
-		RegisteredClaims: jwt.RegisteredClaims{
-			ExpiresAt: jwt.NewNumericDate(refreshExpiry),
-			IssuedAt:  jwt.NewNumericDate(now),
-			Subject:   user.Username,
-		},
-	}
-
-	refreshToken := jwt.NewWithClaims(jwt.SigningMethodHS256, refreshClaims)
-	refreshTokenString, err := refreshToken.SignedString([]byte(s.config.JWTSecret))
-	if err != nil {
-		return nil, err
-	}
-
-	return &TokenPair{
-		AccessToken:  accessTokenString,
-		RefreshToken: refreshTokenString,
-		ExpiresAt:    accessExpiry.Unix(),
-	}, nil
-}
-
-func (s *AuthService) HashPassword(password string) (string, error) {
-	hash, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
-	if err != nil {
-		return "", err
-	}
-	return string(hash), nil
-}
-
-func (s *AuthService) CreateUser(username, email, password string, role models.UserRole) (*models.User, error) {
-	hash, err := s.HashPassword(password)
-	if err != nil {
-		return nil, err
-	}
-
-	user := &models.User{
-		Username:     username,
-		Email:        email,
-		PasswordHash: hash,
-		Role:         role,
-		IsActive:     true,
-	}
-
-	if err := s.userRepo.Create(user); err != nil {
-		return nil, err
-	}
-
-	return user, nil
-}
@@ -45,18 +45,22 @@ func NewConfigurationService(
 	}
 }
 
 type CreateConfigRequest struct {
 	Name        string             `json:"name"`
 	Items       models.ConfigItems `json:"items"`
 	ProjectUUID *string            `json:"project_uuid,omitempty"`
 	CustomPrice *float64           `json:"custom_price"`
 	Notes       string             `json:"notes"`
 	IsTemplate  bool               `json:"is_template"`
 	ServerCount int                `json:"server_count"`
 	ServerModel string             `json:"server_model,omitempty"`
 	SupportCode string             `json:"support_code,omitempty"`
 	Article     string             `json:"article,omitempty"`
 	PricelistID *uint              `json:"pricelist_id,omitempty"`
-	OnlyInStock bool               `json:"only_in_stock"`
+	WarehousePricelistID  *uint    `json:"warehouse_pricelist_id,omitempty"`
+	CompetitorPricelistID *uint    `json:"competitor_pricelist_id,omitempty"`
+	ConfigType            string   `json:"config_type,omitempty"` // "server" | "storage"
+	DisablePriceRefresh   bool     `json:"disable_price_refresh"`
+	OnlyInStock           bool     `json:"only_in_stock"`
 }
 
 type ArticlePreviewRequest struct {
@@ -84,21 +88,28 @@ func (s *ConfigurationService) Create(ownerUsername string, req *CreateConfigReq
 	}
 
 	config := &models.Configuration{
 		UUID:          uuid.New().String(),
 		OwnerUsername: ownerUsername,
 		ProjectUUID:   projectUUID,
 		Name:          req.Name,
 		Items:         req.Items,
 		TotalPrice:    &total,
 		CustomPrice:   req.CustomPrice,
 		Notes:         req.Notes,
 		IsTemplate:    req.IsTemplate,
 		ServerCount:   req.ServerCount,
 		ServerModel:   req.ServerModel,
 		SupportCode:   req.SupportCode,
 		Article:       req.Article,
 		PricelistID:   pricelistID,
-		OnlyInStock:   req.OnlyInStock,
+		WarehousePricelistID:  req.WarehousePricelistID,
+		CompetitorPricelistID: req.CompetitorPricelistID,
+		ConfigType:            req.ConfigType,
+		DisablePriceRefresh:   req.DisablePriceRefresh,
+		OnlyInStock:           req.OnlyInStock,
+	}
+	if config.ConfigType == "" {
+		config.ConfigType = "server"
 	}
 
 	if err := s.configRepo.Create(config); err != nil {
@@ -163,6 +174,9 @@ func (s *ConfigurationService) Update(uuid string, ownerUsername string, req *Cr
 	config.SupportCode = req.SupportCode
 	config.Article = req.Article
 	config.PricelistID = pricelistID
+	config.WarehousePricelistID = req.WarehousePricelistID
+	config.CompetitorPricelistID = req.CompetitorPricelistID
+	config.DisablePriceRefresh = req.DisablePriceRefresh
 	config.OnlyInStock = req.OnlyInStock
 
 	if err := s.configRepo.Update(config); err != nil {
@@ -230,18 +244,24 @@ func (s *ConfigurationService) CloneToProject(configUUID string, ownerUsername s
 	}
 
 	clone := &models.Configuration{
 		UUID:          uuid.New().String(),
 		OwnerUsername: ownerUsername,
 		ProjectUUID:   resolvedProjectUUID,
 		Name:          newName,
 		Items:         original.Items,
 		TotalPrice:    &total,
 		CustomPrice:   original.CustomPrice,
 		Notes:         original.Notes,
 		IsTemplate:    false, // Clone is never a template
 		ServerCount:   original.ServerCount,
-		PricelistID:   original.PricelistID,
-		OnlyInStock:   original.OnlyInStock,
+		ServerModel:   original.ServerModel,
+		SupportCode:   original.SupportCode,
+		Article:       original.Article,
+		PricelistID:   original.PricelistID,
+		WarehousePricelistID:  original.WarehousePricelistID,
+		CompetitorPricelistID: original.CompetitorPricelistID,
+		DisablePriceRefresh:   original.DisablePriceRefresh,
+		OnlyInStock:           original.OnlyInStock,
 	}
 
 	if err := s.configRepo.Create(clone); err != nil {
@@ -314,7 +334,13 @@ func (s *ConfigurationService) UpdateNoAuth(uuid string, req *CreateConfigReques
 	config.Notes = req.Notes
 	config.IsTemplate = req.IsTemplate
 	config.ServerCount = req.ServerCount
+	config.ServerModel = req.ServerModel
+	config.SupportCode = req.SupportCode
+	config.Article = req.Article
 	config.PricelistID = pricelistID
+	config.WarehousePricelistID = req.WarehousePricelistID
+	config.CompetitorPricelistID = req.CompetitorPricelistID
+	config.DisablePriceRefresh = req.DisablePriceRefresh
 	config.OnlyInStock = req.OnlyInStock
 
 	if err := s.configRepo.Update(config); err != nil {
@@ -587,13 +613,7 @@ func (s *ConfigurationService) isOwner(config *models.Configuration, ownerUserna
 	if config == nil || ownerUsername == "" {
 		return false
 	}
-	if config.OwnerUsername != "" {
-		return config.OwnerUsername == ownerUsername
-	}
-	if config.User != nil {
-		return config.User.Username == ownerUsername
-	}
-	return false
+	return config.OwnerUsername == ownerUsername
 }
 
 // // Export configuration as JSON
@@ -6,6 +6,7 @@ import (
 	"fmt"
 	"io"
 	"math"
+	"sort"
 	"strings"
 	"time"
 
@@ -42,6 +43,7 @@ type ExportItem struct {
 // ConfigExportBlock represents one configuration (server) in the export.
 type ConfigExportBlock struct {
 	Article     string
+	Line        int
 	ServerCount int
 	UnitPrice   float64 // sum of component prices for one server
 	Items       []ExportItem
@@ -53,6 +55,51 @@ type ProjectExportData struct {
 	CreatedAt time.Time
 }
 
+type ProjectPricingExportOptions struct {
+	IncludeLOT        bool    `json:"include_lot"`
+	IncludeBOM        bool    `json:"include_bom"`
+	IncludeEstimate   bool    `json:"include_estimate"`
+	IncludeStock      bool    `json:"include_stock"`
+	IncludeCompetitor bool    `json:"include_competitor"`
+	Basis             string  `json:"basis"`       // "fob" or "ddp"; empty defaults to "fob"
+	SaleMarkup        float64 `json:"sale_markup"` // DDP multiplier; 0 defaults to 1.3
+}
+
+func (o ProjectPricingExportOptions) saleMarkupFactor() float64 {
+	if o.SaleMarkup > 0 {
+		return o.SaleMarkup
+	}
+	return 1.3
+}
+
+func (o ProjectPricingExportOptions) isDDP() bool {
+	return strings.EqualFold(strings.TrimSpace(o.Basis), "ddp")
+}
+
+type ProjectPricingExportData struct {
+	Configs   []ProjectPricingExportConfig
+	CreatedAt time.Time
+}
+
+type ProjectPricingExportConfig struct {
+	Name        string
+	Article     string
+	Line        int
+	ServerCount int
+	Rows        []ProjectPricingExportRow
+}
+
+type ProjectPricingExportRow struct {
+	LotDisplay  string
+	VendorPN    string
+	Description string
+	Quantity    int
+	BOMTotal    *float64
+	Estimate    *float64
+	Stock       *float64
+	Competitor  *float64
+}
+
 // ToCSV writes project export data in the new structured CSV format.
 //
 // Format:
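The defaulting behaviour added in this hunk can be exercised in isolation: a zero `SaleMarkup` falls back to 1.3, and the basis string is compared case-insensitively after trimming. A minimal sketch, where `exportOptions` is a stand-in for `ProjectPricingExportOptions`:

```go
package main

import (
	"fmt"
	"strings"
)

// exportOptions mirrors the two defaulted fields from the diff.
type exportOptions struct {
	Basis      string  // "fob" or "ddp"; empty defaults to "fob"
	SaleMarkup float64 // DDP multiplier; 0 defaults to 1.3
}

// saleMarkupFactor returns the explicit markup, or 1.3 when unset.
func (o exportOptions) saleMarkupFactor() float64 {
	if o.SaleMarkup > 0 {
		return o.SaleMarkup
	}
	return 1.3
}

// isDDP trims whitespace and compares case-insensitively, so " DDP " counts.
func (o exportOptions) isDDP() bool {
	return strings.EqualFold(strings.TrimSpace(o.Basis), "ddp")
}

func main() {
	fmt.Println(exportOptions{}.saleMarkupFactor())                // 1.3
	fmt.Println(exportOptions{SaleMarkup: 1.5}.saleMarkupFactor()) // 1.5
	fmt.Println(exportOptions{Basis: " DDP "}.isDDP())             // true
	fmt.Println(exportOptions{}.isDDP())                           // false
}
```

Keeping the defaults in small methods rather than at the call sites means every consumer (CSV writer, markup application) sees the same effective values.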
@@ -92,7 +139,10 @@ func (s *ExportService) ToCSV(w io.Writer, data *ProjectExportData) error {
 	}
 
 	for i, block := range data.Configs {
-		lineNo := (i + 1) * 10
+		lineNo := block.Line
+		if lineNo <= 0 {
+			lineNo = (i + 1) * 10
+		}
 
 		serverCount := block.ServerCount
 		if serverCount < 1 {
@@ -104,13 +154,13 @@ func (s *ExportService) ToCSV(w io.Writer, data *ProjectExportData) error {
 	// Server summary row
 	serverRow := []string{
 		fmt.Sprintf("%d", lineNo),        // Line
 		"",                               // Type
 		block.Article,                    // p/n
 		"",                               // Description
 		"",                               // Qty (1 pcs.)
 		fmt.Sprintf("%d", serverCount),   // Qty (total)
 		formatPriceInt(block.UnitPrice),  // Price (1 pcs.)
 		formatPriceWithSpace(totalPrice), // Price (total)
 	}
 	if err := csvWriter.Write(serverRow); err != nil {
 		return fmt.Errorf("failed to write server row: %w", err)
@@ -124,14 +174,14 @@ func (s *ExportService) ToCSV(w io.Writer, data *ProjectExportData) error {
 	// Component rows
 	for _, item := range sortedItems {
 		componentRow := []string{
 			"",                               // Line
 			item.Category,                    // Type
 			item.LotName,                     // p/n
 			"",                               // Description
 			fmt.Sprintf("%d", item.Quantity), // Qty (1 pcs.)
 			"",                               // Qty (total)
 			formatPriceComma(item.UnitPrice), // Price (1 pcs.)
 			"",                               // Price (total)
 		}
 		if err := csvWriter.Write(componentRow); err != nil {
 			return fmt.Errorf("failed to write component row: %w", err)
@@ -163,6 +213,78 @@ func (s *ExportService) ToCSVBytes(data *ProjectExportData) ([]byte, error) {
 	return buf.Bytes(), nil
 }
 
+func (s *ExportService) ProjectToPricingExportData(configs []models.Configuration, opts ProjectPricingExportOptions) (*ProjectPricingExportData, error) {
+	sortedConfigs := make([]models.Configuration, len(configs))
+	copy(sortedConfigs, configs)
+	sort.Slice(sortedConfigs, func(i, j int) bool {
+		leftLine := sortedConfigs[i].Line
+		rightLine := sortedConfigs[j].Line
+
+		if leftLine <= 0 {
+			leftLine = int(^uint(0) >> 1)
+		}
+		if rightLine <= 0 {
+			rightLine = int(^uint(0) >> 1)
+		}
+		if leftLine != rightLine {
+			return leftLine < rightLine
+		}
+		if !sortedConfigs[i].CreatedAt.Equal(sortedConfigs[j].CreatedAt) {
+			return sortedConfigs[i].CreatedAt.After(sortedConfigs[j].CreatedAt)
+		}
+		return sortedConfigs[i].UUID > sortedConfigs[j].UUID
+	})
+
+	blocks := make([]ProjectPricingExportConfig, 0, len(sortedConfigs))
+	for i := range sortedConfigs {
+		block, err := s.buildPricingExportBlock(&sortedConfigs[i], opts)
+		if err != nil {
+			return nil, err
+		}
+		blocks = append(blocks, block)
+	}
+
+	return &ProjectPricingExportData{
+		Configs:   blocks,
+		CreatedAt: time.Now(),
+	}, nil
+}
+
+func (s *ExportService) ToPricingCSV(w io.Writer, data *ProjectPricingExportData, opts ProjectPricingExportOptions) error {
+	if _, err := w.Write([]byte{0xEF, 0xBB, 0xBF}); err != nil {
+		return fmt.Errorf("failed to write BOM: %w", err)
+	}
+
+	csvWriter := csv.NewWriter(w)
+	csvWriter.Comma = ';'
+	defer csvWriter.Flush()
+
+	headers := pricingCSVHeaders(opts)
+	if err := csvWriter.Write(headers); err != nil {
+		return fmt.Errorf("failed to write pricing header: %w", err)
+	}
+
+	writeRows := opts.IncludeLOT || opts.IncludeBOM
+	for _, cfg := range data.Configs {
+		if err := csvWriter.Write(pricingConfigSummaryRow(cfg, opts)); err != nil {
+			return fmt.Errorf("failed to write config summary row: %w", err)
+		}
+		if writeRows {
+			for _, row := range cfg.Rows {
+				if err := csvWriter.Write(pricingCSVRow(row, opts)); err != nil {
+					return fmt.Errorf("failed to write pricing row: %w", err)
+				}
+			}
+		}
+	}
+
+	csvWriter.Flush()
+	if err := csvWriter.Error(); err != nil {
+		return fmt.Errorf("csv writer error: %w", err)
+	}
+	return nil
+}
+
 // ConfigToExportData converts a single configuration into ProjectExportData.
 func (s *ExportService) ConfigToExportData(cfg *models.Configuration) *ProjectExportData {
 	block := s.buildExportBlock(cfg)
@@ -174,9 +296,30 @@ func (s *ExportService) ConfigToExportData(cfg *models.Configuration) *ProjectEx
 
 // ProjectToExportData converts multiple configurations into ProjectExportData.
 func (s *ExportService) ProjectToExportData(configs []models.Configuration) *ProjectExportData {
+	sortedConfigs := make([]models.Configuration, len(configs))
+	copy(sortedConfigs, configs)
+	sort.Slice(sortedConfigs, func(i, j int) bool {
+		leftLine := sortedConfigs[i].Line
+		rightLine := sortedConfigs[j].Line
+
+		if leftLine <= 0 {
+			leftLine = int(^uint(0) >> 1)
+		}
+		if rightLine <= 0 {
+			rightLine = int(^uint(0) >> 1)
+		}
+		if leftLine != rightLine {
+			return leftLine < rightLine
+		}
+		if !sortedConfigs[i].CreatedAt.Equal(sortedConfigs[j].CreatedAt) {
+			return sortedConfigs[i].CreatedAt.After(sortedConfigs[j].CreatedAt)
+		}
+		return sortedConfigs[i].UUID > sortedConfigs[j].UUID
+	})
+
 	blocks := make([]ConfigExportBlock, 0, len(configs))
-	for i := range configs {
-		blocks = append(blocks, s.buildExportBlock(&configs[i]))
+	for i := range sortedConfigs {
+		blocks = append(blocks, s.buildExportBlock(&sortedConfigs[i]))
 	}
 	return &ProjectExportData{
 		Configs: blocks,
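Both export paths sort configurations by their explicit `Line` number and push unset values (`Line <= 0`) to the end by substituting the maximum int as a sentinel. A standalone sketch of that comparator; it uses `sort.SliceStable` instead of the diff's extra `CreatedAt`/`UUID` tiebreakers, which is a simpler way to keep equal elements in a deterministic order:

```go
package main

import (
	"fmt"
	"sort"
)

// sortByLine orders positive line numbers ascending and moves zero or
// negative (i.e. unset) line numbers to the end.
func sortByLine(lines []int) {
	sort.SliceStable(lines, func(i, j int) bool {
		l, r := lines[i], lines[j]
		if l <= 0 {
			l = int(^uint(0) >> 1) // max int sentinel, as in the diff
		}
		if r <= 0 {
			r = int(^uint(0) >> 1)
		}
		return l < r
	})
}

func main() {
	lines := []int{30, 0, 10, -1, 20}
	sortByLine(lines)
	fmt.Println(lines) // [10 20 30 0 -1]
}
```

The diff needs the explicit tiebreakers because `sort.Slice` is not stable; without them, two configurations sharing a `Line` could swap between exports.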
@@ -214,12 +357,129 @@ func (s *ExportService) buildExportBlock(cfg *models.Configuration) ConfigExport
 
 	return ConfigExportBlock{
 		Article:     cfg.Article,
+		Line:        cfg.Line,
 		ServerCount: serverCount,
 		UnitPrice:   unitTotal,
 		Items:       items,
 	}
 }
 
+func (s *ExportService) buildPricingExportBlock(cfg *models.Configuration, opts ProjectPricingExportOptions) (ProjectPricingExportConfig, error) {
+	block := ProjectPricingExportConfig{
+		Name:        cfg.Name,
+		Article:     cfg.Article,
+		Line:        cfg.Line,
+		ServerCount: exportPositiveInt(cfg.ServerCount, 1),
+		Rows:        make([]ProjectPricingExportRow, 0),
+	}
+	if s.localDB == nil {
+		for _, item := range cfg.Items {
+			block.Rows = append(block.Rows, ProjectPricingExportRow{
+				LotDisplay: item.LotName,
+				VendorPN:   "—",
+				Quantity:   item.Quantity,
+				Estimate:   floatPtr(item.UnitPrice * float64(item.Quantity)),
+			})
+		}
+		return block, nil
+	}
+
+	localCfg, err := s.localDB.GetConfigurationByUUID(cfg.UUID)
+	if err != nil {
+		localCfg = nil
+	}
+
+	priceMap := s.resolvePricingTotals(cfg, localCfg, opts)
+	componentDescriptions := s.resolveLotDescriptions(cfg, localCfg)
+	if opts.IncludeBOM && localCfg != nil && len(localCfg.VendorSpec) > 0 {
+		coveredLots := make(map[string]struct{})
+		for _, row := range localCfg.VendorSpec {
+			rowMappings := normalizeLotMappings(row.LotMappings)
+			for _, mapping := range rowMappings {
+				coveredLots[mapping.LotName] = struct{}{}
+			}
+
+			description := strings.TrimSpace(row.Description)
+			if description == "" && len(rowMappings) > 0 {
+				description = componentDescriptions[rowMappings[0].LotName]
+			}
+
+			pricingRow := ProjectPricingExportRow{
+				LotDisplay:  formatLotDisplay(rowMappings),
+				VendorPN:    row.VendorPartnumber,
+				Description: description,
+				Quantity:    exportPositiveInt(row.Quantity, 1),
+				BOMTotal:    vendorRowTotal(row),
+				Estimate:    computeMappingTotal(priceMap, rowMappings, row.Quantity, func(p pricingLevels) *float64 { return p.Estimate }),
+				Stock:       computeMappingTotal(priceMap, rowMappings, row.Quantity, func(p pricingLevels) *float64 { return p.Stock }),
+				Competitor:  computeMappingTotal(priceMap, rowMappings, row.Quantity, func(p pricingLevels) *float64 { return p.Competitor }),
+			}
+			block.Rows = append(block.Rows, pricingRow)
+		}
+
+		for _, item := range cfg.Items {
+			if item.LotName == "" {
+				continue
+			}
+			if _, ok := coveredLots[item.LotName]; ok {
+				continue
+			}
+			estimate := estimateOnlyTotal(priceMap[item.LotName].Estimate, item.UnitPrice, item.Quantity)
+			block.Rows = append(block.Rows, ProjectPricingExportRow{
+				LotDisplay:  item.LotName,
+				VendorPN:    "—",
+				Description: componentDescriptions[item.LotName],
+				Quantity:    exportPositiveInt(item.Quantity, 1),
+				Estimate:    estimate,
+				Stock:       totalForUnitPrice(priceMap[item.LotName].Stock, item.Quantity),
+				Competitor:  totalForUnitPrice(priceMap[item.LotName].Competitor, item.Quantity),
+			})
+		}
+		if opts.isDDP() {
+			applyDDPMarkup(block.Rows, opts.saleMarkupFactor())
+		}
+		return block, nil
+	}
+
+	for _, item := range cfg.Items {
+		if item.LotName == "" {
+			continue
+		}
+		estimate := estimateOnlyTotal(priceMap[item.LotName].Estimate, item.UnitPrice, item.Quantity)
+		block.Rows = append(block.Rows, ProjectPricingExportRow{
+			LotDisplay:  item.LotName,
+			VendorPN:    "—",
+			Description: componentDescriptions[item.LotName],
+			Quantity:    exportPositiveInt(item.Quantity, 1),
+			Estimate:    estimate,
+			Stock:       totalForUnitPrice(priceMap[item.LotName].Stock, item.Quantity),
+			Competitor:  totalForUnitPrice(priceMap[item.LotName].Competitor, item.Quantity),
+		})
+	}
+
+	if opts.isDDP() {
+		applyDDPMarkup(block.Rows, opts.saleMarkupFactor())
+	}
+
+	return block, nil
+}
+
+func applyDDPMarkup(rows []ProjectPricingExportRow, factor float64) {
+	for i := range rows {
+		rows[i].Estimate = scaleFloatPtr(rows[i].Estimate, factor)
+		rows[i].Stock = scaleFloatPtr(rows[i].Stock, factor)
+		rows[i].Competitor = scaleFloatPtr(rows[i].Competitor, factor)
+	}
+}
+
+func scaleFloatPtr(v *float64, factor float64) *float64 {
+	if v == nil {
+		return nil
+	}
+	result := *v * factor
+	return &result
+}
+
 // resolveCategories returns lot_name → category map.
 // Primary source: pricelist items (lot_category). Fallback: local_components table.
 func (s *ExportService) resolveCategories(pricelistID *uint, lotNames []string) map[string]string {
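`scaleFloatPtr` is the piece that lets DDP markup respect missing prices: a `nil` pointer (no price known) stays `nil` rather than becoming a bogus zero, and a fresh pointer is returned so the caller's original value is never mutated. The behaviour in isolation:

```go
package main

import "fmt"

// scaleFloatPtr multiplies the pointed-to value by factor, preserving nil
// (unknown price) and returning a new pointer rather than writing through
// the argument.
func scaleFloatPtr(v *float64, factor float64) *float64 {
	if v == nil {
		return nil
	}
	result := *v * factor
	return &result
}

func main() {
	price := 100.0
	scaled := scaleFloatPtr(&price, 1.5)
	fmt.Println(*scaled, price) // 150 100 — the original is untouched
	fmt.Println(scaleFloatPtr(nil, 1.5) == nil) // true
}
```

Using `*float64` for "price or unknown" keeps the CSV layer honest: `formatMoneyValue` can render `nil` as a dash instead of printing a misleading `0.00`.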
@@ -276,6 +536,324 @@ func sortItemsByCategory(items []ExportItem, categoryOrder map[string]int) {
 	}
 }
 
+type pricingLevels struct {
+	Estimate   *float64
+	Stock      *float64
+	Competitor *float64
+}
+
+func (s *ExportService) resolvePricingTotals(cfg *models.Configuration, localCfg *localdb.LocalConfiguration, opts ProjectPricingExportOptions) map[string]pricingLevels {
+	result := map[string]pricingLevels{}
+	lots := collectPricingLots(cfg, localCfg, opts.IncludeBOM)
+	if len(lots) == 0 || s.localDB == nil {
+		return result
+	}
+
+	estimateID := cfg.PricelistID
+	if estimateID == nil || *estimateID == 0 {
+		if latest, err := s.localDB.GetLatestLocalPricelistBySource("estimate"); err == nil && latest != nil {
+			estimateID = &latest.ServerID
+		}
+	}
+
+	var warehouseID *uint
+	var competitorID *uint
+	if localCfg != nil {
+		warehouseID = localCfg.WarehousePricelistID
+		competitorID = localCfg.CompetitorPricelistID
+	}
+	if warehouseID == nil || *warehouseID == 0 {
+		if latest, err := s.localDB.GetLatestLocalPricelistBySource("warehouse"); err == nil && latest != nil {
+			warehouseID = &latest.ServerID
+		}
+	}
+	if competitorID == nil || *competitorID == 0 {
+		if latest, err := s.localDB.GetLatestLocalPricelistBySource("competitor"); err == nil && latest != nil {
+			competitorID = &latest.ServerID
+		}
+	}
+
+	for _, lot := range lots {
+		level := pricingLevels{}
+		level.Estimate = s.lookupPricePointer(estimateID, lot)
+		level.Stock = s.lookupPricePointer(warehouseID, lot)
+		level.Competitor = s.lookupPricePointer(competitorID, lot)
+		result[lot] = level
+	}
+	return result
+}
+
+func (s *ExportService) lookupPricePointer(serverPricelistID *uint, lotName string) *float64 {
+	if s.localDB == nil || serverPricelistID == nil || *serverPricelistID == 0 || strings.TrimSpace(lotName) == "" {
+		return nil
+	}
+	localPL, err := s.localDB.GetLocalPricelistByServerID(*serverPricelistID)
+	if err != nil {
+		return nil
+	}
+	price, err := s.localDB.GetLocalPriceForLot(localPL.ID, lotName)
+	if err != nil || price <= 0 {
+		return nil
+	}
+	return floatPtr(price)
+}
+
+func (s *ExportService) resolveLotDescriptions(cfg *models.Configuration, localCfg *localdb.LocalConfiguration) map[string]string {
+	lots := collectPricingLots(cfg, localCfg, true)
+	result := make(map[string]string, len(lots))
+	if s.localDB == nil {
+		return result
+	}
+	for _, lot := range lots {
+		component, err := s.localDB.GetLocalComponent(lot)
+		if err != nil {
+			continue
+		}
+		result[lot] = component.LotDescription
+	}
+	return result
+}
+
+func collectPricingLots(cfg *models.Configuration, localCfg *localdb.LocalConfiguration, includeBOM bool) []string {
+	seen := map[string]struct{}{}
+	out := make([]string, 0)
+	if includeBOM && localCfg != nil {
+		for _, row := range localCfg.VendorSpec {
+			for _, mapping := range normalizeLotMappings(row.LotMappings) {
+				if _, ok := seen[mapping.LotName]; ok {
+					continue
+				}
+				seen[mapping.LotName] = struct{}{}
+				out = append(out, mapping.LotName)
+			}
+		}
+	}
+	for _, item := range cfg.Items {
+		lot := strings.TrimSpace(item.LotName)
+		if lot == "" {
+			continue
+		}
+		if _, ok := seen[lot]; ok {
+			continue
+		}
+		seen[lot] = struct{}{}
+		out = append(out, lot)
+	}
+	return out
+}
+
+func normalizeLotMappings(mappings []localdb.VendorSpecLotMapping) []localdb.VendorSpecLotMapping {
+	if len(mappings) == 0 {
+		return nil
+	}
+	out := make([]localdb.VendorSpecLotMapping, 0, len(mappings))
+	for _, mapping := range mappings {
+		lot := strings.TrimSpace(mapping.LotName)
+		if lot == "" {
+			continue
+		}
+		qty := mapping.QuantityPerPN
+		if qty < 1 {
+			qty = 1
+		}
+		out = append(out, localdb.VendorSpecLotMapping{
+			LotName:       lot,
+			QuantityPerPN: qty,
+		})
+	}
+	return out
+}
+
+func vendorRowTotal(row localdb.VendorSpecItem) *float64 {
+	if row.TotalPrice != nil {
+		return floatPtr(*row.TotalPrice)
+	}
+	if row.UnitPrice == nil {
+		return nil
+	}
+	return floatPtr(*row.UnitPrice * float64(exportPositiveInt(row.Quantity, 1)))
+}
+
+func computeMappingTotal(priceMap map[string]pricingLevels, mappings []localdb.VendorSpecLotMapping, pnQty int, selector func(pricingLevels) *float64) *float64 {
+	if len(mappings) == 0 {
+		return nil
+	}
+	total := 0.0
+	hasValue := false
+	qty := exportPositiveInt(pnQty, 1)
+	for _, mapping := range mappings {
+		price := selector(priceMap[mapping.LotName])
+		if price == nil || *price <= 0 {
+			continue
+		}
+		total += *price * float64(qty*mapping.QuantityPerPN)
+		hasValue = true
+	}
+	if !hasValue {
+		return nil
+	}
+	return floatPtr(total)
+}
+
+func totalForUnitPrice(unitPrice *float64, quantity int) *float64 {
+	if unitPrice == nil || *unitPrice <= 0 {
+		return nil
+	}
+	total := *unitPrice * float64(exportPositiveInt(quantity, 1))
+	return &total
+}
+
+func estimateOnlyTotal(estimatePrice *float64, fallbackUnitPrice float64, quantity int) *float64 {
+	if estimatePrice != nil && *estimatePrice > 0 {
+		return totalForUnitPrice(estimatePrice, quantity)
+	}
+	if fallbackUnitPrice <= 0 {
+		return nil
+	}
+	total := fallbackUnitPrice * float64(maxInt(quantity, 1))
+	return &total
+}
+
+func pricingCSVHeaders(opts ProjectPricingExportOptions) []string {
+	headers := make([]string, 0, 8)
+	headers = append(headers, "Line Item")
+	if opts.IncludeLOT {
+		headers = append(headers, "LOT")
+	}
+	headers = append(headers, "PN вендора", "Описание", "Кол-во")
+	if opts.IncludeBOM {
+		headers = append(headers, "BOM")
+	}
+	if opts.IncludeEstimate {
+		headers = append(headers, "Estimate")
+	}
+	if opts.IncludeStock {
+		headers = append(headers, "Stock")
+	}
+	if opts.IncludeCompetitor {
+		headers = append(headers, "Конкуренты")
+	}
+	return headers
+}
+
+func pricingCSVRow(row ProjectPricingExportRow, opts ProjectPricingExportOptions) []string {
+	record := make([]string, 0, 8)
+	record = append(record, "")
+	if opts.IncludeLOT {
+		record = append(record, emptyDash(row.LotDisplay))
+	}
+	record = append(record,
+		emptyDash(row.VendorPN),
+		emptyDash(row.Description),
+		fmt.Sprintf("%d", exportPositiveInt(row.Quantity, 1)),
+	)
+	if opts.IncludeBOM {
+		record = append(record, formatMoneyValue(row.BOMTotal))
+	}
+	if opts.IncludeEstimate {
+		record = append(record, formatMoneyValue(row.Estimate))
+	}
+	if opts.IncludeStock {
+		record = append(record, formatMoneyValue(row.Stock))
+	}
+	if opts.IncludeCompetitor {
+		record = append(record, formatMoneyValue(row.Competitor))
+	}
+	return record
+}
+
+func pricingConfigSummaryRow(cfg ProjectPricingExportConfig, opts ProjectPricingExportOptions) []string {
+	record := make([]string, 0, 8)
+	record = append(record, fmt.Sprintf("%d", cfg.Line))
+	if opts.IncludeLOT {
+		record = append(record, "")
+	}
+	record = append(record,
+		emptyDash(cfg.Article),
+		emptyDash(cfg.Name),
+		fmt.Sprintf("%d", exportPositiveInt(cfg.ServerCount, 1)),
+	)
+	if opts.IncludeBOM {
+		record = append(record, formatMoneyValue(sumPricingColumn(cfg.Rows, func(row ProjectPricingExportRow) *float64 { return row.BOMTotal })))
+	}
+	if opts.IncludeEstimate {
+		record = append(record, formatMoneyValue(sumPricingColumn(cfg.Rows, func(row ProjectPricingExportRow) *float64 { return row.Estimate })))
+	}
+	if opts.IncludeStock {
+		record = append(record, formatMoneyValue(sumPricingColumn(cfg.Rows, func(row ProjectPricingExportRow) *float64 { return row.Stock })))
+	}
+	if opts.IncludeCompetitor {
+		record = append(record, formatMoneyValue(sumPricingColumn(cfg.Rows, func(row ProjectPricingExportRow) *float64 { return row.Competitor })))
+	}
+	return record
+}
+
+func formatLotDisplay(mappings []localdb.VendorSpecLotMapping) string {
+	switch len(mappings) {
+	case 0:
+		return "н/д"
+	case 1:
+		return mappings[0].LotName
+	default:
+		return fmt.Sprintf("%s +%d", mappings[0].LotName, len(mappings)-1)
+	}
+}
+
+func formatMoneyValue(value *float64) string {
+	if value == nil {
+		return "—"
+	}
+	n := math.Round(*value*100) / 100
+	sign := ""
+	if n < 0 {
+		sign = "-"
+		n = -n
+	}
+	whole := int64(n)
+	fraction := int(math.Round((n - float64(whole)) * 100))
+	if fraction == 100 {
+		whole++
+		fraction = 0
||||||
|
}
|
||||||
|
return fmt.Sprintf("%s%s,%02d", sign, formatIntWithSpace(whole), fraction)
|
||||||
|
}
|
||||||
|
|
||||||
|
func emptyDash(value string) string {
|
||||||
|
if strings.TrimSpace(value) == "" {
|
||||||
|
return "—"
|
||||||
|
}
|
||||||
|
return value
|
||||||
|
}
|
||||||
|
|
||||||
|
func sumPricingColumn(rows []ProjectPricingExportRow, selector func(ProjectPricingExportRow) *float64) *float64 {
|
||||||
|
total := 0.0
|
||||||
|
hasValue := false
|
||||||
|
for _, row := range rows {
|
||||||
|
value := selector(row)
|
||||||
|
if value == nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
total += *value
|
||||||
|
hasValue = true
|
||||||
|
}
|
||||||
|
if !hasValue {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
return floatPtr(total)
|
||||||
|
}
|
||||||
|
|
||||||
|
func floatPtr(value float64) *float64 {
|
||||||
|
v := value
|
||||||
|
return &v
|
||||||
|
}
|
||||||
|
|
||||||
|
func exportPositiveInt(value, fallback int) int {
|
||||||
|
if value < 1 {
|
||||||
|
return fallback
|
||||||
|
}
|
||||||
|
return value
|
||||||
|
}
|
||||||
|
|
||||||
// formatPriceComma formats a price with comma as decimal separator (e.g., "2074,5").
|
// formatPriceComma formats a price with comma as decimal separator (e.g., "2074,5").
|
||||||
// Trailing zeros after the comma are trimmed, and if the value is an integer, no comma is shown.
|
// Trailing zeros after the comma are trimmed, and if the value is an integer, no comma is shown.
|
||||||
func formatPriceComma(value float64) string {
|
func formatPriceComma(value float64) string {
|
||||||
|
@@ -8,6 +8,7 @@ import (
 	"time"

 	"git.mchus.pro/mchus/quoteforge/internal/config"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 )

 func newTestProjectData(items []ExportItem, article string, serverCount int) *ProjectExportData {
@@ -357,6 +358,51 @@ func TestToCSV_MultipleBlocks(t *testing.T) {
 	}
 }
+
+func TestProjectToExportData_SortsByLine(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil, nil)
+
+	configs := []models.Configuration{
+		{
+			UUID:        "cfg-1",
+			Line:        30,
+			Article:     "ART-30",
+			ServerCount: 1,
+			Items:       models.ConfigItems{{LotName: "LOT-30", Quantity: 1, UnitPrice: 300}},
+			CreatedAt:   time.Now().Add(-1 * time.Hour),
+		},
+		{
+			UUID:        "cfg-2",
+			Line:        10,
+			Article:     "ART-10",
+			ServerCount: 1,
+			Items:       models.ConfigItems{{LotName: "LOT-10", Quantity: 1, UnitPrice: 100}},
+			CreatedAt:   time.Now().Add(-2 * time.Hour),
+		},
+		{
+			UUID:        "cfg-3",
+			Line:        20,
+			Article:     "ART-20",
+			ServerCount: 1,
+			Items:       models.ConfigItems{{LotName: "LOT-20", Quantity: 1, UnitPrice: 200}},
+			CreatedAt:   time.Now().Add(-3 * time.Hour),
+		},
+	}
+
+	data := svc.ProjectToExportData(configs)
+	if len(data.Configs) != 3 {
+		t.Fatalf("expected 3 blocks, got %d", len(data.Configs))
+	}
+	if data.Configs[0].Article != "ART-10" || data.Configs[0].Line != 10 {
+		t.Fatalf("first block must be line 10, got article=%s line=%d", data.Configs[0].Article, data.Configs[0].Line)
+	}
+	if data.Configs[1].Article != "ART-20" || data.Configs[1].Line != 20 {
+		t.Fatalf("second block must be line 20, got article=%s line=%d", data.Configs[1].Article, data.Configs[1].Line)
+	}
+	if data.Configs[2].Article != "ART-30" || data.Configs[2].Line != 30 {
+		t.Fatalf("third block must be line 30, got article=%s line=%d", data.Configs[2].Article, data.Configs[2].Line)
+	}
+}

 func TestFormatPriceWithSpace(t *testing.T) {
 	tests := []struct {
 		input float64
@@ -398,6 +444,117 @@ func TestFormatPriceComma(t *testing.T) {
 	}
 }
+
+func TestToPricingCSV_UsesSelectedColumns(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil, nil)
+	data := &ProjectPricingExportData{
+		Configs: []ProjectPricingExportConfig{
+			{
+				Name:        "Config A",
+				Article:     "ART-1",
+				Line:        10,
+				ServerCount: 2,
+				Rows: []ProjectPricingExportRow{
+					{
+						LotDisplay:  "LOT_A +1",
+						VendorPN:    "PN-001",
+						Description: "Bundle row",
+						Quantity:    2,
+						BOMTotal:    floatPtr(2400.5),
+						Estimate:    floatPtr(2000),
+						Stock:       floatPtr(1800.25),
+					},
+				},
+			},
+		},
+		CreatedAt: time.Now(),
+	}
+	opts := ProjectPricingExportOptions{
+		IncludeLOT:      true,
+		IncludeBOM:      true,
+		IncludeEstimate: true,
+		IncludeStock:    true,
+	}
+
+	var buf bytes.Buffer
+	if err := svc.ToPricingCSV(&buf, data, opts); err != nil {
+		t.Fatalf("ToPricingCSV failed: %v", err)
+	}
+
+	reader := csv.NewReader(bytes.NewReader(buf.Bytes()[3:]))
+	reader.Comma = ';'
+	reader.FieldsPerRecord = -1
+
+	header, err := reader.Read()
+	if err != nil {
+		t.Fatalf("read header row: %v", err)
+	}
+	expectedHeader := []string{"Line Item", "LOT", "PN вендора", "Описание", "Кол-во", "BOM", "Estimate", "Stock"}
+	for i, want := range expectedHeader {
+		if header[i] != want {
+			t.Fatalf("header[%d]: expected %q, got %q", i, want, header[i])
+		}
+	}
+
+	summary, err := reader.Read()
+	if err != nil {
+		t.Fatalf("read summary row: %v", err)
+	}
+	expectedSummary := []string{"10", "", "", "Config A", "2", "2 400,50", "2 000,00", "1 800,25"}
+	for i, want := range expectedSummary {
+		if summary[i] != want {
+			t.Fatalf("summary[%d]: expected %q, got %q", i, want, summary[i])
+		}
+	}
+
+	row, err := reader.Read()
+	if err != nil {
+		t.Fatalf("read data row: %v", err)
+	}
+	expectedRow := []string{"", "LOT_A +1", "PN-001", "Bundle row", "2", "2 400,50", "2 000,00", "1 800,25"}
+	for i, want := range expectedRow {
+		if row[i] != want {
+			t.Fatalf("row[%d]: expected %q, got %q", i, want, row[i])
+		}
+	}
+}
+
+func TestProjectToPricingExportData_UsesCartRowsWithoutBOM(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil, nil)
+	configs := []models.Configuration{
+		{
+			UUID:        "cfg-1",
+			Name:        "Config A",
+			Article:     "ART-1",
+			ServerCount: 1,
+			Items: models.ConfigItems{
+				{LotName: "LOT_A", Quantity: 2, UnitPrice: 300},
+			},
+			CreatedAt: time.Now(),
+		},
+	}
+
+	data, err := svc.ProjectToPricingExportData(configs, ProjectPricingExportOptions{
+		IncludeLOT:      true,
+		IncludeEstimate: true,
+	})
+	if err != nil {
+		t.Fatalf("ProjectToPricingExportData failed: %v", err)
+	}
+	if len(data.Configs) != 1 || len(data.Configs[0].Rows) != 1 {
+		t.Fatalf("unexpected rows count: %+v", data.Configs)
+	}
+	row := data.Configs[0].Rows[0]
+	if row.LotDisplay != "LOT_A" {
+		t.Fatalf("expected LOT_A, got %q", row.LotDisplay)
+	}
+	if row.VendorPN != "—" {
+		t.Fatalf("expected vendor dash, got %q", row.VendorPN)
+	}
+	if row.Estimate == nil || *row.Estimate != 600 {
+		t.Fatalf("expected estimate total 600, got %+v", row.Estimate)
+	}
+}

 // failingWriter always returns an error
 type failingWriter struct{}
@@ -49,11 +49,13 @@ func NewLocalConfigurationService(

 // Create creates a new configuration in local SQLite and queues it for sync
 func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConfigRequest) (*models.Configuration, error) {
-	// If online, check for new pricelists first
+	// If online, trigger pricelist sync in the background — do not block config creation
 	if s.isOnline() {
-		if err := s.syncService.SyncPricelistsIfNeeded(); err != nil {
-			// Log but don't fail - we can still use local pricelists
-		}
+		go func() {
+			if err := s.syncService.SyncPricelistsIfNeeded(); err != nil {
+				// Log but don't fail - we can still use local pricelists
+			}
+		}()
 	}

 	projectUUID, err := s.resolveProjectUUID(ownerUsername, req.ProjectUUID)
@@ -83,22 +85,29 @@ func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConf
 	}

 	cfg := &models.Configuration{
 		UUID:          uuid.New().String(),
 		OwnerUsername: ownerUsername,
 		ProjectUUID:   projectUUID,
 		Name:          req.Name,
 		Items:         req.Items,
 		TotalPrice:    &total,
 		CustomPrice:   req.CustomPrice,
 		Notes:         req.Notes,
 		IsTemplate:    req.IsTemplate,
 		ServerCount:   req.ServerCount,
 		ServerModel:   req.ServerModel,
 		SupportCode:   req.SupportCode,
 		Article:       req.Article,
 		PricelistID:   pricelistID,
-		OnlyInStock:   req.OnlyInStock,
-		CreatedAt:     time.Now(),
+		WarehousePricelistID:  req.WarehousePricelistID,
+		CompetitorPricelistID: req.CompetitorPricelistID,
+		ConfigType:            req.ConfigType,
+		DisablePriceRefresh:   req.DisablePriceRefresh,
+		OnlyInStock:           req.OnlyInStock,
+		CreatedAt:             time.Now(),
+	}
+	if cfg.ConfigType == "" {
+		cfg.ConfigType = "server"
 	}

 	// Convert to local model
@@ -107,6 +116,7 @@ func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConf
 	if err := s.createWithVersion(localCfg, ownerUsername); err != nil {
 		return nil, fmt.Errorf("create configuration with version: %w", err)
 	}
+	cfg.Line = localCfg.Line

 	// Record usage stats
 	_ = s.quoteService.RecordUsage(req.Items)
@@ -195,6 +205,9 @@ func (s *LocalConfigurationService) Update(uuid string, ownerUsername string, re
 	localCfg.SupportCode = req.SupportCode
 	localCfg.Article = req.Article
 	localCfg.PricelistID = pricelistID
+	localCfg.WarehousePricelistID = req.WarehousePricelistID
+	localCfg.CompetitorPricelistID = req.CompetitorPricelistID
+	localCfg.DisablePriceRefresh = req.DisablePriceRefresh
 	localCfg.OnlyInStock = req.OnlyInStock
 	localCfg.UpdatedAt = time.Now()
 	localCfg.SyncStatus = "pending"
@@ -303,28 +316,32 @@ func (s *LocalConfigurationService) CloneToProject(configUUID string, ownerUsern
 	}

 	clone := &models.Configuration{
 		UUID:          uuid.New().String(),
 		OwnerUsername: ownerUsername,
 		ProjectUUID:   resolvedProjectUUID,
 		Name:          newName,
 		Items:         original.Items,
 		TotalPrice:    &total,
 		CustomPrice:   original.CustomPrice,
 		Notes:         original.Notes,
 		IsTemplate:    false,
 		ServerCount:   original.ServerCount,
 		ServerModel:   original.ServerModel,
 		SupportCode:   original.SupportCode,
 		Article:       original.Article,
 		PricelistID:   original.PricelistID,
-		OnlyInStock:   original.OnlyInStock,
-		CreatedAt:     time.Now(),
+		WarehousePricelistID:  original.WarehousePricelistID,
+		CompetitorPricelistID: original.CompetitorPricelistID,
+		DisablePriceRefresh:   original.DisablePriceRefresh,
+		OnlyInStock:           original.OnlyInStock,
+		CreatedAt:             time.Now(),
 	}

 	localCfg := localdb.ConfigurationToLocal(clone)
 	if err := s.createWithVersion(localCfg, ownerUsername); err != nil {
 		return nil, fmt.Errorf("clone configuration with version: %w", err)
 	}
+	clone.Line = localCfg.Line

 	return clone, nil
 }
@@ -388,17 +405,29 @@ func (s *LocalConfigurationService) RefreshPrices(uuid string, ownerUsername str
 		return nil, ErrConfigForbidden
 	}

-	// Refresh local pricelists when online and use latest active/local pricelist for recalculation.
+	// Refresh local pricelists when online.
 	if s.isOnline() {
 		_ = s.syncService.SyncPricelistsIfNeeded()
 	}
-	latestPricelist, latestErr := s.localDB.GetLatestLocalPricelist()
+
+	// Use the pricelist stored in the config; fall back to latest if unavailable.
+	var pricelist *localdb.LocalPricelist
+	if localCfg.PricelistID != nil && *localCfg.PricelistID > 0 {
+		if pl, err := s.localDB.GetLocalPricelistByServerID(*localCfg.PricelistID); err == nil {
+			pricelist = pl
+		}
+	}
+	if pricelist == nil {
+		if pl, err := s.localDB.GetLatestLocalPricelist(); err == nil {
+			pricelist = pl
+		}
+	}

 	// Update prices for all items from pricelist
 	updatedItems := make(localdb.LocalConfigItems, len(localCfg.Items))
 	for i, item := range localCfg.Items {
-		if latestErr == nil && latestPricelist != nil {
-			price, err := s.localDB.GetLocalPriceForLot(latestPricelist.ID, item.LotName)
+		if pricelist != nil {
+			price, err := s.localDB.GetLocalPriceForLot(pricelist.ID, item.LotName)
 			if err == nil && price > 0 {
 				updatedItems[i] = localdb.LocalConfigItem{
 					LotName: item.LotName,
@@ -423,8 +452,8 @@ func (s *LocalConfigurationService) RefreshPrices(uuid string, ownerUsername str
 	}

 	localCfg.TotalPrice = &total
-	if latestErr == nil && latestPricelist != nil {
-		localCfg.PricelistID = &latestPricelist.ServerID
+	if pricelist != nil {
+		localCfg.PricelistID = &pricelist.ServerID
 	}

 	// Set price update timestamp and mark for sync
@@ -519,6 +548,9 @@ func (s *LocalConfigurationService) UpdateNoAuth(uuid string, req *CreateConfigR
 	localCfg.SupportCode = req.SupportCode
 	localCfg.Article = req.Article
 	localCfg.PricelistID = pricelistID
+	localCfg.WarehousePricelistID = req.WarehousePricelistID
+	localCfg.CompetitorPricelistID = req.CompetitorPricelistID
+	localCfg.DisablePriceRefresh = req.DisablePriceRefresh
 	localCfg.OnlyInStock = req.OnlyInStock
 	localCfg.UpdatedAt = time.Now()
 	localCfg.SyncStatus = "pending"
@@ -621,25 +653,32 @@ func (s *LocalConfigurationService) CloneNoAuthToProjectFromVersion(configUUID s
 	}

 	clone := &models.Configuration{
 		UUID:          uuid.New().String(),
 		OwnerUsername: ownerUsername,
 		ProjectUUID:   resolvedProjectUUID,
 		Name:          newName,
 		Items:         original.Items,
 		TotalPrice:    &total,
 		CustomPrice:   original.CustomPrice,
 		Notes:         original.Notes,
 		IsTemplate:    false,
 		ServerCount:   original.ServerCount,
-		PricelistID:   original.PricelistID,
-		OnlyInStock:   original.OnlyInStock,
-		CreatedAt:     time.Now(),
+		ServerModel:   original.ServerModel,
+		SupportCode:   original.SupportCode,
+		Article:       original.Article,
+		PricelistID:   original.PricelistID,
+		WarehousePricelistID:  original.WarehousePricelistID,
+		CompetitorPricelistID: original.CompetitorPricelistID,
+		DisablePriceRefresh:   original.DisablePriceRefresh,
+		OnlyInStock:           original.OnlyInStock,
+		CreatedAt:             time.Now(),
 	}

 	localCfg := localdb.ConfigurationToLocal(clone)
 	if err := s.createWithVersion(localCfg, ownerUsername); err != nil {
 		return nil, fmt.Errorf("clone configuration without auth with version: %w", err)
 	}
+	clone.Line = localCfg.Line

 	return clone, nil
 }
@@ -741,8 +780,10 @@ func (s *LocalConfigurationService) ListTemplates(page, perPage int) ([]models.C
 	return templates[start:end], total, nil
 }

-// RefreshPricesNoAuth updates all component prices in the configuration without ownership check
-func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Configuration, error) {
+// RefreshPricesNoAuth updates all component prices in the configuration without ownership check.
+// pricelistServerID optionally specifies which pricelist to use; if nil, the config's stored
+// pricelist is used; if that is also absent, the latest local pricelist is used as a fallback.
+func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string, pricelistServerID *uint) (*models.Configuration, error) {
 	// Get configuration from local SQLite
 	localCfg, err := s.localDB.GetConfigurationByUUID(uuid)
 	if err != nil {
@@ -752,13 +793,36 @@ func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Co
 	if s.isOnline() {
 		_ = s.syncService.SyncPricelistsIfNeeded()
 	}
-	latestPricelist, latestErr := s.localDB.GetLatestLocalPricelist()
+
+	// Resolve which pricelist to use:
+	// 1. Explicitly requested pricelist (from UI selection)
+	// 2. Pricelist stored in the configuration
+	// 3. Latest local pricelist as last-resort fallback
+	var targetServerID *uint
+	if pricelistServerID != nil && *pricelistServerID > 0 {
+		targetServerID = pricelistServerID
+	} else if localCfg.PricelistID != nil && *localCfg.PricelistID > 0 {
+		targetServerID = localCfg.PricelistID
+	}
+
+	var pricelist *localdb.LocalPricelist
+	if targetServerID != nil {
+		if pl, err := s.localDB.GetLocalPricelistByServerID(*targetServerID); err == nil {
+			pricelist = pl
+		}
+	}
+	if pricelist == nil {
+		// Fallback: use latest local pricelist
+		if pl, err := s.localDB.GetLatestLocalPricelist(); err == nil {
+			pricelist = pl
+		}
+	}

 	// Update prices for all items from pricelist
 	updatedItems := make(localdb.LocalConfigItems, len(localCfg.Items))
 	for i, item := range localCfg.Items {
-		if latestErr == nil && latestPricelist != nil {
-			price, err := s.localDB.GetLocalPriceForLot(latestPricelist.ID, item.LotName)
+		if pricelist != nil {
+			price, err := s.localDB.GetLocalPriceForLot(pricelist.ID, item.LotName)
 			if err == nil && price > 0 {
 				updatedItems[i] = localdb.LocalConfigItem{
 					LotName: item.LotName,
@@ -783,8 +847,8 @@ func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Co
 	}

 	localCfg.TotalPrice = &total
-	if latestErr == nil && latestPricelist != nil {
-		localCfg.PricelistID = &latestPricelist.ServerID
+	if pricelist != nil {
+		localCfg.PricelistID = &pricelist.ServerID
 	}

 	// Set price update timestamp and mark for sync
@@ -826,21 +890,13 @@ func (s *LocalConfigurationService) UpdateServerCount(configUUID string, serverC
 		return fmt.Errorf("save local configuration: %w", err)
 	}

-	// Use existing current version for the pending change
-	var version localdb.LocalConfigurationVersion
-	if localCfg.CurrentVersionID != nil && *localCfg.CurrentVersionID != "" {
-		if err := tx.Where("id = ?", *localCfg.CurrentVersionID).First(&version).Error; err != nil {
-			return fmt.Errorf("load current version: %w", err)
-		}
-	} else {
-		if err := tx.Where("configuration_uuid = ?", localCfg.UUID).
-			Order("version_no DESC").First(&version).Error; err != nil {
-			return fmt.Errorf("load latest version: %w", err)
-		}
+	version, err := s.loadVersionForPendingTx(tx, localCfg)
+	if err != nil {
+		return err
 	}

 	cfg = localdb.LocalToConfiguration(localCfg)
-	if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, "update", &version, ""); err != nil {
+	if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, "update", version, ""); err != nil {
 		return fmt.Errorf("enqueue server-count pending change: %w", err)
 	}
 	return nil
@@ -852,6 +908,99 @@ func (s *LocalConfigurationService) UpdateServerCount(configUUID string, serverC
 	return cfg, nil
 }
+
+func (s *LocalConfigurationService) ReorderProjectConfigurationsNoAuth(projectUUID string, orderedUUIDs []string) ([]models.Configuration, error) {
+	projectUUID = strings.TrimSpace(projectUUID)
+	if projectUUID == "" {
+		return nil, ErrProjectNotFound
+	}
+	if _, err := s.localDB.GetProjectByUUID(projectUUID); err != nil {
+		return nil, ErrProjectNotFound
+	}
+	if len(orderedUUIDs) == 0 {
+		return []models.Configuration{}, nil
+	}
+
+	seen := make(map[string]struct{}, len(orderedUUIDs))
+	normalized := make([]string, 0, len(orderedUUIDs))
+	for _, raw := range orderedUUIDs {
+		u := strings.TrimSpace(raw)
+		if u == "" {
+			return nil, fmt.Errorf("ordered_uuids contains empty uuid")
+		}
+		if _, exists := seen[u]; exists {
+			return nil, fmt.Errorf("ordered_uuids contains duplicate uuid: %s", u)
+		}
+		seen[u] = struct{}{}
+		normalized = append(normalized, u)
+	}
+
+	err := s.localDB.DB().Transaction(func(tx *gorm.DB) error {
+		var active []localdb.LocalConfiguration
+		if err := tx.Where("project_uuid = ? AND is_active = ?", projectUUID, true).
+			Find(&active).Error; err != nil {
+			return fmt.Errorf("load project active configurations: %w", err)
+		}
+		if len(active) != len(normalized) {
+			return fmt.Errorf("ordered_uuids count mismatch: expected %d got %d", len(active), len(normalized))
+		}
+
+		byUUID := make(map[string]*localdb.LocalConfiguration, len(active))
+		for i := range active {
+			cfg := active[i]
+			byUUID[cfg.UUID] = &cfg
+		}
+		for _, id := range normalized {
+			if _, ok := byUUID[id]; !ok {
+				return fmt.Errorf("configuration %s not found in project %s", id, projectUUID)
+			}
+		}
+
+		now := time.Now()
+		for idx, id := range normalized {
+			cfg := byUUID[id]
+			newLine := (idx + 1) * 10
+			if cfg.Line == newLine {
+				continue
+			}
+
+			cfg.Line = newLine
+			cfg.UpdatedAt = now
+			cfg.SyncStatus = "pending"
+
+			if err := tx.Save(cfg).Error; err != nil {
+				return fmt.Errorf("save reordered configuration %s: %w", cfg.UUID, err)
+			}
+
+			version, err := s.loadVersionForPendingTx(tx, cfg)
+			if err != nil {
+				return err
+			}
+			if err := s.enqueueConfigurationPendingChangeTx(tx, cfg, "update", version, ""); err != nil {
+				return fmt.Errorf("enqueue reorder pending change for %s: %w", cfg.UUID, err)
+			}
+		}
+		return nil
+	})
+	if err != nil {
+		return nil, err
+	}
+
+	var localConfigs []localdb.LocalConfiguration
+	if err := s.localDB.DB().
+		Preload("CurrentVersion").
+		Where("project_uuid = ? AND is_active = ?", projectUUID, true).
+		Order("CASE WHEN COALESCE(line_no, 0) <= 0 THEN 2147483647 ELSE line_no END ASC, created_at DESC, id DESC").
+		Find(&localConfigs).Error; err != nil {
+		return nil, fmt.Errorf("load reordered configurations: %w", err)
+	}
+
+	result := make([]models.Configuration, 0, len(localConfigs))
+	for i := range localConfigs {
+		result = append(result, *localdb.LocalToConfiguration(&localConfigs[i]))
+	}
+	return result, nil
+}

 // ImportFromServer imports configurations from MariaDB to local SQLite cache.
 func (s *LocalConfigurationService) ImportFromServer() (*sync.ConfigImportResult, error) {
 	return s.syncService.ImportConfigurationsToLocal()
@@ -965,34 +1114,43 @@ func (s *LocalConfigurationService) isOwner(cfg *localdb.LocalConfiguration, own
 
 func (s *LocalConfigurationService) createWithVersion(localCfg *localdb.LocalConfiguration, createdBy string) error {
     return s.localDB.DB().Transaction(func(tx *gorm.DB) error {
-        if err := tx.Create(localCfg).Error; err != nil {
-            return fmt.Errorf("create local configuration: %w", err)
-        }
-
-        version, err := s.appendVersionTx(tx, localCfg, "create", createdBy)
-        if err != nil {
-            return fmt.Errorf("append create version: %w", err)
-        }
-
-        if err := tx.Model(&localdb.LocalConfiguration{}).
-            Where("uuid = ?", localCfg.UUID).
-            Update("current_version_id", version.ID).Error; err != nil {
-            return fmt.Errorf("set current version id: %w", err)
-        }
-        localCfg.CurrentVersionID = &version.ID
-        localCfg.CurrentVersion = version
-
-        if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, "create", version, createdBy); err != nil {
-            return fmt.Errorf("enqueue create pending change: %w", err)
-        }
-        if err := s.recalculateLocalPricelistUsageTx(tx); err != nil {
-            return fmt.Errorf("recalculate local pricelist usage: %w", err)
-        }
-
-        return nil
+        return s.createWithVersionTx(tx, localCfg, createdBy)
     })
 }
 
+func (s *LocalConfigurationService) createWithVersionTx(tx *gorm.DB, localCfg *localdb.LocalConfiguration, createdBy string) error {
+    if localCfg.IsActive {
+        if err := s.ensureConfigurationLineTx(tx, localCfg); err != nil {
+            return err
+        }
+    }
+    if err := tx.Create(localCfg).Error; err != nil {
+        return fmt.Errorf("create local configuration: %w", err)
+    }
+
+    version, err := s.appendVersionTx(tx, localCfg, "create", createdBy)
+    if err != nil {
+        return fmt.Errorf("append create version: %w", err)
+    }
+
+    if err := tx.Model(&localdb.LocalConfiguration{}).
+        Where("uuid = ?", localCfg.UUID).
+        Update("current_version_id", version.ID).Error; err != nil {
+        return fmt.Errorf("set current version id: %w", err)
+    }
+    localCfg.CurrentVersionID = &version.ID
+    localCfg.CurrentVersion = version
+
+    if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, "create", version, createdBy); err != nil {
+        return fmt.Errorf("enqueue create pending change: %w", err)
+    }
+    if err := s.recalculateLocalPricelistUsageTx(tx); err != nil {
+        return fmt.Errorf("recalculate local pricelist usage: %w", err)
+    }
+
+    return nil
+}
+
 func (s *LocalConfigurationService) saveWithVersionAndPending(localCfg *localdb.LocalConfiguration, operation string, createdBy string) (*models.Configuration, error) {
     var cfg *models.Configuration
 
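The hunk above splits "open a transaction" from "do the work" so the same body can also run inside a caller-supplied transaction (the reorder path relies on this). A minimal sketch of that shape, with a hypothetical `txRunner` standing in for `*gorm.DB` (names here are illustrative, not the repository's API):

```go
package main

import "fmt"

// txRunner is a stand-in for *gorm.DB; Begin/Commit/Rollback are elided.
type txRunner struct{ committed bool }

// withTransaction mimics gorm's db.Transaction: run fn, commit on nil error.
func withTransaction(fn func(tx *txRunner) error) error {
	tx := &txRunner{}
	if err := fn(tx); err != nil {
		return err // would roll back here
	}
	tx.committed = true
	return nil
}

// createWithVersionTx carries the actual work and takes the tx explicitly,
// so callers that already hold a transaction can reuse it.
func createWithVersionTx(tx *txRunner, name string) error {
	fmt.Printf("created %q inside tx\n", name)
	return nil
}

// createWithVersion is now just a thin wrapper, mirroring the diff above.
func createWithVersion(name string) error {
	return withTransaction(func(tx *txRunner) error {
		return createWithVersionTx(tx, name)
	})
}

func main() {
	if err := createWithVersion("cfg-a"); err != nil {
		fmt.Println("error:", err)
	}
}
```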
@@ -1021,12 +1179,31 @@ func (s *LocalConfigurationService) saveWithVersionAndPending(localCfg *localdb.
             return fmt.Errorf("compare revision content: %w", err)
         }
         if sameRevisionContent {
-            cfg = localdb.LocalToConfiguration(&locked)
+            if !hasNonRevisionConfigurationChanges(&locked, localCfg) {
+                cfg = localdb.LocalToConfiguration(&locked)
+                return nil
+            }
+            if err := tx.Save(localCfg).Error; err != nil {
+                return fmt.Errorf("save local configuration (no new revision): %w", err)
+            }
+            cfg = localdb.LocalToConfiguration(localCfg)
+            if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, operation, currentVersion, createdBy); err != nil {
+                return fmt.Errorf("enqueue %s pending change without revision: %w", operation, err)
+            }
+            if err := s.recalculateLocalPricelistUsageTx(tx); err != nil {
+                return fmt.Errorf("recalculate local pricelist usage: %w", err)
+            }
             return nil
         }
         }
     }
 
+    if localCfg.IsActive {
+        if err := s.ensureConfigurationLineTx(tx, localCfg); err != nil {
+            return err
+        }
+    }
+
     if err := tx.Save(localCfg).Error; err != nil {
         return fmt.Errorf("save local configuration: %w", err)
     }
@@ -1061,6 +1238,85 @@ func (s *LocalConfigurationService) saveWithVersionAndPending(localCfg *localdb.
     return cfg, nil
 }
 
+func hasNonRevisionConfigurationChanges(current *localdb.LocalConfiguration, next *localdb.LocalConfiguration) bool {
+    if current == nil || next == nil {
+        return true
+    }
+    if current.Name != next.Name ||
+        current.Notes != next.Notes ||
+        current.IsTemplate != next.IsTemplate ||
+        current.ServerModel != next.ServerModel ||
+        current.SupportCode != next.SupportCode ||
+        current.Article != next.Article ||
+        current.IsActive != next.IsActive ||
+        current.Line != next.Line {
+        return true
+    }
+    if !equalStringPtr(current.ProjectUUID, next.ProjectUUID) {
+        return true
+    }
+    return false
+}
+
+func (s *LocalConfigurationService) UpdateVendorSpecNoAuth(uuid string, spec localdb.VendorSpec) (*models.Configuration, error) {
+    localCfg, err := s.localDB.GetConfigurationByUUID(uuid)
+    if err != nil {
+        return nil, ErrConfigNotFound
+    }
+
+    localCfg.VendorSpec = spec
+    localCfg.UpdatedAt = time.Now()
+    localCfg.SyncStatus = "pending"
+
+    cfg, err := s.saveWithVersionAndPending(localCfg, "update", "")
+    if err != nil {
+        return nil, fmt.Errorf("update vendor spec without auth with version: %w", err)
+    }
+    return cfg, nil
+}
+
+func (s *LocalConfigurationService) ApplyVendorSpecItemsNoAuth(uuid string, items localdb.LocalConfigItems) (*models.Configuration, error) {
+    localCfg, err := s.localDB.GetConfigurationByUUID(uuid)
+    if err != nil {
+        return nil, ErrConfigNotFound
+    }
+
+    localCfg.Items = items
+    total := items.Total()
+    if localCfg.ServerCount > 1 {
+        total *= float64(localCfg.ServerCount)
+    }
+    localCfg.TotalPrice = &total
+    localCfg.UpdatedAt = time.Now()
+    localCfg.SyncStatus = "pending"
+
+    cfg, err := s.saveWithVersionAndPending(localCfg, "update", "")
+    if err != nil {
+        return nil, fmt.Errorf("apply vendor spec items without auth with version: %w", err)
+    }
+    return cfg, nil
+}
+
+func equalStringPtr(a, b *string) bool {
+    if a == nil && b == nil {
+        return true
+    }
+    if a == nil || b == nil {
+        return false
+    }
+    return strings.TrimSpace(*a) == strings.TrimSpace(*b)
+}
+
+func equalUintPtr(a, b *uint) bool {
+    if a == nil && b == nil {
+        return true
+    }
+    if a == nil || b == nil {
+        return false
+    }
+    return *a == *b
+}
+
 func (s *LocalConfigurationService) loadCurrentVersionTx(tx *gorm.DB, localCfg *localdb.LocalConfiguration) (*localdb.LocalConfigurationVersion, error) {
     var version localdb.LocalConfigurationVersion
     if localCfg.CurrentVersionID != nil && *localCfg.CurrentVersionID != "" {
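The pointer-comparison helpers added above treat two nils as equal, one nil as different, and compare string pointers ignoring surrounding whitespace. A self-contained re-implementation for illustration (only `equalStringPtr` is shown; the uint variant is analogous minus the trimming):

```go
package main

import (
	"fmt"
	"strings"
)

// equalStringPtr mirrors the helper in the diff: nil-safe comparison
// that ignores surrounding whitespace on both sides.
func equalStringPtr(a, b *string) bool {
	if a == nil && b == nil {
		return true
	}
	if a == nil || b == nil {
		return false
	}
	return strings.TrimSpace(*a) == strings.TrimSpace(*b)
}

func main() {
	x, y := "proj-1 ", " proj-1"
	fmt.Println(equalStringPtr(&x, &y)) // true: whitespace is ignored
	fmt.Println(equalStringPtr(&x, nil)) // false: only one side is nil
	fmt.Println(equalStringPtr(nil, nil)) // true: both unset
}
```

This matters for `hasNonRevisionConfigurationChanges`, where `ProjectUUID` may be nil for configurations outside any project.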
@@ -1082,6 +1338,76 @@ func (s *LocalConfigurationService) loadCurrentVersionTx(tx *gorm.DB, localCfg *
     return &version, nil
 }
 
+func (s *LocalConfigurationService) loadVersionForPendingTx(tx *gorm.DB, localCfg *localdb.LocalConfiguration) (*localdb.LocalConfigurationVersion, error) {
+    if localCfg.CurrentVersionID != nil && *localCfg.CurrentVersionID != "" {
+        var current localdb.LocalConfigurationVersion
+        if err := tx.Where("id = ?", *localCfg.CurrentVersionID).First(&current).Error; err == nil {
+            return &current, nil
+        }
+    }
+
+    var latest localdb.LocalConfigurationVersion
+    if err := tx.Where("configuration_uuid = ?", localCfg.UUID).
+        Order("version_no DESC").
+        First(&latest).Error; err != nil {
+        if !errors.Is(err, gorm.ErrRecordNotFound) {
+            return nil, fmt.Errorf("load version for pending change: %w", err)
+        }
+
+        // Legacy/imported rows may exist without local version history.
+        // Bootstrap the first version so pending sync payloads can reference a version.
+        version, createErr := s.appendVersionTx(tx, localCfg, "bootstrap", "")
+        if createErr != nil {
+            return nil, fmt.Errorf("bootstrap version for pending change: %w", createErr)
+        }
+        if err := tx.Model(&localdb.LocalConfiguration{}).
+            Where("uuid = ?", localCfg.UUID).
+            Update("current_version_id", version.ID).Error; err != nil {
+            return nil, fmt.Errorf("set current version id for bootstrapped pending change: %w", err)
+        }
+        localCfg.CurrentVersionID = &version.ID
+        return version, nil
+    }
+    return &latest, nil
+}
+
+func (s *LocalConfigurationService) ensureConfigurationLineTx(tx *gorm.DB, localCfg *localdb.LocalConfiguration) error {
+    if localCfg == nil || !localCfg.IsActive {
+        return nil
+    }
+
+    needsAssign := localCfg.Line <= 0
+    if !needsAssign {
+        query := tx.Model(&localdb.LocalConfiguration{}).
+            Where("is_active = ? AND line_no = ?", true, localCfg.Line)
+
+        if strings.TrimSpace(localCfg.UUID) != "" {
+            query = query.Where("uuid <> ?", strings.TrimSpace(localCfg.UUID))
+        }
+
+        if localCfg.ProjectUUID != nil && strings.TrimSpace(*localCfg.ProjectUUID) != "" {
+            query = query.Where("project_uuid = ?", strings.TrimSpace(*localCfg.ProjectUUID))
+        } else {
+            query = query.Where("project_uuid IS NULL OR TRIM(project_uuid) = ''")
+        }
+
+        var conflicts int64
+        if err := query.Count(&conflicts).Error; err != nil {
+            return fmt.Errorf("check line_no conflict for configuration %s: %w", localCfg.UUID, err)
+        }
+        needsAssign = conflicts > 0
+    }
+
+    if needsAssign {
+        line, err := localdb.NextConfigurationLineTx(tx, localCfg.ProjectUUID, localCfg.UUID)
+        if err != nil {
+            return fmt.Errorf("assign line_no for configuration %s: %w", localCfg.UUID, err)
+        }
+        localCfg.Line = line
+    }
+    return nil
+}
+
 func (s *LocalConfigurationService) hasSameRevisionContent(localCfg *localdb.LocalConfiguration, currentVersion *localdb.LocalConfigurationVersion) (bool, error) {
     currentSnapshotCfg, err := s.decodeConfigurationSnapshot(currentVersion.Data)
     if err != nil {
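`ensureConfigurationLineTx` boils down to one decision: reassign the line number when the configuration has none (`<= 0`) or when another active configuration in the same project already occupies it. A database-free sketch of just that decision (the map argument is a stand-in for the conflict-count query):

```go
package main

import "fmt"

// needsNewLine mirrors the decision in ensureConfigurationLineTx, without
// the database: assign a fresh line_no when none is set (<= 0) or when the
// current one is already taken by another active configuration.
func needsNewLine(line int, takenByOthers map[int]bool) bool {
	if line <= 0 {
		return true
	}
	return takenByOthers[line]
}

func main() {
	taken := map[int]bool{10: true, 20: true}
	fmt.Println(needsNewLine(0, taken))  // true: no line yet, assign one
	fmt.Println(needsNewLine(10, taken)) // true: conflict, reassign
	fmt.Println(needsNewLine(30, taken)) // false: free, keep it
}
```

In the real code the "taken" check excludes the configuration's own UUID and scopes to the same project (or to project-less rows), which is why the query adds those `Where` clauses.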
@@ -1193,8 +1519,18 @@ func (s *LocalConfigurationService) rollbackToVersion(configurationUUID string,
     current.Notes = rollbackData.Notes
     current.IsTemplate = rollbackData.IsTemplate
     current.ServerCount = rollbackData.ServerCount
+    current.ServerModel = rollbackData.ServerModel
+    current.SupportCode = rollbackData.SupportCode
+    current.Article = rollbackData.Article
     current.PricelistID = rollbackData.PricelistID
+    current.WarehousePricelistID = rollbackData.WarehousePricelistID
+    current.CompetitorPricelistID = rollbackData.CompetitorPricelistID
+    current.DisablePriceRefresh = rollbackData.DisablePriceRefresh
     current.OnlyInStock = rollbackData.OnlyInStock
+    current.VendorSpec = rollbackData.VendorSpec
+    if rollbackData.Line > 0 {
+        current.Line = rollbackData.Line
+    }
     current.PriceUpdatedAt = rollbackData.PriceUpdatedAt
     current.UpdatedAt = time.Now()
     current.SyncStatus = "pending"
@@ -137,6 +137,149 @@ func TestUpdateNoAuthSkipsRevisionWhenSpecAndPriceUnchanged(t *testing.T) {
     }
 }
 
+func TestUpdateNoAuthCreatesRevisionWhenPricingSettingsChanged(t *testing.T) {
+    service, local := newLocalConfigServiceForTest(t)
+
+    created, err := service.Create("tester", &CreateConfigRequest{
+        Name:        "pricing",
+        Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
+        ServerCount: 1,
+    })
+    if err != nil {
+        t.Fatalf("create config: %v", err)
+    }
+
+    if _, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
+        Name:                "pricing",
+        Items:               models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
+        ServerCount:         1,
+        DisablePriceRefresh: true,
+        OnlyInStock:         true,
+    }); err != nil {
+        t.Fatalf("update pricing settings: %v", err)
+    }
+
+    versions := loadVersions(t, local, created.UUID)
+    if len(versions) != 2 {
+        t.Fatalf("expected 2 versions after pricing settings change, got %d", len(versions))
+    }
+    if versions[1].VersionNo != 2 {
+        t.Fatalf("expected latest version_no=2, got %d", versions[1].VersionNo)
+    }
+}
+
+func TestUpdateVendorSpecNoAuthCreatesRevision(t *testing.T) {
+    service, local := newLocalConfigServiceForTest(t)
+
+    created, err := service.Create("tester", &CreateConfigRequest{
+        Name:        "bom",
+        Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
+        ServerCount: 1,
+    })
+    if err != nil {
+        t.Fatalf("create config: %v", err)
+    }
+
+    spec := localdb.VendorSpec{
+        {
+            VendorPartnumber: "PN-001",
+            Quantity:         2,
+            SortOrder:        10,
+            LotMappings: []localdb.VendorSpecLotMapping{
+                {LotName: "CPU_A", QuantityPerPN: 1},
+            },
+        },
+    }
+    if _, err := service.UpdateVendorSpecNoAuth(created.UUID, spec); err != nil {
+        t.Fatalf("update vendor spec: %v", err)
+    }
+
+    versions := loadVersions(t, local, created.UUID)
+    if len(versions) != 2 {
+        t.Fatalf("expected 2 versions after vendor spec change, got %d", len(versions))
+    }
+
+    cfg, err := local.GetConfigurationByUUID(created.UUID)
+    if err != nil {
+        t.Fatalf("load config after vendor spec update: %v", err)
+    }
+    if len(cfg.VendorSpec) != 1 || cfg.VendorSpec[0].VendorPartnumber != "PN-001" {
+        t.Fatalf("expected saved vendor spec, got %+v", cfg.VendorSpec)
+    }
+}
+
+func TestReorderProjectConfigurationsDoesNotCreateNewVersions(t *testing.T) {
+    service, local := newLocalConfigServiceForTest(t)
+
+    project := &localdb.LocalProject{
+        UUID:          "project-reorder",
+        OwnerUsername: "tester",
+        Code:          "PRJ-ORDER",
+        Variant:       "",
+        Name:          ptrString("Project Reorder"),
+        IsActive:      true,
+        CreatedAt:     time.Now(),
+        UpdatedAt:     time.Now(),
+        SyncStatus:    "pending",
+    }
+    if err := local.SaveProject(project); err != nil {
+        t.Fatalf("save project: %v", err)
+    }
+
+    first, err := service.Create("tester", &CreateConfigRequest{
+        Name:        "Cfg A",
+        Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 100}},
+        ServerCount: 1,
+        ProjectUUID: &project.UUID,
+    })
+    if err != nil {
+        t.Fatalf("create first config: %v", err)
+    }
+    second, err := service.Create("tester", &CreateConfigRequest{
+        Name:        "Cfg B",
+        Items:       models.ConfigItems{{LotName: "CPU_B", Quantity: 1, UnitPrice: 200}},
+        ServerCount: 1,
+        ProjectUUID: &project.UUID,
+    })
+    if err != nil {
+        t.Fatalf("create second config: %v", err)
+    }
+
+    beforeFirst := loadVersions(t, local, first.UUID)
+    beforeSecond := loadVersions(t, local, second.UUID)
+
+    reordered, err := service.ReorderProjectConfigurationsNoAuth(project.UUID, []string{second.UUID, first.UUID})
+    if err != nil {
+        t.Fatalf("reorder configurations: %v", err)
+    }
+    if len(reordered) != 2 {
+        t.Fatalf("expected 2 reordered configs, got %d", len(reordered))
+    }
+    if reordered[0].UUID != second.UUID || reordered[0].Line != 10 {
+        t.Fatalf("expected second config first with line 10, got uuid=%s line=%d", reordered[0].UUID, reordered[0].Line)
+    }
+    if reordered[1].UUID != first.UUID || reordered[1].Line != 20 {
+        t.Fatalf("expected first config second with line 20, got uuid=%s line=%d", reordered[1].UUID, reordered[1].Line)
+    }
+
+    afterFirst := loadVersions(t, local, first.UUID)
+    afterSecond := loadVersions(t, local, second.UUID)
+    if len(afterFirst) != len(beforeFirst) || len(afterSecond) != len(beforeSecond) {
+        t.Fatalf("reorder must not create new versions")
+    }
+
+    var pendingCount int64
+    if err := local.DB().
+        Table("pending_changes").
+        Where("entity_type = ? AND operation = ? AND entity_uuid IN ?", "configuration", "update", []string{first.UUID, second.UUID}).
+        Count(&pendingCount).Error; err != nil {
+        t.Fatalf("count reorder pending changes: %v", err)
+    }
+    if pendingCount < 2 {
+        t.Fatalf("expected at least 2 pending update changes for reorder, got %d", pendingCount)
+    }
+}
+
 func TestAppendOnlyInvariantOldRowsUnchanged(t *testing.T) {
     service, local := newLocalConfigServiceForTest(t)
 
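The reorder test above expects the first configuration in the requested order to get `line_no` 10 and the second 20, i.e. line numbers are assigned in steps of 10 following the caller's order. A sketch of that renumbering rule in isolation (the real service does this inside a transaction and enqueues pending changes instead of returning a map):

```go
package main

import "fmt"

// renumber assigns line_no values in steps of 10 following the order the
// caller supplies, matching what the reorder test asserts (10, 20, ...).
func renumber(uuids []string) map[string]int {
	lines := make(map[string]int, len(uuids))
	for i, id := range uuids {
		lines[id] = (i + 1) * 10
	}
	return lines
}

func main() {
	got := renumber([]string{"cfg-b", "cfg-a"})
	fmt.Println(got["cfg-b"], got["cfg-a"]) // 10 20
}
```

The step-of-10 gaps leave room to insert a configuration between two neighbors later without renumbering everything.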
@@ -16,10 +16,12 @@ import (
 )
 
 var (
     ErrProjectNotFound         = errors.New("project not found")
     ErrProjectForbidden        = errors.New("access to project forbidden")
     ErrProjectCodeExists       = errors.New("project code and variant already exist")
     ErrCannotDeleteMainVariant = errors.New("cannot delete main variant")
+    ErrReservedMainVariant     = errors.New("variant name 'main' is reserved")
+    ErrCannotRenameMainVariant = errors.New("cannot rename main variant")
 )
 
 type ProjectService struct {
@@ -31,10 +33,10 @@ func NewProjectService(localDB *localdb.LocalDB) *ProjectService {
 }
 
 type CreateProjectRequest struct {
     Code       string  `json:"code"`
     Variant    string  `json:"variant,omitempty"`
     Name       *string `json:"name,omitempty"`
     TrackerURL string  `json:"tracker_url"`
 }
 
 type UpdateProjectRequest struct {
@@ -63,6 +65,9 @@ func (s *ProjectService) Create(ownerUsername string, req *CreateProjectRequest)
         return nil, fmt.Errorf("project code is required")
     }
     variant := strings.TrimSpace(req.Variant)
+    if err := validateProjectVariantName(variant); err != nil {
+        return nil, err
+    }
     if err := s.ensureUniqueProjectCodeVariant("", code, variant); err != nil {
         return nil, err
     }
@@ -104,7 +109,15 @@ func (s *ProjectService) Update(projectUUID, ownerUsername string, req *UpdatePr
         localProject.Code = code
     }
     if req.Variant != nil {
-        localProject.Variant = strings.TrimSpace(*req.Variant)
+        newVariant := strings.TrimSpace(*req.Variant)
+        // Block renaming of the main variant (empty Variant): there must always be a main.
+        if strings.TrimSpace(localProject.Variant) == "" && newVariant != "" {
+            return nil, ErrCannotRenameMainVariant
+        }
+        localProject.Variant = newVariant
+        if err := validateProjectVariantName(localProject.Variant); err != nil {
+            return nil, err
+        }
     }
     if err := s.ensureUniqueProjectCodeVariant(projectUUID, localProject.Code, localProject.Variant); err != nil {
         return nil, err
@@ -166,6 +179,13 @@ func normalizeProjectVariant(variant string) string {
     return strings.ToLower(strings.TrimSpace(variant))
 }
 
+func validateProjectVariantName(variant string) error {
+    if normalizeProjectVariant(variant) == "main" {
+        return ErrReservedMainVariant
+    }
+    return nil
+}
+
 func (s *ProjectService) Archive(projectUUID, ownerUsername string) error {
     return s.setProjectActive(projectUUID, ownerUsername, false)
 }
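The validation above rejects the literal name "main" case-insensitively (via `normalizeProjectVariant`), because the main variant is represented by an empty `Variant` string rather than by the word "main". A self-contained re-statement of that rule:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

var errReservedMainVariant = errors.New("variant name 'main' is reserved")

// validateVariant mirrors validateProjectVariantName: the empty string IS
// the main variant, so the explicit name "main" (any case, any padding)
// is rejected to avoid two spellings of the same thing.
func validateVariant(variant string) error {
	if strings.ToLower(strings.TrimSpace(variant)) == "main" {
		return errReservedMainVariant
	}
	return nil
}

func main() {
	fmt.Println(validateVariant(" Main "))  // variant name 'main' is reserved
	fmt.Println(validateVariant("Lenovo"))  // <nil>
	fmt.Println(validateVariant(""))        // <nil>: empty means the main variant itself
}
```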
@@ -275,8 +295,23 @@ func (s *ProjectService) ListConfigurations(projectUUID, ownerUsername, status s
     }, nil
 }
 
+    query := s.localDB.DB().
+        Preload("CurrentVersion").
+        Where("project_uuid = ?", projectUUID).
+        Order("CASE WHEN COALESCE(line_no, 0) <= 0 THEN 2147483647 ELSE line_no END ASC, created_at DESC, id DESC")
+
+    switch status {
+    case "active", "":
+        query = query.Where("is_active = ?", true)
+    case "archived":
+        query = query.Where("is_active = ?", false)
+    case "all":
+    default:
+        query = query.Where("is_active = ?", true)
+    }
+
     var localConfigs []localdb.LocalConfiguration
-    if err := s.localDB.DB().Preload("CurrentVersion").Order("created_at DESC").Find(&localConfigs).Error; err != nil {
+    if err := query.Find(&localConfigs).Error; err != nil {
         return nil, err
     }
 
@@ -284,25 +319,6 @@ func (s *ProjectService) ListConfigurations(projectUUID, ownerUsername, status s
     total := 0.0
     for i := range localConfigs {
         localCfg := localConfigs[i]
-        if localCfg.ProjectUUID == nil || *localCfg.ProjectUUID != projectUUID {
-            continue
-        }
-        switch status {
-        case "active", "":
-            if !localCfg.IsActive {
-                continue
-            }
-        case "archived":
-            if localCfg.IsActive {
-                continue
-            }
-        case "all":
-        default:
-            if !localCfg.IsActive {
-                continue
-            }
-        }
-
         cfg := localdb.LocalToConfiguration(&localCfg)
         if cfg.TotalPrice != nil {
             total += *cfg.TotalPrice
 60  internal/services/project_test.go  Normal file
@@ -0,0 +1,60 @@
+package services
+
+import (
+    "errors"
+    "path/filepath"
+    "testing"
+
+    "git.mchus.pro/mchus/quoteforge/internal/localdb"
+)
+
+func TestProjectServiceCreateRejectsReservedMainVariant(t *testing.T) {
+    local, err := newProjectTestLocalDB(t)
+    if err != nil {
+        t.Fatalf("open localdb: %v", err)
+    }
+    service := NewProjectService(local)
+
+    _, err = service.Create("tester", &CreateProjectRequest{
+        Code:    "OPS-1",
+        Variant: "main",
+    })
+    if !errors.Is(err, ErrReservedMainVariant) {
+        t.Fatalf("expected ErrReservedMainVariant, got %v", err)
+    }
+}
+
+func TestProjectServiceUpdateRejectsReservedMainVariant(t *testing.T) {
+    local, err := newProjectTestLocalDB(t)
+    if err != nil {
+        t.Fatalf("open localdb: %v", err)
+    }
+    service := NewProjectService(local)
+
+    created, err := service.Create("tester", &CreateProjectRequest{
+        Code:    "OPS-1",
+        Variant: "Lenovo",
+    })
+    if err != nil {
+        t.Fatalf("create project: %v", err)
+    }
+
+    mainName := "main"
+    _, err = service.Update(created.UUID, "tester", &UpdateProjectRequest{
+        Variant: &mainName,
+    })
+    if !errors.Is(err, ErrReservedMainVariant) {
+        t.Fatalf("expected ErrReservedMainVariant, got %v", err)
+    }
+}
+
+func newProjectTestLocalDB(t *testing.T) (*localdb.LocalDB, error) {
+    t.Helper()
+    dbPath := filepath.Join(t.TempDir(), "project_test.db")
+    local, err := localdb.New(dbPath)
+    if err != nil {
+        return nil, err
+    }
+    t.Cleanup(func() { _ = local.Close() })
+    return local, nil
+}
@@ -289,6 +289,13 @@ func (s *QuoteService) CalculatePriceLevels(req *PriceLevelsRequest) (*PriceLeve
             result.ResolvedPricelistIDs[sourceKey] = latest.ID
         }
     }
+    if st.id == 0 && s.localDB != nil {
+        localPL, err := s.localDB.GetLatestLocalPricelistBySource(sourceKey)
+        if err == nil && localPL != nil {
+            st.id = localPL.ServerID
+            result.ResolvedPricelistIDs[sourceKey] = localPL.ServerID
+        }
+    }
     if st.id == 0 {
         continue
     }
@@ -381,13 +388,14 @@ func (s *QuoteService) lookupPricesByPricelistID(pricelistID uint, lotNames []st
         }
     }
 
-    // Fallback path (usually offline): local per-lot lookup.
+    // Fallback path (usually offline): batch local lookup (single query via index).
     if s.localDB != nil {
-        for _, lotName := range missing {
-            price, found := s.lookupPriceByPricelistID(pricelistID, lotName)
-            if found && price > 0 {
+        if localPL, err := s.localDB.GetLocalPricelistByServerID(pricelistID); err == nil {
+            if batchPrices, err := s.localDB.GetLocalPricesForLots(localPL.ID, missing); err == nil {
+                for lotName, price := range batchPrices {
                 result[lotName] = price
                 loaded[lotName] = price
+                }
             }
         }
     }
     s.updateCache(pricelistID, missing, loaded)
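The rewritten fallback above replaces N per-lot lookups with one batched query that returns a lot-to-price map, then copies every hit into both the result map and the map used to warm the cache. The merge step in isolation (the real code obtains the batch from SQLite via `GetLocalPricesForLots`):

```go
package main

import "fmt"

// mergeBatch mirrors the fallback branch: each price from the batched
// lookup goes into the caller's result map and into the cache-fill map.
func mergeBatch(batch map[string]float64, result, loaded map[string]float64) {
	for lot, price := range batch {
		result[lot] = price
		loaded[lot] = price
	}
}

func main() {
	result := map[string]float64{"CPU_A": 1000} // already resolved earlier
	loaded := map[string]float64{}              // only newly loaded prices
	mergeBatch(map[string]float64{"CPU_B": 200, "RAM_16": 90}, result, loaded)
	fmt.Println(len(result), len(loaded)) // 3 2
}
```

Batching matters here because the old loop issued one SQLite query per missing lot; the new shape is one query regardless of how many lots are missing.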
|||||||
153
internal/services/sync/partnumber_books.go
Normal file
153
internal/services/sync/partnumber_books.go
Normal file
@@ -0,0 +1,153 @@
|
|||||||
|
package sync

import (
	"encoding/json"
	"fmt"
	"log/slog"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/repository"
	"gorm.io/gorm"
)

// PullPartnumberBooks synchronizes partnumber book snapshots from MariaDB to local SQLite.
// Append-only for headers; re-pulls items if a book header exists but has 0 items.
func (s *Service) PullPartnumberBooks() (int, error) {
	slog.Info("starting partnumber book pull")

	mariaDB, err := s.getDB()
	if err != nil {
		return 0, fmt.Errorf("database not available: %w", err)
	}

	localBookRepo := repository.NewPartnumberBookRepository(s.localDB.DB())

	type serverBook struct {
		ID              int       `gorm:"column:id"`
		Version         string    `gorm:"column:version"`
		CreatedAt       time.Time `gorm:"column:created_at"`
		IsActive        bool      `gorm:"column:is_active"`
		PartnumbersJSON string    `gorm:"column:partnumbers_json"`
	}
	var serverBooks []serverBook
	if err := mariaDB.Raw("SELECT id, version, created_at, is_active, partnumbers_json FROM qt_partnumber_books ORDER BY created_at DESC, id DESC").Scan(&serverBooks).Error; err != nil {
		return 0, fmt.Errorf("querying server partnumber books: %w", err)
	}
	slog.Info("partnumber books found on server", "count", len(serverBooks))

	pulled := 0
	for _, sb := range serverBooks {
		var existing localdb.LocalPartnumberBook
		err := s.localDB.DB().Where("server_id = ?", sb.ID).First(&existing).Error
		partnumbers, errPartnumbers := decodeServerPartnumbers(sb.PartnumbersJSON)
		if errPartnumbers != nil {
			slog.Error("failed to decode server partnumbers_json", "server_id", sb.ID, "error", errPartnumbers)
			continue
		}
		if err == nil {
			existing.Version = sb.Version
			existing.CreatedAt = sb.CreatedAt
			existing.IsActive = sb.IsActive
			existing.PartnumbersJSON = partnumbers
			if err := localBookRepo.SaveBook(&existing); err != nil {
				slog.Error("failed to update local partnumber book header", "server_id", sb.ID, "error", err)
				continue
			}

			localItemCount := localBookRepo.CountBookItems(existing.ID)
			if localItemCount > 0 && localBookRepo.HasAllBookItems(existing.ID) {
				slog.Debug("partnumber book already synced, skipping", "server_id", sb.ID, "version", sb.Version, "items", localItemCount)
				continue
			}
			slog.Info("partnumber book header exists but catalog items are missing, re-pulling items", "server_id", sb.ID, "version", sb.Version)
			n, err := pullBookItems(mariaDB, localBookRepo, existing.PartnumbersJSON)
			if err != nil {
				slog.Error("failed to re-pull items for existing book", "server_id", sb.ID, "error", err)
			} else {
				slog.Info("re-pulled items for existing book", "server_id", sb.ID, "version", sb.Version, "items_saved", n)
				pulled++
			}
			continue
		}

		slog.Info("pulling new partnumber book", "server_id", sb.ID, "version", sb.Version, "is_active", sb.IsActive)

		localBook := &localdb.LocalPartnumberBook{
			ServerID:        sb.ID,
			Version:         sb.Version,
			CreatedAt:       sb.CreatedAt,
			IsActive:        sb.IsActive,
			PartnumbersJSON: partnumbers,
		}
		if err := localBookRepo.SaveBook(localBook); err != nil {
			slog.Error("failed to save local partnumber book", "server_id", sb.ID, "error", err)
			continue
		}

		n, err := pullBookItems(mariaDB, localBookRepo, localBook.PartnumbersJSON)
		if err != nil {
			slog.Error("failed to pull items for new book", "server_id", sb.ID, "error", err)
			continue
		}

		slog.Info("partnumber book saved locally", "server_id", sb.ID, "version", sb.Version, "items_saved", n)
		pulled++
	}

	slog.Info("partnumber book pull completed", "new_books_pulled", pulled, "total_on_server", len(serverBooks))
	return pulled, nil
}

// pullBookItems fetches catalog items for a partnumber list from MariaDB and saves them to SQLite.
// Returns the number of items saved.
func pullBookItems(mariaDB *gorm.DB, repo *repository.PartnumberBookRepository, partnumbers localdb.LocalStringList) (int, error) {
	if len(partnumbers) == 0 {
		return 0, nil
	}

	type serverItem struct {
		Partnumber  string `gorm:"column:partnumber"`
		LotsJSON    string `gorm:"column:lots_json"`
		Description string `gorm:"column:description"`
	}
	var serverItems []serverItem
	err := mariaDB.Raw("SELECT partnumber, lots_json, description FROM qt_partnumber_book_items WHERE partnumber IN ?", []string(partnumbers)).Scan(&serverItems).Error
	if err != nil {
		return 0, fmt.Errorf("querying items from server: %w", err)
	}
	slog.Info("partnumber book items fetched from server", "count", len(serverItems), "requested_partnumbers", len(partnumbers))

	if len(serverItems) == 0 {
		slog.Warn("server returned 0 partnumber book items")
		return 0, nil
	}

	localItems := make([]localdb.LocalPartnumberBookItem, 0, len(serverItems))
	for _, si := range serverItems {
		var lots localdb.LocalPartnumberBookLots
		if err := json.Unmarshal([]byte(si.LotsJSON), &lots); err != nil {
			return 0, fmt.Errorf("decode lots_json for %s: %w", si.Partnumber, err)
		}
		localItems = append(localItems, localdb.LocalPartnumberBookItem{
			Partnumber:  si.Partnumber,
			LotsJSON:    lots,
			Description: si.Description,
		})
	}
	if err := repo.SaveBookItems(localItems); err != nil {
		return 0, fmt.Errorf("saving items to local db: %w", err)
	}
	return len(localItems), nil
}

func decodeServerPartnumbers(raw string) (localdb.LocalStringList, error) {
	if raw == "" {
		return localdb.LocalStringList{}, nil
	}
	var items []string
	if err := json.Unmarshal([]byte(raw), &items); err != nil {
		return nil, err
	}
	return localdb.LocalStringList(items), nil
}
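The decode rule used by `decodeServerPartnumbers` above (empty payload means empty list, otherwise a JSON array of strings) can be sketched as a standalone program. This is a minimal illustration, not code from the repo: `decodePartnumbers` is a hypothetical stand-in name, and plain `[]string` replaces `localdb.LocalStringList`, which is not available outside the project.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodePartnumbers mirrors the decode rules of decodeServerPartnumbers:
// an empty payload is treated as an empty list, anything else must be a
// JSON array of strings.
func decodePartnumbers(raw string) ([]string, error) {
	if raw == "" {
		return []string{}, nil
	}
	var items []string
	if err := json.Unmarshal([]byte(raw), &items); err != nil {
		return nil, err
	}
	return items, nil
}

func main() {
	got, err := decodePartnumbers(`["PN-100","PN-200"]`)
	fmt.Println(got, err) // two partnumbers, no error

	empty, err := decodePartnumbers("")
	fmt.Println(len(empty), err) // empty list, no error
}
```

Note that a malformed payload surfaces as a `json.Unmarshal` error, which `PullPartnumberBooks` logs and skips rather than aborting the whole pull.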
49	internal/services/sync/partnumber_seen.go	Normal file
@@ -0,0 +1,49 @@
package sync

import (
	"fmt"
	"log/slog"
	"time"
)

// SeenPartnumber represents an unresolved vendor partnumber to report.
type SeenPartnumber struct {
	Partnumber  string
	Description string
	Ignored     bool
}

// PushPartnumberSeen inserts unresolved vendor partnumbers into qt_vendor_partnumber_seen on MariaDB.
// Existing rows are left untouched: no updates to last_seen_at, is_ignored, or description.
func (s *Service) PushPartnumberSeen(items []SeenPartnumber) error {
	if len(items) == 0 {
		return nil
	}

	mariaDB, err := s.getDB()
	if err != nil {
		return fmt.Errorf("database not available: %w", err)
	}

	now := time.Now().UTC()
	for _, item := range items {
		if item.Partnumber == "" {
			continue
		}
		err := mariaDB.Exec(`
			INSERT INTO qt_vendor_partnumber_seen
				(source_type, vendor, partnumber, description, is_ignored, last_seen_at)
			VALUES
				('manual', '', ?, ?, ?, ?)
			ON DUPLICATE KEY UPDATE
				partnumber = partnumber
		`, item.Partnumber, item.Description, item.Ignored, now).Error
		if err != nil {
			slog.Error("failed to insert partnumber_seen", "partnumber", item.Partnumber, "error", err)
			// Continue with remaining items
		}
	}

	slog.Info("partnumber_seen pushed to server", "count", len(items))
	return nil
}
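The `ON DUPLICATE KEY UPDATE partnumber = partnumber` clause above is the MySQL/MariaDB idiom for a first-write-wins insert: assigning a column to itself makes the duplicate-key branch a no-op, so re-pushing a partnumber never overwrites the stored description or flags. The semantics can be sketched with an in-memory map standing in for the table; `recordSeen` and `seenRow` are hypothetical names for illustration only.

```go
package main

import "fmt"

// seenRow is an in-memory stand-in for a qt_vendor_partnumber_seen row.
type seenRow struct {
	Description string
	Ignored     bool
}

// recordSeen mimics INSERT ... ON DUPLICATE KEY UPDATE partnumber = partnumber:
// the first write for a key is stored, later writes for the same key change nothing.
func recordSeen(table map[string]seenRow, partnumber, description string, ignored bool) {
	if partnumber == "" {
		return // PushPartnumberSeen skips empty partnumbers the same way
	}
	if _, exists := table[partnumber]; exists {
		return // duplicate key: leave the existing row untouched
	}
	table[partnumber] = seenRow{Description: description, Ignored: ignored}
}

func main() {
	table := map[string]seenRow{}
	recordSeen(table, "PN-1", "first description", false)
	recordSeen(table, "PN-1", "second description", true) // no-op on duplicate
	fmt.Println(table["PN-1"].Description)                // the first write survives
}
```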
@@ -1,13 +1,10 @@
 package sync
 
 import (
-	"bufio"
-	"crypto/sha256"
-	"encoding/hex"
 	"errors"
 	"fmt"
 	"log/slog"
-	"strconv"
+	"os"
 	"strings"
 	"time"
 
@@ -79,48 +76,6 @@ func (s *Service) GetReadiness() (*SyncReadiness, error) {
 		)
 	}
 
-	migrations, err := listActiveClientMigrations(mariaDB)
-	if err != nil {
-		return s.blockedReadiness(
-			now,
-			"REMOTE_MIGRATION_REGISTRY_UNAVAILABLE",
-			"Синхронизация заблокирована: не удалось проверить централизованные миграции локальной БД.",
-			nil,
-		)
-	}
-
-	for i := range migrations {
-		m := migrations[i]
-		if strings.TrimSpace(m.MinAppVersion) != "" {
-			if compareVersions(appmeta.Version(), m.MinAppVersion) < 0 {
-				min := m.MinAppVersion
-				return s.blockedReadiness(
-					now,
-					"MIN_APP_VERSION_REQUIRED",
-					fmt.Sprintf("Требуется обновление приложения до версии %s для безопасной синхронизации.", m.MinAppVersion),
-					&min,
-				)
-			}
-		}
-	}
-
-	if err := s.applyMissingRemoteMigrations(migrations); err != nil {
-		if strings.Contains(strings.ToLower(err.Error()), "checksum") {
-			return s.blockedReadiness(
-				now,
-				"REMOTE_MIGRATION_CHECKSUM_MISMATCH",
-				"Синхронизация заблокирована: контрольная сумма миграции не совпадает.",
-				nil,
-			)
-		}
-		return s.blockedReadiness(
-			now,
-			"LOCAL_MIGRATION_APPLY_FAILED",
-			"Синхронизация заблокирована: не удалось применить миграции локальной БД.",
-			nil,
-		)
-	}
-
 	if err := s.reportClientSchemaState(mariaDB, now); err != nil {
 		slog.Warn("failed to report client schema state", "error", err)
 	}
@@ -157,73 +112,72 @@ func (s *Service) isOnline() bool {
 	return s.connMgr.IsOnline()
 }
 
-type clientLocalMigration struct {
-	ID            string    `gorm:"column:id"`
-	Name          string    `gorm:"column:name"`
-	SQLText       string    `gorm:"column:sql_text"`
-	Checksum      string    `gorm:"column:checksum"`
-	MinAppVersion string    `gorm:"column:min_app_version"`
-	OrderNo       int       `gorm:"column:order_no"`
-	CreatedAt     time.Time `gorm:"column:created_at"`
-}
-
-func listActiveClientMigrations(db *gorm.DB) ([]clientLocalMigration, error) {
-	if strings.EqualFold(db.Dialector.Name(), "sqlite") {
-		return []clientLocalMigration{}, nil
-	}
-	if err := ensureClientMigrationRegistryTable(db); err != nil {
-		return nil, err
-	}
-
-	rows := make([]clientLocalMigration, 0)
-	if err := db.Raw(`
-		SELECT id, name, sql_text, checksum, COALESCE(min_app_version, '') AS min_app_version, order_no, created_at
-		FROM qt_client_local_migrations
-		WHERE is_active = 1
-		ORDER BY order_no ASC, created_at ASC, id ASC
-	`).Scan(&rows).Error; err != nil {
-		return nil, fmt.Errorf("load client local migrations: %w", err)
-	}
-
-	return rows, nil
-}
-
-func ensureClientMigrationRegistryTable(db *gorm.DB) error {
-	// Check if table exists instead of trying to create (avoids permission issues)
-	if !tableExists(db, "qt_client_local_migrations") {
-		if err := db.Exec(`
-			CREATE TABLE IF NOT EXISTS qt_client_local_migrations (
-				id VARCHAR(128) NOT NULL,
-				name VARCHAR(255) NOT NULL,
-				sql_text LONGTEXT NOT NULL,
-				checksum VARCHAR(128) NOT NULL,
-				min_app_version VARCHAR(64) NULL,
-				order_no INT NOT NULL DEFAULT 0,
-				is_active TINYINT(1) NOT NULL DEFAULT 1,
-				created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
-				PRIMARY KEY (id),
-				INDEX idx_qt_client_local_migrations_active_order (is_active, order_no, created_at)
-			)
-		`).Error; err != nil {
-			return fmt.Errorf("create qt_client_local_migrations table: %w", err)
-		}
-	}
-
+func ensureClientSchemaStateTable(db *gorm.DB) error {
 	if !tableExists(db, "qt_client_schema_state") {
 		if err := db.Exec(`
 			CREATE TABLE IF NOT EXISTS qt_client_schema_state (
 				username VARCHAR(100) NOT NULL,
-				last_applied_migration_id VARCHAR(128) NULL,
+				hostname VARCHAR(255) NOT NULL DEFAULT '',
 				app_version VARCHAR(64) NULL,
+				last_sync_at DATETIME NULL,
+				last_sync_status VARCHAR(32) NULL,
+				pending_changes_count INT NOT NULL DEFAULT 0,
+				pending_errors_count INT NOT NULL DEFAULT 0,
+				configurations_count INT NOT NULL DEFAULT 0,
+				projects_count INT NOT NULL DEFAULT 0,
+				estimate_pricelist_version VARCHAR(128) NULL,
+				warehouse_pricelist_version VARCHAR(128) NULL,
+				competitor_pricelist_version VARCHAR(128) NULL,
+				last_sync_error_code VARCHAR(128) NULL,
+				last_sync_error_text TEXT NULL,
 				last_checked_at DATETIME NOT NULL,
 				updated_at DATETIME NOT NULL,
-				PRIMARY KEY (username),
+				PRIMARY KEY (username, hostname),
 				INDEX idx_qt_client_schema_state_checked (last_checked_at)
 			)
 		`).Error; err != nil {
 			return fmt.Errorf("create qt_client_schema_state table: %w", err)
 		}
 	}
 
+	if tableExists(db, "qt_client_schema_state") {
+		if err := db.Exec(`
+			ALTER TABLE qt_client_schema_state
+			ADD COLUMN IF NOT EXISTS hostname VARCHAR(255) NOT NULL DEFAULT '' AFTER username
+		`).Error; err != nil {
+			return fmt.Errorf("add qt_client_schema_state.hostname: %w", err)
+		}
+
+		if err := db.Exec(`
+			ALTER TABLE qt_client_schema_state
+			DROP PRIMARY KEY,
+			ADD PRIMARY KEY (username, hostname)
+		`).Error; err != nil && !isDuplicatePrimaryKeyDefinition(err) {
+			return fmt.Errorf("set qt_client_schema_state primary key: %w", err)
+		}
+
+		for _, stmt := range []string{
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_at DATETIME NULL AFTER app_version",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_status VARCHAR(32) NULL AFTER last_sync_at",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS pending_changes_count INT NOT NULL DEFAULT 0 AFTER last_sync_status",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS pending_errors_count INT NOT NULL DEFAULT 0 AFTER pending_changes_count",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS configurations_count INT NOT NULL DEFAULT 0 AFTER pending_errors_count",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS projects_count INT NOT NULL DEFAULT 0 AFTER configurations_count",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS estimate_pricelist_version VARCHAR(128) NULL AFTER projects_count",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS warehouse_pricelist_version VARCHAR(128) NULL AFTER estimate_pricelist_version",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS competitor_pricelist_version VARCHAR(128) NULL AFTER warehouse_pricelist_version",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_error_code VARCHAR(128) NULL AFTER competitor_pricelist_version",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_error_text TEXT NULL AFTER last_sync_error_code",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS local_pricelist_count INT NOT NULL DEFAULT 0 AFTER last_sync_error_text",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS pricelist_items_count INT NOT NULL DEFAULT 0 AFTER local_pricelist_count",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS components_count INT NOT NULL DEFAULT 0 AFTER pricelist_items_count",
+			"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS db_size_bytes BIGINT NOT NULL DEFAULT 0 AFTER components_count",
+		} {
+			if err := db.Exec(stmt).Error; err != nil {
+				return fmt.Errorf("expand qt_client_schema_state: %w", err)
+			}
+		}
+	}
 	return nil
 }
 
@@ -239,159 +193,124 @@ func tableExists(db *gorm.DB, tableName string) bool {
 	return count > 0
 }
 
-func (s *Service) applyMissingRemoteMigrations(migrations []clientLocalMigration) error {
-	for i := range migrations {
-		m := migrations[i]
-		computedChecksum := digestSQL(m.SQLText)
-		checksum := strings.TrimSpace(m.Checksum)
-		if checksum == "" {
-			checksum = computedChecksum
-		} else if !strings.EqualFold(checksum, computedChecksum) {
-			return fmt.Errorf("checksum mismatch for migration %s", m.ID)
-		}
-
-		applied, err := s.localDB.GetRemoteMigrationApplied(m.ID)
-		if err == nil {
-			if strings.TrimSpace(applied.Checksum) != checksum {
-				return fmt.Errorf("checksum mismatch for migration %s", m.ID)
-			}
-			continue
-		}
-		if !errors.Is(err, gorm.ErrRecordNotFound) {
-			return fmt.Errorf("check local applied migration %s: %w", m.ID, err)
-		}
-
-		if strings.TrimSpace(m.SQLText) == "" {
-			if err := s.localDB.UpsertRemoteMigrationApplied(m.ID, checksum, appmeta.Version(), time.Now().UTC()); err != nil {
-				return fmt.Errorf("mark empty migration %s as applied: %w", m.ID, err)
-			}
-			continue
-		}
-
-		statements := splitSQLStatementsLite(m.SQLText)
-		if err := s.localDB.DB().Transaction(func(tx *gorm.DB) error {
-			for _, stmt := range statements {
-				if err := tx.Exec(stmt).Error; err != nil {
-					return fmt.Errorf("apply migration %s statement %q: %w", m.ID, stmt, err)
-				}
-			}
-			return nil
-		}); err != nil {
-			return err
-		}
-
-		if err := s.localDB.UpsertRemoteMigrationApplied(m.ID, checksum, appmeta.Version(), time.Now().UTC()); err != nil {
-			return fmt.Errorf("record applied migration %s: %w", m.ID, err)
-		}
-	}
-	return nil
-}
-
-func splitSQLStatementsLite(script string) []string {
-	scanner := bufio.NewScanner(strings.NewReader(script))
-	scanner.Buffer(make([]byte, 1024), 1024*1024)
-
-	lines := make([]string, 0, 64)
-	for scanner.Scan() {
-		line := strings.TrimSpace(scanner.Text())
-		if line == "" || strings.HasPrefix(line, "--") {
-			continue
-		}
-		lines = append(lines, scanner.Text())
-	}
-	combined := strings.Join(lines, "\n")
-	raw := strings.Split(combined, ";")
-	stmts := make([]string, 0, len(raw))
-	for _, stmt := range raw {
-		trimmed := strings.TrimSpace(stmt)
-		if trimmed == "" {
-			continue
-		}
-		stmts = append(stmts, trimmed)
-	}
-	return stmts
-}
-
-func digestSQL(sqlText string) string {
-	hash := sha256.Sum256([]byte(sqlText))
-	return hex.EncodeToString(hash[:])
-}
-
-func compareVersions(left, right string) int {
-	leftParts := normalizeVersionParts(left)
-	rightParts := normalizeVersionParts(right)
-	maxLen := len(leftParts)
-	if len(rightParts) > maxLen {
-		maxLen = len(rightParts)
-	}
-	for i := 0; i < maxLen; i++ {
-		lv := 0
-		rv := 0
-		if i < len(leftParts) {
-			lv = leftParts[i]
-		}
-		if i < len(rightParts) {
-			rv = rightParts[i]
-		}
-		if lv < rv {
-			return -1
-		}
-		if lv > rv {
-			return 1
-		}
-	}
-	return 0
-}
-
 func (s *Service) reportClientSchemaState(mariaDB *gorm.DB, checkedAt time.Time) error {
 	if strings.EqualFold(mariaDB.Dialector.Name(), "sqlite") {
 		return nil
 	}
+	if err := ensureClientSchemaStateTable(mariaDB); err != nil {
+		return err
+	}
 	username := strings.TrimSpace(s.localDB.GetDBUser())
 	if username == "" {
 		return nil
 	}
-	lastMigrationID := ""
-	if id, err := s.localDB.GetLatestAppliedRemoteMigrationID(); err == nil {
-		lastMigrationID = id
-	}
+	hostname, err := os.Hostname()
+	if err != nil {
+		hostname = ""
+	}
+	hostname = strings.TrimSpace(hostname)
+	lastSyncAt := s.localDB.GetLastSyncTime()
+	lastSyncStatus := ReadinessReady
+	pendingChangesCount := s.localDB.CountPendingChanges()
+	pendingErrorsCount := s.localDB.CountErroredChanges()
+	configurationsCount := s.localDB.CountConfigurations()
+	projectsCount := s.localDB.CountProjects()
+	estimateVersion := latestPricelistVersion(s.localDB, "estimate")
+	warehouseVersion := latestPricelistVersion(s.localDB, "warehouse")
+	competitorVersion := latestPricelistVersion(s.localDB, "competitor")
+	lastSyncErrorCode, lastSyncErrorText := latestSyncErrorState(s.localDB)
+	localPricelistCount := s.localDB.CountLocalPricelists()
+	pricelistItemsCount := s.localDB.CountAllPricelistItems()
+	componentsCount := s.localDB.CountComponents()
+	dbSizeBytes := s.localDB.DBFileSizeBytes()
 	return mariaDB.Exec(`
-		INSERT INTO qt_client_schema_state (username, last_applied_migration_id, app_version, last_checked_at, updated_at)
-		VALUES (?, ?, ?, ?, ?)
+		INSERT INTO qt_client_schema_state (
+			username, hostname, app_version,
+			last_sync_at, last_sync_status, pending_changes_count, pending_errors_count,
+			configurations_count, projects_count,
+			estimate_pricelist_version, warehouse_pricelist_version, competitor_pricelist_version,
+			last_sync_error_code, last_sync_error_text,
+			local_pricelist_count, pricelist_items_count, components_count, db_size_bytes,
+			last_checked_at, updated_at
+		)
+		VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
 		ON DUPLICATE KEY UPDATE
-			last_applied_migration_id = VALUES(last_applied_migration_id),
 			app_version = VALUES(app_version),
+			last_sync_at = VALUES(last_sync_at),
+			last_sync_status = VALUES(last_sync_status),
+			pending_changes_count = VALUES(pending_changes_count),
+			pending_errors_count = VALUES(pending_errors_count),
+			configurations_count = VALUES(configurations_count),
+			projects_count = VALUES(projects_count),
+			estimate_pricelist_version = VALUES(estimate_pricelist_version),
+			warehouse_pricelist_version = VALUES(warehouse_pricelist_version),
+			competitor_pricelist_version = VALUES(competitor_pricelist_version),
+			last_sync_error_code = VALUES(last_sync_error_code),
+			last_sync_error_text = VALUES(last_sync_error_text),
+			local_pricelist_count = VALUES(local_pricelist_count),
+			pricelist_items_count = VALUES(pricelist_items_count),
+			components_count = VALUES(components_count),
+			db_size_bytes = VALUES(db_size_bytes),
 			last_checked_at = VALUES(last_checked_at),
 			updated_at = VALUES(updated_at)
-	`, username, lastMigrationID, appmeta.Version(), checkedAt, checkedAt).Error
+	`, username, hostname, appmeta.Version(),
+		lastSyncAt, lastSyncStatus, pendingChangesCount, pendingErrorsCount,
+		configurationsCount, projectsCount,
+		estimateVersion, warehouseVersion, competitorVersion,
+		lastSyncErrorCode, lastSyncErrorText,
+		localPricelistCount, pricelistItemsCount, componentsCount, dbSizeBytes,
+		checkedAt, checkedAt).Error
 }
 
-func normalizeVersionParts(v string) []int {
-	trimmed := strings.TrimSpace(v)
-	trimmed = strings.TrimPrefix(trimmed, "v")
-	chunks := strings.Split(trimmed, ".")
-	parts := make([]int, 0, len(chunks))
-	for _, chunk := range chunks {
-		clean := strings.TrimSpace(chunk)
-		if clean == "" {
-			parts = append(parts, 0)
-			continue
-		}
-		n := 0
-		for i := 0; i < len(clean); i++ {
-			if clean[i] < '0' || clean[i] > '9' {
-				clean = clean[:i]
-				break
-			}
-		}
-		if clean != "" {
-			if parsed, err := strconv.Atoi(clean); err == nil {
-				n = parsed
-			}
-		}
-		parts = append(parts, n)
-	}
-	return parts
-}
+func isDuplicatePrimaryKeyDefinition(err error) bool {
+	if err == nil {
+		return false
+	}
+	msg := strings.ToLower(err.Error())
+	return strings.Contains(msg, "multiple primary key defined") ||
+		strings.Contains(msg, "duplicate key name 'primary'") ||
+		strings.Contains(msg, "duplicate entry")
+}
+
+func latestPricelistVersion(local *localdb.LocalDB, source string) *string {
+	if local == nil {
+		return nil
+	}
+	pl, err := local.GetLatestLocalPricelistBySource(source)
+	if err != nil || pl == nil {
+		return nil
+	}
+	version := strings.TrimSpace(pl.Version)
+	if version == "" {
+		return nil
+	}
+	return &version
+}
+
+func latestSyncErrorState(local *localdb.LocalDB) (*string, *string) {
+	if local == nil {
+		return nil, nil
+	}
+	if guard, err := local.GetSyncGuardState(); err == nil && guard != nil && strings.EqualFold(guard.Status, ReadinessBlocked) {
+		return optionalString(strings.TrimSpace(guard.ReasonCode)), optionalString(strings.TrimSpace(guard.ReasonText))
+	}
+
+	var pending localdb.PendingChange
+	if err := local.DB().
+		Where("TRIM(COALESCE(last_error, '')) <> ''").
+		Order("id DESC").
+		First(&pending).Error; err == nil {
+		return optionalString("PENDING_CHANGE_ERROR"), optionalString(strings.TrimSpace(pending.LastError))
+	}
+	return nil, nil
+}
+
+func optionalString(value string) *string {
+	if strings.TrimSpace(value) == "" {
+		return nil
+	}
+	v := strings.TrimSpace(value)
+	return &v
+}
 
 func toReadinessFromState(state *localdb.LocalSyncGuardState) *SyncReadiness {
@@ -7,6 +7,7 @@ import (
 	"log/slog"
 	"sort"
 	"strings"
+	"sync"
 	"time"
 
 	"git.mchus.pro/mchus/quoteforge/internal/appmeta"
@@ -22,9 +23,10 @@ var ErrOffline = errors.New("database is offline")
 
 // Service handles synchronization between MariaDB and local SQLite
 type Service struct {
 	connMgr  *db.ConnectionManager
 	localDB  *localdb.LocalDB
 	directDB *gorm.DB
+	pricelistMu sync.Mutex // prevents concurrent pricelist syncs
 }
 
 // NewService creates a new sync service
@@ -45,10 +47,15 @@ func NewServiceWithDB(mariaDB *gorm.DB, localDB *localdb.LocalDB) *Service {
 
 // SyncStatus represents the current sync status
 type SyncStatus struct {
 	LastSyncAt             *time.Time `json:"last_sync_at"`
-	ServerPricelists       int        `json:"server_pricelists"`
-	LocalPricelists        int        `json:"local_pricelists"`
-	NeedsSync              bool       `json:"needs_sync"`
+	LastAttemptAt          *time.Time `json:"last_attempt_at,omitempty"`
+	LastSyncStatus         string     `json:"last_sync_status,omitempty"`
+	LastSyncError          string     `json:"last_sync_error,omitempty"`
+	ServerPricelists       int        `json:"server_pricelists"`
+	LocalPricelists        int        `json:"local_pricelists"`
+	NeedsSync              bool       `json:"needs_sync"`
+	IncompleteServerSync   bool       `json:"incomplete_server_sync"`
+	KnownServerChangesMiss bool       `json:"known_server_changes_missing"`
 }
 
 type UserSyncStatus struct {
@@ -145,6 +152,9 @@ func (s *Service) ImportConfigurationsToLocal() (*ConfigImportResult, error) {
 
 		if existing != nil && err == nil {
 			localCfg.ID = existing.ID
+			if localCfg.Line <= 0 && existing.Line > 0 {
+				localCfg.Line = existing.Line
+			}
 			result.Updated++
 		} else {
 			result.Imported++
@@ -212,7 +222,7 @@ func (s *Service) ImportProjectsToLocal() (*ProjectImportResult, error) {
 		existing.SyncStatus = "synced"
 		existing.SyncedAt = &now
 
-		if err := s.localDB.SaveProject(existing); err != nil {
+		if err := s.localDB.SaveProjectPreservingUpdatedAt(existing); err != nil {
 			return nil, fmt.Errorf("saving local project %s: %w", project.UUID, err)
 		}
 		result.Updated++
@@ -222,7 +232,7 @@ func (s *Service) ImportProjectsToLocal() (*ProjectImportResult, error) {
 		localProject := localdb.ProjectToLocal(&project)
 		localProject.SyncStatus = "synced"
 		localProject.SyncedAt = &now
-		if err := s.localDB.SaveProject(localProject); err != nil {
+		if err := s.localDB.SaveProjectPreservingUpdatedAt(localProject); err != nil {
 			return nil, fmt.Errorf("saving local project %s: %w", project.UUID, err)
 		}
 		result.Imported++
@@ -237,30 +247,23 @@ func (s *Service) ImportProjectsToLocal() (*ProjectImportResult, error) {
 // GetStatus returns the current sync status
 func (s *Service) GetStatus() (*SyncStatus, error) {
 	lastSync := s.localDB.GetLastSyncTime()
-
-	// Count server pricelists (only if already connected, don't reconnect)
-	serverCount := 0
-	connStatus := s.getConnectionStatus()
-	if connStatus.IsConnected {
-		if mariaDB, err := s.getDB(); err == nil && mariaDB != nil {
-			pricelistRepo := repository.NewPricelistRepository(mariaDB)
-			activeCount, err := pricelistRepo.CountActive()
-			if err == nil {
-				serverCount = int(activeCount)
-			}
-		}
-	}
-
-	// Count local pricelists
+	lastAttempt := s.localDB.GetLastPricelistSyncAttemptAt()
+	lastSyncStatus := s.localDB.GetLastPricelistSyncStatus()
+	lastSyncError := s.localDB.GetLastPricelistSyncError()
 	localCount := s.localDB.CountLocalPricelists()
-	needsSync, _ := s.NeedSync()
+	hasFailedSync := strings.EqualFold(lastSyncStatus, "failed")
+	needsSync := lastSync == nil || hasFailedSync
 
 	return &SyncStatus{
-		LastSyncAt:       lastSync,
-		ServerPricelists: serverCount,
-		LocalPricelists:  int(localCount),
-		NeedsSync:        needsSync,
+		LastSyncAt:             lastSync,
+		LastAttemptAt:          lastAttempt,
+		LastSyncStatus:         lastSyncStatus,
+		LastSyncError:          lastSyncError,
+		ServerPricelists:       0,
+		LocalPricelists:        int(localCount),
+		NeedsSync:              needsSync,
+		IncompleteServerSync:   hasFailedSync,
+		KnownServerChangesMiss: hasFailedSync,
 	}, nil
 }
 
@@ -330,6 +333,7 @@ func (s *Service) SyncPricelists() (int, error) {
 	// Get database connection
 	mariaDB, err := s.getDB()
 	if err != nil {
+		s.recordPricelistSyncFailure(err)
 		return 0, fmt.Errorf("database not available: %w", err)
 	}

@@ -339,6 +343,7 @@ func (s *Service) SyncPricelists() (int, error) {
 	// Get active pricelists from server (up to 100)
 	serverPricelists, _, err := pricelistRepo.ListActive(0, 100)
 	if err != nil {
+		s.recordPricelistSyncFailure(err)
 		return 0, fmt.Errorf("getting active server pricelists: %w", err)
 	}
 	serverPricelistIDs := make([]uint, 0, len(serverPricelists))
@@ -347,6 +352,7 @@ func (s *Service) SyncPricelists() (int, error) {
 	}

 	synced := 0
+	var syncErr error
 	for _, pl := range serverPricelists {
 		// Check if pricelist already exists locally
 		existing, _ := s.localDB.GetLocalPricelistByServerID(pl.ID)
@@ -355,6 +361,9 @@ func (s *Service) SyncPricelists() (int, error) {
 			if s.localDB.CountLocalPricelistItems(existing.ID) == 0 {
 				itemCount, err := s.SyncPricelistItems(existing.ID)
 				if err != nil {
+					if syncErr == nil {
+						syncErr = fmt.Errorf("sync items for existing pricelist %s: %w", pl.Version, err)
+					}
 					slog.Warn("failed to sync missing pricelist items for existing local pricelist", "version", pl.Version, "error", err)
 				} else {
 					slog.Info("synced missing pricelist items for existing local pricelist", "version", pl.Version, "items", itemCount)
@@ -374,19 +383,15 @@ func (s *Service) SyncPricelists() (int, error) {
 			IsUsed: false,
 		}

-		if err := s.localDB.SaveLocalPricelist(localPL); err != nil {
-			slog.Warn("failed to save local pricelist", "version", pl.Version, "error", err)
+		itemCount, err := s.syncNewPricelistSnapshot(localPL)
+		if err != nil {
+			if syncErr == nil {
+				syncErr = fmt.Errorf("sync new pricelist %s: %w", pl.Version, err)
+			}
+			slog.Warn("failed to sync pricelist snapshot", "version", pl.Version, "error", err)
 			continue
 		}
-
-		// Sync items for the newly created pricelist
-		itemCount, err := s.SyncPricelistItems(localPL.ID)
-		if err != nil {
-			slog.Warn("failed to sync pricelist items", "version", pl.Version, "error", err)
-			// Continue even if items sync fails - we have the pricelist metadata
-		} else {
-			slog.Debug("synced pricelist with items", "version", pl.Version, "items", itemCount)
-		}
-
+		slog.Debug("synced pricelist with items", "version", pl.Version, "items", itemCount)
 		synced++
 	}
@@ -401,14 +406,96 @@ func (s *Service) SyncPricelists() (int, error) {
 	// Backfill lot_category for used pricelists (older local caches may miss the column values).
 	s.backfillUsedPricelistItemCategories(pricelistRepo, serverPricelistIDs)

+	if syncErr != nil {
+		s.recordPricelistSyncFailure(syncErr)
+		return synced, syncErr
+	}
+
 	// Update last sync time
-	s.localDB.SetLastSyncTime(time.Now())
+	now := time.Now()
+	s.localDB.SetLastSyncTime(now)
+	s.recordPricelistSyncSuccess(now)
 	s.RecordSyncHeartbeat()

 	slog.Info("pricelist sync completed", "synced", synced, "total", len(serverPricelists))
 	return synced, nil
 }

+func (s *Service) recordPricelistSyncSuccess(at time.Time) {
+	if s.localDB == nil {
+		return
+	}
+	if err := s.localDB.SetPricelistSyncResult("success", "", at); err != nil {
+		slog.Warn("failed to persist pricelist sync success state", "error", err)
+	}
+}
+
+func (s *Service) recordPricelistSyncFailure(syncErr error) {
+	if s.localDB == nil || syncErr == nil {
+		return
+	}
+	s.markConnectionBroken(syncErr)
+	if err := s.localDB.SetPricelistSyncResult("failed", syncErr.Error(), time.Now()); err != nil {
+		slog.Warn("failed to persist pricelist sync failure state", "error", err)
+	}
+}
+
+func (s *Service) markConnectionBroken(err error) {
+	if err == nil || s.connMgr == nil {
+		return
+	}
+
+	msg := strings.ToLower(err.Error())
+	switch {
+	case strings.Contains(msg, "i/o timeout"),
+		strings.Contains(msg, "invalid connection"),
+		strings.Contains(msg, "bad connection"),
+		strings.Contains(msg, "connection reset"),
+		strings.Contains(msg, "broken pipe"),
+		strings.Contains(msg, "unexpected eof"):
+		s.connMgr.MarkOffline(err)
+	}
+}
+
+func (s *Service) syncNewPricelistSnapshot(localPL *localdb.LocalPricelist) (int, error) {
+	if localPL == nil {
+		return 0, fmt.Errorf("local pricelist is nil")
+	}
+
+	localItems, err := s.fetchServerPricelistItems(localPL.ServerID)
+	if err != nil {
+		return 0, err
+	}
+
+	if err := s.localDB.DB().Transaction(func(tx *gorm.DB) error {
+		if err := tx.Create(localPL).Error; err != nil {
+			return fmt.Errorf("save local pricelist: %w", err)
+		}
+		if len(localItems) == 0 {
+			return nil
+		}
+		for i := range localItems {
+			localItems[i].PricelistID = localPL.ID
+		}
+		batchSize := 500
+		for i := 0; i < len(localItems); i += batchSize {
+			end := i + batchSize
+			if end > len(localItems) {
+				end = len(localItems)
+			}
+			if err := tx.CreateInBatches(localItems[i:end], batchSize).Error; err != nil {
+				return fmt.Errorf("save local pricelist items: %w", err)
+			}
+		}
+		return nil
+	}); err != nil {
+		return 0, err
+	}
+
+	slog.Info("synced pricelist items", "pricelist_id", localPL.ID, "items", len(localItems))
+	return len(localItems), nil
+}
+
 func (s *Service) backfillUsedPricelistItemCategories(pricelistRepo *repository.PricelistRepository, activeServerPricelistIDs []uint) {
 	if s.localDB == nil || pricelistRepo == nil {
 		return
@@ -667,27 +754,13 @@ func (s *Service) SyncPricelistItems(localPricelistID uint) (int, error) {
 		return int(existingCount), nil
 	}

-	// Get database connection
-	mariaDB, err := s.getDB()
+	localItems, err := s.fetchServerPricelistItems(localPL.ServerID)
 	if err != nil {
-		return 0, fmt.Errorf("database not available: %w", err)
+		return 0, err
 	}
-
-	// Create repository
-	pricelistRepo := repository.NewPricelistRepository(mariaDB)
-
-	// Get items from server
-	serverItems, _, err := pricelistRepo.GetItems(localPL.ServerID, 0, 10000, "")
-	if err != nil {
-		return 0, fmt.Errorf("getting server pricelist items: %w", err)
-	}
-
-	// Convert and save locally
-	localItems := make([]localdb.LocalPricelistItem, len(serverItems))
-	for i, item := range serverItems {
-		localItems[i] = *localdb.PricelistItemToLocal(&item, localPricelistID)
+	for i := range localItems {
+		localItems[i].PricelistID = localPricelistID
 	}
-
 	if err := s.localDB.SaveLocalPricelistItems(localItems); err != nil {
 		return 0, fmt.Errorf("saving local pricelist items: %w", err)
 	}
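`syncNewPricelistSnapshot` writes items inside one transaction in slices of at most 500 rows. The slice-bounds arithmetic generalizes to a small helper; this sketch (the generic `chunk` function is illustrative, not repository code) shows the same loop in isolation:

```go
package main

import "fmt"

// chunk splits items into consecutive sub-slices of at most size elements,
// using the same i/end bounds arithmetic the snapshot sync feeds to
// CreateInBatches: end = i + size, clamped to len(items).
func chunk[T any](items []T, size int) [][]T {
	var out [][]T
	for i := 0; i < len(items); i += size {
		end := i + size
		if end > len(items) {
			end = len(items)
		}
		out = append(out, items[i:end])
	}
	return out
}

func main() {
	// 1201 items in batches of 500 yields batches of 500, 500, and 201.
	for _, b := range chunk(make([]int, 1201), 500) {
		fmt.Println(len(b))
	}
}
```

Clamping `end` to the slice length keeps the final, shorter batch from indexing past the end.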
@@ -696,6 +769,30 @@ func (s *Service) SyncPricelistItems(localPricelistID uint) (int, error) {
 	return len(localItems), nil
 }

+func (s *Service) fetchServerPricelistItems(serverPricelistID uint) ([]localdb.LocalPricelistItem, error) {
+	// Get database connection
+	mariaDB, err := s.getDB()
+	if err != nil {
+		return nil, fmt.Errorf("database not available: %w", err)
+	}
+
+	// Create repository
+	pricelistRepo := repository.NewPricelistRepository(mariaDB)
+
+	// Get items from server
+	serverItems, _, err := pricelistRepo.GetItems(serverPricelistID, 0, 10000, "")
+	if err != nil {
+		return nil, fmt.Errorf("getting server pricelist items: %w", err)
+	}
+
+	localItems := make([]localdb.LocalPricelistItem, len(serverItems))
+	for i, item := range serverItems {
+		localItems[i] = *localdb.PricelistItemToLocal(&item, 0)
+	}
+
+	return localItems, nil
+}
+
 // SyncPricelistItemsByServerID syncs items for a pricelist by its server ID
 func (s *Service) SyncPricelistItemsByServerID(serverPricelistID uint) (int, error) {
 	localPL, err := s.localDB.GetLocalPricelistByServerID(serverPricelistID)
@@ -736,9 +833,15 @@ func (s *Service) GetPricelistForOffline(serverPricelistID uint) (*localdb.Local
 	return localPL, nil
 }

-// SyncPricelistsIfNeeded checks for new pricelists and syncs if needed
-// This should be called before creating a new configuration when online
+// SyncPricelistsIfNeeded checks for new pricelists and syncs if needed.
+// If a sync is already in progress, returns immediately without blocking.
 func (s *Service) SyncPricelistsIfNeeded() error {
+	if !s.pricelistMu.TryLock() {
+		slog.Debug("pricelist sync already in progress, skipping")
+		return nil
+	}
+	defer s.pricelistMu.Unlock()
+
 	needSync, err := s.NeedSync()
 	if err != nil {
 		slog.Warn("failed to check if sync needed", "error", err)
@@ -790,6 +893,7 @@ func (s *Service) PushPendingChanges() (int, error) {
 	for _, change := range sortedChanges {
 		err := s.pushSingleChange(&change)
 		if err != nil {
+			s.markConnectionBroken(err)
 			slog.Warn("failed to push change", "id", change.ID, "type", change.EntityType, "operation", change.Operation, "error", err)
 			// Increment attempts
 			s.localDB.IncrementPendingChangeAttempts(change.ID, err.Error())
@@ -897,7 +1001,7 @@ func (s *Service) pushProjectChange(change *localdb.PendingChange) error {
 		localProject.SyncStatus = "synced"
 		now := time.Now()
 		localProject.SyncedAt = &now
-		_ = s.localDB.SaveProject(localProject)
+		_ = s.localDB.SaveProjectPreservingUpdatedAt(localProject)
 	}

 	return nil
@@ -1167,7 +1271,7 @@ func (s *Service) ensureConfigurationProject(mariaDB *gorm.DB, cfg *models.Confi
 		localProject.SyncStatus = "synced"
 		now := time.Now()
 		localProject.SyncedAt = &now
-		_ = s.localDB.SaveProject(localProject)
+		_ = s.localDB.SaveProjectPreservingUpdatedAt(localProject)
 	}
 	return nil
 }
@@ -17,8 +17,6 @@ func TestSyncPricelists_BackfillsLotCategoryForUsedPricelistItems(t *testing.T)
 		&models.Pricelist{},
 		&models.PricelistItem{},
 		&models.Lot{},
-		&models.LotPartnumber{},
-		&models.StockLog{},
 	); err != nil {
 		t.Fatalf("migrate server tables: %v", err)
 	}
@@ -104,4 +102,3 @@ func TestSyncPricelists_BackfillsLotCategoryForUsedPricelistItems(t *testing.T)
 		t.Fatalf("expected lot_category backfilled to CPU, got %q", items[0].LotCategory)
 	}
 }
-
@@ -1,12 +1,15 @@
 package sync_test

 import (
+	"errors"
+	"strings"
 	"testing"
 	"time"

 	"git.mchus.pro/mchus/quoteforge/internal/localdb"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
+	"gorm.io/gorm"
 )

 func TestSyncPricelistsDeletesMissingUnusedLocalPricelists(t *testing.T) {
@@ -83,3 +86,58 @@ func TestSyncPricelistsDeletesMissingUnusedLocalPricelists(t *testing.T) {
 		t.Fatalf("expected server pricelist to be synced locally: %v", err)
 	}
 }
+
+func TestSyncPricelistsDoesNotPersistHeaderWithoutItems(t *testing.T) {
+	local := newLocalDBForSyncTest(t)
+	serverDB := newServerDBForSyncTest(t)
+	if err := serverDB.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}); err != nil {
+		t.Fatalf("migrate server pricelist tables: %v", err)
+	}
+
+	serverPL := models.Pricelist{
+		Source:       "estimate",
+		Version:      "2026-03-17-001",
+		Notification: "server",
+		CreatedBy:    "tester",
+		IsActive:     true,
+		CreatedAt:    time.Now().Add(-1 * time.Hour),
+	}
+	if err := serverDB.Create(&serverPL).Error; err != nil {
+		t.Fatalf("create server pricelist: %v", err)
+	}
+	if err := serverDB.Create(&models.PricelistItem{PricelistID: serverPL.ID, LotName: "CPU_A", Price: 10}).Error; err != nil {
+		t.Fatalf("create server pricelist item: %v", err)
+	}
+
+	const callbackName = "test:fail_qt_pricelist_items_query"
+	if err := serverDB.Callback().Query().Before("gorm:query").Register(callbackName, func(db *gorm.DB) {
+		if db.Statement != nil && db.Statement.Table == "qt_pricelist_items" {
+			_ = db.AddError(errors.New("forced pricelist item fetch failure"))
+		}
+	}); err != nil {
+		t.Fatalf("register query callback: %v", err)
+	}
+	defer serverDB.Callback().Query().Remove(callbackName)
+
+	svc := syncsvc.NewServiceWithDB(serverDB, local)
+	synced, err := svc.SyncPricelists()
+	if err == nil {
+		t.Fatalf("expected sync error when item fetch fails")
+	}
+	if synced != 0 {
+		t.Fatalf("expected synced=0 on incomplete sync, got %d", synced)
+	}
+	if !strings.Contains(err.Error(), "forced pricelist item fetch failure") {
+		t.Fatalf("expected item fetch error, got %v", err)
+	}
+
+	if _, err := local.GetLocalPricelistByServerID(serverPL.ID); err == nil {
+		t.Fatalf("expected pricelist header not to be persisted without items")
+	}
+	if got := local.CountLocalPricelists(); got != 0 {
+		t.Fatalf("expected no local pricelists after failed sync, got %d", got)
+	}
+	if ts := local.GetLastSyncTime(); ts != nil {
+		t.Fatalf("expected last_pricelist_sync to stay unset on incomplete sync, got %v", ts)
+	}
+}
@@ -250,6 +250,121 @@ func TestPushPendingChangesCreateIsIdempotent(t *testing.T) {
 	}
 }

+func TestPushPendingChangesConfigurationPushesLine(t *testing.T) {
+	local := newLocalDBForSyncTest(t)
+	serverDB := newServerDBForSyncTest(t)
+
+	localSync := syncsvc.NewService(nil, local)
+	configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })
+	pushService := syncsvc.NewServiceWithDB(serverDB, local)
+
+	created, err := configService.Create("tester", &services.CreateConfigRequest{
+		Name:        "Cfg Line Push",
+		Items:       models.ConfigItems{{LotName: "CPU_LINE", Quantity: 1, UnitPrice: 1000}},
+		ServerCount: 1,
+	})
+	if err != nil {
+		t.Fatalf("create config: %v", err)
+	}
+	if created.Line != 10 {
+		t.Fatalf("expected local create line=10, got %d", created.Line)
+	}
+
+	if _, err := pushService.PushPendingChanges(); err != nil {
+		t.Fatalf("push pending changes: %v", err)
+	}
+
+	var serverCfg models.Configuration
+	if err := serverDB.Where("uuid = ?", created.UUID).First(&serverCfg).Error; err != nil {
+		t.Fatalf("load server config: %v", err)
+	}
+	if serverCfg.Line != 10 {
+		t.Fatalf("expected server line=10 after push, got %d", serverCfg.Line)
+	}
+}
+
+func TestImportConfigurationsToLocalPullsLine(t *testing.T) {
+	local := newLocalDBForSyncTest(t)
+	serverDB := newServerDBForSyncTest(t)
+
+	cfg := models.Configuration{
+		UUID:          "server-line-config",
+		OwnerUsername: "tester",
+		Name:          "Cfg Line Pull",
+		Items:         models.ConfigItems{{LotName: "CPU_PULL", Quantity: 1, UnitPrice: 900}},
+		ServerCount:   1,
+		Line:          40,
+	}
+	total := cfg.Items.Total()
+	cfg.TotalPrice = &total
+	if err := serverDB.Create(&cfg).Error; err != nil {
+		t.Fatalf("seed server config: %v", err)
+	}
+
+	svc := syncsvc.NewServiceWithDB(serverDB, local)
+	if _, err := svc.ImportConfigurationsToLocal(); err != nil {
+		t.Fatalf("import configurations to local: %v", err)
+	}
+
+	localCfg, err := local.GetConfigurationByUUID(cfg.UUID)
+	if err != nil {
+		t.Fatalf("load local config: %v", err)
+	}
+	if localCfg.Line != 40 {
+		t.Fatalf("expected imported line=40, got %d", localCfg.Line)
+	}
+}
+
+func TestImportConfigurationsToLocalLoadsVendorSpecFromServer(t *testing.T) {
+	local := newLocalDBForSyncTest(t)
+	serverDB := newServerDBForSyncTest(t)
+
+	serverSpec := models.VendorSpec{
+		{
+			SortOrder:        10,
+			VendorPartnumber: "GPU-NVHGX-H200-8141",
+			Quantity:         1,
+			Description:      "NVIDIA HGX Delta-Next GPU Baseboard",
+			LotMappings: []models.VendorSpecLotMapping{
+				{LotName: "GPU_NV_H200_141GB_SXM_(HGX)", QuantityPerPN: 8},
+			},
+		},
+	}
+	cfg := models.Configuration{
+		UUID:          "server-vendorspec-config",
+		OwnerUsername: "tester",
+		Name:          "Cfg VendorSpec Pull",
+		Items:         models.ConfigItems{{LotName: "CPU_PULL", Quantity: 1, UnitPrice: 900}},
+		ServerCount:   1,
+		Line:          50,
+		VendorSpec:    serverSpec,
+	}
+	total := cfg.Items.Total()
+	cfg.TotalPrice = &total
+	if err := serverDB.Create(&cfg).Error; err != nil {
+		t.Fatalf("seed server config: %v", err)
+	}
+
+	svc := syncsvc.NewServiceWithDB(serverDB, local)
+	if _, err := svc.ImportConfigurationsToLocal(); err != nil {
+		t.Fatalf("import configurations to local: %v", err)
+	}
+
+	localCfg, err := local.GetConfigurationByUUID(cfg.UUID)
+	if err != nil {
+		t.Fatalf("load local config: %v", err)
+	}
+	if len(localCfg.VendorSpec) != 1 {
+		t.Fatalf("expected server vendor_spec imported, got %d rows", len(localCfg.VendorSpec))
+	}
+	if localCfg.VendorSpec[0].VendorPartnumber != "GPU-NVHGX-H200-8141" {
+		t.Fatalf("unexpected vendor_partnumber after import: %q", localCfg.VendorSpec[0].VendorPartnumber)
+	}
+	if len(localCfg.VendorSpec[0].LotMappings) != 1 || localCfg.VendorSpec[0].LotMappings[0].LotName != "GPU_NV_H200_141GB_SXM_(HGX)" {
+		t.Fatalf("unexpected lot mappings after import: %+v", localCfg.VendorSpec[0].LotMappings)
+	}
+}
+
 func TestPushPendingChangesCreateThenUpdateBeforeFirstPush(t *testing.T) {
 	local := newLocalDBForSyncTest(t)
 	serverDB := newServerDBForSyncTest(t)
@@ -361,7 +476,9 @@ CREATE TABLE qt_configurations (
 	competitor_pricelist_id INTEGER NULL,
 	disable_price_refresh INTEGER NOT NULL DEFAULT 0,
 	only_in_stock INTEGER NOT NULL DEFAULT 0,
+	line_no INTEGER NULL,
 	price_updated_at DATETIME NULL,
+	vendor_spec TEXT NULL,
 	created_at DATETIME
 );`).Error; err != nil {
 	t.Fatalf("create qt_configurations: %v", err)
141	internal/services/vendor_spec_resolver.go	Normal file
@@ -0,0 +1,141 @@
+package services
+
+import (
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"git.mchus.pro/mchus/quoteforge/internal/repository"
+	"math"
+)
+
+// ResolvedBOMRow is the result of resolving a single vendor BOM row.
+type ResolvedBOMRow struct {
+	localdb.VendorSpecItem
+	// ResolutionSource already on VendorSpecItem: "book", "manual_suggestion", "unresolved"
+}
+
+// AggregatedLOT represents a LOT with its aggregated quantity from the BOM.
+type AggregatedLOT struct {
+	LotName  string
+	Quantity int
+}
+
+// VendorSpecResolver resolves vendor BOM rows to LOT names using the active partnumber book.
+type VendorSpecResolver struct {
+	bookRepo *repository.PartnumberBookRepository
+}
+
+func NewVendorSpecResolver(bookRepo *repository.PartnumberBookRepository) *VendorSpecResolver {
+	return &VendorSpecResolver{bookRepo: bookRepo}
+}
+
+// Resolve resolves each vendor spec item's lot name using the 3-step algorithm.
+// It returns the resolved items. Manual lot suggestions from the input are preserved as pre-fill.
+func (r *VendorSpecResolver) Resolve(items []localdb.VendorSpecItem) ([]localdb.VendorSpecItem, error) {
+	// Step 1: Get the active book
+	book, err := r.bookRepo.GetActiveBook()
+	if err != nil {
+		// No book available — mark all as unresolved
+		for i := range items {
+			if items[i].ResolvedLotName == "" {
+				items[i].ResolutionSource = "unresolved"
+			}
+		}
+		return items, nil
+	}
+
+	for i, item := range items {
+		pn := item.VendorPartnumber
+
+		// Step 1: Look up in active book
+		matches, err := r.bookRepo.FindLotByPartnumber(book.ID, pn)
+		if err == nil && len(matches) > 0 {
+			items[i].LotMappings = make([]localdb.VendorSpecLotMapping, 0, len(matches[0].LotsJSON))
+			for _, lot := range matches[0].LotsJSON {
+				if lot.LotName == "" {
+					continue
+				}
+				items[i].LotMappings = append(items[i].LotMappings, localdb.VendorSpecLotMapping{
+					LotName:       lot.LotName,
+					QuantityPerPN: lotQtyToInt(lot.Qty),
+				})
+			}
+			if len(items[i].LotMappings) > 0 {
+				items[i].ResolvedLotName = items[i].LotMappings[0].LotName
+			}
+			items[i].ResolutionSource = "book"
+			continue
+		}
+
+		// Step 2: Pre-fill from manual_lot_suggestion if provided
+		if item.ManualLotSuggestion != "" {
+			items[i].ResolvedLotName = item.ManualLotSuggestion
+			items[i].ResolutionSource = "manual_suggestion"
+			continue
+		}
+
+		// Step 3: Unresolved
+		items[i].ResolvedLotName = ""
+		items[i].ResolutionSource = "unresolved"
+	}
+
+	return items, nil
+}
+
+// AggregateLOTs applies qty from the resolved PN composition stored in lots_json.
+func AggregateLOTs(items []localdb.VendorSpecItem, book *localdb.LocalPartnumberBook, bookRepo *repository.PartnumberBookRepository) ([]AggregatedLOT, error) {
+	lotTotals := make(map[string]int)
+
+	if book != nil {
+		for _, item := range items {
+			if item.ResolvedLotName == "" {
+				continue
+			}
+			lot := item.ResolvedLotName
+			pn := item.VendorPartnumber
+
+			matches, err := bookRepo.FindLotByPartnumber(book.ID, pn)
+			if err != nil || len(matches) == 0 {
+				lotTotals[lot] += item.Quantity
+				continue
+			}
+			for _, m := range matches {
+				for _, mappedLot := range m.LotsJSON {
+					if mappedLot.LotName != lot {
+						continue
+					}
+					lotTotals[lot] += item.Quantity * lotQtyToInt(mappedLot.Qty)
+				}
+			}
+		}
+	} else {
+		// No book: all resolved rows contribute qty=1 per lot
+		for _, item := range items {
+			if item.ResolvedLotName != "" {
+				lotTotals[item.ResolvedLotName] += item.Quantity
+			}
+		}
+	}
+
+	// Build aggregated list
+	seen := make(map[string]bool)
+	var result []AggregatedLOT
+	for _, item := range items {
+		lot := item.ResolvedLotName
+		if lot == "" || seen[lot] {
+			continue
+		}
+		seen[lot] = true
+		qty := lotTotals[lot]
+		if qty < 1 {
+			qty = 1
+		}
+		result = append(result, AggregatedLOT{LotName: lot, Quantity: qty})
+	}
+	return result, nil
+}
+
+func lotQtyToInt(qty float64) int {
+	if qty < 1 {
+		return 1
+	}
+	return int(math.Round(qty))
+}
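The new `lotQtyToInt` helper above normalizes fractional per-PN quantities from `lots_json`: values below 1 clamp to 1, and everything else is rounded to the nearest integer. A self-contained copy demonstrating that behavior:

```go
package main

import (
	"fmt"
	"math"
)

// lotQtyToInt mirrors the helper added in vendor_spec_resolver.go:
// quantities below 1 clamp to a minimum of 1, all others round
// half-away-from-zero via math.Round.
func lotQtyToInt(qty float64) int {
	if qty < 1 {
		return 1
	}
	return int(math.Round(qty))
}

func main() {
	for _, q := range []float64{0, 0.25, 1, 2.4, 2.5, 8} {
		fmt.Printf("%.2f -> %d\n", q, lotQtyToInt(q))
	}
}
```

The clamp ensures a resolved part number always contributes at least one unit of its mapped LOT even when the book stores a fractional composition.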
560
internal/services/vendor_workspace_import.go
Normal file
560
internal/services/vendor_workspace_import.go
Normal file
@@ -0,0 +1,560 @@
package services

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"path/filepath"
	"sort"
	"strconv"
	"strings"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"git.mchus.pro/mchus/quoteforge/internal/repository"
	"github.com/google/uuid"
	"gorm.io/gorm"
)

type VendorWorkspaceImportResult struct {
	Imported int                             `json:"imported"`
	Project  *models.Project                 `json:"project,omitempty"`
	Configs  []VendorWorkspaceImportedConfig `json:"configs"`
}

type VendorWorkspaceImportedConfig struct {
	UUID        string `json:"uuid"`
	Name        string `json:"name"`
	ServerCount int    `json:"server_count"`
	ServerModel string `json:"server_model,omitempty"`
	Rows        int    `json:"rows"`
}

type importedWorkspace struct {
	SourceFormat   string
	SourceDocID    string
	SourceFileName string
	CurrencyCode   string
	Configurations []importedConfiguration
}

type importedConfiguration struct {
	GroupID      string
	Name         string
	Line         int
	ServerCount  int
	ServerModel  string
	Article      string
	CurrencyCode string
	Rows         []localdb.VendorSpecItem
	TotalPrice   *float64
}

type groupedItem struct {
	order int
	row   cfxmlProductLineItem
}

type cfxmlDocument struct {
	XMLName                xml.Name                `xml:"CFXML"`
	ThisDocumentIdentifier cfxmlDocumentIdentifier `xml:"thisDocumentIdentifier"`
	CFData                 cfxmlData               `xml:"CFData"`
}

type cfxmlDocumentIdentifier struct {
	ProprietaryDocumentIdentifier string `xml:"ProprietaryDocumentIdentifier"`
}

type cfxmlData struct {
	ProprietaryInformation []cfxmlProprietaryInformation `xml:"ProprietaryInformation"`
	ProductLineItems       []cfxmlProductLineItem        `xml:"ProductLineItem"`
}

type cfxmlProprietaryInformation struct {
	Name  string `xml:"Name"`
	Value string `xml:"Value"`
}

type cfxmlProductLineItem struct {
	ProductLineNumber                     string                     `xml:"ProductLineNumber"`
	ItemNo                                string                     `xml:"ItemNo"`
	TransactionType                       string                     `xml:"TransactionType"`
	ProprietaryGroupIdentifier            string                     `xml:"ProprietaryGroupIdentifier"`
	ConfigurationGroupLineNumberReference string                     `xml:"ConfigurationGroupLineNumberReference"`
	Quantity                              string                     `xml:"Quantity"`
	ProductIdentification                 cfxmlProductIdentification `xml:"ProductIdentification"`
	UnitListPrice                         cfxmlUnitListPrice         `xml:"UnitListPrice"`
	ProductSubLineItems                   []cfxmlProductSubLineItem  `xml:"ProductSubLineItem"`
}

type cfxmlProductSubLineItem struct {
	LineNumber            string                     `xml:"LineNumber"`
	TransactionType       string                     `xml:"TransactionType"`
	Quantity              string                     `xml:"Quantity"`
	ProductIdentification cfxmlProductIdentification `xml:"ProductIdentification"`
	UnitListPrice         cfxmlUnitListPrice         `xml:"UnitListPrice"`
}

type cfxmlProductIdentification struct {
	PartnerProductIdentification cfxmlPartnerProductIdentification `xml:"PartnerProductIdentification"`
}

type cfxmlPartnerProductIdentification struct {
	ProprietaryProductIdentifier string `xml:"ProprietaryProductIdentifier"`
	ProprietaryProductChar       string `xml:"ProprietaryProductChar"`
	ProductCharacter             string `xml:"ProductCharacter"`
	ProductDescription           string `xml:"ProductDescription"`
	ProductName                  string `xml:"ProductName"`
	ProductTypeCode              string `xml:"ProductTypeCode"`
}

type cfxmlUnitListPrice struct {
	FinancialAmount cfxmlFinancialAmount `xml:"FinancialAmount"`
}

type cfxmlFinancialAmount struct {
	GlobalCurrencyCode string `xml:"GlobalCurrencyCode"`
	MonetaryAmount     string `xml:"MonetaryAmount"`
}

func (s *LocalConfigurationService) ImportVendorWorkspaceToProject(projectUUID string, sourceFileName string, data []byte, ownerUsername string) (*VendorWorkspaceImportResult, error) {
	project, err := s.localDB.GetProjectByUUID(projectUUID)
	if err != nil {
		return nil, ErrProjectNotFound
	}

	workspace, err := parseCFXMLWorkspace(data, filepath.Base(sourceFileName))
	if err != nil {
		return nil, err
	}

	result := &VendorWorkspaceImportResult{
		Imported: 0,
		Project:  localdb.LocalToProject(project),
		Configs:  make([]VendorWorkspaceImportedConfig, 0, len(workspace.Configurations)),
	}

	err = s.localDB.DB().Transaction(func(tx *gorm.DB) error {
		bookRepo := repository.NewPartnumberBookRepository(tx)
		for _, imported := range workspace.Configurations {
			now := time.Now()
			cfgUUID := uuid.NewString()
			groupRows, items, totalPrice, estimatePricelistID, err := s.prepareImportedConfiguration(imported.Rows, imported.ServerCount, bookRepo)
			if err != nil {
				return fmt.Errorf("prepare imported configuration group %s: %w", imported.GroupID, err)
			}
			localCfg := &localdb.LocalConfiguration{
				UUID:             cfgUUID,
				ProjectUUID:      &projectUUID,
				IsActive:         true,
				Name:             imported.Name,
				Items:            items,
				TotalPrice:       totalPrice,
				ServerCount:      imported.ServerCount,
				ServerModel:      imported.ServerModel,
				Article:          imported.Article,
				PricelistID:      estimatePricelistID,
				VendorSpec:       groupRows,
				CreatedAt:        now,
				UpdatedAt:        now,
				SyncStatus:       "pending",
				OriginalUsername: ownerUsername,
			}

			if err := s.createWithVersionTx(tx, localCfg, ownerUsername); err != nil {
				return fmt.Errorf("import configuration group %s: %w", imported.GroupID, err)
			}

			result.Imported++
			result.Configs = append(result.Configs, VendorWorkspaceImportedConfig{
				UUID:        localCfg.UUID,
				Name:        localCfg.Name,
				ServerCount: localCfg.ServerCount,
				ServerModel: localCfg.ServerModel,
				Rows:        len(localCfg.VendorSpec),
			})
		}
		return nil
	})
	if err != nil {
		return nil, err
	}

	return result, nil
}

func (s *LocalConfigurationService) prepareImportedConfiguration(rows []localdb.VendorSpecItem, serverCount int, bookRepo *repository.PartnumberBookRepository) (localdb.VendorSpec, localdb.LocalConfigItems, *float64, *uint, error) {
	resolver := NewVendorSpecResolver(bookRepo)
	resolved, err := resolver.Resolve(append([]localdb.VendorSpecItem(nil), rows...))
	if err != nil {
		return nil, nil, nil, nil, err
	}

	canonical := make(localdb.VendorSpec, 0, len(resolved))
	for _, row := range resolved {
		if len(row.LotMappings) == 0 && strings.TrimSpace(row.ResolvedLotName) != "" {
			row.LotMappings = []localdb.VendorSpecLotMapping{
				{LotName: strings.TrimSpace(row.ResolvedLotName), QuantityPerPN: 1},
			}
		}
		row.LotMappings = normalizeImportedLotMappings(row.LotMappings)
		row.ResolvedLotName = ""
		row.ResolutionSource = ""
		row.ManualLotSuggestion = ""
		row.LotQtyPerPN = 0
		row.LotAllocations = nil
		canonical = append(canonical, row)
	}

	estimatePricelist, _ := s.localDB.GetLatestLocalPricelistBySource("estimate")
	var serverPricelistID *uint
	if estimatePricelist != nil {
		serverPricelistID = &estimatePricelist.ServerID
	}

	items := aggregateVendorSpecToItems(canonical, estimatePricelist, s.localDB)
	totalValue := items.Total()
	if serverCount > 1 {
		totalValue *= float64(serverCount)
	}
	totalPrice := &totalValue
	return canonical, items, totalPrice, serverPricelistID, nil
}

func aggregateVendorSpecToItems(spec localdb.VendorSpec, estimatePricelist *localdb.LocalPricelist, local *localdb.LocalDB) localdb.LocalConfigItems {
	if len(spec) == 0 {
		return localdb.LocalConfigItems{}
	}

	lotMap := make(map[string]int)
	order := make([]string, 0)
	for _, row := range spec {
		for _, mapping := range normalizeImportedLotMappings(row.LotMappings) {
			if _, exists := lotMap[mapping.LotName]; !exists {
				order = append(order, mapping.LotName)
			}
			lotMap[mapping.LotName] += row.Quantity * mapping.QuantityPerPN
		}
	}

	sort.Strings(order)
	items := make(localdb.LocalConfigItems, 0, len(order))
	for _, lotName := range order {
		unitPrice := 0.0
		if estimatePricelist != nil && local != nil {
			if price, err := local.GetLocalPriceForLot(estimatePricelist.ID, lotName); err == nil && price > 0 {
				unitPrice = price
			}
		}
		items = append(items, localdb.LocalConfigItem{
			LotName:   lotName,
			Quantity:  lotMap[lotName],
			UnitPrice: unitPrice,
		})
	}
	return items
}

func normalizeImportedLotMappings(in []localdb.VendorSpecLotMapping) []localdb.VendorSpecLotMapping {
	if len(in) == 0 {
		return nil
	}
	merged := make(map[string]int, len(in))
	order := make([]string, 0, len(in))
	for _, mapping := range in {
		lot := strings.TrimSpace(mapping.LotName)
		if lot == "" {
			continue
		}
		qty := mapping.QuantityPerPN
		if qty < 1 {
			qty = 1
		}
		if _, exists := merged[lot]; !exists {
			order = append(order, lot)
		}
		merged[lot] += qty
	}
	out := make([]localdb.VendorSpecLotMapping, 0, len(order))
	for _, lot := range order {
		out = append(out, localdb.VendorSpecLotMapping{
			LotName:       lot,
			QuantityPerPN: merged[lot],
		})
	}
	if len(out) == 0 {
		return nil
	}
	return out
}

func parseCFXMLWorkspace(data []byte, sourceFileName string) (*importedWorkspace, error) {
	var doc cfxmlDocument
	if err := xml.Unmarshal(data, &doc); err != nil {
		return nil, fmt.Errorf("parse CFXML workspace: %w", err)
	}
	if doc.XMLName.Local != "CFXML" {
		return nil, fmt.Errorf("unsupported workspace root: %s", doc.XMLName.Local)
	}
	if len(doc.CFData.ProductLineItems) == 0 {
		return nil, fmt.Errorf("CFXML workspace has no ProductLineItem rows")
	}

	workspace := &importedWorkspace{
		SourceFormat:   "CFXML",
		SourceDocID:    strings.TrimSpace(doc.ThisDocumentIdentifier.ProprietaryDocumentIdentifier),
		SourceFileName: sourceFileName,
		CurrencyCode:   detectWorkspaceCurrency(doc.CFData.ProprietaryInformation, doc.CFData.ProductLineItems),
	}

	type groupBucket struct {
		order int
		items []groupedItem
	}

	groupOrder := make([]string, 0)
	groups := make(map[string]*groupBucket)
	for idx, item := range doc.CFData.ProductLineItems {
		groupID := strings.TrimSpace(item.ProprietaryGroupIdentifier)
		if groupID == "" {
			groupID = firstNonEmpty(strings.TrimSpace(item.ProductLineNumber), strings.TrimSpace(item.ItemNo), fmt.Sprintf("group-%d", idx+1))
		}
		bucket := groups[groupID]
		if bucket == nil {
			bucket = &groupBucket{order: idx}
			groups[groupID] = bucket
			groupOrder = append(groupOrder, groupID)
		}
		bucket.items = append(bucket.items, groupedItem{order: idx, row: item})
	}

	for lineIdx, groupID := range groupOrder {
		bucket := groups[groupID]
		if bucket == nil || len(bucket.items) == 0 {
			continue
		}
		primary := pickPrimaryTopLevelRow(bucket.items)
		serverCount := maxInt(parseInt(primary.row.Quantity), 1)
		rows := make([]localdb.VendorSpecItem, 0, len(bucket.items)*4)
		sortOrder := 10

		for _, item := range bucket.items {
			topRow := vendorSpecItemFromTopLevel(item.row, serverCount, sortOrder)
			if topRow != nil {
				rows = append(rows, *topRow)
				sortOrder += 10
			}

			for _, sub := range item.row.ProductSubLineItems {
				subRow := vendorSpecItemFromSubLine(sub, sortOrder)
				if subRow == nil {
					continue
				}
				rows = append(rows, *subRow)
				sortOrder += 10
			}
		}

		total := sumVendorSpecRows(rows, serverCount)
		name := strings.TrimSpace(primary.row.ProductIdentification.PartnerProductIdentification.ProductName)
		if name == "" {
			name = strings.TrimSpace(primary.row.ProductIdentification.PartnerProductIdentification.ProductDescription)
		}
		if name == "" {
			name = fmt.Sprintf("Imported config %d", lineIdx+1)
		}

		workspace.Configurations = append(workspace.Configurations, importedConfiguration{
			GroupID:      groupID,
			Name:         name,
			Line:         (lineIdx + 1) * 10,
			ServerCount:  serverCount,
			ServerModel:  strings.TrimSpace(primary.row.ProductIdentification.PartnerProductIdentification.ProductDescription),
			Article:      strings.TrimSpace(primary.row.ProductIdentification.PartnerProductIdentification.ProprietaryProductIdentifier),
			CurrencyCode: workspace.CurrencyCode,
			Rows:         rows,
			TotalPrice:   total,
		})
	}

	if len(workspace.Configurations) == 0 {
		return nil, fmt.Errorf("CFXML workspace has no importable configuration groups")
	}

	return workspace, nil
}

func detectWorkspaceCurrency(meta []cfxmlProprietaryInformation, rows []cfxmlProductLineItem) string {
	for _, item := range meta {
		if strings.EqualFold(strings.TrimSpace(item.Name), "Currencies") {
			value := strings.TrimSpace(item.Value)
			if value != "" {
				return value
			}
		}
	}
	for _, row := range rows {
		code := strings.TrimSpace(row.UnitListPrice.FinancialAmount.GlobalCurrencyCode)
		if code != "" {
			return code
		}
	}
	return ""
}

func pickPrimaryTopLevelRow(items []groupedItem) groupedItem {
	best := items[0]
	bestScore := primaryScore(best.row)
	for _, item := range items[1:] {
		score := primaryScore(item.row)
		if score > bestScore {
			best = item
			bestScore = score
			continue
		}
		if score == bestScore && compareLineNumbers(item.row.ProductLineNumber, best.row.ProductLineNumber) < 0 {
			best = item
		}
	}
	return best
}

func primaryScore(row cfxmlProductLineItem) int {
	score := len(row.ProductSubLineItems)
	if strings.EqualFold(strings.TrimSpace(row.ProductIdentification.PartnerProductIdentification.ProductTypeCode), "Hardware") {
		score += 100000
	}
	return score
}

func compareLineNumbers(left, right string) int {
	li := parseInt(left)
	ri := parseInt(right)
	switch {
	case li < ri:
		return -1
	case li > ri:
		return 1
	default:
		return strings.Compare(left, right)
	}
}

func vendorSpecItemFromTopLevel(item cfxmlProductLineItem, serverCount int, sortOrder int) *localdb.VendorSpecItem {
	code := strings.TrimSpace(item.ProductIdentification.PartnerProductIdentification.ProprietaryProductIdentifier)
	desc := strings.TrimSpace(item.ProductIdentification.PartnerProductIdentification.ProductDescription)
	if code == "" && desc == "" {
		return nil
	}
	qty := normalizeTopLevelQuantity(item.Quantity, serverCount)
	unitPrice := parseOptionalFloat(item.UnitListPrice.FinancialAmount.MonetaryAmount)
	return &localdb.VendorSpecItem{
		SortOrder:        sortOrder,
		VendorPartnumber: code,
		Quantity:         qty,
		Description:      desc,
		UnitPrice:        unitPrice,
		TotalPrice:       totalPrice(unitPrice, qty),
	}
}

func vendorSpecItemFromSubLine(item cfxmlProductSubLineItem, sortOrder int) *localdb.VendorSpecItem {
	code := strings.TrimSpace(item.ProductIdentification.PartnerProductIdentification.ProprietaryProductIdentifier)
	desc := strings.TrimSpace(item.ProductIdentification.PartnerProductIdentification.ProductDescription)
	if code == "" && desc == "" {
		return nil
	}
	qty := maxInt(parseInt(item.Quantity), 1)
	unitPrice := parseOptionalFloat(item.UnitListPrice.FinancialAmount.MonetaryAmount)
	return &localdb.VendorSpecItem{
		SortOrder:        sortOrder,
		VendorPartnumber: code,
		Quantity:         qty,
		Description:      desc,
		UnitPrice:        unitPrice,
		TotalPrice:       totalPrice(unitPrice, qty),
	}
}

func sumVendorSpecRows(rows []localdb.VendorSpecItem, serverCount int) *float64 {
	total := 0.0
	hasTotal := false
	for _, row := range rows {
		if row.TotalPrice == nil {
			continue
		}
		total += *row.TotalPrice
		hasTotal = true
	}
	if !hasTotal {
		return nil
	}
	if serverCount > 1 {
		total *= float64(serverCount)
	}
	return &total
}

func totalPrice(unitPrice *float64, qty int) *float64 {
	if unitPrice == nil {
		return nil
	}
	total := *unitPrice * float64(qty)
	return &total
}

func parseOptionalFloat(raw string) *float64 {
	trimmed := strings.TrimSpace(raw)
	if trimmed == "" {
		return nil
	}
	value, err := strconv.ParseFloat(trimmed, 64)
	if err != nil {
		return nil
	}
	return &value
}

func parseInt(raw string) int {
	trimmed := strings.TrimSpace(raw)
	if trimmed == "" {
		return 0
	}
	value, err := strconv.Atoi(trimmed)
	if err != nil {
		return 0
	}
	return value
}

func firstNonEmpty(values ...string) string {
	for _, value := range values {
		if strings.TrimSpace(value) != "" {
			return strings.TrimSpace(value)
		}
	}
	return ""
}

func maxInt(value, floor int) int {
	if value < floor {
		return floor
	}
	return value
}

func normalizeTopLevelQuantity(raw string, serverCount int) int {
	qty := maxInt(parseInt(raw), 1)
	if serverCount <= 1 {
		return qty
	}
	if qty%serverCount == 0 {
		return maxInt(qty/serverCount, 1)
	}
	return qty
}

func IsCFXMLWorkspace(data []byte) bool {
	return bytes.Contains(data, []byte("<CFXML>")) || bytes.Contains(data, []byte("<CFXML "))
}
360 internal/services/vendor_workspace_import_test.go Normal file
@@ -0,0 +1,360 @@
package services

import (
	"path/filepath"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
)

func TestParseCFXMLWorkspaceGroupsSoftwareIntoConfiguration(t *testing.T) {
	const sample = `<?xml version="1.0" encoding="UTF-8"?>
<CFXML>
  <thisDocumentIdentifier>
    <ProprietaryDocumentIdentifier>CFXML.workspace-test</ProprietaryDocumentIdentifier>
  </thisDocumentIdentifier>
  <CFData>
    <ProprietaryInformation>
      <Name>Currencies</Name>
      <Value>USD</Value>
    </ProprietaryInformation>
    <ProductLineItem>
      <ProductLineNumber>1000</ProductLineNumber>
      <ItemNo>1000</ItemNo>
      <TransactionType>NEW</TransactionType>
      <ProprietaryGroupIdentifier>1000</ProprietaryGroupIdentifier>
      <ConfigurationGroupLineNumberReference>100</ConfigurationGroupLineNumberReference>
      <Quantity>6</Quantity>
      <ProductIdentification>
        <PartnerProductIdentification>
          <ProprietaryProductIdentifier>7DG9-CTO1WW</ProprietaryProductIdentifier>
          <ProductDescription>ThinkSystem SR630 V4</ProductDescription>
          <ProductName>#1</ProductName>
          <ProductTypeCode>Hardware</ProductTypeCode>
        </PartnerProductIdentification>
      </ProductIdentification>
      <UnitListPrice>
        <FinancialAmount>
          <GlobalCurrencyCode>USD</GlobalCurrencyCode>
          <MonetaryAmount>100.00</MonetaryAmount>
        </FinancialAmount>
      </UnitListPrice>
      <ProductSubLineItem>
        <LineNumber>1001</LineNumber>
        <TransactionType>ADD</TransactionType>
        <Quantity>2</Quantity>
        <ProductIdentification>
          <PartnerProductIdentification>
            <ProprietaryProductIdentifier>CPU-1</ProprietaryProductIdentifier>
            <ProductDescription>CPU</ProductDescription>
            <ProductCharacter>PROCESSOR</ProductCharacter>
          </PartnerProductIdentification>
        </ProductIdentification>
        <UnitListPrice>
          <FinancialAmount>
            <GlobalCurrencyCode>USD</GlobalCurrencyCode>
            <MonetaryAmount>0</MonetaryAmount>
          </FinancialAmount>
        </UnitListPrice>
      </ProductSubLineItem>
    </ProductLineItem>
    <ProductLineItem>
      <ProductLineNumber>2000</ProductLineNumber>
      <ItemNo>2000</ItemNo>
      <TransactionType>NEW</TransactionType>
      <ProprietaryGroupIdentifier>1000</ProprietaryGroupIdentifier>
      <ConfigurationGroupLineNumberReference>100</ConfigurationGroupLineNumberReference>
      <Quantity>6</Quantity>
      <ProductIdentification>
        <PartnerProductIdentification>
          <ProprietaryProductIdentifier>7S0X-CTO8WW</ProprietaryProductIdentifier>
          <ProductDescription>XClarity Controller Prem-FOD</ProductDescription>
          <ProductName>software1</ProductName>
          <ProductTypeCode>Software</ProductTypeCode>
        </PartnerProductIdentification>
      </ProductIdentification>
      <UnitListPrice>
        <FinancialAmount>
          <GlobalCurrencyCode>USD</GlobalCurrencyCode>
          <MonetaryAmount>25.00</MonetaryAmount>
        </FinancialAmount>
      </UnitListPrice>
      <ProductSubLineItem>
        <LineNumber>2001</LineNumber>
        <TransactionType>ADD</TransactionType>
        <Quantity>1</Quantity>
        <ProductIdentification>
          <PartnerProductIdentification>
            <ProprietaryProductIdentifier>LIC-1</ProprietaryProductIdentifier>
            <ProductDescription>License</ProductDescription>
            <ProductCharacter>SOFTWARE</ProductCharacter>
          </PartnerProductIdentification>
        </ProductIdentification>
        <UnitListPrice>
          <FinancialAmount>
            <GlobalCurrencyCode>USD</GlobalCurrencyCode>
            <MonetaryAmount>0</MonetaryAmount>
          </FinancialAmount>
        </UnitListPrice>
      </ProductSubLineItem>
    </ProductLineItem>
    <ProductLineItem>
      <ProductLineNumber>3000</ProductLineNumber>
      <ItemNo>3000</ItemNo>
      <TransactionType>NEW</TransactionType>
      <ProprietaryGroupIdentifier>3000</ProprietaryGroupIdentifier>
      <ConfigurationGroupLineNumberReference>100</ConfigurationGroupLineNumberReference>
      <Quantity>2</Quantity>
      <ProductIdentification>
        <PartnerProductIdentification>
          <ProprietaryProductIdentifier>7DG9-CTO1WW</ProprietaryProductIdentifier>
          <ProductDescription>ThinkSystem SR630 V4</ProductDescription>
          <ProductName>#2</ProductName>
          <ProductTypeCode>Hardware</ProductTypeCode>
        </PartnerProductIdentification>
      </ProductIdentification>
      <UnitListPrice>
        <FinancialAmount>
          <GlobalCurrencyCode>USD</GlobalCurrencyCode>
          <MonetaryAmount>90.00</MonetaryAmount>
        </FinancialAmount>
      </UnitListPrice>
    </ProductLineItem>
  </CFData>
</CFXML>`

	workspace, err := parseCFXMLWorkspace([]byte(sample), "sample.xml")
	if err != nil {
		t.Fatalf("parseCFXMLWorkspace: %v", err)
	}

	if workspace.SourceFormat != "CFXML" {
		t.Fatalf("unexpected source format: %q", workspace.SourceFormat)
	}
	if len(workspace.Configurations) != 2 {
		t.Fatalf("expected 2 configurations, got %d", len(workspace.Configurations))
	}

	first := workspace.Configurations[0]
	if first.GroupID != "1000" {
		t.Fatalf("expected first group 1000, got %q", first.GroupID)
	}
	if first.Name != "#1" {
		t.Fatalf("expected first config name #1, got %q", first.Name)
	}
	if first.ServerCount != 6 {
		t.Fatalf("expected first server count 6, got %d", first.ServerCount)
	}
	if len(first.Rows) != 4 {
		t.Fatalf("expected 4 vendor rows in first config, got %d", len(first.Rows))
	}

	foundSoftwareTopLevel := false
	foundSoftwareSubRow := false
	foundPrimaryTopLevelQty := 0
	for _, row := range first.Rows {
		if row.VendorPartnumber == "7DG9-CTO1WW" {
			foundPrimaryTopLevelQty = row.Quantity
		}
		if row.VendorPartnumber == "7S0X-CTO8WW" {
			foundSoftwareTopLevel = true
		}
		if row.VendorPartnumber == "LIC-1" {
			foundSoftwareSubRow = true
		}
	}
	if !foundSoftwareTopLevel {
		t.Fatalf("expected software top-level row to stay inside configuration")
	}
	if !foundSoftwareSubRow {
		t.Fatalf("expected software sub-row to stay inside configuration")
	}
	if foundPrimaryTopLevelQty != 1 {
		t.Fatalf("expected primary top-level qty normalized to 1, got %d", foundPrimaryTopLevelQty)
	}
	if first.TotalPrice == nil || *first.TotalPrice != 750 {
		t.Fatalf("expected first total price 750, got %+v", first.TotalPrice)
	}

	second := workspace.Configurations[1]
	if second.Name != "#2" {
		t.Fatalf("expected second config name #2, got %q", second.Name)
	}
	if len(second.Rows) != 1 {
		t.Fatalf("expected second config to contain single top-level row, got %d", len(second.Rows))
	}
}

func TestImportVendorWorkspaceToProject_AutoResolvesAndAppliesEstimate(t *testing.T) {
	local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	projectName := "OPS-2079"
	if err := local.SaveProject(&localdb.LocalProject{
		UUID:          "project-1",
		OwnerUsername: "tester",
		Code:          "OPS-2079",
		Variant:       "",
		Name:          &projectName,
		IsActive:      true,
		CreatedAt:     time.Now(),
		UpdatedAt:     time.Now(),
	}); err != nil {
		t.Fatalf("save project: %v", err)
	}

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  101,
		Source:    "estimate",
		Version:   "E-1",
		Name:      "Estimate",
		CreatedAt: time.Now(),
		SyncedAt:  time.Now(),
	}); err != nil {
		t.Fatalf("save estimate pricelist: %v", err)
	}
	estimatePL, err := local.GetLocalPricelistByServerID(101)
	if err != nil {
		t.Fatalf("get estimate pricelist: %v", err)
	}
	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{PricelistID: estimatePL.ID, LotName: "CPU_INTEL_6747P", Price: 1000},
		{PricelistID: estimatePL.ID, LotName: "LICENSE_XCC", Price: 50},
	}); err != nil {
		t.Fatalf("save estimate items: %v", err)
	}

	bookRepo := local.DB()
	if err := bookRepo.Create(&localdb.LocalPartnumberBook{
		ServerID:        501,
		Version:         "B-1",
		CreatedAt:       time.Now(),
		IsActive:        true,
		PartnumbersJSON: localdb.LocalStringList{"CPU-1", "LIC-1"},
	}).Error; err != nil {
		t.Fatalf("save active book: %v", err)
	}
	if err := bookRepo.Create([]localdb.LocalPartnumberBookItem{
		{Partnumber: "CPU-1", LotsJSON: localdb.LocalPartnumberBookLots{{LotName: "CPU_INTEL_6747P", Qty: 1}}},
		{Partnumber: "LIC-1", LotsJSON: localdb.LocalPartnumberBookLots{{LotName: "LICENSE_XCC", Qty: 1}}},
	}).Error; err != nil {
		t.Fatalf("save book items: %v", err)
	}

	service := NewLocalConfigurationService(local, syncsvc.NewService(nil, local), &QuoteService{}, func() bool { return false })

	const sample = `<?xml version="1.0" encoding="UTF-8"?>
<CFXML>
  <thisDocumentIdentifier>
    <ProprietaryDocumentIdentifier>CFXML.workspace-test</ProprietaryDocumentIdentifier>
  </thisDocumentIdentifier>
  <CFData>
    <ProductLineItem>
      <ProductLineNumber>1000</ProductLineNumber>
      <ItemNo>1000</ItemNo>
      <TransactionType>NEW</TransactionType>
      <ProprietaryGroupIdentifier>1000</ProprietaryGroupIdentifier>
      <Quantity>2</Quantity>
      <ProductIdentification>
        <PartnerProductIdentification>
          <ProprietaryProductIdentifier>7DG9-CTO1WW</ProprietaryProductIdentifier>
          <ProductDescription>ThinkSystem SR630 V4</ProductDescription>
          <ProductName>#1</ProductName>
          <ProductTypeCode>Hardware</ProductTypeCode>
        </PartnerProductIdentification>
      </ProductIdentification>
      <ProductSubLineItem>
        <LineNumber>1001</LineNumber>
        <Quantity>2</Quantity>
        <ProductIdentification>
          <PartnerProductIdentification>
            <ProprietaryProductIdentifier>CPU-1</ProprietaryProductIdentifier>
            <ProductDescription>CPU</ProductDescription>
          </PartnerProductIdentification>
        </ProductIdentification>
      </ProductSubLineItem>
    </ProductLineItem>
    <ProductLineItem>
      <ProductLineNumber>2000</ProductLineNumber>
      <ItemNo>2000</ItemNo>
      <TransactionType>NEW</TransactionType>
      <ProprietaryGroupIdentifier>1000</ProprietaryGroupIdentifier>
      <Quantity>2</Quantity>
      <ProductIdentification>
        <PartnerProductIdentification>
          <ProprietaryProductIdentifier>7S0X-CTO8WW</ProprietaryProductIdentifier>
          <ProductDescription>XClarity Controller</ProductDescription>
          <ProductName>software1</ProductName>
          <ProductTypeCode>Software</ProductTypeCode>
        </PartnerProductIdentification>
      </ProductIdentification>
      <ProductSubLineItem>
|
||||||
|
<LineNumber>2001</LineNumber>
|
||||||
|
<Quantity>1</Quantity>
|
||||||
|
<ProductIdentification>
|
||||||
|
<PartnerProductIdentification>
|
||||||
|
<ProprietaryProductIdentifier>LIC-1</ProprietaryProductIdentifier>
|
||||||
|
<ProductDescription>License</ProductDescription>
|
||||||
|
</PartnerProductIdentification>
|
||||||
|
</ProductIdentification>
|
||||||
|
</ProductSubLineItem>
|
||||||
|
</ProductLineItem>
|
||||||
|
</CFData>
|
||||||
|
</CFXML>`
|
||||||
|
|
||||||
|
result, err := service.ImportVendorWorkspaceToProject("project-1", "sample.xml", []byte(sample), "tester")
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("ImportVendorWorkspaceToProject: %v", err)
|
||||||
|
}
|
||||||
|
if result.Imported != 1 || len(result.Configs) != 1 {
|
||||||
|
t.Fatalf("unexpected import result: %+v", result)
|
||||||
|
}
|
||||||
|
|
||||||
|
cfg, err := local.GetConfigurationByUUID(result.Configs[0].UUID)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("load imported config: %v", err)
|
||||||
|
}
|
||||||
|
if cfg.PricelistID == nil || *cfg.PricelistID != 101 {
|
||||||
|
t.Fatalf("expected estimate pricelist id 101, got %+v", cfg.PricelistID)
|
||||||
|
}
|
||||||
|
if len(cfg.VendorSpec) != 4 {
|
||||||
|
t.Fatalf("expected 4 vendor spec rows, got %d", len(cfg.VendorSpec))
|
||||||
|
}
|
||||||
|
if len(cfg.Items) != 2 {
|
||||||
|
t.Fatalf("expected 2 cart items, got %d", len(cfg.Items))
|
||||||
|
}
|
||||||
|
if cfg.Items[0].LotName != "CPU_INTEL_6747P" || cfg.Items[0].Quantity != 2 || cfg.Items[0].UnitPrice != 1000 {
|
||||||
|
t.Fatalf("unexpected first item: %+v", cfg.Items[0])
|
||||||
|
}
|
||||||
|
if cfg.Items[1].LotName != "LICENSE_XCC" || cfg.Items[1].Quantity != 1 || cfg.Items[1].UnitPrice != 50 {
|
||||||
|
t.Fatalf("unexpected second item: %+v", cfg.Items[1])
|
||||||
|
}
|
||||||
|
if cfg.TotalPrice == nil || *cfg.TotalPrice != 4100 {
|
||||||
|
t.Fatalf("expected total price 4100 for 2 servers, got %+v", cfg.TotalPrice)
|
||||||
|
}
|
||||||
|
|
||||||
|
foundCPU := false
|
||||||
|
foundLIC := false
|
||||||
|
for _, row := range cfg.VendorSpec {
|
||||||
|
if row.VendorPartnumber == "CPU-1" {
|
||||||
|
foundCPU = true
|
||||||
|
if len(row.LotMappings) != 1 || row.LotMappings[0].LotName != "CPU_INTEL_6747P" || row.LotMappings[0].QuantityPerPN != 1 {
|
||||||
|
t.Fatalf("unexpected CPU mappings: %+v", row.LotMappings)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if row.VendorPartnumber == "LIC-1" {
|
||||||
|
foundLIC = true
|
||||||
|
if len(row.LotMappings) != 1 || row.LotMappings[0].LotName != "LICENSE_XCC" || row.LotMappings[0].QuantityPerPN != 1 {
|
||||||
|
t.Fatalf("unexpected LIC mappings: %+v", row.LotMappings)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if !foundCPU || !foundLIC {
|
||||||
|
t.Fatalf("expected resolved rows for CPU and LIC in vendor spec")
|
||||||
|
}
|
||||||
|
}
|
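The total-price assertion above (4100 for two servers) follows from multiplying the per-server lot sum by the server quantity: 2 × (2 × 1000 + 1 × 50). A minimal sketch of that arithmetic, using a hypothetical `lotLine` type rather than the repository's actual cart model:

```go
package main

import "fmt"

// lotLine is a hypothetical minimal shape for a resolved cart line:
// a lot name, a per-server quantity, and a unit price from the pricelist.
type lotLine struct {
	LotName   string
	Qty       int
	UnitPrice float64
}

// totalPrice sums qty*price per server, then multiplies by the server count,
// matching the arithmetic the test asserts.
func totalPrice(servers int, lines []lotLine) float64 {
	perServer := 0.0
	for _, l := range lines {
		perServer += float64(l.Qty) * l.UnitPrice
	}
	return float64(servers) * perServer
}

func main() {
	lines := []lotLine{
		{LotName: "CPU_INTEL_6747P", Qty: 2, UnitPrice: 1000},
		{LotName: "LICENSE_XCC", Qty: 1, UnitPrice: 50},
	}
	fmt.Println(totalPrice(2, lines)) // 4100
}
```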
41	memory.md
@@ -1,41 +0,0 @@
# Changes summary (2026-02-11)

Implemented strict `lot_category` flow using `pricelist_items.lot_category` only (no parsing from `lot_name`), plus local caching and backfill:

1. Local DB schema + migrations
   - Added `lot_category` column to `local_pricelist_items` via `LocalPricelistItem` model.
   - Added local migration `2026_02_11_local_pricelist_item_category` to add the column if missing and create indexes:
     - `idx_local_pricelist_items_pricelist_lot (pricelist_id, lot_name)`
     - `idx_local_pricelist_items_lot_category (lot_category)`

2. Server model/repository
   - Added `LotCategory` field to `models.PricelistItem`.
   - `PricelistRepository.GetItems` now sets `Category` from `LotCategory` (no parsing from `lot_name`).

3. Sync + local DB helpers
   - `SyncPricelistItems` now saves `lot_category` into the local cache via `PricelistItemToLocal`.
   - Added `LocalDB.CountLocalPricelistItemsWithEmptyCategory` and `LocalDB.ReplaceLocalPricelistItems`.
   - Added `LocalDB.GetLocalLotCategoriesByServerPricelistID` for strict category lookup.
   - Added a `SyncPricelists` backfill step: for used active pricelists with empty categories, force-refresh items from the server.

4. API handler
   - `GET /api/pricelists/:id/items` returns `category` from `local_pricelist_items.lot_category` (no parsing from `lot_name`).

5. Article category foundation
   - New package `internal/article`:
     - `ResolveLotCategoriesStrict` pulls categories from local pricelist items and errors on a missing category.
     - `GroupForLotCategory` maps only allowed codes (CPU/MEM/GPU/M2/SSD/HDD/EDSFF/HHHL/NIC/HCA/DPU/PSU/PS) to article groups; excludes `SFP`.
     - Error type `MissingCategoryForLotError` with base `ErrMissingCategoryForLot`.

6. Tests
   - Added unit tests for the converters and the article category resolver.
   - Added a handler test to ensure `/api/pricelists/:id/items` returns `lot_category`.
   - Added a sync test for category backfill on used pricelist items.
   - `go test ./...` passed.

Additional fixes (2026-02-11):
- Fixed an article parsing bug: the CPU and GPU parsers were swapped in `internal/article/generator.go`. CPU now uses the last token from the CPU lot; GPU uses model+memory from `GPU_vendor_model_mem_iface`.
- Adjusted the configurator base tab layout to align labels on the same row (separate label row + input row grid).

UI rule (2026-02-19):
- In all breadcrumbs, truncate long specification/configuration names to 16 characters using an ellipsis.
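The deleted notes above describe `GroupForLotCategory` as a strict mapping: only the listed codes resolve to article groups, and `SFP` is deliberately excluded. A hedged sketch of that shape, with illustrative group names (the real package's group identifiers are not shown in this diff):

```go
package main

import "fmt"

// groupForLotCategory is a sketch of the strict category-to-group mapping
// described in the notes: only whitelisted codes resolve, everything else
// (including SFP) returns ok == false. Group names here are hypothetical.
func groupForLotCategory(code string) (string, bool) {
	allowed := map[string]string{
		"CPU": "processors", "MEM": "memory", "GPU": "accelerators",
		"M2": "storage", "SSD": "storage", "HDD": "storage",
		"EDSFF": "storage", "HHHL": "storage",
		"NIC": "network", "HCA": "network", "DPU": "network",
		"PSU": "power", "PS": "power",
	}
	g, ok := allowed[code]
	return g, ok
}

func main() {
	g, ok := groupForLotCategory("CPU")
	fmt.Println(g, ok) // processors true
	_, ok = groupForLotCategory("SFP")
	fmt.Println(ok) // false: SFP is intentionally not mapped
}
```

A caller pairing this with `ResolveLotCategoriesStrict` would treat `ok == false` as "no article group", while a missing category is a hard error.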
@@ -1,7 +0,0 @@
CREATE TABLE IF NOT EXISTS lot_partnumbers (
    partnumber VARCHAR(255) NOT NULL,
    lot_name VARCHAR(255) NOT NULL DEFAULT '',
    description VARCHAR(10000) NULL,
    PRIMARY KEY (partnumber, lot_name),
    INDEX idx_lot_partnumbers_lot_name (lot_name)
);
@@ -1,25 +0,0 @@
-- Allow placeholder mappings (partnumber without bound lot) and store import description.
ALTER TABLE lot_partnumbers
    ADD COLUMN IF NOT EXISTS description VARCHAR(10000) NULL AFTER lot_name;

ALTER TABLE lot_partnumbers
    MODIFY COLUMN lot_name VARCHAR(255) NOT NULL DEFAULT '';

-- Drop FK on lot_name if it exists to allow unresolved placeholders.
SET @lp_fk_name := (
    SELECT kcu.CONSTRAINT_NAME
    FROM information_schema.KEY_COLUMN_USAGE kcu
    WHERE kcu.TABLE_SCHEMA = DATABASE()
      AND kcu.TABLE_NAME = 'lot_partnumbers'
      AND kcu.COLUMN_NAME = 'lot_name'
      AND kcu.REFERENCED_TABLE_NAME IS NOT NULL
    LIMIT 1
);
SET @lp_drop_fk_sql := IF(
    @lp_fk_name IS NULL,
    'SELECT 1',
    CONCAT('ALTER TABLE lot_partnumbers DROP FOREIGN KEY `', @lp_fk_name, '`')
);
PREPARE lp_stmt FROM @lp_drop_fk_sql;
EXECUTE lp_stmt;
DEALLOCATE PREPARE lp_stmt;
18	migrations/028_add_line_no_to_configurations.sql	Normal file
@@ -0,0 +1,18 @@
ALTER TABLE qt_configurations
    ADD COLUMN IF NOT EXISTS line_no INT NULL AFTER only_in_stock;

UPDATE qt_configurations q
JOIN (
    SELECT
        id,
        ROW_NUMBER() OVER (
            PARTITION BY COALESCE(NULLIF(TRIM(project_uuid), ''), '__NO_PROJECT__')
            ORDER BY created_at ASC, id ASC
        ) AS rn
    FROM qt_configurations
    WHERE line_no IS NULL OR line_no <= 0
) ranked ON ranked.id = q.id
SET q.line_no = ranked.rn * 10;

ALTER TABLE qt_configurations
    ADD INDEX IF NOT EXISTS idx_qt_configurations_project_line_no (project_uuid, line_no);
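The migration above numbers configurations per project (ordering by `created_at`, then `id`) and multiplies the row number by 10, leaving gaps so a later row can be slotted between two existing ones without renumbering. The same rule can be sketched in Go; the `cfg` struct below is a hypothetical stand-in for the real model:

```go
package main

import (
	"fmt"
	"sort"
)

// cfg is an illustrative stand-in for a qt_configurations row.
type cfg struct {
	ID        int
	Project   string
	CreatedAt int // stand-in for the created_at ordering key
	LineNo    int
}

// backfillLineNo mirrors the migration's numbering rule: within each project,
// order by created_at then id, and assign line_no = row_number * 10.
func backfillLineNo(cfgs []cfg) []cfg {
	byProject := map[string][]int{}
	for i, c := range cfgs {
		byProject[c.Project] = append(byProject[c.Project], i)
	}
	for _, idxs := range byProject {
		sort.Slice(idxs, func(a, b int) bool {
			x, y := cfgs[idxs[a]], cfgs[idxs[b]]
			if x.CreatedAt != y.CreatedAt {
				return x.CreatedAt < y.CreatedAt
			}
			return x.ID < y.ID
		})
		for n, i := range idxs {
			cfgs[i].LineNo = (n + 1) * 10
		}
	}
	return cfgs
}

func main() {
	out := backfillLineNo([]cfg{
		{ID: 2, Project: "p1", CreatedAt: 5},
		{ID: 1, Project: "p1", CreatedAt: 3},
		{ID: 3, Project: "p2", CreatedAt: 1},
	})
	for _, c := range out {
		fmt.Println(c.ID, c.LineNo) // 2 20, 1 10, 3 10
	}
}
```

The gapped numbering (10, 20, 30, …) is a common trick for orderable lists: inserting between 10 and 20 can reuse 15 instead of shifting every subsequent row.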
Some files were not shown because too many files have changed in this diff.